Speaker: Cliff Young
The Unreasonable Effectiveness of Deep Learning
The algorithms/models keep changing, meaning that the systems problem keeps changing too.
The engineering is currently ahead of the science - we want to try and understand why e.g. TPUs are so effective.
The Revolution
Starts with AlexNet, but GPUs were expensive and inefficient
TPU v1:
• deployed 2015, paper in 2017
• 30x perf compared to CPU/GPUs
• Maybe the first high-volume matrix architecture?
Using systolic arrays:
• Grid that expands from corner step by step, rather than standard linear pipeline
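A toy sketch (my own illustration, not anything from the talk) of how an output-stationary systolic matrix multiply sweeps out from the corner one diagonal per clock tick:

```cpp
// Toy simulation of an output-stationary systolic array computing C = A*B.
// At tick t, cell (i,j) consumes the operand pair indexed by k = t - i - j,
// so activity expands out from the corner one diagonal per tick.
#include <cstdio>

int main() {
    const int N = 3;
    int A[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    int B[N][N] = {{9, 8, 7}, {6, 5, 4}, {3, 2, 1}};
    int C[N][N] = {};

    for (int t = 0; t < 3 * N - 2; ++t)          // enough ticks for the far corner to finish
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                int k = t - i - j;               // which operands reach cell (i,j) this tick
                if (k >= 0 && k < N) C[i][j] += A[i][k] * B[k][j];
            }

    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) printf("%4d", C[i][j]);
        printf("\n");
    }
    return 0;
}
```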
TPU v2:
• 2 Cores each with scalar, vector and matrix units
Cloud TPU v3 out now
"Cambrian Explosion" in DL Accelerators
• Many startups targeting this space (e.g. GraphCore)
• Inference has huge diversity of design points
• But training is surprisingly convergent
Data Parallelism: replicate the model N times.
Model Parallelism: cut up the model into multiple pieces (hard problem)
Floating point formats
HPC people want higher-precision FP computations, whereas ML people can get away with 16 or even 8 bits
There are still some benefits to high precision for ML though. Is the future mixed-precision algorithms?
Working today:
• Pruning on the inference side
• Structured sparsity (e.g. sparse attention)
Promising: GNNs
Sparsity in NNs is low by HPC standards (HPC ≥ 98% = sparse)
Brains may be sparse
Science: how can we make sparse training work?
Engineering: what are the sparse architectures that are worth building?
Weird unscientific observations
Distillation → Going larger, training, and going back smaller is more effective than direct training at that size
Feedback alignment → random feedback weights work just as well in backprop
Lottery Ticket Hypothesis → sparse accurate nets already exist inside random init arch and we just have to chip away to get them?
Some Factorisations work for CNNs → Inception (2014), Depthwise separable convs (2016)
Space race in language understanding
Ever larger machines. OpenAI's 10k GPU cluster, 3640 petaflop days of training.
Science: ask why
• Sapir-Whorf hypothesis: the language you speak helps/hurts the concepts you can think about - same applies to machines
• Can we do LHC-scale things under our desks?
How do you calculate time clock hours?
Here’s how to determine hours worked:
1. Convert all times to 24 hour clock (military time): Convert 8:45 am to 08:45 hours.
2. Next, subtract the start time from the end time.
3. Now you have the actual hours and minutes worked for the day.
4. Finally, to determine total wage, you will need to convert this to a decimal format.
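As a rough illustration of those steps (the clock times, break, and pay rate below are made-up examples):

```cpp
// Convert clock times to minutes, subtract, then express the result as
// decimal hours for the wage calculation. Example values are arbitrary.
#include <cstdio>

int toMinutes(int hour24, int minute) { return hour24 * 60 + minute; }

int main() {
    int clockIn  = toMinutes(8, 45);   // 8:45 am -> 08:45
    int clockOut = toMinutes(17, 15);  // 5:15 pm -> 17:15
    int unpaidBreak = 30;              // 30-minute meal break

    double hoursWorked = (clockOut - clockIn - unpaidBreak) / 60.0;  // decimal format
    double wage = hoursWorked * 15.00;                               // example rate: $15/hour

    printf("Hours worked: %.2f, wage: $%.2f\n", hoursWorked, wage);
    return 0;
}
```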
How do you calculate a 30 minute break?
If you work over 5 hours in a day, you are entitled to a meal break of at least 30 minutes that must start before the end of the fifth hour of your shift.
Is time clock rounding legal?
Your hours can’t lawfully be ’rounded down’ to the nearest 15 or 30 minutes. When it comes to pay, you’re entitled to be paid for all the time you work. This means if you’re required to come in 10
minutes early or stay back after close to count the till or clean, it should be paid time.
Is the 7 minute rule a law?
The 7-minute rule, also known as the ⅞ rule, allows an employer to round employee time for payroll purposes. Employers may legally round employee time, as long as time is rounded correctly and
adheres to FLSA regulations regarding overtime and minimum wage pay.
How do you calculate minutes between two times?
The generic formula to calculate the minutes between two times is (End time − Start time) × 1440. We subtract times/dates in Excel to get the number of days; since a day has 1440 (24×60) minutes, we multiply the result by 1440 to get the exact number of minutes.
How do you calculate hours between two dates?
Finding the number of hours or the time between two times/dates is simple: just subtract the start date/time from the end date/time and multiply the result by 24 (since the difference is measured in days). If you want to enter the dates and times separately (which is loads easier than typing in a date/time in one cell) then add the date/times together.
How do you calculate weekly hours?
To calculate your average weekly working time you should add up the number of hours you worked in the reference period. Then divide that figure by the number of weeks in the reference period which is
normally 17 weeks. You have a standard working week of 40 hours (eight hours a day).
How do you calculate time card hours?
To calculate time cards manually, gather all the information regarding the hours worked. Take the hours worked, from when the employee clocked in until they clocked out, and subtract from this time any breaks and lunch. Keeping precise records of time worked is important for employees who are paid hourly.
New Metrics Shed New Light on Understanding How a Network's Total Supply is Faring
Two new metrics are the latest addition to Santiment, and they are more helpful than you probably expect!
Next time you log on to Sanbase, check out the following metrics:
• Total Supply in Profit
• Percent of Total Supply in Profit
These new metrics provide a new and fresh way to analyze the markets. Their calculations should be somewhat self-explanatory, based on their names. Regardless, here is a quick walkthrough of how each of them works:
Total Supply in Profit offers a great way to understand just how much of the total supply that exists on a network is up or down at any given time. This is a simple way to see whether a coin is worth more or less now, compared to the time at which it was first minted/mined/entered circulation. Even +0.00001% is considered 'in profit', and is treated as essentially identical to a coin that is up by many multiples, as far as this metric is concerned.
Percent of Total Supply in Profit works quite similarly, and only looks at the percentage of supply that is available at the time, as opposed to just the total amount of coins (which could sometimes
be misleading since more coins are mined and enter the network's ecosystem over time). Either way, remember that this is a very binary way of seeing the ratio of the total supply simply being in
profit, even if it is a very small profit.
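A simplified illustration of the basic arithmetic behind both metrics (this is not Santiment's actual methodology, just the idea):

```cpp
// For each lot of coins, compare the current price to the price when the coins
// entered circulation; any positive difference at all counts as "in profit".
#include <cstdio>
#include <vector>

struct Lot { double coins; double basisPrice; };

int main() {
    std::vector<Lot> supply = {{100, 0.50}, {250, 1.20}, {50, 2.00}};  // made-up lots
    double currentPrice = 1.00;

    double total = 0.0, inProfit = 0.0;
    for (const Lot& lot : supply) {
        total += lot.coins;
        if (currentPrice > lot.basisPrice) inProfit += lot.coins;  // binary: even tiny gains count
    }
    printf("Total supply in profit: %.0f coins (%.1f%% of supply)\n",
           inProfit, 100.0 * inProfit / total);
    return 0;
}
```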
These metrics could arguably work very well when paired with the MVRV, RSI, and Network Realized Profit/Loss metrics. Just like supply in profit, these metrics offer shorter-term perspectives on how a network's holders are profiting or losing value on their investments over time. And remember, this matters because crypto is a zero sum game. When networks are heavily profiting, according to these
kinds of metrics, then watch out. But if they are heavily bleeding, the metrics provide you with a transparent, open window to suggest adding to your positions.
Let us know what you think of these metrics, and we are very much open to feedback on this, and other suggestions for future additions to the growing Santiment collection of great leading metrics!
Disclaimer: The opinions expressed in the post are for general informational purposes only and are not intended to provide specific advice or recommendations for any individual or on any specific
security or investment product.
Thanks for reading!
If you enjoyed this insight please leave a like, join discussion in the comments and share it with your friends!
Proper Generalized Decomposition using Taylor expansion for non-linear diffusion equations
For a physical problem described by a parameterized mathematical model, different configurations of the problem require computing the solution over a range of parameters in order to study the
phenomenon when parameters change. In other words, it is a process of looking for a continuum of solutions of the equation, relative to these parameters, in order to find the ones that fit the
experimental data. However, solving a direct problem for each parametric configuration will generate a cascade of direct problems, which will cost a huge amount of time, especially when we deal with
non-linear equations. Therefore, the parametric solution is a suitable alternative strategy to compute the solution of the equation. In this paper, we will use the Proper Generalized Decomposition
(PGD) method to solve non-linear diffusion equations and produce parametric solutions. To treat the non-linear functions, we will not use the Discrete Empirical Interpolation Method (DEIM), which has proven its utility; instead, the non-linear terms will be replaced by their Taylor series expansion up to an order m. This produces a new model, which we call here the "developed equation", on which the PGD is then applied. Polynomial equations appear for each tensor element computation. While the space and time tensor elements' equations are solved using the Finite Element Method (FEM) and the Borel–Padé–Laplace (BPL) integrator respectively, a Newton solver is used for the tensor elements relative to the parameters. Here, rational polynomial functions arise for the parametric tensor elements, which are known to extrapolate solutions. Numerical simulations are carried out for a non-linear diffusion equation with an exponential diffusion coefficient as a first trial, and with a magnetic diffusion coefficient as a second one.
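As an illustration of the idea (the exact coefficient form used in the paper may differ), an exponential diffusion coefficient would be replaced by its truncated Taylor series before applying the PGD, turning the non-linear term into a degree-m polynomial in u:

\[ k(u) = k_0 \, e^{\alpha u} \approx k_0 \sum_{j=0}^{m} \frac{(\alpha u)^j}{j!} \]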
• Diffusion equation
• Non-linearity
• Proper Generalized Decomposition
• Rational polynomial
• Taylor expansion
Lab Notebook
This is the fourth post in my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, click here.
The previous posts have gone through how the hexapod was built, how the different processors communicate with each other, how it is able to walk smoothly, and how it can both keep track of obstacles
and where it is relative to those obstacles. In this post, I'll talk about allowing the hexapod to walk around autonomously.
The first question we need to answer is: what do we mean by walking around autonomously? Do we tell the hexapod to walk forward and it does so without help? Do we tell the hexapod to walk to 'the
kitchen' and it figures out how to do that? Does the hexapod wake up in the middle of the night to search for a midnight snack? Enabling the hexapod to act out the last two scenarios is well past my
skill and interest level, so I'll settle for something a little simpler. My goal is to be able to give the hexapod a location in space (as 2D coordinates) and have it walk there without running into
anything. To do this, it needs to use all of the algorithms developed in the previous posts to avoid obstacles, keep track of where it is, and step with smooth confidence.
The work from the previous posts allows for a fair amount of abstraction in solving this problem. From the work on inverse kinematics, we can fully control the motion of the hexapod through two
parameters: speed and turning. From the SLAM algorithm, we have a discrete grid of points representing a map of the local environment. Autonomous navigation is then just a problem of deciding where
on the map the hexapod should go, and then setting the speed and turning values to make it go that way.
Unlike the problem of Simultaneous Localization and Mapping from the previous post, path planning has plenty of easy-to-digest literature floating around. However, this does not excuse me to simply point to some code I've written and call it a day. Further, many naive implementations of common
pathfinding solutions lead to code that runs well in theory, but not quickly enough for my relatively simple-minded hexapod.
A classic pathfinding algorithm is Dijkstra's algorithm. In this algorithm, we represent every possible location as a 'node' that is connected to other 'nodes' (in this case, neighboring locations). Each connection between nodes is assigned a distance
value so that the algorithm can compute distances between far-away nodes by summing up the connection distances between the two. Given a start and end node, Dijkstra's algorithm finds the shortest
path (series of connections) between the two by traversing the network of connected nodes and assigning a distance-from-start value to each. If a node is reached multiple times through different
paths, the shortest distance takes precedence. When the target node is reached, we simply trace the network in reverse to find which nodes gave the shortest path from start to end. In this way,
Dijkstra's algorithm can find the shortest path from point A to point B (assuming you describe positions in space as a network of connected nodes).
Pathfinding on a grid of nodes from point A (green) to point B (red).
In our application, we also want the pathfinding algorithm to avoid sending the hexapod through walls or other obstacles. A simple way to achieve this is to modify the pathfinding algorithm to
include a 'cost' of travelling between nodes. This cost is large for connections that land you inside a solid object. We are then not finding the shortest path between points A and B, but instead the
path that minimizes distance + cost. Since we are free to pick the cost, we can pick how much the algorithm prefers short distances versus avoiding obstacles.
Pathfinding on a grid with obstacles.
One of the drawbacks of this method is the amount of time needed to find the optimal path. On a large grid, the algorithm will spend a significant amount of time exploring the multitude of potential
paths that may not provide an optimal result, but must be checked to make sure. A modification to Dijkstra's algorithm can further improve the results and leads us to the
A* algorithm
(pronounced A-star). In this version, we explore connections that lead closer (by some measure) to the target before considering any other paths. This is like making a bet that the path between
points A and B is the most direct one. If you win, the algorithm finds the path in a very short amount of time. If you lose, the algorithm has to go back and search all of the other less direct
paths. I've decided to go with the A* algorithm, since I know the environment in which the hexapod will navigate is not too maze-like.
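As a concrete illustration, here is a small self-contained sketch of grid-based A* with an added per-cell obstacle cost. It is just the general pattern, not the hexapod's Autonav code; the grid, costs, and heuristic are illustrative choices.

```cpp
// Grid-based A*: explore nodes that look closest to the goal first, and charge
// a large extra cost for stepping onto an obstacle cell.
#include <cstdio>
#include <cstdlib>
#include <queue>
#include <vector>

struct Node { double f; int idx; };
struct ByF { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

int main() {
    const int W = 8, H = 8;
    std::vector<int> grid(W * H, 0);                 // 0 = free, 1 = obstacle
    for (int y = 1; y < 7; ++y) grid[y * W + 4] = 1; // a wall across most of the grid

    const int start = 3 * W + 1, goal = 3 * W + 6;
    std::vector<double> dist(W * H, 1e18);
    std::vector<int> prev(W * H, -1);
    std::priority_queue<Node, std::vector<Node>, ByF> open;

    auto h = [&](int i) {                            // Manhattan distance to the goal
        return std::abs(i % W - goal % W) + std::abs(i / W - goal / W);
    };
    dist[start] = 0.0;
    open.push({(double)h(start), start});

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        int u = open.top().idx;
        open.pop();
        if (u == goal) break;
        for (int d = 0; d < 4; ++d) {
            int x = u % W + dx[d], y = u / W + dy[d];
            if (x < 0 || x >= W || y < 0 || y >= H) continue;
            int v = y * W + x;
            double step = 1.0 + (grid[v] ? 50.0 : 0.0);   // distance + obstacle cost
            if (dist[u] + step < dist[v]) {
                dist[v] = dist[u] + step;
                prev[v] = u;
                open.push({dist[v] + h(v), v});           // bet on nodes that look closer
            }
        }
    }
    for (int i = goal; i != -1; i = prev[i])              // trace the path back to the start
        printf("(%d,%d) ", i % W, i / W);
    printf("\n");
    return 0;
}
```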
My implementation of the A* algorithm for the hexapod (which I call "Autonav") can be found here. As with the SLAM algorithm, the actual solver runs in a thread separate from the main thread that controls the hexapod servos. Given a copy of the SLAM map, the current location of the hexapod, and
a target location, the solver searches through the map to find a sequence of positions that connects the two locations. Since the SLAM map is updated every few seconds and may contain new obstacles,
the Autonav solver needs to be able to solve for a new path every few seconds as well. In my implementation (which is admittedly not very optimized), the solver can search through around 2000 grid
points in the 1-2 seconds between SLAM updates. For typical grid setups, this unfortunately only gives a physical search range of a few meters. In order for the Autonav solver to quickly and reliably
find paths through the environment, the target position can only be a few meters away. Far-away targets can be reached by splitting the distance up into a series of evenly-spaced waypoints.
To visualize the path to which the Autonav sends the hexapod, I've written a short code that parses the data written from a run and creates a reconstruction of the scene. The render shows the SLAM map (with distance to objects as a smooth gradient) in red, the hexapod location in blue, and the Autonav path
in green:
Scale is 5 cm per pixel.
For a given SLAM map, the Autonav solver does a pretty good job finding a path around obstacles. To turn this path into commands for the hexapod to follow, I set the hexapod speed to be a constant
(0.3) and modified the turning value based on the path. The low resolution of the Autonav path can cause the hexapod to act a bit jumpy, as the direction the path takes only changes in 45 degree
increments. To smooth the path, the hexapod picks a 'tracking point' at least 20 cm away from its current position along the Autonav path. It computes the heading to that point and sets the turning
parameter proportional to how far away the hexapod's heading is from that point. In this way, the hexapod continually tracks the Autonav path.
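A sketch of that steering rule (the gain and sign convention are placeholders, not the values used on the hexapod):

```cpp
// Turning is set proportional to the heading error toward a tracking point
// roughly 20 cm ahead on the Autonav path.
#include <cmath>

double turningToward(double hexX, double hexY, double hexHeading,
                     double trackX, double trackY) {
    const double PI = 3.14159265358979;
    const double gain = 1.5;                                  // made-up proportional gain
    double desired = std::atan2(trackY - hexY, trackX - hexX);
    double error = desired - hexHeading;
    while (error >  PI) error -= 2.0 * PI;                    // wrap into [-pi, pi]
    while (error < -PI) error += 2.0 * PI;
    return gain * error;                                      // feeds the hexapod's turning parameter
}
```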
To test how well the whole system works, I set up different scenarios for the hexapod to navigate through. After each test, I compiled the reconstruction renders into a movie so I could play back
what happened. The first test was for the hexapod to move to a target 1.5 meters in front of it in an empty room, then turn around and go back to the starting point.
From the video, we see that everything seems to be working well. The SLAM algorithm updates the map with LIDAR scans made from new perspectives, the Autonav solver continually updates the intended
path, and the hexapod smoothly strolls from point A to point B. Through the eyes of a human observer, this kind of test looks pretty neat:
Once again, everything works pretty smoothly. Of course, not every test I did wound up being such a simple success. Here's an example of a test where the hexapod got turned around due to what I call
"SLAM-slip", where the SLAM algorithm fails to properly match new LIDAR scans to the map.
Here's a more complicated test where I added a box along its return path to see if it could adapt to the new environment:
With a bit of tuning, the hexapod was able to successfully navigate around both permanent and temporary obstacles and reach the waypoints I gave it. The biggest issue I found was in timing. By the
time the SLAM algorithm updated the map and the Autonav solver found a new path, the hexapod would have moved enough to invalidate any new solution for where it should go. Sometimes there would
appear to be almost a 5 second delay between useful updates to the hexapod turning. Considering all of the math behind what the hexapod is trying to run on such low-powered hardware, a 5 second delay
isn't too surprising. It's unfortunate that it seems just barely underpowered for the task set before it. A simple solution to this problem is to just let the hexapod walk slowly so it can get
updates to where it should go before walking too far off course.
UPDATE: more testing, so here's a longer navigation test through my apartment!
Race Prep
At this point, the hexapod is a pretty fleshed-out robot. It can do some basic autonomous navigation through my apartment, given a set of waypoints to follow. While the end result is conceptually
fairly simple, the process of getting there has been considerably involved. It would be a shame to end the process here, but there isn't a whole lot more I can reasonably do to enhance the hexapod's
capabilities at this point.
To give the hexapod a final task, I've entered it into the Sparkfun Autonomous Vehicle Competition (AVC) on 20 June 2015. I attended the previous two AVCs and had a lot of fun watching both the
ground and aerial races. The sophistication of the entries in the ground race has been extremely varied, from LEGO cars relying on dead-reckoning to go-karts with far more serious hardware. I figure that a LIDAR-equipped hexapod falls comfortably between these two ends.
Unfortunately, there are three main issues with my hexapod that make it woefully underprepared for such a competition. The first is the speed. The fastest I've gotten the hexapod to move is around
1.2 feet per second, but the navigation has issues above 0.5 feet per second. Robots in the competition are expected to travel almost 1000 feet in less than 5 minutes, giving an average speed of 3.3
feet per second. So at best, the hexapod will only make it about 20% of the way around before time is called. The second issue is sunlight. The LIDAR unit I'm using is intended for indoor use, and
can't make reliable measurements in direct sunlight. This means that the hexapod might not be able to see any of its surroundings to get a position reference. The third issue is that the course
doesn't really have many natural obstacles to use for position reference. In previous years, the outer boundary of the course was a chain-link fence, which is likely all but invisible to LIDAR. Even
if the LIDAR could work in sunlight, there might not be any objects within range to sense. With these significant issues, I'm still going to race the hexapod. It won't win, and it might not even lose
One of the requirements for entries into the AVC ground race is that the robot must not only be autonomous, but must start the race with a single button press. So far I've been ssh-ing into the main
processor through WiFi to initiate any actions, so I need one last physical addition to the hexapod.
The LED+button board connects to the Due, but since the Due pins are shared with the main cpu, both can access them. The orange LED (#1) blinks to show that the PROG_HWMGR code is running on the Due,
and the button below it does nothing. The rest of the LEDs and buttons are controlled by the main cpu when running in fully-autonomous mode. When the code is ready to enable the servos and spin up
the LIDAR unit, the white LED (#2) flashes. Pressing the button below it allows the code to continue. The blue LED (#3) flashes when the LIDAR is spun up and the SLAM algorithm is ready to make an
initial map. Pressing the button below it starts the integrating process. When the initial integration is done, the green LED (#4) flashes, indicating the hexapod is ready to race. Pressing the final
button starts the race, and pressing it again ends the race early (in case of a wayward robot).
So with that, the hexapod is built, programmed, and ready to compete. I'll post a write-up of whatever happens during the race sometime next week, along with some pictures of it racing!
This is the third post on my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, click
here. In this post, I'll talk about using laser ranging to let the hexapod keep track of where it is relative to its environment.
Robots are Dumb
It's not hard to find examples of robots failing to perform tasks that a human might deem trivial. Even the entrants to the DARPA Robotic Challenge--while amazingly sophisticated in their
design--sometimes fail to turn handles, walk through doors, or even just stand upright for very long. If you are unfamiliar with the subtle challenges of robotics, these failures might make you
wonder why it takes so much money and time to create such a flawed system. If a computer can calculate the N-millionth digit of pi in a few milliseconds, why can't it handle walking in a straight
line? The general answer to this is that the fundamental differences between the human mind and a computer cpu (neurons vs transistors, programming vs evolution, etc) create vast differences in the
intrinsic difficulty of many tasks.
One such task is spatial awareness and localization. Humans are capable of integrating various senses (sight, touch, balance) into a concept of movement and location relative to an environment. To
make my hexapod robot capable of autonomous navigation, it also needs to have a sense of location so that it can decide where it needs to go and where it should avoid going.
Arguably the most common way of letting a robot know where it is in the world is GPS. Satellites in orbit around the earth broadcast their position and the current time, and a GPS receiver receives
these broadcasts and triangulates its position. A GPS-enabled robot can figure out its location on Earth to within a few feet or better (depending on how much money you spend on a receiver). The
biggest issue with GPS is that a robot using GPS needs a clear view of the sky so that it can receive the signals being beamed down from the GPS satellites. GPS also doesn't give you any information
about your surroundings, so it's impossible to navigate around obstacles.
For the hexapod, I wanted to avoid using GPS altogether. I chose to use a LIDAR unit for indoor localization, mostly because it seemed like an interesting challenge. LIDAR uses visible light pulses
to measure the distance to objects just like RADAR uses radio waves bouncing off objects. A LIDAR unit contains a laser emitter/detector pair that can be swept across a scene to make measurements at
different angles. At each angle, the unit emits a pulse of laser light and looks for a reflected pulse with the detector. The delay between emission and detection (and the speed of light) gives the
distance to the reflecting object at that angle. High-quality LIDAR units (like those used in self-driving vehicles) can quickly give an accurate 3D scan of the surrounding environment.
More Lasers
The LIDAR unit I picked is taken from the XV-11 robotic vacuum cleaner from Neato Robotics. You can find just the LIDAR unit by itself as a replacement item on various sites; I got mine off eBay for
around $80. The XV-11 LIDAR unit has a bit of a following in the hacking community, as it offers 360-degree laser ranging at 5 Hz for much cheaper than anyone else. While the unit isn't open source,
there are some nice resources online that provide enough documentation to get started. It only scans in a 2D plane, but what you lose in dimensionality you gain in monetary savings.
Laser module in the lower left, sensor and optics on the upper right.
The LIDAR unit has just 6 wires to control the whole thing. Two are connected directly to the motor that spins the unit; two provide power to the laser and other electronics that do the ranging; and
the last two provide a serial connection to the processor. Upon powering the main processor, the following text barrels down the data line at 115200 baud:
Piccolo Laser Distance Scanner
Copyright (c) 2009-2011 Neato Robotics, Inc.
All Rights Reserved
Loader V2.5.14010
CPU F2802x/c000
Serial WTD38511AA-0056881
LastCal [5371726C]
Runtime V2.6.15295
It's hard not to trust any device that sends a smiley face as an opening line. The introduction message isn't too informative, but the fact that it is sent as plain ASCII is comforting. The unit
doesn't send any laser ranging data until it is spinning at close to 5 Hz, and the processor has no way of controlling the spin motor. By applying an average of around 3 V to the motor (I did PWM
from a 12 V line), the unit spins up and raw data starts flooding down the line. The resources link above provides documentation for how the data packets are formatted, but the key points are that
they contain some number of laser ranging measurements, error codes, and the current spin rate of the unit. This measured spin rate can be fed into a feedback loop for the motor controller so that it
stays spinning at a nice constant 5 Hz.
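The feedback itself can be as simple as a proportional correction on the PWM duty cycle; something along these lines (the gain and limits are made up, not the values running on the Due):

```cpp
// Nudge the motor PWM so the spin rate reported by the LIDAR settles at 5 Hz.
int adjustPWM(int currentPWM, double measuredRPM) {
    const double targetRPM = 300.0;                    // 5 Hz * 60 s
    int pwm = currentPWM + (int)(0.05 * (targetRPM - measuredRPM));
    if (pwm < 0)   pwm = 0;                            // clamp to the 8-bit PWM range
    if (pwm > 255) pwm = 255;
    return pwm;
}
```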
I decided to have the Arduino Due on the hexapod handle communication with the LIDAR unit and keeping the motor spinning at the correct rate. The Due already handles communication between the main
cpu and the Arbotix-M, so what's one more device? I soldered up a simple board that included an N-channel MOSFET for PWM control of the motor, and a LM317 voltage regulator to provide the LIDAR
processor with around 3.3 V.
Motor connects on the left, LIDAR controller on the right. Bottom connects to Due.
The hexapod kit came with a mounting board for adding accessory hardware, but the mounting holes on the LIDAR didn't match up. I 3D-printed a little bracket to both attach the unit to the hexapod
body and provide a little space for the board I had just made.
Credit to my Single Pixel Camera project for making me buy a better printer.
Attached to the top mounting panel of the hexapod.
With the small interface board mounted below the LIDAR unit, I connected everything up to the Due. A PWM line from the Due is used to control the speed of the motor, and the Serial2 port is used to
receive data from the LIDAR processor. The 12 V needed to power the motor and processor come from whatever source already powers the main UDOO board. In testing, this was either an AC adapter or a
3-cell lipo battery.
I have no idea what I'm testing, but damn does it look technical.
The stream of data from the LIDAR unit is unfortunately pretty non-stop, so I had to write a code that parses little bits at a time to make sure none would get lost in the small input buffer of the
Due. The Due bundles up bits of the data into packets and sends them off to the main cpu for further processing. To start out, I just plotted the results in gnuplot:
Rounded object at (50,100) is my head.
Each point is a single laser ranging measurement, and they span the full 360-degrees around the unit. A map like this can be made five times a second, allowing for a pretty good update rate.
A Sense of Self
At this point, the hexapod has the ability to continually scan its surroundings with lasers and accurately determine its distance from obstacles in any direction. But we still haven't solved the
problem of letting the hexapod know where it is. By looking at the plot above, we can clearly see that it was sitting near the corner of a rectangular room. If we moved the robot a few feet in any
direction and looked at the new map, we would be able to see that the robot had moved, and by comparing the two plots in detail we could even measure how far it moved. As humans, we are able to do
this by matching similarities between the before and after plots and spotting the differences. This is one of those tasks that is relatively easy for a human to do and very tricky for a computer to
Using only the LIDAR scans, we want the hexapod to be able to track its movement within its environment. By matching new scans to previous ones, we can both infer movement relative to the measured
obstacles and integrate new information about obstacles measured from the new location. The process of doing so is called Simultaneous Localization and Mapping (SLAM). There are many ways of solving
this problem using measurements like the LIDAR scans I have access to. Some methods involve big point clouds, some involve grids. Some are 3D, some are 2D. One of the most common traits of any SLAM
algorithm that I've found is that it is complicated enough to scare away amateur robotics enthusiasts. So in keeping to my goal of writing most (if not all) of the software for my hexapod, I set out
to write my own algorithm.
My algorithm is not great, but it kind of works. I decided to do a 2D grid-based SLAM algorithm because a) my LIDAR scans are only in 2D, and b) point clouds are hard to work with. As the name
suggests, a SLAM algorithm involves solving two problems simultaneously: localization and mapping. My algorithm keeps a map of the surroundings in memory, and given a new LIDAR scan performs two
steps: matching the new scan to the existing map and inferring where the scan was measured from; and then adding the new scan to the map to update it with any changes. As the Wikipedia article on the
subject suggests, we have a bit of a chicken-and-egg problem, in that you can't localize without an existing map and you can't map without knowing the location. To solve this problem, I let the
hexapod know its initial coordinates and let it collect a few scans while standing still to create an initial map. Then, it is allowed to step through the full SLAM algorithm with a map already set up.
For testing, I typically use a map with 128 pixels on a side, and each pixel represents 10 cm. After the initial set up where the hexapod is allowed to create a map in a known location, we might end
up with a map like this:
My 350 by 400 cm workroom. My head is still at (50,100).
The value in each pixel can vary between [0,1], and roughly represents how confident we are that the pixel contains an obstacle. I'll go into how this map is made using LIDAR scans in the next
section, so just assume it's representative of the environment. A new scan is made and needs to be matched onto this map with some shift in horizontal position and angle that represent the new
position and heading of the hexapod. To do this, I've expressed it as an optimization problem. The algorithm tries to find the horizontal shift ($x$,$y$) and angle ($\theta$) that minimize the
distance between each new scan point ($x_i'$,$y_i'$) and an occupied map pixel ($M(x,y)$):
\[ \Psi = \sum_i^N D(\left \lfloor{\tilde{x}_i}\right \rfloor, \left \lfloor{\tilde{y}_i}\right \rfloor) + a (x-x_g)^2 + b (y-y_g)^2 + c (\theta-\theta_g)^2 \]
\[ \tilde{x}_i = x_i' \cos \theta - y_i' \sin \theta + x \]
\[ \tilde{y}_i = x_i' \sin \theta + y_i' \cos \theta + y \]
\[ D(x,y) = \sum_{x'} \sum_{y'} M(x',y') \sqrt{(x-x')^2 + (y-y')^2} \]
Here, we project each LIDAR measurement ($x_i'$,$y_i'$) on to the SLAM map, adjusting for the current best guess of ($x$,$y$,$\theta$). At each projected point, we sum up the distance from that point
to every occupied pixel of the SLAM map. This gives us an estimate for how 'far away' the projected scan is from matching the existing map. The three extra terms on the $\Psi$ equation are to bias
the solution towards guess values for ($x$,$y$,$\theta$).
In this way, we are finding the location of the hexapod so that the new LIDAR scan looks most like the existing map. The assumption being made here is that the new scan is similar enough to the
existing map that it can be matched with some confidence. Solving the above equation is a problem of non-linear optimization, similar to the inverse kinematics solved in the previous post. The code
to solve this problem is a little dense, so I won't try to explain all of the details here. The relevant code is here, and the relevant method is slam::step(...);.
In words, we compute the $\Psi$ equation above and how it changes if we modify each of the parameters ($x$,$y$,$\theta$) by a small amount. Using this information, we can nudge each parameter by an
amount that should get us to a lower value of $\Psi$. Since the problem is non-linear, we aren't guaranteed that this gets us to the lowest possible value, or even a lower one than before. To help
make sure we end up in the right place, we initialize the solver with a guess position based on how the hexapod legs have moved recently. Since we went through so much trouble in the previous post to
plan how the feet move, we might as well use that knowledge to help the localization solver. From there we iterate the nudging step over and over again with a smaller nudge until we find there is no
way of nudging it to a lower value of $\Psi$. This is when we stop and say we have found the optimal values of ($x$,$y$,$\theta$). With that, the localization step is done!
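Stripped down to its core, the nudging loop looks roughly like the following sketch. Here gx/gy stand in for the precomputed gradients of the distance map, everything is in map-pixel units, the bias terms toward the guess are omitted, and the step sizes are illustrative; this is not the actual slam::step() code.

```cpp
// Gradient-descent style localization: project the scan with the current pose,
// sum the distance-map gradients at each projected point, and nudge (x, y, theta)
// downhill, shrinking the nudges over time.
#include <cmath>
#include <vector>

struct Pose { double x, y, theta; };

Pose localize(const std::vector<double>& sx, const std::vector<double>& sy,  // scan points (pixels)
              const std::vector<std::vector<double>>& gx,                    // x-gradient of distance map
              const std::vector<std::vector<double>>& gy,                    // y-gradient of distance map
              Pose p) {
    double step = 0.5;                                   // nudge size
    double n = (double)sx.size() + 1.0;                  // normalize so gains don't scale with scan size
    for (int iter = 0; iter < 200; ++iter) {
        double dx = 0, dy = 0, dth = 0;
        double c = std::cos(p.theta), s = std::sin(p.theta);
        for (size_t i = 0; i < sx.size(); ++i) {
            int mx = (int)(sx[i] * c - sy[i] * s + p.x); // project scan point with current pose
            int my = (int)(sx[i] * s + sy[i] * c + p.y);
            if (my < 0 || my >= (int)gx.size() || mx < 0 || mx >= (int)gx[0].size()) continue;
            dx  += gx[my][mx];
            dy  += gy[my][mx];
            dth += gx[my][mx] * (-sx[i] * s - sy[i] * c) // chain rule through the rotation
                 + gy[my][mx] * ( sx[i] * c - sy[i] * s);
        }
        p.x     -= step * dx / n;                        // nudge each parameter downhill
        p.y     -= step * dy / n;
        p.theta -= 0.01 * step * dth / n;                // angles get a much smaller nudge
        step *= 0.98;                                    // gradually smaller nudges
    }
    return p;
}
```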
For computational efficiency, I keep three versions of the SLAM map other than the normal one shown above. The first extra one is the original map convolved with a distance kernel, which at any
position gives us an approximate distance to occupied pixels. The next two are the gradient of this distance map, one for the x-component and one for the y-component. These maps allow us to quickly
evaluate both the $\Psi$ function and its derivatives with respect to ($x$,$y$,$\theta$). The distance map is computed in Fourier space using the convolution theorem, using threaded FFTW for
computational speed. This method doesn't actually give us the correct distance measure for $\Psi$, but it's close enough for this basic algorithm.
The companion to localization is mapping. Once we have a solution to where the new scan was measured from, we need to add it to the existing SLAM map. While we have assumed the new scan is close
enough to the existing map to be matched, it will have small differences due to the new measurement location that need to be incorporated so that the following scan is still similar enough to the
map. In my SLAM code, the method that does the mapping step is slam::integrate(...);.
Each new laser ranging measurement from the new scan is projected on to the SLAM map given the estimated hexapod location from the localization step. The pixel below each point is set to 1.0, meaning
we are fully confident that there is some object there. We then scan through every other pixel in the map and determine whether it is closer or farther away from the hexapod than the new scan
measurements. If it is closer, we decrease the map value because the new scan measured something behind it, meaning it must be free of obstacles. If the pixel is farther, we leave it alone because we
don't have any new information there; the new scan was blocked by something in front of it.
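In spirit, the update for a single measurement looks like this sketch, which walks the ray from the hexapod to the hit instead of sweeping every map pixel as the real method does; the decay factor is a made-up value.

```cpp
// Mark the hit pixel as occupied, and lower the confidence of every pixel the
// laser pulse must have passed through on its way there.
#include <cstdlib>
#include <vector>

void integrateRay(std::vector<std::vector<double>>& map,
                  int px, int py,            // hexapod position (pixels)
                  int hitX, int hitY) {      // where this measurement landed
    map[hitY][hitX] = 1.0;                   // fully confident something is here
    int steps = std::abs(hitX - px);
    if (std::abs(hitY - py) > steps) steps = std::abs(hitY - py);
    for (int s = 1; s < steps; ++s) {
        int x = px + (hitX - px) * s / steps;
        int y = py + (hitY - py) * s / steps;
        map[y][x] *= 0.6;                    // the pulse passed through, so likely empty
    }
}
```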
Once this mapping step is done, we have completed the two-part SLAM algorithm and are ready for another LIDAR scan. It's not the best or most accurate method, but it is easy to understand and can run
on fairly low-level hardware in real-time. I've written the algorithm to run asynchronously from the main hexapod code, so new scans can be submitted and the hexapod can still walk around while the
SLAM algorithm figures out where it is. On the UDOO's Cortex-A9, I can step a 1024x1024 map in around 2-3 seconds. With a 10 cm resolution, this gives over 100 meters of mapping. In practice, I've
found that 10 cm is about the coarsest you can go in an indoor environment, but anything less than 3 cm is a waste of computing time.
To demonstrate graphically how my SLAM algorithm works, I've written a little JavaScript app to show how LIDAR scans relate to the SLAM map. It doesn't actually go through the whole algorithm from
above, but it does show roughly what happens. The GLOBAL view shows the simulated robot moving through an environment, measuring distances to obstacles with a simulated LIDAR scanner. The ROBOT view
shows what these LIDAR measurements look like from the mind's eye of the robot. It doesn't know that it is moving; instead, it looks like the world is moving around it. The SLAM view shows a simulated
SLAM map and the approximate location of the robot. Move the ERROR slider back and forth to simulate varying amounts of noise in the measurements. These reduce the accuracy of both the localization
and mapping methods.
I've also tested this algorithm out in real life with the hexapod. The following SLAM maps were collected by driving the hexapod around my apartment with a remote control. I started the hexapod out
in one room, and it was able to walk into a different room and keep track of its position. The maps are pretty messy, but acceptable considering the simplicity of the algorithm being used.
Noisy map of my apartment.
It's not the best SLAM algorithm in the world, but it's relatively easy to understand and compute in an embedded setting. It seems to do best when the hexapod is inside a closed room and can see at
least two of the walls. It has some issues keeping track of position when transitioning between rooms, mostly due to the sharp changes in the LIDAR scans when passing through a doorway. Still, it
does a reasonable job at keeping track of the hexapod location within my apartment.
In the next post, I'll sort out how to get the hexapod to navigate autonomously. With the algorithms presented so far in this hexapod series, it becomes a straightforward procedure of using the SLAM
maps to find optimal paths to pre-determined waypoints.
This is the second post on my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, click
here. In this post, I'll go over the steps (!) needed to get a hexapod robot walking.
At this point, I have a robot with six legs, three servos per leg, and the electronics and code to control each servo independently. But with no guidance for how to move the servos, the hexapod is
useless. With their above-average leg count and sixfold symmetry, hexapods can move around in all kinds of unique ways. While dancing around is certainly a possibility for my hexapod, I'm really only
interested in getting it to walk around. So to begin the process of getting it mobile, let's start with the basics of getting a robot to walk.
Inverse Kinematics
In order for the hexapod to walk, it needs to be able to independently move each foot up and down, forward and back. But how can we tell it to move its foot? All we have control over are the three
servos for each leg, none of which independently determine where the foot ends up. The position (and angle) that any given foot is at is determined by the angles of each of the three leg servos
together. We can describe the relationship between leg position and servo angles as such:
\[ r = l_c + l_f \cos(\theta_f) + l_t \cos(\theta_f+\theta_t) \]
\[ x_f = x_0 + r \cos(\theta_c+\theta_0) \]
\[ y_f = y_0 + r \sin(\theta_c+\theta_0) \]
\[ z_f = z_0 + l_f \sin(\theta_f) + l_t \sin(\theta_f+\theta_t) \]
Here, $\theta_c$, $\theta_f$, and $\theta_t$ are the servo angles for the coxa, femur, and tibia joints, respectively, and $l_c$, $l_f$, and $l_t$ are the distances between the joints. The position
and angle at which the leg is connected to the body are represented by $x_0$, $y_0$, $z_0$, and $\theta_0$. This set of equations represent the forward kinematics of the leg. Each leg has an
identical set of equations, but with different values for the initial position and angle.
These equations can tell us where the foot is, given the angles of the servos, but we need to do the opposite. Unfortunately, there isn't any way to rearrange the equations above so that we can plug
in the foot position and solve for the servo angles (go ahead and try!). Fortunately, this doesn't mean that it's an impossible task! The process of inverting these equations is called inverse
kinematics, and I've done a project on it before. My other post explains how to go about solving an inverse kinematics problem, so if you're interested in the details, check that out.
In short, the inverse kinematic solver takes a target foot position and outputs the servo angles that it thinks are appropriate. Starting with the servo angles as they are, the algorithm uses the
forward kinematic equations to see which way each servo needs to turn so that the foot ends up slightly closer to the target. It takes many small steps like this until the forward kinematics
equations say the new set of servo angles put the foot in the right place. This kind of procedure has its flaws, though. What if you tell it to find the servo angles that put the foot a mile away?
The algorithm has no way to achieve this since the legs aren't nearly that long. In situations like this, it often goes haywire, giving you a nonsensical result for the servo angles. So careful
attention to the iteration procedure is important.
The code I've written for the hexapod inverse kinematics solver is in my LIB_HEXAPOD library. The library also contains code used in the rest of this post along with some other bits. The procedure
for determining foot position is as follows:
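A minimal standalone sketch of that kind of iterative solve, using the forward kinematics above with the body-mounting offsets set to zero; the link lengths and step sizes below are placeholders, not the LIB_HEXAPOD values or interface.

```cpp
#include <cmath>

const double lc = 5.2, lf = 6.6, lt = 13.2;    // link lengths in cm (placeholder values)

void forwardKin(double tc, double tf, double tt, double& x, double& y, double& z) {
    double r = lc + lf * std::cos(tf) + lt * std::cos(tf + tt);
    x = r * std::cos(tc);                       // body offsets (x0, y0, z0, theta0) taken as zero here
    y = r * std::sin(tc);
    z = lf * std::sin(tf) + lt * std::sin(tf + tt);
}

double footError(double tc, double tf, double tt, double tx, double ty, double tz) {
    double x, y, z;
    forwardKin(tc, tf, tt, x, y, z);
    return (x - tx) * (x - tx) + (y - ty) * (y - ty) + (z - tz) * (z - tz);
}

// Nudge each servo angle a little in whichever direction brings the foot closer
// to the target, shrinking the nudge whenever no direction helps.
void solveIK(double tx, double ty, double tz, double& tc, double& tf, double& tt) {
    double nudge = 0.1;                                          // radians
    for (int iter = 0; iter < 400 && nudge > 1e-4; ++iter) {
        bool improved = false;
        double* angles[3] = {&tc, &tf, &tt};
        for (double* a : angles) {
            double before = footError(tc, tf, tt, tx, ty, tz);
            *a += nudge;                                         // try nudging one way...
            if (footError(tc, tf, tt, tx, ty, tz) < before) { improved = true; continue; }
            *a -= 2.0 * nudge;                                   // ...then the other way
            if (footError(tc, tf, tt, tx, ty, tz) < before) { improved = true; continue; }
            *a += nudge;                                         // neither helped: put it back
        }
        if (!improved) nudge *= 0.5;                             // take smaller steps near the answer
    }
}
```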
This bit of code gets called once for each leg, giving it a target position for that leg and getting back the optimal servo angles to use. Assuming I give it reasonable target positions, it can spit
out servo angles quickly enough to solve each leg in just a few milliseconds (running on the Cortex-A9). The inverse kinematics procedure allows me to effectively ignore the fact that each foot is
positioned by three servos, and instead concentrate on where I want the feet to actually go. A simple test is to just tell the feet to move up and down repeatedly:
Impressing Alan the cat with robot weightlifting.
Smooth Stepping
Six legs seems to be more stable than two or four (or an odd number?), but that doesn't make my hexapod impervious to falling over. An object will fall over (or at least be unstable) if it is not
supported by at least three points of contact that surround the center of mass. If we assume the center of mass of the hexapod is roughly in the center of the body, this means that we need at least
three feet touching the ground at all times. Further, if we were to draw a triangle between where the feet touching the ground are, the center of mass needs to be directly above anywhere within this
triangle. Since we have six legs in all, these rules lead us to one of the simplest gaits for a hexapod, the tripod gait:
The six legs are broken up into two groups which trade off being the support for the body. The legs within each group lower to the ground in unison, move towards the back of the body, then lift up
and move back to the front. The two groups do this exactly out of phase with each other so that there are always exactly three feet on the ground at any one point. For my hexapod, I've modified this
a bit so that the three legs within each group hit the ground at slightly different times. I've done this to reduce the repetitive jolting that occurs from moving each leg simultaneously.
Even so, an issue I ran into while designing the gait is that whenever a foot hits the ground, the sudden transition from free-moving to weight-bearing caused the whole hexapod to jerk around. To
specify the position of each foot as a function time, I was using a cosine bell with a flat bottom:
Notice the sharp change in direction at the start and end of when the foot is in contact with the floor. The transition to weight-bearing doesn't happen instantaneously (imagine carpeted floors), so
the sudden transition when the foot goes from moving down to moving back creates problems. To create a smoother path for the feet to follow, I turned to Bezier curves.
Bézier curves are smooth functions that are completely determined by a sequence of points that I will call anchors. These anchors specify generally what shape the curve has, so tweaking the shape of
the curve just involves moving around the anchor points. Going from a set of anchor points to the resulting Bezier curve involves a series of linear interpolations. Given some intended distance along
the total path between 0 and 1, we start by linearly interpolating between each adjacent anchor points. So if we want to know where the Bezier curve is halfway along the path, we start by linearly
interpolating halfway between each pair of adjacent anchors. If we have $N$ anchors, this gives us $N-1$ interpolated points. We then linearly interpolate again halfway between these $N-1$ points to
get $N-2$ doubly-interpolated points. We continue this procedure until we are left with a single interpolated point, and this is the position of the Bezier curve at the halfway point.
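The repeated interpolation itself is only a few lines; here is a sketch of evaluating a 2D Bezier curve this way (my illustration of the idea, not the hexapod's Bezier class):

```cpp
// De Casteljau evaluation of a Bezier curve: repeatedly interpolate between
// adjacent anchors until a single point remains. anchors are (x, z) pairs for
// the foot's return path; t runs from 0 to 1 along the step.
#include <utility>
#include <vector>

std::pair<double, double> bezier(std::vector<std::pair<double, double>> pts, double t) {
    while (pts.size() > 1) {
        for (size_t i = 0; i + 1 < pts.size(); ++i) {
            pts[i].first  = (1 - t) * pts[i].first  + t * pts[i + 1].first;
            pts[i].second = (1 - t) * pts[i].second + t * pts[i + 1].second;
        }
        pts.pop_back();    // each pass leaves one fewer interpolated point
    }
    return pts[0];
}
```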
The procedure for generating Bezier curves is a little difficult to describe, so I've made a little interface to help explain it. Drag the grey anchor points around to see how the Bezier curve
changes, then increase the Guide Level slider to see the various levels of linear interpolation.
To make sure the hexapod stays steady when walking, I've kept the straight part of the foot path where the foot touches the ground, but set the return path to be a Bezier curve. I wrote a simple
Bezier curve class to handle the computations on the hexapod.
Applying this Bezier curve stepping method to each leg in an alternating pattern gets the hexapod to walk forwards, but it can't yet handle turning. To implement turning, my first instinct was to
simply adjust the amount by which each foot sweeps forward and back on each side of the body differently. This would cause one side to move forward more than the other, and the hexapod would turn.
The problem with this method is that it isn't particularly physical. If you try it, you'll find that the hexapod has to drag its feet sideways to compensate for the fact that it is turning sideways
but the feet only move forward and back. In order to let the hexapod turn naturally, you need to go into a different frame of reference.
If you tell all of the feet to move up towards the sky, the hexapod moves closer to the ground. Tell the feet to move down, the hexapod moves up. It can get confusing to figure out how to move the
feet to get the body to move a certain way. I've found it's best to think that the hexapod body stays still in space and the ground just moves around relative to it. Then all you need to do is make
sure the feet are in contact with that moving floor and they don't slide on it. For straight walking, we can just see it as a treadmill-like floor that continually moves backwards, and the feet are
just trying to match the treadmill speed. For turning, we can think about the hexapod sitting above a giant turntable. How close the hexapod body sits to the axis of rotation determines how sharp of
a turn it makes, or what the turning radius is. In order to keep the feet from sliding around on the turntable, we need to make sure each foot travels along a curve of constant radius from the axis
of rotation. If we set it so the axis of rotation is directly underneath the body, the hexapod will stay in one place and just turn around and around. If the axis of rotation is set very very far
away, there will barely be any curvature to the foot-paths, and the hexapod will basically walk forward in a straight line.
To help explain this concept, I've made another little interface for seeing how the hexapod feet need to move. Move the bottom slider around to change the turning radius, and enable the helper lines
to see how each foot follows a specific path relative to the axis of rotation.
To incorporate the Bezier curve method from above into this view of walking, I convert the foot positions into polar coordinates around the axis of rotation and use the Bezier curve to pick the $\theta$ and $z$ coordinates as a function of time. In the hexapod code, I've parameterized the turning by a single parameter that relates to the turning radius. Between the turning parameter and a
single speed parameter, I have full control over the movement of the hexapod.
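A sketch of that conversion for a single foot (the axis placement and the names here are illustrative, not the hexapod's actual parameterization):

```cpp
// Given a turning radius and how far along its arc a foot should currently be,
// compute the foot target in the body frame. Each foot stays on a curve of
// constant radius around the axis of rotation, so it never slides sideways.
#include <cmath>

void footOnArc(double turnRadius,             // distance from body center to the axis of rotation
               double homeX, double homeY,    // foot's neutral position in the body frame
               double arcAngle,               // progress along the turn (from the Bezier curve)
               double& x, double& y) {
    double cx = 0.0, cy = turnRadius;                      // axis of rotation off to the side
    double r  = std::hypot(homeX - cx, homeY - cy);        // constant radius for this foot
    double a0 = std::atan2(homeY - cy, homeX - cx);        // foot's base angle about the axis
    x = cx + r * std::cos(a0 + arcAngle);
    y = cy + r * std::sin(a0 + arcAngle);
}
```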
At every time step, the code considers the current values for speed and turning, and decides where along the Bezier curve each foot should be. It then computes the actual positions in space from the
curves and feeds these positions into the inverse kinematic solver. The solver outputs servo angles for each joint of each leg, and the main code packages these up and sends them off to the
lower-level processors. This whole procedure is fairly quick to compute, so I can update the servo positions at about 50Hz.
I wrote a test code that takes input from an Xbox 360 controller and turns it into speed and turning values. This allowed me to treat the whole hexapod like a remote-control car. After driving it
around and chasing my cats, I made a little video to show off the build:
At this point, the hexapod can walk around freely, but does not know where to go. In the next post, I'll go into giving the hexapod a sense of awareness through laser ranging.
This is the first post in my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, click
In this post, I'll cover building the hexapod body, the various levels of processing needed to keep the hexapod on its feet, and some of the code I wrote to tie various levels of processing together.
Building the Body
Instead of designing the hexapod from scratch, I decided to go with a plexiglass kit for the body. This kit (and others from the same company) seem aimed at researchers and high-level hobbyists:
higher quality than kids' toys, but not expensive enough to scare absolutely everyone away. I actually stumbled across this kit while looking for high-quality servos, which would play a key part in
the success of the robot. My servo experience from the Single Pixel Camera project left a bad taste in my mouth (I did buy the cheapest servos I could find..), so I opted for a set of AX-12As for
this project. These servos use a serial communication protocol to not only set the target angle, but also to set angle limits, torque limits, failure modes, and request the current internal status
(temperature, voltage, load, speed, etc.). They are significantly more expensive than your run-of-the-mill micro servos, but I've learned a few times now that the money you save on parts you pay
later in headaches (with interest!).
The fancy boxes are a nice touch.
It's like LEGOs, but for people with more money and less imagination.
The body consists of two flat panels separated by an inch or so with various mounting holes. This leaves a good bit of space in the middle for wires and other electronics. Each of the six legs
consists of a coxa (hip) joint, a femur joint, and a tibia joint. This allows each foot to move around in all three dimensions, although with some limitations in position and angle. The kit
also comes with an upper deck with a grid of mounting holes, providing even more space for accessories. I've been a little worried about shattering the plastic components, but have yet to have any
problems. The range of motion of each leg is impressive, and the overall design is nicely menacing.
Sitting pretty on a wooden stand.
Controlling the Servos
The servos receive both power and control data through a three-wire cable system. They can be daisy-chained together, so in theory, all 18 of my servos can be powered and controlled with only three
wires. The same site I got my kit and servos from offers an Arduino-based controller board for these kinds of servos called the Arbotix-M. It sports an ATMEGA644p processor running at 16MHz and 5V,
and most importantly, has the basic accessory electronics needed to handle communication over the servo data line. The unfortunate number of data lines on the servos dedicated to bi-directional
communication (one) means extra electronics, which I am happy to let someone else figure out.
Almost like the hexapod was designed to have one! (it was)
Note: throughout this project, I'm dealing with the servos as black boxes. Not that they are particularly mysterious -- the documentation is actually pretty good -- but more that I don't want to
adjust their firmware at all or consider the possibility of improving them. So the Arbotix-M controller is the first and lowest level of hardware I need to deal with in terms of programming.
Eighteen servos are a lot to deal with, so I decided to use the Arbotix-M as a kind of aggregator for servo commands and information. Other hardware can send the Arbotix-M a single command to set all
of the servos to various angles, and it handles splitting up the commands and sending them to the servos one by one. It also monitors the servos and reports any statistics or faults to other
interested processors. The full code I wrote to run on the Arbotix-M can be found here. I set up a small test where I sent the Arbotix-M commands over FTDI and found I could update all 18 servos at
around 100Hz, which was plenty fast for nice, smooth motion.
Higher Level Hardware
So far, the hexapod had nice servos and a controller that could keep the limbs moving, but nothing giving it any interesting commands. For the next two levels of processing above the Arbotix-M, I
decided to go with a UDOO Quad board. The UDOO Quad is a small single-board computer with a quad-core ARM Cortex-A9 processor alongside an Arduino Due (ARM Cortex-M3), connected to a slew of
peripherals including WiFi, HDMI, Ethernet, USB, Audio, and SATA. Instead of choosing this board after hours of deliberating between the numerous single-board computers out there, I went with this
one because I had one sitting around in my graveyard of processors from failed projects. The Arduino Due would act as a hardware interface between the main cpu and the various hardware components
(like the Arbotix-M), and the main cpu would do the heavy lifting required to get the hexapod moving around.
With the goal of only using my own software comes the goal of utilizing multiple cores in the single-board computer I have chosen. Throughout the project I've decided to use threading in C++ to
achieve this.
Packets of Information
Because the quad-core cpu shares a board with the Arduino Due, they share a dedicated serial connection (/dev/ttymxc3 in unix-land), so there's no need to worry about keeping them connected. I added a
pair of wires to connect the Due to the Arbotix-M with another serial connection (Serial1 on the Due). These three processors needed a standard way of communicating data with each other. Since I
wanted to allow for any processor to send data to any other processor, I needed a standardized method of packaging data and transmitting it so that it could be directed, parsed, and understood by any of them.
I created a packet class in C++ to handle this. A packet object could be created by any processor, filled with data, and sent along with a destination tagged in the header. The key properties of the
packet class are as follows:
• Arbitrary data size. (As long as it is an integer number of bytes).
• Optional data padding. Some processors have difficulty with tiny packets.
• Checksum. I only programmed for a single-bit XOR checksum, but it's marginally better than nothing.
• Optional pre-buffered data. When memory is scarce, allow new packets to take the place of old ones in memory.
• Destination Tagging. Single bytes indicating the destination, which is useful for packets bouncing between multiple processors.
The C++ class I ended up creating to achieve this looked something like this:
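Something along these lines; the exact byte layout below (start marker value, field order, footer placement) is a stand-in for illustration rather than the project's actual format:

#include <cstdint>
#include <vector>

class Packet {
public:
    Packet(uint8_t destination, const std::vector<uint8_t>& data) {
        buffer.push_back(0xFF);                               // start-of-packet marker (assumed value)
        buffer.push_back(destination);                        // destination tag
        buffer.push_back(static_cast<uint8_t>(data.size()));  // payload length in bytes
        buffer.insert(buffer.end(), data.begin(), data.end());
        uint8_t check = 0;
        for (uint8_t b : buffer) check ^= b;                  // XOR checksum over header + data
        buffer.push_back(check);                              // checksum footer
    }

    const std::vector<uint8_t>& bytes() const { return buffer; }

private:
    std::vector<uint8_t> buffer;   // header + data + footer, ready to write to a serial port
};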
The buffer holds all of the relevant information for the packet, including the size, destination, and checksum. If you don't care for code examples, you can view these packets of information as just
a string of bytes, each with a specific purpose. The data it contains can be any length, but the header and footer are always the same:
Each block represents a single byte.
The three processors are connected in series and use serial communication. Each monitors the serial port that connects it with the other processor(s) and decides what actions to take when a valid
packet is received. For the quad-core processor, I wrote a simple serial port monitor that runs in its own thread. It both monitors incoming traffic for packets, and accepts new packets from other
threads to send. The interface presented to other threads allows for either blocking or non-blocking sends and receives. The threaded serial library can be seen here. The lower level processors can't
use threading (or at least, I'm not going to try), so they just periodically monitor their serial ports and pause other functions to act on incoming packets.
The whole point of this serial communication path with standardized packets is so that you can do something like send information from the quad-core processor, have the Due see that it is meant for
the Arbotix-M, send it along, and have the Arbotix-M see it and act on it. Here's an example of what this kind of action looks like in code:
First, the quad-core processor initiates communication with the Due, creates a packet with some information for the Arbotix-M, then sends it through the serial port.
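A rough sketch of what that looks like, reusing the Packet sketch above. The destination byte and the servo command bytes are made-up placeholders, and the real code goes through the threaded serial library rather than a bare file descriptor (termios setup is omitted here):

#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

// Reuses the Packet class from the sketch above.
constexpr uint8_t DEST_ARBOTIX = 0x02;   // made-up destination tag for the Arbotix-M

int main() {
    int due = open("/dev/ttymxc3", O_RDWR | O_NOCTTY);   // dedicated serial link to the Due
    if (due < 0) return 1;

    std::vector<uint8_t> servoCommand = {0x01, 0x5A};     // e.g. "servo 1 to some angle" (placeholder bytes)
    Packet p(DEST_ARBOTIX, servoCommand);                 // tag the packet for the Arbotix-M

    const std::vector<uint8_t>& bytes = p.bytes();
    write(due, bytes.data(), bytes.size());               // the Due will see the tag and forward it
    close(due);
    return 0;
}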
The Due is continually running code like the following, which looks for valid packets and sends them on to the Arbotix-M if the destination tag is correct (this looks a lot more complicated, but
that's because I haven't made as much of an effort to hide the complicated parts behind libraries and such):
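A stripped-down, Arduino-style version of that loop, using the same assumed packet layout; which serial object maps to which physical link is also an assumption, and the real code additionally verifies the checksum and handles packets addressed to the Due itself:

const uint8_t START_BYTE   = 0xFF;   // same assumed start-of-packet marker
const uint8_t DEST_ARBOTIX = 0x02;   // same assumed destination tag

void setup() {
    Serial.begin(115200);    // link to the quad-core processor (baud rate assumed)
    Serial1.begin(115200);   // link to the Arbotix-M
}

void loop() {
    // Wait until a full header (marker, destination, payload length) has arrived.
    if (Serial.available() >= 3 && Serial.peek() == START_BYTE) {
        uint8_t header[3];
        Serial.readBytes(header, 3);
        uint8_t dest = header[1];
        uint8_t len  = header[2];          // assumes payloads smaller than the buffer below

        uint8_t body[64];
        Serial.readBytes(body, len + 1);   // payload plus the one-byte checksum footer

        if (dest == DEST_ARBOTIX) {
            Serial1.write(header, 3);      // not for us: pass the whole packet along unchanged
            Serial1.write(body, len + 1);
        }
        // otherwise the packet is meant for the Due itself and is handled here (omitted)
    } else if (Serial.available() > 0 && Serial.peek() != START_BYTE) {
        Serial.read();                     // discard stray bytes until a start marker lines up
    }
}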
Code snippets are nice and all, but at the end of the day we need to make sure the code is actually sending the correct information. When determining whether data is being sent between bits of
hardware correctly, my favorite method is to just snoop on the data lines themselves and see what the ones and zeroes look like. I hooked up a digital logic analyzer to the data lines between the
quad-core processor and the Due, and between the Due and the Arbotix-M to look at the data being passed around.
If you aren't familiar with this kind of view, we are looking at the voltage of each data line as a function of time. Within each horizontal band, the thin white line shows the voltage on the line.
I'm using a digital analyzer, so it can only tell me if the voltage is low or high, hence the rapid switching up and down. The horizontal axis is time, and the values above all of the lines show the
timescale being plotted.
As the probe trace shows, the three processors are successfully passing information back and forth. The quad-core cpu sends a packet of ones and zeros to the Due, then a short time later the same
packet of information gets sent from the Due to the Arbotix-M. Zooming in to one of the packets, we can even see the individual bytes being sent along with what they mean:
The three levels of processing can pass information back and forth, so the next step is to decide what that information should be. The quad-core processor can tell the servos where to go through this
setup of bouncing packets, but the servos don't know what they should be doing. In the next post, I'll talk about using inverse kinematics and some other tricks to decide how each servo should move
such that the hexapod can walk around smoothly. | {"url":"http://www.gperco.com/2015/06/","timestamp":"2024-11-11T23:04:50Z","content_type":"application/xhtml+xml","content_length":"171333","record_id":"<urn:uuid:ca29275b-dbab-403c-aa76-745ef5880367>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00122.warc.gz"} |
Hackerrank - The Hurdle Race Solution
Dan is playing a video game in which his character competes in a hurdle race. Hurdles are of varying heights, and Dan has a maximum height he can jump. There is a magic potion he can take that will
increase his maximum height by 1 unit for each dose. How many doses of the potion must he take to be able to jump all of the hurdles?
Given an array of hurdle heights, height, and an initial maximum height Dan can jump, k, determine the minimum number of doses Dan must take to be able to clear all the hurdles in the race.
For example, if the tallest hurdle is higher than the height Dan can jump naturally, he must take one dose of potion for each unit of the difference to be able to jump all of the hurdles.
Function Description
Complete the hurdleRace function in the editor below. It should return the minimum units of potion Dan needs to drink to jump all of the hurdles.
hurdleRace has the following parameter(s):
• k: an integer denoting the height Dan can jump naturally
• height: an array of integers denoting the heights of each hurdle
Input Format
The first line contains two space-separated integers, n and k: the number of hurdles and the maximum height Dan can jump naturally.
The second line contains n space-separated integers describing the heights of the hurdles.
Output Format
Print an integer denoting the minimum doses of magic potion Dan must drink to complete the hurdle race.
Sample Input 0
Sample Output 0
Explanation 0
Dan's character can jump a maximum of units, but the tallest hurdle has a height of :
To be able to jump all the hurdles, Dan must drink doses.
Sample Input 1
Sample Output 1
Explanation 1
Dan's character can jump a maximum of units, which is enough to cross all the hurdles:
Because he can already jump all the hurdles, Dan needs to drink doses.
Solution in Python
def hurdleRace(k, height):
    # Doses needed = gap between the tallest hurdle and Dan's natural jump height,
    # never negative (he may already be able to clear every hurdle).
    return max(0, max(height) - k)

n, k = map(int, input().split())
height = list(map(int, input().split()))
print(hurdleRace(k, height))
Divide-and-Conquer - Foundations of Algorithms (2015)
Foundations of Algorithms (2015)
Chapter 2 Divide-and-Conquer
Our first approach to designing algorithms, divide-and-conquer, is patterned after the brilliant strategy employed by the French emperor Napoleon in the Battle of Austerlitz on December 2, 1805. A
combined army of Austrians and Russians outnumbered Napoleon’s army by about 15,000 soldiers. The Austro-Russian army launched a massive attack against the French right flank. Anticipating their
attack, Napoleon drove against their center and split their forces in two. Because the two smaller armies were individually no match for Napoleon, they each suffered heavy losses and were compelled
to retreat. By dividing the large army into two smaller armies and individually conquering these two smaller armies, Napoleon was able to conquer the large army.
The divide-and-conquer approach employs this same strategy on an instance of a problem. That is, it divides an instance of a problem into two or more smaller instances. The smaller instances are
usually instances of the original problem. If solutions to the smaller instances can be obtained readily, the solution to the original instance can be obtained by combining these solutions. If the
smaller instances are still too large to be solved readily, they can be divided into still smaller instances. This process of dividing the instances continues until they are so small that a solution
is readily obtainable.
The divide-and-conquer approach is a top-down approach. That is, the solution to a top-level instance of a problem is obtained by going down and obtaining solutions to smaller instances. The reader
may recognize this as the method used by recursive routines. Recall that when writing recursion, one thinks at the problem-solving level and lets the system handle the details of obtaining the
solution (by means of stack manipulations). When developing a divide-and-conquer algorithm, we usually think at this level and write it as a recursive routine. After this, we can sometimes create a
more efficient iterative version of the algorithm.
We now introduce the divide-and-conquer approach with examples, starting with Binary Search.
2.1 Binary Search
We showed an iterative version of Binary Search (Algorithm 1.5) in Section 1.2. Here we present a recursive version because recursion illustrates the top-down approach used by divide-and-conquer.
Stated in divide-and-conquer terminology, Binary Search locates a key x in a sorted (nondecreasing order) array by first comparing x with the middle item of the array. If they are equal, the
algorithm is done. If not, the array is divided into two subarrays, one containing all the items to the left of the middle item and the other containing all the items to the right. If x is smaller
than the middle item, this procedure is then applied to the left subarray. Otherwise, it is applied to the right subarray. That is, x is compared with the middle item of the appropriate subarray. If
they are equal, the algorithm is done. If not, the subarray is divided in two. This procedure is repeated until x is found or it is determined that x is not in the array.
The steps of Binary Search can be summarized as follows.
If x equals the middle item, quit. Otherwise:
1. Divide the array into two subarrays about half as large. If x is smaller than the middle item, choose the left subarray. If x is larger than the middle item, choose the right subarray.
2. Conquer (solve) the subarray by determining whether x is in that subarray. Unless the subarray is sufficiently small, use recursion to do this.
3. Obtain the solution to the array from the solution to the subarray.
Binary Search is the simplest kind of divide-and-conquer algorithm because the instance is broken down into only one smaller instance, so there is no combination of outputs. The solution to the
original instance is simply the solution to the smaller instance. The following example illustrates Binary Search.
Example 2.1
Suppose x = 18 and we have the following array:
1. Divide the array: Because x < 25, we need to search
2. Conquer the subarray by determining whether x is in the subarray. This is accomplished by recursively dividing the subarray. The solution is:
Yes, x is in the subarray.
3. Obtain the solution to the array from the solution to the subarray:
Yes, x is in the array.
In Step 2 we simply assumed that the solution to the subarray was available. We did not discuss all the details involved in obtaining the solution because we wanted to show the solution at a
problem-solving level. When developing a recursive algorithm for a problem, we need to
• Develop a way to obtain the solution to an instance from the solution to one or more smaller instances.
• Determine the terminal condition(s) that the smaller instance(s) is (are) approaching.
• Determine the solution in the case of the terminal condition(s).
We need not be concerned with the details of how the solution is obtained (in the case of a computer, by means of stack manipulations). Indeed, worrying about these details can sometimes hinder one’s
development of a complex recursive algorithm. For the sake of concreteness, Figure 2.1 shows the steps done by a human when searching with Binary Search.
A recursive version of Binary Search now follows.
Algorithm 2.1
Binary Search (Recursive)
Problem: Determine whether x is in the sorted array S of size n.
Inputs: positive integer n, sorted (nondecreasing order) array of keys S indexed from 1 to n, a key x.
Outputs: location, the location of x in S (0 if x is not in S).
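One possible C++ rendering of the recursive routine just described (a sketch, not the book's own pseudocode): following the text's convention, n, S, and x are globals, S is indexed from 1 to n (slot 0 unused), and only low and high are parameters.

#include <vector>

std::vector<int> S;   // sorted keys in S[1..n]; S[0] is unused
int x;                // the search key

int location(int low, int high) {
    if (low > high)                       // terminal condition: x is not in S[low..high]
        return 0;
    int mid = (low + high) / 2;
    if (x == S[mid])
        return mid;                       // found x
    else if (x < S[mid])
        return location(low, mid - 1);    // search the left subarray
    else
        return location(mid + 1, high);   // search the right subarray
}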
Notice that n, S, and x are not parameters to function location. Because they remain unchanged in each recursive call, there is no need to make them parameters. In this text only the variables, whose
values can change in the recursive calls, are made parameters to recursive routines. There are two reasons for doing this. First, it makes the expression of recursive routines less cluttered. Second,
in an actual implementation of a recursive routine, a new copy of any variable passed to the routine is made in each recursive call. If a variable’s value does not change, the copy is unnecessary.
This waste could be costly if the variable is an array. One way to circumvent this problem would be to pass the variable by address. Indeed, if the implementation language is C++, an array is
automatically passed by address, and using the reserved word const guarantees the array cannot be modified. However, including all of this in our pseudocode expression of recursive algorithms again
serves to clutter them and possibly diminish their clarity.
Figure 2.1 The steps done by a human when searching with Binary Search. (Note: x = 18.)
Each of the recursive algorithms could be implemented in a number of ways, depending on the language used for the implementation. For example, one possible way to implement them in C++ would be pass
all the parameters to the recursive routine; another would be to use classes; and yet another would be to globally define the parameters that do not change in the recursive calls. We will illustrate
how to implement the last one since this is the alternative consistent with our expression of the algorithms. If we did define S and x globally and n was the number of items in S, our top-level call
to function location in Algorithm 2.1 would be as follows:
locationout = location (1 , n) ;
Because the recursive version of Binary Search employs tail-recursion (that is, no operations are done after the recursive call), it is straightforward to produce an iterative version, as was done in
Section 1.2. As previously discussed, we have written a recursive version because recursion clearly illustrates the divide-and-conquer process of dividing an instance into smaller instances. However,
it is advantageous in languages such as C++ to replace tail-recursion by iteration. Most importantly, a substantial amount of memory can be saved by eliminating the stack developed in the recursive
calls. Recall that when a routine calls another routine, it is necessary to save the first routine’s pending results by pushing them onto the stack of activation records. If the second routine calls
another routine, the second routine’s pending results must also be pushed onto the stack, and so on. When control is returned to a calling routine, its activation record is popped from the stack and
the computation of the pending results is completed. In the case of a recursive routine, the number of activation records pushed onto the stack is determined by the depth reached in the recursive
calls. For Binary Search, the stack reaches a depth that in the worst case is about lg n + 1.
Another reason for replacing tail-recursion by iteration is that the iterative algorithm will execute faster (but only by a constant multiplicative factor) than the recursive version because no stack
needs to be maintained. Because most modern LISP dialects compile tail-recursion to iterative code, there is no reason to replace tail-recursion by iteration in these dialects.
Binary Search does not have an every-case time complexity. We will do a worst-case analysis. We already did this informally in Section 1.2. Here we do the analysis more rigorously. Although the
analysis refers to Algorithm 2.1, it pertains to Algorithm 1.5 as well. If you are not familiar with techniques for solving recurrence equations, you should study Appendix B before proceeding.
Analysis of Algorithm 2.1
Worst-Case Time Complexity (Binary Search, Recursive)
In an algorithm that searches an array, the most costly operation is usually the comparison of the search item with an array item. Thus, we have the following:
Basic operation: the comparison of x with S [mid].
Input size: n, the number of items in the array.
We first analyze the case in which n is a power of 2. There are two comparisons of x with S [mid] in any call to function location in which x does not equal S [mid]. However, as discussed in our
informal analysis of Binary Search in Section 1.2, we can assume that there is only one comparison, because this would be the case in an efficient assembler language implementation. Recall from
Section 1.3 that we ordinarily assume that the basic operation is implemented as efficiently as possible.
As discussed in Section 1.2, one way the worst case can occur is when x is larger than all array items. If n is a power of 2 and x is larger than all the array items, each recursive call reduces the
instance to one exactly half as big. For example, if n = 16, then mid = ⌊(1 + 16)/2⌋ = 8. Because x is larger than all the array items, the top eight items are the input to the first recursive call. Similarly, the top four
items are the input to the second recursive call, and so on. We have the following recurrence:
If n = 1 and x is larger than the single array item, there is a comparison of x with that item followed by a recursive call with low > high. At this point the terminal condition is true, which means
that there are no more comparisons. Therefore, W (1) is 1. We have established the recurrence
This recurrence is solved in Example B.1 in Appendix B. The solution is
If n is not restricted to being a power of 2, then
W(n) = ⌊lg n⌋ + 1 ∈ Θ(lg n),
where ⌊y⌋ is the largest integer less than or equal to y. We show how to establish this result in the exercises.
2.2 Mergesort
A process related to sorting is merging. By two-way merging we mean combining two sorted arrays into one sorted array. By repeatedly applying the merging procedure, we can sort an array. For example,
to sort an array of 16 items, we can divide it into two subarrays, each of size 8, sort the two subarrays, and then merge them to produce the sorted array. In the same way, each subarray of size 8
can be divided into two subarrays of size 4, and these subarrays can be sorted and merged. Eventually, the size of the subarrays will become 1, and an array of size 1 is trivially sorted. This
procedure is called “Mergesort.” Given an array with n items (for simplicity, let n be a power of 2), Mergesort involves the following steps:
1. Divide the array into two subarrays each with n/2 items.
2. Conquer (solve) each subarray by sorting it. Unless the array is sufficiently small, use recursion to do this.
3. Combine the solutions to the subarrays by merging them into a single sorted array.
The following example illustrates these steps.
Example 2.2
Suppose the array contains these numbers in sequence:
1. Divide the array:
2. Sort each subarray:
3. Merge the subarrays:
Figure 2.2 The steps done by a human when sorting with Mergesort.
In Step 2 we think at the problem-solving level and assume that the solutions to the subarrays are available. To make matters more concrete, Figure 2.2 illustrates the steps done by a human when
sorting with Mergesort. The terminal condition occurs when an array of size 1 is reached; at that time, the merging begins.
Algorithm 2.2
Problem: Sort n keys in nondecreasing sequence.
Inputs: positive integer n, array of keys S indexed from 1 to n.
Outputs: the array S containing the keys in nondecreasing order.
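A C++ sketch of the procedure just described; for simplicity the vectors are indexed from 0 here, but the structure (copy into U and V, sort each half recursively, then merge) is the same.

#include <vector>

void merge(int h, int m, const std::vector<int>& U,
           const std::vector<int>& V, std::vector<int>& S);   // Algorithm 2.3, sketched below

void mergesort(int n, std::vector<int>& S) {
    if (n <= 1) return;                              // an array of size 1 is trivially sorted
    int h = n / 2, m = n - h;
    std::vector<int> U(S.begin(), S.begin() + h);    // first half of S
    std::vector<int> V(S.begin() + h, S.end());      // second half of S
    mergesort(h, U);                                 // conquer each half
    mergesort(m, V);
    merge(h, m, U, V, S);                            // combine the two sorted halves
}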
Before we can analyze Mergesort, we must write and analyze an algorithm that merges two sorted arrays.
Algorithm 2.3
Problem: Merge two sorted arrays into one sorted array.
Inputs: positive integers h and m, array of sorted keys U indexed from 1 to h, array of sorted keys V indexed from 1 to m.
Outputs: an array S indexed from 1 to h + m containing the keys in U and V in a single sorted array.
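A matching C++ sketch of merge: repeatedly copy the smaller of the two front items into S, then append whatever remains of the array that was not exhausted.

#include <vector>

void merge(int h, int m, const std::vector<int>& U,
           const std::vector<int>& V, std::vector<int>& S) {
    int i = 0, j = 0, k = 0;
    while (i < h && j < m) {
        if (U[i] < V[j]) S[k++] = U[i++];   // basic operation: compare U[i] with V[j]
        else             S[k++] = V[j++];
    }
    while (i < h) S[k++] = U[i++];          // V was exhausted first: copy the rest of U
    while (j < m) S[k++] = V[j++];          // U was exhausted first: copy the rest of V
}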
• Table 2.1 An example of merging two arrays U and V into one array S^∗
*Items compared are in boldface.
Table 2.1 illustrates how procedure merge works when merging two arrays of size 4.
Analysis of Algorithm 2.3
Worse-Case Time Complexity (Merge)
As mentioned in Section 1.3, in the case of algorithms that sort by comparing keys, the comparison instruction and the assignment instruction can each be considered the basic operation. Here we will
consider the comparison instruction. When we discuss Mergesort further in Chapter 7, we will consider the number of assignments. In this algorithm, the number of comparisons depends on both h and m.
We therefore have the following:
Basic operation: the comparison of U [i] with V [j].
Input size: h and m, the number of items in each of the two input arrays.
The worst case occurs when the loop is exited, because one of the indices— say, i—has reached its exit point h + 1 whereas the other index j has reached m, 1 less than its exit point. For example,
this can occur when the first m − 1 items in V are placed first in S, followed by all h items in U, at which time the loop is exited because i equals h + 1. Therefore,
We can now analyze Mergesort.
Analysis of Algorithm 2.2
Worst-Case Time Complexity (Mergesort)
The basic operation is the comparison that takes place in merge. Because the number of comparisons increases with h and m, and h and m increase with n, we have the following:
Basic operation: the comparison that takes place in merge.
Input size: n, the number of items in the array S.
The total number of comparisons is the sum of the number of comparisons in the recursive call to mergesort with U as the input, the number of comparisons in the recursive call to mergesort with V as
the input, and the number of comparisons in the top-level call to merge. Therefore,
We first analyze the case where n is a power of 2. In this case,
Our expression for W(n) becomes
When the input size is 1, the terminal condition is met and no merging is done. Therefore, W (1) is 0. We have established the recurrence
This recurrence is solved in Example B.19 in Appendix B. The solution is
For n not a power of 2, we will establish in the exercises that
where ⌈y⌉ and ⌊y⌋ are the smallest integer ≥ y and the largest integer ≤ y, respectively. It is hard to analyze this case exactly because of the floors and ceilings, but it can be shown that W(n) is nondecreasing. Therefore, Theorem B.4 in that appendix implies that
An in-place sort is a sorting algorithm that does not use any extra space beyond that needed to store the input. Algorithm 2.2 is not an in-place sort because it uses the arrays U and V besides the
input array S. If U and V are variable parameters (passed by address) in merge, a second copy of these arrays will not be created when merge is called. However, new arrays U and V will still be
created each time mergesort is called. At the top level, the sum of the numbers of items in these two arrays is n. In the top-level recursive call, the sum of the numbers of items in the two arrays
is about n/2; in the recursive call at the next level, the sum of the numbers of items in the two arrays is about n/4; and, in general, the sum of the numbers of items in the two arrays at each
recursion level is about one-half of the sum at the previous level. Therefore, the total number of extra array items created is about n (1 + 1/2 + 1/4 + · · ·) = 2n.
Algorithm 2.2 clearly illustrates the process of dividing an instance of a problem into smaller instances because two new arrays (smaller instances) are actually created from the input array
(original instance). Therefore, this was a good way to introduce Mergesort and illustrate the divide-and-conquer approach. However, it is possible to reduce the amount of extra space to only one
array containing n items. This is accomplished by doing much of the manipulation on the input array S. The following method for doing this is similar to the method used in Algorithm 2.1 (Binary
Search, Recursive).
Algorithm 2.4
Mergesort 2
Problem: Sort n keys in nondecreasing sequence.
Inputs: positive integer n, array of keys S indexed from 1 to n.
Outputs: the array S containing the keys in nondecreasing order.
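A C++ sketch of this version (one way to write it): the recursion passes index bounds instead of copies of the array, with S global and indexed from 1 to n as in the text.

#include <vector>

std::vector<int> S;   // the keys, in S[1..n]

void merge2(int low, int mid, int high);   // Algorithm 2.5, sketched below

void mergesort2(int low, int high) {
    if (low < high) {
        int mid = (low + high) / 2;
        mergesort2(low, mid);        // sort S[low..mid]
        mergesort2(mid + 1, high);   // sort S[mid+1..high]
        merge2(low, mid, high);      // merge the two sorted halves back into S
    }
}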
Following our convention of making only variables, whose values can change in recursive calls, parameters to recursive routines, n and S are not parameters to procedure mergesort2. If the algorithm
were implemented by defining S globally and n was the number of items in S, the top-level call to mergesort2 would be as follows:
mergesort2 (1 , n) ;
The merging procedure that works with mergesort2 follows.
Algorithm 2.5
Merge 2
Problem: Merge the two sorted subarrays of S created in Mergesort 2.
Inputs: indices low, mid, and high, and the subarray of S indexed from low to high. The keys in array slots from low to mid are already sorted in nondecreasing order, as are the keys in array slots
from mid + 1 to high.
Outputs: the subarray of S indexed from low to high containing the keys in nondecreasing order.
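A C++ sketch of merge2: a single temporary array of size high − low + 1 is filled by the usual merging loop and then copied back into S[low..high].

#include <vector>

extern std::vector<int> S;   // S[1..n], as in the previous sketch

void merge2(int low, int mid, int high) {
    std::vector<int> U(high - low + 1);        // temporary array for the merged keys
    int i = low, j = mid + 1, k = 0;
    while (i <= mid && j <= high) {
        if (S[i] < S[j]) U[k++] = S[i++];
        else             U[k++] = S[j++];
    }
    while (i <= mid)  U[k++] = S[i++];         // copy whatever remains of the left half
    while (j <= high) U[k++] = S[j++];         // copy whatever remains of the right half
    for (k = 0; k < (int)U.size(); k++)
        S[low + k] = U[k];                     // move the merged keys back into S
}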
2.3 The Divide-and-Conquer Approach
Having studied two divide-and-conquer algorithms in detail, you should now better understand the following general description of this approach.
The divide-and-conquer design strategy involves the following steps:
1. Divide an instance of a problem into one or more smaller instances.
2. Conquer (solve) each of the smaller instances. Unless a smaller instance is sufficiently small, use recursion to do this.
3. If necessary, combine the solutions to the smaller instances to obtain the solution to the original instance.
The reason we say “if necessary” in Step 3 is that in algorithms such as Binary Search Recursive (Algorithm 2.1) the instance is reduced to just one smaller instance, so there is no need to combine
More examples of the divide-and-conquer approach follow. In these examples we will not explicitly mention the steps previously outlined. It should be clear that we are following them.
2.4 Quicksort (Partition Exchange Sort)
Next we look at a sorting algorithm, called “Quicksort,” that was developed by Hoare (1962). Quicksort is similar to Mergesort in that the sort is accomplished by dividing the array into two
partitions and then sorting each partition recursively. In Quicksort, however, the array is partitioned by placing all items smaller than some pivot item before that item and all items larger than
the pivot item after it. The pivot item can be any item, and for convenience we will simply make it the first one. The following example illustrates how Quicksort works.
Example 2.3
Suppose the array contains these numbers in sequence:
1. Partition the array so that all items smaller than the pivot item are to the left of it and all items larger are to the right:
2. Sort the subarrays:
After the partitioning, the order of the items in the subarrays is unspecified and is a result of how the partitioning is implemented. We have ordered them according to how the partitioning routine,
which will be presented shortly, would place them. The important thing is that all items smaller than the pivot item are to the left of it, and all items larger are to the right of it. Quicksort is
then called recursively to sort each of the two subarrays. They are partitioned, and this procedure is continued until an array with one item is reached. Such an array is trivially sorted. Example
2.3 shows the solution at the problem-solving level. Figure 2.3 illustrates the steps done by a human when sorting with Quicksort. The algorithm follows.
Algorithm 2.6
Problem: Sort n keys in nondecreasing order.
Inputs: positive integer n, array of keys S indexed from 1 to n.
Outputs: the array S containing the keys in nondecreasing order.
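A C++ sketch of the procedure: partition around the first item and sort each side recursively, with S global and indexed from 1 to n.

#include <vector>

std::vector<int> S;   // the keys, in S[1..n]

int partition(int low, int high);   // Algorithm 2.7, sketched below

void quicksort(int low, int high) {
    if (low < high) {
        int pivotpoint = partition(low, high);
        quicksort(low, pivotpoint - 1);    // items smaller than the pivot item
        quicksort(pivotpoint + 1, high);   // items larger than the pivot item
    }
}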
Following our usual convention, n and S are not parameters to procedure quicksort. If the algorithm were implemented by defining S globally and n was the number of items in S, the top-level call to
quicksort would be as follows:
quicksort (1 , n) ;
Figure 2.3 The steps done by a human when sorting with Quicksort. The subarrays are enclosed in rectangles whereas the pivot points are free.
The partitioning of the array is done by procedure partition. Next we show an algorithm for this procedure.
Algorithm 2.7
Problem: Partition the array S for Quicksort.
Inputs: two indices, low and high, and the subarray of S indexed from low to high.
Outputs: pivotpoint, the pivot point for the subarray indexed from low to high.
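A C++ sketch of partition, matching the behavior shown in Table 2.2: the first item is the pivot, every smaller item found while scanning is swapped toward the front, and finally the pivot is swapped into its resting place.

#include <vector>
#include <utility>

extern std::vector<int> S;   // S[1..n], as in the previous sketch

int partition(int low, int high) {
    int pivotitem = S[low];                // the pivot item is simply the first item
    int j = low;
    for (int i = low + 1; i <= high; i++) {
        if (S[i] < pivotitem) {            // basic operation: compare S[i] with pivotitem
            j++;
            std::swap(S[i], S[j]);         // move the smaller item to the left side
        }
    }
    int pivotpoint = j;
    std::swap(S[low], S[pivotpoint]);      // put the pivot item in its final position
    return pivotpoint;
}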
• Table 2.2 An example of procedure partition^∗
^∗ Items compared are in boldface. Items just exchanged appear in squares.
Procedure partition works by checking each item in the array in sequence. Whenever an item is found to be less than the pivot item, it is moved to the left side of the array. Table 2.2 shows how
partition would proceed on the array in Example 2.3.
Next we analyze Partition and Quicksort.
Analysis of Algorithm 2.7
Every-Case Time Complexity (Partition)
Basic operation: the comparison of S [i] with pivotitem.
Input size: n = high − low + 1, the number of items in the subarray.
Because every item except the first is compared,
We are using n here to represent the size of the subarray, not the size of the array S. It represents the size of S only when partition is called at the top level.
Quicksort does not have an every-case complexity. We will do worst-case and average-case analyses.
Analysis of Algorithm 2.6
Worst-Case Time Complexity (Quicksort)
Basic operation: the comparison of S [i] with pivotitem in partition.
Input size: n, the number of items in the array S.
Oddly enough, it turns out that the worst case occurs if the array is already sorted in nondecreasing order. The reason for this should become clear. If the array is already sorted in nondecreasing
order, no items are less than the first item in the array, which is the pivot item. Therefore, when partition is called at the top level, no items are placed to the left of the pivot item, and the
value of pivotpoint assigned by partition is 1. Similarly, in each recursive call, pivotpoint receives the value of low. Therefore, the array is repeatedly partitioned into an empty subarray on the
left and a subarray with one less item on the right. For the class of instances that are already sorted in nondecreasing order, we have
We are using the notation T(n) because we are presently determining the every-case complexity for the class of instances that are already sorted in nondecreasing order. Because T(0) = 0, we have the recurrence
This recurrence is solved in Example B.16 in Appendix B. The solution is
We have established that the worst case is at least n (n − 1) /2. Although intuitively it may now seem that this is as bad as things can get, we still need to show this. We will accomplish this by
using induction to show that, for all n,
Induction base: For n = 0
Induction hypothesis: Assume that, for 0 ≤ k < n,
Induction step: We need to show that
For a given n, there is some instance with size n for which the processing time is W(n). Let p be the value of pivotpoint returned by partition at the top level when this instance is processed.
Because the time to process the instances of size p − 1 and n − p can be no more than W(p − 1) and W(n − p), respectively, we have
The last inequality is by the induction hypothesis. Algebraic manipulations can show that for 1 ≤ p ≤ n this last expression is
This completes the induction proof.
We have shown that the worst-case time complexity is given by
The worst case occurs when the array is already sorted because we always choose the first item for the pivot item. Therefore, if we have reason to believe that the array is close to being sorted,
this is not a good choice for the pivot item. When we discuss Quicksort further in Chapter 7, we will investigate other methods for choosing the pivot item. If we use these methods, the worst case
does not occur when the array is already sorted. But the worst-case time complexity is still n (n − 1) /2.
In the worst case, Algorithm 2.6 is no faster than Exchange Sort (Algorithm 1.3). Why then is this sort called Quicksort? As we shall see, it is in its average-case behavior that Quicksort earns its
Analysis of Algorithm 2.6
Average-Case Time Complexity (Quicksort)
Basic operation: the comparison of S [i] with pivotitem in partition
Input size: n, the number of items in the array S.
We will assume that we have no reason to believe that the numbers in the array are in any particular order, and therefore that the value of pivotpoint returned by partition is equally likely to be
any of the numbers from 1 through n. If there was reason to believe a different distribution, this analysis would not be applicable. The average obtained is, therefore, the average sorting time when
every possible ordering is sorted the same number of times. In this case, the average-case time complexity is given by the following recurrence:
In the exercises we show that
Plugging this equality into Equality 2.1 yields
Multiplying by n we have
Applying Equality 2.2 to n − 1 gives
Subtracting Equality 2.3 from Equality 2.2 yields
which simplifies to
If we let
we have the recurrence
Like the recurrence in Example B.22 in Appendix B, the approximate solution to this recurrence is given by
which implies that
Quicksort’s average-case time complexity is of the same order as Mergesort’s time complexity. Mergesort and Quicksort are compared further in Chapter 7 and in Knuth (1973).
2.5 Strassen’s Matrix Multiplication Algorithm
Recall that Algorithm 1.4 (Matrix Multiplication) multiplied two matrices strictly according to the definition of matrix multiplication. We showed that the time complexity of its number of
multiplications is given by T(n) = n^3, where n is the number of rows and columns in the matrices. We can also analyze the number of additions. As you will show in the exercises, after the algorithm
is modified slightly, the time complexity of the number of additions is given by T(n) = n^3 − n^2. Because both of these time complexities are in Θ(n^3), the algorithm can become impractical fairly
quickly. In 1969, Strassen published an algorithm whose time complexity is better than cubic in terms of both multiplications and additions/subtractions. The following example illustrates his method.
Example 2.4
Suppose we want the product C of two 2 × 2 matrices, A and B. That is,
Strassen determined that if we let
the product C is given by
In the exercises, you will show that this is correct.
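For concreteness, here are the seven products and the combining formulas in one common labeling (the numbering may differ from the book's M[1], …, M[7]), checked directly on a pair of 2 × 2 integer matrices.

#include <cstdio>

int main() {
    int a11 = 1, a12 = 2, a21 = 3, a22 = 4;          // an arbitrary 2 x 2 matrix A
    int b11 = 5, b12 = 6, b21 = 7, b22 = 8;          // an arbitrary 2 x 2 matrix B

    int m1 = (a11 + a22) * (b11 + b22);              // the seven Strassen products
    int m2 = (a21 + a22) * b11;
    int m3 = a11 * (b12 - b22);
    int m4 = a22 * (b21 - b11);
    int m5 = (a11 + a12) * b22;
    int m6 = (a21 - a11) * (b11 + b12);
    int m7 = (a12 - a22) * (b21 + b22);

    int c11 = m1 + m4 - m5 + m7;                     // combined with additions/subtractions only
    int c12 = m3 + m5;
    int c21 = m2 + m4;
    int c22 = m1 - m2 + m3 + m6;

    // Compare against the definition of matrix multiplication.
    printf("%d %d  (expect %d %d)\n", c11, c12, a11*b11 + a12*b21, a11*b12 + a12*b22);
    printf("%d %d  (expect %d %d)\n", c21, c22, a21*b11 + a22*b21, a21*b12 + a22*b22);
    return 0;
}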
To multiply two 2 × 2 matrices, Strassen’s method requires seven multiplications and 18 additions/subtractions, whereas the straightforward method requires eight multiplications and four additions/
subtractions. We have saved ourselves one multiplication at the expense of doing 14 additional additions or subtractions. This is not very impressive, and indeed it is not in the case of 2 × 2
matrices that Strassen’s method is of value. Because the commutativity of multiplications is not used in Strassen’s formulas, those formulas pertain to larger matrices that are each divided into four
submatrices. First we divide the matrices A and B, as illustrated in Figure 2.4. Assuming that n is a power of 2, the matrix A[11], for example, is meant to represent the following submatrix of A:
Using Strassen’s method, first we compute
where our operations are now matrix addition and multiplication. In the same way, we compute M[2] through M[7]. Next we compute
and C[12], C[21], and C[22]. Finally, the product C of A and B is obtained by combining the four submatrices C[ij]. The following example illustrates these steps.
Figure 2.4 The partitioning into submatrices in Strassen’s algorithm.
Example 2.5
Suppose that
Figure 2.5 illustrates the partitioning in Strassen’s method. The computations proceed as follows:
Figure 2.5 The partitioning in Strassen’s algorithm with n = 4 and values given to the matrices.
When the matrices are sufficiently small, we multiply in the standard way. In this example, we do this when n = 2. Therefore,
After this, M[2] through M[7] are computed in the same way, and then the values of C[11], C[12], C[21], and C[22] are computed. They are combined to yield C.
Next we present an algorithm for Strassen’s method when n is a power of 2.
Algorithm 2.8
Problem: Determine the product of two n × n matrices where n is a power of 2.
Inputs: an integer n that is a power of 2, and two n × n matrices A and B.
Outputs: the product C of A and B.
The value of threshold is the point at which we feel it is more efficient to use the standard algorithm than it would be to call procedure strassen recursively. In Section 2.7 we discuss a method for
determining thresholds.
Analysis of Algorithm 2.8
Every-Case Time Complexity Analysis of Number of Multiplications (Strassen)
Basic operation: one elementary multiplication.
Input size: n, the number of rows and columns in the matrices.
For simplicity, we analyze the case in which we keep dividing until we have two 1 × 1 matrices, at which point we simply multiply the numbers in each matrix. The actual threshold value used does not
affect the order. When n = 1, exactly one multiplication is done. When we have two n × nmatrices with n > 1, the algorithm is called exactly seven times with an (n/2) × (n/2) matrix passed each time,
and no multiplications are done at the top level. We have established the recurrence
This recurrence is solved in Example B.2 in Appendix B. The solution is
Analysis of Algorithm 2.8
Every-Case Time Complexity Analysis of Number of Additions/Subtractions (Strassen)
Basic operation: one elementary addition or subtraction.
Input size: n, the number of rows and columns in the matrices.
Again we assume that we keep dividing until we have two 1 × 1 matrices. When n = 1, no additions/subtractions are done. When we have two n × n matrices with n > 1, the algorithm is called exactly
seven times with an (n/2) × (n/2) matrix passed in each time, and 18 matrix additions/subtractions are done on (n/2) × (n/2) matrices. When two (n/2) × (n/2) matrices are added or subtracted, (n/2)^2
additions or subtractions are done on the items in the matrices. We have established the recurrence
This recurrence is solved in Example B.20 in Appendix B. The solution is
When n is not a power of 2, we must modify the previous algorithm. One simple modification is to add sufficient numbers of columns and rows of 0s to the original matrices to make the dimension a
power of 2. Alternatively, in the recursive calls we could add just one extra row and one extra column of 0s whenever the number of rows and columns is odd. Strassen (1969) suggested the following,
more complex modification. We embed the matrices in larger ones with 2^k m rows and columns, where k = ⌊lg n − 4⌋ and m = ⌈n/2^k⌉, use Strassen's method up to a threshold value of m, and use the standard algorithm after reaching the threshold. It
can be shown that the total number of arithmetic operations (multiplications, additions, and subtractions) is less than 4.7n^2.81.
Table 2.3 compares the time complexities of the standard algorithm and Strassen’s algorithm for n a power of 2. If we ignore for the moment the overhead involved in the recursive calls, Strassen’s
algorithm is always more efficient in terms of multiplications, and for large values of n, Strassen’s algorithm is more efficient in terms of additions/subtractions. In Section 2.7 we will discuss an
analysis technique that accounts for the time taken by the recursive calls.
• Table 2.3 A comparison of two algorithms that multiply n × n matrices
Standard Algorithm Strassen’s Algorithm
Multiplications n^3 n^2.81
Additions/Subtractions n^3 – n^2 6n^2.81 – 6n^2
Shmuel Winograd developed a variant of Strassen’s algorithm that requires only 15 additions/subtractions. It appears in Brassard and Bratley (1988). For this algorithm, the time complexity of the
additions/subtractions is given by
Coppersmith and Winograd (1987) developed a matrix multiplication algorithm whose time complexity for the number of multiplications is in O(n^2.38). However, the constant is so large that
Strassen’s algorithm is usually more efficient.
It is possible to prove that matrix multiplication requires an algorithm whose time complexity is at least quadratic. Whether matrix multiplications can be done in quadratic time remains an open
question; no one has ever created a quadratic-time algorithm for matrix multiplication, and no one has proven that it is not possible to create such an algorithm.
One last point is that other matrix operations such as inverting a matrix and finding the determinant of a matrix are directly related to matrix multiplication. Therefore, we can readily create
algorithms for these operations that are as efficient as Strassen’s algorithm for matrix multiplication.
2.6 Arithmetic with Large Integers
Suppose that we need to do arithmetic operations on integers whose size exceeds the computer’s hardware capability of representing integers. If we need to maintain all the significant digits in our
results, switching to a floating-point representation would be of no value. In such cases, our only alternative is to use software to represent and manipulate the integers. We can accomplish this
with the help of the divide-and-conquer approach. Our discussion focuses on integers represented in base 10. However, the methods developed can be readily modified for use in other bases.
• 2.6.1 Representation of Large Integers: Addition and Other Linear-Time Operations
A straightforward way to represent a large integer is to use an array of integers, in which each array slot stores one digit. For example, the integer 543,127 can be represented in the array S as
To represent both positive and negative integers we need only reserve the high-order array slot for the sign. We could use 0 in that slot to represent a positive integer and 1 to represent a negative
integer. We will assume this representation and use the defined data type large integer to mean an array big enough to represent the integers in the application of interest.
It is not difficult to write linear-time algorithms for addition and subtraction, where n is the number of digits in the large integers. The basic operation consists of the manipulation of one
decimal digit. In the exercises you are asked to write and analyze these algorithms. Furthermore, linear-time algorithms can readily be written that do the operation
where u represents a larger integer, m is a nonnegative integer, divide returns the quotient in integer division, and rem returns the remainder. This, too, is done in the exercises.
• 2.6.2 Multiplication of Large Integers
A simple quadratic-time algorithm for multiplying large integers is one that mimics the standard way learned in grammar school. We will develop one that is better than quadratic time. Our algorithm
is based on using divide-and-conquer to split an n-digit integer into two integers of approximately n/2 digits. Following are two examples of such splits.
In general, if n is the number of digits in the integer u, we will split the integer into two integers, one with ⌈n/2⌉ digits and one with ⌊n/2⌋ digits, so that u = x × 10^m + y.
With this representation, the exponent m of 10 is given by m = ⌊n/2⌋.
If we have two n-digit integers
their product is given by
We can multiply u and v by doing four multiplications on integers with about half as many digits and performing linear-time operations. The following example illustrates this method.
Example 2.6
Consider the following:
Recursively, these smaller integers can then be multiplied by dividing them into yet smaller integers. This division process is continued until a threshold value is reached, at which time the
multiplication can be done in the standard way.
Although we illustrate the method using integers with about the same number of digits, it is still applicable when the number of digits is different. We simply use m = ⌊n/2⌋, where n is the number of digits in
the larger integer. The algorithm now follows. We keep dividing until one of the integers is 0 or we reach some threshold value for the larger integer, at which time the multiplication is done using
the hardware of the computer (that is, in the usual way).
Algorithm 2.9
Large Integer Multiplication
Problem: Multiply two large integers, u and v.
Inputs: large integers u and v.
Outputs: prod, the product of u and v.
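To make the recursion concrete, here is a sketch that applies the same four-multiplication split to built-in 64-bit integers; the real algorithm applies it to the array-of-digits representation using the linear-time divide, rem, and × 10^m routines, but the structure is the same.

#include <cstdint>
#include <cstdio>

int64_t digits(int64_t u) {              // number of decimal digits in u
    int64_t n = 1;
    while (u >= 10) { u /= 10; n++; }
    return n;
}

int64_t pow10(int64_t m) {               // 10^m
    int64_t p = 1;
    while (m-- > 0) p *= 10;
    return p;
}

int64_t prod(int64_t u, int64_t v) {
    if (u == 0 || v == 0) return 0;
    int64_t n = digits(u) > digits(v) ? digits(u) : digits(v);
    if (n <= 1) return u * v;            // threshold of one digit: multiply directly
    int64_t m = n / 2, p = pow10(m);
    int64_t x = u / p, y = u % p;        // u = x * 10^m + y
    int64_t w = v / p, z = v % p;        // v = w * 10^m + z
    return prod(x, w) * p * p            // four recursive multiplications in total
         + (prod(x, z) + prod(w, y)) * p
         + prod(y, z);
}

int main() {
    printf("%lld\n", (long long)prod(1234, 5678));   // prints 7006652
    return 0;
}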
Notice that n is an implicit input to the algorithm because it is the number of digits in the larger of the two integers. Remember that divide, rem, and × represent linear-time functions that we need
to write.
Analysis of Algorithm 2.9
Worst-Case Time Complexity (Large Integer Multiplication)
We analyze how long it takes to multiply two n-digit integers.
Basic operation: The manipulation of one decimal digit in a large integer when adding, subtracting, or doing divide 10^m, rem 10^m, or × 10^m. Each of these latter three calls results in the basic
operation being done m times.
Input size: n, the number of digits in each of the two integers.
The worst case is when both integers have no digits equal to 0, because the recursion only ends when threshold is passed. We will analyze this case.
Suppose n is a power of 2. Then x, y, w, and z all have exactly n/2 digits, which means that the input size to each of the four recursive calls to prod is n/2. Because m = n/2, the linear-time
operations of addition, subtraction, divide 10^m, rem 10^m, and × 10^m all have linear-time complexities in terms of n. The maximum input size to these linear-time operations is not the same for all
of them, so the determination of the exact time complexity is not straightforward. It is much simpler to group all the linear-time operations in the one term cn, where c is a positive constant. Our
recurrence is then
The actual value s at which we no longer divide the instance is less than or equal to threshold and is a power of 2, because all the inputs in this case are powers of 2.
For n not restricted to being a power of 2, it is possible to establish a recurrence like the previous one but involving floors and ceilings. Using an induction argument like the one in Example B.25
in Appendix B, we can show that W(n) is eventually nondecreasing. Therefore, Theorem B.6 inAppendix B implies that
Our algorithm for multiplying large integers is still quadratic. The problem is that the algorithm does four multiplications on integers with half as many digits as the original integers. If we can
reduce the numbers of these multiplications, we can obtain an algorithm that is better than quadratic. We do this in the following way. Recall that function prod must determine
and we accomplished this by calling function prod recursively four times to compute
If instead we set
This means we can get the three values in Expression 2.4 by determining the following three values:
To get these three values we need to do only three multiplications, while doing some additional linear-time additions and subtractions. The algorithm that follows implements this method.
Algorithm 2.10
Large Integer Multiplication 2
Problem: Multiply two large integers, u and v.
Inputs: large integers u and v.
Outputs: prod2, the product of u and v.
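The same kind of sketch, rewritten so that only three recursive multiplications are needed: the single product r = (x + y)(w + z) is computed once, and the middle term xz + yw is recovered as r − xw − yz.

#include <cstdint>
#include <cstdio>

int64_t digits(int64_t u) {              // number of decimal digits in u
    int64_t n = 1;
    while (u >= 10) { u /= 10; n++; }
    return n;
}

int64_t pow10(int64_t m) {               // 10^m
    int64_t p = 1;
    while (m-- > 0) p *= 10;
    return p;
}

int64_t prod2(int64_t u, int64_t v) {
    if (u == 0 || v == 0) return 0;
    int64_t n = digits(u) > digits(v) ? digits(u) : digits(v);
    if (n <= 1) return u * v;                   // threshold of one digit: multiply directly
    int64_t m = n / 2, s = pow10(m);
    int64_t x = u / s, y = u % s;               // u = x * 10^m + y
    int64_t w = v / s, z = v % s;               // v = w * 10^m + z
    int64_t r = prod2(x + y, w + z);            // (x + y)(w + z)
    int64_t p = prod2(x, w);                    // xw
    int64_t q = prod2(y, z);                    // yz
    return p * s * s + (r - p - q) * s + q;     // xz + yw recovered without a fourth multiplication
}

int main() {
    printf("%lld\n", (long long)prod2(1234, 5678));   // prints 7006652
    return 0;
}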
Analysis of Algorithm 2.10
Worst-Case Time Complexity (Large Integer Multiplication 2)
We analyze how long it takes to multiply two n-digit integers.
Basic operation: The manipulation of one decimal digit in a large integer when adding, subtracting, or doing divide 10^m, rem 10^m, or × 10^m. Each of these latter three calls results in the basic
operation being done m times.
Input size: n, the number of digits in each of the two integers.
The worst case happens when both integers have no digits equal to 0, because in this case the recursion ends only when the threshold is passed. We analyze this case.
• Table 2.4 Examples of the number of digits in x + y in Algorithm 2.10
If n is a power of 2, then x, y, w, and z all have n/2 digits. Therefore, as Table 2.4 illustrates,
This means we can have the following input sizes for the given function calls:
Because m = n/2, the linear-time operations of addition, subtraction, divide 10^m, rem 10^m, and × 10^m all have linear-time complexities in terms of n. Therefore, W(n) satisfies
where s is less than or equal to threshold and is a power of 2, because all the inputs in this case are powers of 2. For n not restricted to being a power of 2, it is possible to establish a
recurrence like the previous one but involving floors and ceilings. Using an induction argument like the one in Example B.25 in Appendix B, we can show that W(n) is eventually nondecreasing.
Therefore, owing to the left inequality in this recurrence and Theorem B.6, we have
Next we show that
To that end, let
Using the right inequality in the recurrence, we have
Because W(n) is nondecreasing, so is W′ (n). Therefore, owing to Theorem B.6 in Appendix B,
and so
Combining our two results, we have
Using Fast Fourier Transforms, Borodin and Munro (1975) developed a Θ(n lg n lg lg n) algorithm for multiplying two n-digit integers.
It is possible to write algorithms for other operations on large integers, such as division and square root, whose time complexities are of the same order as that of the algorithm for multiplication.
2.7 Determining Thresholds
As discussed in Section 2.1, recursion requires a fair amount of overhead in terms of computer time. If, for example, we are sorting only eight keys, is it really worth this overhead just so we can
use a Θ(n lg n) algorithm instead of a Θ(n^2) algorithm? Or perhaps, for such a small n, would ExchangeSort (Algorithm 1.3) be faster than our recursive Mergesort? We develop a method that determines
for what values of n it is at least as fast to call an alternative algorithm as it is to divide the instance further. These values depend on the divide-and-conquer algorithm, the alternative
algorithm, and the computer on which they are implemented. Ideally, we would like to find an optimal threshold value of n. This would be an instance size such that for any smaller instance it would
be at least as fast to call the other algorithm as it would be to divide the instance further, and for any larger instance size it would be faster to divide the instance again. However, as we shall
see, an optimal threshold value does not always exist. Even if our analysis does not yield an optimal threshold value, we can use the results of the analysis to pick a threshold value. We then modify
the divide-and-conquer algorithm so that the instance is no longer divided once n reaches that threshold value; instead, the alternative algorithm is called. We have already seen the use of
thresholds in Algorithms 2.8, 2.9, and 2.10.
To determine a threshold, we must consider the computer on which the algorithm is implemented. This technique is illustrated using Mergesort and Exchange Sort. We use Mergesort’s worst-case time
complexity in this analysis. So we are actually trying to optimize the worst-case behavior. When analyzing Mergesort, we determined that the worst case is given by the following recurrence:
Let’s assume that we are implementing Mergesort 2 (Algorithm 2.4). Suppose that on the computer of interest the time Mergesort 2 takes to divide and recombine an instance of size n is 32n µs, where µ
s stands for micro-seconds. The time to divide and recombine the instance includes the time to compute the value of mid, the time to do the stack operations for the two recursive calls, and the time
to merge the two subarrays. Because there are several components to the division and recombination time, it is unlikely that the total time would simply be a constant times n. However, assume that
this is the case to keep things as simple as possible. Because the term n − 1 in the recurrence for W(n) is the recombination time, it is included in the time 32n µs. Therefore, for this computer, we
for Mergesort 2. Because only a terminal condition check is done when the input size is 1, we assume that W (1) is essentially 0. For simplicity, we initially limit our discussion to n being a power
of 2. In this case we have the following recurrence:
The techniques in Appendix B can be used to solve this recurrence. The solution is
Suppose that on this same computer Exchange Sort takes exactly
to sort an instance of size n. Sometimes students erroneously believe that the optimal point where Mergesort 2 should call Exchange Sort can now be found by solving the inequality
The solution is
Students sometimes believe that it is optimal to call Exchange Sort when n < 591 and to call Mergesort 2 otherwise. This analysis is only approximate because we base it on n being a power of 2. But
more importantly it is incorrect, because it only tells us that if we use Mergesort 2 and keep dividing until n = 1, then Exchange Sort is better for n < 591. We want to use Mergesort 2 and keep
dividing until it is better to call Exchange Sort, rather than divide the instance further. This is not the same as dividing until n = 1, and therefore the point at which we call Exchange Sort should
be less than 591. That this value should be less than 591 is a bit hard to grasp in the abstract. The following concrete example, which determines the point at which it is more efficient to call
Exchange Sort rather than dividing the instance further, should make the matter clear. From now on, we no longer limit our considerations to n being a power of 2.
Example 2.7
We determine the optimal threshold for Algorithm 2.4 (Mergesort 2) when calling Algorithm 1.3 (Exchange Sort). Suppose we modify Mergesort 2 so that Exchange Sort is called when n ≤ t for some
threshold t. Assuming the hypothetical computer just discussed, for this version of Mergesort 2,
We want to determine the optimal value of t. That value is the value for which the top and bottom expressions in Equality 2.5 are equal, because this is the point where calling Exchange Sort is as
efficient as dividing the instance further. Therefore, to determine the optimal value of t, we must solve
Because ⌊t/2⌋ ≤ t and ⌈t/2⌉ ≤ t, the execution time is given by the top expression in Equality 2.5 if the instance has either of these input sizes. Therefore,
Substituting these equalities into Equation 2.6 yields
In general, in an equation with floors and ceilings, we can obtain a different solution when we insert an odd value for t than when we insert an even value for t. This is the reason there is not
always an optimal threshold value. Such a case is investigated next. In this case, however, if we insert an even value for t, which is accomplished by setting ⌊t/2⌋ = ⌈t/2⌉ = t/2, and solve Equation 2.7, we
If we insert an odd value for t, which is accomplished by setting ⌊t/2⌋ = (t − 1)/2 and ⌈t/2⌉ = (t + 1)/2, and solve Equation 2.7, we obtain
Therefore, we have an optimal threshold value of 128.
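As a sanity check on that figure, the crossover can also be found numerically. The sketch below assumes that Exchange Sort takes m(m − 1)/2 µs to sort m keys and that dividing and recombining an instance of size t costs 32t µs; under those assumptions, sorting directly with Exchange Sort and dividing one more time cost exactly the same at t = 128, matching the result above.

#include <cstdio>

double exchange(int m) { return m * (m - 1) / 2.0; }   // assumed Exchange Sort time in microseconds

int main() {
    for (int t = 124; t <= 132; t++) {
        double direct  = exchange(t);                             // call Exchange Sort on all t keys
        double divided = exchange(t / 2) + exchange(t - t / 2)    // divide once more, sort each half
                       + 32.0 * t;                                // plus the divide/recombine cost
        printf("t = %3d   direct = %6.0f   divide once more = %6.0f\n", t, direct, divided);
    }
    return 0;   // the two costs are equal at t = 128
}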
Next we give an example where there is no optimal threshold value.
Example 2.8
Suppose for a given divide-and-conquer algorithm running on a particular computer we determine that
W(n) = 3W(⌈n/2⌉) + 16n µs,
where 16n µs is the time needed to divide and recombine an instance of size n. Suppose on the same computer a certain iterative algorithm takes n^2 µs to process an instance of size n. To determine
the value t at which we should call the iterative algorithm, we need to solve
Because ⌈t/2⌉ ≤ t, the iterative algorithm is called when the input has this size, which means that
Therefore, we need to solve
If we substitute an even value for t (by setting t/2t/2) and solve, we get
If we substitute an odd value for t (by setting t/2t + 1) /2) and solve, we get
Because the two values of t are not equal, there is no optimal threshold value. This means that if the size of an instance is an even integer between 64 and 70, it is more efficient to divide the
instance one more time, whereas if the size is an odd integer between 64 and 70, it is more efficient to call the iterative algorithm. When the size is less than 64, it is always more efficient to
call the iterative algorithm. When the size is greater than 70, it is always more efficient to divide the instance again. Table 2.5 illustrates that this is so.
• Table 2.5 Various instance sizes illustrating that the threshold is 64 for n even and 70 for n odd in Example 2.8
[Table body not reproduced; its columns list n, the iterative time n^2, and the time when the instance is divided once more.]
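The behavior that Table 2.5 illustrates can also be checked numerically. The Python sketch below is not from the text; it assumes the cost model reconstructed above for Example 2.8 (three subinstances of size ⌈n/2⌉, 16n µs to divide and recombine, and n^2 µs for the iterative algorithm):

import math

def iterative_cost(n):
    # assumed cost (in microseconds) of the iterative algorithm on an instance of size n
    return n * n

def divide_once_cost(n):
    # divide once more, then run the iterative algorithm on the three subinstances
    half = math.ceil(n / 2)
    return 3 * iterative_cost(half) + 16 * n

for n in range(60, 76):
    action = "iterative" if iterative_cost(n) <= divide_once_cost(n) else "divide again"
    print(n, action)

For sizes below 64 the iterative algorithm always wins, above 70 dividing always wins, and between 64 and 70 the better choice alternates with the parity of n, which is exactly why there is no single optimal threshold.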
2.8 When Not to Use Divide-and-Conquer
If possible, we should avoid divide-and-conquer in the following two cases:
1. An instance of size n is divided into two or more instances each almost of size n.
2. An instance of size n is divided into almost n instances of size n/c, where c is a constant.
The first partitioning leads to an exponential-time algorithm, whereas the second leads to an n^Θ(lg n) algorithm. Neither of these is acceptable for large values of n. Intuitively, we can see why
such partitionings lead to poor performance. For example, the first case would be like Napoleon dividing an opposing army of 30,000 soldiers into two armies of 29,999 soldiers (if this were somehow
possible). Rather than dividing his enemy, he has almost doubled their number! If Napoleon did this, he would have met his Waterloo much sooner.
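To make that intuition a little more quantitative (a brief sketch added here, not from the text), unrolling recurrences of the two forms gives

T(n) = 2T(n − 1) + c, which implies T(n) ∈ Θ(2^n), and
T(n) = n T(n/c) + n, which implies T(n) ∈ n^Θ(lg n),

since in the second case the recursion tree has about log_c n levels and the number of subinstances is multiplied by roughly n at each level.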
As you should now verify, Algorithm 1.6 (nth Fibonacci Term, Recursive) is a divide-and-conquer algorithm that divides the instance that computes the nth term into two instances that compute
respectively the (n − 1)st term and the (n − 2)nd term. Although n is not the input size in that algorithm, the situation is the same as that just described concerning input size.
That is, the number of terms computed by Algorithm 1.6 is exponential in n, whereas the number of terms computed by Algorithm 1.7 (nth Fibonacci Term, Iterative) is linear in n.
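Algorithms 1.6 and 1.7 themselves are not reproduced here; the following Python sketch (illustrative only, not the book's pseudocode) captures the contrast between the two approaches:

def fib_recursive(n):
    # divide-and-conquer style: two overlapping subinstances of sizes n - 1 and n - 2,
    # so the number of recursive calls grows exponentially with n
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # iterative style: each term is computed exactly once, so the work is linear in n
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a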
Sometimes, on the other hand, a problem requires exponentiality, and in such a case there is no reason to avoid the simple divide-and-conquer solution. Consider the Towers of Hanoi problem, which is
presented in Exercise 17. Briefly, the problem involves moving n disks from one peg to another given certain restrictions on how they may be moved. In the exercises you will show that the sequence of
moves, obtained from the standard divide-and-conquer algorithm for the problem, is exponential in terms of n and that it is the most efficient sequence of moves given the problem’s restrictions.
Therefore, the problem requires an exponentially large number of moves in terms of n.
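Exercise 17 below asks you to write this algorithm yourself; purely as an illustration of why the number of moves is exponential, one common Python realization of the standard divide-and-conquer scheme is:

def hanoi(n, source, target, spare, moves=None):
    # move the top n disks from `source` to `target`, using `spare` as the extra peg
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # move the n - 1 smaller disks out of the way
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # put the n - 1 smaller disks back on top
    return moves

print(len(hanoi(5, 'A', 'C', 'B')))   # prints 31, that is, 2^5 - 1 moves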
Sections 2.1
1. Use Binary Search, Recursive (Algorithm 2.1) to search for the integer 120 in the following list (array) of integers. Show the actions step by step.
2. Suppose that, even unrealistically, we are to search a list of 700 million items using Binary Search, Recursive (Algorithm 2.1). What is the maximum number of comparisons that this algorithm must
perform before finding a given item or concluding that it is not in the list?
3. Let us assume that we always perform a successful search. That is, in Algorithm 2.1 the item x can always be found in the list S. Improve Algorithm 2.1 by removing all unnecessary operations.
4. Show that the worst-case time complexity for Binary Search (Algorithm 2.1) is given by W(n) = ⌊lg n⌋ + 1 when n is not restricted to being a power of 2. Hint: First show that the recurrence equation for W(n) is given by W(n) = W(⌊n/2⌋) + 1 for n > 1, with W(1) = 1.
To do this, consider even and odd values of n separately. Then use induction to solve the recurrence equation.
5. Suppose that, in Algorithm 2.1 (line 4), the splitting function is changed to mid = low;. Explain the new search strategy. Analyze the performance of this strategy and show the results using order notation.
6. Write an algorithm that searches a sorted list of n items by dividing it into three sublists of almost n/3 items. This algorithm finds the sublist that might contain the given item and divides it
into three smaller sublists of almost equal size. The algorithm repeats this process until it finds the item or concludes that the item is not in the list. Analyze your algorithm and give the results
using order notation.
7. Use the divide-and-conquer approach to write an algorithm that finds the largest item in a list of n items. Analyze your algorithm, and show the results in order notation.
Sections 2.2
8. Use Mergesort (Algorithms 2.2 and 2.4) to sort the following list. Show the actions step by step.
9. Give the tree of recursive calls in Exercise 8.
10. Write for the following problem a recursive algorithm whose worst-case time complexity is not worse than Θ(n ln n). Given a list of n distinct positive integers, partition the list into two
sublists, each of size n/2, such that the difference between the sums of the integers in the two sublists is maximized. You may assume that n is a multiple of 2.
11. Write a nonrecursive algorithm for Mergesort (Algorithms 2.2 and 2.4).
12. Show that the recurrence equation for the worst-case time complexity for Mergesort (Algorithms 2.2 and 2.4) is given by W(n) = W(⌊n/2⌋) + W(⌈n/2⌉) + n − 1 for n > 1, with W(1) = 0, when n is not restricted to being a power of 2.
13. Write an algorithm that sorts a list of n items by dividing it into three sublists of about n/3 items, sorting each sublist recursively and merging the three sorted sublists. Analyze your
algorithm, and give the results under order notation.
Sections 2.3
14. Given the recurrence relation
find T(625).
15. Consider algorithm solve given below. This algorithm solves problem P by finding the output (solution) O corresponding to any input I.
Assume g (n) basic operations for partitioning and combining and no basic operations for an instance of size 1.
(a) Write a recurrence equation T(n) for the number of basic operations needed to solve P when the input size is n.
(b) What is the solution to this recurrence equation if g (n) ∈ Θ(n)? (Proof is not required.)
(c) Assuming that g (n) = n^2, solve the recurrence equation exactly for n = 27.
(d) Find the general solution for n a power of 3.
16. Suppose that, in a divide-and-conquer algorithm, we always divide an instance of size n of a problem into 10 subinstances of size n/3, and the dividing and combining steps take a time in Θ(n^2). Write a recurrence equation for the running time T(n), and solve the equation for T(n).
17. Write a divide-and-conquer algorithm for the Towers of Hanoi problem. The Towers of Hanoi problem consists of three pegs and n disks of different sizes. The object is to move the disks that are
stacked, in decreasing order of their size, on one of the three pegs to a new peg using the third one as a temporary peg. The problem should be solved according to the following rules: (1) when a
disk is moved, it must be placed on one of the three pegs; (2) only one disk may be moved at a time, and it must be the top disk on one of the pegs; and (3) a larger disk may never be placed on top
of a smaller disk.
(a) Show for your algorithm that S (n) = 2^n − 1. (Here S (n) denotes the number of steps (moves), given an input of n disks.)
(b) Prove that any other algorithm takes at least as many moves as given in part (a).
18. When a divide-and-conquer algorithm divides an instance of size n of a problem into subinstances each of size n/c, the recurrence relation is typically given by
where g (n) is the cost of the dividing and combining processes, and d is a constant. Let n = c^k.
(a) Show that
(b) Solve the recurrence relation given that g(n) ∈ Θ(n).
Sections 2.4
19. Use Quicksort (Algorithm 2.6) to sort the following list. Show the actions step by step.
20. Give the tree of recursive calls in Exercise 19.
21. Show that if
This result is used in the discussion of the worst-case time complexity analysis of Algorithm 2.6 (Quicksort).
22. Verify the following identity
This result is used in the discussion of the average-case time complexity analysis of Algorithm 2.6 (Quicksort).
23. Write a nonrecursive algorithm for Quicksort (Algorithm 2.6). Analyze your algorithm, and give the results using order notation.
24. Assuming that Quicksort uses the first item in the list as the pivot item:
(a) Give a list of n items (for example, an array of 10 integers) representing the worst-case scenario.
(b) Give a list of n items (for example, an array of 10 integers) representing the best-case scenario.
Sections 2.5
25. Show that the number of additions performed by Algorithm 1.4 (Matrix Multiplication) can be reduced to n^3 − n^2 after a slight modification of this algorithm.
26. In Example 2.4, we gave Strassen’s product of two 2 × 2 matrices. Verify the correctness of this product.
27. How many multiplications would be performed in finding the product of two 64 × 64 matrices using the standard algorithm?
28. How many multiplications would be performed in finding the product of two 64 × 64 matrices using Strassen’s method (Algorithm 2.8)?
29. Write a recurrence equation for the modified Strassen’s algorithm developed by Shmuel Winograd that uses 15 additions/subtractions instead of 18. Solve the recurrence equation, and verify your
answer using the time complexity shown at the end of Section 2.5.
Sections 2.6
30. Use Algorithm 2.10 (Large Integer Multiplication 2) to find the product of 1253 and 23,103.
31. How many multiplications are needed to find the product of the two integers in Exercise 30?
32. Write algorithms that perform the operations
where u represents a large integer, m is a nonnegative integer, divide returns the quotient in integer division, and rem returns the remainder. Analyze your algorithms, and show that these operations
can be done in linear time.
33. Modify Algorithm 2.9 (Large Integer Multiplication) so that it divides each n-digit integer into
(a) three smaller integers of n/3 digits (you may assume that n = 3^k).
(b) four smaller integers of n/4 digits (you may assume that n = 4^k).
Analyze your algorithms, and show their time complexities in order notation.
Sections 2.7
34. Implement both Exchange Sort and Quicksort algorithms on your computer to sort a list of n elements. Find the lower bound for n that justifies application of the Quicksort algorithm with its overhead.
35. Implement both the standard algorithm and Strassen’s algorithm on your computer to multiply two n × n matrices (n = 2^k). Find the lower bound for n that justifies application of Strassen’s
algorithm with its overhead.
36. Suppose that on a particular computer it takes 12n^2 µs to decompose and recombine an instance of size n in the case of Algorithm 2.8 (Strassen). Note that this time includes the time it takes to
do all the additions and subtractions. If it takes n^3 µs to multiply two n × n matrices using the standard algorithm, determine thresholds at which we should call the standard algorithm instead of
dividing the instance further. Is there a unique optimal threshold?
Sections 2.8
37. Use the divide-and-conquer approach to write a recursive algorithm that computes n!. Define the input size (see Exercise 36 in Chapter 1), and answer the following questions. Does your function
have an exponential time complexity? Does this violate the statement of case 1 given in Section 2.8?
38. Suppose that, in a divide-and-conquer algorithm, we always divide an instance of size n of a problem into n subinstances of size n/3, and the dividing and combining steps take linear time. Write
a recurrence equation for the running time T(n), and solve this recurrence equation for T(n). Show your solution in order notation.
Additional Exercises
39. Implement both algorithms for the Fibonacci Sequence (Algorithms 1.6 and
1.7). Test each algorithm to verify that it is correct. Determine the largest number that the recursive algorithm can accept as its argument and still compute the answer within 60 seconds. See how
long it takes the iterative algorithm to compute this answer.
40. Write an efficient algorithm that searches for a value in an n × m table (two-dimensional array). This table is sorted along the rows and columns — that is, the entries in each row are in nondecreasing order from left to right, and the entries in each column are in nondecreasing order from top to bottom.
41. Suppose that there are n = 2^k teams in an elimination tournament, in which there are n/2 games in the first round, with the n/2 = 2^k−^1 winners playing in the second round, and so on.
(a) Develop a recurrence equation for the number of rounds in the tournament.
(b) How many rounds are there in the tournament when there are 64 teams?
(c) Solve the recurrence equation of part (a).
42. A tromino is a group of three unit squares arranged in an L-shape. Consider the following tiling problem: The input is an m × m array of unit squares where m is a positive power of 2, with one
forbidden square on the array. The output is a tiling of the array that satisfies the following conditions: every unit square other than the forbidden one is covered by exactly one tromino, no tromino covers the forbidden square, and no tromino extends outside the array.
Write a divide-and-conquer algorithm that solves this problem.
43. Consider the following problem:
(a) Suppose we have nine identical-looking coins numbered 1 through 9 and only one of the coins is heavier than the others. Suppose further that you have one balance scale and are allowed only two
weighings. Develop a method for finding the heavier counterfeit coin given these constraints.
(b) Suppose we now have an integer n (that represents n coins) and only one of the coins is heavier than the others. Suppose further that n is a power of 3 and you are allowed log_3 n weighings to
determine the heavier coin. Write an algorithm that solves this problem. Determine the time complexity of your algorithm.
44. Write a recursive Θ(n lg n) algorithm whose parameters are three integers x, n, and p, and which computes the remainder when x^n is divided by p. For simplicity, you may assume that n is a power
of 2—that is, that n = 2^k for some positive integer k.
45. Use the divide-and-conquer approach to write a recursive algorithm that finds the maximum sum in any contiguous sublist of a given list of n real values. Analyze your algorithm, and show the
results in order notation. | {"url":"https://apprize.best/science/algorithms/2.html","timestamp":"2024-11-03T05:50:23Z","content_type":"text/html","content_length":"109842","record_id":"<urn:uuid:ecb58db3-4757-4d51-aa4c-379d88cb69d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00055.warc.gz"} |
Python Program to Find the Union of Two Lists - Sanfoundry
Python Program to Find the Union of Two Lists
This is a Python Program to find the union of two lists.
Problem Description
The program takes two lists and finds the unions of the two lists.
Problem Solution
1. Define a function which accepts two lists and returns the union of them.
2. Declare two empty lists and initialise to an empty list.
3. Consider a for loop to accept values for two lists.
4. Take the number of elements in the list and store it in a variable.
5. Accept the values into the list using another for loop and insert into the list.
6. Repeat 4 and 5 for the second list also.
7. Find the union of the two lists.
8. Print the union.
9. Exit.
Program/Source Code
Here is source code of the Python Program to find the union of two lists. The program output is also shown below.
l1 = []
num1 = int(input('Enter size of list 1: '))
for n in range(num1):
    numbers1 = int(input('Enter any number:'))
    l1.append(numbers1)
l2 = []
num2 = int(input('Enter size of list 2:'))
for n in range(num2):
    numbers2 = int(input('Enter any number:'))
    l2.append(numbers2)
union = list(set().union(l1, l2))
print('The Union of two lists is:', union)
Program Explanation
1. User must enter the number of elements in the list and store it in a variable.
2. User must enter the values to the same number of elements into the list.
3. The append function obtains each element from the user and adds the same to the end of the list as many times as the number of elements taken.
4. Steps 2 and 3 are repeated in the same way for the second list.
5. The union function accepts two lists and returns the list which is the union of the two lists, i.e, all the values from list 1 and 2 without redundancy.
6. The set function in the union function accepts a list and returns the list after elimination of redundant values.
7. The lists are passed to the union function and the returned list is printed.
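The listing above does all of the work inline; a minimal sketch of the function-based version described in step 1 of the Problem Solution (the function name is illustrative, not part of the original program) would be:

def union_of_lists(list1, list2):
    # set() removes duplicate values; the | operator forms the union of the two sets
    return list(set(list1) | set(list2))

print(union_of_lists([20, 40, 30, 60], [10, 20, 50, 40]))

As in the original program, the order of the returned elements is arbitrary because sets are unordered.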
Runtime Test Cases
Case 1:
Enter size of list 1: 4
Enter any number: 20
Enter any number: 40
Enter any number: 30
Enter any number: 60
Enter size of list 2: 4
Enter any number: 10
Enter any number: 20
Enter any number: 50
Enter any number: 40
The Union of two lists is: [40, 10, 50, 20, 60, 30]
Case 2:
Enter size of list 1: 3
Enter any number: 5
Enter any number: 6
Enter any number: 7
Enter size of list 2: 2
Enter any number: 8
Enter any number: 9
The Union of two lists is: [8, 9, 5, 6, 7]
Sanfoundry Global Education & Learning Series – Python Programs.
To practice all Python programs, here is complete set of 150+ Python Problems and Solutions. | {"url":"https://www.sanfoundry.com/python-program-find-union-two-lists/","timestamp":"2024-11-02T02:19:41Z","content_type":"text/html","content_length":"140820","record_id":"<urn:uuid:3436210a-010c-4b27-a7ed-67420c86e4ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00479.warc.gz"} |
How to Convert Sum Of Terms In A Vector In Julia?
To convert the sum of terms in a vector in Julia, you can use the sum() function. This function calculates the sum of all the elements in a given vector. Simply pass the vector as an argument to the
sum() function, and it will return the total sum of all the elements in the vector. This is a quick and easy way to calculate the sum of terms in a vector in Julia.
How to apply a function to each element of a vector in Julia?
To apply a function to each element of a vector in Julia, you can use the map() function. Here's an example:
# Define a vector
vec = [1, 2, 3, 4, 5]

# Define a function to apply to each element
function square(x)
    return x^2
end

# Use the map function to apply the function to each element of the vector
result = map(square, vec)

println(result)
This will output [1, 4, 9, 16, 25], as the square() function was applied to each element of the vector vec.
How to normalize a vector in Julia?
To normalize a vector in Julia, you can use the following code:
function normalize_vector(vec)
    norm_vec = sqrt(sum(abs2, vec))
    return vec / norm_vec
end

# Example usage
vec = [3, 4]
normalized_vec = normalize_vector(vec)
println(normalized_vec)
In this code, we first calculate the norm of the vector using norm_vec = sqrt(sum(abs2, vec)). Then, we divide each element of the vector by the norm to obtain the normalized vector.
How to reshape a vector in Julia?
To reshape a vector in Julia, you can use the reshape() function.
Here is an example of how to reshape a vector in Julia:
v = [1, 2, 3, 4, 5, 6]
reshaped_v = reshape(v, 2, 3)
println(reshaped_v)
In this example, the reshape() function is used to reshape the vector v into a 2x3 matrix. The reshape() function takes in the vector v and the desired shape of the new matrix as arguments. The
reshaped matrix reshaped_v will have 2 rows and 3 columns.
You can also pass different dimensions to the reshape() function to obtain a matrix of another shape, as long as the total number of elements stays the same.
v = [1, 2, 3, 4, 5, 6]
reshaped_v = reshape(v, 3, 2)
println(reshaped_v)
In this example, the reshape() function is used to reshape the vector v into a 3x2 matrix.
What is the purpose of using vectors in mathematical computations in Julia?
The purpose of using vectors in mathematical computations in Julia is to store and manipulate multiple elements of data efficiently. Vectors allow for the representation of one-dimensional arrays of
numerical data, which can be used in various mathematical operations such as addition, subtraction, multiplication, division, and more.
By utilizing vectors, users can perform calculations on large sets of data quickly and easily. Vectors are also useful for representing coordinates in geometric operations, describing physical
quantities in physics problems, and analyzing data in statistical analyses. Additionally, vectors can be used to solve systems of linear equations, optimize functions in optimization problems, and
perform transformations in linear algebra applications. Overall, vectors provide a flexible and powerful tool for performing mathematical computations in Julia.
What is the significance of broadcasting in Julia vectors?
Broadcasting in Julia vectors allows for simultaneous operations on multiple elements of the vectors without the need for explicit loops, making code more efficient and concise. This is particularly
useful when working with large datasets or performing complex mathematical operations on vectors. It also enables vectorized calculations, which can greatly speed up processing time compared to
scalar operations. Additionally, broadcasting allows for greater flexibility in writing code and encourages a more functional programming style in Julia. | {"url":"https://stesha.strangled.net/blog/how-to-convert-sum-of-terms-in-a-vector-in-julia","timestamp":"2024-11-10T11:42:47Z","content_type":"text/html","content_length":"138268","record_id":"<urn:uuid:d9e6b43a-17b0-480d-879c-45b30229ff65>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00099.warc.gz"} |
Mergesort - AP Computer Science A
Example Questions
Example Question #1 : Mergesort
How is merge sort accomplished?
Possible Answers:
The original list is broken into sublists of 4, which are then combined together.
The original list is broken into two groups, then sorted from there.
The original list is continuously broken up into sublists until each sublist contains 1 element, then the sublists are combined together.
Each element in the list is compared to all the other elements and inserted where it fits.
If there are an even number of elements, the list is broken into two groups, are sorted, then merged back together. If there are odd numbered elements, the list is broken into three groups.
Correct answer:
The original list is continuously broken up into sublists until each sublist contains 1 element, then the sublists are combined together.
In merge sort, a list is broken up into sublists containing 1 element each. Pairs of elements are then compared and merged into sorted 2-element groups. Each 2-element group is then merged with another 2-element group by repeatedly comparing the first remaining value of each group, producing a sorted group of four. The sorted groups of four are merged in the same way, and the process continues until the whole list is sorted.
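A compact sketch of this process (added for illustration, not part of the original question bank) in plain Python:

def merge_sort(lst):
    if len(lst) <= 1:                 # a 1-element sublist is already sorted
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])      # sort each half recursively
    right = merge_sort(lst[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # repeatedly take the smaller front value
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]   # append whatever remains of either half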
Example Question #2 : Sorting
The above diagram represents what type of sorting algorithm?
Correct answer:
Mergesort consists of breaking down an unsorted list into sub-arrays, sorting each sub-array, and recombining the sub-arrays into larger sorted arrays.
Example Question #11 : Sorting
Of the choices below, what is the most efficient sorting algorithm for an unordered list where the size of the list is an odd number and the size of the list is finite?
Correct answer:
Mergesort is the most efficient among the choices. Both selection sort and insertion sort use O(N^2) time. Bubble sort may seem like a good answer because it can be adapted to run in O(N) time, but only when the list is nearly sorted; in general it still takes O(N^2) time, so it is a gamble. Mergesort always uses O(N log N) time and is therefore the most efficient of the four choices.
All AP Computer Science A Resources | {"url":"https://cdn.varsitytutors.com/ap_computer_science_a-help/mergesort","timestamp":"2024-11-07T21:32:54Z","content_type":"application/xhtml+xml","content_length":"146921","record_id":"<urn:uuid:211a2f5e-3000-4ac1-8270-4a4f9088e5d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00369.warc.gz"} |
Post TOPIC: Hawking Radiation
RE: Hawking Radiation
Black hole - never forms, or never evaporates
Yi Sun
Many discussion about the black hole conundrums, such like singularity and information loss, suggested that there must be some essential irreconcilable conflict between quantum theory and
classical gravity theory, which cannot be solved with any semiclassical quantised model of gravity, the only feasible way must be some complete unified quantum theory of gravity. In Vachaspati,
the arguments indicate the possibility of an alternate outcome of gravitational collapse which avoids the information loss problem. In this paper, also with semiclassical analysis, it shows that
so long as the mechanism of black hole evaporation satisfies a quite loose condition that the evaporation lifespan is finite for external observers, regardless of the detailed mechanism and
process of evaporation, the conundrums above can be naturally avoided. This condition can be satisfied with Hawking-Unruh mechanism. Thus, the conflict between quantum theory and classical
gravity theory may be not as serious as it seemed to be, the effectiveness of semiclassical methods might be underestimated. An exact universal solution with spherical symmetry of Einstein field
equation has been derived in this paper. All possible solutions with spherical symmetry of Einstein field equation are its special cases. In addition, some problems of the Penrose diagram of an
evaporating black hole first introduced by Hawking in 1975 are clarified.
Measurement of stimulated Hawking emission in an analogue system
Silke Weinfurtner, Edmund W. Tedford, Matthew C. J. Penrice, William G. Unruh, Gregory A. Lawrence
(Version v2)
There is a mathematical analogy between the propagation of fields in a general relativistic space-time and long (shallow water) surface waves on moving water. Hawking argued that black holes
emit thermal radiation via a quantum spontaneous emission. Similar arguments predict the same effect near wave horizons in fluid flow. By placing a streamlined obstacle into an open channel flow
we create a region of high velocity over the obstacle that can include wave horizons. Long waves propagating upstream towards this region are blocked and converted into short (deep water) waves.
This is the analogue of the stimulated emission by a white hole (the time inverse of a black hole), and our measurements of the amplitudes of the converted waves demonstrate the thermal nature
of the conversion process for this system. Given the close relationship between stimulated and spontaneous emission, our findings attest to the generality of the Hawking process.
Read more
(2086kb, PDF)
Bekenstein-Hawking entropy
Corrections to Bekenstein-Hawking entropy --- Quantum or not-so quantum?
S. Shankaranarayanan (IISER-Trivandrum)
Hawking radiation and Bekenstein--Hawking entropy are the two robust predictions of a yet unknown quantum theory of gravity. Any theory which fails to reproduce these predictions is certainly
incorrect. While several approaches lead to Bekenstein--Hawking entropy, they all lead to different sub-leading corrections. In this article, we ask a question that is relevant for any approach:
Using simple techniques, can we know whether an approach contains quantum or semi-classical degrees of freedom? Using naive dimensional analysis, we show that the semi-classical black-hole
entropy has the same dimensional dependence as the gravity action. Among others, this provides a plausible explanation for the connection between Einstein's equations and thermodynamic equation
of state, and that the quantum corrections should have a different scaling behaviour.
Read more
(8kb, PDF)
RE: Hawking Radiation
Hawking radiation glimpsed in artificial black hole “You might expect black holes to be, well, black, but several decades ago Stephen Hawking calculated that they should emit light. Now, for
the first time, physicists claim that they have observed this weird glow in the lab.”Read more
Hawking Radiation the simple version
'Virtual particle pairs' are constantly being created in 'empty' space....and if they happen to be created near the horizon of the black hole, then one of them can fall in...
Normally, they are created as a particle-antiparticle pair and they quickly annihilate/cancel each other out; so obviously, if one fell into the BH then it's not possible for the other one to
'cancel out', in which case the other one manages to escape as Hawking radiation.
The particle that fell into the BH is still virtual and must restore its 'conservation of energy' by giving itself a negative mass-energy.
The black-hole cancels this negative mass-energy and loses some of its total mass and shrinks...
Hawking Radiation is a process using mainly virtual photons (which are their own anti-particle, and thus can carry negative mass energy).
An alternative model is that the process can be regarded as 'perspective process'.
An electron-positron pair can be created, and it does not matter which one falls into the BH: from our perspective a particle has 'escaped' from the black hole (and from our perspective the black hole has transformed the time and spatial dimensions), and the black hole has thus lost mass.
The black-hole Mass (solar masses) radiates like a 'blackbody' with a temperature of
(6 x 10^-8/Mass) Kelvin, with the total lifetime of a black hole Mass of about:
10^71 Mass^3 seconds
Ed ~ Hawking Radiation is best understood if you know that the process is with virtual photons, (which are their own anti-particle and thus can carry negative mass energy).
Simulated black holes may prove theory
By cramming several thousand superconducting quantum interference devices (SQUIDS), which guide light down a track much like a rail guides trains, scientists hope to simulate the effects of a
black hole.
The research could help prove a 35-year-old theory originally proposed by physicist Stephen Hawking and cement humanity's fundamental understanding of the universe.Read more
Hawking radiation as seen by an infalling observer
Eric Greenwood, Dejan Stojkovic
(Version v2)
We investigate an important question of Hawking-like radiation as seen by an infalling observer during gravitational collapse. Using the functional Schrodinger formalism we are able to probe the
time dependent regime which is out of the reach of the standard approximations like the Bogolyubov method. We calculate the occupation number of particles whose frequencies are measured in the
proper time of an infalling observer in two crucially different space-time foliations: Schwarzschild and Eddington-Finkelstein. We demonstrate that the distribution in Schwarzschild reference
frame is not quite thermal, though it becomes thermal once the horizon is crossed. We approximately fit the temperature and find that the local temperature increases as the horizon is
approached, and diverges exactly at the horizon. In Eddington-Finkelstein reference frame the temperature at the horizon is finite, since the observer in that frame is not accelerated. These
results are in agreement with what is generically expected in the absence of back reaction. We also discuss some subtleties related to the physical interpretation of the infinite local
temperature in Schwarzschild reference frame.
Read more
(1831kb, PDF)
Semiconducting SQUID to help detect Hawking radiation
A trick of the light has allowed U.S. scientists to mimic the physics of black holes in the laboratory.
Reported in the journal Physical Review Letters, the study paves the way for the first test of a number of theories surrounding the concept of black holes, including the existence of Hawking
Read more
A sonic black hole in a density-inverted Bose-Einstein condensate
O. Lahav, A. Itah, A. Blumkin, C. Gordon, J. Steinhauer
We have created the analogue of a black hole in a Bose-Einstein condensate. In this sonic black hole, sound waves, rather than light waves, cannot escape the event horizon. The black hole is
realised via a counterintuitive density inversion, in which an attractive potential repels the atoms. This allows for measured flow speeds which cross and exceed the speed of sound by an order
of magnitude. The Landau critical velocity is therefore surpassed. The point where the flow speed equals the speed of sound is the event horizon. The effective gravity is determined from the
profiles of the velocity and speed of sound.
Read more
(199kb, PDF) | {"url":"https://astronomy.activeboard.com/t19522384/hawking-radiation/?page=3&sort=newestFirst","timestamp":"2024-11-11T11:35:58Z","content_type":"application/xhtml+xml","content_length":"97954","record_id":"<urn:uuid:d5132a2e-1c60-4e55-9575-372676a5642d>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00514.warc.gz"} |
Lesson 15
Quartiles and Interquartile Range
Let's look at other measures for describing distributions.
Problem 1
Suppose that there are 20 numbers in a data set and that they are all different.
1. How many of the values in this data set are between the first quartile and the third quartile?
2. How many of the values in this data set are between the first quartile and the median?
Problem 2
In a word game, 1 letter is worth 1 point. This dot plot shows the scores for 20 common words.
1. What is the median score?
2. What is the first quartile (Q1)?
3. What is the third quartile (Q3)?
4. What is the interquartile range (IQR)?
Problem 3
Mai and Priya each played 10 games of bowling and recorded the scores. Mai’s median score was 120, and her IQR was 5. Priya’s median score was 118, and her IQR was 15. Whose scores probably had less
variability? Explain how you know.
Problem 4
Here are five dot plots that show the amounts of time that ten sixth-grade students in five countries took to get to school. Match each dot plot with the appropriate median and IQR.
1. Median: 17.5, IQR: 11
2. Median: 15, IQR: 30
3. Median: 8, IQR: 4
4. Median: 7, IQR: 10
5. Median: 12.5, IQR: 8
Problem 5
Draw and label an appropriate pair of axes and plot the points. \(A = (10, 50)\), \(B = (30, 25)\), \(C = (0, 30)\), \(D = (20, 35)\)
(From Unit 7, Lesson 12.)
Problem 6
There are 20 pennies in a jar. If 16% of the coins in the jar are pennies, how many coins are there in the jar?
(From Unit 6, Lesson 7.) | {"url":"https://curriculum.illustrativemathematics.org/MS/students/1/8/15/practice.html","timestamp":"2024-11-07T22:12:45Z","content_type":"text/html","content_length":"74526","record_id":"<urn:uuid:ee0cb4e3-5df4-4a9e-87d5-d36a56baa08e>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00624.warc.gz"} |
How to Pick Up the Bucks
You have 2N coins of varying denominations (each is a non-negative real number) in a line. Players A and B take turns choosing one coin from either end. Prove A always has a strategy that ensures he
ends up with at least as much as B.
Communicated by A. Roy
View comments
Warning: Solutions May Be Discussed in the Comments
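For readers who want to experiment before peeking at the comments, the short game-theoretic check below (a sketch added here, not part of the original page) computes the best total the player to move can guarantee against optimal play; for lines of even length it never falls below half the pot:

from functools import lru_cache

def first_player_value(coins):
    # largest total the player to move can guarantee from the remaining line coins[i..j]
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def best(i, j):
        if i > j:
            return 0
        take_left = coins[i] + min(best(i + 2, j), best(i + 1, j - 1))
        take_right = coins[j] + min(best(i + 1, j - 1), best(i, j - 2))
        return max(take_left, take_right)

    return best(0, len(coins) - 1)

coins = [3, 9, 1, 2]
print(first_player_value(coins), sum(coins) / 2)   # guaranteed total vs. half the pot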
Written by Steven Miller on March 20, 2011. Reply
My apologies — we had changed software and your reply was temporarily lost. If you contact me again I’ll respond.
Original post: Oh my…please help! I have a 142 IQ, yet I do NOT understand the function of ‘N’ in equations. Truth be told, I am a super creative visual artist and musician who does well with
spacial math…yet the meaning of “n” eludes me. I have never studied math during my higher education years, yet I WANT so badly to understand …’n’!
I think, perhaps, the explanation of how to solve this riddle could help me…this could be the missing link I’ve been searching for. If anyone can help me, please visit my blog, and drop me an
email. Seriously. Thank you.
PS It is a SERIOUS mental block with me! I’ve had a Rubik’s Cube for less than a week and I can solve it under 3 minutes already, yet I’ve been trying to figure out ‘n’ for a couple of years to
no avail. Any good teachers out there willing to shed some light? I think maybe I was born without the part of the brain that comprehends ‘n’.
Written by Linh on October 16, 2011. Reply
This is more complicated than it appears to be. I thought I can prove it easily by induction but there is a problem 🙁
Written by Steven Miller on October 16, 2011. Reply
Try doing some special cases. start with fewer coins than 99, and try and get a feel. There IS a simple recipe for what you should do.
Written by Steven Miller on November 26, 2011. Reply
twig: correct, well done; not posting as it’s the soln
Written by Mark on December 5, 2011. Reply
Is it as simple as looking at the coins play A would potential unveil? If the potential coins are both similar in denomination then he takes the larger of the available coins. If the potentials
are dissimilar then he would choose the coin that unveils the smaller option. I am not a student of math but very facinated by it. I have only a limited background as I have a degree in Biology.
How close am I in a qualitative sense?
Written by Steven Miller on December 6, 2011. Reply
Not quite. The coins are in a line: c1, c2, c3, …, c_{2N}. You have to describe A’s strategy. They start by taking either c1 or c2. Which one? Why? Imagine there are just 2 coins, or 4 coins.
What would the strategy be? Note that A does NOT always have a winning strategy (imagine there are only 3 coins).
Written by kevin on March 6, 2012. Reply
Assuming A goes first, then A sets the tone… a can look down the line to see what coins are the largest and will have to sacrifice by choosing smaller coins in order to set himself up for the
bigger coins??
Written by Steven Miller on March 6, 2012. Reply
you’re on the right track, but not there. imagine the coins are $1 $999 $2 — you can’t win. you need an explicit strategy…. Email me at sjm1 AT williams.edu if you want to chat more.
Written by Bruno on March 9, 2012. Reply
Where can I check if I got the answer? I am pretty sure I did, but in general I’d like to be able to check (I will be coming here often because I have interviews coming up…) Thanks!
Written by Steven Miller on March 9, 2012. Reply
email me at sjm1 AT williams.edu — I see you went to Brown — spent 4 great years as a postdoc there.
Written by Yehuda on March 27, 2012. Reply
For a1, a2, ………, a(n-1), a(n)
Say a1 > a(n). Choose a1 if :
1. a1>a2
2. a(n-1)>a2>a1
3. a2>a1>a(n-1) and a2-a1 a(n-1)>a1 and a2-a1 < a(n-1)-a(n)
Otherwise choose a (n)
What I do not understand is how it guarantees that player A will win
Written by Steven Miller on March 28, 2012. Reply
it doesn’t. email me (sjm1 AT williams.edu) if you want a hint. //s
Written by chris on May 19, 2012. Reply
Is it 2N to ensure you have at least two coins? Could N be 0.5 giving a ‘line’ of 1 coin?
The case of 3 coins implies non-integer N? but I’m confused by the 2N (can’t it just be N?).
With $1 $999 $2 whoever goes first loses, so player A must have the decision on who goes first.
I think I’m nearly there other than the last case, I’ve sent an email to avoid spoiling for anyone else. Great puzzle 🙂
Written by Steven Miller on May 19, 2012. Reply
you’re right that with an odd number of coins whomever goes first MIGHT
it’s important that there are an even number of coins
Written by Anonymous on June 5, 2012. Reply
always go first
Written by Steven Miller on June 6, 2012. Reply
That’s not the soln — how does going first force a win? You need to add a few details. //s
Written by Lloyd on October 21, 2012. Reply
Let 2N be divided into k denominations with n1, n2, …, nk be the number of coins in each denomination. It is clear that n1 + n2 + … + nk = 2N (n1 reads n_sub_1). Let index 1 represent the highest
value of these denominations, 2 for the second highest, and so on. To ensure advantage, each player would intend to pick the coin whose value is highest (or one of) among the remaining coins.
This makes clear that if n1, n2, …, nk are all even, then this will ensure that both players will have equal value (sum of the coins). Now suppose that at least one of these denominations have
odd, say n_sub_i, the number of coins in the denomination i. It follows that there exist an n_sub_j (where i B1+B2+…+B_sub_i, where A_sub_i or B_sub_i is the total cost of coins picked by
respective players in denomination i. Although this implies that B_sub_j>A_sub_j, still A has an advantage over B since the cost of denomination i is larger than i+1 or any of the succeeding
denominations and that for the remaining even numbered denominations, both have equal cost picked up. Therefore, A has at least as much as B. They have equal if n1, n2, …, nk are all even. If at
least two of them are odd (there can’t only be one or three or so on odd denomination since the sum of all are even), this would ensure A has more cost of coins picked up than B. Q.E.D. =)
Written by Steven Miller on October 21, 2012. Reply
You might be on the right path but hard to follow. There is a one line answer. You can email me at [email protected]
Written by Jason on November 28, 2012. Reply
One of your comments says. “they start by taking c1 or c2.” I thought that whoever goes first must take either c1 or cn (the first coin or the last coin). Can c2 be selected if c1 hasn’t been
taken? Or, can c(n-1) be selected if cn hasn’t been chosen?
Written by Steven Miller on November 28, 2012. Reply
probably should be c_1 or c_{2n} as have to take an end — email me at sjm1 AT williams.edu to discuss in greater depth //s
Written by Simon on February 15, 2017. Reply
So for the rules: -both could start
– A could manipulate the order of the coins
-if the coins were ‘c1′,’c2′,’c3′,’c4’, they could choose between c1 and c4
am i right so far?
Written by Steven Miller on February 15, 2017. Reply
coins are given in a certain order
one player knows they are going first
they can both see the amounts
they can ONLY choose from an end
Written by sZpak on October 26, 2017. Reply
Nice one. I tried to solve this puzzle few years ago but I failed. Now I tried again and finally I got it 🙂
But the reasoning is not that simple as the solution. Probably the problem comes from fact that you can:
1. prove the statement – in that case “natural”, intuitive thought is an induction; or
2. point the strategy (constructive proof), but in such case “natural” thinking is greedy algorithm.
Both don’t work 🙂
Thank you Steve for you Puzzle page.
Written by Steven Miller on October 27, 2017. Reply
What’s your soln? Email me at sjm1 AT williams.edu
Leave a Comment | {"url":"https://mathriddles.williams.edu/?p=123","timestamp":"2024-11-05T10:32:17Z","content_type":"application/xhtml+xml","content_length":"102006","record_id":"<urn:uuid:67f761de-b37e-4b44-bf2b-e01a3248dc65>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00026.warc.gz"} |
Max Planck Institut
Research Reports of the Max Planck Institute for Informatics
Report on the Security State of Networks of Max-Planck Institutes: Findings and Recommendations
T. Fiebig
Technical Report, 2023
@techreport{Fiebig_Report23, TITLE = {Report on the Security State of Networks of Max-Planck Institutes : Findings and Recommendations}, AUTHOR = {Fiebig, Tobias}, LANGUAGE = {eng}, DOI = {10.17617/
2.3532055}, INSTITUTION = {Max Planck Society}, ADDRESS = {M{\"u}nchen}, YEAR = {2023}, MARGINALMARK = {$\bullet$}, }
%0 Report %A Fiebig, Tobias %+ Internet Architecture, MPI for Informatics, Max Planck Society %T Report on the Security State of Networks of Max-Planck Institutes : Findings and Recommendations : %G
eng %U http://hdl.handle.net/21.11116/0000-000D-C4C9-3 %R 10.17617/2.3532055 %Y Max Planck Society %C München %D 2023 %P 70 p.
Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization
N. Qian, J. Wang, F. Mueller, F. Bernard, V. Golyanik and C. Theobalt
Technical Report, 2020
3D hand reconstruction from image data is a widely-studied problem in computer vision and graphics, and has a particularly high relevance for virtual and augmented reality. Although several 3D hand reconstruction approaches leverage hand models as a strong prior to resolve ambiguities and achieve a more robust reconstruction, most existing models account only for the hand shape and poses and do not model the texture. To fill this gap, in this work we present the first parametric texture model of human hands. Our model spans several dimensions of hand appearance variability (e.g., related to gender, ethnicity, or age) and only requires a commodity camera for data acquisition. Experimentally, we demonstrate that our appearance model can be used to tackle a range of challenging problems such as 3D hand reconstruction from a single monocular image. Furthermore, our appearance model can be used to define a neural rendering layer that enables training with a self-supervised photometric loss. We make our model publicly available.
@techreport{Qian_report2020, TITLE = {Parametric Hand Texture Model for {3D} Hand Reconstruction and Personalization}, AUTHOR = {Qian, Neng and Wang, Jiayi and Mueller, Franziska and Bernard, Florian
and Golyanik, Vladislav and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2020-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}
cken}, YEAR = {2020}, ABSTRACT = {3D hand reconstruction from image data is a widely-studied problem in com-<br>puter vision and graphics, and has a particularly high relevance for virtual<br>and
augmented reality. Although several 3D hand reconstruction approaches<br>leverage hand models as a strong prior to resolve ambiguities and achieve a<br>more robust reconstruction, most existing
models account only for the hand<br>shape and poses and do not model the texture. To {fi}ll this gap, in this work<br>we present the {fi}rst parametric texture model of human hands. Our model<br>
spans several dimensions of hand appearance variability (e.g., related to gen-<br>der, ethnicity, or age) and only requires a commodity camera for data acqui-<br>sition. Experimentally, we
demonstrate that our appearance model can be<br>used to tackle a range of challenging problems such as 3D hand reconstruc-<br>tion from a single monocular image. Furthermore, our appearance model<br>
can be used to de{fi}ne a neural rendering layer that enables training with a<br>self-supervised photometric loss. We make our model publicly available.}, TYPE = {Research Report}, }
%0 Report %A Qian, Neng %A Wang, Jiayi %A Mueller, Franziska %A Bernard, Florian %A Golyanik, Vladislav %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer
Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for
Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization : %G eng %U http://
hdl.handle.net/21.11116/0000-0006-9128-9 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2020 %P 37 p. %X 3D hand reconstruction from image data is a widely-studied problem in com-
<br>puter vision and graphics, and has a particularly high relevance for virtual<br>and augmented reality. Although several 3D hand reconstruction approaches<br>leverage hand models as a strong prior
to resolve ambiguities and achieve a<br>more robust reconstruction, most existing models account only for the hand<br>shape and poses and do not model the texture. To fill this gap, in this
work<br>we present the first parametric texture model of human hands. Our model<br>spans several dimensions of hand appearance variability (e.g., related to gen-<br>der, ethnicity, or age) and
only requires a commodity camera for data acqui-<br>sition. Experimentally, we demonstrate that our appearance model can be<br>used to tackle a range of challenging problems such as 3D hand
reconstruc-<br>tion from a single monocular image. Furthermore, our appearance model<br>can be used to define a neural rendering layer that enables training with a<br>self-supervised
photometric loss. We make our model publicly available. %K hand texture model, appearance modeling, hand tracking, 3D hand recon- struction %B Research Report %@ false
Live User-guided Intrinsic Video For Static Scenes
G. Fox, A. Meka, M. Zollhöfer, C. Richardt and C. Theobalt
Technical Report, 2017
We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the
scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in
three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction
metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by
re-projection.We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In
addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.
@techreport{Report2017-4-001, TITLE = {Live User-guided Intrinsic Video For Static Scenes}, AUTHOR = {Fox, Gereon and Meka, Abhimitra and Zollh{\"o}fer, Michael and Richardt, Christian and Theobalt,
Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2017-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, ABSTRACT = {We
present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene
using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional
space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using
interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection.We leverage
this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved
decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.}, TYPE = {Research Report}, }
%0 Report %A Fox, Gereon %A Meka, Abhimitra %A Zollhöfer, Michael %A Richardt, Christian %A Theobalt, Christian %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck
Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Live
User-guided Intrinsic Video For Static Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5DA7-3 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2017 %P 12 p. %X We
present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene
using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional
space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using
interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection.We leverage
this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved
decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance. %B Research Report %@ false
Generating Semantic Aspects for Queries
D. Gupta, K. Berberich, J. Strötgen and D. Zeinalipour-Yazti
Technical Report, 2017
Ambiguous information needs expressed in a limited number of keywords often result in long-winded query sessions and many query reformulations. In this work, we tackle ambiguous queries by providing automatically generated semantic aspects that can guide users to satisfying results regarding their information needs. To generate semantic aspects, we use semantic annotations available in the documents and leverage models representing the semantic relationships between annotations of the same type. The aspects in turn provide us a foundation for representing text in a completely structured manner, thereby allowing for a semantically-motivated organization of search results. We evaluate our approach on a testbed of over 5,000 aspects on Web scale document collections amounting to more than 450 million documents, with temporal, geographic, and named entity annotations as example dimensions. Our experimental results show that our general approach is Web-scale ready and finds relevant aspects for highly ambiguous queries.
@techreport{Guptareport2007, TITLE = {Generating Semantic Aspects for Queries}, AUTHOR = {Gupta, Dhruv and Berberich, Klaus and Str{\"o}tgen, Jannik and Zeinalipour-Yazti, Demetrios}, LANGUAGE =
{eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2017-5-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, ABSTRACT = {Ambiguous information needs
expressed in a limited number of keywords<br>often result in long-winded query sessions and many query reformulations.<br>In this work, we tackle ambiguous queries by providing automatically gen-<br>
erated semantic aspects that can guide users to satisfying results regarding<br>their information needs. To generate semantic aspects, we use semantic an-<br>notations available in the documents and
leverage models representing the<br>semantic relationships between annotations of the same type. The aspects in<br>turn provide us a foundation for representing text in a completely structured<br>
manner, thereby allowing for a semantically-motivated organization of search<br>results. We evaluate our approach on a testbed of over 5,000 aspects on Web<br>scale document collections amounting to
more than 450 million documents,<br>with temporal, geographic, and named entity annotations as example dimen-<br>sions. Our experimental results show that our general approach is Web-scale<br>ready
and finds relevant aspects for highly ambiguous queries.}, TYPE = {Research Report}, }
%0 Report %A Gupta, Dhruv %A Berberich, Klaus %A Strötgen, Jannik %A Zeinalipour-Yazti, Demetrios %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and
Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max
Planck Society %T Generating Semantic Aspects for Queries : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-07DD-0 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2017 %P 39
p. %X Ambiguous information needs expressed in a limited number of keywords often result in long-winded query sessions and many query reformulations. In this work, we tackle ambiguous queries by providing automatically generated semantic aspects that can guide users to satisfying results regarding their information needs. To generate semantic aspects, we use semantic annotations available in the documents and leverage models representing the semantic relationships between annotations of the same type. The aspects in turn provide us a foundation for representing text in a completely structured manner, thereby allowing for a semantically-motivated organization of search results. We evaluate our approach on a testbed of over 5,000 aspects on Web-scale document collections amounting to more than 450 million documents, with temporal, geographic, and named entity annotations as example dimensions. Our experimental results show that our general approach is Web-scale ready and finds relevant aspects for highly ambiguous queries. %B Research Report %@ false
WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor
S. Sridhar, A. Markussen, A. Oulasvirta, C. Theobalt and S. Boring
Technical Report, 2017
This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the
input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately
detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously
supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on
consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the
expressiveness of input by interweaving mid-air and multitouch for several interactive applications.
@techreport{sridharwatch17, TITLE = {{WatchSense}: On- and Above-Skin Input Sensing through a Wearable Depth Sensor}, AUTHOR = {Sridhar, Srinath and Markussen, Anders and Oulasvirta, Antti and
Theobalt, Christian and Boring, Sebastian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2016-4-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken},
YEAR = {2017}, ABSTRACT = {This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a
wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and
occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch
input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs
in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense
increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.}, TYPE = {Research Report}, }
%0 Report %A Sridhar, Srinath %A Markussen, Anders %A Oulasvirta, Antti %A Theobalt, Christian %A Boring, Sebastian %+ Computer Graphics, MPI for Informatics, Max Planck Society External
Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T WatchSense: On- and Above-Skin Input
Sensing through a Wearable Depth Sensor : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-402E-D %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2017 %P 17 p. %X This paper
contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to
neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect
fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both.
We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile
devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input
by interweaving mid-air and multitouch for several interactive applications. %B Research Report %@ false
Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization
E. Althaus, B. Beber, W. Damm, S. Disch, W. Hagemann, A. Rakow, C. Scholl, U. Waldmann and B. Wirtz
Technical Report, 2016
This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as naturally arise when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to purely functional controller models -- not be analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown consistently yield a speedup of about 20x over previously published results for a similar benchmark suite, and are complemented by new results on counterexample-guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows us to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and 2^71 discrete states, 20 continuous variables and 2^199 discrete states, and 9 continuous variables and 2^271 discrete states.
@techreport{AlthausBeberDammEtAl2016ATR, TITLE = {Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization}, AUTHOR = {Althaus, Ernst and
Beber, Bj{\"o}rn and Damm, Werner and Disch, Stefan and Hagemann, Willem and Rakow, Astrid and Scholl, Christoph and Waldmann, Uwe and Wirtz, Boris}, LANGUAGE = {eng}, ISSN = {1860-9821}, NUMBER =
{ATR103}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2016}, DATE = {2016}, ABSTRACT = {This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid
automata with large discrete state spaces, such as naturally arising when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to
purely functional controller models -- not analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous
and discrete part of the state space. The optimization techniques shown yield consistently a speedup of about 20 against previously published results for a similar benchmark suite, and complement
these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows to significantly extend the class of
models for which safety can be established, covering in particular models with 23 continuous variables and 2 to the 71 discrete states, 20 continuous variables and 2 to the 199 discrete states, and 9
continuous variables and 2 to the 271 discrete states.}, TYPE = {AVACS Technical Report}, VOLUME = {103}, }
%0 Report %A Althaus, Ernst %A Beber, Björn %A Damm, Werner %A Disch, Stefan %A Hagemann, Willem %A Rakow, Astrid %A Scholl, Christoph %A Waldmann, Uwe %A Wirtz, Boris %+ Algorithms and
Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations Automation of Logic, MPI for
Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations Automation of Logic, MPI for
Informatics, Max Planck Society External Organizations %T Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-002C-4540-0 %Y SFB/TR 14 AVACS %D 2016 %P 93 p. %X This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid
automata with large discrete state spaces, such as naturally arising when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to
purely functional controller models -- not analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous
and discrete part of the state space. The optimization techniques shown yield consistently a speedup of about 20 against previously published results for a similar benchmark suite, and complement
these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows to significantly extend the class of
models for which safety can be established, covering in particular models with 23 continuous variables and 2 to the 71 discrete states, 20 continuous variables and 2 to the 199 discrete states, and 9
continuous variables and 2 to the 271 discrete states. %B AVACS Technical Report %N 103 %@ false %U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_103.pdf
Diversifying Search Results Using Time
D. Gupta and K. Berberich
Technical Report, 2016
Getting an overview of a historic entity or event can be difficult in search results, especially if important dates concerning the entity or event are not known beforehand. For such information
needs, users would benefit if returned results covered diverse dates, thus giving an overview of what has happened throughout history. Diversifying search results based on important dates can be a
building block for applications, for instance, in digital humanities. Historians would thus be able to quickly explore longitudinal document collections by querying for entities or events without
knowing associated important dates a priori. In this work, we describe an approach to diversify search results using temporal expressions (e.g., in the 1990s) from their contents. Our approach first
identifies time intervals of interest to the given keyword query based on pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of identified time intervals. We
present a novel and objective evaluation for our proposed approach. We test the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, collectively
consisting of around 6 million documents. Using history-oriented queries and encyclopedic resources we show that our method indeed is able to present search results diversified along time.
@techreport{GuptaReport2016-5-001, TITLE = {Diversifying Search Results Using Time}, AUTHOR = {Gupta, Dhruv and Berberich, Klaus}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2016-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, ABSTRACT = {Getting an overview of a historic entity or event can be difficult in search results,
especially if important dates concerning the entity or event are not known beforehand. For such information needs, users would benefit if returned results covered diverse dates, thus giving an
overview of what has happened throughout history. Diversifying search results based on important dates can be a building block for applications, for instance, in digital humanities. Historians would
thus be able to quickly explore longitudinal document collections by querying for entities or events without knowing associated important dates apriori. In this work, we describe an approach to
diversify search results using temporal expressions (e.g., in the 1990s) from their contents. Our approach first identifies time intervals of interest to the given keyword query based on
pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of identified time intervals. We present a novel and objective evaluation for our proposed approach. We test
the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, collectively consisting of around 6 million documents. Using history-oriented queries and
encyclopedic resources we show that our method indeed is able to present search results diversified along time.}, TYPE = {Research Report}, }
%0 Report %A Gupta, Dhruv %A Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Diversifying Search Results Using Time : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-0AA4-C %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2016 %P 51 p. %X Getting
an overview of a historic entity or event can be difficult in search results, especially if important dates concerning the entity or event are not known beforehand. For such information needs, users
would benefit if returned results covered diverse dates, thus giving an overview of what has happened throughout history. Diversifying search results based on important dates can be a building block
for applications, for instance, in digital humanities. Historians would thus be able to quickly explore longitudinal document collections by querying for entities or events without knowing associated
important dates apriori. In this work, we describe an approach to diversify search results using temporal expressions (e.g., in the 1990s) from their contents. Our approach first identifies time
intervals of interest to the given keyword query based on pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of identified time intervals. We present a novel and
objective evaluation for our proposed approach. We test the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, collectively consisting of around 6
million documents. Using history-oriented queries and encyclopedic resources we show that our method indeed is able to present search results diversified along time. %B Research Report %@ false
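The greedy flavour of the re-ranking step can be sketched in a few lines. The snippet below is only an illustration under simplifying assumptions (time intervals are already extracted, coverage is counted per interval, ties fall back to relevance order); it is not the authors' implementation, and all names are hypothetical.

# Hypothetical sketch: greedily re-rank results so the top-k cover many time intervals.
# 'results' is a relevance-ordered list of (doc_id, intervals); 'intervals' is a set of
# (start_year, end_year) tuples extracted from temporal expressions in the document.
def rerank_by_time_coverage(results, target_intervals, top_k=10):
    targets = set(target_intervals)
    covered = set()                 # target intervals already covered by picked documents
    remaining = list(results)
    reranked = []
    while remaining and len(reranked) < top_k:
        # pick the document covering the most not-yet-covered intervals;
        # break ties by the original (relevance) order
        best = max(range(len(remaining)),
                   key=lambda i: (len((remaining[i][1] & targets) - covered), -i))
        doc_id, doc_intervals = remaining.pop(best)
        covered |= doc_intervals & targets
        reranked.append(doc_id)
    reranked.extend(doc_id for doc_id, _ in remaining)   # keep leftovers in original order
    return reranked

# Toy example:
docs = [("d1", {(1990, 1999)}), ("d2", {(1990, 1999), (2000, 2009)}), ("d3", {(1960, 1969)})]
print(rerank_by_time_coverage(docs, [(1960, 1969), (1990, 1999), (2000, 2009)]))  # ['d2', 'd3', 'd1']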
Leveraging Semantic Annotations to Link Wikipedia and News Archives
A. Mishra and K. Berberich
Technical Report, 2016
The incomprehensible amount of information available online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts from Wikipedia summarizing
events to online news articles elaborating on them. To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query with the goal to retrieve
a ranked list of relevant news articles. We find that Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time, geolocations, and named entities
involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In our experiments on
two datasets, we compare methods that consider different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.
@techreport{MishraBerberich16, TITLE = {Leveraging Semantic Annotations to Link Wikipedia and News Archives}, AUTHOR = {Mishra, Arunav and Berberich, Klaus}, LANGUAGE = {eng}, ISSN = {0946-011X},
NUMBER = {MPI-I-2016-5-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, ABSTRACT = {The incomprehensible amount of information available
online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts from Wikipedia summarizing events to online news articles elaborating on them. To
address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query with the goal to retrieve a ranked list of relevant news articles. We find that
Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time, geolocations, and named entities involved in the event. Our retrieval model leverages
text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In our experiments on two datasets, we compare methods that consider
different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.}, TYPE = {Research Reports}, }
%0 Report %A Mishra, Arunav %A Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck
Society %T Leveraging Semantic Annotations to Link Wikipedia and News Archives : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5FF0-A %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2016 %P 21 p. %X The incomprehensible amount of information available online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts
from Wikipedia summarizing events to online news articles elaborating on them. To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user
query with the goal to retrieve a ranked list of relevant news articles. We find that Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time,
geolocations, and named entities involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank
documents. In our experiments on two datasets, we compare methods that consider different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.
%B Research Reports %@ false
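Read one way, the retrieval model described above interpolates independent per-dimension query models. The formula below is only an illustrative reading of the abstract (the weights $\lambda_i$ and the exact estimation procedure are not taken from the report): $\mathrm{score}(d \mid q) = \sum_{i \in \{\text{text},\, \text{time},\, \text{geo},\, \text{entity}\}} \lambda_i \log P(q_i \mid \theta_d^{(i)})$ with $\sum_i \lambda_i = 1$, where $q_i$ is the part of the query (excerpt) falling into dimension $i$ and $\theta_d^{(i)}$ is the document's model over annotations of that dimension.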
Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input
S. Sridhar, F. Mueller, M. Zollhöfer, D. Casas, A. Oulasvirta and C. Theobalt
Technical Report, 2016
Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However,
due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches
resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we
propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows
fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part
classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and
qualitative results show the key advantages of our method: speed, accuracy, and robustness.
@techreport{Report2016-4-001, TITLE = {Real-time Joint Tracking of a Hand Manipulating an Object from {RGB-D} Input}, AUTHOR = {Sridhar, Srinath and Mueller, Franziska and Zollh{\"o}fer, Michael and
Casas, Dan and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2016-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr
{\"u}cken}, YEAR = {2016}, ABSTRACT = {Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible
computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the
two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time
tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy
tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the
optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated
hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.}, TYPE = {Research Report}, }
%0 Report %A Sridhar, Srinath %A Mueller, Franziska %A Zollhöfer, Michael %A Casas, Dan %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI
for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-002B-5510-A %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2016 %P 31 p. %X Real-time simultaneous tracking of hands manipulating and interacting
with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance,
jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem
and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The
core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to
address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive
experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and
robustness. %B Research Report %@ false
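In generic terms, an articulated Gaussian-mixture alignment of this kind minimizes an energy over the joint hand-object pose parameters $\theta$; the decomposition below is an illustrative reading of the abstract rather than the report's exact energy: $E(\theta) = E_{\text{align}}(\theta) + \lambda_{\text{occ}} E_{\text{occ}}(\theta) + \lambda_{\text{con}} E_{\text{contact}}(\theta)$, where $E_{\text{align}}$ scores the overlap between a Gaussian mixture attached to the posed articulated model and a mixture fitted to the input depth data, and the remaining terms regularize occlusions and hand-object contacts.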
FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction
S. Sridhar, G. Bailly, E. Heydrich, A. Oulasvirta and C. Theobalt
Technical Report, 2016
This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a
discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number
of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the
design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse
operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.
@techreport{Report2016-4-002, TITLE = {{FullHand}: {M}arkerless Skeleton-based Tracking for Free-Hand Interaction}, AUTHOR = {Sridhar, Srinath and Bailly, Gilles and Heydrich, Elias and Oulasvirta,
Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2016-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016},
ABSTRACT = {This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting
scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with
high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We
discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including
mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in
user performance.}, TYPE = {Research Report}, }
%0 Report %A Sridhar, Srinath %A Bailly, Gilles %A Heydrich, Elias %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations
External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-002B-7456-7 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2016 %P 11 p. %X This paper advances a novel markerless hand tracking method for
interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose
estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2)
sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present
several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user
study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance. %B Research Report %@ false
Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers
M. Barz, A. Bulling and F. Daiber
Technical Report, 2015
Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays but current interfaces lack information about the tracker's gaze estimation error.
Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error can not be dealt with. The error depends on the physical properties of the display
and constantly varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model
covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model
based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared
error of 17.99 px (1.96°).
@techreport{Barz_Rep15, TITLE = {Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers}, AUTHOR = {Barz, Michael and Bulling, Andreas and Daiber, Florian},
LANGUAGE = {eng}, URL = {https://perceptual.mpi-inf.mpg.de/files/2015/01/gazequality.pdf}, NUMBER = {15-01}, INSTITUTION = {DFKI}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, ABSTRACT = {Head-mounted
eye tracking has significant potential for mobile gaze-based interaction with ambient displays but current interfaces lack information about the tracker\'s gaze estimation error. Consequently,
current interfaces do not exploit the full potential of gaze input as the inherent estimation error can not be dealt with. The error depends on the physical properties of the display and constantly
varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model covers the
full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model based on a
series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared error of
17.99~px ($1.96^\\circ$).}, TYPE = {DFKI Research Report}, }
%0 Report %A Barz, Michael %A Bulling, Andreas %A Daiber, Florian %+ External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T
Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-B972-8 %U https://perceptual.mpi-inf.mpg.de/
files/2015/01/gazequality.pdf %Y DFKI %C Saarbrücken %D 2015 %8 01.01.2015 %P 10 p. %X Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays
but current interfaces lack information about the tracker\'s gaze estimation error. Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error
can not be dealt with. The error depends on the physical properties of the display and constantly varies with changes in position and distance of the user to the display. In this work we present a
computational model of gaze estimation error for head-mounted eye trackers. Our model covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera
coordinates, marker-based display detection, and display mapping. We build the model based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker.
Results show that our model can predict gaze estimation error with a root mean squared error of 17.99~px ($1.96^\\circ$). %B DFKI Research Report %U http://www.dfki.de/web/forschung/publikationen/
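For context, the accuracy figure above is a root mean squared error over predicted on-screen gaze points. A minimal sketch of how such an RMSE would be computed from prediction/ground-truth pairs (hypothetical names, not the authors' code):

import math

def rmse_px(predicted, ground_truth):
    # Root mean squared Euclidean error between predicted and true 2D gaze points (pixels).
    squared = [(px - gx) ** 2 + (py - gy) ** 2
               for (px, py), (gx, gy) in zip(predicted, ground_truth)]
    return math.sqrt(sum(squared) / len(squared))

# rmse_px([(100, 200), (310, 420)], [(110, 190), (300, 430)]) -> about 14.1 px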
Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata
W. Damm, M. Horbach and V. Sofronie-Stokkermans
Technical Report, 2015
@techreport{atr111, TITLE = {Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata}, AUTHOR = {Damm, Werner and Horbach, Matthias and Sofronie-Stokkermans,
Viorica}, LANGUAGE = {eng}, ISSN = {1860-9821}, NUMBER = {ATR111}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2015}, TYPE = {AVACS Technical Report}, VOLUME = {111}, }
%0 Report %A Damm, Werner %A Horbach, Matthias %A Sofronie-Stokkermans, Viorica %+ External Organizations Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for
Informatics, Max Planck Society %T Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-0805-6 %Y
SFB/TR 14 AVACS %D 2015 %P 52 p. %B AVACS Technical Report %N 111 %@ false
GazeProjector: Location-independent Gaze Interaction on and Across Multiple Displays
C. Lander, S. Gehring, A. Krüger, S. Boring and A. Bulling
Technical Report, 2015
Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge.
To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker’s position relative
to a display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction on (1) multiple displays of
arbitrary sizes, (2) independently of the user’s position and orientation to the display. In a user study with 12 participants we compared GazeProjector to existing well-established methods such as
visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while still
providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces.
@techreport{Lander_Rep15, TITLE = {{GazeProjector}: Location-independent Gaze Interaction on and Across Multiple Displays}, AUTHOR = {Lander, Christian and Gehring, Sven and Kr{\"u}ger, Antonio and
Boring, Sebastian and Bulling, Andreas}, LANGUAGE = {eng}, URL = {http://www.dfki.de/web/research/publications?pubid=7618}, NUMBER = {15-01}, INSTITUTION = {DFKI}, ADDRESS = {Saarbr{\"u}cken}, YEAR =
{2015}, ABSTRACT = {Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a
significant challenge. To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye
tracker{\textquoteright}s position relative to a display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation
and interaction on (1) multiple displays of arbitrary sizes, (2) independently of the user{\textquoteright}s position and orientation to the display. In a user study with 12 participants we compared
GazeProjector to existing well- established methods such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses,
orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the
vision of pervasive gaze-based interfaces.}, TYPE = {DFKI Research Report}, }
%0 Report %A Lander, Christian %A Gehring, Sven %A Krüger, Antonio %A Boring, Sebastian %A Bulling, Andreas %+ External Organizations External Organizations External Organizations External
Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T GazeProjector: Location-independent Gaze Interaction on and Across Multiple Displays : %G eng %U
http://hdl.handle.net/11858/00-001M-0000-0024-B947-A %U http://www.dfki.de/web/research/publications?pubid=7618 %Y DFKI %C Saarbrücken %D 2015 %8 01.01.2015 %P 10 p. %X Mobile gaze-based
interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge. To address this,
we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker’s position relative to a
display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction on (1) multiple displays of
arbitrary sizes, (2) independently of the user’s position and orientation to the display. In a user study with 12 participants we compared GazeProjector to existing well- established methods
such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while
still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces. %B DFKI
Research Report
Modal Tableau Systems with Blocking and Congruence Closure
R. A. Schmidt and U. Waldmann
Technical Report, 2015
@techreport{SchmidtTR2015, TITLE = {Modal Tableau Systems with Blocking and Congruence Closure}, AUTHOR = {Schmidt, Renate A. and Waldmann, Uwe}, LANGUAGE = {eng}, NUMBER = {uk-ac-man-scw:268816},
INSTITUTION = {University of Manchester}, ADDRESS = {Manchester}, YEAR = {2015}, TYPE = {eScholar}, }
%0 Report %A Schmidt, Renate A. %A Waldmann, Uwe %+ External Organizations Automation of Logic, MPI for Informatics, Max Planck Society %T Modal Tableau Systems with Blocking and Congruence Closure :
%G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-08BC-A %Y University of Manchester %C Manchester %D 2015 %P 22 p. %B eScholar %U https://www.escholar.manchester.ac.uk/
Phrase Query Optimization on Inverted Indexes
A. Anand, I. Mele, S. Bedathur and K. Berberich
Technical Report, 2014
Phrase queries are a key functionality of modern search engines. Beyond that, they increasingly serve as an important building block for applications such as entity-oriented search, text analytics,
and plagiarism detection. Processing phrase queries is costly, though, since positional information has to be kept in the index and all words, including stopwords, need to be considered. We consider
an augmented inverted index that indexes selected variable-length multi-word sequences in addition to single words. We study how arbitrary phrase queries can be processed efficiently on such an
augmented inverted index. We show that the underlying optimization problem is NP-hard in the general case and describe an exact exponential algorithm and an approximation algorithm to its solution.
Experiments on ClueWeb09 and The New York Times with different real-world query workloads examine the practical performance of our methods.
@techreport{AnandMeleBedathurBerberich2014, TITLE = {Phrase Query Optimization on Inverted Indexes}, AUTHOR = {Anand, Avishek and Mele, Ida and Bedathur, Srikanta and Berberich, Klaus}, LANGUAGE =
{eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2014-5-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, ABSTRACT = {Phrase queries are a key
functionality of modern search engines. Beyond that, they increasingly serve as an important building block for applications such as entity-oriented search, text analytics, and plagiarism detection.
Processing phrase queries is costly, though, since positional information has to be kept in the index and all words, including stopwords, need to be considered. We consider an augmented inverted
index that indexes selected variable-length multi-word sequences in addition to single words. We study how arbitrary phrase queries can be processed efficiently on such an augmented inverted index.
We show that the underlying optimization problem is NP-hard in the general case and describe an exact exponential algorithm and an approximation algorithm to its solution. Experiments on ClueWeb09
and The New York Times with different real-world query workloads examine the practical performance of our methods.}, TYPE = {Research Report}, }
%0 Report %A Anand, Avishek %A Mele, Ida %A Bedathur, Srikanta %A Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI
for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Phrase
Query Optimization on Inverted Indexes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-022A-3 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2014 %P 20 p. %X Phrase
queries are a key functionality of modern search engines. Beyond that, they increasingly serve as an important building block for applications such as entity-oriented search, text analytics, and
plagiarism detection. Processing phrase queries is costly, though, since positional information has to be kept in the index and all words, including stopwords, need to be considered. We consider an
augmented inverted index that indexes selected variable-length multi-word sequences in addition to single words. We study how arbitrary phrase queries can be processed efficiently on such an
augmented inverted index. We show that the underlying optimization problem is NP-hard in the general case and describe an exact exponential algorithm and an approximation algorithm to its solution.
Experiments on ClueWeb09 and The New York Times with different real-world query workloads examine the practical performance of our methods. %B Research Report %@ false
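To give the optimization a concrete shape, here is a deliberately simplified sketch: choose a segmentation of the query phrase into consecutive indexed word sequences that minimizes the total estimated processing cost (e.g. summed posting-list lengths). The report's actual problem is more general (and NP-hard); this dynamic program only illustrates the cost-based selection idea, and all names are made up.

def cheapest_segmentation(terms, cost):
    # Split the phrase 'terms' into consecutive indexed sequences, minimizing total cost.
    # 'cost' maps a tuple of words to an estimated processing cost (e.g. posting-list length);
    # sequences missing from the index have no entry and cannot be used.
    n = len(terms)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i] = cheapest cost to cover terms[:i]
    back = [None] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(i):
            seq = tuple(terms[j:i])
            if seq in cost and best[j] + cost[seq] < best[i]:
                best[i] = best[j] + cost[seq]
                back[i] = j
    if best[n] == INF:
        return None, INF
    segs, i = [], n
    while i > 0:                           # reconstruct the chosen sequences
        segs.append(tuple(terms[back[i]:i]))
        i = back[i]
    return list(reversed(segs)), best[n]

# Toy index: single words plus one indexed bigram.
cost = {("new",): 900, ("york",): 800, ("times",): 700, ("new", "york"): 50}
print(cheapest_segmentation(["new", "york", "times"], cost))
# -> ([('new', 'york'), ('times',)], 750.0)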
Learning Tuple Probabilities in Probabilistic Databases
M. Dylla and M. Theobald
Technical Report, 2014
Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning, which has been intensively studied in the subfield of
Statistical Relational Learning (SRL), but---so far---this is still an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability
values of base tuples in a PDB from query answers, the latter of which are represented as labeled lineage formulas. Specifically, we consider labels in the form of pairs, each consisting of a Boolean
lineage formula and a marginal probability that comes attached to the corresponding query answer. The resulting learning problem can be viewed as the inverse problem to confidence computations in
PDBs: given a set of labeled query answers, learn the probability values of the base tuples, such that the marginal probabilities of the query answers again yield the assigned probability labels.
We analyze the learning problem from a theoretical perspective, devise two optimization-based objectives, and provide an efficient algorithm (based on Stochastic Gradient Descent) for solving these
objectives. Finally, we conclude this work by an experimental evaluation on three real-world and one synthetic dataset, while competing with various techniques from SRL, reasoning in information
extraction, and optimization.
@techreport{Dylla-Learning2014, TITLE = {Learning Tuple Probabilities in Probabilistic Databases}, AUTHOR = {Dylla, Maximilian and Theobald, Martin}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER =
{MPI-I-2014-5-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, ABSTRACT = {Learning the parameters of complex probabilistic-relational models
from labeled training data is a standard technique in machine learning, which has been intensively studied in the subfield of Statistical Relational Learning (SRL), but---so far---this is still an
under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability values of base tuples in a PDB from query answers, the latter of which
are represented as labeled lineage formulas. Specifically, we consider labels in the form of pairs, each consisting of a Boolean lineage formula and a marginal probability that comes attached to the
corresponding query answer. The resulting learning problem can be viewed as the inverse problem to confidence computations in PDBs: given a set of labeled query answers, learn the probability values
of the base tuples, such that the marginal probabilities of the query answers again yield in the assigned probability labels. We analyze the learning problem from a theoretical perspective, devise
two optimization-based objectives, and provide an efficient algorithm (based on Stochastic Gradient Descent) for solving these objectives. Finally, we conclude this work by an experimental evaluation
on three real-world and one synthetic dataset, while competing with various techniques from SRL, reasoning in information extraction, and optimization.}, TYPE = {Research Report}, }
%0 Report %A Dylla, Maximilian %A Theobald, Martin %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck
Society %T Learning Tuple Probabilities in Probabilistic Databases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-8492-6 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D
2014 %P 51 p. %X Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning, which has been intensively studied in the
subfield of Statistical Relational Learning (SRL), but---so far---this is still an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the
probability values of base tuples in a PDB from query answers, the latter of which are represented as labeled lineage formulas. Specifically, we consider labels in the form of pairs, each consisting
of a Boolean lineage formula and a marginal probability that comes attached to the corresponding query answer. The resulting learning problem can be viewed as the inverse problem to confidence
computations in PDBs: given a set of labeled query answers, learn the probability values of the base tuples, such that the marginal probabilities of the query answers again yield in the assigned
probability labels. We analyze the learning problem from a theoretical perspective, devise two optimization-based objectives, and provide an efficient algorithm (based on Stochastic Gradient Descent)
for solving these objectives. Finally, we conclude this work by an experimental evaluation on three real-world and one synthetic dataset, while competing with various techniques from SRL, reasoning
in information extraction, and optimization. %B Research Report %@ false
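A toy sketch of the SGD idea, under a strong simplification: lineage is restricted to conjunctions of independent tuples, so that a query answer's marginal is just the product of its tuples' probabilities. The report handles arbitrary Boolean lineage formulas; this snippet only shows the shape of the gradient-descent loop, and all names are made up.

import random

def sgd_learn_tuple_probs(labels, tuples, lr=0.1, epochs=500, seed=0):
    # 'labels' is a list of (conjunction, target) pairs: 'conjunction' lists the tuple ids
    # whose conjunction forms the lineage, 'target' is the labeled marginal probability.
    rnd = random.Random(seed)
    p = {t: rnd.uniform(0.2, 0.8) for t in tuples}       # initial probability per tuple
    for _ in range(epochs):
        conj, target = rnd.choice(labels)
        prod = 1.0
        for t in conj:
            prod *= p[t]                                  # marginal of a conjunction
        err = prod - target                               # squared-loss residual
        for t in conj:
            other = prod / p[t] if p[t] > 0 else 0.0      # d(prod)/d(p[t])
            p[t] -= lr * 2 * err * other
            p[t] = min(max(p[t], 1e-3), 1.0 - 1e-3)       # keep probabilities in (0, 1)
    return p

# Example: two labeled answers over tuples a, b with marginals 0.6 and 0.3.
print(sgd_learn_tuple_probs([(["a"], 0.6), (["a", "b"], 0.3)], ["a", "b"]))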
Obtaining Finite Local Theory Axiomatizations via Saturation
M. Horbach and V. Sofronie-Stokkermans
Technical Report, 2014
In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a
closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense
have also a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification.
@techreport{atr093, TITLE = {Obtaining Finite Local Theory Axiomatizations via Saturation}, AUTHOR = {Horbach, Matthias and Sofronie-Stokkermans, Viorica}, LANGUAGE = {eng}, ISSN = {1860-9821},
NUMBER = {ATR93}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2014}, ABSTRACT = {In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is
possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that
combinations of extensions of theories which are local in this extended sense have also a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized
decidability and complexity results for many (combinations of) theories important in verification.}, TYPE = {AVACS Technical Report}, VOLUME = {93}, }
%0 Report %A Horbach, Matthias %A Sofronie-Stokkermans, Viorica %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T
Obtaining Finite Local Theory Axiomatizations via Saturation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-C90C-F %Y SFB/TR 14 AVACS %D 2014 %P 26 p. %X In this paper we study theory
combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground
terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense have also a locality property
and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification. %B AVACS Technical
Report %N 93 %@ false %U http://www.avacs.org/Publikationen/Open/avacs_technical_report_093.pdf
Local High-order Regularization on Data Manifolds
K. I. Kim, J. Tompkin and C. Theobalt
Technical Report, 2014
@techreport{KimTR2014, TITLE = {Local High-order Regularization on Data Manifolds}, AUTHOR = {Kim, Kwang In and Tompkin, James and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER =
{MPI-I-2014-4-001}, INSTITUTION = {Max-Planck Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, TYPE = {Research Report}, }
%0 Report %A Kim, Kwang In %A Tompkin, James %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer
Graphics, MPI for Informatics, Max Planck Society %T Local High-order Regularization on Data Manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-B210-7 %Y Max-Planck Institut für
Informatik %C Saarbrücken %D 2014 %P 12 p. %B Research Report %@ false
Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera
S. Sridhar, A. Oulasvirta and C. Theobalt
Technical Report, 2014
@techreport{Sridhar2014, TITLE = {Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera}, AUTHOR = {Sridhar, Srinath and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE =
{eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2014-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, TYPE = {Research Report}, }
%0 Report %A Sridhar, Srinath %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0024-B5B8-8 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2014 %P 14 p. %B Research Report %@ false
Hierarchic Superposition with Weak Abstraction
P. Baumgartner and U. Waldmann
Technical Report, 2013
Many applications of automated deduction require reasoning in first-order logic modulo background theories, in particular some form of integer arithmetic. A major unsolved research challenge is to
design theorem provers that are "reasonably complete" even in the presence of free function symbols ranging into a background theory sort. The hierarchic superposition calculus of Bachmair,
Ganzinger, and Waldmann already supports such symbols, but, as we demonstrate, not optimally. This paper aims to rectify the situation by introducing a novel form of clause abstraction, a core
component in the hierarchic superposition calculus for transforming clauses into a form needed for internal operation. We argue for the benefits of the resulting calculus and provide a new
completeness result for the fragment where all background-sorted terms are ground.
@techreport{Waldmann2013, TITLE = {Hierarchic Superposition with Weak Abstraction}, AUTHOR = {Baumgartner, Peter and Waldmann, Uwe}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER =
{MPI-I-2014-RG1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, ABSTRACT = {Many applications of automated deduction require reasoning in
first-order logic modulo background theories, in particular some form of integer arithmetic. A major unsolved research challenge is to design theorem provers that are "reasonably complete" even in
the presence of free function symbols ranging into a background theory sort. The hierarchic superposition calculus of Bachmair, Ganzinger, and Waldmann already supports such symbols, but, as we
demonstrate, not optimally. This paper aims to rectify the situation by introducing a novel form of clause abstraction, a core component in the hierarchic superposition calculus for transforming
clauses into a form needed for internal operation. We argue for the benefits of the resulting calculus and provide a new completeness result for the fragment where all background-sorted terms are
ground.}, TYPE = {Research Report}, }
%0 Report %A Baumgartner, Peter %A Waldmann, Uwe %+ External Organizations Automation of Logic, MPI for Informatics, Max Planck Society %T Hierarchic Superposition with Weak Abstraction : %G eng %U
http://hdl.handle.net/11858/00-001M-0000-0024-03A8-0 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2013 %P 45 p. %X Many applications of automated deduction require reasoning in
first-order logic modulo background theories, in particular some form of integer arithmetic. A major unsolved research challenge is to design theorem provers that are "reasonably complete" even in
the presence of free function symbols ranging into a background theory sort. The hierarchic superposition calculus of Bachmair, Ganzinger, and Waldmann already supports such symbols, but, as we
demonstrate, not optimally. This paper aims to rectify the situation by introducing a novel form of clause abstraction, a core component in the hierarchic superposition calculus for transforming
clauses into a form needed for internal operation. We argue for the benefits of the resulting calculus and provide a new completeness result for the fragment where all background-sorted terms are
ground. %B Research Report %@ false
New Results for Non-preemptive Speed Scaling
C.-C. Huang and S. Ott
Technical Report, 2013
We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, needs to be
executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed, and $\alpha > 1$ is a constant. The total energy consumption
is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption. The preemptive version of the problem, along with its many variants, has been extensively
studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and allows a constant factor approximation. Up until now, the
(general) complexity of this problem is unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a
quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$. The second contribution of this
work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$ approximation was known. In addition, we show that two other special cases of this
problem allow fully polynomial-time approximation schemes (FPTASs).
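To make the energy model above concrete, here is a minimal sketch (not from the report; the toy job and alpha = 2 are illustrative assumptions) showing why, with power P(s) = s^alpha and alpha > 1, stretching a job over its whole feasible window is never worse than running it faster:

# Minimal illustration of the speed-scaling energy model described above.
# Assumptions (not from the report): a single job of volume v executed
# non-preemptively at constant speed s over [release, deadline].

def energy(volume, speed, alpha=2.0):
    """Energy = power * time, with power P(s) = s**alpha and time = volume / speed."""
    duration = volume / speed
    return (speed ** alpha) * duration

volume, release, deadline = 12.0, 0.0, 4.0
window = deadline - release

# Running as slowly as the deadline allows minimizes energy (convexity of s**alpha):
slow = volume / window          # speed 3.0, finishes exactly at the deadline
fast = 2 * slow                 # speed 6.0, finishes halfway through the window

print(energy(volume, slow))     # 36.0  (3^2 * 4)
print(energy(volume, fast))     # 72.0  (6^2 * 2)

The hard part addressed by the report is the non-preemptive scheduling of many such jobs with overlapping windows, not the single-job case sketched here.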
@techreport{HuangOtt2013, TITLE = {New Results for Non-preemptive Speed Scaling}, AUTHOR = {Huang, Chien-Chung and Ott, Sebastian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2013-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, ABSTRACT = {We consider the speed scaling problem introduced in the seminal paper of Yao et al.. In
this problem, a number of jobs, each with its own processing volume, release time, and deadline needs to be executed on a speed-scalable processor. The power consumption of this processor is $P(s) =
s^\alpha$, where $s$ is the processing speed, and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the
energy consumption. The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the
problem, except that it is strongly NP-hard and allows a constant factor approximation. Up until now, the (general) complexity of this problem is unknown. In the present paper, we study an important
special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not
APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$. The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\
alpha$ approximation was known. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes (FPTASs).}, TYPE = {Research Reports}, }
%0 Report %A Huang, Chien-Chung %A Ott, Sebastian %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T New Results for Non-preemptive Speed Scaling : %G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03BF-D %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2013 %P 32 p. %X We consider the speed scaling problem introduced in the
seminal paper of Yao et al.. In this problem, a number of jobs, each with its own processing volume, release time, and deadline needs to be executed on a speed-scalable processor. The power
consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed, and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to
process all jobs while minimizing the energy consumption. The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known
about the non-preemptive version of the problem, except that it is strongly NP-hard and allows a constant factor approximation. Up until now, the (general) complexity of this problem is unknown. In
the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing
that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$. The second contribution of this work is a polynomial-time algorithm for the special case of
equal-volume jobs, where previously only a $2^\alpha$ approximation was known. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes
(FPTASs). %B Research Reports %@ false
A Distributed Algorithm for Large-scale Generalized Matching
F. Makari, B. Awerbuch, R. Gemulla, R. Khandekar, J. Mestre and M. Sozio
Technical Report, 2013
Generalized matching problems arise in a number of applications, including computational advertising, recommender systems, and trade markets. Consider, for example, the problem of recommending
multimedia items (e.g., DVDs) to users such that (1) users are recommended items that they are likely to be interested in, (2) every user gets neither too few nor too many recommendations, and (3)
only items available in stock are recommended to users. State-of-the-art matching algorithms fail at coping with large real-world instances, which may involve millions of users and items. We propose
the first distributed algorithm for computing near-optimal solutions to large-scale generalized matching problems like the one above. Our algorithm is designed to run on a small cluster of commodity
nodes (or in a MapReduce environment), has strong approximation guarantees, and requires only a poly-logarithmic number of passes over the input. In particular, we propose a novel distributed
algorithm to approximately solve mixed packing-covering linear programs, which include but are not limited to generalized matching problems. Experiments on real-world and synthetic data suggest that
our algorithm scales to very large problem sizes and can be orders of magnitude faster than alternative approaches.
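As an illustration of the kind of instance the abstract describes, the following sketch writes a tiny recommendation task as a mixed packing-covering linear program and solves it with an off-the-shelf LP solver. The instance, the numbers, and the use of scipy are assumptions for illustration only; the report's contribution is a distributed approximation algorithm for such programs, not this exact-solver formulation.

# Toy LP formulation of the generalized matching / recommendation problem
# sketched in the abstract above. The instance and the use of scipy's LP
# solver are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

n_users, n_items = 2, 3
w = np.array([[0.9, 0.4, 0.1],          # relevance of item i to user u
              [0.2, 0.8, 0.7]])
lower = np.array([1, 1])                # every user gets at least one item (covering)
upper = np.array([2, 2])                # ... and at most two (packing)
stock = np.array([1, 1, 2])             # items in stock (packing)

def var(u, i):                          # flatten (user, item) -> variable index
    return u * n_items + i

A_ub, b_ub = [], []
for u in range(n_users):                # per-user upper and lower bounds
    row = np.zeros(n_users * n_items)
    row[[var(u, i) for i in range(n_items)]] = 1.0
    A_ub.append(row);  b_ub.append(upper[u])      #  sum_i x[u,i] <= upper[u]
    A_ub.append(-row); b_ub.append(-lower[u])     # -sum_i x[u,i] <= -lower[u]
for i in range(n_items):                # per-item stock constraints
    row = np.zeros(n_users * n_items)
    row[[var(u, i) for u in range(n_users)]] = 1.0
    A_ub.append(row);  b_ub.append(stock[i])

res = linprog(c=-w.ravel(), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, 1.0)] * (n_users * n_items))
print(res.x.reshape(n_users, n_items))  # fractional recommendation matrix

The report's algorithm approximates the optimum of such programs in a distributed setting with only a poly-logarithmic number of passes, which a centralized exact solver like the one above cannot do at the problem sizes mentioned.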
@techreport{MakariAwerbuchGemullaKhandekarMestreSozio2013, TITLE = {A Distributed Algorithm for Large-scale Generalized Matching}, AUTHOR = {Makari, Faraz and Awerbuch, Baruch and Gemulla, Rainer and
Khandekar, Rohit and Mestre, Julian and Sozio, Mauro}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2013-5-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\
"u}cken}, YEAR = {2013}, ABSTRACT = {Generalized matching problems arise in a number of applications, including computational advertising, recommender systems, and trade markets. Consider, for
example, the problem of recommending multimedia items (e.g., DVDs) to users such that (1) users are recommended items that they are likely to be interested in, (2) every user gets neither too few nor
too many recommendations, and (3) only items available in stock are recommended to users. State-of-the-art matching algorithms fail at coping with large real-world instances, which may involve
millions of users and items. We propose the first distributed algorithm for computing near-optimal solutions to large-scale generalized matching problems like the one above. Our algorithm is designed
to run on a small cluster of commodity nodes (or in a MapReduce environment), has strong approximation guarantees, and requires only a poly-logarithmic number of passes over the input. In particular,
we propose a novel distributed algorithm to approximately solve mixed packing-covering linear programs, which include but are not limited to generalized matching problems. Experiments on real-world
and synthetic data suggest that our algorithm scales to very large problem sizes and can be orders of magnitude faster than alternative approaches.}, TYPE = {Research Reports}, }
%0 Report %A Makari, Faraz %A Awerbuch, Baruch %A Gemulla, Rainer %A Khandekar, Rohit %A Mestre, Julian %A Sozio, Mauro %+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T A Distributed Algorithm for Large-scale Generalized Matching : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-0024-03B4-3 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2013 %P 39 p. %X Generalized matching problems arise in a number of applications,
including computational advertising, recommender systems, and trade markets. Consider, for example, the problem of recommending multimedia items (e.g., DVDs) to users such that (1) users are
recommended items that they are likely to be interested in, (2) every user gets neither too few nor too many recommendations, and (3) only items available in stock are recommended to users.
State-of-the-art matching algorithms fail at coping with large real-world instances, which may involve millions of users and items. We propose the first distributed algorithm for computing
near-optimal solutions to large-scale generalized matching problems like the one above. Our algorithm is designed to run on a small cluster of commodity nodes (or in a MapReduce environment), has
strong approximation guarantees, and requires only a poly-logarithmic number of passes over the input. In particular, we propose a novel distributed algorithm to approximately solve mixed
packing-covering linear programs, which include but are not limited to generalized matching problems. Experiments on real-world and synthetic data suggest that our algorithm scales to very large
problem sizes and can be orders of magnitude faster than alternative approaches. %B Research Reports %@ false
Building and Maintaining Halls of Fame Over a Database
F. Alvanaki, S. Michel and A. Stupar
Technical Report, 2012
Halls of Fame are fascinating constructs. They represent the elite of an often very large number of entities: persons, companies, products, countries, etc. Beyond their practical use as static rankings, changes to them are particularly interesting: for decision-making processes, as input to common media or novel narrative science applications, or simply consumed by users. In this work, we aim at detecting events that can be characterized by changes to a Hall of Fame ranking in an automated way. We describe how the schema and data of a database can be used to generate Halls of Fame. In this database scenario, by Hall of Fame we refer to distinguished tuples: entities whose characteristics set them apart from the majority. We define every Hall of Fame as one specific instance of an SQL query, such that a change in its result is considered a noteworthy event. Identified changes (i.e., events) are ranked using lexicographic tradeoffs over event and query properties and presented to users or fed into higher-level applications. We have implemented a full-fledged prototype system that uses either database triggers or a Java-based middleware for event identification. We report on an experimental evaluation using a real-world dataset of basketball statistics.
@techreport{AlvanakiMichelStupar2012, TITLE = {Building and Maintaining Halls of Fame Over a Database}, AUTHOR = {Alvanaki, Foteini and Michel, Sebastian and Stupar, Aleksandar}, LANGUAGE = {eng},
ISSN = {0946-011X}, NUMBER = {MPI-I-2012-5-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, ABSTRACT = {Halls of Fame are fascinating constructs. They represent the elite of an often very large number of entities: persons, companies, products, countries, etc. Beyond their practical use as static rankings, changes to them are particularly interesting: for decision-making processes, as input to common media or novel narrative science applications, or simply consumed by users. In this work, we aim at detecting events that can be characterized by changes to a Hall of Fame ranking in an automated way. We describe how the schema and data of a database can be used to generate Halls of Fame. In this database scenario, by Hall of Fame we refer to distinguished tuples: entities whose characteristics set them apart from the majority. We define every Hall of Fame as one specific instance of an SQL query, such that a change in its result is considered a noteworthy event. Identified changes (i.e., events) are ranked using lexicographic tradeoffs over event and query properties and presented to users or fed into higher-level applications. We have implemented a full-fledged prototype system that uses either database triggers or a Java-based middleware for event identification. We report on an experimental evaluation using a real-world dataset of basketball statistics.}, TYPE = {Research Reports}, }
%0 Report %A Alvanaki, Foteini %A Michel, Sebastian %A Stupar, Aleksandar %+ Cluster of Excellence Multimodal Computing and Interaction Databases and Information Systems, MPI for Informatics, Max
Planck Society Cluster of Excellence Multimodal Computing and Interaction %T Building and Maintaining Halls of Fame Over a Database : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-03E9-D %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2012 %X Halls of Fame are fascinating constructs. They represent the elite of an often very large number of entities: persons, companies, products, countries, etc. Beyond their practical use as static rankings, changes to them are particularly interesting: for decision-making processes, as input to common media or novel narrative science applications, or simply consumed by users. In this work, we aim at detecting events that can be characterized by changes to a Hall of Fame ranking in an automated way. We describe how the schema and data of a database can be used to generate Halls of Fame. In this database scenario, by Hall of Fame we refer to distinguished tuples: entities whose characteristics set them apart from the majority. We define every Hall of Fame as one specific instance of an SQL query, such that a change in its result is considered a noteworthy event. Identified changes (i.e., events) are ranked using lexicographic tradeoffs over event and query properties and presented to users or fed into higher-level applications. We have implemented a full-fledged prototype system that uses either database triggers or a Java-based middleware for event identification. We report on an experimental evaluation using a real-world dataset of basketball statistics. %B Research Reports %@ false
Computing n-Gram Statistics in MapReduce
K. Berberich and S. Bedathur
Technical Report, 2012
@techreport{BerberichBedathur2012, TITLE = {Computing n--Gram Statistics in {MapReduce}}, AUTHOR = {Berberich, Klaus and Bedathur, Srikanta}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER =
{MPI-I-2012-5-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, TYPE = {Research Report}, }
%0 Report %A Berberich, Klaus %A Bedathur, Srikanta %+ Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Computing n-Gram Statistics in MapReduce :
%G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-0416-A %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2012 %P 39 p. %B Research Report %@ false
Top-k Query Processing in Probabilistic Databases with Non-materialized Views
M. Dylla, I. Miliaraki and M. Theobald
Technical Report, 2012
@techreport{DyllaTopk2012, TITLE = {Top-k Query Processing in Probabilistic Databases with Non-materialized Views}, AUTHOR = {Dylla, Maximilian and Miliaraki, Iris and Theobald, Martin}, LANGUAGE =
{eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2012-5-002}, LOCALID = {Local-ID: 62EC1C9C96B8EFF4C1257B560029F18C-DyllaTopk2012}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr
{\"u}cken}, YEAR = {2012}, DATE = {2012}, TYPE = {Research Report}, }
%0 Report %A Dylla, Maximilian %A Miliaraki, Iris %A Theobald, Martin %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Top-k Query Processing in Probabilistic Databases with Non-materialized Views : %G eng %U
http://hdl.handle.net/11858/00-001M-0000-0014-B02F-2 %F OTHER: Local-ID: 62EC1C9C96B8EFF4C1257B560029F18C-DyllaTopk2012 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2012 %B
Research Report %@ false
Automatic Generation of Invariants for Circular Derivations in SUP(LA) 1
A. Fietzke, E. Kruglov and C. Weidenbach
Technical Report, 2012
The hierarchic combination of linear arithmetic and first-order logic with free function symbols, FOL(LA), results in a strictly more expressive logic than its two parts. The SUP(LA) calculus can be
turned into a decision procedure for interesting fragments of FOL(LA). For example, reachability problems for timed automata can be decided by SUP(LA) using an appropriate translation into FOL(LA).
In this paper, we extend the SUP(LA) calculus with an additional inference rule, automatically generating inductive invariants from partial SUP(LA) derivations. The rule enables decidability of more
expressive fragments, including reachability for timed automata with unbounded integer variables. We have implemented the rule in the SPASS(LA) theorem prover with promising results, showing that it
can considerably speed up proof search and enable termination of saturation for practically relevant problems.
@techreport{FietzkeKruglovWeidenbach2012, TITLE = {Automatic Generation of Invariants for Circular Derivations in {SUP(LA)} 1}, AUTHOR = {Fietzke, Arnaud and Kruglov, Evgeny and Weidenbach,
Christoph}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2012-RG1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, ABSTRACT = {The
hierarchic combination of linear arithmetic and firstorder logic with free function symbols, FOL(LA), results in a strictly more expressive logic than its two parts. The SUP(LA) calculus can be
turned into a decision procedure for interesting fragments of FOL(LA). For example, reachability problems for timed automata can be decided by SUP(LA) using an appropriate translation into FOL(LA).
In this paper, we extend the SUP(LA) calculus with an additional inference rule, automatically generating inductive invariants from partial SUP(LA) derivations. The rule enables decidability of more
expressive fragments, including reachability for timed automata with unbounded integer variables. We have implemented the rule in the SPASS(LA) theorem prover with promising results, showing that it
can considerably speed up proof search and enable termination of saturation for practically relevant problems.}, TYPE = {Research Report}, }
%0 Report %A Fietzke, Arnaud %A Kruglov, Evgeny %A Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society %T Automatic Generation of Invariants for Circular Derivations in SUP(LA) 1 : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0024-03CF-9 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2012 %P 26 p. %X The hierarchic combination of linear arithmetic and firstorder logic with free function
symbols, FOL(LA), results in a strictly more expressive logic than its two parts. The SUP(LA) calculus can be turned into a decision procedure for interesting fragments of FOL(LA). For example,
reachability problems for timed automata can be decided by SUP(LA) using an appropriate translation into FOL(LA). In this paper, we extend the SUP(LA) calculus with an additional inference rule,
automatically generating inductive invariants from partial SUP(LA) derivations. The rule enables decidability of more expressive fragments, including reachability for timed automata with unbounded
integer variables. We have implemented the rule in the SPASS(LA) theorem prover with promising results, showing that it can considerably speed up proof search and enable termination of saturation for
practically relevant problems. %B Research Report %@ false
Symmetry Detection in Large Scale City Scans
J. Kerber, M. Wand, M. Bokeloh and H.-P. Seidel
Technical Report, 2012
In this report we present a novel method for detecting partial symmetries in very large point clouds of 3D city scans. Unlike previous work, which was limited to data sets of a few hundred megabytes
maximum, our method scales to very large scenes. We map the detection problem to a nearest-neighbor search in a low-dimensional feature space, followed by a cascade of tests for geometric clustering
of potential matches. Our algorithm robustly handles noisy real-world scanner data, obtaining a recognition performance comparable to state-of-the-art methods. In practice, it scales linearly with
the scene size and achieves a high absolute throughput, processing half a terabyte of raw scanner data overnight on a dual-socket commodity PC.
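A rough sketch of the central reduction (symmetry detection as nearest-neighbor search in a low-dimensional feature space) is given below. The synthetic point cloud, the covariance-eigenvalue descriptor, and the match threshold are illustrative assumptions; the report's method adds the geometric clustering and verification cascade and is engineered for terabyte-scale scans.

# Sketch of the "nearest-neighbor search in a low-dimensional feature space"
# step from the abstract. The descriptor (sorted local-covariance eigenvalues)
# and the synthetic point cloud are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.normal(size=(2000, 3))                 # stand-in for a city scan

geo_tree = cKDTree(points)                          # neighborhoods in 3D space

def descriptor(p, radius=0.3):
    """Rotation/translation-invariant local feature: sorted covariance eigenvalues."""
    idx = geo_tree.query_ball_point(p, radius)
    nbrs = points[idx] - points[idx].mean(axis=0)
    return np.sort(np.linalg.eigvalsh(nbrs.T @ nbrs / max(len(idx), 1)))

features = np.array([descriptor(p) for p in points])

# Candidate symmetric correspondences = points that look alike in feature space.
feat_tree = cKDTree(features)
dists, matches = feat_tree.query(features, k=2)     # k=2: skip the point itself
candidates = [(i, int(matches[i, 1])) for i in range(len(points)) if dists[i, 1] < 1e-3]
print(len(candidates), "candidate matches to verify by geometric clustering")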
@techreport{KerberBokelohWandSeidel2012, TITLE = {Symmetry Detection in Large Scale City Scans}, AUTHOR = {Kerber, Jens and Wand, Michael and Bokeloh, Martin and Seidel, Hans-Peter}, LANGUAGE =
{eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2012-4-001}, YEAR = {2012}, ABSTRACT = {In this report we present a novel method for detecting partial symmetries in very large point clouds of 3D city
scans. Unlike previous work, which was limited to data sets of a few hundred megabytes maximum, our method scales to very large scenes. We map the detection problem to a nearestneighbor search in a
low-dimensional feature space, followed by a cascade of tests for geometric clustering of potential matches. Our algorithm robustly handles noisy real-world scanner data, obtaining a recognition
performance comparable to state-of-the-art methods. In practice, it scales linearly with the scene size and achieves a high absolute throughput, processing half a terabyte of raw scanner data over
night on a dual socket commodity PC.}, TYPE = {Research Report}, }
%0 Report %A Kerber, Jens %A Wand, Michael %A Bokeloh, Martin %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck
Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Symmetry Detection in Large Scale City Scans : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0024-0427-4 %D 2012 %P 32 p. %X In this report we present a novel method for detecting partial symmetries in very large point clouds of 3D city scans. Unlike previous work, which was
limited to data sets of a few hundred megabytes maximum, our method scales to very large scenes. We map the detection problem to a nearestneighbor search in a low-dimensional feature space, followed
by a cascade of tests for geometric clustering of potential matches. Our algorithm robustly handles noisy real-world scanner data, obtaining a recognition performance comparable to state-of-the-art
methods. In practice, it scales linearly with the scene size and achieves a high absolute throughput, processing half a terabyte of raw scanner data over night on a dual socket commodity PC. %B
Research Report %@ false
MDL4BMF: Minimum Description Length for Boolean Matrix Factorization
P. Miettinen and J. Vreeken
Technical Report, 2012
Matrix factorizations—where a given data matrix is approximated by a product of two or more factor matrices—are powerful data mining tools. Among other tasks, matrix factorizations are often used
to separate global structure from noise. This, however, requires solving the ‘model order selection problem’ of determining where fine-grained structure stops, and noise starts, i.e., what is the
proper size of the factor matrices. Boolean matrix factorization (BMF)—where data, factors, and matrix product are Boolean—has received increased attention from the data mining community in recent
years. The technique has desirable properties, such as high interpretability and natural sparsity. However, so far no method for selecting the correct model order for BMF has been available. In this
paper we propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not
require a likelihood function, is fast, and, as experiments show, is highly accurate. We formulate the description length function for BMF in general—making it applicable for any BMF algorithm. We
discuss how to construct an appropriate encoding, starting from a simple and intuitive approach, we arrive at a highly efficient data-to-model based encoding for BMF. We extend an existing algorithm
for BMF to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its behavior.
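The following toy sketch illustrates the MDL-based model-order selection described above: pick the Boolean rank k that minimizes L(model) + L(data | model). The naive encoding used here (one bit per factor cell plus explicit coordinates for error cells) is an assumption for illustration and is much cruder than the data-to-model encoding developed in the report.

# Toy MDL-style model-order selection for Boolean matrix factorization (BMF).
# The encoding below is a deliberately naive stand-in; it only illustrates the
# L(model) + L(data | model) trade-off used to choose the Boolean rank k.
import math
import numpy as np

def boolean_product(A, B):
    return (A.astype(int) @ B.astype(int)) > 0          # Boolean matrix product

def description_length(D, A, B):
    n, m = D.shape
    k = A.shape[1]
    errors = int(np.sum(boolean_product(A, B) != D))
    model_bits = n * k + k * m                           # factors, 1 bit per cell
    error_bits = errors * math.ceil(math.log2(n * m))    # coordinates of error cells
    return model_bits + error_bits

rng = np.random.default_rng(1)
A_true = rng.random((40, 3)) < 0.3                       # planted rank-3 Boolean structure
B_true = rng.random((3, 60)) < 0.3
D = boolean_product(A_true, B_true)

# Score candidate factorizations of increasing rank; here we cheat and reuse
# the planted factors (a real system would run a BMF algorithm per rank).
for k in (1, 2, 3):
    A, B = A_true[:, :k], B_true[:k, :]
    print(k, description_length(D, A, B))                # typically minimized at k = 3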
@techreport{MiettinenVreeken, TITLE = {{MDL4BMF}: Minimum Description Length for Boolean Matrix Factorization}, AUTHOR = {Miettinen, Pauli and Vreeken, Jilles}, LANGUAGE = {eng}, ISSN = {0946-011X},
NUMBER = {MPI-I-2012-5-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, ABSTRACT = {Matrix factorizations---where a given data matrix is
approximated by a prod- uct of two or more factor matrices---are powerful data mining tools. Among other tasks, matrix factorizations are often used to separate global structure from noise. This,
however, requires solving the {\textquoteleft}model order selection problem{\textquoteright} of determining where fine-grained structure stops, and noise starts, i.e., what is the proper size of the
factor matrices. Boolean matrix factorization (BMF)---where data, factors, and matrix product are Boolean---has received increased attention from the data mining community in recent years. The
technique has desirable properties, such as high interpretability and natural sparsity. However, so far no method for selecting the correct model order for BMF has been available. In this paper we
propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a
likelihood function, is fast, and, as experiments show, is highly accurate. We formulate the description length function for BMF in general---making it applicable for any BMF algorithm. We discuss
how to construct an appropriate encoding, starting from a simple and intuitive approach, we arrive at a highly efficient data-to-model based encoding for BMF. We extend an existing algorithm for BMF
to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its behavior.}, TYPE = {Research
Report}, }
%0 Report %A Miettinen, Pauli %A Vreeken, Jilles %+ Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T MDL4BMF: Minimum Description Length for
Boolean Matrix Factorization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-0422-E %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2012 %P 48 p. %X Matrix factorizations—where a given data matrix is approximated by a product of two or more factor matrices—are powerful data mining tools. Among other tasks, matrix factorizations are often used to separate global structure from noise. This, however, requires solving the ‘model order selection problem’ of determining where fine-grained structure stops, and noise starts, i.e., what is the proper size of the factor matrices. Boolean matrix factorization (BMF)—where data, factors, and matrix product are Boolean—has received increased attention from the data mining community in recent years. The technique has desirable properties, such as high interpretability and natural sparsity. However, so far no method for selecting the correct model order for BMF has been available. In this paper we propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a likelihood function, is fast, and, as experiments show, is highly accurate. We formulate the description length function for BMF in general—making it applicable
for any BMF algorithm. We discuss how to construct an appropriate encoding, starting from a simple and intuitive approach, we arrive at a highly efficient data-to-model based encoding for BMF. We
extend an existing algorithm for BMF to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its
behavior. %B Research Report %@ false
Labelled Superposition for PLTL
M. Suda and C. Weidenbach
Technical Report, 2012
This paper introduces a new decision procedure for PLTL based on labelled superposition. Its main idea is to treat temporal formulas as infinite sets of purely propositional clauses over an extended
signature. These infinite sets are then represented by finite sets of labelled propositional clauses. The new representation enables the replacement of the complex temporal resolution rule, suggested
by existing resolution calculi for PLTL, by a fine grained repetition check of finitely saturated labelled clause sets followed by a simple inference. The completeness argument is based on the
standard model building idea from superposition. It inherently justifies ordering restrictions, redundancy elimination and effective partial model building. The latter can be directly used to
effectively generate counterexamples of non-valid PLTL conjectures out of saturated labelled clause sets in a straightforward way.
@techreport{SudaWeidenbachLPAR2012, TITLE = {Labelled Superposition for {PLTL}}, AUTHOR = {Suda, Martin and Weidenbach, Christoph}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER =
{MPI-I-2012-RG1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, ABSTRACT = {This paper introduces a new decision procedure for PLTL based on
labelled superposition. Its main idea is to treat temporal formulas as infinite sets of purely propositional clauses over an extended signature. These infinite sets are then represented by finite
sets of labelled propositional clauses. The new representation enables the replacement of the complex temporal resolution rule, suggested by existing resolution calculi for PLTL, by a fine grained
repetition check of finitely saturated labelled clause sets followed by a simple inference. The completeness argument is based on the standard model building idea from superposition. It inherently
justifies ordering restrictions, redundancy elimination and effective partial model building. The latter can be directly used to effectively generate counterexamples of non-valid PLTL conjectures out
of saturated labelled clause sets in a straightforward way.}, TYPE = {Research Reports}, }
%0 Report %A Suda, Martin %A Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Labelled
Superposition for PLTL : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-03DC-B %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2012 %P 42 p. %X This paper introduces a new
decision procedure for PLTL based on labelled superposition. Its main idea is to treat temporal formulas as infinite sets of purely propositional clauses over an extended signature. These infinite
sets are then represented by finite sets of labelled propositional clauses. The new representation enables the replacement of the complex temporal resolution rule, suggested by existing resolution
calculi for PLTL, by a fine grained repetition check of finitely saturated labelled clause sets followed by a simple inference. The completeness argument is based on the standard model building idea
from superposition. It inherently justifies ordering restrictions, redundancy elimination and effective partial model building. The latter can be directly used to effectively generate counterexamples
of non-valid PLTL conjectures out of saturated labelled clause sets in a straightforward way. %B Research Reports %@ false
Temporal Index Sharding for Space-time Efficiency in Archive Search
A. Anand, S. Bedathur, K. Berberich and R. Schenkel
Technical Report, 2011
Time-travel queries that couple temporal constraints with keyword queries are useful in searching large-scale archives of time-evolving content such as the Web, document collections, wikis, and so
on. Typical approaches for efficient evaluation of these queries involve \emph{slicing} along the time-axis either the entire collection~\cite{253349}, or individual index lists~\cite
{kberberi:sigir2007}. Both these methods are not satisfactory since they sacrifice compactness of index for processing efficiency making them either too big or, otherwise, too slow. We present a
novel index organization scheme that \emph{shards} the index with \emph{zero increase in index size}, still minimizing the cost of reading index entries during query processing. Based on the
optimal sharding thus obtained, we develop practically efficient sharding that takes into account the different costs of random and sequential accesses. Our algorithm merges shards from the optimal
solution carefully to allow for few extra sequential accesses while gaining significantly by reducing the random accesses. Finally, we empirically establish the effectiveness of our novel sharding
scheme via detailed experiments over the edit history of the English version of Wikipedia between 2001-2005 ($\approx$ 700 GB) and an archive of the UK governmental web sites ($\approx$ 400 GB). Our
results demonstrate the feasibility of faster time-travel query processing with no space overhead.
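A deliberately simplified sketch of the sharding idea follows: every posting is placed in exactly one shard (so the index does not grow), and a time-travel query reads only the shards that can contain postings valid at the query time. The equal-width shard boundaries below are an illustrative assumption; the report instead derives optimal and practically efficient boundaries from the costs of random and sequential accesses.

# Very simplified illustration of the sharding idea from the abstract: each
# posting (doc, [begin, end) validity) goes into exactly one shard, and a
# time-travel query skips shards that cannot hold postings valid at time t.
from bisect import bisect_right

postings = [("d1", 2001, 2003), ("d2", 2001, 2006), ("d3", 2002, 2004),
            ("d4", 2004, 2006), ("d5", 2005, 2006)]          # (doc, begin, end)

boundaries = [2001, 2003, 2005]                # shard s covers begins in [boundaries[s], next)

def shard_of(begin):                           # each posting goes to exactly one shard
    return bisect_right(boundaries, begin) - 1

shards = {s: [] for s in range(len(boundaries))}
for doc, begin, end in postings:
    shards[shard_of(begin)].append((doc, begin, end))

def time_travel_query(t):
    """Read every shard that may hold a posting valid at time t, then filter."""
    hits = []
    for s, plist in shards.items():
        if boundaries[s] <= t:                 # postings valid at t cannot start after t
            hits += [doc for doc, b, e in plist if b <= t < e]
    return hits

print(time_travel_query(2005))                 # e.g. ['d2', 'd4', 'd5']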
@techreport{Bedathur2011, TITLE = {Temporal Index Sharding for Space-time Efficiency in Archive Search}, AUTHOR = {Anand, Avishek and Bedathur, Srikanta and Berberich, Klaus and Schenkel, Ralf},
LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2011-5-001}, INSTITUTION = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {Time-travel
queries that couple temporal constraints with keyword queries are useful in searching large-scale archives of time-evolving content such as the Web, document collections, wikis, and so on. Typical
approaches for efficient evaluation of these queries involve \emph{slicing} along the time-axis either the entire collection~\cite{253349}, or individual index lists~\cite{kberberi:sigir2007}. Both
these methods are not satisfactory since they sacrifice compactness of index for processing efficiency making them either too big or, otherwise, too slow. We present a novel index organization scheme
that \emph{shards} the index with \emph{zero increase in index size}, still minimizing the cost of reading index index entries during query processing. Based on the optimal sharding thus obtained, we
develop practically efficient sharding that takes into account the different costs of random and sequential accesses. Our algorithm merges shards from the optimal solution carefully to allow for few
extra sequential accesses while gaining significantly by reducing the random accesses. Finally, we empirically establish the effectiveness of our novel sharding scheme via detailed experiments over
the edit history of the English version of Wikipedia between 2001-2005 ($\approx$ 700 GB) and an archive of the UK governmental web sites ($\approx$ 400 GB). Our results demonstrate the feasibility
of faster time-travel query processing with no space overhead.}, TYPE = {Research Report}, }
%0 Report %A Anand, Avishek %A Bedathur, Srikanta %A Berberich, Klaus %A Schenkel, Ralf %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information
Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Temporal Index Sharding for Space-time Efficiency in Archive Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0025-7311-D %Y Universität des Saarlandes %C Saarbrücken %D 2011
%X Time-travel queries that couple temporal constraints with keyword queries are useful in searching large-scale archives of time-evolving content such as the Web, document collections, wikis, and so
on. Typical approaches for efficient evaluation of these queries involve \emph{slicing} along the time-axis either the entire collection~\cite{253349}, or individual index lists~\cite
{kberberi:sigir2007}. Both these methods are not satisfactory since they sacrifice compactness of index for processing efficiency making them either too big or, otherwise, too slow. We present a
novel index organization scheme that \emph{shards} the index with \emph{zero increase in index size}, still minimizing the cost of reading index index entries during query processing. Based on the
optimal sharding thus obtained, we develop practically efficient sharding that takes into account the different costs of random and sequential accesses. Our algorithm merges shards from the optimal
solution carefully to allow for few extra sequential accesses while gaining significantly by reducing the random accesses. Finally, we empirically establish the effectiveness of our novel sharding
scheme via detailed experiments over the edit history of the English version of Wikipedia between 2001-2005 ($\approx$ 700 GB) and an archive of the UK governmental web sites ($\approx$ 400 GB). Our
results demonstrate the feasibility of faster time-travel query processing with no space overhead. %B Research Report %@ false
A Morphable Part Model for Shape Manipulation
A. Berner, O. Burghard, M. Wand, N. Mitra, R. Klein and H.-P. Seidel
Technical Report, 2011
We introduce morphable part models for smart shape manipulation using an assembly of deformable parts with appropriate boundary conditions. In an analysis phase, we characterize the continuous
allowable variations both for the individual parts and their interconnections using Gaussian shape models with low rank covariance. The discrete aspect of how parts can be assembled is captured using
a shape grammar. The parts and their interconnection rules are learned semi-automatically from symmetries within a single object or from semantically corresponding parts across a larger set of
example models. The learned discrete and continuous structure is encoded as a graph. In the interaction phase, we obtain an interactive yet intuitive shape deformation framework producing realistic
deformations on classes of objects that are difficult to edit using existing structure-aware deformation techniques. Unlike previous techniques, our method uses self-similarities from a single model
as training input and allows the user to reassemble the identified parts in new configurations, thus exploiting both the discrete and continuous learned variations while ensuring appropriate boundary
conditions across part boundaries.
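The "Gaussian shape model with low-rank covariance" for a single part can be sketched in a few lines of numpy, as below. The toy 2D contour data and the PCA-style construction are assumptions for illustration; the actual system additionally learns a shape grammar over parts and enforces boundary conditions between them.

# Minimal sketch of a low-rank Gaussian shape model for one part, as mentioned
# in the abstract. Toy 2D contour data and the PCA-based construction are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Training set: N example shapes of one part, each a flattened vector of 2D points.
N, n_points, k = 50, 20, 3
mean_shape = np.column_stack([np.linspace(0.0, 1.0, n_points),
                              np.zeros(n_points)]).ravel()
examples = mean_shape + 0.05 * rng.normal(size=(N, 2 * n_points))

# Low-rank Gaussian model: mean plus the top-k principal directions.
mu = examples.mean(axis=0)
_, s, Vt = np.linalg.svd(examples - mu, full_matrices=False)
sigma = s[:k] / np.sqrt(N - 1)            # std. deviation along each direction
directions = Vt[:k]                       # (k, 2*n_points) low-rank basis

def sample_part(coeffs):
    """Draw a plausible part variation; coeffs are in units of std deviations."""
    return mu + (np.asarray(coeffs) * sigma) @ directions

new_variation = sample_part(rng.normal(size=k))   # random but statistically plausible part
print(new_variation.reshape(n_points, 2)[:3])     # first few 2D points of the sampled part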
@techreport{BernerBurghardWandMitraKleinSeidel2011, TITLE = {A Morphable Part Model for Shape Manipulation}, AUTHOR = {Berner, Alexander and Burghard, Oliver and Wand, Michael and Mitra, Niloy and
Klein, Reinhard and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2011-4-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR =
{2011}, DATE = {2011}, ABSTRACT = {We introduce morphable part models for smart shape manipulation using an assembly of deformable parts with appropriate boundary conditions. In an analysis phase, we
characterize the continuous allowable variations both for the individual parts and their interconnections using Gaussian shape models with low rank covariance. The discrete aspect of how parts can be
assembled is captured using a shape grammar. The parts and their interconnection rules are learned semi-automatically from symmetries within a single object or from semantically corresponding parts
across a larger set of example models. The learned discrete and continuous structure is encoded as a graph. In the interaction phase, we obtain an interactive yet intuitive shape deformation
framework producing realistic deformations on classes of objects that are difficult to edit using existing structure-aware deformation techniques. Unlike previous techniques, our method uses
self-similarities from a single model as training input and allows the user to reassemble the identified parts in new configurations, thus exploiting both the discrete and continuous learned
variations while ensuring appropriate boundary conditions across part boundaries.}, TYPE = {Research Report}, }
%0 Report %A Berner, Alexander %A Burghard, Oliver %A Wand, Michael %A Mitra, Niloy %A Klein, Reinhard %A Seidel, Hans-Peter %+ External Organizations External Organizations Computer Graphics, MPI
for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T A Morphable Part Model for Shape Manipulation : %G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6972-0 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2011 %P 33 p. %X We introduce morphable part models for smart shape
manipulation using an assembly of deformable parts with appropriate boundary conditions. In an analysis phase, we characterize the continuous allowable variations both for the individual parts and
their interconnections using Gaussian shape models with low rank covariance. The discrete aspect of how parts can be assembled is captured using a shape grammar. The parts and their interconnection
rules are learned semi-automatically from symmetries within a single object or from semantically corresponding parts across a larger set of example models. The learned discrete and continuous
structure is encoded as a graph. In the interaction phase, we obtain an interactive yet intuitive shape deformation framework producing realistic deformations on classes of objects that are difficult
to edit using existing structure-aware deformation techniques. Unlike previous techniques, our method uses self-similarities from a single model as training input and allows the user to reassemble
the identified parts in new configurations, thus exploiting both the discrete and continuous learned variations while ensuring appropriate boundary conditions across part boundaries. %B Research
Report %@ false
PTIME Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata
W. Damm, C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2011
This paper identifies an industrially relevant class of linear hybrid automata (LHA) called reasonable LHA for which parametric verification of convex safety properties with exhaustive entry states
can be verified in polynomial time and time-bounded reachability can be decided in nondeterministic polynomial time for non-parametric verification and in exponential time for parametric
verification. Properties with exhaustive entry states are restricted to runs originating in a (specified) inner envelope of some mode-invariant. Deciding whether an LHA is reasonable is shown to be
decidable in polynomial time.
@techreport{Damm-Ihlemann-Sofronie-Stokkermans2011-report, TITLE = {{PTIME} Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata}, AUTHOR = {Damm, Werner and Ihlemann,
Carsten and Sofronie-Stokkermans, Viorica}, LANGUAGE = {eng}, ISSN = {1860-9821}, NUMBER = {ATR70}, LOCALID = {Local-ID:
C125716C0050FB51-DEB90D4E9EAE27B7C1257855003AF8EE-Damm-Ihlemann-Sofronie-Stokkermans2011-report}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {This paper identifies an
industrially relevant class of linear hybrid automata (LHA) called reasonable LHA for which parametric verification of convex safety properties with exhaustive entry states can be verified in
polynomial time and time-bounded reachability can be decided in nondeterministic polynomial time for non-parametric verification and in exponential time for parametric verification. Properties with
exhaustive entry states are restricted to runs originating in a (specified) inner envelope of some mode-invariant. Deciding whether an LHA is reasonable is shown to be decidable in polynomial time.},
TYPE = {AVACS Technical Report}, VOLUME = {70}, }
%0 Report %A Damm, Werner %A Ihlemann, Carsten %A Sofronie-Stokkermans, Viorica %+ External Organizations Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for
Informatics, Max Planck Society %T PTIME Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-14F5-F %F EDOC:
619013 %F OTHER: Local-ID: C125716C0050FB51-DEB90D4E9EAE27B7C1257855003AF8EE-Damm-Ihlemann-Sofronie-Stokkermans2011-report %Y SFB/TR 14 AVACS %D 2011 %P 31 p. %X This paper identifies an industrially
relevant class of linear hybrid automata (LHA) called reasonable LHA for which parametric verification of convex safety properties with exhaustive entry states can be verified in polynomial time and
time-bounded reachability can be decided in nondeterministic polynomial time for non-parametric verification and in exponential time for parametric verification. Properties with exhaustive entry
states are restricted to runs originating in a (specified) inner envelope of some mode-invariant. Deciding whether an LHA is reasonable is shown to be decidable in polynomial time. %B AVACS Technical
Report %N 70 %@ false %U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_070.pdf
Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems
W. Damm, S. Disch, W. Hagemann, C. Scholl, U. Waldmann and B. Wirtz
Technical Report, 2011
We describe an approach to integrate incremental flow pipe computation into a fully symbolic backward model checker for hybrid systems. Our method combines the advantages of symbolic state set representation, such as the ability to deal with large numbers of Boolean variables, with an efficient way to handle continuous flows defined by linear differential equations, possibly including bounded disturbances.
@techreport{DammDierksHagemannEtAl2011, TITLE = {Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems}, AUTHOR = {Damm, Werner and Disch, Stefan and Hagemann, Willem
and Scholl, Christoph and Waldmann, Uwe and Wirtz, Boris}, EDITOR = {Becker, Bernd and Damm, Werner and Finkbeiner, Bernd and Fr{\"a}nzle, Martin and Olderog, Ernst-R{\"u}diger and Podelski,
Andreas}, LANGUAGE = {eng}, ISSN = {1860-9821}, NUMBER = {ATR76}, INSTITUTION = {SFB/TR 14 AVACS}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {We describe an approach to
integrate incremental flow pipe computation into a fully symbolic backward model checker for hybrid systems. Our method combines the advantages of symbolic state set representation, such as the ability to deal with large numbers of Boolean variables, with an efficient way to handle continuous flows defined by linear differential equations, possibly including bounded disturbances.}, TYPE = {AVACS
Technical Report}, VOLUME = {76}, }
%0 Report %A Damm, Werner %A Disch, Stefan %A Hagemann, Willem %A Scholl, Christoph %A Waldmann, Uwe %A Wirtz, Boris %E Becker, Bernd %E Damm, Werner %E Finkbeiner, Bernd %E Fränzle, Martin %E
Olderog, Ernst-Rüdiger %E Podelski, Andreas %+ External Organizations External Organizations Automation of Logic, MPI for Informatics, Max Planck Society External Organizations Automation of
Logic, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations External
Organizations %T Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-150E-7 %Y SFB/TR 14 AVACS %C Saarbrücken %D 2011 %X We describe an approach to integrate incremental flow pipe computation into a fully symbolic backward model checker for hybrid systems. Our method combines the advantages of symbolic state set representation, such as the ability to deal with large numbers of Boolean variables, with an efficient way to handle continuous flows defined by linear differential equations,
possibly including bounded disturbances. %B AVACS Technical Report %N 76 %@ false
Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent
R. Gemulla, P. J. Haas, E. Nijkamp and Y. Sismanis
Technical Report, 2011
As Web 2.0 and enterprise-cloud applications have proliferated, data mining algorithms increasingly need to be (re)designed to handle web-scale datasets. For this reason, low-rank matrix
factorization has received a lot of attention in recent years, since it is fundamental to a variety of mining tasks, such as topic detection and collaborative filtering, that are increasingly being
applied to massive datasets. We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on
stochastic gradient descent (SGD), an iterative stochastic optimization algorithm; the idea is to exploit the special structure of the matrix factorization problem to develop a new ``stratified'' SGD
variant that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. The resulting distributed SGD factorization algorithm, called DSGD, provides good speed-up and handles a
wide variety of matrix factorizations. We establish convergence properties of DSGD using results from stochastic approximation theory and regenerative process theory, and also describe the practical
techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms.
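The stratification trick at the heart of DSGD can be sketched as follows: rows and columns are split into d blocks, and within one sub-epoch the d blocks of a stratum touch disjoint rows and columns, so they could be updated in parallel. The toy data, step sizes, and sequential execution below are illustrative assumptions, not the MapReduce implementation evaluated in the report.

# Minimal sketch of the "stratified SGD" idea from the abstract. Everything
# below runs sequentially on toy, fully observed data; in DSGD the d disjoint
# blocks of a stratum would be processed by different workers.
import numpy as np

rng = np.random.default_rng(0)
n, m, rank, d = 60, 80, 4, 3                    # matrix size, factor rank, number of blocks
V = rng.random((n, rank)) @ rng.random((rank, m))   # toy matrix to factor as W @ H
W, H = rng.random((n, rank)), rng.random((rank, m))

row_blocks = np.array_split(np.arange(n), d)
col_blocks = np.array_split(np.arange(m), d)

def sgd_on_block(rows, cols, lr=0.002, reg=0.05):
    """Plain SGD on the entries of one block; blocks of a stratum share no rows/columns."""
    for i in rows:
        for j in cols:
            err = V[i, j] - W[i] @ H[:, j]
            wi = W[i].copy()
            W[i]    += lr * (err * H[:, j] - reg * W[i])
            H[:, j] += lr * (err * wi      - reg * H[:, j])

for epoch in range(20):
    for shift in range(d):                       # d sub-epochs = d strata per epoch
        # Stratum: row block b is paired with column block (b + shift) % d; the
        # resulting d blocks are disjoint and could be processed in parallel.
        for b in range(d):
            sgd_on_block(row_blocks[b], col_blocks[(b + shift) % d])
    print(epoch, np.sqrt(np.mean((V - W @ H) ** 2)))   # training RMSE, should decrease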
@techreport{gemulla11, TITLE = {Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent}, AUTHOR = {Gemulla, Rainer and Haas, Peter J. and Nijkamp, Erik and Sismanis, Yannis},
LANGUAGE = {eng}, URL = {http://www.almaden.ibm.com/cs/people/peterh/dsgdTechRep.pdf}, LOCALID = {Local-ID: C1256DBF005F876D-5B618B1FF070E981C125784D0044B0D1-gemulla11}, INSTITUTION = {IBM Research
Division}, ADDRESS = {San Jose, CA}, YEAR = {2011}, ABSTRACT = {As Web 2.0 and enterprise-cloud applications have proliferated, data mining algorithms increasingly need to be (re)designed to handle
web-scale datasets. For this reason, low-rank matrix factorization has received a lot of attention in recent years, since it is fundamental to a variety of mining tasks, such as topic detection and
collaborative filtering, that are increasingly being applied to massive datasets. We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and
billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm; the idea is to exploit the special structure of the matrix
factorization problem to develop a new ``stratified'' SGD variant that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. The resulting distributed SGD factorization
algorithm, called DSGD, provides good speed-up and handles a wide variety of matrix factorizations. We establish convergence properties of DSGD using results from stochastic approximation theory and
regenerative process theory, and also describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has
better scalability properties than alternative algorithms.}, TYPE = {IBM Research Report}, VOLUME = {RJ10481}, }
%0 Report %A Gemulla, Rainer %A Haas, Peter J. %A Nijkamp, Erik %A Sismanis, Yannis %+ Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External
Organizations External Organizations %T Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-147F-E %F EDOC: 618949
%U http://www.almaden.ibm.com/cs/people/peterh/dsgdTechRep.pdf %F OTHER: Local-ID: C1256DBF005F876D-5B618B1FF070E981C125784D0044B0D1-gemulla11 %Y IBM Research Division %C San Jose, CA %D 2011 %X As
Web 2.0 and enterprise-cloud applications have proliferated, data mining algorithms increasingly need to be (re)designed to handle web-scale datasets. For this reason, low-rank matrix factorization
has received a lot of attention in recent years, since it is fundamental to a variety of mining tasks, such as topic detection and collaborative filtering, that are increasingly being applied to
massive datasets. We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic
gradient descent (SGD), an iterative stochastic optimization algorithm; the idea is to exploit the special structure of the matrix factorization problem to develop a new ``stratified'' SGD variant
that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. The resulting distributed SGD factorization algorithm, called DSGD, provides good speed-up and handles a wide
variety of matrix factorizations. We establish convergence properties of DSGD using results from stochastic approximation theory and regenerative process theory, and also describe the practical
techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms. %B
IBM Research Report %N RJ10481
How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes
M. Granados, J. Tompkin, K. Kim, O. Grau, J. Kautz and C. Theobalt
Technical Report, 2011
Removing dynamic objects from videos is an extremely challenging problem that even visual effects professionals often solve with time-consuming manual frame-by-frame editing. We propose a new
approach to video completion that can deal with complex scenes containing dynamic background and non-periodical moving objects. We build upon the idea that the spatio-temporal hole left by a removed
object can be filled with data available on other regions of the video where the occluded objects were visible. Video completion is performed by solving a large combinatorial problem that searches
for an optimal pattern of pixel offsets from occluded to unoccluded regions. Our contribution includes an energy functional that generalizes well over different scenes with stable parameters, and
that has the desirable convergence properties for a graph-cut-based optimization. We provide an interface to guide the completion process that both reduces computation time and allows for efficient
correction of small errors in the result. We demonstrate that our approach can effectively complete complex, high-resolution occlusions that are greater in difficulty than what existing methods have shown.
@techreport{Granados2011TR, TITLE = {How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes}, AUTHOR = {Granados, Miguel and Tompkin, James and Kim, Kwang and Grau, O. and Kautz, Jan and
Theobalt, Christian}, LANGUAGE = {eng}, NUMBER = {MPI-I-2011-4-001}, INSTITUTION = {MPI f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, ABSTRACT = {Removing dynamic objects from
videos is an extremely challenging problem that even visual effects professionals often solve with time-consuming manual frame-by-frame editing. We propose a new approach to video completion that can
deal with complex scenes containing dynamic background and non-periodical moving objects. We build upon the idea that the spatio-temporal hole left by a removed object can be filled with data
available on other regions of the video where the occluded objects were visible. Video completion is performed by solving a large combinatorial problem that searches for an optimal pattern of pixel
offsets from occluded to unoccluded regions. Our contribution includes an energy functional that generalizes well over different scenes with stable parameters, and that has the desirable convergence
properties for a graph-cut-based optimization. We provide an interface to guide the completion process that both reduces computation time and allows for efficient correction of small errors in the
result. We demonstrate that our approach can effectively complete complex, high-resolution occlusions that are greater in difficulty than what existing methods have shown.}, TYPE = {Research Report}, }
%0 Report %A Granados, Miguel %A Tompkin, James %A Kim, Kwang %A Grau, O. %A Kautz, Jan %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T How Not to Be Seen -- Inpainting
Dynamic Objects in Crowded Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-13C5-3 %F EDOC: 618872 %Y MPI für Informatik %C Saarbrücken %D 2011 %P 35 p. %X Removing dynamic
objects from videos is an extremely challenging problem that even visual effects professionals often solve with time-consuming manual frame-by-frame editing. We propose a new approach to video
completion that can deal with complex scenes containing dynamic background and non-periodical moving objects. We build upon the idea that the spatio-temporal hole left by a removed object can be
filled with data available on other regions of the video where the occluded objects were visible. Video completion is performed by solving a large combinatorial problem that searches for an optimal
pattern of pixel offsets from occluded to unoccluded regions. Our contribution includes an energy functional that generalizes well over different scenes with stable parameters, and that has the
desirable convergence properties for a graph-cut-based optimization. We provide an interface to guide the completion process that both reduces computation time and allows for efficient correction of
small errors in the result. We demonstrate that our approach can effectively complete complex, high-resolution occlusions that are greater in difficulty than what existing methods have shown. %B
Research Report
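The "optimal pattern of pixel offsets" formulation in the abstract above can be made concrete with a small toy energy: each hole pixel is assigned a spatio-temporal offset into the visible part of the video, the data term forbids copying from occluded or out-of-bounds locations, and the smoothness term favours neighbouring pixels that copy from coherent locations. This is a generic shift-map-style illustration with assumed names, not the energy functional of the report.

    import numpy as np

    def toy_offset_energy(video, hole, offsets, smooth_w=1.0):
        """video: (T, H, W) volume; hole: boolean mask of pixels to fill;
        offsets: (T, H, W, 3) integer (dt, dy, dx) labels, used inside the hole only."""
        T, H, W = video.shape
        big = 1e9
        data = smooth = 0.0
        for t, y, x in zip(*np.nonzero(hole)):
            dt, dy, dx = offsets[t, y, x]
            st, sy, sx = t + dt, y + dy, x + dx
            inside = 0 <= st < T and 0 <= sy < H and 0 <= sx < W
            if not inside or hole[st, sy, sx]:
                data += big                       # source pixel must be visible
            if x + 1 < W and hole[t, y, x + 1]:   # compare with the right-hand neighbour
                smooth += np.abs(offsets[t, y, x] - offsets[t, y, x + 1]).sum()
        return data + smooth_w * smooth

    video = np.zeros((2, 4, 4))
    hole = np.zeros_like(video, dtype=bool)
    hole[0, 1, 1] = True
    offsets = np.zeros((2, 4, 4, 3), dtype=int)
    offsets[0, 1, 1] = (0, 0, 2)                  # copy from a visible pixel two columns right
    print(toy_offset_energy(video, hole, offsets))    # 0.0: a feasible, coherent completion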
Efficient Learning-based Image Enhancement: Application to Compression Artifact Removal and Super-resolution
K. I. Kim, Y. Kwon, J. H. Kim and C. Theobalt
Technical Report, 2011
Many computer vision and computational photography applications essentially solve an image enhancement problem. The image has been deteriorated by a specific noise process, such as aberrations from
camera optics and compression artifacts, that we would like to remove. We describe a framework for learning-based image enhancement. At the core of our algorithm lies a generic regularization
framework that comprises a prior on natural images, as well as an application-specific conditional model based on Gaussian processes. In contrast to prior learning-based approaches, our algorithm can
instantly learn task-specific degradation models from sample images which enables users to easily adapt the algorithm to a specific problem and data set of interest. This is facilitated by our
efficient approximation scheme of large-scale Gaussian processes. We demonstrate the efficiency and effectiveness of our approach by applying it to example enhancement applications including
single-image super-resolution, as well as artifact removal in JPEG- and JPEG 2000-encoded images.
@techreport{KimKwonKimTheobalt2011, TITLE = {Efficient Learning-based Image Enhancement : Application to Compression Artifact Removal and Super-resolution}, AUTHOR = {Kim, Kwang In and Kwon, Younghee
and Kim, Jin Hyung and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2011-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011}, ABSTRACT = {Many computer vision and computational photography applications essentially solve an image enhancement problem. The image has been deteriorated by a specific noise process,
such as aberrations from camera optics and compression artifacts, that we would like to remove. We describe a framework for learning-based image enhancement. At the core of our algorithm lies a
generic regularization framework that comprises a prior on natural images, as well as an application-specific conditional model based on Gaussian processes. In contrast to prior learning-based
approaches, our algorithm can instantly learn task-specific degradation models from sample images which enables users to easily adapt the algorithm to a specific problem and data set of interest.
This is facilitated by our efficient approximation scheme of large-scale Gaussian processes. We demonstrate the efficiency and effectiveness of our approach by applying it to example enhancement
applications including single-image super-resolution, as well as artifact removal in JPEG- and JPEG 2000-encoded images.}, TYPE = {Research Report}, }
%0 Report %A Kim, Kwang In %A Kwon, Younghee %A Kim, Jin Hyung %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Learning-based Image Enhancement : Application to Compression Artifact Removal and Super-resolution : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-0027-13A3-E %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2011 %X Many computer vision and computational photography applications essentially
solve an image enhancement problem. The image has been deteriorated by a specific noise process, such as aberrations from camera optics and compression artifacts, that we would like to remove. We
describe a framework for learning-based image enhancement. At the core of our algorithm lies a generic regularization framework that comprises a prior on natural images, as well as an
application-specific conditional model based on Gaussian processes. In contrast to prior learning-based approaches, our algorithm can instantly learn task-specific degradation models from sample
images which enables users to easily adapt the algorithm to a specific problem and data set of interest. This is facilitated by our efficient approximation scheme of large-scale Gaussian processes.
We demonstrate the efficiency and effectiveness of our approach by applying it to example enhancement applications including single-image super-resolution, as well as artifact removal in JPEG- and
JPEG 2000-encoded images. %B Research Report %@ false
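The conditional-model component described in the abstract above is, at its simplest, Gaussian process regression from degraded image patches to clean pixel values, trained on sample pairs. The following minimal numpy sketch shows plain GP regression on synthetic patches; the kernel, data, and names are assumptions, and the report's large-scale approximation scheme is not reproduced here.

    import numpy as np

    def rbf(A, B, ls=1.0):
        """Squared-exponential kernel between the rows of A and the rows of B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls ** 2)

    def gp_enhance(train_patches, train_clean, test_patches, noise=1e-2, ls=1.0):
        """Predict a clean centre pixel from a degraded patch via GP regression."""
        K = rbf(train_patches, train_patches, ls) + noise * np.eye(len(train_patches))
        alpha = np.linalg.solve(K, train_clean)
        return rbf(test_patches, train_patches, ls) @ alpha

    rng = np.random.default_rng(0)
    clean = rng.uniform(0, 1, size=200)                              # toy clean centre pixels
    patches = clean[:, None] + 0.1 * rng.standard_normal((200, 9))   # degraded 3x3 patches
    pred = gp_enhance(patches[:150], clean[:150], patches[150:])
    print(float(np.mean((pred - clean[150:]) ** 2)))                 # held-out squared error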
Towards Verification of the Pastry Protocol using TLA+
T. Lu, S. Merz and C. Weidenbach
Technical Report, 2011
Pastry is an algorithm that provides a scalable distributed hash table over an underlying P2P network. Several implementations of Pastry are available and have been applied in practice, but no
attempt has so far been made to formally describe the algorithm or to verify its properties. Since Pastry combines rather complex data structures, asynchronous communication, concurrency, resilience
to churn and fault tolerance, it makes an interesting target for verification. We have modeled Pastry's core routing algorithms and communication protocol in the specification language TLA+. In order to validate the model and to search for bugs, we employed the TLA+ model checker TLC to analyze several qualitative properties. We obtained non-trivial insights into the behavior of Pastry through the model checking analysis. Furthermore, we started to verify Pastry using the very same model and the interactive theorem prover TLAPS for TLA+. A first result is the reduction of global Pastry
correctness properties to invariants of the underlying data structures.
@techreport{LuMerzWeidenbach2011, TITLE = {Towards Verification of the {Pastry} Protocol using {TLA+}}, AUTHOR = {Lu, Tianxiang and Merz, Stephan and Weidenbach, Christoph}, LANGUAGE = {eng}, URL =
{http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-RG1-002}, NUMBER = {MPI-I-2011-RG1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR =
{2011}, DATE = {2011}, ABSTRACT = {Pastry is an algorithm that provides a scalable distributed hash table over an underlying P2P network. Several implementations of Pastry are available and have been
applied in practice, but no attempt has so far been made to formally describe the algorithm or to verify its properties. Since Pastry combines rather complex data structures, asynchronous
communication, concurrency, resilience to churn and fault tolerance, it makes an interesting target for verification. We have modeled Pastry's core routing algorithms and communication protocol in the specification language TLA+. In order to validate the model and to search for bugs we employed the TLA+ model checker tlc to analyze several qualitative properties. We obtained non-trivial insights in the behavior of Pastry through the model checking analysis. Furthermore, we started to verify Pastry using the very same model and the interactive theorem prover tlaps for TLA+. A first result is the
reduction of global Pastry correctness properties to invariants of the underlying data structures.}, TYPE = {Research Report}, }
%0 Report %A Lu, Tianxiang %A Merz, Stephan %A Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society External Organizations Automation of Logic, MPI for Informatics,
Max Planck Society %T Towards Verification of the Pastry Protocol using TLA+ : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6975-A %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2011-RG1-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2011 %P 51 p. %X Pastry is an algorithm that provides a scalable distributed hash table over an underlying
P2P network. Several implementations of Pastry are available and have been applied in practice, but no attempt has so far been made to formally describe the algorithm or to verify its properties.
Since Pastry combines rather complex data structures, asynchronous communication, concurrency, resilience to churn and fault tolerance, it makes an interesting target for verification. We have modeled Pastry's core routing algorithms and communication protocol in the specification language TLA+. In order to validate the model and to search for bugs we employed the TLA+ model checker tlc to analyze several qualitative properties. We obtained non-trivial insights in the behavior of Pastry through the model checking analysis. Furthermore, we started to verify Pastry using the very same model and the interactive theorem prover tlaps for TLA+. A first result is the reduction of global Pastry correctness properties to invariants of the underlying data structures. %B Research Report
Finding Images of Rare and Ambiguous Entities
B. Taneva, M. Kacimi El Hassani and G. Weikum
Technical Report, 2011
@techreport{TanevaKacimiWeikum2011, TITLE = {Finding Images of Rare and Ambiguous Entities}, AUTHOR = {Taneva, Bilyana and Kacimi El Hassani, M. and Weikum, Gerhard}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-5-002}, NUMBER = {MPI-I-2011-5-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011},
DATE = {2011}, TYPE = {Research Report}, }
%0 Report %A Taneva, Bilyana %A Kacimi El Hassani, M. %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Finding Images of Rare and Ambiguous Entities : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-6581-8 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-5-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2011 %P 30 p. %B Research Report
Videoscapes: Exploring Unstructured Video Collections
J. Tompkin, K. I. Kim, J. Kautz and C. Theobalt
Technical Report, 2011
@techreport{TompkinKimKautzTheobalt2011, TITLE = {Videoscapes: Exploring Unstructured Video Collections}, AUTHOR = {Tompkin, James and Kim, Kwang In and Kautz, Jan and Theobalt, Christian}, LANGUAGE
= {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2011-4-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, TYPE = {Research Report}, }
%0 Report %A Tompkin, James %A Kim, Kwang In %A Kautz, Jan %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck
Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Videoscapes: Exploring Unstructured Video Collections : %G eng %U
http://hdl.handle.net/11858/00-001M-0000-0014-F76C-8 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2011 %P 32 p. %B Research Report %@ false
A New Combinatorial Approach to Parametric Path Analysis
E. Althaus, S. Altmeyer and R. Naujoks
Technical Report, 2010
Hard real-time systems require tasks to finish in time. To guarantee the timeliness of such a system, static timing analyses derive upper bounds on the \emph{worst-case execution time} of tasks.
There are two types of timing analyses: numeric and parametric ones. A numeric analysis derives a numeric timing bound and, to this end, assumes all information such as loop bounds to be given a
priori. If these bounds are unknown during analysis time, a parametric analysis can compute a timing formula parametric in these variables. A performance bottleneck of timing analyses, numeric and
especially parametric, can be the so-called path analysis, which determines the path in the analyzed task with the longest execution time bound. In this paper, we present a new approach to the path
analysis. This approach exploits the rather regular structure of software for hard real-time and safety-critical systems. As we show in the evaluation of this paper, we strongly improve upon former
techniques in terms of precision and runtime in the parametric case. Even in the numeric case, our approach matches up to state-of-the-art techniques and may be an alternative to commercial tools
employed for path analysis.
@techreport{Naujoks10a, TITLE = {A New Combinatorial Approach to Parametric Path Analysis}, AUTHOR = {Althaus, Ernst and Altmeyer, Sebastian and Naujoks, Rouven}, LANGUAGE = {eng}, ISSN =
{1860-9821}, NUMBER = {ATR58}, LOCALID = {Local-ID: C1256428004B93B8-7741AE14A57A7C00C125781100477B84-Naujoks10a}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Hard
real-time systems require tasks to finish in time. To guarantee the timeliness of such a system, static timing analyses derive upper bounds on the \emph{worst-case execution time} of tasks. There are
two types of timing analyses: numeric and parametric ones. A numeric analysis derives a numeric timing bound and, to this end, assumes all information such as loop bounds to be given a priori. If
these bounds are unknown during analysis time, a parametric analysis can compute a timing formula parametric in these variables. A performance bottleneck of timing analyses, numeric and especially
parametric, can be the so-called path analysis, which determines the path in the analyzed task with the longest execution time bound. In this paper, we present a new approach to the path analysis.
This approach exploits the rather regular structure of software for hard real-time and safety-critical systems. As we show in the evaluation of this paper, we strongly improve upon former techniques
in terms of precision and runtime in the parametric case. Even in the numeric case, our approach matches up to state-of-the-art techniques and may be an alternative to commercial tools employed for
path analysis.}, TYPE = {AVACS Technical Report}, VOLUME = {58}, }
%0 Report %A Althaus, Ernst %A Altmeyer, Sebastian %A Naujoks, Rouven %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for
Informatics, Max Planck Society %T A New Combinatorial Approach to Parametric Path Analysis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-15F7-8 %F EDOC: 536763 %F OTHER: Local-ID:
C1256428004B93B8-7741AE14A57A7C00C125781100477B84-Naujoks10a %Y SFB/TR 14 AVACS %D 2010 %P 33 p. %X Hard real-time systems require tasks to finish in time. To guarantee the timeliness of such a
system, static timing analyses derive upper bounds on the \emph{worst-case execution time} of tasks. There are two types of timing analyses: numeric and parametric ones. A numeric analysis derives a
numeric timing bound and, to this end, assumes all information such as loop bounds to be given a priori. If these bounds are unknown during analysis time, a parametric analysis can compute a timing
formula parametric in these variables. A performance bottleneck of timing analyses, numeric and especially parametric, can be the so-called path analysis, which determines the path in the analyzed
task with the longest execution time bound. In this paper, we present a new approach to the path analysis. This approach exploits the rather regular structure of software for hard real-time and
safety-critical systems. As we show in the evaluation of this paper, we strongly improve upon former techniques in terms of precision and runtime in the parametric case. Even in the numeric case, our
approach matches up to state-of-the-art techniques and may be an alternative to commercial tools employed for path analysis. %B AVACS Technical Report %N 58 %@ false %U http://www.avacs.org/
Efficient Temporal Keyword Queries over Versioned Text
A. Anand, S. Bedathur, K. Berberich and R. Schenkel
Technical Report, 2010
@techreport{AnandBedathurBerberichSchenkel2010, TITLE = {Efficient Temporal Keyword Queries over Versioned Text}, AUTHOR = {Anand, Avishek and Bedathur, Srikanta and Berberich, Klaus and Schenkel,
Ralf}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-003}, NUMBER = {MPI-I-2010-5-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS
= {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, TYPE = {Research Report}, }
%0 Report %A Anand, Avishek %A Bedathur, Srikanta %A Berberich, Klaus %A Schenkel, Ralf %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information
Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Efficient Temporal Keyword Queries over Versioned Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-65A0-2 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-003
%Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2010 %P 39 p. %B Research Report
A Generic Algebraic Kernel for Non-linear Geometric Applications
E. Berberich, M. Hemmer and M. Kerber
Technical Report, 2010
@techreport{bhk-ak2-inria-2010, TITLE = {A Generic Algebraic Kernel for Non-linear Geometric Applications}, AUTHOR = {Berberich, Eric and Hemmer, Michael and Kerber, Michael}, LANGUAGE = {eng}, URL =
{http://hal.inria.fr/inria-00480031/fr/}, NUMBER = {7274}, LOCALID = {Local-ID: C1256428004B93B8-4DF2B1DAA1910721C12577FB00348D67-bhk-ak2-inria-2010}, INSTITUTION = {INRIA}, ADDRESS = {Sophia
Antipolis, France}, YEAR = {2010}, DATE = {2010}, TYPE = {Rapport de recherche / INRIA}, }
%0 Report %A Berberich, Eric %A Hemmer, Michael %A Kerber, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Generic Algebraic Kernel for Non-linear Geometric Applications : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-000F-15EC-2 %F EDOC: 536754 %U http://hal.inria.fr/inria-00480031/fr/ %F OTHER: Local-ID: C1256428004B93B8-4DF2B1DAA1910721C12577FB00348D67-bhk-ak2-inria-2010 %Y INRIA %C Sophia
Antipolis, France %D 2010 %P 20 p. %B Rapport de recherche / INRIA
A Language Modeling Approach for Temporal Information Needs
K. Berberich, S. Bedathur, O. Alonso and G. Weikum
Technical Report, 2010
This work addresses information needs that have a temporal dimension conveyed by a temporal expression in the user's query. Temporal expressions such as \textsf{``in the 1990s''} are frequent, easily
extractable, but not leveraged by existing retrieval models. One challenge when dealing with them is their inherent uncertainty. It is often unclear which exact time interval a temporal expression
refers to. We integrate temporal expressions into a language modeling approach, thus making them first-class citizens of the retrieval model and considering their inherent uncertainty. Experiments on
the New York Times Annotated Corpus using Amazon Mechanical Turk to collect queries and obtain relevance assessments demonstrate that our approach yields substantial improvements in retrieval effectiveness.
@techreport{BerberichBedathurAlonsoWeikum2010, TITLE = {A Language Modeling Approach for Temporal Information Needs}, AUTHOR = {Berberich, Klaus and Bedathur, Srikanta and Alonso, Omar and Weikum,
Gerhard}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-001}, NUMBER = {MPI-I-2010-5-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {This work addresses information needs that have a temporal dimension conveyed by a temporal expression in the user's query.
Temporal expressions such as \textsf{``in the 1990s''} are frequent, easily extractable, but not leveraged by existing retrieval models. One challenge when dealing with them is their inherent
uncertainty. It is often unclear which exact time interval a temporal expression refers to. We integrate temporal expressions into a language modeling approach, thus making them first-class citizens
of the retrieval model and considering their inherent uncertainty. Experiments on the New York Times Annotated Corpus using Amazon Mechanical Turk to collect queries and obtain relevance assessments
demonstrate that our approach yields substantial improvements in retrieval effectiveness.}, TYPE = {Research Report}, }
%0 Report %A Berberich, Klaus %A Bedathur, Srikanta %A Alonso, Omar %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems,
MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T A
Language Modeling Approach for Temporal Information Needs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-65AB-C %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-001
%Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2010 %P 29 p. %X This work addresses information needs that have a temporal dimension conveyed by a temporal expression in the user's
query. Temporal expressions such as \textsf{``in the 1990s''} are frequent, easily extractable, but not leveraged by existing retrieval models. One challenge when dealing with them is their inherent
uncertainty. It is often unclear which exact time interval a temporal expression refers to. We integrate temporal expressions into a language modeling approach, thus making them first-class citizens
of the retrieval model and considering their inherent uncertainty. Experiments on the New York Times Annotated Corpus using Amazon Mechanical Turk to collect queries and obtain relevance assessments
demonstrate that our approach yields substantial improvements in retrieval effectiveness. %B Research Report
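The uncertainty issue raised in the abstract above -- that it is often unclear which exact interval a temporal expression refers to -- can be illustrated with a toy model that spreads probability mass uniformly over the candidate interpretations of an expression and, within each, over its years. The names and the two candidate readings of "in the 1990s" below are assumptions for illustration, not the report's retrieval model.

    from fractions import Fraction

    def p_year_given_texp(year, interpretations):
        """Probability of a year under a temporal expression modelled as a set of
        equally likely inclusive (begin, end) intervals, uniform within each."""
        total = Fraction(0)
        for begin, end in interpretations:
            if begin <= year <= end:
                total += Fraction(1, len(interpretations)) * Fraction(1, end - begin + 1)
        return total

    in_the_1990s = [(1990, 1999), (1990, 1994)]   # the whole decade, or its early half
    for y in (1992, 1997, 2001):
        print(y, float(p_year_given_texp(y, in_the_1990s)))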
Real-time Text Queries with Tunable Term Pair Indexes
A. Broschart and R. Schenkel
Technical Report, 2010
Term proximity scoring is an established means in information retrieval for improving result quality of full-text queries. Integrating such proximity scores into efficient query processing, however,
has not been equally well studied. Existing methods make use of precomputed lists of documents where tuples of terms, usually pairs, occur together, usually incurring a huge index size compared to
term-only indexes. This paper introduces a joint framework for trading off index size and result quality, and provides optimization techniques for tuning precomputed indexes towards either maximal
result quality or maximal query processing performance, given an upper bound for the index size. The framework allows lists for pairs to be materialized selectively, based on a query log, to further reduce index size. Extensive experiments with two large text collections demonstrate runtime improvements of several orders of magnitude over existing text-based processing techniques with reasonable index sizes.
@techreport{BroschartSchenkel2010, TITLE = {Real-time Text Queries with Tunable Term Pair Indexes}, AUTHOR = {Broschart, Andreas and Schenkel, Ralf}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-006}, NUMBER = {MPI-I-2010-5-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010},
DATE = {2010}, ABSTRACT = {Term proximity scoring is an established means in information retrieval for improving result quality of full-text queries. Integrating such proximity scores into efficient
query processing, however, has not been equally well studied. Existing methods make use of precomputed lists of documents where tuples of terms, usually pairs, occur together, usually incurring a
huge index size compared to term-only indexes. This paper introduces a joint framework for trading off index size and result quality, and provides optimization techniques for tuning precomputed
indexes towards either maximal result quality or maximal query processing performance, given an upper bound for the index size. The framework allows to selectively materialize lists for pairs based
on a query log to further reduce index size. Extensive experiments with two large text collections demonstrate runtime improvements of several orders of magnitude over existing text-based processing
techniques with reasonable index sizes.}, TYPE = {Research Report}, }
%0 Report %A Broschart, Andreas %A Schenkel, Ralf %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck
Society %T Real-time Text Queries with Tunable Term Pair Indexes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-658C-1 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2010-5-006 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2010 %P 41 p. %X Term proximity scoring is an established means in information retrieval for improving result quality of
full-text queries. Integrating such proximity scores into efficient query processing, however, has not been equally well studied. Existing methods make use of precomputed lists of documents where
tuples of terms, usually pairs, occur together, usually incurring a huge index size compared to term-only indexes. This paper introduces a joint framework for trading off index size and result
quality, and provides optimization techniques for tuning precomputed indexes towards either maximal result quality or maximal query processing performance, given an upper bound for the index size.
The framework allows to selectively materialize lists for pairs based on a query log to further reduce index size. Extensive experiments with two large text collections demonstrate runtime
improvements of several orders of magnitude over existing text-based processing techniques with reasonable index sizes. %B Research Report
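The precomputed pair lists discussed in the abstract above can be sketched with a toy index that, for each term pair co-occurring within a small window, stores the best (smallest) distance per document, so a two-term query becomes a single lookup instead of a postings intersection. Names, the window parameter, and the toy documents are assumptions; the tuning framework of the report is not shown.

    from collections import defaultdict
    from itertools import combinations

    def build_pair_index(docs, window=5):
        """docs: {doc_id: [token, ...]}. Returns {(t1, t2): {doc_id: best_distance}}."""
        index = defaultdict(dict)
        for doc_id, tokens in docs.items():
            positions = defaultdict(list)
            for pos, tok in enumerate(tokens):
                positions[tok].append(pos)
            for t1, t2 in combinations(sorted(positions), 2):
                best = min(abs(p1 - p2) for p1 in positions[t1] for p2 in positions[t2])
                if best <= window:
                    index[(t1, t2)][doc_id] = best
        return index

    docs = {1: "term proximity scoring improves result quality".split(),
            2: "index size grows with proximity information".split()}
    idx = build_pair_index(docs)
    print(idx[("proximity", "scoring")])   # {1: 1} -- adjacent in document 1
    print(idx[("index", "proximity")])     # {2: 4} -- within the window in document 2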
LIVE: A Lineage-Supported Versioned DBMS
A. Das Sarma, M. Theobald and J. Widom
Technical Report, 2010
@techreport{ilpubs-926, TITLE = {{LIVE}: A Lineage-Supported Versioned {DBMS}}, AUTHOR = {Das Sarma, Anish and Theobald, Martin and Widom, Jennifer}, LANGUAGE = {eng}, URL = {http://
ilpubs.stanford.edu:8090/926/}, NUMBER = {ILPUBS-926}, LOCALID = {Local-ID: C1256DBF005F876D-C48EC96138450196C12576B1003F58D3-ilpubs-926}, INSTITUTION = {Stanford University}, ADDRESS = {Stanford},
YEAR = {2010}, DATE = {2010}, TYPE = {Technical Report}, }
%0 Report %A Das Sarma, Anish %A Theobald, Martin %A Widom, Jennifer %+ External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T
LIVE: A Lineage-Supported Versioned DBMS : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1512-A %F EDOC: 536357 %U http://ilpubs.stanford.edu:8090/926/ %F OTHER: Local-ID:
C1256DBF005F876D-C48EC96138450196C12576B1003F58D3-ilpubs-926 %Y Stanford University %C Stanford %D 2010 %P 13 p. %B Technical Report
Query Relaxation for Entity-relationship Search
S. Elbassuoni, M. Ramanath and G. Weikum
Technical Report, 2010
@techreport{Elbassuoni-relax2010, TITLE = {Query Relaxation for Entity-relationship Search}, AUTHOR = {Elbassuoni, Shady and Ramanath, Maya and Weikum, Gerhard}, LANGUAGE = {eng}, NUMBER =
{MPI-I-2010-5-008}, INSTITUTION = {Max-Planck Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, TYPE = {Report}, }
%0 Report %A Elbassuoni, Shady %A Ramanath, Maya %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Query Relaxation for Entity-relationship Search : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0019-B30B-6 %Y Max-Planck Institut für Informatik %C Saarbrücken %D 2010 %B Report
Automatic Verification of Parametric Specifications with Complex Topologies
J. Faber, C. Ihlemann, S. Jacobs and V. Sofronie-Stokkermans
Technical Report, 2010
The focus of this paper is on reducing the complexity in verification by exploiting modularity at various levels: in specification, in verification, and structurally. For specifications, we use the modular language CSP-OZ-DC, which allows us to decouple verification tasks concerning data from those concerning durations. At the verification level, we exploit modularity in theorem proving for rich data structures and use this for invariant checking. At the structural level, we analyze possibilities for modular verification of systems consisting of various components which interact. We illustrate these ideas by automatically verifying safety properties of a case study from the European Train Control System standard, which extends
previous examples by comprising a complex track topology with lists of track segments and trains with different routes.
@techreport{faber-ihlemann-jacobs-sofronie-2010-report, TITLE = {Automatic Verification of Parametric Specifications with Complex Topologies}, AUTHOR = {Faber, Johannes and Ihlemann, Carsten and
Jacobs, Swen and Sofronie-Stokkermans, Viorica}, LANGUAGE = {eng}, ISSN = {1860-9821}, NUMBER = {ATR66}, LOCALID = {Local-ID:
C125716C0050FB51-2E8AD7BA67FF4CB5C12577B4004D8EF8-faber-ihlemann-jacobs-sofronie-2010-report}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {The focus of this paper is on
reducing the complexity in verification by exploiting modularity at various levels: in specification, in verification, and structurally. \begin{itemize} \item For specifications, we use the modular
language CSP-OZ-DC, which allows us to decouple verification tasks concerning data from those concerning durations. \item At the verification level, we exploit modularity in theorem proving for rich
data structures and use this for invariant checking. \item At the structural level, we analyze possibilities for modular verification of systems consisting of various components which interact. \end
{itemize} We illustrate these ideas by automatically verifying safety properties of a case study from the European Train Control System standard, which extends previous examples by comprising a
complex track topology with lists of track segments and trains with different routes.}, TYPE = {AVACS Technical Report}, VOLUME = {66}, }
%0 Report %A Faber, Johannes %A Ihlemann, Carsten %A Jacobs, Swen %A Sofronie-Stokkermans, Viorica %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for
Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Automatic Verification of Parametric
Specifications with Complex Topologies : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-14A6-8 %F EDOC: 536341 %F OTHER: Local-ID:
C125716C0050FB51-2E8AD7BA67FF4CB5C12577B4004D8EF8-faber-ihlemann-jacobs-sofronie-2010-report %Y SFB/TR 14 AVACS %D 2010 %P 40 p. %X The focus of this paper is on reducing the complexity in
verification by exploiting modularity at various levels: in specification, in verification, and structurally. \begin{itemize} \item For specifications, we use the modular language CSP-OZ-DC, which
allows us to decouple verification tasks concerning data from those concerning durations. \item At the verification level, we exploit modularity in theorem proving for rich data structures and use
this for invariant checking. \item At the structural level, we analyze possibilities for modular verification of systems consisting of various components which interact. \end{itemize} We illustrate
these ideas by automatically verifying safety properties of a case study from the European Train Control System standard, which extends previous examples by comprising a complex track topology with
lists of track segments and trains with different routes. %B AVACS Technical Report %N 66 %@ false
YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia
J. Hoffart, F. M. Suchanek, K. Berberich and G. Weikum
Technical Report, 2010
We present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and
WordNet. It contains 80 million facts about 9.8 million entities. Human evaluation confirmed an accuracy of 95\% of the facts in YAGO2. In this paper, we present the extraction methodology, the
integration of the spatio-temporal dimension, and our knowledge representation SPOTL, an extension of the original SPO-triple model to time and space.
@techreport{Hoffart2010, TITLE = {{YAGO}2: A Spatially and Temporally Enhanced Knowledge Base from {Wikipedia}}, AUTHOR = {Hoffart, Johannes and Suchanek, Fabian M. and Berberich, Klaus and Weikum,
Gerhard}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-007}, NUMBER = {MPI-I-2010-5-007}, LOCALID = {Local-ID:
C1256DBF005F876D-37A86CDFCE56B71DC125784800386E6A-Hoffart2010}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {We
present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and WordNet.
It contains 80 million facts about 9.8 million entities. Human evaluation confirmed an accuracy of 95\% of the facts in YAGO2. In this paper, we present the extraction methodology, the integration of
the spatio-temporal dimension, and our knowledge representation SPOTL, an extension of the original SPO-triple model to time and space.}, TYPE = {Research Report}, }
%0 Report %A Hoffart, Johannes %A Suchanek, Fabian M. %A Berberich, Klaus %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information
Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society
%T YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-155B-A %F EDOC: 536412 %U http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2010-5-007 %F OTHER: Local-ID: C1256DBF005F876D-37A86CDFCE56B71DC125784800386E6A-Hoffart2010 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2010 %P 55 p. %X
We present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and
WordNet. It contains 80 million facts about 9.8 million entities. Human evaluation confirmed an accuracy of 95\% of the facts in YAGO2. In this paper, we present the extraction methodology, the
integration of the spatio-temporal dimension, and our knowledge representation SPOTL, an extension of the original SPO-triple model to time and space. %B Research Report
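The SPOTL representation mentioned in the abstract above extends the usual subject-predicate-object triple with a time and a location component. A minimal sketch of such a record follows; field types and the example fact are illustrative assumptions, not YAGO2's actual storage format.

    from typing import NamedTuple, Optional, Tuple

    class SPOTLFact(NamedTuple):
        subject: str
        predicate: str
        obj: str
        time: Optional[Tuple[int, int]]   # (begin, end), e.g. years
        location: Optional[str]           # e.g. a GeoNames entity

    fact = SPOTLFact("Albert_Einstein", "wonPrize", "Nobel_Prize_in_Physics",
                     time=(1921, 1921), location="Stockholm")
    print(fact)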
Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists
C.-C. Huang and T. Kavitha
Technical Report, 2010
We consider the problem of computing a maximum cardinality {\em popular} matching in a bipartite graph $G = (\A\cup\B, E)$ where each vertex $u \in \A\cup\B$ ranks its neighbors in a strict order of
preference. This is the same as an instance of the {\em stable marriage} problem with incomplete lists. A matching $M^*$ is said to be popular if there is no matching $M$ such that more vertices are
better off in $M$ than in $M^*$. \smallskip Popular matchings have been extensively studied in the case of one-sided preference lists, i.e., only vertices of $\A$ have preferences over their
neighbors while vertices in $\B$ have no preferences; polynomial time algorithms have been shown here to determine if a given instance admits a popular matching or not and if so, to compute one with
maximum cardinality. It has very recently been shown that for two-sided preference lists, the problem of determining if a given instance admits a popular matching or not is NP-complete. However this
hardness result assumes that preference lists have {\em ties}. When preference lists are {\em strict}, it is easy to show that popular matchings always exist since stable matchings always exist and
they are popular. But the complexity of computing a maximum cardinality popular matching was unknown. In this paper we show an $O(mn)$ algorithm for this problem, where $n = |\A| + |\B|$ and $m = |E|$.
@techreport{HuangKavitha2010, TITLE = {Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists}, AUTHOR = {Huang, Chien-Chung and Kavitha, Telikepalli}, LANGUAGE = {eng}, URL =
{http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-1-001}, NUMBER = {MPI-I-2010-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR =
{2010}, DATE = {2010}, ABSTRACT = {We consider the problem of computing a maximum cardinality {\em popular} matching in a bipartite graph $G = (\A\cup\B, E)$ where each vertex $u \in \A\cup\B$ ranks
its neighbors in a strict order of preference. This is the same as an instance of the {\em stable marriage} problem with incomplete lists. A matching $M^*$ is said to be popular if there is no
matching $M$ such that more vertices are better off in $M$ than in $M^*$. \smallskip Popular matchings have been extensively studied in the case of one-sided preference lists, i.e., only vertices of
$\A$ have preferences over their neighbors while vertices in $\B$ have no preferences; polynomial time algorithms have been shown here to determine if a given instance admits a popular matching or
not and if so, to compute one with maximum cardinality. It has very recently been shown that for two-sided preference lists, the problem of determining if a given instance admits a popular matching
or not is NP-complete. However this hardness result assumes that preference lists have {\em ties}. When preference lists are {\em strict}, it is easy to show that popular matchings always exist since
stable matchings always exist and they are popular. But the complexity of computing a maximum cardinality popular matching was unknown. In this paper we show an $O(mn)$ algorithm for this problem,
where $n = |\A| + |\B|$ and $m = |E|$.}, TYPE = {Research Report}, }
%0 Report %A Huang, Chien-Chung %A Kavitha, Telikepalli %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Maximum Cardinality Popular Matchings in Strict
Two-sided Preference Lists : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6668-9 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-1-001 %Y Max-Planck-Institut für
Informatik %C Saarbrücken %D 2010 %P 17 p. %X We consider the problem of computing a maximum cardinality {\em popular} matching in a bipartite graph $G = (\A\cup\B, E)$ where each vertex $u \in
\A\cup\B$ ranks its neighbors in a strict order of preference. This is the same as an instance of the {\em stable marriage} problem with incomplete lists. A matching $M^*$ is said to be popular if
there is no matching $M$ such that more vertices are better off in $M$ than in $M^*$. \smallskip Popular matchings have been extensively studied in the case of one-sided preference lists, i.e., only
vertices of $\A$ have preferences over their neighbors while vertices in $\B$ have no preferences; polynomial time algorithms have been shown here to determine if a given instance admits a popular
matching or not and if so, to compute one with maximum cardinality. It has very recently been shown that for two-sided preference lists, the problem of determining if a given instance admits a
popular matching or not is NP-complete. However this hardness result assumes that preference lists have {\em ties}. When preference lists are {\em strict}, it is easy to show that popular matchings
always exist since stable matchings always exist and they are popular. But the complexity of computing a maximum cardinality popular matching was unknown. In this paper we show an $O(mn)$ algorithm
for this problem, where $n = |\A| + |\B|$ and $m = |E|$. %B Research Report
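The popularity criterion defined in the abstract above is a pairwise vote between matchings: M* is popular if no matching M makes strictly more vertices better off than M* does. A small sketch of that vote follows; preference lists, the unmatched-ranks-worst convention, and the example instance are illustrative assumptions, and the O(mn) algorithm of the report is not reproduced.

    def compare_matchings(prefs, M_star, M):
        """Count vertices better off in M than in M_star and vice versa.
        prefs[v] lists v's neighbours, most preferred first; an unmatched vertex
        is assumed to prefer any neighbour over staying unmatched."""
        def rank(v, partner):
            return len(prefs[v]) if partner is None else prefs[v].index(partner)
        prefer_M = prefer_M_star = 0
        for v in prefs:
            r_new, r_old = rank(v, M.get(v)), rank(v, M_star.get(v))
            if r_new < r_old:
                prefer_M += 1
            elif r_old < r_new:
                prefer_M_star += 1
        return prefer_M, prefer_M_star

    # A = {a1, a2}, B = {b1}; both a-vertices want b1, and b1 prefers a1
    prefs = {"a1": ["b1"], "a2": ["b1"], "b1": ["a1", "a2"]}
    M_star = {"a1": "b1", "b1": "a1"}              # stable, hence popular
    M = {"a2": "b1", "b1": "a2"}
    print(compare_matchings(prefs, M_star, M))     # (1, 2): M_star wins the vote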
On Hierarchical Reasoning in Combinations of Theories
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010a
In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a
closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense
also have a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification.
@techreport{Ihlemann-Sofronie-Stokkermans-atr60-2010, TITLE = {On Hierarchical Reasoning in Combinations of Theories}, AUTHOR = {Ihlemann, Carsten and Sofronie-Stokkermans, Viorica}, LANGUAGE =
{eng}, ISSN = {1860-9821}, NUMBER = {ATR60}, LOCALID = {Local-ID: C125716C0050FB51-8E77AFE123C76116C1257782003FEBDA-Ihlemann-Sofronie-Stokkermans-atr60-2010}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR =
{2010}, DATE = {2010}, ABSTRACT = {In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a
theory extension parameterized by a closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which
are local in this extended sense have also a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many
(combinations of) theories important in verification.}, TYPE = {AVACS Technical Report}, VOLUME = {60}, }
%0 Report %A Ihlemann, Carsten %A Sofronie-Stokkermans, Viorica %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T On
Hierarchical Reasoning in Combinations of Theories : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-14B7-2 %F EDOC: 536339 %F OTHER: Local-ID:
C125716C0050FB51-8E77AFE123C76116C1257782003FEBDA-Ihlemann-Sofronie-Stokkermans-atr60-2010 %Y SFB/TR 14 AVACS %D 2010 %P 26 p. %X In this paper we study theory combinations over non-disjoint
signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground terms. We give criteria for
recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense have also a locality property and hence allow modular and
hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification. %B AVACS Technical Report %N 60 %@ false %U
System Description: H-PILoT (Version 1.9)
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010b
This system description provides an overview of H-PILoT (Hierarchical Proving by Instantiation in Local Theory extensions), a program for hierarchical reasoning in extensions of logical theories.
H-PILoT reduces deduction problems in the theory extension to deduction problems in the base theory. Specialized provers and standard SMT solvers can be used for testing the satisfiability of the
formulae obtained after the reduction. For a certain type of theory extension (namely for {\em local theory extensions}) this hierarchical reduction is sound and complete and -- if the formulae
obtained this way belong to a fragment decidable in the base theory -- H-PILoT provides a decision procedure for testing satisfiability of ground formulae, and can also be used for model generation.
@techreport{Ihlemann-Sofronie-Stokkermans-atr61-2010, TITLE = {System Description: H-{PILoT} (Version 1.9)}, AUTHOR = {Ihlemann, Carsten and Sofronie-Stokkermans, Viorica}, LANGUAGE = {eng}, ISSN =
{1860-9821}, NUMBER = {ATR61}, LOCALID = {Local-ID: C125716C0050FB51-5F53450808E13ED9C125778C00501AE6-Ihlemann-Sofronie-Stokkermans-atr61-2010}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2010}, DATE =
{2010}, ABSTRACT = {This system description provides an overview of H-PILoT (Hierarchical Proving by Instantiation in Local Theory extensions), a program for hierarchical reasoning in extensions of
logical theories. H-PILoT reduces deduction problems in the theory extension to deduction problems in the base theory. Specialized provers and standard SMT solvers can be used for testing the
satisfiability of the formulae obtained after the reduction. For a certain type of theory extension (namely for {\em local theory extensions}) this hierarchical reduction is sound and complete and --
if the formulae obtained this way belong to a fragment decidable in the base theory -- H-PILoT provides a decision procedure for testing satisfiability of ground formulae, and can also be used for
model generation.}, TYPE = {AVACS Technical Report}, VOLUME = {61}, }
%0 Report %A Ihlemann, Carsten %A Sofronie-Stokkermans, Viorica %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T System
Description: H-PILoT (Version 1.9) : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-14C5-2 %F EDOC: 536340 %F OTHER: Local-ID:
C125716C0050FB51-5F53450808E13ED9C125778C00501AE6-Ihlemann-Sofronie-Stokkermans-atr61-2010 %Y SFB/TR 14 AVACS %D 2010 %P 45 p. %X This system description provides an overview of H-PILoT (Hierarchical
Proving by Instantiation in Local Theory extensions), a program for hierarchical reasoning in extensions of logical theories. H-PILoT reduces deduction problems in the theory extension to deduction
problems in the base theory. Specialized provers and standard SMT solvers can be used for testing the satisfiability of the formulae obtained after the reduction. For a certain type of theory
extension (namely for {\em local theory extensions}) this hierarchical reduction is sound and complete and -- if the formulae obtained this way belong to a fragment decidable in the base theory --
H-PILoT provides a decision procedure for testing satisfiability of ground formulae, and can also be used for model generation. %B AVACS Technical Report %N 61 %@ false
Query Evaluation with Asymmetric Web Services
N. Preda, F. Suchanek, W. Yuan and G. Weikum
Technical Report, 2010
@techreport{PredaSuchanekYuanWeikum2011, TITLE = {Query Evaluation with Asymmetric Web Services}, AUTHOR = {Preda, Nicoleta and Suchanek, F. and Yuan, Wenjun and Weikum, Gerhard}, LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-004}, NUMBER = {MPI-I-2010-5-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR
= {2010}, DATE = {2010}, TYPE = {Research Report}, }
%0 Report %A Preda, Nicoleta %A Suchanek, F. %A Yuan, Wenjun %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI
for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Query
Evaluation with Asymmetric Web Services : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-659D-C %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-004 %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2010 %P 31 p. %B Research Report
Bonsai: Growing Interesting Small Trees
S. Seufert, S. Bedathur, J. Mestre and G. Weikum
Technical Report, 2010
Graphs are increasingly used to model a variety of loosely structured data such as biological or social networks and entity-relationships. Given this profusion of large-scale graph data, efficiently
discovering interesting substructures buried within is essential. These substructures are typically used in determining subsequent actions, such as conducting visual analytics by humans or designing
expensive biomedical experiments. In such settings, it is often desirable to constrain the size of the discovered results in order to directly control the associated costs. In this report, we address
the problem of finding cardinality-constrained connected subtrees from large node-weighted graphs that maximize the sum of weights of selected nodes. We provide an efficient constant-factor
approximation algorithm for this strongly NP-hard problem. Our techniques can be applied in a wide variety of application settings, for example in differential analysis of graphs, a problem that
frequently arises in bioinformatics but also has applications on the web.
@techreport{Seufert2010a, TITLE = {Bonsai: Growing Interesting Small Trees}, AUTHOR = {Seufert, Stephan and Bedathur, Srikanta and Mestre, Julian and Weikum, Gerhard}, LANGUAGE = {eng}, URL = {http:/
/domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-005}, NUMBER = {MPI-I-2010-5-005}, LOCALID = {Local-ID: C1256DBF005F876D-BC73995718B48415C12577E600538833-Seufert2010a}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Graphs are increasingly used to model a variety of loosely structured data such as
biological or social networks and entity-relationships. Given this profusion of large-scale graph data, efficiently discovering interesting substructures buried within is essential. These
substructures are typically used in determining subsequent actions, such as conducting visual analytics by humans or designing expensive biomedical experiments. In such settings, it is often
desirable to constrain the size of the discovered results in order to directly control the associated costs. In this report, we address the problem of finding cardinality-constrained connected
subtrees from large node-weighted graphs that maximize the sum of weights of selected nodes. We provide an efficient constant-factor approximation algorithm for this strongly NP-hard problem. Our
techniques can be applied in a wide variety of application settings, for example in differential analysis of graphs, a problem that frequently arises in bioinformatics but also has applications on
the web.}, TYPE = {Research Report}, }
%0 Report %A Seufert, Stephan %A Bedathur, Srikanta %A Mestre, Julian %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information
Systems, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T
Bonsai: Growing Interesting Small Trees : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-14D8-7 %F EDOC: 536383 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-005 %F
OTHER: Local-ID: C1256DBF005F876D-BC73995718B48415C12577E600538833-Seufert2010a %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2010 %P 32 p. %X Graphs are increasingly used to
model a variety of loosely structured data such as biological or social networks and entity-relationships. Given this profusion of large-scale graph data, efficiently discovering interesting
substructures buried within is essential. These substructures are typically used in determining subsequent actions, such as conducting visual analytics by humans or designing expensive biomedical
experiments. In such settings, it is often desirable to constrain the size of the discovered results in order to directly control the associated costs. In this report, we address the problem of
finding cardinality-constrained connected subtrees from large node-weighted graphs that maximize the sum of weights of selected nodes. We provide an efficient constant-factor approximation algorithm
for this strongly NP-hard problem. Our techniques can be applied in a wide variety of application settings, for example in differential analysis of graphs, a problem that frequently arises in
bioinformatics but also has applications on the web. %B Research Report
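The problem addressed in the abstract above -- a maximum-weight connected set of exactly k nodes, which always spans a tree -- can be stated concretely with a brute-force toy for tiny graphs. The sketch below only illustrates the problem definition on assumed example data; it is exponential and is not the constant-factor approximation algorithm of the report.

    from itertools import combinations

    def best_connected_k_set(adj, weights, k):
        """Exhaustively find the maximum-weight connected k-node set of a tiny graph."""
        def connected(nodes):
            nodes = set(nodes)
            stack, seen = [next(iter(nodes))], set()
            while stack:
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                stack.extend(v for v in adj[u] if v in nodes)
            return seen == nodes
        candidates = (c for c in combinations(adj, k) if connected(c))
        return max(candidates, key=lambda c: sum(weights[v] for v in c))

    adj = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}   # a small star
    weights = {"a": 5, "b": 1, "c": 4, "d": 3}
    print(best_connected_k_set(adj, weights, 2))    # ('a', 'b') with total weight 6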
On the saturation of YAGO
M. Suda, C. Weidenbach and P. Wischnewski
Technical Report, 2010
@techreport{SudaWischnewski2010, TITLE = {On the saturation of {YAGO}}, AUTHOR = {Suda, Martin and Weidenbach, Christoph and Wischnewski, Patrick}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-RG1-001}, NUMBER = {MPI-I-2010-RG1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010},
DATE = {2010}, TYPE = {Research Report}, }
%0 Report %A Suda, Martin %A Weidenbach, Christoph %A Wischnewski, Patrick %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society %T On the saturation of YAGO : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6584-2 %U http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2010-RG1-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2010 %P 50 p. %B Research Report
A Bayesian Approach to Manifold Topology Reconstruction
A. Tevs, M. Wand, I. Ihrke and H.-P. Seidel
Technical Report, 2010
In this paper, we investigate the problem of statistical reconstruction of piecewise linear manifold topology. Given a noisy, probably undersampled point cloud from a one- or two-manifold, the
algorithm reconstructs an approximated most likely mesh in a Bayesian sense from which the sample might have been taken. We incorporate statistical priors on the object geometry to improve the
reconstruction quality if additional knowledge about the class of original shapes is available. The priors can be formulated analytically or learned from example geometry with known manifold
tessellation. The statistical objective function is approximated by a linear programming / integer programming problem, for which a globally optimal solution is found. We apply the algorithm to a set
of 2D and 3D reconstruction examples, demonstrating that a statistics-based manifold reconstruction is feasible, and still yields plausible results in situations where sampling conditions are violated.
@techreport{TevsTechReport2009, TITLE = {A Bayesian Approach to Manifold Topology Reconstruction}, AUTHOR = {Tevs, Art and Wand, Michael and Ihrke, Ivo and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN
= {0946-011X}, NUMBER = {MPI-I-2009-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {In this paper, we
investigate the problem of statistical reconstruction of piecewise linear manifold topology. Given a noisy, probably undersampled point cloud from a one- or two-manifold, the algorithm reconstructs
an approximated most likely mesh in a Bayesian sense from which the sample might have been taken. We incorporate statistical priors on the object geometry to improve the reconstruction quality if
additional knowledge about the class of original shapes is available. The priors can be formulated analytically or learned from example geometry with known manifold tessellation. The statistical
objective function is approximated by a linear programming / integer programming problem, for which a globally optimal solution is found. We apply the algorithm to a set of 2D and 3D reconstruction
examples, demonstrating that a statistics-based manifold reconstruction is feasible, and still yields plausible results in situations where sampling conditions are violated.}, TYPE = {Research
Report}, }
%0 Report %A Tevs, Art %A Wand, Michael %A Ihrke, Ivo %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for
Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Graphics - Optics - Vision, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics,
Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Bayesian Approach to Manifold Topology Reconstruction : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-000F-1722-7 %F EDOC: 537282 %@ 0946-011X %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2010 %P 23 p. %X In this paper, we investigate the problem of statistical
reconstruction of piecewise linear manifold topology. Given a noisy, probably undersampled point cloud from a one- or two-manifold, the algorithm reconstructs an approximated most likely mesh in a
Bayesian sense from which the sample might have been taken. We incorporate statistical priors on the object geometry to improve the reconstruction quality if additional knowledge about the class of
original shapes is available. The priors can be formulated analytically or learned from example geometry with known manifold tessellation. The statistical objective function is approximated by a
linear programming / integer programming problem, for which a globally optimal solution is found. We apply the algorithm to a set of 2D and 3D reconstruction examples, demonstrating that a
statistics-based manifold reconstruction is feasible, and still yields plausible results in situations where sampling conditions are violated. %B Research Report
URDF: Efficient Reasoning in Uncertain RDF Knowledge Bases with Soft and Hard Rules
M. Theobald, M. Sozio, F. Suchanek and N. Nakashole
Technical Report, 2010
We present URDF, an efficient reasoning framework for graph-based, nonschematic RDF knowledge bases and SPARQL-like queries. URDF augments first-order reasoning by a combination of soft rules, with
Datalog-style recursive implications, and hard rules, in the shape of mutually exclusive sets of facts. It incorporates the common possible worlds semantics with independent base facts as it is
prevalent in most probabilistic database approaches, but also supports semantically more expressive, probabilistic first-order representations such as Markov Logic Networks. As knowledge extraction
on the Web is often an iterative (and inherently noisy) process, URDF explicitly targets the resolution of inconsistencies between the underlying RDF base facts and the inference rules. The core of our approach is a novel and efficient approximation algorithm for a generalized version of the Weighted MAX-SAT problem, allowing us to dynamically resolve such inconsistencies directly at query processing time. Our MAX-SAT algorithm has a worst-case running time of O(|C| · |S|), where |C| and |S| denote the number of facts in grounded soft and hard rules, respectively, and it comes with tight approximation guarantees with respect to the shape of the rules and the distribution of confidences of the facts they contain. Experiments over various benchmark settings confirm the high robustness and significantly improved runtime of our reasoning framework in comparison to state-of-the-art techniques for MCMC sampling such as MAP inference and MC-SAT.
@techreport{urdf-tr-2010, TITLE = {{URDF}: Efficient Reasoning in Uncertain {RDF} Knowledge Bases with Soft and Hard Rules}, AUTHOR = {Theobald, Martin and Sozio, Mauro and Suchanek, Fabian and
Nakashole, Ndapandula}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-002}, NUMBER = {MPI-I-2010-5-002}, LOCALID = {Local-ID:
C1256DBF005F876D-4F6C2407136ECAA6C125770E003634BE-urdf-tr-2010}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {We
present URDF, an efficient reasoning framework for graph-based, nonschematic RDF knowledge bases and SPARQL-like queries. URDF augments first-order reasoning by a combination of soft rules, with
Datalog-style recursive implications, and hard rules, in the shape of mutually exclusive sets of facts. It incorporates the common possible worlds semantics with independent base facts as it is
prevalent in most probabilistic database approaches, but also supports semantically more expressive, probabilistic first-order representations such as Markov Logic Networks. As knowledge extraction
on the Web often is an iterative (and inherently noisy) process, URDF explicitly targets the resolution of inconsistencies between the underlying RDF base facts and the inference rules. Core of our approach is a novel and efficient approximation algorithm for a generalized version of the Weighted MAX-SAT problem, allowing us to dynamically resolve such inconsistencies directly at query processing time. Our MAX-SAT algorithm has a worst-case running time of O(|C| · |S|), where |C| and |S| denote the number of facts in grounded soft and hard rules, respectively, and it comes with tight approximation guarantees with respect to the shape of the rules and the distribution of confidences of facts they contain. Experiments over various benchmark settings confirm a high robustness and significantly improved runtime of our reasoning framework in comparison to state-of-the-art techniques for MCMC sampling such as MAP inference and MC-SAT.}, TYPE = {Research Report}, }
%0 Report %A Theobald, Martin %A Sozio, Mauro %A Suchanek, Fabian %A Nakashole, Ndapandula %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information
Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society
%T URDF: Efficient Reasoning in Uncertain RDF Knowledge Bases with Soft and Hard Rules : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1556-3 %F EDOC: 536366 %U http://domino.mpi-inf.mpg.de
/internet/reports.nsf/NumberView/2010-5-002 %F OTHER: Local-ID: C1256DBF005F876D-4F6C2407136ECAA6C125770E003634BE-urdf-tr-2010 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2010
%P 48 p. %X We present URDF, an efficient reasoning framework for graph-based, nonschematic RDF knowledge bases and SPARQL-like queries. URDF augments first-order reasoning by a combination of soft
rules, with Datalog-style recursive implications, and hard rules, in the shape of mutually exclusive sets of facts. It incorporates the common possible worlds semantics with independent base facts as
it is prevalent in most probabilistic database approaches, but also supports semantically more expressive, probabilistic first-order representations such as Markov Logic Networks. As knowledge
extraction on the Web often is an iterative (and inherently noisy) process, URDF explicitly targets the resolution of inconsistencies between the underlying RDF base facts and the inference rules. Core of our approach is a novel and efficient approximation algorithm for a generalized version of the Weighted MAX-SAT problem, allowing us to dynamically resolve such inconsistencies directly at query processing time. Our MAX-SAT algorithm has a worst-case running time of O(|C| · |S|), where |C| and |S| denote the number of facts in grounded soft and hard rules, respectively, and it comes with tight approximation guarantees with respect to the shape of the rules and the distribution of confidences of facts they contain. Experiments over various benchmark settings confirm a high robustness and significantly improved runtime of our reasoning framework in comparison to state-of-the-art techniques for MCMC sampling such as MAP inference and MC-SAT. %B Research Report
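To make the kind of optimization URDF performs at query time concrete, here is a minimal, hedged sketch of a greedy heuristic for plain weighted MAX-SAT in Python. It is not URDF's algorithm (which handles a generalized problem with soft and hard rules and comes with specific approximation guarantees); the function names and the toy clauses are invented for the example.

```python
# Illustrative sketch only: a simple greedy assignment heuristic for weighted
# MAX-SAT. URDF's actual generalized MAX-SAT algorithm is not reproduced here.

def greedy_weighted_max_sat(clauses, variables):
    """clauses: list of (weight, [literals]); a literal is (var, polarity).
    Returns a full assignment var -> bool chosen greedily."""
    assignment = {}
    for var in variables:
        gain = {True: 0.0, False: 0.0}
        for weight, literals in clauses:
            # Skip clauses already satisfied by earlier choices.
            if any(assignment.get(v) == pol for v, pol in literals if v in assignment):
                continue
            for v, pol in literals:
                if v == var:
                    gain[pol] += weight
        assignment[var] = gain[True] >= gain[False]
    return assignment

def satisfied_weight(clauses, assignment):
    return sum(w for w, lits in clauses
               if any(assignment[v] == pol for v, pol in lits))

# Tiny example: three weighted clauses over two Boolean facts x and y.
clauses = [(2.0, [("x", True)]),
           (1.0, [("x", False), ("y", True)]),
           (3.0, [("y", False)])]
a = greedy_weighted_max_sat(clauses, ["x", "y"])
print(a, satisfied_weight(clauses, a))  # -> {'x': True, 'y': False} 5.0
```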
Scalable Phrase Mining for Ad-hoc Text Analytics
S. Bedathur, K. Berberich, J. Dittrich, N. Mamoulis and G. Weikum
Technical Report, 2009
@techreport{BedathurBerberichDittrichMamoulisWeikum2009, TITLE = {Scalable Phrase Mining for Ad-hoc Text Analytics}, AUTHOR = {Bedathur, Srikanta and Berberich, Klaus and Dittrich, Jens and Mamoulis,
Nikos and Weikum, Gerhard}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2009-5-006}, LOCALID = {Local-ID: C1256DBF005F876D-4E35301DBC58B9F7C12575A00044A942-TechReport-BBDMW2009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, TYPE = {Research Report}, }
%0 Report %A Bedathur, Srikanta %A Berberich, Klaus %A Dittrich, Jens %A Mamoulis, Nikos %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and
Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max
Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Scalable Phrase Mining for Ad-hoc Text Analytics : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-000F-194A-0 %F EDOC: 520425 %@ 0946-011X %F OTHER: Local-ID: C1256DBF005F876D-4E35301DBC58B9F7C12575A00044A942-TechReport-BBDMW2009 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %P 41 p. %B Research Report
Generalized intrinsic symmetry detection
A. Berner, M. Bokeloh, M. Wand, A. Schilling and H.-P. Seidel
Technical Report, 2009
In this paper, we address the problem of detecting partial symmetries in 3D objects. In contrast to previous work, our algorithm is able to match deformed symmetric parts: We first develop an
algorithm for the case of approximately isometric deformations, based on matching graphs of surface feature lines that are annotated with intrinsic geometric properties. The sensitivity to
non-isometry is controlled by tolerance parameters for each such annotation. Using large tolerance values for some of these annotations and a robust matching of the graph topology yields a more
general symmetry detection algorithm that can detect similarities in structures that have undergone strong deformations. This approach for the first time allows for detecting partial intrinsic as
well as more general, non-isometric symmetries. We evaluate the recognition performance of our technique for a number of synthetic and real-world scanner data sets.
@techreport{BernerBokelohWandSchillingSeidel2009, TITLE = {Generalized intrinsic symmetry detection}, AUTHOR = {Berner, Alexander and Bokeloh, Martin and Wand, Martin and Schilling, Andreas and
Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-005}, NUMBER = {MPI-I-2009-4-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r
Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {In this paper, we address the problem of detecting partial symmetries in 3D objects. In contrast to previous work,
our algorithm is able to match deformed symmetric parts: We first develop an algorithm for the case of approximately isometric deformations, based on matching graphs of surface feature lines that are
annotated with intrinsic geometric properties. The sensitivity to non-isometry is controlled by tolerance parameters for each such annotation. Using large tolerance values for some of these
annotations and a robust matching of the graph topology yields a more general symmetry detection algorithm that can detect similarities in structures that have undergone strong deformations. This
approach for the first time allows for detecting partial intrinsic as well as more general, non-isometric symmetries. We evaluate the recognition performance of our technique for a number of synthetic
and real-world scanner data sets.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Berner, Alexander %A Bokeloh, Martin %A Wand, Martin %A Schilling, Andreas %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for
Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Generalized intrinsic symmetry detection : %G eng %U http:/
/hdl.handle.net/11858/00-001M-0000-0014-666B-3 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-005 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %P 33
p. %X In this paper, we address the problem of detecting partial symmetries in 3D objects. In contrast to previous work, our algorithm is able to match deformed symmetric parts: We first develop an
algorithm for the case of approximately isometric deformations, based on matching graphs of surface feature lines that are annotated with intrinsic geometric properties. The sensitivity to
non-isometry is controlled by tolerance parameters for each such annotation. Using large tolerance values for some of these annotations and a robust matching of the graph topology yields a more
general symmetry detection algorithm that can detect similarities in structures that have undergone strong deformations. This approach for the first time allows for detecting partial intrinsic as
well as more general, non-isometric symmetries. We evaluate the recognition performance of our technique for a number of synthetic and real-world scanner data sets. %B Research Report /
Max-Planck-Institut für Informatik
Towards a Universal Wordnet by Learning from Combined Evidence
G. de Melo and G. Weikum
Technical Report, 2009
Lexical databases are invaluable sources of knowledge about words and their meanings, with numerous applications in areas like NLP, IR, and AI. We propose a methodology for the automatic construction
of a large-scale multilingual lexical database where words of many languages are hierarchically organized in terms of their meanings and their semantic relations to other words. This resource is
bootstrapped from WordNet, a well-known English-language resource. Our approach extends WordNet with around 1.5 million meaning links for 800,000 words in over 200 languages, drawing on evidence
extracted from a variety of resources including existing (monolingual) wordnets, (mostly bilingual) translation dictionaries, and parallel corpora. Graph-based scoring functions and statistical
learning techniques are used to iteratively integrate this information and build an output graph. Experiments show that this wordnet has a high level of precision and coverage, and that it can be
useful in applied tasks such as cross-lingual text classification.
@techreport{deMeloWeikum2009, TITLE = {Towards a Universal Wordnet by Learning from Combined Evidence}, AUTHOR = {de Melo, Gerard and Weikum, Gerhard}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-5-005}, NUMBER = {MPI-I-2009-5-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009},
DATE = {2009}, ABSTRACT = {Lexical databases are invaluable sources of knowledge about words and their meanings, with numerous applications in areas like NLP, IR, and AI. We propose a methodology for
the automatic construction of a large-scale multilingual lexical database where words of many languages are hierarchically organized in terms of their meanings and their semantic relations to other
words. This resource is bootstrapped from WordNet, a well-known English-language resource. Our approach extends WordNet with around 1.5 million meaning links for 800,000 words in over 200 languages,
drawing on evidence extracted from a variety of resources including existing (monolingual) wordnets, (mostly bilingual) translation dictionaries, and parallel corpora. Graph-based scoring functions
and statistical learning techniques are used to iteratively integrate this information and build an output graph. Experiments show that this wordnet has a high level of precision and coverage, and
that it can be useful in applied tasks such as cross-lingual text classification.}, TYPE = {Research Report}, }
%0 Report %A de Melo, Gerard %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck
Society %T Towards a Universal Wordnet by Learning from Combined Evidence : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-665C-5 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2009-5-005 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %P 32 p. %X Lexical databases are invaluable sources of knowledge about words and their meanings, with
numerous applications in areas like NLP, IR, and AI. We propose a methodology for the automatic construction of a large-scale multilingual lexical database where words of many languages are
hierarchically organized in terms of their meanings and their semantic relations to other words. This resource is bootstrapped from WordNet, a well-known English-language resource. Our approach
extends WordNet with around 1.5 million meaning links for 800,000 words in over 200 languages, drawing on evidence extracted from a variety of resources including existing (monolingual) wordnets,
(mostly bilingual) translation dictionaries, and parallel corpora. Graph-based scoring functions and statistical learning techniques are used to iteratively integrate this information and build an
output graph. Experiments show that this wordnet has a high level of precision and coverage, and that it can be useful in applied tasks such as cross-lingual text classification. %B Research Report
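As a rough illustration of the evidence-integration step described above, the following sketch aggregates weighted links from several resources into a single confidence score per (word, synset) pair. The report's actual graph-based scoring functions and statistical learning are not reproduced here; the additive scoring, the agreement bonus, and the example entries are all invented for the illustration.

```python
# Illustrative sketch only: aggregating weighted evidence edges from several
# resources into confidence scores for (foreign word, WordNet synset) links.
from collections import defaultdict

def score_links(evidence):
    """evidence: iterable of (word, synset, source, weight)."""
    scores = defaultdict(float)
    sources = defaultdict(set)
    for word, synset, source, weight in evidence:
        scores[(word, synset)] += weight
        sources[(word, synset)].add(source)
    # Mild bonus when independent resources agree on the same link (arbitrary choice).
    return {link: s * (1 + 0.1 * (len(sources[link]) - 1))
            for link, s in scores.items()}

evidence = [
    ("Haus(de)", "house.n.01", "dict:de-en", 0.8),
    ("Haus(de)", "house.n.01", "parallel-corpus", 0.5),
    ("Haus(de)", "firm.n.01", "dict:de-en", 0.2),
]
print(score_links(evidence))
```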
A shaped temporal filter camera
M. Fuchs, T. Chen, O. Wang, R. Raskar, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2009
@techreport{FuchsChenWangRaskarLenschSeidel2009, TITLE = {A shaped temporal filter camera}, AUTHOR = {Fuchs, Martin and Chen, Tongbo and Wang, Oliver and Raskar, Ramesh and Lensch, Hendrik P. A. and
Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-003}, NUMBER = {MPI-I-2009-4-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r
Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Fuchs, Martin %A Chen, Tongbo %A Wang, Oliver %A Raskar, Ramesh %A Lensch, Hendrik P. A. %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer
Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer
Graphics, MPI for Informatics, Max Planck Society %T A shaped temporal filter camera : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-666E-E %U http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2009-4-003 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %P 25 p. %B Research Report / Max-Planck-Institut für Informatik
MPI Informatics building model as data for your research
V. Havran, J. Zajac, J. Drahokoupil and H.-P. Seidel
Technical Report, 2009
In this report we describe the MPI Informatics building model that provides the data of the Max-Planck-Institut für Informatik (MPII) building. We present our motivation for this work and its relationship to the reproducibility of scientific research. We describe the dataset acquisition and creation, including the geometry, luminaires, surface reflectances, reference photographs, etc., needed to use this model in testing of algorithms. The created dataset can be used in computer graphics and beyond, in particular in global illumination algorithms with a focus on realistic and predictive image synthesis. Outside of computer graphics, it can be used as a general source of real-world geometry with an existing counterpart and is hence also suitable for computer vision.
@techreport{HavranZajacDrahokoupilSeidel2009, TITLE = {{MPI} Informatics building model as data for your research}, AUTHOR = {Havran, Vlastimil and Zajac, Jozef and Drahokoupil, Jiri and Seidel,
Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-004}, NUMBER = {MPI-I-2009-4-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {In this report we describe the MPI Informatics building model that provides the data of the Max-Planck-Institut f\"{u}r
Informatik (MPII) building. We present our motivation for this work and its relationship to reproducibility of a scientific research. We describe the dataset acquisition and creation including
geometry, luminaires, surface reflectances, reference photographs etc. needed to use this model in testing of algorithms. The created dataset can be used in computer graphics and beyond, in
particular in global illumination algorithms with focus on realistic and predictive image synthesis. Outside of computer graphics, it can be used as general source of real world geometry with an
existing counterpart and hence also suitable for computer vision.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Havran, Vlastimil %A Zajac, Jozef %A Drahokoupil, Jiri %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max
Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T MPI Informatics building model as data for your research : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-6665-F %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-004 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %P 113 p. %X In this report
we describe the MPI Informatics building model that provides the data of the Max-Planck-Institut f\"{u}r Informatik (MPII) building. We present our motivation for this work and its relationship to
reproducibility of a scientific research. We describe the dataset acquisition and creation including geometry, luminaires, surface reflectances, reference photographs etc. needed to use this model in
testing of algorithms. The created dataset can be used in computer graphics and beyond, in particular in global illumination algorithms with focus on realistic and predictive image synthesis. Outside
of computer graphics, it can be used as general source of real world geometry with an existing counterpart and hence also suitable for computer vision. %B Research Report / Max-Planck-Institut für Informatik
Deciding the Inductive Validity of Forall Exists* Queries
M. Horbach and C. Weidenbach
Technical Report, 2009a
We present a new saturation-based decidability result for inductive validity. Let $\Sigma$ be a finite signature in which all function symbols are at most unary and let $N$ be a satisfiable Horn
clause set without equality in which all positive literals are linear. If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a finitely saturating clause class, then it is decidable whether a sentence
of the form $\forall\exists^* (A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.
@techreport{HorbachWeidenbach2009, TITLE = {Deciding the Inductive Validity of Forall Exists* Queries}, AUTHOR = {Horbach, Matthias and Weidenbach, Christoph}, LANGUAGE = {eng}, NUMBER =
{MPI-I-2009-RG1-001}, LOCALID = {Local-ID: C125716C0050FB51-F9BA0666A42B8463C12576AF002882D7-Horbach2009TR1}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {We present a new saturation-based decidability
result for inductive validity. Let $\Sigma$ be a finite signature in which all function symbols are at most unary and let $N$ be a satisfiable Horn clause set without equality in which all positive
literals are linear. If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a finitely saturating clause class, then it is decidable whether a sentence of the form $\forall\exists^* (A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.}, }
%0 Report %A Horbach, Matthias %A Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Deciding the
Inductive Validity of Forall Exists* Queries : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A51-3 %F EDOC: 521099 %F OTHER: Local-ID:
C125716C0050FB51-F9BA0666A42B8463C12576AF002882D7-Horbach2009TR1 %D 2009 %X We present a new saturation-based decidability result for inductive validity. Let $\Sigma$ be a finite signature in which
all function symbols are at most unary and let $N$ be a satisfiable Horn clause set without equality in which all positive literals are linear. If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a
finitely saturating clause class, then it is decidable whether a sentence of the form $\forall\exists^* (A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.
Superposition for Fixed Domains
M. Horbach and C. Weidenbach
Technical Report, 2009b
Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of clauses. A satisfiable theory, saturated by superposition, implicitly defines a
minimal term-generated model for the theory. Proving universal properties with respect to a saturated theory directly leads to a modification of the minimal model's term-generated domain, as new
Skolem functions are introduced. For many applications, this is not desired. Therefore, we propose the first superposition calculus that can explicitly represent existentially quantified variables
and can thus compute with respect to a given domain. This calculus is sound and refutationally complete in the limit for a first-order fixed domain semantics. For saturated Horn theories and classes
of positive formulas, we can even employ the calculus to prove properties of the minimal model itself, going beyond the scope of known superposition-based approaches.
@techreport{Horbach2009TR2, TITLE = {Superposition for Fixed Domains}, AUTHOR = {Horbach, Matthias and Weidenbach, Christoph}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2009-RG1-005},
LOCALID = {Local-ID: C125716C0050FB51-5DDBBB1B134360CFC12576AF0028D299-Horbach2009TR2}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE =
{2009}, ABSTRACT = {Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of clauses. A satisfiable theory, saturated by superposition,
implicitly defines a minimal term-generated model for the theory. Proving universal properties with respect to a saturated theory directly leads to a modification of the minimal model's
term-generated domain, as new Skolem functions are introduced. For many applications, this is not desired. Therefore, we propose the first superposition calculus that can explicitly represent
existentially quantified variables and can thus compute with respect to a given domain. This calculus is sound and refutationally complete in the limit for a first-order fixed domain semantics. For
saturated Horn theories and classes of positive formulas, we can even employ the calculus to prove properties of the minimal model itself, going beyond the scope of known superposition-based
approaches.}, TYPE = {Research Report}, }
%0 Report %A Horbach, Matthias %A Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Superposition
for Fixed Domains : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A71-C %F EDOC: 521100 %F OTHER: Local-ID: C125716C0050FB51-5DDBBB1B134360CFC12576AF0028D299-Horbach2009TR2 %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %P 49 p. %X Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of
clauses. A satisfiable theory, saturated by superposition, implicitly defines a minimal term-generated model for the theory. Proving universal properties with respect to a saturated theory directly
leads to a modification of the minimal model's term-generated domain, as new Skolem functions are introduced. For many applications, this is not desired. Therefore, we propose the first superposition
calculus that can explicitly represent existentially quantified variables and can thus compute with respect to a given domain. This calculus is sound and refutationally complete in the limit for a
first-order fixed domain semantics. For saturated Horn theories and classes of positive formulas, we can even employ the calculus to prove properties of the minimal model itself, going beyond the
scope of known superposition-based approaches. %B Research Report %@ false
Decidability Results for Saturation-based Model Building
M. Horbach and C. Weidenbach
Technical Report, 2009c
Saturation-based calculi such as superposition can be successfully instantiated to decision procedures for many decidable fragments of first-order logic. In case of termination without generating an
empty clause, a saturated clause set implicitly represents a minimal model for all clauses, based on the underlying term ordering of the superposition calculus. In general, it is not decidable
whether a ground atom, a clause or even a formula holds in this minimal model of a satisfiable saturated clause set. Based on an extension of our superposition calculus for fixed domains with
syntactic disequality constraints in a non-equational setting, we describe models given by ARM (Atomic Representations of term Models) or DIG (Disjunctions of Implicit Generalizations)
representations as minimal models of finite saturated clause sets. This allows us to present several new decidability results for validity in such models. These results extend in particular the known
decidability results for ARM and DIG representations.
@techreport{HorbachWeidenbach2010, TITLE = {Decidability Results for Saturation-based Model Building}, AUTHOR = {Horbach, Matthias and Weidenbach, Christoph}, LANGUAGE = {eng}, ISSN = {0946-011X},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-RG1-004}, NUMBER = {MPI-I-2009-RG1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009}, DATE = {2009}, ABSTRACT = {Saturation-based calculi such as superposition can be successfully instantiated to decision procedures for many decidable fragments of first-order logic. In
case of termination without generating an empty clause, a saturated clause set implicitly represents a minimal model for all clauses, based on the underlying term ordering of the superposition
calculus. In general, it is not decidable whether a ground atom, a clause or even a formula holds in this minimal model of a satisfiable saturated clause set. Based on an extension of our
superposition calculus for fixed domains with syntactic disequality constraints in a non-equational setting, we describe models given by ARM (Atomic Representations of term Models) or DIG
(Disjunctions of Implicit Generalizations) representations as minimal models of finite saturated clause sets. This allows us to present several new decidability results for validity in such models.
These results extend in particular the known decidability results for ARM and DIG representations.}, TYPE = {Research Report}, }
%0 Report %A Horbach, Matthias %A Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Decidability
Results for Saturation-based Model Building : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6659-B %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-RG1-004 %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %P 38 p. %X Saturation-based calculi such as superposition can be successfully instantiated to decision procedures for many
decidable fragments of first-order logic. In case of termination without generating an empty clause, a saturated clause set implicitly represents a minimal model for all clauses, based on the
underlying term ordering of the superposition calculus. In general, it is not decidable whether a ground atom, a clause or even a formula holds in this minimal model of a satisfiable saturated clause
set. Based on an extension of our superposition calculus for fixed domains with syntactic disequality constraints in a non-equational setting, we describe models given by ARM (Atomic Representations
of term Models) or DIG (Disjunctions of Implicit Generalizations) representations as minimal models of finite saturated clause sets. This allows us to present several new decidability results for
validity in such models. These results extend in particular the known decidability results for ARM and DIG representations. %B Research Report %@ false
Acquisition and analysis of bispectral bidirectional reflectance distribution functions
M. B. Hullin, B. Ajdin, J. Hanika, H.-P. Seidel, J. Kautz and H. P. A. Lensch
Technical Report, 2009
In fluorescent materials, energy from a certain band of incident wavelengths is reflected or reradiated at larger wavelengths, i.e. with lower energy per photon. While fluorescent materials are
common in everyday life, they have received little attention in computer graphics. Especially, no bidirectional reflectance measurements of fluorescent materials have been available so far. In this
paper, we develop the concept of a bispectral BRDF, which extends the well-known concept of the bidirectional reflectance distribution function (BRDF) to account for energy transfer between
wavelengths. Using a bidirectional and bispectral measurement setup, we acquire reflectance data of a variety of fluorescent materials, including vehicle paints, paper and fabric. We show bispectral
renderings of the measured data and compare them with reduced versions of the bispectral BRDF, including the traditional RGB vector valued BRDF. Principal component analysis of the measured data
reveals that for some materials the fluorescent reradiation spectrum changes considerably over the range of directions. We further show that bispectral BRDFs can be efficiently acquired using an
acquisition strategy based on principal components.
@techreport{HullinAjdinHanikaSeidelKautzLensch2009, TITLE = {Acquisition and analysis of bispectral bidirectional reflectance distribution functions}, AUTHOR = {Hullin, Matthias B. and Ajdin, Boris
and Hanika, Johannes and Seidel, Hans-Peter and Kautz, Jan and Lensch, Hendrik P. A.}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-001}, NUMBER =
{MPI-I-2009-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {In fluorescent materials, energy from a certain band
of incident wavelengths is reflected or reradiated at larger wavelengths, i.e. with lower energy per photon. While fluorescent materials are common in everyday life, they have received little
attention in computer graphics. Especially, no bidirectional reflectance measurements of fluorescent materials have been available so far. In this paper, we develop the concept of a bispectral BRDF,
which extends the well-known concept of the bidirectional reflectance distribution function (BRDF) to account for energy transfer between wavelengths. Using a bidirectional and bispectral measurement
setup, we acquire reflectance data of a variety of fluorescent materials, including vehicle paints, paper and fabric. We show bispectral renderings of the measured data and compare them with reduced
versions of the bispectral BRDF, including the traditional RGB vector valued BRDF. Principal component analysis of the measured data reveals that for some materials the fluorescent reradiation
spectrum changes considerably over the range of directions. We further show that bispectral BRDFs can be efficiently acquired using an acquisition strategy based on principal components.}, TYPE =
{Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Hullin, Matthias B. %A Ajdin, Boris %A Hanika, Johannes %A Seidel, Hans-Peter %A Kautz, Jan %A Lensch, Hendrik P. A. %+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI
for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Acquisition and analysis of bispectral bidirectional reflectance distribution functions : %G eng %U
http://hdl.handle.net/11858/00-001M-0000-0014-6671-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2009
%P 25 p. %X In fluorescent materials, energy from a certain band of incident wavelengths is reflected or reradiated at larger wavelengths, i.e. with lower energy per photon. While fluorescent
materials are common in everyday life, they have received little attention in computer graphics. Especially, no bidirectional reflectance measurements of fluorescent materials have been available so
far. In this paper, we develop the concept of a bispectral BRDF, which extends the well-known concept of the bidirectional reflectance distribution function (BRDF) to account for energy transfer
between wavelengths. Using a bidirectional and bispectral measurement setup, we acquire reflectance data of a variety of fluorescent materials, including vehicle paints, paper and fabric. We show
bispectral renderings of the measured data and compare them with reduced versions of the bispectral BRDF, including the traditional RGB vector valued BRDF. Principal component analysis of the
measured data reveals that for some materials the fluorescent reradiation spectrum changes considerably over the range of directions. We further show that bispectral BRDFs can be efficiently acquired
using an acquisition strategy based on principal components. %B Research Report / Max-Planck-Institut für Informatik
MING: Mining Informative Entity-relationship Subgraphs
G. Kasneci, S. Elbassuoni and G. Weikum
Technical Report, 2009
@techreport{KasneciWeikumElbassuoni2009, TITLE = {{MING}: Mining Informative Entity-relationship Subgraphs}, AUTHOR = {Kasneci, Gjergji and Elbassuoni, Shady and Weikum, Gerhard}, LANGUAGE = {eng},
NUMBER = {MPI-I-2009-5-007}, LOCALID = {Local-ID: C1256DBF005F876D-E977DDB8EDAABEE6C12576320036DBD9-KasneciMING2009}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}
cken}, YEAR = {2009}, DATE = {2009}, TYPE = {Research Report}, }
%0 Report %A Kasneci, Gjergji %A Elbassuoni, Shady %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T MING: Mining Informative Entity-relationship Subgraphs : %G eng %U http://hdl.handle.net/
11858/00-001M-0000-000F-1932-4 %F EDOC: 520416 %F OTHER: Local-ID: C1256DBF005F876D-E977DDB8EDAABEE6C12576320036DBD9-KasneciMING2009 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D
2009 %P 32 p. %B Research Report
The RDF-3X Engine for Scalable Management of RDF Data
T. Neumann and G. Weikum
Technical Report, 2009
RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The "pay-as-you-go" nature of RDF
and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X
engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all
RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of subject-property-object triples and their binary and
unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent performance of processor caches. The query optimizer is able
to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good
support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred, and instead applied to compact differential indexes which are later
merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching,
manyway star-joins, and long path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude.
@techreport{Neumann2009report1, TITLE = {The {RDF}-3X Engine for Scalable Management of {RDF} Data}, AUTHOR = {Neumann, Thomas and Weikum, Gerhard}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER =
{MPI-I-2009-5-003}, LOCALID = {Local-ID: C1256DBF005F876D-AD3DBAFA6FB90DD2C1257593002FF3DF-Neumann2009report1}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009}, DATE = {2009}, ABSTRACT = {RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0
platforms. The ``pay-as-you-go'' nature of RDF and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including
long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query
processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of
subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent
performance of processor caches. The query optimizer is able to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths.
Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred, and
instead applied to compact differential indexes which are later merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF
triples and benchmark queries that include pattern matching, manyway star-joins, and long path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of
magnitude.}, TYPE = {Research Report}, }
%0 Report %A Neumann, Thomas %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck
Society %T The RDF-3X Engine for Scalable Management of RDF Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-195A-A %F EDOC: 520381 %@ 0946-011X %F OTHER: Local-ID:
C1256DBF005F876D-AD3DBAFA6FB90DD2C1257593002FF3DF-Neumann2009report1 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %X RDF is a data model for schema-free structured
information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The ``pay-as-you-go'' nature of RDF and the flexible pattern-matching capabilities
of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that
achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their
workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are
highly compressed, and the query processor can aggressively leverage fast merge joins with excellent performance of processor caches. The query optimizer is able to choose optimal join orders even
for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates
by means of a staging architecture: direct updates to the main database indexes are deferred, and instead applied to compact differential indexes which are later merged into the main indexes in a
batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching, manyway star-joins, and long
path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude. %B Research Report
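The abstract's central indexing idea, one sorted index per permutation of subject-property-object, can be illustrated with a toy in-memory store. The sketch below is only a didactic approximation: RDF-3X itself uses compressed clustered B+-trees, dictionary-encoded IDs, and merge joins, none of which appear here, and the class and method names are invented.

```python
# Illustrative sketch only: a toy in-memory version of "exhaustive permutation
# indexes" over RDF triples. None of RDF-3X's on-disk engineering is shown.
from itertools import permutations

class TripleStore:
    def __init__(self):
        # One index per permutation of (subject, predicate, object):
        # SPO, SOP, PSO, POS, OSP, OPS.
        self.orders = list(permutations((0, 1, 2)))
        self.indexes = {order: [] for order in self.orders}

    def add(self, s, p, o):
        triple = (s, p, o)
        for order in self.orders:
            self.indexes[order].append(tuple(triple[i] for i in order))

    def match(self, s=None, p=None, o=None):
        """Answer a single triple pattern; None acts as a wildcard.
        Picks the permutation whose leading positions are all bound, so the
        answers form a prefix range of one sorted index."""
        pattern = (s, p, o)
        bound = [i for i in range(3) if pattern[i] is not None]
        free = [i for i in range(3) if pattern[i] is None]
        order = tuple(bound + free)                 # bound components first
        prefix = tuple(pattern[i] for i in bound)
        for key in sorted(self.indexes[order]):
            if key[:len(prefix)] == prefix:
                # Map the permuted key back to (s, p, o) order.
                yield tuple(key[order.index(i)] for i in range(3))

store = TripleStore()
store.add("yago:Elvis", "rdf:type", "yago:Singer")
store.add("yago:Elvis", "yago:bornIn", "yago:Tupelo")
print(list(store.match(s="yago:Elvis")))  # all triples with this subject
```

Because every component order has its own index, any single triple pattern can be answered by a range scan, which is the property the abstract's "no index tuning" claim rests on.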
Coupling Knowledge Bases and Web Services for Active Knowledge
N. Preda, F. Suchanek, G. Kasneci, T. Neumann and G. Weikum
Technical Report, 2009
We present ANGIE, a system that can answer user queries by combining knowledge from a local database with knowledge retrieved from Web services. If a user poses a query that cannot be answered by the
local database alone, ANGIE calls the appropriate Web services to retrieve the missing information. In ANGIE, Web services act as dynamic components of the knowledge base that deliver knowledge on demand. To the user, this is fully transparent; the dynamically acquired knowledge is presented as if it were stored in the local knowledge base. We have developed an RDF-based model for the declarative definition of functions embedded in the local knowledge base. The results of available Web services are cast into RDF subgraphs. Parameter bindings are automatically constructed by ANGIE, services are invoked, and the semi-structured information returned by the services is dynamically integrated into the knowledge base. We have developed a query rewriting algorithm that determines one or more function compositions that need to be executed in order to evaluate a SPARQL-style user query. The key idea is that the local knowledge base can be used to guide the selection of values used as input parameters of function calls. This is in contrast to the conventional approaches in the literature, which would exhaustively materialize all values that can be used as binding values for the input parameters.
@techreport{PredaSuchanekKasneciNeumannWeikum2009, TITLE = {Coupling Knowledge Bases and Web Services for Active Knowledge}, AUTHOR = {Preda, Nicoleta and Suchanek, Fabian and Kasneci, Gjergji and
Neumann, Thomas and Weikum, Gerhard}, LANGUAGE = {eng}, NUMBER = {MPI-I-2009-5-004}, LOCALID = {Local-ID: C1256DBF005F876D-BF2AB4A39F925BC8C125759800444744-PredaSuchanekKasneciNeumannWeikum2009},
YEAR = {2009}, DATE = {2009}, ABSTRACT = {We present ANGIE, a system that can answer user queries by combining knowledge from a local database with knowledge retrieved from Web services. If a user
poses a query that cannot be answered by the local database alone, ANGIE calls the appropriate Web services to retrieve the missing information. In ANGIE, Web services act as dynamic components of the knowledge base that deliver knowledge on demand. To the user, this is fully transparent; the dynamically acquired knowledge is presented as if it were stored in the local knowledge base. We have developed an RDF-based model for declarative definition of functions embedded in the local knowledge base. The results of available Web services are cast into RDF subgraphs. Parameter bindings are automatically constructed by ANGIE, services are invoked, and the semi-structured information returned by the services is dynamically integrated into the knowledge base. We have developed a query rewriting algorithm that determines one or more function compositions that need to be executed in order to evaluate a SPARQL style user query. The key idea is that the local knowledge base can be used
to guide the selection of values used as input parameters of function calls. This is in contrast to the conventional approaches in the literature which would exhaustively materialize all values that
can be used as binding values for the input parameters.}, TYPE = {Research Reports}, }
%0 Report %A Preda, Nicoleta %A Suchanek, Fabian %A Kasneci, Gjergji %A Neumann, Thomas %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and
Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max
Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Coupling Knowledge Bases and Web Services for Active Knowledge : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-000F-1901-1 %F EDOC: 520423 %F OTHER: Local-ID: C1256DBF005F876D-BF2AB4A39F925BC8C125759800444744-PredaSuchanekKasneciNeumannWeikum2009 %D 2009 %X We present ANGIE, a system that can
answer user queries by combining knowledge from a local database with knowledge retrieved from Web services. If a user poses a query that cannot be answered by the local database alone, ANGIE calls
the appropriate Web services to retrieve the missing information. In ANGIE, Web services act as dynamic components of the knowledge base that deliver knowledge on demand. To the user, this is fully transparent; the dynamically acquired knowledge is presented as if it were stored in the local knowledge base. We have developed an RDF-based model for declarative definition of functions embedded in the local knowledge base. The results of available Web services are cast into RDF subgraphs. Parameter bindings are automatically constructed by ANGIE, services are invoked, and the semi-structured information returned by the services is dynamically integrated into the knowledge base. We have developed a query rewriting algorithm that determines one or more function compositions that need to be
executed in order to evaluate a SPARQL style user query. The key idea is that the local knowledge base can be used to guide the selection of values used as input parameters of function calls. This is
in contrast to the conventional approaches in the literature which would exhaustively materialize all values that can be used as binding values for the input parameters. %B Research Reports
Generating Concise and Readable Summaries of XML documents
M. Ramanath, K. Sarath Kumar and G. Ifrim
Technical Report, 2009
@techreport{Ramanath2008a, TITLE = {Generating Concise and Readable Summaries of {XML} documents}, AUTHOR = {Ramanath, Maya and Sarath Kumar, Kondreddi and Ifrim, Georgiana}, LANGUAGE = {eng}, NUMBER
= {MPI-I-2009-5-002}, LOCALID = {Local-ID: C1256DBF005F876D-EA355A84178BB514C12575BA002A90E0-Ramanath2008}, YEAR = {2009}, DATE = {2009}, TYPE = {Research Reports}, }
%0 Report %A Ramanath, Maya %A Sarath Kumar, Kondreddi %A Ifrim, Georgiana %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Generating Concise and Readable Summaries of XML documents : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-000F-1915-6 %F EDOC: 520419 %F OTHER: Local-ID: C1256DBF005F876D-EA355A84178BB514C12575BA002A90E0-Ramanath2008 %D 2009 %B Research Reports
Constraint Solving for Interpolation
A. Rybalchenko and V. Sofronie-Stokkermans
Technical Report, 2009
@techreport{Rybalchenko-Sofronie-Stokkermans-2009, TITLE = {Constraint Solving for Interpolation}, AUTHOR = {Rybalchenko, Andrey and Sofronie-Stokkermans, Viorica}, LANGUAGE = {eng}, LOCALID =
{Local-ID: C125716C0050FB51-7BE33255DCBCF2AAC1257650004B7C65-Rybalchenko-Sofronie-Stokkermans-2009}, YEAR = {2009}, DATE = {2009}, }
%0 Report %A Rybalchenko, Andrey %A Sofronie-Stokkermans, Viorica %+ Automation of Logic, MPI for Informatics, Max Planck Society %T Constraint Solving for Interpolation : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-000F-1A4A-6 %F EDOC: 521091 %F OTHER: Local-ID: C125716C0050FB51-7BE33255DCBCF2AAC1257650004B7C65-Rybalchenko-Sofronie-Stokkermans-2009 %D 2009
A Higher-order Structure Tensor
T. Schultz, J. Weickert and H.-P. Seidel
Technical Report, 2009
Structure tensors are a common tool for orientation estimation in image processing and computer vision. We present a generalization of the traditional second-order model to a higher-order structure
tensor (HOST), which is able to model more than one significant orientation, as found in corners, junctions, and multi-channel images. We provide a theoretical analysis and a number of mathematical
tools that facilitate practical use of the HOST, visualize it using a novel glyph for higher-order tensors, and demonstrate how it can be applied in an improved integrated edge, corner, and junction
@techreport{SchultzlWeickertSeidel2007, TITLE = {A Higher-order Structure Tensor}, AUTHOR = {Schultz, Thomas and Weickert, Joachim and Seidel, Hans-Peter}, LANGUAGE = {eng}, NUMBER =
{MPI-I-2007-4-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Structure tensors are a common tool for orientation
estimation in image processing and computer vision. We present a generalization of the traditional second-order model to a higher-order structure tensor (HOST), which is able to model more than one
significant orientation, as found in corners, junctions, and multi-channel images. We provide a theoretical analysis and a number of mathematical tools that facilitate practical use of the HOST,
visualize it using a novel glyph for higher-order tensors, and demonstrate how it can be applied in an improved integrated edge, corner, and junction}, TYPE = {Research Report}, }
%0 Report %A Schultz, Thomas %A Weickert, Joachim %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics,
Max Planck Society %T A Higher-order Structure Tensor : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-13BC-7 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %X
Structure tensors are a common tool for orientation estimation in image processing and computer vision. We present a generalization of the traditional second-order model to a higher-order structure
tensor (HOST), which is able to model more than one significant orientation, as found in corners, junctions, and multi-channel images. We provide a theoretical analysis and a number of mathematical
tools that facilitate practical use of the HOST, visualize it using a novel glyph for higher-order tensors, and demonstrate how it can be applied in an improved integrated edge, corner, and junction
%B Research Report
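For readers unfamiliar with the baseline that the report generalizes, the following sketch computes the classical second-order structure tensor of a 2D image with NumPy and SciPy (both assumed available). The higher-order structure tensor (HOST) itself is not implemented here; the window size and the edge example are arbitrary.

```python
# Illustrative sketch only: the classical *second-order* structure tensor for a
# 2D image. The report's higher-order generalization (HOST) is not shown.
import numpy as np
from scipy.ndimage import uniform_filter

def structure_tensor_2d(image, window=5):
    """Return the per-pixel components (Jxx, Jxy, Jyy) of the 2x2 structure tensor."""
    Iy, Ix = np.gradient(image.astype(float))      # image gradients (rows, cols)
    Jxx, Jxy, Jyy = Ix * Ix, Ix * Iy, Iy * Iy       # outer product of the gradient
    # Local averaging over a box window (a Gaussian window is the more common choice).
    return tuple(uniform_filter(J, size=window) for J in (Jxx, Jxy, Jyy))

# Example: orientation "coherence" from the tensor's eigenvalue gap.
img = np.zeros((32, 32))
img[:, 16:] = 1.0                                   # image with one vertical edge
Jxx, Jxy, Jyy = structure_tensor_2d(img)
coherence = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2)
print(float(coherence.max()))                        # largest response sits on the edge
```

The second-order tensor can only encode one dominant orientation per pixel; the report's motivation is precisely that corners, junctions, and multi-channel data need more than that.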
Optical reconstruction of detailed animatable human body models
C. Stoll
Technical Report, 2009
@techreport{Stoll2009, TITLE = {Optical reconstruction of detailed animatable human body models}, AUTHOR = {Stoll, Carsten}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf
/NumberView/2009-4-006}, NUMBER = {MPI-I-2009-4-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, TYPE = {Research Report /
Max-Planck-Institut für Informatik}, }
%0 Report %A Stoll, Carsten %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Optical reconstruction of detailed animatable human body models : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-665F-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-006 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2009 %P 37 p. %B Research Report
/ Max-Planck-Institut für Informatik
Contextual Rewriting
C. Weidenbach and P. Wischnewski
Technical Report, 2009
@techreport{WischnewskiWeidenbach2009, TITLE = {Contextual Rewriting}, AUTHOR = {Weidenbach, Christoph and Wischnewski, Patrick}, LANGUAGE = {eng}, NUMBER = {MPI-I-2009-RG1-002}, LOCALID = {Local-ID:
C125716C0050FB51-DD89BAB0441DE797C125757F0034B8CB-WeidenbachWischnewskiReport2009}, YEAR = {2009}, DATE = {2009}, }
%0 Report %A Weidenbach, Christoph %A Wischnewski, Patrick %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Contextual
Rewriting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A4C-2 %F EDOC: 521106 %F OTHER: Local-ID: C125716C0050FB51-DD89BAB0441DE797C125757F0034B8CB-WeidenbachWischnewskiReport2009 %D 2009
Characterizing the performance of Flash memory storage devices and its impact on algorithm design
D. Ajwani, I. Malinger, U. Meyer and S. Toledo
Technical Report, 2008
Initially used in digital audio players, digital cameras, mobile phones, and USB memory sticks, flash memory may become the dominant form of end-user storage in mobile computing, either completely
replacing the magnetic hard disks or being an additional secondary storage. We study the design of algorithms and data structures that can exploit the flash memory devices better. For this, we
characterize the performance of NAND flash based storage devices, including many solid state disks. We show that these devices have better random read performance than hard disks, but much worse
random write performance. We also analyze the effect of misalignments, aging and past I/O patterns etc. on the performance obtained on these devices. We show that despite the similarities between
flash memory and RAM (fast random reads) and between flash disk and hard disk (both are block based devices), the algorithms designed in the RAM model or the external memory model do not realize the
full potential of the flash memory devices. We later give some broad guidelines for designing algorithms which can exploit the comparative advantages of both a flash memory device and a hard disk,
when used together.
@techreport{AjwaniMalingerMeyerToledo2008, TITLE = {Characterizing the performance of Flash memory storage devices and its impact on algorithm design}, AUTHOR = {Ajwani, Deepak and Malinger, Itay and
Meyer, Ulrich and Toledo, Sivan}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-1-001}, NUMBER = {MPI-I-2008-1-001}, INSTITUTION = {Max-Planck-Institut f
{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {Initially used in digital audio players, digital cameras, mobile phones, and USB memory sticks, flash memory
may become the dominant form of end-user storage in mobile computing, either completely replacing the magnetic hard disks or being an additional secondary storage. We study the design of algorithms
and data structures that can exploit the flash memory devices better. For this, we characterize the performance of NAND flash based storage devices, including many solid state disks. We show that
these devices have better random read performance than hard disks, but much worse random write performance. We also analyze the effect of misalignments, aging and past I/O patterns etc. on the
performance obtained on these devices. We show that despite the similarities between flash memory and RAM (fast random reads) and between flash disk and hard disk (both are block based devices), the
algorithms designed in the RAM model or the external memory model do not realize the full potential of the flash memory devices. We later give some broad guidelines for designing algorithms which can
exploit the comparative advantages of both a flash memory device and a hard disk, when used together.}, TYPE = {Research Report}, }
%0 Report %A Ajwani, Deepak %A Malinger, Itay %A Meyer, Ulrich %A Toledo, Sivan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity,
MPI for Informatics, Max Planck Society External Organizations %T Characterizing the performance of Flash memory storage devices and its impact on algorithm design : %G eng %U http://hdl.handle.net/
11858/00-001M-0000-0014-66C7-6 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-1-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2008 %P 36 p. %X Initially
used in digital audio players, digital cameras, mobile phones, and USB memory sticks, flash memory may become the dominant form of end-user storage in mobile computing, either completely replacing
the magnetic hard disks or being an additional secondary storage. We study the design of algorithms and data structures that can exploit the flash memory devices better. For this, we characterize the
performance of NAND flash based storage devices, including many solid state disks. We show that these devices have better random read performance than hard disks, but much worse random write
performance. We also analyze the effect of misalignments, aging and past I/O patterns etc. on the performance obtained on these devices. We show that despite the similarities between flash memory and
RAM (fast random reads) and between flash disk and hard disk (both are block based devices), the algorithms designed in the RAM model or the external memory model do not realize the full potential of
the flash memory devices. We later give some broad guidelines for designing algorithms which can exploit the comparative advantages of both a flash memory device and a hard disk, when used together.
%B Research Report
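The characterization above rests on timing block reads at random versus sequential offsets. The sketch below is a heavily simplified illustration of such a microbenchmark, not the authors' benchmark suite; the file path, block size, and request count are assumptions, and a serious measurement would also time writes and bypass the OS page cache.

```python
# Heavily simplified sketch of a random-vs-sequential read microbenchmark,
# in the spirit of the device characterization above. NOT the authors'
# benchmark suite; path, block size, and count are assumptions, and a real
# measurement would also time writes and bypass the OS page cache.
import os, random, time

def time_reads(path, block_size=4096, count=1000, random_access=True):
    """Time `count` block reads from `path` and return seconds elapsed."""
    max_block = max(1, os.path.getsize(path) // block_size)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        offset = 0
        for _ in range(count):
            if random_access:
                offset = random.randrange(max_block) * block_size
            os.lseek(fd, offset, os.SEEK_SET)
            os.read(fd, block_size)
            offset += block_size
        return time.perf_counter() - start
    finally:
        os.close(fd)

# Hypothetical usage: compare time_reads(p, random_access=True) with
# time_reads(p, random_access=False) on a flash device and on a hard disk.
```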
Prototype Implementation of the Algebraic Kernel
E. Berberich, M. Hemmer, M. Karavelas, S. Pion, M. Teillaud and E. Tsigaridas
Technical Report, 2008
In this report we describe the current progress with respect to prototype implementations of algebraic kernels within the ACS project. More specifically, we report on: (1) the Cgal package
Algebraic_kernel_for_circles_2_2 aimed at providing the necessary algebraic functionality required for treating circular arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic
functionality in the SYNAPS library; (3) the NumeriX library (part of the EXACUS project) which is a prototype implementation of a set of algebraic tools on univariate polynomials, needed to build an
algebraic kernel and (4) a rough CGAL-like prototype implementation of a set of algebraic tools on univariate polynomials.
@techreport{ACS-TR-121202-01, TITLE = {Prototype Implementation of the Algebraic Kernel}, AUTHOR = {Berberich, Eric and Hemmer, Michael and Karavelas, Menelaos and Pion, Sylvain and Teillaud, Monique
and Tsigaridas, Elias}, LANGUAGE = {eng}, NUMBER = {ACS-TR-121202-01}, INSTITUTION = {University of Groningen}, ADDRESS = {Groningen}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {In this report we
describe the current progress with respect to prototype implementations of algebraic kernels within the ACS project. More specifically, we report on: (1) the Cgal package
Algebraic_kernel_for_circles_2_2 aimed at providing the necessary algebraic functionality required for treating circular arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic
functionality in the SYNAPS library; (3) the NumeriX library (part of the EXACUS project) which is a prototype implementation of a set of algebraic tools on univariate polynomials, needed to build an
algebraic kernel and (4) a rough CGAL-like prototype implementation of a set of algebraic tools on univariate polynomials.}, }
%0 Report %A Berberich, Eric %A Hemmer, Michael %A Karavelas, Menelaos %A Pion, Sylvain %A Teillaud, Monique %A Tsigaridas, Elias %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External
Organizations %T Prototype Implementation of the Algebraic Kernel : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-E387-2 %Y University of Groningen %C Groningen %D 2008 %X In this report we
describe the current progress with respect to prototype implementations of algebraic kernels within the ACS project. More specifically, we report on: (1) the Cgal package
Algebraic_kernel_for_circles_2_2 aimed at providing the necessary algebraic functionality required for treating circular arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic
functionality in the SYNAPS library; (3) the NumeriX library (part of the EXACUS project) which is a prototype implementation of a set of algebraic tools on univariate polynomials, needed to build an
algebraic kernel and (4) a rough CGAL-like prototype implementation of a set of algebraic tools on univariate polynomials. %U http://www.researchgate.net/publication/
Slippage Features
M. Bokeloh, A. Berner, M. Wand, H.-P. Seidel and A. Schilling
Technical Report, 2008
@techreport{Bokeloh2008, TITLE = {Slippage Features}, AUTHOR = {Bokeloh, Martin and Berner, Alexander and Wand, Michael and Seidel, Hans-Peter and Schilling, Andreas}, LANGUAGE = {eng}, ISSN =
{0946-3852}, URL = {urn:nbn:de:bsz:21-opus-33880}, NUMBER = {WSI-2008-03}, INSTITUTION = {Wilhelm-Schickard-Institut / Universit{\"a}t T{\"u}bingen}, ADDRESS = {T{\"u}bingen}, YEAR = {2008}, DATE =
{2008}, TYPE = {WSI}, VOLUME = {2008-03}, }
%0 Report %A Bokeloh, Martin %A Berner, Alexander %A Wand, Michael %A Seidel, Hans-Peter %A Schilling, Andreas %+ External Organizations External Organizations Computer Graphics, MPI for Informatics,
Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Slippage Features : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-D3FC-F %U
urn:nbn:de:bsz:21-opus-33880 %Y Wilhelm-Schickard-Institut / Universität Tübingen %C Tübingen %D 2008 %P 17 p. %B WSI %N 2008-03 %@ false %U http://nbn-resolving.de/
Data Modifications and Versioning in Trio
A. Das Sarma, M. Theobald and J. Widom
Technical Report, 2008
@techreport{ilpubs-849, TITLE = {Data Modifications and Versioning in Trio}, AUTHOR = {Das Sarma, Anish and Theobald, Martin and Widom, Jennifer}, LANGUAGE = {eng}, URL = {http://
ilpubs.stanford.edu:8090/849/}, NUMBER = {ILPUBS-849}, INSTITUTION = {Stanford University Infolab}, ADDRESS = {Stanford, CA}, YEAR = {2008}, TYPE = {Technical Report}, }
%0 Report %A Das Sarma, Anish %A Theobald, Martin %A Widom, Jennifer %+ External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T
Data Modifications and Versioning in Trio : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-AED6-D %U http://ilpubs.stanford.edu:8090/849/ %Y Stanford University Infolab %C Stanford, CA %D
2008 %B Technical Report
Integrating Yago into the suggested upper merged ontology
G. de Melo, F. Suchanek and A. Pease
Technical Report, 2008
Ontologies are becoming more and more popular as background knowledge for intelligent applications. Up to now, there has been a schism between manually assembled, highly axiomatic ontologies and
large, automatically constructed knowledge bases. This report discusses how the two worlds can be brought together by combining the high-level axiomatizations from the Standard Upper Merged Ontology
(SUMO) with the extensive world knowledge of the YAGO ontology. On the theoretical side, it analyses the differences between the knowledge representation in YAGO and SUMO. On the practical side, this
report explains how the two resources can be merged. This yields a new large-scale formal ontology, which provides information about millions of entities such as people, cities, organizations, and
companies. This report is the detailed version of our paper at ICTAI 2008.
@techreport{deMeloSuchanekPease2008, TITLE = {Integrating Yago into the suggested upper merged ontology}, AUTHOR = {de Melo, Gerard and Suchanek, Fabian and Pease, Adam}, LANGUAGE = {eng}, URL =
{http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-003}, NUMBER = {MPI-I-2008-5-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR =
{2008}, DATE = {2008}, ABSTRACT = {Ontologies are becoming more and more popular as background knowledge for intelligent applications. Up to now, there has been a schism between manually assembled,
highly axiomatic ontologies and large, automatically constructed knowledge bases. This report discusses how the two worlds can be brought together by combining the high-level axiomatizations from the
Standard Upper Merged Ontology (SUMO) with the extensive world knowledge of the YAGO ontology. On the theoretical side, it analyses the differences between the knowledge representation in YAGO and
SUMO. On the practical side, this report explains how the two resources can be merged. This yields a new large-scale formal ontology, which provides information about millions of entities such as
people, cities, organizations, and companies. This report is the detailed version of our paper at ICTAI 2008.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A de Melo, Gerard %A Suchanek, Fabian %A Pease, Adam %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics,
Max Planck Society External Organizations %T Integrating Yago into the suggested upper merged ontology : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-66AB-6 %U http://domino.mpi-inf.mpg.de
/internet/reports.nsf/NumberView/2008-5-003 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2008 %P 31 p. %X Ontologies are becoming more and more popular as background knowledge
for intelligent applications. Up to now, there has been a schism between manually assembled, highly axiomatic ontologies and large, automatically constructed knowledge bases. This report discusses
how the two worlds can be brought together by combining the high-level axiomatizations from the Standard Upper Merged Ontology (SUMO) with the extensive world knowledge of the YAGO ontology. On the
theoretical side, it analyses the differences between the knowledge representation in YAGO and SUMO. On the practical side, this report explains how the two resources can be merged. This yields a new
large-scale formal ontology, which provides information about millions of entities such as people, cities, organizations, and companies. This report is the detailed version of our paper at ICTAI
2008. %B Research Report / Max-Planck-Institut für Informatik
Labelled splitting
A. L. Fietzke and C. Weidenbach
Technical Report, 2008
We define a superposition calculus with explicit splitting and an explicit, new backtracking rule on the basis of labelled clauses. For the first time we show a superposition calculus with explicit
backtracking rule sound and complete. The new backtracking rule advances backtracking with branch condensing known from SPASS. An experimental evaluation of an implementation of the new rule shows
that it improves considerably the previous SPASS splitting implementation. Finally, we discuss the relationship between labelled first-order splitting and DPLL style splitting with intelligent
backtracking and clause learning.
@techreport{FietzkeWeidenbach2008, TITLE = {Labelled splitting}, AUTHOR = {Fietzke, Arnaud Luc and Weidenbach, Christoph}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2008-RG1-001}, NUMBER = {MPI-I-2008-RG1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {We define a
superposition calculus with explicit splitting and an explicit, new backtracking rule on the basis of labelled clauses. For the first time we show a superposition calculus with explicit backtracking
rule sound and complete. The new backtracking rule advances backtracking with branch condensing known from SPASS. An experimental evaluation of an implementation of the new rule shows that it
improves considerably the previous SPASS splitting implementation. Finally, we discuss the relationship between labelled first-order splitting and DPLL style splitting with intelligent backtracking
and clause learning.}, TYPE = {Research Report}, }
%0 Report %A Fietzke, Arnaud Luc %A Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Labelled
splitting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6674-D %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-RG1-001 %Y Max-Planck-Institut für Informatik %C
Saarbrücken %D 2008 %P 45 p. %X We define a superposition calculus with explicit splitting and an explicit, new backtracking rule on the basis of labelled clauses. For the first time we show a
superposition calculus with explicit backtracking rule sound and complete. The new backtracking rule advances backtracking with branch condensing known from SPASS. An experimental evaluation of an
implementation of the new rule shows that it improves considerably the previous SPASS splitting implementation. Finally, we discuss the relationship between labelled first-order splitting and DPLL
style splitting with intelligent backtracking and clause learning. %B Research Report
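For readers unfamiliar with the DPLL-style splitting that the abstract relates the calculus to, the following sketch shows plain propositional DPLL with chronological backtracking. It is only a point of comparison, not the report's labelled first-order splitting calculus or the SPASS implementation.

```python
# Plain propositional DPLL with chronological backtracking, shown only as
# a point of comparison with the labelled first-order splitting above.
# NOT the report's calculus or SPASS code; no unit propagation or learning.
def dpll(clauses, assignment=None):
    """clauses: list of lists of non-zero ints (negative = negated variable).
    Returns a satisfying assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                                  # clause satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                               # conflict: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                             # all clauses satisfied
    var = abs(simplified[0][0])                       # split on an open variable
    for value in (True, False):                       # try both branches
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None                                       # both branches failed
```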
STAR: Steiner tree approximation in relationship-graphs
G. Kasneci, M. Ramanath, M. Sozio, F. Suchanek and G. Weikum
Technical Report, 2008
Large-scale graphs and networks are abundant in modern information systems: entity-relationship graphs over relational data or Web-extracted entities, biological networks, social online communities,
knowledge bases, and many more. Often such data comes with expressive node and edge labels that allow an interpretation as a semantic graph, and edge weights that reflect the strengths of semantic
relations between entities. Finding close relationships between a given set of two, three, or more entities is an important building block for many search, ranking, and analysis tasks. From an
algorithmic point of view, this translates into computing the best Steiner trees between the given nodes, a classical NP-hard problem. In this paper, we present a new approximation algorithm, coined
STAR, for relationship queries over large graphs that do not fit into memory. We prove that for n query entities, STAR yields an O(log(n))-approximation of the optimal Steiner tree, and show that in
practical cases the results returned by STAR are qualitatively better than the results returned by a classical 2-approximation algorithm. We then describe an extension to our algorithm to return the
top-k Steiner trees. Finally, we evaluate our algorithm over both main-memory as well as completely disk-resident graphs containing millions of nodes. Our experiments show that STAR outperforms the
best state-of-the-art approaches and returns qualitatively better results.
@techreport{KasneciRamanathSozioSuchanekWeikum2008, TITLE = {{STAR}: Steiner tree approximation in relationship-graphs}, AUTHOR = {Kasneci, Gjergji and Ramanath, Maya and Sozio, Mauro and Suchanek,
Fabian and Weikum, Gerhard}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-001}, NUMBER = {MPI-I-2008-5-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r
Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {Large-scale graphs and networks are abundant in modern information systems: entity-relationship graphs over
relational data or Web-extracted entities, biological networks, social online communities, knowledge bases, and many more. Often such data comes with expressive node and edge labels that allow an
interpretation as a semantic graph, and edge weights that reflect the strengths of semantic relations between entities. Finding close relationships between a given set of two, three, or more entities
is an important building block for many search, ranking, and analysis tasks. From an algorithmic point of view, this translates into computing the best Steiner trees between the given nodes, a
classical NP-hard problem. In this paper, we present a new approximation algorithm, coined STAR, for relationship queries over large graphs that do not fit into memory. We prove that for n query
entities, STAR yields an O(log(n))-approximation of the optimal Steiner tree, and show that in practical cases the results returned by STAR are qualitatively better than the results returned by a
classical 2-approximation algorithm. We then describe an extension to our algorithm to return the top-k Steiner trees. Finally, we evaluate our algorithm over both main-memory as well as completely
disk-resident graphs containing millions of nodes. Our experiments show that STAR outperforms the best state-of-the-art approaches and returns qualitatively better results.}, TYPE = {Research Report /
Max-Planck-Institut für Informatik}, }
%0 Report %A Kasneci, Gjergji %A Ramanath, Maya %A Sozio, Mauro %A Suchanek, Fabian %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and
Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max
Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T STAR: Steiner tree approximation in relationship-graphs : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-66B3-1 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2008 %P 37 p. %X Large-scale
graphs and networks are abundant in modern information systems: entity-relationship graphs over relational data or Web-extracted entities, biological networks, social online communities, knowledge
bases, and many more. Often such data comes with expressive node and edge labels that allow an interpretation as a semantic graph, and edge weights that reflect the strengths of semantic relations
between entities. Finding close relationships between a given set of two, three, or more entities is an important building block for many search, ranking, and analysis tasks. From an algorithmic
point of view, this translates into computing the best Steiner trees between the given nodes, a classical NP-hard problem. In this paper, we present a new approximation algorithm, coined STAR, for
relationship queries over large graphs that do not fit into memory. We prove that for n query entities, STAR yields an O(log(n))-approximation of the optimal Steiner tree, and show that in practical
cases the results returned by STAR are qualitatively better than the results returned by a classical 2-approximation algorithm. We then describe an extension to our algorithm to return the top-k
Steiner trees. Finally, we evaluate our algorithm over both main-memory as well as completely disk-resident graphs containing millions of nodes. Our experiments show that STAR outperforms the best
state-of-the-art approaches and returns qualitatively better results. %B Research Report / Max-Planck-Institut für Informatik
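The abstract measures STAR against a classical 2-approximation. As a point of reference, the sketch below shows that baseline (not STAR itself): compute shortest-path distances between the query terminals and take a minimum spanning tree over the resulting metric closure. The dict-of-dicts graph encoding and a connected graph are assumptions.

```python
# Classical metric-closure/MST 2-approximation for Steiner trees, i.e. the
# baseline the report compares STAR against -- this is NOT STAR itself.
# Assumes a connected weighted graph given as {node: {neighbor: weight}}.
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source`."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def steiner_2_approx_cost(graph, terminals):
    """Cost of an MST over the terminals' metric closure; within a factor
    of 2 of the optimal Steiner tree connecting the terminals."""
    terminals = list(terminals)
    closure = {t: dijkstra(graph, t) for t in terminals}
    in_tree, cost = {terminals[0]}, 0.0
    while len(in_tree) < len(terminals):          # Prim's algorithm
        a, b, w = min(((a, b, closure[a][b])
                       for a in in_tree
                       for b in terminals if b not in in_tree),
                      key=lambda edge: edge[2])
        in_tree.add(b)
        cost += w
    return cost
```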
Single phase construction of optimal DAG-structured QEPs
T. Neumann and G. Moerkotte
Technical Report, 2008
Traditionally, database management systems use tree-structured query evaluation plans. They are easy to implement but not expressive enough for some optimizations like eliminating common algebraic
subexpressions or magic sets. These require directed acyclic graphs (DAGs), i.e. shared subplans. Existing approaches consider DAGs merely for special cases and not in full generality. We introduce a
novel framework to reason about sharing of subplans and, thus, DAG-structured query evaluation plans. Then, we present the first plan generator capable of generating optimal DAG-structured query
evaluation plans. The experimental results show that with no or only a modest increase of plan generation time, a major reduction of query execution time can be achieved for common queries.
@techreport{NeumannMoerkotte2008, TITLE = {Single phase construction of optimal {DAG}-structured {QEPs}}, AUTHOR = {Neumann, Thomas and Moerkotte, Guido}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-002}, NUMBER = {MPI-I-2008-5-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008},
DATE = {2008}, ABSTRACT = {Traditionally, database management systems use tree-structured query evaluation plans. They are easy to implement but not expressive enough for some optimizations like
eliminating common algebraic subexpressions or magic sets. These require directed acyclic graphs (DAGs), i.e. shared subplans. Existing approaches consider DAGs merely for special cases and not in
full generality. We introduce a novel framework to reason about sharing of subplans and, thus, DAG-structured query evaluation plans. Then, we present the first plan generator capable of generating
optimal DAG-structured query evaluation plans. The experimental results show that with no or only a modest increase of plan generation time, a major reduction of query execution time can be achieved
for common queries.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Neumann, Thomas %A Moerkotte, Guido %+ Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Single phase construction of optimal
DAG-structured QEPs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-66B0-7 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-002 %Y Max-Planck-Institut für
Informatik %C Saarbrücken %D 2008 %P 73 p. %X Traditionally, database management systems use tree-structured query evaluation plans. They are easy to implement but not expressive enough for some
optimizations like eliminating common algebraic subexpressions or magic sets. These require directed acyclic graphs (DAGs), i.e. shared subplans. Existing approaches consider DAGs merely for special
cases and not in full generality. We introduce a novel framework to reason about sharing of subplans and, thus, DAG-structured query evaluation plans. Then, we present the first plan generator
capable of generating optimal DAG-structured query evaluation plans. The experimental results show that with no or only a modest increase of plan generation time, a major reduction of query execution
time can be achieved for common queries. %B Research Report / Max-Planck-Institut für Informatik
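The sharing of identical subplans that motivates the report, turning tree-shaped plans into DAGs, can be illustrated by a small interning pass over plan trees. The sketch below is not the report's plan generator, and the tuple-based plan encoding is an assumption.

```python
# Tiny interning pass that makes identical subplans the same object, so a
# tree-shaped plan becomes a DAG with shared subexpressions. This is NOT
# the report's plan generator; the tuple plan encoding is an assumption.
def share_common_subplans(plan, pool=None):
    """plan: either a base-relation name (str) or a tuple (op, child, ...).
    Returns an equivalent plan in which equal subtrees are shared."""
    if pool is None:
        pool = {}
    if isinstance(plan, str):                 # leaf: base relation
        return plan
    node = (plan[0],) + tuple(share_common_subplans(c, pool) for c in plan[1:])
    return pool.setdefault(node, node)        # reuse an existing equal subplan

# Example: in a self-join whose both inputs contain the same scan+filter
# subplan, the two occurrences become one shared node after this pass.
```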
Crease surfaces: from theory to extraction and application to diffusion tensor MRI
T. Schultz, H. Theisel and H.-P. Seidel
Technical Report, 2008
Crease surfaces are two-dimensional manifolds along which a scalar field assumes a local maximum (ridge) or a local minimum (valley) in a constrained space. Unlike isosurfaces, they are able to
capture extremal structures in the data. Creases have a long tradition in image processing and computer vision, and have recently become a popular tool for visualization. When extracting crease
surfaces, degeneracies of the Hessian (i.e., lines along which two eigenvalues are equal), have so far been ignored. We show that these loci, however, have two important consequences for the topology
of crease surfaces: First, creases are bounded not only by a side constraint on eigenvalue sign, but also by Hessian degeneracies. Second, crease surfaces are not in general orientable. We describe
an efficient algorithm for the extraction of crease surfaces which takes these insights into account and demonstrate that it produces more accurate results than previous approaches. Finally, we show
that DT-MRI streamsurfaces, which were previously used for the analysis of planar regions in diffusion tensor MRI data, are mathematically ill-defined. As an example application of our method,
creases in a measure of planarity are presented as a viable substitute.
@techreport{SchultzTheiselSeidel2008, TITLE = {Crease surfaces: from theory to extraction and application to diffusion tensor {MRI}}, AUTHOR = {Schultz, Thomas and Theisel, Holger and Seidel,
Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-003}, NUMBER = {MPI-I-2008-4-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {Crease surfaces are two-dimensional manifolds along which a scalar field assumes a local maximum (ridge) or a local minimum
(valley) in a constrained space. Unlike isosurfaces, they are able to capture extremal structures in the data. Creases have a long tradition in image processing and computer vision, and have recently
become a popular tool for visualization. When extracting crease surfaces, degeneracies of the Hessian (i.e., lines along which two eigenvalues are equal), have so far been ignored. We show that these
loci, however, have two important consequences for the topology of crease surfaces: First, creases are bounded not only by a side constraint on eigenvalue sign, but also by Hessian degeneracies.
Second, crease surfaces are not in general orientable. We describe an efficient algorithm for the extraction of crease surfaces which takes these insights into account and demonstrate that it
produces more accurate results than previous approaches. Finally, we show that DT-MRI streamsurfaces, which were previously used for the analysis of planar regions in diffusion tensor MRI data, are
mathematically ill-defined. As an example application of our method, creases in a measure of planarity are presented as a viable substitute.}, TYPE = {Research Report / Max-Planck-Institut für
Informatik}, }
%0 Report %A Schultz, Thomas %A Theisel, Holger %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer
Graphics, MPI for Informatics, Max Planck Society %T Crease surfaces: from theory to extraction and application to diffusion tensor MRI : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-66B6-C %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-003 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2008 %P 33 p. %X Crease surfaces
are two-dimensional manifolds along which a scalar field assumes a local maximum (ridge) or a local minimum (valley) in a constrained space. Unlike isosurfaces, they are able to capture extremal
structures in the data. Creases have a long tradition in image processing and computer vision, and have recently become a popular tool for visualization. When extracting crease surfaces, degeneracies
of the Hessian (i.e., lines along which two eigenvalues are equal), have so far been ignored. We show that these loci, however, have two important consequences for the topology of crease surfaces:
First, creases are bounded not only by a side constraint on eigenvalue sign, but also by Hessian degeneracies. Second, crease surfaces are not in general orientable. We describe an efficient
algorithm for the extraction of crease surfaces which takes these insights into account and demonstrate that it produces more accurate results than previous approaches. Finally, we show that DT-MRI
streamsurfaces, which were previously used for the analysis of planar regions in diffusion tensor MRI data, are mathematically ill-defined. As an example application of our method, creases in a
measure of planarity are presented as a viable substitute. %B Research Report / Max-Planck-Institut für Informatik
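Crease-surface extraction builds on a point-wise ridge test involving the scalar field's gradient and Hessian. The sketch below shows only the standard height-ridge condition; it deliberately ignores the Hessian degeneracies and orientability issues that are the report's actual contribution, and the point-wise interface and tolerance are assumptions.

```python
# Point-wise height-ridge test for a 3D scalar field: the gradient must be
# orthogonal to the eigenvector of the most negative Hessian eigenvalue,
# and that eigenvalue must be negative. This ignores the Hessian-degeneracy
# and orientability issues addressed by the report; tol is an assumption.
import numpy as np

def on_ridge(gradient, hessian, tol=1e-6):
    """gradient: shape (3,), hessian: symmetric shape (3, 3)."""
    eigvals, eigvecs = np.linalg.eigh(hessian)   # eigenvalues ascending
    v_min = eigvecs[:, 0]                        # most negative direction
    return bool(eigvals[0] < 0 and abs(np.dot(gradient, v_min)) < tol)
```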
Efficient Hierarchical Reasoning about Functions over Numerical Domains
V. Sofronie-Stokkermans
Technical Report, 2008a
We show that many properties studied in mathematical analysis (monotonicity, boundedness, inverse, Lipschitz properties, possibly combined with continuity or derivability) are expressible by formulae
in a class for which sound and complete hierarchical proof methods for testing satisfiability of sets of ground clauses exist. The results are useful for automated reasoning in mathematical analysis
and for the verification of hybrid systems.
@techreport{Sofronie-Stokkermans-atr45-2008, TITLE = {Efficient Hierarchical Reasoning about Functions over Numerical Domains}, AUTHOR = {Sofronie-Stokkermans, Viorica}, LANGUAGE = {eng}, ISSN =
{1860-9821}, NUMBER = {ATR45}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {We show that many properties studied in mathematical analysis (monotonicity, boundedness,
inverse, Lipschitz properties, possibly combined with continuity or derivability) are expressible by formulae in a class for which sound and complete hierarchical proof methods for testing
satisfiability of sets of ground clauses exist. The results are useful for automated reasoning in mathematical analysis and for the verification of hybrid systems.}, TYPE = {AVACS Technical Report},
VOLUME = {45}, }
%0 Report %A Sofronie-Stokkermans, Viorica %+ Automation of Logic, MPI for Informatics, Max Planck Society %T Efficient Hierarchical Reasoning about Functions over Numerical Domains : %G eng %U http:
//hdl.handle.net/11858/00-001M-0000-0027-A46C-B %Y SFB/TR 14 AVACS %D 2008 %P 17 p. %X We show that many properties studied in mathematical analysis (monotonicity, boundedness, inverse, Lipschitz
properties, possibly combined with continuity or derivability) are expressible by formulae in a class for which sound and complete hierarchical proof methods for testing satisfiability of sets of
ground clauses exist. The results are useful for automated reasoning in mathematical analysis and for the verification of hybrid systems. %B AVACS Technical Report %N 45 %@ false %U http://
Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems
V. Sofronie-Stokkermans
Technical Report, 2008b
In this paper we show that states, transitions and behavior of concurrent systems can often be modeled as sheaves over a suitable topological space (where the topology expresses how the interacting
systems share the information). This allows us to use results from categorical logic (and in particular geometric logic) to describe which type of properties are transferred, if valid locally in all
component systems, also at a global level, to the system obtained by interconnecting the individual systems. The main area of application is to modular verification of complex systems. We illustrate
the ideas by means of an example involving a family of interacting controllers for trains on a rail track.
@techreport{Sofronie-Stokkermans-atr46-2008, TITLE = {Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems}, AUTHOR = {Sofronie-Stokkermans, Viorica}, LANGUAGE =
{eng}, ISSN = {1860-9821}, NUMBER = {ATR46}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {In this paper we show that states, transitions and behavior of concurrent
systems can often be modeled as sheaves over a suitable topological space (where the topology expresses how the interacting systems share the information). This allows us to use results from
categorical logic (and in particular geometric logic) to describe which type of properties are transferred, if valid locally in all component systems, also at a global level, to the system obtained
by interconnecting the individual systems. The main area of application is to modular verification of complex systems. We illustrate the ideas by means of an example involving a family of interacting
controllers for trains on a rail track.}, TYPE = {AVACS Technical Report}, VOLUME = {46}, }
%0 Report %A Sofronie-Stokkermans, Viorica %+ Automation of Logic, MPI for Informatics, Max Planck Society %T Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems :
%G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A579-5 %Y SFB/TR 14 AVACS %D 2008 %X In this paper we show that states, transitions and behavior of concurrent systems can often be modeled as
sheaves over a suitable topological space (where the topology expresses how the interacting systems share the information). This allows us to use results from categorical logic (and in particular
geometric logic) to describe which type of properties are transferred, if valid locally in all component systems, also at a global level, to the system obtained by interconnecting the individual
systems. The main area of application is to modular verification of complex systems. We illustrate the ideas by means of an example involving a family of interacting controllers for trains on a rail
track. %B AVACS Technical Report %N 46 %@ false %U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_046.pdf
SOFIE: A Self-Organizing Framework for Information Extraction
F. Suchanek, M. Sozio and G. Weikum
Technical Report, 2008
This paper presents SOFIE, a system for automated ontology extension. SOFIE can parse natural language documents, extract ontological facts from them and link the facts into an ontology. SOFIE uses
logical reasoning on the existing knowledge and on the new knowledge in order to disambiguate words to their most probable meaning, to reason on the meaning of text patterns and to take into account
world knowledge axioms. This allows SOFIE to check the plausibility of hypotheses and to avoid inconsistencies with the ontology. The framework of SOFIE unites the paradigms of pattern matching, word
sense disambiguation and ontological reasoning in one unified model. Our experiments show that SOFIE delivers near-perfect output, even from unstructured Internet documents.
@techreport{SuchanekMauroWeikum2008, TITLE = {{SOFIE}: A Self-Organizing Framework for Information Extraction}, AUTHOR = {Suchanek, Fabian and Sozio, Mauro and Weikum, Gerhard}, LANGUAGE = {eng}, URL
= {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-004}, NUMBER = {MPI-I-2008-5-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR =
{2008}, DATE = {2008}, ABSTRACT = {This paper presents SOFIE, a system for automated ontology extension. SOFIE can parse natural language documents, extract ontological facts from them and link the
facts into an ontology. SOFIE uses logical reasoning on the existing knowledge and on the new knowledge in order to disambiguate words to their most probable meaning, to reason on the meaning of text
patterns and to take into account world knowledge axioms. This allows SOFIE to check the plausibility of hypotheses and to avoid inconsistencies with the ontology. The framework of SOFIE unites the
paradigms of pattern matching, word sense disambiguation and ontological reasoning in one unified model. Our experiments show that SOFIE delivers near-perfect output, even from unstructured Internet
documents.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Suchanek, Fabian %A Sozio, Mauro %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics,
Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T SOFIE: A Self-Organizing Framework for Information Extraction : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-668E-6 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-004 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2008 %P 49 p. %X This paper
presents SOFIE, a system for automated ontology extension. SOFIE can parse natural language documents, extract ontological facts from them and link the facts into an ontology. SOFIE uses logical
reasoning on the existing knowledge and on the new knowledge in order to disambiguate words to their most probable meaning, to reason on the meaning of text patterns and to take into account world
knowledge axioms. This allows SOFIE to check the plausibility of hypotheses and to avoid inconsistencies with the ontology. The framework of SOFIE unites the paradigms of pattern matching, word sense
disambiguation and ontological reasoning in one unified model. Our experiments show that SOFIE delivers near-perfect output, even from unstructured Internet documents. %B Research Report /
Max-Planck-Institut für Informatik
Shape Complexity from Image Similarity
D. Wang, A. Belyaev, W. Saleem and H.-P. Seidel
Technical Report, 2008
We present an approach to automatically compute the complexity of a given 3D shape. Previous approaches have made use of geometric and/or topological properties of the 3D shape to compute complexity.
Our approach is based on shape appearance and estimates the complexity of a given 3D shape according to how 2D views of the shape diverge from each other. We use similarity among views of the 3D
shape as the basis for our complexity computation. Hence our approach uses claims from psychology that humans mentally represent 3D shapes as organizations of 2D views and, therefore, mimics how
humans gauge shape complexity. Experimental results show that our approach produces results that are more in agreement with the human notion of shape complexity than those obtained using previous approaches.
@techreport{WangBelyaevSaleemSeidel2008, TITLE = {Shape Complexity from Image Similarity}, AUTHOR = {Wang, Danyi and Belyaev, Alexander and Saleem, Waqar and Seidel, Hans-Peter}, LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-002}, NUMBER = {MPI-I-2008-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR
= {2008}, DATE = {2008}, ABSTRACT = {We present an approach to automatically compute the complexity of a given 3D shape. Previous approaches have made use of geometric and/or topological properties
of the 3D shape to compute complexity. Our approach is based on shape appearance and estimates the complexity of a given 3D shape according to how 2D views of the shape diverge from each other. We
use similarity among views of the 3D shape as the basis for our complexity computation. Hence our approach uses claims from psychology that humans mentally represent 3D shapes as organizations of 2D
views and, therefore, mimics how humans gauge shape complexity. Experimental results show that our approach produces results that are more in agreement with the human notion of shape complexity than
those obtained using previous approaches.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Wang, Danyi %A Belyaev, Alexander %A Saleem, Waqar %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck
Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Shape Complexity from Image Similarity : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-0014-66B9-6 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2008 %P 28
p. %X We present an approach to automatically compute the complexity of a given 3D shape. Previous approaches have made use of geometric and/or topological properties of the 3D shape to compute
complexity. Our approach is based on shape appearance and estimates the complexity of a given 3D shape according to how 2D views of the shape diverge from each other. We use similarity among views of
the 3D shape as the basis for our complexity computation. Hence our approach uses claims from psychology that humans mentally represent 3D shapes as organizations of 2D views and, therefore, mimics
how humans gauge shape complexity. Experimental results show that our approach produces results that are more in agreement with the human notion of shape complexity than those obtained using previous
approaches. %B Research Report / Max-Planck-Institut für Informatik
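The abstract's central idea, scoring a shape by how much its 2D views diverge from each other, can be imitated crudely by averaging pairwise dissimilarities of rendered views. The dissimilarity measure and the way views are produced below are assumptions, not the report's pipeline.

```python
# Crude stand-in for view-divergence-based shape complexity: the mean
# pairwise L2 distance between rendered views. The dissimilarity measure
# and the rendering of views are assumptions, not the report's method.
import numpy as np

def view_divergence_complexity(views):
    """views: list of equally sized 2D arrays (rendered views of one shape)."""
    flat = [np.asarray(v, dtype=float).ravel() for v in views]
    dists = [np.linalg.norm(flat[i] - flat[j])
             for i in range(len(flat)) for j in range(i + 1, len(flat))]
    return float(np.mean(dists)) if dists else 0.0
```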
A Lagrangian relaxation approach for the multiple sequence alignment problem
E. Althaus and S. Canzar
Technical Report, 2007
We present a branch-and-bound (bb) algorithm for the multiple sequence alignment problem (MSA), one of the most important problems in computational biology. The upper bound at each bb node is based
on a Lagrangian relaxation of an integer linear programming formulation for MSA. Dualizing certain inequalities, the Lagrangian subproblem becomes a pairwise alignment problem, which can be solved
efficiently by a dynamic programming approach. Due to a reformulation w.r.t. additionally introduced variables prior to relaxation we improve the convergence rate dramatically while at the same time
being able to solve the Lagrangian problem efficiently. Our experiments show that our implementation, although preliminary, outperforms all exact algorithms for the multiple sequence alignment problem.
@techreport{, TITLE = {A Lagrangian relaxation approach for the multiple sequence alignment problem}, AUTHOR = {Althaus, Ernst and Canzar, Stefan}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-002}, NUMBER = {MPI-I-2007-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007},
DATE = {2007}, ABSTRACT = {We present a branch-and-bound (bb) algorithm for the multiple sequence alignment problem (MSA), one of the most important problems in computational biology. The upper bound
at each bb node is based on a Lagrangian relaxation of an integer linear programming formulation for MSA. Dualizing certain inequalities, the Lagrangian subproblem becomes a pairwise alignment
problem, which can be solved efficiently by a dynamic programming approach. Due to a reformulation w.r.t. additionally introduced variables prior to relaxation we improve the convergence rate
dramatically while at the same time being able to solve the Lagrangian problem efficiently. Our experiments show that our implementation, although preliminary, outperforms all exact algorithms for
the multiple sequence alignment problem.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Althaus, Ernst %A Canzar, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Lagrangian
relaxation approach for the multiple sequence alignment problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6707-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2007-1-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 41 p. %X We present a branch-and-bound (bb) algorithm for the multiple sequence alignment problem (MSA), one of
the most important problems in computational biology. The upper bound at each bb node is based on a Lagrangian relaxation of an integer linear programming formulation for MSA. Dualizing certain
inequalities, the Lagrangian subproblem becomes a pairwise alignment problem, which can be solved efficiently by a dynamic programming approach. Due to a reformulation w.r.t. additionally introduced
variables prior to relaxation we improve the convergence rate dramatically while at the same time being able to solve the Lagrangian problem efficiently. Our experiments show that our implementation,
although preliminary, outperforms all exact algorithms for the multiple sequence alignment problem. %B Research Report / Max-Planck-Institut für Informatik
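The Lagrangian subproblem mentioned above is a pairwise alignment solved by dynamic programming. For orientation, the sketch below shows the generic Needleman-Wunsch recurrence; the report's subproblem works with Lagrangian-multiplier-adjusted scores, which this does not reproduce, and the scoring parameters are illustrative.

```python
# Generic Needleman-Wunsch pairwise-alignment DP, shown because the
# Lagrangian subproblem above reduces to a pairwise alignment. The report
# uses multiplier-adjusted scores, which this sketch does NOT reproduce;
# match/mismatch/gap values are illustrative assumptions.
def pairwise_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score of strings a and b."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # substitute / match
                           dp[i - 1][j] + gap,        # gap in b
                           dp[i][j - 1] + gap)        # gap in a
    return dp[n][m]

# Example: pairwise_alignment_score("GATTACA", "GCATGCU")
```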
A nonlinear viseme model for triphone-based speech synthesis
R. Bargmann, V. Blanz and H.-P. Seidel
Technical Report, 2007
This paper presents a representation of visemes that defines a measure of similarity between different visemes, and a system of viseme categories. The representation is derived from a statistical
data analysis of feature points on 3D scans, using Locally Linear Embedding (LLE). The similarity measure determines which available viseme and triphones to use to synthesize 3D face animation for a
novel audio file. From a corpus of dynamic recorded 3D mouth articulation data, our system is able to find the best suited sequence of triphones over which to interpolate while reusing the
coarticulation information to obtain correct mouth movements over time. Due to the similarity measure, the system can deal with relatively small triphone databases and find the most appropriate
candidates. With the selected sequence of database triphones, we can finally morph along the successive triphones to produce the final articulation animation. In an entirely data-driven approach, our
automated procedure for defining viseme categories reproduces the groups of related visemes that are defined in the phonetics literature.
@techreport{BargmannBlanzSeidel2007, TITLE = {A nonlinear viseme model for triphone-based speech synthesis}, AUTHOR = {Bargmann, Robert and Blanz, Volker and Seidel, Hans-Peter}, LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-003}, NUMBER = {MPI-I-2007-4-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR
= {2007}, DATE = {2007}, ABSTRACT = {This paper presents a representation of visemes that defines a measure of similarity between different visemes, and a system of viseme categories. The
representation is derived from a statistical data analysis of feature points on 3D scans, using Locally Linear Embedding (LLE). The similarity measure determines which available viseme and triphones
to use to synthesize 3D face animation for a novel audio file. From a corpus of dynamic recorded 3D mouth articulation data, our system is able to find the best suited sequence of triphones over
which to interpolate while reusing the coarticulation information to obtain correct mouth movements over time. Due to the similarity measure, the system can deal with relatively small triphone
databases and find the most appropriate candidates. With the selected sequence of database triphones, we can finally morph along the successive triphones to produce the final articulation animation.
In an entirely data-driven approach, our automated procedure for defining viseme categories reproduces the groups of related visemes that are defined in the phonetics literature.}, TYPE = {Research
Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Bargmann, Robert %A Blanz, Volker %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer
Graphics, MPI for Informatics, Max Planck Society %T A nonlinear viseme model for triphone-based speech synthesis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-66DC-7 %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-003 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 28 p. %X This paper presents a representation of visemes
that defines a measure of similarity between different visemes, and a system of viseme categories. The representation is derived from a statistical data analysis of feature points on 3D scans, using
Locally Linear Embedding (LLE). The similarity measure determines which available viseme and triphones to use to synthesize 3D face animation for a novel audio file. From a corpus of dynamic recorded
3D mouth articulation data, our system is able to find the best suited sequence of triphones over which to interpolate while reusing the coarticulation information to obtain correct mouth movements
over time. Due to the similarity measure, the system can deal with relatively small triphone databases and find the most appropriate candidates. With the selected sequence of database triphones, we
can finally morph along the successive triphones to produce the final articulation animation. In an entirely data-driven approach, our automated procedure for defining viseme categories reproduces
the groups of related visemes that are defined in the phonetics literature. %B Research Report / Max-Planck-Institut für Informatik
Computing Envelopes of Quadrics
E. Berberich and M. Meyerovitch
Technical Report, 2007
@techreport{acs:bm-ceq-07, TITLE = {Computing Envelopes of Quadrics}, AUTHOR = {Berberich, Eric and Meyerovitch, Michal}, LANGUAGE = {eng}, NUMBER = {ACS-TR-241402-03}, LOCALID = {Local-ID:
C12573CC004A8E26-12A6DC64E5449DC9C12573D1004DA0BC-acs:bm-ceq-07}, INSTITUTION = {University of Groningen}, ADDRESS = {Groningen, The Netherlands}, YEAR = {2007}, DATE = {2007}, TYPE = {ACS Technical
Reports}, }
%0 Report %A Berberich, Eric %A Meyerovitch, Michal %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T
Computing Envelopes of Quadrics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1EA4-F %F EDOC: 356718 %F OTHER: Local-ID: C12573CC004A8E26-12A6DC64E5449DC9C12573D1004DA0BC-acs:bm-ceq-07 %Y
University of Groningen %C Groningen, The Netherlands %D 2007 %P 5 p. %B ACS Technical Reports
Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point
E. Berberich and L. Kettner
Technical Report, 2007
@techreport{bk-reorder-07, TITLE = {Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point}, AUTHOR = {Berberich, Eric and Kettner, Lutz}, LANGUAGE =
{eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2007-1-001}, LOCALID = {Local-ID: C12573CC004A8E26-D3347FB7A037EE5CC12573D1004C6833-bk-reorder-07}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, TYPE = {Research Report}, }
%0 Report %A Berberich, Eric %A Kettner, Lutz %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Linear-Time
Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1FB9-8 %F EDOC: 356668 %@ 0946-011X %F OTHER:
Local-ID: C12573CC004A8E26-D3347FB7A037EE5CC12573D1004C6833-bk-reorder-07 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 20 p. %B Research Report
Revision of interface specification of algebraic kernel
E. Berberich, M. Hemmer, M. I. Karavelas and M. Teillaud
Technical Report, 2007
@techreport{acs:bhkt-risak-06, TITLE = {Revision of interface specification of algebraic kernel}, AUTHOR = {Berberich, Eric and Hemmer, Michael and Karavelas, Menelaos I. and Teillaud, Monique},
LANGUAGE = {eng}, LOCALID = {Local-ID: C12573CC004A8E26-1F31C7FA352D83DDC12573D1004F257E-acs:bhkt-risak-06}, INSTITUTION = {University of Groningen}, ADDRESS = {Groningen, The Netherlands}, YEAR =
{2007}, DATE = {2007}, TYPE = {ACS Technical Reports}, }
%0 Report %A Berberich, Eric %A Hemmer, Michael %A Karavelas, Menelaos I. %A Teillaud, Monique %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for
Informatics, Max Planck Society External Organizations External Organizations %T Revision of interface specification of algebraic kernel : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-000F-208F-0 %F EDOC: 356661 %F OTHER: Local-ID: C12573CC004A8E26-1F31C7FA352D83DDC12573D1004F257E-acs:bhkt-risak-06 %F OTHER: ACS-TR-243301-01 %Y University of Groningen %C Groningen,
The Netherlands %D 2007 %P 100 p. %B ACS Technical Reports
Sweeping and maintaining two-dimensional arrangements on quadrics
E. Berberich, E. Fogel, D. Halperin, K. Mehlhorn and R. Wein
Technical Report, 2007
@techreport{acs:bfhmw-smtaoq-07, TITLE = {Sweeping and maintaining two-dimensional arrangements on quadrics}, AUTHOR = {Berberich, Eric and Fogel, Efi and Halperin, Dan and Mehlhorn, Kurt and Wein,
Ron}, LANGUAGE = {eng}, NUMBER = {ACS-TR-241402-02}, LOCALID = {Local-ID: C12573CC004A8E26-A2D9FC191F294C4BC12573D1004D4FA3-acs:bfhmw-smtaoq-07}, INSTITUTION = {University of Groningen}, ADDRESS =
{Groningen, The Netherlands}, YEAR = {2007}, DATE = {2007}, TYPE = {ACS Technical Reports}, }
%0 Report %A Berberich, Eric %A Fogel, Efi %A Halperin, Dan %A Mehlhorn, Kurt %A Wein, Ron %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for
Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Sweeping and maintaining two-dimensional arrangements on quadrics : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-000F-20E3-1 %F EDOC: 356692 %F OTHER: Local-ID: C12573CC004A8E26-A2D9FC191F294C4BC12573D1004D4FA3-acs:bfhmw-smtaoq-07 %Y University of Groningen %C Groningen, The
Netherlands %D 2007 %P 10 p. %B ACS Technical Reports
Definition of the 3D Quadrical Kernel Content
E. Berberich and M. Hemmer
Technical Report, 2007
@techreport{acs:bh-dtqkc-07, TITLE = {Definition of the {3D} Quadrical Kernel Content}, AUTHOR = {Berberich, Eric and Hemmer, Michael}, LANGUAGE = {eng}, NUMBER = {ACS-TR-243302-02}, LOCALID =
{Local-ID: C12573CC004A8E26-2FF567066FB82A5FC12573D1004DDD73-acs:bh-dtqkc-07}, INSTITUTION = {University of Groningen}, ADDRESS = {Groningen, The Netherlands}, YEAR = {2007}, DATE = {2007}, TYPE =
{ACS Technical Reports}, }
%0 Report %A Berberich, Eric %A Hemmer, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Definition
of the 3D Quadrical Kernel Content : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1ED4-1 %F EDOC: 356735 %F OTHER: Local-ID:
C12573CC004A8E26-2FF567066FB82A5FC12573D1004DDD73-acs:bh-dtqkc-07 %Y University of Groningen %C Groningen, The Netherlands %D 2007 %P 25 p. %B ACS Technical Reports
Exact Computation of Arrangements of Rotated Conics
E. Berberich, M. Caroli and N. Wolpert
Technical Report, 2007
@techreport{acs:bcw-carc-07, TITLE = {Exact Computation of Arrangements of Rotated Conics}, AUTHOR = {Berberich, Eric and Caroli, Manuel and Wolpert, Nicola}, LANGUAGE = {eng}, NUMBER =
{ACS-TR-123104-03}, LOCALID = {Local-ID: C12573CC004A8E26-1EB177EFAA801139C12573D1004D0246-acs:bcw-carc-07}, INSTITUTION = {University of Groningen}, ADDRESS = {Groningen, The Netherlands}, YEAR =
{2007}, DATE = {2007}, TYPE = {ACS Technical Reports}, }
%0 Report %A Berberich, Eric %A Caroli, Manuel %A Wolpert, Nicola %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Exact Computation of Arrangements of Rotated Conics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1F20-F %F
EDOC: 356666 %F OTHER: Local-ID: C12573CC004A8E26-1EB177EFAA801139C12573D1004D0246-acs:bcw-carc-07 %Y University of Groningen %C Groningen, The Netherlands %D 2007 %P 5 p %B ACS Technical Reports
Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves
E. Berberich, E. Fogel and A. Meyer
Technical Report, 2007
@techreport{acs:bfm-uwibaqpac-07, TITLE = {Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves}, AUTHOR = {Berberich, Eric and Fogel, Efi and
Meyer, Andreas}, LANGUAGE = {eng}, NUMBER = {ACS-TR-243305-01}, LOCALID = {Local-ID: C12573CC004A8E26-DEDF6F20E463424CC12573D1004E1823-acs:bfm-uwibaqpac-07}, INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands}, YEAR = {2007}, DATE = {2007}, TYPE = {ACS Technical Reports}, }
%0 Report %A Berberich, Eric %A Fogel, Efi %A Meyer, Andreas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2128-E %F EDOC: 356664 %F OTHER:
Local-ID: C12573CC004A8E26-DEDF6F20E463424CC12573D1004E1823-acs:bfm-uwibaqpac-07 %Y University of Groningen %C Groningen, The Netherlands %D 2007 %P 5 p. %B ACS Technical Reports
A Time Machine for Text Search
K. Berberich, S. Bedathur, T. Neumann and G. Weikum
Technical Report, 2007
@techreport{TechReportBBNW-2007, TITLE = {A Time Machine for Text Search}, AUTHOR = {Berberich, Klaus and Bedathur, Srikanta and Neumann, Thomas and Weikum, Gerhard}, LANGUAGE = {eng}, ISSN =
{0946-011X}, NUMBER = {MPII-I-2007-5-02}, LOCALID = {Local-ID: C12573CC004A8E26-D444201EBAA5F95BC125731E00458A41-TechReportBBNW-2007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS
= {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, TYPE = {Research Report}, }
%0 Report %A Berberich, Klaus %A Bedathur, Srikanta %A Neumann, Thomas %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information
Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society
%T A Time Machine for Text Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1E49-E %F EDOC: 356443 %@ 0946-011X %F OTHER: Local-ID:
C12573CC004A8E26-D444201EBAA5F95BC125731E00458A41-TechReportBBNW-2007 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 39 p. %B Research Report
HistoPyramids in Iso-Surface Extraction
C. Dyken, G. Ziegler, C. Theobalt and H.-P. Seidel
Technical Report, 2007
We present an implementation approach to high-speed Marching Cubes, running entirely on the Graphics Processing Unit of Shader Model 3.0 and 4.0 graphics hardware. Our approach is based on the
interpretation of Marching Cubes as a stream compaction and expansion process, and is implemented using the HistoPyramid, a hierarchical data structure previously only used in GPU data compaction. We
extend the HistoPyramid structure to allow for stream expansion, which provides an efficient method for generating geometry directly on the GPU, even on Shader Model 3.0 hardware. Currently, our
algorithm outperforms all other known GPU-based iso-surface extraction algorithms. We describe our implementation and present a performance analysis on several generations of graphics hardware.
@techreport{DykenZieglerTheobaltSeidel2007, TITLE = {Histo{P}yramids in Iso-Surface Extraction}, AUTHOR = {Dyken, Christopher and Ziegler, Gernot and Theobalt, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-006}, NUMBER = {MPI-I-2007-4-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS =
{Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {We present an implementation approach to high-speed Marching Cubes, running entirely on the Graphics Processing Unit of Shader Model 3.0
and 4.0 graphics hardware. Our approach is based on the interpretation of Marching Cubes as a stream compaction and expansion process, and is implemented using the HistoPyramid, a hierarchical data
structure previously only used in GPU data compaction. We extend the HistoPyramid structure to allow for stream expansion, which provides an efficient method for generating geometry directly on the
GPU, even on Shader Model 3.0 hardware. Currently, our algorithm outperforms all other known GPU-based iso-surface extraction algorithms. We describe our implementation and present a performance
analysis on several generations of graphics hardware.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Dyken, Christopher %A Ziegler, Gernot %A Theobalt, Christian %A Seidel, Hans-Peter %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics,
MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T HistoPyramids in Iso-Surface Extraction : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-66D3-A %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-006 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 16 p. %X We present an
implementation approach to high-speed Marching Cubes, running entirely on the Graphics Processing Unit of Shader Model 3.0 and 4.0 graphics hardware. Our approach is based on the interpretation of
Marching Cubes as a stream compaction and expansion process, and is implemented using the HistoPyramid, a hierarchical data structure previously only used in GPU data compaction. We extend the
HistoPyramid structure to allow for stream expansion, which provides an efficient method for generating geometry directly on the GPU, even on Shader Model 3.0 hardware. Currently, our algorithm
outperforms all other known GPU-based iso-surface extraction algorithms. We describe our implementation and present a performance analysis on several generations of graphics hardware. %B Research
Report / Max-Planck-Institut für Informatik
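To make the stream-compaction idea from the abstract above concrete, the following small Python sketch builds a 1-D HistoPyramid (a pyramid of partial counts) and traverses it top-down to map each output index back to the input element that produces it. This is only an illustration of the principle under simplifying assumptions (1-D, CPU, compaction only); it is not the report's GPU implementation, and all names are invented.

```python
# Minimal 1-D HistoPyramid sketch (illustrative only, not the report's GPU code).
# Each pyramid level stores pairwise sums of the level below; stream compaction
# maps an output index to its input element by a top-down traversal.

def build_histopyramid(counts):
    """counts[i] = number of outputs element i produces (0 or 1 for compaction)."""
    levels = [list(counts)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                    # pad to even length
            prev = prev + [0]
        levels.append([prev[i] + prev[i + 1] for i in range(0, len(prev), 2)])
    return levels                            # levels[-1][0] is the total output count

def traverse(levels, out_index):
    """Find which input element produces output number `out_index`."""
    idx, key = 0, out_index
    for level in reversed(levels[:-1]):
        idx *= 2
        left = level[idx] if idx < len(level) else 0
        if key >= left:                      # the output lies in the right child
            key -= left
            idx += 1
    return idx, key                          # input index and local output offset

if __name__ == "__main__":
    active = [0, 1, 0, 0, 1, 1, 0, 1]        # e.g. "cell produces geometry"
    hp = build_histopyramid(active)
    total = hp[-1][0]
    compacted = [traverse(hp, k)[0] for k in range(total)]
    print(compacted)                         # -> [1, 4, 5, 7]
```

The local offset returned by the traversal is what would allow stream expansion in the same framework, i.e. an element emitting more than one output.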
Snap Rounding of Bézier Curves
A. Eigenwillig, L. Kettner and N. Wolpert
Technical Report, 2007
@techreport{ACS-TR-121108-01, TITLE = {Snap Rounding of B{\'e}zier Curves}, AUTHOR = {Eigenwillig, Arno and Kettner, Lutz and Wolpert, Nicola}, LANGUAGE = {eng}, NUMBER = {MPI-I-2006-1-005}, LOCALID
= {Local-ID: C12573CC004A8E26-13E19171EEC8D5E0C12572A0005C02F6-ACS-TR-121108-01}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken, Germany}, YEAR = {2007}, DATE =
{2007}, TYPE = {Research Report}, }
%0 Report %A Eigenwillig, Arno %A Kettner, Lutz %A Wolpert, Nicola %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Snap Rounding of Bézier Curves : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-20B9-0 %F EDOC: 356760 %F
OTHER: Local-ID: C12573CC004A8E26-13E19171EEC8D5E0C12572A0005C02F6-ACS-TR-121108-01 %F OTHER: ACS-TR-121108-01 %Y Max-Planck-Institut für Informatik %C Saarbrücken, Germany %D 2007 %P 19 p.
%B Research Report
Global stochastic optimization for robust and accurate human motion capture
J. Gall, T. Brox, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
Tracking of human motion in video is usually tackled either by local optimization or by filtering approaches. While local optimization offers accurate estimates, it often loses track due to local optima; particle filtering can recover from errors, but at the expense of poor accuracy due to overestimation of noise. In this paper, we propose to embed global stochastic optimization in a tracking
framework. This new optimization technique exhibits both the robustness of filtering strategies and a remarkable accuracy. We apply the optimization to an energy function that relies on silhouettes
and color, as well as some prior information on physical constraints. This framework provides a general solution to markerless human motion capture since neither excessive preprocessing nor strong
assumptions except for a 3D model are required. The optimization provides initialization and accurate tracking even in cases of low contrast and challenging illumination. Our experimental evaluation
demonstrates the large improvements obtained with this technique. It comprises a quantitative error analysis comparing the approach with local optimization, particle filtering, and a heuristic based
on particle filtering.
@techreport{GallBroxRosenhahnSeidel2008, TITLE = {Global stochastic optimization for robust and accurate human motion capture}, AUTHOR = {Gall, J{\"u}rgen and Brox, Thomas and Rosenhahn, Bodo and
Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-008}, NUMBER = {MPI-I-2007-4-008}, INSTITUTION = {Max-Planck-Institut f{\"u}r
Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {Tracking of human motion in video is usually tackled either by local optimization or filtering approaches. While
local optimization offers accurate estimates but often loses track due to local optima, particle filtering can recover from errors at the expense of a poor accuracy due to overestimation of noise.
In this paper, we propose to embed global stochastic optimization in a tracking framework. This new optimization technique exhibits both the robustness of filtering strategies and a remarkable
accuracy. We apply the optimization to an energy function that relies on silhouettes and color, as well as some prior information on physical constraints. This framework provides a general solution
to markerless human motion capture since neither excessive preprocessing nor strong assumptions except of a 3D model are required. The optimization provides initialization and accurate tracking even
in case of low contrast and challenging illumination. Our experimental evaluation demonstrates the large improvements obtained with this technique. It comprises a quantitative error analysis
comparing the approach with local optimization, particle filtering, and a heuristic based on particle filtering.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Gall, Jürgen %A Brox, Thomas %A Rosenhahn, Bodo %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for
Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Global stochastic optimization for robust and accurate human motion capture : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-0014-66CE-7 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-008 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 28
p. %X Tracking of human motion in video is usually tackled either by local optimization or filtering approaches. While local optimization offers accurate estimates but often loses track due to local
optima, particle filtering can recover from errors at the expense of a poor accuracy due to overestimation of noise. In this paper, we propose to embed global stochastic optimization in a tracking
framework. This new optimization technique exhibits both the robustness of filtering strategies and a remarkable accuracy. We apply the optimization to an energy function that relies on silhouettes
and color, as well as some prior information on physical constraints. This framework provides a general solution to markerless human motion capture since neither excessive preprocessing nor strong
assumptions except of a 3D model are required. The optimization provides initialization and accurate tracking even in case of low contrast and challenging illumination. Our experimental evaluation
demonstrates the large improvements obtained with this technique. It comprises a quantitative error analysis comparing the approach with local optimization, particle filtering, and a heuristic based
on particle filtering. %B Research Report / Max-Planck-Institut für Informatik
Clustered stochastic optimization for object recognition and pose estimation
J. Gall, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
We present an approach for estimating the 3D position and, in the case of articulated objects, also the joint configuration from segmented 2D images. The pose estimation without initial information is a
challenging optimization problem in a high dimensional space and is essential for texture acquisition and initialization of model-based tracking algorithms. Our method is able to recognize the
correct object in the case of multiple objects and estimates its pose with a high accuracy. The key component is a particle-based global optimization method that converges to the global minimum
similar to simulated annealing. After detecting potential bounded subsets of the search space, the particles are divided into clusters and migrate to the most attractive cluster as the time
increases. The performance of our approach is verified by means of real scenes and a quantitative error analysis for image distortions. Our experiments include rigid bodies and full human bodies.
@techreport{GallRosenhahnSeidel2007, TITLE = {Clustered stochastic optimization for object recognition and pose estimation}, AUTHOR = {Gall, J{\"u}rgen and Rosenhahn, Bodo and Seidel, Hans-Peter},
LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-001}, NUMBER = {MPI-I-2007-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS =
{Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {We present an approach for estimating the 3D position and in case of articulated objects also the joint configuration from segmented 2D
images. The pose estimation without initial information is a challenging optimization problem in a high dimensional space and is essential for texture acquisition and initialization of model-based
tracking algorithms. Our method is able to recognize the correct object in the case of multiple objects and estimates its pose with a high accuracy. The key component is a particle-based global
optimization method that converges to the global minimum similar to simulated annealing. After detecting potential bounded subsets of the search space, the particles are divided into clusters and
migrate to the most attractive cluster as the time increases. The performance of our approach is verified by means of real scenes and a quantitative error analysis for image distortions. Our
experiments include rigid bodies and full human bodies.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Gall, Jürgen %A Rosenhahn, Bodo %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T Clustered stochastic optimization for object recognition and pose estimation : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-66E5-2 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 23 p. %X We present an
approach for estimating the 3D position and in case of articulated objects also the joint configuration from segmented 2D images. The pose estimation without initial information is a challenging
optimization problem in a high dimensional space and is essential for texture acquisition and initialization of model-based tracking algorithms. Our method is able to recognize the correct object in
the case of multiple objects and estimates its pose with a high accuracy. The key component is a particle-based global optimization method that converges to the global minimum similar to simulated
annealing. After detecting potential bounded subsets of the search space, the particles are divided into clusters and migrate to the most attractive cluster as the time increases. The performance of
our approach is verified by means of real scenes and a quantitative error analysis for image distortions. Our experiments include rigid bodies and full human bodies. %B Research Report /
Max-Planck-Institut für Informatik
Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications
J. Gall, J. Potthoff, C. Schnörr, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
Interacting and annealing are two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting particle systems approximate a distribution of
interest by a finite number of particles where the particles interact between the time steps. In computer vision, they are commonly known as particle filters. Simulated annealing, on the other hand,
is a global optimization method derived from statistical mechanics. A recent heuristic approach to fuse these two techniques for motion capturing has become known as annealed particle filter. In
order to analyze these techniques, we rigorously derive in this paper two algorithms with annealing properties based on the mathematical theory of interacting particle systems. Convergence results
and sufficient parameter restrictions enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the parameters on the performance in various experiments,
including the tracking of articulated bodies from noisy measurements. Our results provide general guidance on suitable parameter choices for different applications.
@techreport{GallPotthoffRosenhahnSchnoerrSeidel2006, TITLE = {Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications}, AUTHOR = {Gall, J{\"u}rgen and Potthoff, J{\"u}
rgen and Schn{\"o}rr, Christoph and Rosenhahn, Bodo and Seidel, Hans-Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-2006-4-009}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\
"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {Interacting and annealing are two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting
particle systems approximate a distribution of interest by a finite number of particles where the particles interact between the time steps. In computer vision, they are commonly known as particle
filters. Simulated annealing, on the other hand, is a global optimization method derived from statistical mechanics. A recent heuristic approach to fuse these two techniques for motion capturing has
become known as annealed particle filter. In order to analyze these techniques, we rigorously derive in this paper two algorithms with annealing properties based on the mathematical theory of
interacting particle systems. Convergence results and sufficient parameter restrictions enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the
parameters on the performance in various experiments, including the tracking of articulated bodies from noisy measurements. Our results provide a general guidance on suitable parameter choices for
different applications.}, TYPE = {Research Report}, }
%0 Report %A Gall, Jürgen %A Potthoff, Jürgen %A Schnörr, Christoph %A Rosenhahn, Bodo %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer
Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications :
%G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-13C7-D %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %Z Review method: peer-reviewed %X Interacting and annealing are
two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting particle systems approximate a distribution of interest by a finite number of
particles where the particles interact between the time steps. In computer vision, they are commonly known as particle filters. Simulated annealing, on the other hand, is a global optimization method
derived from statistical mechanics. A recent heuristic approach to fuse these two techniques for motion capturing has become known as annealed particle filter. In order to analyze these techniques,
we rigorously derive in this paper two algorithms with annealing properties based on the mathematical theory of interacting particle systems. Convergence results and sufficient parameter restrictions
enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the parameters on the performance in various experiments, including the tracking of articulated
bodies from noisy measurements. Our results provide a general guidance on suitable parameter choices for different applications. %B Research Report
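As a rough illustration of the annealing idea discussed above (and not a reproduction of the two algorithms derived in the report), the toy Python sketch below re-weights a particle set with an increasing inverse temperature, resamples, and diffuses the particles with shrinking noise; the particle mean then concentrates around the global minimiser of a stand-in energy function. All parameter values and names are assumptions made for the example.

```python
# Toy annealed particle estimate: re-weight, resample, diffuse with shrinking noise.
import math, random

def energy(x):
    # Stand-in for an image-based energy (e.g. silhouette/colour mismatch).
    return (x - 3.0) ** 2

def annealed_estimate(n_particles=200, n_layers=8, seed=1):
    rng = random.Random(seed)
    particles = [rng.uniform(-10.0, 10.0) for _ in range(n_particles)]
    sigma = 2.0
    for layer in range(n_layers):
        beta = 0.5 * (layer + 1)                        # annealing schedule
        energies = [energy(x) for x in particles]
        e_min = min(energies)                           # stabilise the exponentials
        weights = [math.exp(-beta * (e - e_min)) for e in energies]
        total = sum(weights)
        weights = [w / total for w in weights]
        # multinomial resampling followed by diffusion with shrinking noise
        particles = rng.choices(particles, weights=weights, k=n_particles)
        particles = [x + rng.gauss(0.0, sigma) for x in particles]
        sigma *= 0.7
    return sum(particles) / n_particles

if __name__ == "__main__":
    print(annealed_estimate())   # clusters around the minimiser x = 3
```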
LFthreads: a lock-free thread library
A. Gidenstam and M. Papatriantafilou
Technical Report, 2007
This paper presents the synchronization in LFthreads, a thread library entirely based on lock-free methods, i.e. no spin-locks or similar synchronization mechanisms are employed in the implementation
of the multithreading. Since lock-freedom is highly desirable in multiprocessors/multicores due to its advantages in parallelism, fault-tolerance, convoy-avoidance and more, there is an increased
demand in lock-free methods in parallel applications, hence also in multiprocessor/multicore system services. This is why a lock-free multithreading library is important. To the best of our knowledge
LFthreads is the first thread library that provides a lock-free implementation of blocking synchronization primitives for application threads. Lock-free implementation of objects with blocking
semantics may sound like a contradicting goal. However, such objects have benefits: e.g. library operations that block and unblock threads on the same synchronization object can make progress in
parallel while maintaining the desired thread-level semantics and without having to wait for any "slow" operations among them. Besides, as no spin-locks or similar synchronization mechanisms are
employed, processors are always able to do useful work. As a consequence, applications, too, can enjoy enhanced parallelism and fault-tolerance. The synchronization in LFthreads is achieved by a new
method, which we call responsibility hand-off (RHO), that does not need any special kernel support.
@techreport{, TITLE = {{LFthreads}: a lock-free thread library}, AUTHOR = {Gidenstam, Anders and Papatriantafilou, Marina}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2007-1-003}, NUMBER = {MPI-I-2007-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {This paper presents
the synchronization in LFthreads, a thread library entirely based on lock-free methods, i.e. no spin-locks or similar synchronization mechanisms are employed in the implementation of the
multithreading. Since lock-freedom is highly desirable in multiprocessors/multicores due to its advantages in parallelism, fault-tolerance, convoy-avoidance and more, there is an increased demand in
lock-free methods in parallel applications, hence also in multiprocessor/multicore system services. This is why a lock-free multithreading library is important. To the best of our knowledge LFthreads
is the first thread library that provides a lock-free implementation of blocking synchronization primitives for application threads. Lock-free implementation of objects with blocking semantics may
sound like a contradicting goal. However, such objects have benefits: e.g. library operations that block and unblock threads on the same synchronization object can make progress in parallel while
maintaining the desired thread-level semantics and without having to wait for any ``slow'' operations among them. Besides, as no spin-locks or similar synchronization mechanisms are employed,
processors are always able to do useful work. As a consequence, applications, too, can enjoy enhanced parallelism and fault-tolerance. The synchronization in LFthreads is achieved by a new method,
which we call responsibility hand-off (RHO), that does not need any special kernel support.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Gidenstam, Anders %A Papatriantafilou, Marina %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T
LFthreads: a lock-free thread library : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-66F8-7 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-003 %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 36 p. %X This paper presents the synchronization in LFthreads, a thread library entirely based on lock-free methods, i.e. no
spin-locks or similar synchronization mechanisms are employed in the implementation of the multithreading. Since lock-freedom is highly desirable in multiprocessors/multicores due to its advantages
in parallelism, fault-tolerance, convoy-avoidance and more, there is an increased demand in lock-free methods in parallel applications, hence also in multiprocessor/multicore system services. This is
why a lock-free multithreading library is important. To the best of our knowledge LFthreads is the first thread library that provides a lock-free implementation of blocking synchronization primitives
for application threads. Lock-free implementation of objects with blocking semantics may sound like a contradicting goal. However, such objects have benefits: e.g. library operations that block and
unblock threads on the same synchronization object can make progress in parallel while maintaining the desired thread-level semantics and without having to wait for any "slow" operations among
them. Besides, as no spin-locks or similar synchronization mechanisms are employed, processors are always able to do useful work. As a consequence, applications, too, can enjoy enhanced parallelism
and fault-tolerance. The synchronization in LFthreads is achieved by a new method, which we call responsibility hand-off (RHO), that does not need any special kernel support. %B Research Report /
Max-Planck-Institut für Informatik
Global Illumination using Photon Ray Splatting
R. Herzog, V. Havran, S. Kinuwaki, K. Myszkowski and H.-P. Seidel
Technical Report, 2007
We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. A vast majority of existing global
illumination approaches either quickly computes an approximate solution, which may not be adequate for previews, or performs a much more time-consuming computation to obtain high-quality results for
the indirect illumination. Our method improves photon density estimation, which is an approximate solution, and leads to significantly better visual quality in particular for complex geometry, while
only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Our novel lighting computation is derived from basic
radiometric theory and requires only small changes to existing photon splatting approaches. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used
photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the
illumination near concave geometric features. In addition, we show that our splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance
caching for sparse sampling and filtering in image space.
@techreport{HerzogReport2007, TITLE = {Global Illumination using Photon Ray Splatting}, AUTHOR = {Herzog, Robert and Havran, Vlastimil and Kinuwaki, Shinichi and Myszkowski, Karol and Seidel,
Hans-Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-2007-4-007}, LOCALID = {Local-ID: C12573CC004A8E26-88919E23BF524D6AC12573C4005B8D41-HerzogReport2007}, INSTITUTION = {Max-Planck-Institut f{\"u}r
Informatik}, ADDRESS = {Saarbr{\"u}cken, Germany}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately
glossy scenes using density estimation techniques. A vast majority of existing global illumination approaches either quickly computes an approximate solution, which may not be adequate for previews,
or performs a much more time-consuming computation to obtain high-quality results for the indirect illumination. Our method improves photon density estimation, which is an approximate solution, and
leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to
use simpler search data structures. Our novel lighting computation is derived from basic radiometric theory and requires only small changes to existing photon splatting approaches. Since our density
estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This
holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our splatting technique can be
extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.}, TYPE = {Research Report}, }
%0 Report %A Herzog, Robert %A Havran, Vlastimil %A Kinuwaki, Shinichi %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max
Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer
Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Global Illumination using Photon Ray Splatting : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-000F-1F57-6 %F EDOC: 356502 %F OTHER: Local-ID: C12573CC004A8E26-88919E23BF524D6AC12573C4005B8D41-HerzogReport2007 %Y Max-Planck-Institut für Informatik %C Saarbrücken, Germany
%D 2007 %P 66 p. %X We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. A vast majority of
existing global illumination approaches either quickly computes an approximate solution, which may not be adequate for previews, or performs a much more time-consuming computation to obtain
high-quality results for the indirect illumination. Our method improves photon density estimation, which is an approximate solution, and leads to significantly better visual quality in particular for
complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Our novel lighting
computation is derived from basic radiometric theory and requires only small changes to existing photon splatting approaches. Since our density estimation is carried out in ray space rather than on
surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where
photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our splatting technique can be extended to handle moderately glossy surfaces and can be
combined with traditional irradiance caching for sparse sampling and filtering in image space. %B Research Report
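The central idea of performing density estimation over photon rays rather than photon hit points can be conveyed with a deliberately simple flatland sketch: the contribution of each photon ray to a query point is weighted by a kernel over the point-to-segment distance. The code below is only meant to illustrate that idea; the kernel choice, data layout, and all names are assumptions of the example and do not reflect the report's renderer.

```python
# Flatland sketch of density estimation in ray space: each photon ray (a segment
# carrying some power) contributes to a query point via a kernel over the
# point-to-segment distance, instead of a kernel over photon hit points.
import math

def point_segment_distance(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0.0 else ((px - ax) * dx + (py - ay) * dy) / denom
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def epanechnikov(d, radius):
    u = d / radius
    return 0.75 * (1.0 - u * u) / radius if u < 1.0 else 0.0

def estimate(query, photon_rays, radius=0.5):
    """photon_rays: list of (start, end, power) tuples."""
    total = 0.0
    for a, b, power in photon_rays:
        w = epanechnikov(point_segment_distance(query, a, b), radius)
        total += w * power
    return total

if __name__ == "__main__":
    rays = [((0, 0), (2, 0), 1.0), ((0, 1), (2, 1), 0.5), ((5, 5), (6, 5), 1.0)]
    print(estimate((1.0, 0.1), rays))   # dominated by the nearby horizontal ray
```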
Superposition for Finite Domains
T. Hillenbrand and C. Weidenbach
Technical Report, 2007
@techreport{HillenbrandWeidenbach2007, TITLE = {Superposition for Finite Domains}, AUTHOR = {Hillenbrand, Thomas and Weidenbach, Christoph}, LANGUAGE = {eng}, NUMBER = {MPI-I-2007-RG1-002}, LOCALID =
{Local-ID: C12573CC004A8E26-1CF84BA6556F8748C12572C1002F229B-HillenbrandWeidenbach2007Report}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken, Germany}, YEAR =
{2007}, DATE = {2007}, TYPE = {Max-Planck-Institut für Informatik / Research Report}, }
%0 Report %A Hillenbrand, Thomas %A Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Superposition
for Finite Domains : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-20DA-8 %F EDOC: 356455 %F OTHER: Local-ID:
C12573CC004A8E26-1CF84BA6556F8748C12572C1002F229B-HillenbrandWeidenbach2007Report %Y Max-Planck-Institut für Informatik %C Saarbrücken, Germany %D 2007 %P 25 p. %B Max-Planck-Institut für Informatik / Research Report
Efficient Surface Reconstruction for Piecewise Smooth Objects
P. Jenke, M. Wand and W. Strasser
Technical Report, 2007
@techreport{Jenke2007, TITLE = {Efficient Surface Reconstruction for Piecewise Smooth Objects}, AUTHOR = {Jenke, Philipp and Wand, Michael and Strasser, Wolfgang}, LANGUAGE = {eng}, ISSN =
{0946-3852}, URL = {urn:nbn:de:bsz:21-opus-32001}, NUMBER = {WSI-2007-05}, INSTITUTION = {Wilhelm-Schickard-Institut / Universit{\"a}t T{\"u}bingen}, ADDRESS = {T{\"u}bingen}, YEAR = {2007}, DATE =
{2007}, TYPE = {WSI}, VOLUME = {2007-05}, }
%0 Report %A Jenke, Philipp %A Wand, Michael %A Strasser, Wolfgang %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Efficient Surface
Reconstruction for Piecewise Smooth Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-D3F7-A %U urn:nbn:de:bsz:21-opus-32001 %Y Wilhelm-Schickard-Institut / Universität Tü
bingen %C Tübingen %D 2007 %P 17 p. %B WSI %N 2007-05 %@ false %U http://nbn-resolving.de/urn:nbn:de:bsz:21-opus-32001
NAGA: Searching and Ranking Knowledge
G. Kasneci, F. M. Suchanek, G. Ifrim, M. Ramanath and G. Weikum
Technical Report, 2007
The Web has the potential to become the world's largest knowledge base. In order to unleash this potential, the wealth of information available on the web needs to be extracted and organized. There
is a need for new querying techniques that are simple yet more expressive than those provided by standard keyword-based search engines. Search for knowledge rather than Web pages needs to consider
inherent semantic structures like entities (person, organization, etc.) and relationships (isA, locatedIn, etc.). In this paper, we propose NAGA, a new semantic search engine. NAGA's knowledge base, which is organized as a graph with typed edges, consists of millions of entities and relationships automatically extracted from Web-based corpora. A query language capable of expressing keyword search for the casual user as well as graph queries with regular expressions for the expert enables the formulation of queries with additional semantic information. We introduce a novel scoring model, based on the principles of generative language models, which formalizes several notions like confidence, informativeness and compactness and uses them to rank query results. We demonstrate NAGA's superior result quality over current search engines by conducting a comprehensive evaluation, including user assessments, for advanced queries.
@techreport{TechReportKSIRW-2007, TITLE = {{NAGA}: Searching and Ranking Knowledge}, AUTHOR = {Kasneci, Gjergji and Suchanek, Fabian M. and Ifrim, Georgiana and Ramanath, Maya and Weikum, Gerhard},
LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2007-5-001}, LOCALID = {Local-ID: C12573CC004A8E26-0C33A6E805909705C12572AE003DA15B-TechReportKSIRW-2007}, INSTITUTION = {Max-Planck-Institut f{\
"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken, Germany}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {The Web has the potential to become the world's largest knowledge base. In order to unleash this
potential, the wealth of information available on the web needs to be extracted and organized. There is a need for new querying techniques that are simple yet more expressive than those provided by
standard keyword-based search engines. Search for knowledge rather than Web pages needs to consider inherent semantic structures like entities (person, organization, etc.) and relationships
(isA,locatedIn, etc.). In this paper, we propose {NAGA}, a new semantic search engine. {NAGA}'s knowledge base, which is organized as a graph with typed edges, consists of millions of entities and
relationships automatically extracted fromWeb-based corpora. A query language capable of expressing keyword search for the casual user as well as graph queries with regular expressions for the
expert, enables the formulation of queries with additional semantic information. We introduce a novel scoring model, based on the principles of generative language models, which formalizes several
notions like confidence, informativeness and compactness and uses them to rank query results. We demonstrate {NAGA}'s superior result quality over current search engines by conducting a comprehensive
evaluation, including user assessments, for advanced queries.}, TYPE = {Research Report}, }
%0 Report %A Kasneci, Gjergji %A Suchanek, Fabian M. %A Ifrim, Georgiana %A Ramanath, Maya %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases
and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max
Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T NAGA: Searching and Ranking Knowledge : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1FFC-1 %F
EDOC: 356470 %@ 0946-011X %F OTHER: Local-ID: C12573CC004A8E26-0C33A6E805909705C12572AE003DA15B-TechReportKSIRW-2007 %Y Max-Planck-Institut für Informatik %C Saarbrücken, Germany %D 2007 %P
42 p. %X The Web has the potential to become the world's largest knowledge base. In order to unleash this potential, the wealth of information available on the web needs to be extracted and
organized. There is a need for new querying techniques that are simple yet more expressive than those provided by standard keyword-based search engines. Search for knowledge rather than Web pages
needs to consider inherent semantic structures like entities (person, organization, etc.) and relationships (isA,locatedIn, etc.). In this paper, we propose {NAGA}, a new semantic search engine.
{NAGA}'s knowledge base, which is organized as a graph with typed edges, consists of millions of entities and relationships automatically extracted fromWeb-based corpora. A query language capable of
expressing keyword search for the casual user as well as graph queries with regular expressions for the expert, enables the formulation of queries with additional semantic information. We introduce a
novel scoring model, based on the principles of generative language models, which formalizes several notions like confidence, informativeness and compactness and uses them to rank query results. We
demonstrate {NAGA}'s superior result quality over current search engines by conducting a comprehensive evaluation, including user assessments, for advanced queries. %B Research Report
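The abstract mentions a scoring model combining confidence, informativeness and compactness. The snippet below is not NAGA's language-model-based scoring; it is merely a toy combination of the three notions, with all weights and field names invented, to show how such a ranking could be wired up in principle.

```python
# Toy ranking of answer paths by confidence, informativeness and compactness.
# Purely illustrative; not the scoring model of the report.
import math

def score_answer(path, alpha=0.6, beta=0.3):
    """path: list of facts, each a dict with 'confidence' in (0, 1] and
    'frequency' (how common the fact pattern is in the corpus)."""
    confidence = 1.0
    informativeness = 0.0
    for fact in path:
        confidence *= fact["confidence"]
        informativeness += -math.log(fact["frequency"])   # rarer facts are more informative
    informativeness /= len(path)
    informativeness /= informativeness + 1.0               # squash into (0, 1)
    compactness = 1.0 / len(path)                          # shorter answer paths preferred
    return alpha * confidence + beta * informativeness + (1 - alpha - beta) * compactness

if __name__ == "__main__":
    short_sure = [{"confidence": 0.95, "frequency": 0.05}]
    long_vague = [{"confidence": 0.7, "frequency": 0.4},
                  {"confidence": 0.6, "frequency": 0.3}]
    print(score_answer(short_sure) > score_answer(long_vague))   # True
```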
Construction of smooth maps with mean value coordinates
T. Langer and H.-P. Seidel
Technical Report, 2007
Bernstein polynomials are a classical tool in Computer Aided Design to create smooth maps with a high degree of local control. They are used for the construction of Bézier surfaces, free-form deformations, and many other applications. However, classical Bernstein polynomials are only defined for simplices and parallelepipeds. In general, these cannot directly capture the shape of arbitrary objects. Instead, a tessellation of the desired domain has to be done first. We construct smooth maps on arbitrary sets of polytopes such that the restriction to each of the polytopes is a
Bernstein polynomial in mean value coordinates (or any other generalized barycentric coordinates). In particular, we show how smooth transitions between different domain polytopes can be ensured.
@techreport{LangerSeidel2007, TITLE = {Construction of smooth maps with mean value coordinates}, AUTHOR = {Langer, Torsten and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-002}, NUMBER = {MPI-I-2007-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007},
DATE = {2007}, ABSTRACT = {Bernstein polynomials are a classical tool in Computer Aided Design to create smooth maps with a high degree of local control. They are used for the construction of B\
'ezier surfaces, free-form deformations, and many other applications. However, classical Bernstein polynomials are only defined for simplices and parallelepipeds. These can in general not directly
capture the shape of arbitrary objects. Instead, a tessellation of the desired domain has to be done first. We construct smooth maps on arbitrary sets of polytopes such that the restriction to each
of the polytopes is a Bernstein polynomial in mean value coordinates (or any other generalized barycentric coordinates). In particular, we show how smooth transitions between different domain
polytopes can be ensured.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Langer, Torsten %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Construction of smooth
maps with mean value coordinates : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-66DF-1 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 22 p. %X Bernstein polynomials are a classical tool in Computer Aided Design to create smooth maps with a high degree of local control. They are used
for the construction of Bézier surfaces, free-form deformations, and many other applications. However, classical Bernstein polynomials are only defined for simplices and parallelepipeds. These can
in general not directly capture the shape of arbitrary objects. Instead, a tessellation of the desired domain has to be done first. We construct smooth maps on arbitrary sets of polytopes such that
the restriction to each of the polytopes is a Bernstein polynomial in mean value coordinates (or any other generalized barycentric coordinates). In particular, we show how smooth transitions between
different domain polytopes can be ensured. %B Research Report / Max-Planck-Institut für Informatik
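Since the construction above rests on generalized barycentric coordinates, a small self-contained example of computing mean value coordinates for a point inside a polygon may help. The sketch below uses Floater's tangent-half-angle formula and checks the linear-precision property; it is an illustration only, not the authors' code.

```python
# Mean value coordinates of a point inside a simple polygon (Floater's formula):
# w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - p|, then normalise to sum to 1.
import math

def mean_value_coordinates(p, poly):
    n = len(poly)
    px, py = p
    r = [math.hypot(vx - px, vy - py) for vx, vy in poly]
    angles = []
    for i in range(n):
        ax, ay = poly[i][0] - px, poly[i][1] - py
        bx, by = poly[(i + 1) % n][0] - px, poly[(i + 1) % n][1] - py
        angles.append(math.atan2(ax * by - ay * bx, ax * bx + ay * by))
    w = []
    for i in range(n):
        t_prev = math.tan(angles[i - 1] / 2.0)      # angle of the edge ending at v_i
        t_next = math.tan(angles[i] / 2.0)          # angle of the edge starting at v_i
        w.append((t_prev + t_next) / r[i])
    total = sum(w)
    return [wi / total for wi in w]

if __name__ == "__main__":
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    lam = mean_value_coordinates((0.25, 0.5), square)
    # coordinates sum to 1 and reproduce the query point (linear precision)
    x = sum(l * v[0] for l, v in zip(lam, square))
    y = sum(l * v[1] for l, v in zip(lam, square))
    print([round(l, 4) for l in lam], round(x, 4), round(y, 4))
```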
A volumetric approach to interactive shape editing
C. Stoll, E. de Aguiar, C. Theobalt and H.-P. Seidel
Technical Report, 2007
We present a novel approach to real-time shape editing that produces physically plausible deformations using an efficient and easy-to-implement volumetric approach. Our algorithm alternates between a
linear tetrahedral Laplacian deformation step and a differential update in which rotational transformations are approximated. By means of this iterative process we can achieve non-linear deformation
results while having to solve only linear equation systems. The differential update step relies on estimating the rotational component of the deformation relative to the rest pose. This makes the
method very stable as the shape can be reverted to its rest pose even after extreme deformations. Only a few point handles or area handles imposing an orientation are needed to achieve high quality
deformations, which makes the approach intuitive to use. We show that our technique is well suited for interactive shape manipulation and also provides an elegant way to animate models with captured
motion data.
@techreport{Stoll2007, TITLE = {A volumetric approach to interactive shape editing}, AUTHOR = {Stoll, Carsten and de Aguiar, Edilson and Theobalt, Christian and Seidel, Hans-Peter}, LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-004}, NUMBER = {MPI-I-2007-4-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR
= {2007}, DATE = {2007}, ABSTRACT = {We present a novel approach to real-time shape editing that produces physically plausible deformations using an efficient and easy-to-implement volumetric
approach. Our algorithm alternates between a linear tetrahedral Laplacian deformation step and a differential update in which rotational transformations are approximated. By means of this iterative
process we can achieve non-linear deformation results while having to solve only linear equation systems. The differential update step relies on estimating the rotational component of the deformation
relative to the rest pose. This makes the method very stable as the shape can be reverted to its rest pose even after extreme deformations. Only a few point handles or area handles imposing an
orientation are needed to achieve high quality deformations, which makes the approach intuitive to use. We show that our technique is well suited for interactive shape manipulation and also provides
an elegant way to animate models with captured motion data.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Stoll, Carsten %A de Aguiar, Edilson %A Theobalt, Christian %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics,
Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A volumetric approach to interactive shape editing : %G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66D6-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-004 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D
2007 %P 28 p. %X We present a novel approach to real-time shape editing that produces physically plausible deformations using an efficient and easy-to-implement volumetric approach. Our algorithm
alternates between a linear tetrahedral Laplacian deformation step and a differential update in which rotational transformations are approximated. By means of this iterative process we can achieve
non-linear deformation results while having to solve only linear equation systems. The differential update step relies on estimating the rotational component of the deformation relative to the rest
pose. This makes the method very stable as the shape can be reverted to its rest pose even after extreme deformations. Only a few point handles or area handles imposing an orientation are needed to
achieve high quality deformations, which makes the approach intuitive to use. We show that our technique is well suited for interactive shape manipulation and also provides an elegant way to animate
models with captured motion data. %B Research Report / Max-Planck-Institut für Informatik
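To give a concrete feel for the alternation described above (a linear Laplacian solve followed by an update of the approximated rotations), here is a tiny 2-D sketch on a polyline rather than a tetrahedral volume. Everything about it, from the uniform edge weights to the way per-vertex rotations are averaged from incident edges, is a simplifying assumption made for illustration and is not the report's method.

```python
# Tiny 2-D sketch of alternating (1) a linear Laplacian solve for positions with
# (2) a re-estimation of per-vertex rotations used to rotate the differential
# coordinates. Illustrative only (a curve instead of a tetrahedral mesh).
import numpy as np

rest = np.array([[float(i), 0.0] for i in range(6)])          # rest pose: a straight line
edges = [(i, i + 1) for i in range(5)]
handles = {0: np.array([0.0, 0.0]), 5: np.array([3.5, 3.5])}  # one fixed, one moved handle

n = len(rest)
L = np.zeros((n, n))                                          # uniform graph Laplacian
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1; L[i, j] -= 1; L[j, i] -= 1

def vertex_rotations(pos):
    """Average incident-edge rotation per vertex, returned as 2x2 matrices."""
    angles = np.zeros(n); counts = np.zeros(n)
    for i, j in edges:
        a_rest = np.arctan2(*(rest[j] - rest[i])[::-1])
        a_cur = np.arctan2(*(pos[j] - pos[i])[::-1])
        for v in (i, j):
            angles[v] += a_cur - a_rest; counts[v] += 1
    angles /= counts
    return [np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]) for a in angles]

pos = rest.copy()
for _ in range(10):
    R = vertex_rotations(pos)
    delta = np.zeros((n, 2))                                  # rotated differential coordinates
    for i, j in edges:
        d = rest[i] - rest[j]
        delta[i] += 0.5 * (R[i] + R[j]) @ d
        delta[j] -= 0.5 * (R[i] + R[j]) @ d
    A = L.copy(); b = delta.copy()
    for h, target in handles.items():                         # hard positional constraints
        A[h, :] = 0.0; A[h, h] = 1.0; b[h] = target
    pos = np.linalg.solve(A, b)

print(np.round(pos, 3))   # a smoothly deformed curve between the two handles
```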
Yago: a large ontology from Wikipedia and WordNet
F. Suchanek, G. Kasneci and G. Weikum
Technical Report, 2007
This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently
contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from
the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95% -- as proven by an
extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility
with RDFS. A powerful query model facilitates access to YAGO's data.
@techreport{, TITLE = {Yago: a large ontology from Wikipedia and {WordNet}}, AUTHOR = {Suchanek, Fabian and Kasneci, Gjergji and Weikum, Gerhard}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-5-003}, NUMBER = {MPI-I-2007-5-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007},
DATE = {2007}, ABSTRACT = {This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and
relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO
have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95%
-- as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while
maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Suchanek, Fabian %A Kasneci, Gjergji %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Yago: a large ontology from Wikipedia and WordNet : %G eng %U http://hdl.handle.net/11858
/00-001M-0000-0014-66CA-F %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-5-003 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2007 %P 67 p. %X This article
presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than
1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system
and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95% -- as proven by an extensive evaluation
study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful
query model facilitates access to YAGO's data. %B Research Report / Max-Planck-Institut für Informatik
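As a minimal illustration of the kind of data model the abstract describes (typed entities, binary facts, and type checking before a fact is accepted), the following Python sketch stores facts as subject-relation-object triples and answers simple pattern queries. The schema and example facts are invented for the illustration and are not YAGO's actual schema or query model.

```python
# Minimal typed fact store: facts are (subject, relation, object) triples, and a
# fact is only accepted if subject and object carry the relation's domain/range types.
SCHEMA = {                        # relation -> (domain type, range type)
    "bornIn":    ("person", "city"),
    "locatedIn": ("city", "country"),
}

class FactStore:
    def __init__(self):
        self.types = {}           # entity -> set of classes
        self.facts = set()        # (subject, relation, object)

    def add_type(self, entity, cls):
        self.types.setdefault(entity, set()).add(cls)

    def add_fact(self, s, r, o):
        dom, rng = SCHEMA[r]
        if dom not in self.types.get(s, ()) or rng not in self.types.get(o, ()):
            return False          # rejected by type checking
        self.facts.add((s, r, o))
        return True

    def query(self, s=None, r=None, o=None):
        return [f for f in self.facts
                if (s is None or f[0] == s)
                and (r is None or f[1] == r)
                and (o is None or f[2] == o)]

if __name__ == "__main__":
    kb = FactStore()
    kb.add_type("Max_Planck", "person")
    kb.add_type("Kiel", "city")
    kb.add_type("Germany", "country")
    kb.add_fact("Max_Planck", "bornIn", "Kiel")
    kb.add_fact("Kiel", "locatedIn", "Germany")
    print(kb.add_fact("Kiel", "bornIn", "Germany"))   # False: fails the type check
    print(kb.query(r="bornIn"))                       # [('Max_Planck', 'bornIn', 'Kiel')]
```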
Gesture modeling and animation by imitation
I. Albrecht, M. Kipp, M. P. Neff and H.-P. Seidel
Technical Report, 2006
Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, they are very difficult to generate, even more so when a
unique, individual movement style is required. We present a system that is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process
starts with video of a performer whose gesturing style we wish to animate. A tool-assisted annotation process is first performed on the video, from which a statistical model of the person's
particular gesturing style is built. Using this model and tagged input text, our generation algorithm creates a gesture script appropriate for the given text. As opposed to isolated singleton
gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with more detail
and prepares a refined description of the motion. An animation subengine can then generate either kinematic or physically simulated motion based on this description. The system is capable of creating
animation that replicates a particular performance in the video corpus, generating new animation for the spoken text that is consistent with the given performer's style and creating performances of a
given text sample in the style of different performers.
@techreport{AlbrechtKippNeffSeidel2006, TITLE = {Gesture modeling and animation by imitation}, AUTHOR = {Albrecht, Irene and Kipp, Michael and Neff, Michael Paul and Seidel, Hans-Peter}, LANGUAGE =
{eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-008}, NUMBER = {MPI-I-2006-4-008}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}
cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, they are very
difficult to generate, even more so when a unique, individual movement style is required. We present a system that is capable of producing full-body gesture animation for given input text in the
style of a particular performer. Our process starts with video of a performer whose gesturing style we wish to animate. A tool-assisted annotation process is first performed on the video, from which
a statistical model of the person's particular gesturing style is built. Using this model and tagged input text, our generation algorithm creates a gesture script appropriate for the given text. As
opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture
description with more detail and prepares a refined description of the motion. An animation subengine can then generate either kinematic or physically simulated motion based on this description. The
system is capable of creating animation that replicates a particular performance in the video corpus, generating new animation for the spoken text that is consistent with the given performer's style
and creating performances of a given text sample in the style of different performers.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Albrecht, Irene %A Kipp, Michael %A Neff, Michael Paul %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Multimodal Computing and Interaction Computer
Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Gesture modeling and animation by imitation : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-6979-2 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-008 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 62 p. %X Animated
characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, they are very difficult to generate, even more so when a unique,
individual movement style is required. We present a system that is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts
with video of a performer whose gesturing style we wish to animate. A tool-assisted annotation process is first performed on the video, from which a statistical model of the person's particular
gesturing style is built. Using this model and tagged input text, our generation algorithm creates a gesture script appropriate for the given text. As opposed to isolated singleton gestures, our
gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with more detail and prepares a
refined description of the motion. An animation subengine can then generate either kinematic or physically simulated motion based on this description. The system is capable of creating animation that
replicates a particular performance in the video corpus, generating new animation for the spoken text that is consistent with the given performer's style and creating performances of a given text
sample in the style of different performers. %B Research Report / Max-Planck-Institut für Informatik
A neighborhood-based approach for clustering of linked document collections
R. Angelova and S. Siersdorfer
Technical Report, 2006
This technical report addresses the problem of automatically structuring linked document collections by using clustering. In contrast to traditional clustering, we study the clustering problem in the
light of available link structure information for the data set (e.g., hyperlinks among web documents or co-authorship among bibliographic data entries). Our approach is based on iterative relaxation
of cluster assignments, and can be built on top of any clustering algorithm (e.g., k-means or DBSCAN). These techniques result in higher cluster purity, better overall accuracy, and make
self-organization more robust. Our comprehensive experiments on three different real-world corpora demonstrate the benefits of our approach.
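To make the iterative relaxation idea concrete, the following sketch (ours, not the authors' implementation; the weighting factor alpha, the cosine similarity and the data layout are assumptions) recomputes each document's cluster label from a mix of content similarity and the labels of its linked neighbours:

import numpy as np

def relax_assignments(docs, links, assign, centroids, alpha=0.7):
    # docs: doc_id -> feature vector; links: doc_id -> set of linked doc_ids
    # assign: doc_id -> current cluster id; centroids: cluster id -> vector
    new_assign = {}
    for d, vec in docs.items():
        neighbors = links.get(d, set())
        best_cluster, best_score = None, float("-inf")
        for c, centroid in centroids.items():
            # content term: cosine similarity between document and cluster centroid
            content = float(np.dot(vec, centroid) /
                            (np.linalg.norm(vec) * np.linalg.norm(centroid) + 1e-12))
            # link term: fraction of linked neighbours currently assigned to cluster c
            link = (sum(assign.get(n) == c for n in neighbors) / len(neighbors)
                    if neighbors else 0.0)
            score = alpha * content + (1 - alpha) * link
            if score > best_score:
                best_cluster, best_score = c, score
        new_assign[d] = best_cluster
    return new_assign

A base clustering (e.g. k-means) supplies the initial assignments and centroids; repeating this step until the assignments stabilise is the relaxation loop the abstract refers to.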
@techreport{AngelovaSiersdorfer2006, TITLE = {A neighborhood-based approach for clustering of linked document collections}, AUTHOR = {Angelova, Ralitsa and Siersdorfer, Stefan}, LANGUAGE = {eng}, URL
= {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-005}, NUMBER = {MPI-I-2006-5-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR =
{2006}, DATE = {2006}, ABSTRACT = {This technical report addresses the problem of automatically structuring linked document collections by using clustering. In contrast to traditional clustering, we
study the clustering problem in the light of available link structure information for the data set (e.g., hyperlinks among web documents or co-authorship among bibliographic data entries). Our
approach is based on iterative relaxation of cluster assignments, and can be built on top of any clustering algorithm (e.g., k-means or DBSCAN). These techniques result in higher cluster purity,
better overall accuracy, and make self-organization more robust. Our comprehensive experiments on three different real-world corpora demonstrate the benefits of our approach.}, TYPE = {Research
Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Angelova, Ralitsa %A Siersdorfer, Stefan %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck
Society %T A neighborhood-based approach for clustering of linked document collections : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-670D-4 %U http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2006-5-005 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 32 p. %X This technical report addresses the problem of automatically structuring linked
document collections by using clustering. In contrast to traditional clustering, we study the clustering problem in the light of available link structure information for the data set (e.g.,
hyperlinks among web documents or co-authorship among bibliographic data entries). Our approach is based on iterative relaxation of cluster assignments, and can be built on top of any clustering
algorithm (e.g., k-means or DBSCAN). These techniques result in higher cluster purity, better overall accuracy, and make self-organization more robust. Our comprehensive experiments on three
different real-world corpora demonstrate the benefits of our approach. %B Research Report / Max-Planck-Institut für Informatik
Output-sensitive autocompletion search
H. Bast, I. Weber and C. W. Mortensen
Technical Report, 2006
We consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every keystroke display those completions of the last query word that would lead
to the best hits, and also display the best such hits. The following problem is at the core of this feature: for a fixed document collection, given a set $D$ of documents, and an alphabetical range
$W$ of words, compute the set of all word-in-document pairs $(w,d)$ from the collection such that $w \in W$ and $d\in D$. We present a new data structure with the help of which such autocompletion
queries can be processed, on the average, in time linear in the input plus output size, independent of the size of the underlying document collection. At the same time, our data structure uses no
more space than an inverted index. Actual query processing times on a large test collection correlate almost perfectly with our theoretical bound.
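For comparison with the output-sensitive data structure described above, a naive baseline over a plain inverted index answers the same query but touches every posting list in the word range, regardless of how many hits survive the filter by $D$ (a sketch under our own assumptions about the index layout, not the report's structure):

def autocomplete_baseline(inverted_index, word_range, candidate_docs):
    # inverted_index: word -> list of doc ids; word_range: (low, high) word strings
    # candidate_docs: set D of doc ids matching the query typed so far
    low, high = word_range
    hits = []
    for word in sorted(inverted_index):
        if low <= word <= high:
            for doc in inverted_index[word]:
                if doc in candidate_docs:
                    hits.append((word, doc))   # a word-in-document pair (w, d)
    return hits

The report's contribution is a structure whose query time depends, on average, only on the input plus output size of such queries rather than on the collection size, while using no more space than this inverted index.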
@techreport{, TITLE = {Output-sensitive autocompletion search}, AUTHOR = {Bast, Holger and Weber, Ingmar and Mortensen, Christian Worm}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet
/reports.nsf/NumberView/2006-1-007}, NUMBER = {MPI-I-2006-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We
consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every keystroke display those completions of the last query word that would lead to
the best hits, and also display the best such hits. The following problem is at the core of this feature: for a fixed document collection, given a set $D$ of documents, and an alphabetical range $W$
of words, compute the set of all word-in-document pairs $(w,d)$ from the collection such that $w \in W$ and $d\in D$. We present a new data structure with the help of which such autocompletion
queries can be processed, on the average, in time linear in the input plus output size, independent of the size of the underlying document collection. At the same time, our data structure uses no
more space than an inverted index. Actual query processing times on a large test collection correlate almost perfectly with our theoretical bound.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Bast, Holger %A Weber, Ingmar %A Mortensen, Christian Worm %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max
Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Output-sensitive autocompletion search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-681A-D %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-007 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 17 p. %X We consider the following autocompletion search
scenario: imagine a user of a search engine typing a query; then with every keystroke display those completions of the last query word that would lead to the best hits, and also display the best such
hits. The following problem is at the core of this feature: for a fixed document collection, given a set $D$ of documents, and an alphabetical range $W$ of words, compute the set of all
word-in-document pairs $(w,d)$ from the collection such that $w \in W$ and $d\in D$. We present a new data structure with the help of which such autocompletion queries can be processed, on the
average, in time linear in the input plus output size, independent of the size of the underlying document collection. At the same time, our data structure uses no more space than an inverted index.
Actual query processing times on a large test collection correlate almost perfectly with our theoretical bound. %B Research Report / Max-Planck-Institut für Informatik
IO-Top-k: index-access optimized top-k query processing
H. Bast, D. Majumdar, R. Schenkel, C. Theobalt and G. Weikum
Technical Report, 2006
Top-k query processing is an important building block for ranked retrieval, with applications ranging from text and data integration to distributed aggregation of network logs and sensor data. Top-k
queries operate on index lists for a query's elementary conditions and aggregate scores for result candidates. One of the best implementation methods in this setting is the family of threshold
algorithms, which aim to terminate the index scans as early as possible based on lower and upper bounds for the final scores of result candidates. This procedure performs sequential disk accesses for
sorted index scans, but also has the option of performing random accesses to resolve score uncertainty. This entails scheduling for the two kinds of accesses: 1) the prioritization of different index
lists in the sequential accesses, and 2) the decision on when to perform random accesses and for which candidates. The prior literature has studied some of these scheduling issues, but only for each
of the two access types in isolation. The current paper takes an integrated view of the scheduling issues and develops novel strategies that outperform prior proposals by a large margin. Our main
contributions are new, principled, scheduling methods based on a Knapsack-related optimization for sequential accesses and a cost model for random accesses. The methods can be further boosted by
harnessing probabilistic estimators for scores, selectivities, and index list correlations. We also discuss efficient implementation techniques for the underlying data structures. In performance
experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB), our methods achieved significant performance gains compared to the best previously known methods: a factor of up
to 3 in terms of execution costs, and a factor of 5 in terms of absolute run-times of our implementation. Our best techniques are close to a lower bound for the execution cost of the considered class
of threshold algorithms.
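The threshold-algorithm family that this report optimises can be sketched in its textbook form as follows (a minimal illustration assuming non-negative scores and summation as the aggregation function; the report's actual contribution, the scheduling of sequential vs. random accesses, is deliberately not modelled here):

import heapq

def threshold_topk(sorted_lists, score_maps, k):
    # sorted_lists[i]: list of (doc, score) pairs sorted by score descending (sorted access)
    # score_maps[i]:   dict doc -> score for the same index list (random access)
    best = {}                                    # doc -> fully aggregated score
    max_depth = max(len(lst) for lst in sorted_lists)
    for depth in range(max_depth):
        last_seen = []
        for i, lst in enumerate(sorted_lists):
            if depth < len(lst):
                doc, score = lst[depth]
                last_seen.append(score)
                if doc not in best:
                    # resolve the doc's full score with random accesses to all lists
                    best[doc] = sum(m.get(doc, 0.0) for m in score_maps)
        threshold = sum(last_seen)               # upper bound for any unseen document
        top = heapq.nlargest(k, best.values())
        if len(top) == k and top[-1] >= threshold:
            break                                # early termination: top-k is final
    return heapq.nlargest(k, best.items(), key=lambda item: item[1])

The scheduling questions studied in the report concern exactly the two access patterns visible here: how deep to scan each sorted list, and when it pays off to issue the random accesses.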
@techreport{BastMajumdarSchenkelTheobaldWeikum2006, TITLE = {{IO}-Top-k: index-access optimized top-k query processing}, AUTHOR = {Bast, Holger and Majumdar, Debapriyo and Schenkel, Ralf and
Theobalt, Christian and Weikum, Gerhard}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-002}, NUMBER = {MPI-I-2006-5-002}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Top-k query processing is an important building block for ranked retrieval, with
applications ranging from text and data integration to distributed aggregation of network logs and sensor data. Top-k queries operate on index lists for a query's elementary conditions and aggregate
scores for result candidates. One of the best implementation methods in this setting is the family of threshold algorithms, which aim to terminate the index scans as early as possible based on lower
and upper bounds for the final scores of result candidates. This procedure performs sequential disk accesses for sorted index scans, but also has the option of performing random accesses to resolve
score uncertainty. This entails scheduling for the two kinds of accesses: 1) the prioritization of different index lists in the sequential accesses, and 2) the decision on when to perform random
accesses and for which candidates. The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation. The current paper takes an integrated view
of the scheduling issues and develops novel strategies that outperform prior proposals by a large margin. Our main contributions are new, principled, scheduling methods based on a Knapsack-related
optimization for sequential accesses and a cost model for random accesses. The methods can be further boosted by harnessing probabilistic estimators for scores, selectivities, and index list
correlations. We also discuss efficient implementation techniques for the underlying data structures. In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and
IMDB), our methods achieved significant performance gains compared to the best previously known methods: a factor of up to 3 in terms of execution costs, and a factor of 5 in terms of absolute
run-times of our implementation. Our best techniques are close to a lower bound for the execution cost of the considered class of threshold algorithms.}, TYPE = {Research Report / Max-Planck-Institut
für Informatik}, }
%0 Report %A Bast, Holger %A Majumdar, Debapriyo %A Schenkel, Ralf %A Theobalt, Christian %A Weikum, Gerhard %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and
Complexity, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Databases and
Information Systems, MPI for Informatics, Max Planck Society %T IO-Top-k: index-access optimized top-k query processing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6716-E %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 49 p. %X Top-k query processing is an important building
block for ranked retrieval, with applications ranging from text and data integration to distributed aggregation of network logs and sensor data. Top-k queries operate on index lists for a query's
elementary conditions and aggregate scores for result candidates. One of the best implementation methods in this setting is the family of threshold algorithms, which aim to terminate the index scans
as early as possible based on lower and upper bounds for the final scores of result candidates. This procedure performs sequential disk accesses for sorted index scans, but also has the option of
performing random accesses to resolve score uncertainty. This entails scheduling for the two kinds of accesses: 1) the prioritization of different index lists in the sequential accesses, and 2) the
decision on when to perform random accesses and for which candidates. The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation. The
current paper takes an integrated view of the scheduling issues and develops novel strategies that outperform prior proposals by a large margin. Our main contributions are new, principled, scheduling
methods based on a Knapsack-related optimization for sequential accesses and a cost model for random accesses. The methods can be further boosted by harnessing probabilistic estimators for scores,
selectivities, and index list correlations. We also discuss efficient implementation techniques for the underlying data structures. In performance experiments with three different datasets (TREC
Terabyte, HTTP server logs, and IMDB), our methods achieved significant performance gains compared to the best previously known methods: a factor of up to 3 in terms of execution costs, and a factor
of 5 in terms of absolute run-times of our implementation. Our best techniques are close to a lower bound for the execution cost of the considered class of threshold algorithms. %B Research Report /
Max-Planck-Institut für Informatik
Mean value coordinates for arbitrary spherical polygons and polyhedra in $\mathbb{R}^{3}$
A. Belyaev, T. Langer and H.-P. Seidel
Technical Report, 2006
Since their introduction, mean value coordinates enjoy ever increasing popularity in computer graphics and computational mathematics because they exhibit a variety of good properties. Most
importantly, they are defined in the whole plane which allows interpolation and extrapolation without restrictions. Recently, mean value coordinates were generalized to spheres and to $\mathbb{R}^{3}
$. We show that these spherical and 3D mean value coordinates are well-defined on the whole sphere and the whole space $\mathbb{R}^{3}$, respectively.
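For orientation, the planar mean value coordinates that are being generalized here can be written as follows (standard formulation; the notation is ours, not the report's). For a point $x$ and a polygon with vertices $v_1, \dots, v_n$,
\[ w_i = \frac{\tan(\alpha_{i-1}/2) + \tan(\alpha_i/2)}{\lVert v_i - x \rVert}, \qquad \lambda_i(x) = \frac{w_i}{\sum_{j=1}^{n} w_j}, \]
where $\alpha_i$ is the signed angle at $x$ in the triangle $[x, v_i, v_{i+1}]$. The coordinates satisfy $\sum_i \lambda_i = 1$ and $\sum_i \lambda_i v_i = x$; the report's result is that the spherical and 3D analogues of these weights remain well-defined on the whole sphere and all of $\mathbb{R}^{3}$.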
@techreport{BelyaevLangerSeidel2006, TITLE = {Mean value coordinates for arbitrary spherical polygons and polyhedra in \${\textbackslash}mathbb{\textbraceleft}R{\textbraceright}{\textasciicircum}{\
textbraceleft}3{\textbraceright}\$}, AUTHOR = {Belyaev, Alexander and Langer, Torsten and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2006-4-010}, NUMBER = {MPI-I-2006-4-010}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Since their introduction, mean
value coordinates enjoy ever increasing popularity in computer graphics and computational mathematics because they exhibit a variety of good properties. Most importantly, they are defined in the
whole plane which allows interpolation and extrapolation without restrictions. Recently, mean value coordinates were generalized to spheres and to $\mathbb{R}^{3}$. We show that these spherical and
3D mean value coordinates are well-defined on the whole sphere and the whole space $\mathbb{R}^{3}$, respectively.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Belyaev, Alexander %A Langer, Torsten %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T Mean value coordinates for arbitrary spherical polygons and polyhedra in $\mathbb{R}^{3}$ : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-671C-2 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-010 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 19 p. %X Since their
introduction, mean value coordinates enjoy ever increasing popularity in computer graphics and computational mathematics because they exhibit a variety of good properties. Most importantly, they are
defined in the whole plane which allows interpolation and extrapolation without restrictions. Recently, mean value coordinates were generalized to spheres and to $\mathbb{R}^{3}$. We show that these
spherical and 3D mean value coordinates are well-defined on the whole sphere and the whole space $\mathbb{R}^{3}$, respectively. %B Research Report / Max-Planck-Institut für Informatik
Skeleton-driven Laplacian Mesh Deformations
A. Belyaev, S. Yoshizawa and H.-P. Seidel
Technical Report, 2006
In this report, a new free-form shape deformation approach is proposed. We combine a skeleton-driven mesh deformation technique with discrete differential coordinates in order to create
natural-looking global shape deformations. Given a triangle mesh, we first extract a skeletal mesh, a two-sided Voronoi-based approximation of the medial axis. Next the skeletal mesh is modified by
free-form deformations. Then a desired global shape deformation is obtained by reconstructing the shape corresponding to the deformed skeletal mesh. The reconstruction is based on using discrete
differential coordinates. Our method preserves fine geometric details and original shape thickness because of using discrete differential coordinates and skeleton-driven deformations. We also develop
a new mesh evolution technique which allows us to eliminate possible global and local self-intersections of the deformed mesh while preserving fine geometric details. Finally, we present a
multiresolution version of our approach in order to simplify and accelerate the deformation process.
@techreport{BelyaevSeidelShin2006, TITLE = {Skeleton-driven {Laplacian} Mesh Deformations}, AUTHOR = {Belyaev, Alexander and Yoshizawa, Shin and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-005}, NUMBER = {MPI-I-2006-4-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006},
DATE = {2006}, ABSTRACT = {In this report, a new free-form shape deformation approach is proposed. We combine a skeleton-driven mesh deformation technique with discrete differential coordinates in
order to create natural-looking global shape deformations. Given a triangle mesh, we first extract a skeletal mesh, a two-sided Voronoi-based approximation of the medial axis. Next the skeletal mesh
is modified by free-form deformations. Then a desired global shape deformation is obtained by reconstructing the shape corresponding to the deformed skeletal mesh. The reconstruction is based on
using discrete differential coordinates. Our method preserves fine geometric details and original shape thickness because of using discrete differential coordinates and skeleton-driven deformations.
We also develop a new mesh evolution technique which allows us to eliminate possible global and local self-intersections of the deformed mesh while preserving fine geometric details. Finally, we
present a multiresolution version of our approach in order to simplify and accelerate the deformation process.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Belyaev, Alexander %A Yoshizawa, Shin %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T Skeleton-driven Laplacian Mesh Deformations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-67FF-6 %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-005 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 37 p. %X In this report, a new free-form shape deformation
approach is proposed. We combine a skeleton-driven mesh deformation technique with discrete differential coordinates in order to create natural-looking global shape deformations. Given a triangle
mesh, we first extract a skeletal mesh, a two-sided Voronoi-based approximation of the medial axis. Next the skeletal mesh is modified by free-form deformations. Then a desired global shape
deformation is obtained by reconstructing the shape corresponding to the deformed skeletal mesh. The reconstruction is based on using discrete differential coordinates. Our method preserves fine
geometric details and original shape thickness because of using discrete differential coordinates and skeleton-driven deformations. We also develop a new mesh evolution technique which allows us to
eliminate possible global and local self-intersections of the deformed mesh while preserving fine geometric details. Finally, we present a multiresolution version of our approach in order to simplify
and accelerate the deformation process. %B Research Report / Max-Planck-Institut für Informatik
Overlap-aware global df estimation in distributed information retrieval systems
M. Bender, S. Michel, G. Weikum and P. Triantafilou
Technical Report, 2006
Peer-to-Peer (P2P) search engines and other forms of distributed information retrieval (IR) are gaining momentum. Unlike in centralized IR, it is difficult and expensive to compute statistical
measures about the entire document collection as it is widely distributed across many computers in a highly dynamic network. On the other hand, such network-wide statistics, most notably, global
document frequencies of the individual terms, would be highly beneficial for ranking global search results that are compiled from different peers. This paper develops an efficient and scalable method
for estimating global document frequencies in a large-scale, highly dynamic P2P network with autonomous peers. The main difficulty that is addressed in this paper is that the local collections of
different peers may arbitrarily overlap, as many peers may choose to gather popular documents that fall into their specific interest profile. Our method is based on hash sketches as an underlying
technique for compact data synopses, and exploits specific properties of hash sketches for duplicate elimination in the counting process. We report on experiments with real Web data that demonstrate
the accuracy of our estimation method and also the benefit for better search result ranking.
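The property exploited here is that hash sketches estimate the number of distinct items, so merging the sketches of several peers (a bitwise OR of their bitmaps) never counts an overlapping document twice. A minimal Flajolet-Martin-style sketch illustrating that behaviour (bucket count, hash function and constants are our choices, not the report's):

import hashlib

def _trailing_zeros(x):
    # position of the least-significant 1-bit; capped value for x == 0
    return (x & -x).bit_length() - 1 if x else 63

class HashSketch:
    PHI = 0.77351                        # Flajolet-Martin correction factor

    def __init__(self, num_buckets=64):
        self.bitmaps = [0] * num_buckets

    def add(self, doc_id):
        h = int.from_bytes(hashlib.sha1(str(doc_id).encode()).digest()[:8], "big")
        bucket = h % len(self.bitmaps)
        self.bitmaps[bucket] |= 1 << _trailing_zeros(h // len(self.bitmaps))

    def union(self, other):
        # duplicate-insensitive merge: overlapping documents set identical bits
        for i, bitmap in enumerate(other.bitmaps):
            self.bitmaps[i] |= bitmap

    def estimate(self):
        m = len(self.bitmaps)
        avg_r = sum(self._lowest_zero_bit(b) for b in self.bitmaps) / m
        return m * (2 ** avg_r) / self.PHI

    @staticmethod
    def _lowest_zero_bit(bitmap):
        r = 0
        while (bitmap >> r) & 1:
            r += 1
        return r

Each peer keeps one such sketch per term over the documents it indexes; unioning the per-peer sketches and calling estimate() yields an overlap-aware approximation of the global document frequency.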
@techreport{BenderMichelWeikumTriantafilou2006, TITLE = {Overlap-aware global df estimation in distributed information retrieval systems}, AUTHOR = {Bender, Matthias and Michel, Sebastian and Weikum,
Gerhard and Triantafilou, Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-001}, NUMBER = {MPI-I-2006-5-001}, INSTITUTION = {Max-Planck-Institut f
{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Peer-to-Peer (P2P) search engines and other forms of distributed information retrieval (IR) are gaining
momentum. Unlike in centralized IR, it is difficult and expensive to compute statistical measures about the entire document collection as it is widely distributed across many computers in a highly
dynamic network. On the other hand, such network-wide statistics, most notably, global document frequencies of the individual terms, would be highly beneficial for ranking global search results that
are compiled from different peers. This paper develops an efficient and scalable method for estimating global document frequencies in a large-scale, highly dynamic P2P network with autonomous peers.
The main difficulty that is addressed in this paper is that the local collections of different peers may arbitrarily overlap, as many peers may choose to gather popular documents that fall into their
specific interest profile. Our method is based on hash sketches as an underlying technique for compact data synopses, and exploits specific properties of hash sketches for duplicate elimination in
the counting process. We report on experiments with real Web data that demonstrate the accuracy of our estimation method and also the benefit for better search result ranking.}, TYPE = {Research
Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Bender, Matthias %A Michel, Sebastian %A Weikum, Gerhard %A Triantafilou, Peter %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information
Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Overlap-aware global df estimation in distributed
information retrieval systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6719-8 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 25 p. %X Peer-to-Peer (P2P) search engines and other forms of distributed information retrieval (IR) are gaining momentum. Unlike in centralized IR, it
is difficult and expensive to compute statistical measures about the entire document collection as it is widely distributed across many computers in a highly dynamic network. On the other hand, such
network-wide statistics, most notably, global document frequencies of the individual terms, would be highly beneficial for ranking global search results that are compiled from different peers. This
paper develops an efficient and scalable method for estimating global document frequencies in a large-scale, highly dynamic P2P network with autonomous peers. The main difficulty that is addressed in
this paper is that the local collections of different peers may arbitrarily overlap, as many peers may choose to gather popular documents that fall into their specific interest profile. Our method is
based on hash sketches as an underlying technique for compact data synopses, and exploits specific properties of hash sketches for duplicate elimination in the counting process. We report on
experiments with real Web data that demonstrate the accuracy of our estimation method and also the benefit for better search result ranking. %B Research Report / Max-Planck-Institut für Informatik
Definition of File Format for Benchmark Instances for Arrangements of Quadrics
E. Berberich, F. Ebert and L. Kettner
Technical Report, 2006
@techreport{acs:bek-dffbiaq-06, TITLE = {Definition of File Format for Benchmark Instances for Arrangements of Quadrics}, AUTHOR = {Berberich, Eric and Ebert, Franziska and Kettner, Lutz}, LANGUAGE =
{eng}, NUMBER = {ACS-TR-123109-01}, INSTITUTION = {University of Groningen}, ADDRESS = {Groningen, The Netherlands}, YEAR = {2006}, DATE = {2006}, }
%0 Report %A Berberich, Eric %A Ebert, Franziska %A Kettner, Lutz %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Definition of File Format for Benchmark Instances for Arrangements of Quadrics : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0019-E509-E %Y University of Groningen %C Groningen, The Netherlands %D 2006
Web-site with Benchmark Instances for Planar Curve Arrangements
E. Berberich, F. Ebert, E. Fogel and L. Kettner
Technical Report, 2006
@techreport{acs:bek-wbipca-06, TITLE = {Web-site with Benchmark Instances for Planar Curve Arrangements}, AUTHOR = {Berberich, Eric and Ebert, Franziska and Fogel, Efi and Kettner, Lutz}, LANGUAGE =
{eng}, NUMBER = {ACS-TR-123108-01}, INSTITUTION = {University of Groningen}, ADDRESS = {Groningen, The Netherlands}, YEAR = {2006}, DATE = {2006}, }
%0 Report %A Berberich, Eric %A Ebert, Franziska %A Fogel, Efi %A Kettner, Lutz %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics,
Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Web-site with Benchmark Instances for Planar Curve Arrangements : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-0019-E515-1 %Y University of Groningen %C Groningen, The Netherlands %D 2006
A framework for natural animation of digitized models
E. de Aguiar, R. Zayer, C. Theobalt, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
We present a novel versatile, fast and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step that is required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between humans of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.
@techreport{deAguiarZayerTheobaltMagnorSeidel2006, TITLE = {A framework for natural animation of digitized models}, AUTHOR = {de Aguiar, Edilson and Zayer, Rhaleb and Theobalt, Christian and Magnor,
Marcus A. and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-003}, NUMBER = {MPI-I-2006-4-003}, INSTITUTION = {Max-Planck-Institut f
{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We present a novel versatile, fast and simple framework to generate highquality animations of scanned human
characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step that is required
to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly
generates realistic body deformations, and can easily transfer motions between humans of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just
the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike
character animations from both marker-based and marker-less optical motion capture data.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A de Aguiar, Edilson %A Zayer, Rhaleb %A Theobalt, Christian %A Magnor, Marcus A. %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI
for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Graphics - Optics - Vision, MPI for Informatics, Max Planck Society Computer Graphics, MPI for
Informatics, Max Planck Society %T A framework for natural animation of digitized models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-680B-F %U http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2006-4-003 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 27 p. %X We present a novel versatile, fast and simple framework to generate high-quality
animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only
manual step that is required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The
proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between humans of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our
method can generate lifelike character animations from both marker-based and marker-less optical motion capture data. %B Research Report / Max-Planck-Institut für Informatik
Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding
B. Doerr and M. Gnewuch
Technical Report, 2006
We provide a deterministic algorithm that constructs small point sets exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting
hard constraints. It is structurally much simpler than the previous algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via
$\delta$-covers. J. Complexity, 21: 691-709, 2005]. Besides leading to better theoretical run time bounds, our approach can be implemented with reasonable effort.
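For reference, the star discrepancy that these point sets are designed to keep small is defined (in the standard way, not specifically in this report) for a point set $P \subset [0,1]^d$ as
\[ D^{*}(P) = \sup_{x \in [0,1]^d} \left| \frac{\bigl|P \cap [0, x)\bigr|}{|P|} - \prod_{i=1}^{d} x_i \right|, \]
where $[0, x) = [0, x_1) \times \dots \times [0, x_d)$, i.e. the worst-case deviation of the empirical measure of anchored boxes from their volume.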
@techreport{SemKiel, TITLE = {Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding}, AUTHOR = {Doerr, Benjamin and Gnewuch, Michael},
LANGUAGE = {eng}, NUMBER = {06-14}, INSTITUTION = {University Kiel}, ADDRESS = {Kiel}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We provide a deterministic algorithm that constructs small point sets
exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting hard constraints. It is structurally much simpler than the previous
algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via $\delta$-covers. J. Complexity, 21: 691-709, 2005]. Besides leading to
better theoretical run time bounds, our approach can be implemented with reasonable effort.}, }
%0 Report %A Doerr, Benjamin %A Gnewuch, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Construction of Low-discrepancy Point Sets of Small
Size by Bracketing Covers and Dependent Randomized Rounding : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-E49F-6 %Y University Kiel %C Kiel %D 2006 %X We provide a deterministic algorithm
that constructs small point sets exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting hard constraints. It is structurally
much simpler than the previous algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via $\delta$-covers. J. Complexity, 21: 691-709,
2005]. Besides leading to better theoretical run time bounds, our approach can be implemented with reasonable effort.
Design and evaluation of backward compatible high dynamic range video compression
A. Efremov, R. Mantiuk, K. Myszkowski and H.-P. Seidel
Technical Report, 2006
In this report we describe the details of the backward compatible high dynamic range (HDR) video compression algorithm. The algorithm is designed to facilitate a smooth transition from standard low
dynamic range (LDR) video to high fidelity high dynamic range content. The HDR and the corresponding LDR video frames are decorrelated and then compressed into a single MPEG stream, which can be
played on both existing DVD players and HDR-enabled devices.
@techreport{EfremovMantiukMyszkowskiSeidel, TITLE = {Design and evaluation of backward compatible high dynamic range video compression}, AUTHOR = {Efremov, Alexander and Mantiuk, Rafal and
Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-001}, NUMBER = {MPI-I-2006-4-001}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {In this report we describe the details of the backward compatible high dynamic range
(HDR) video compression algorithm. The algorithm is designed to facilitate a smooth transition from standard low dynamic range (LDR) video to high fidelity high dynamic range content. The HDR and the
corresponding LDR video frames are decorrelated and then compressed into a single MPEG stream, which can be played on both existing DVD players and HDR-enabled devices.}, TYPE = {Research Report /
Max-Planck-Institut für Informatik}, }
%0 Report %A Efremov, Alexander %A Mantiuk, Rafal %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max
Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Design and evaluation of backward compatible high dynamic range
video compression : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6811-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-001 %Y Max-Planck-Institut für
Informatik %C Saarbrücken %D 2006 %P 50 p. %X In this report we describe the details of the backward compatible high dynamic range (HDR) video compression algorithm. The algorithm is designed to
facilitate a smooth transition from standard low dynamic range (LDR) video to high fidelity high dynamic range content. The HDR and the corresponding LDR video frames are decorrelated and then
compressed into a single MPEG stream, which can be played on both existing DVD players and HDR-enabled devices. %B Research Report / Max-Planck-Institut für Informatik
On the Complexity of Monotone Boolean Duality Testing
K. Elbassioni
Technical Report, 2006
We show that the duality of a pair of monotone Boolean functions in disjunctive normal forms can be tested in polylogarithmic time using a quasi-polynomial number of processors. Our decomposition
technique yields stronger bounds on the complexity of the problem than those currently known and also allows for generating all minimal transversals of a given hypergraph using only polynomial space.
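In standard terms, two monotone Boolean functions $f$ and $g$ are dual when
\[ g(x_1, \dots, x_n) = \overline{f(\overline{x_1}, \dots, \overline{x_n})} \quad \text{for all } x \in \{0,1\}^n, \]
and when both are given as irredundant DNFs this holds exactly if the terms of $g$ are the minimal transversals of the hypergraph formed by the terms of $f$, which is the connection to minimal-transversal generation mentioned in the abstract.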
@techreport{Elbassioni2006, TITLE = {On the Complexity of Monotone {Boolean} Duality Testing}, AUTHOR = {Elbassioni, Khaled}, LANGUAGE = {eng}, NUMBER = {DIMACS TR: 2006-01}, INSTITUTION = {DIMACS},
ADDRESS = {Piscataway, NJ}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We show that the duality of a pair of monotone Boolean functions in disjunctive normal forms can be tested in polylogarithmic
time using a quasi-polynomial number of processors. Our decomposition technique yields stronger bounds on the complexity of the problem than those currently known and also allows for generating all
minimal transversals of a given hypergraph using only polynomial space.}, }
%0 Report %A Elbassioni, Khaled %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Complexity of Monotone Boolean Duality Testing : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0019-E4CA-2 %Y DIMACS %C Piscataway, NJ %D 2006 %X We show that the duality of a pair of monotone Boolean functions in disjunctive normal forms can be tested in polylogarithmic time
using a quasi-polynomial number of processors. Our decomposition technique yields stronger bounds on the complexity of the problem than those currently known and also allows for generating all
minimal transversals of a given hypergraph using only polynomial space.
Controlled Perturbation for Delaunay Triangulations
S. Funke, C. Klein, K. Mehlhorn and S. Schmitt
Technical Report, 2006
@techreport{acstr123109-01, TITLE = {Controlled Perturbation for Delaunay Triangulations}, AUTHOR = {Funke, Stefan and Klein, Christian and Mehlhorn, Kurt and Schmitt, Susanne}, LANGUAGE = {eng},
NUMBER = {ACS-TR-121103-03}, INSTITUTION = {Algorithms for Complex Shapes with certified topology and numerics}, ADDRESS = {Instituut voor Wiskunde en Informatica, Groningen, NETHERLANDS}, YEAR =
{2006}, DATE = {2006}, }
%0 Report %A Funke, Stefan %A Klein, Christian %A Mehlhorn, Kurt %A Schmitt, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for
Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Controlled Perturbation for
Delaunay Triangulations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F72F-3 %Y Algorithms for Complex Shapes with certified topology and numerics %C Instituut voor Wiskunde en
Informatica, Groningen, NETHERLANDS %D 2006
Power assignment problems in wireless communication
S. Funke, S. Laue, R. Naujoks and L. Zvi
Technical Report, 2006
A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph
satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast, point-to-point routing, creation
of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem studied before in (Vittorio Bil{\`o}
et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims to select and assign powers to $k$ of
the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)$-approximate solution for this problem
from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bil{\`o} et al. (see Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) to $O\left( n+ {\left(\frac{k^
{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$ that is, we obtain a running time that is \emph{linear} in the network size. Further results include a
constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to
perform $k$-hop multicasts.
@techreport{, TITLE = {Power assignment problems in wireless communication}, AUTHOR = {Funke, Stefan and Laue, S{\"o}ren and Naujoks, Rouven and Zvi, Lotker}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-004}, NUMBER = {MPI-I-2006-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006},
DATE = {2006}, ABSTRACT = {A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the
resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast,
point-to-point routing, creation of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem
studied before in (Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims
to select and assign powers to $k$ of the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)
$-approximate solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bil{\`o} et al. (see Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes,
ESA 2005) to $O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$ that is, we obtain a running time that is \emph{linear} in the
network size. Further results include a constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation
protocol, as well as efficient schemes to perform $k$-hop multicasts.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Funke, Stefan %A Laue, Sören %A Naujoks, Rouven %A Zvi, Lotker %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics,
Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Power assignment problems in wireless communication : %G eng %U http://hdl.handle.net/
11858/00-001M-0000-0014-6820-E %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-004 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 25 p. %X A
fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph
satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast, point-to-point routing, creation
of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem studied before in (Vittorio Bil{\`o}
et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims to select and assign powers to $k$ of
the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)$-approximate solution for this problem
from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bil{\`o} et al. (see Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) to $O\left( n+ {\left(\frac{k^
{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$ that is, we obtain a running time that is \emph{linear} in the network size. Further results include a
constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to
perform $k$-hop multicasts. %B Research Report / Max-Planck-Institut für Informatik
On fast construction of spatial hierarchies for ray tracing
V. Havran, R. Herzog and H.-P. Seidel
Technical Report, 2006
In this paper we address the problem of fast construction of spatial hierarchies for ray tracing with applications in animated environments including non-rigid animations. We discuss properties of
currently used techniques with $O(N \log N)$ construction time for kd-trees and bounding volume hierarchies. Further, we propose a hybrid data structure blending between a spatial kd-tree and
bounding volume primitives. We keep our novel hierarchical data structures algorithmically efficient and comparable with kd-trees by the use of a cost model based on surface area heuristics. Although
the time complexity $O(N \log N)$ is a lower bound required for construction of any spatial hierarchy that corresponds to sorting based on comparisons, using an approximate method based on discretization, we propose new hierarchical data structures with expected $O(N \log\log N)$ time complexity. We also discuss constants behind the construction algorithms of spatial hierarchies that
are important in practice. We document the performance of our algorithms by results obtained from the implementation tested on nine different scenes.
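The surface area heuristic mentioned above scores a candidate split by the expected cost of the two resulting children; a small sketch of that cost (illustrative constants and function signature, not the report's implementation):

def sah_cost(parent_area, left_area, right_area, n_left, n_right,
             cost_traverse=1.0, cost_intersect=1.5):
    # A ray is assumed to visit a child with probability proportional to its surface area.
    p_left = left_area / parent_area
    p_right = right_area / parent_area
    return cost_traverse + cost_intersect * (p_left * n_left + p_right * n_right)

A builder evaluates such costs over candidate split planes and keeps the cheapest one; the $O(N \log\log N)$ construction mentioned in the abstract replaces the exact candidate sweep by a discretized, approximate evaluation of these costs.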
@techreport{HavranHerzogSeidel2006, TITLE = {On fast construction of spatial hierarchies for ray tracing}, AUTHOR = {Havran, Vlastimil and Herzog, Robert and Seidel, Hans-Peter}, LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-004}, NUMBER = {MPI-I-2006-4-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR
= {2006}, DATE = {2006}, ABSTRACT = {In this paper we address the problem of fast construction of spatial hierarchies for ray tracing with applications in animated environments including non-rigid
animations. We discuss properties of currently used techniques with $O(N \log N)$ construction time for kd-trees and bounding volume hierarchies. Further, we propose a hybrid data structure blending
between a spatial kd-tree and bounding volume primitives. We keep our novel hierarchical data structures algorithmically efficient and comparable with kd-trees by the use of a cost model based on
surface area heuristics. Although the time complexity $O(N \log N)$ is a lower bound required for construction of any spatial hierarchy that corresponds to sorting based on comparisons, using
an approximate method based on discretization, we propose new hierarchical data structures with expected $O(N \log\log N)$ time complexity. We also discuss constants behind the construction algorithms
of spatial hierarchies that are important in practice. We document the performance of our algorithms by results obtained from the implementation tested on nine different scenes.}, TYPE = {Research
Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Havran, Vlastimil %A Herzog, Robert %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T On fast construction of spatial hierarchies for ray tracing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6807-8 %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-004 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 40 p. %X In this paper we address the problem of fast
construction of spatial hierarchies for ray tracing with applications in animated environments including non-rigid animations. We discuss properties of currently used techniques with $O(N \log N)$
construction time for kd-trees and bounding volume hierarchies. Further, we propose a hybrid data structure blending between a spatial kd-tree and bounding volume primitives. We keep our novel
hierarchical data structures algorithmically efficient and comparable with kd-trees by the use of a cost model based on surface area heuristics. Although the time complexity $O(N \log N)$ is a lower
bound required for construction of any spatial hierarchy that corresponds to sorting based on comparisons, using an approximate method based on discretization, we propose new hierarchical data
structures with expected $O(N \log\log N)$ time complexity. We also discuss constants behind the construction algorithms of spatial hierarchies that are important in practice. We document the
performance of our algorithms by results obtained from the implementation tested on nine different scenes. %B Research Report / Max-Planck-Institut für Informatik
Yago - a core of semantic knowledge
G. Kasneci, F. Suchanek and G. Weikum
Technical Report, 2006
We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts. This
includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from the unification of Wikipedia and
WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding
knowledge about individuals like persons, organizations, products, etc. with their semantic relationships -- and in quantity by increasing the number of facts by more than an order of magnitude. Our
empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO
can be further extended by state-of-the-art information extraction techniques.
@techreport{, TITLE = {Yago -- a core of semantic knowledge}, AUTHOR = {Kasneci, Gjergji and Suchanek, Fabian and Weikum, Gerhard}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2006-5-006}, NUMBER = {MPI-I-2006-5-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We
present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts. This
includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as relation{hasWonPrize}). The facts have been automatically extracted from the unification of Wikipedia and
WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding
knowledge about individuals like persons, organizations, products, etc. with their semantic relationships -- and in quantity by increasing the number of facts by more than an order of magnitude. Our
empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO
can be further extended by state-of-the-art information extraction techniques.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Kasneci, Gjergji %A Suchanek, Fabian %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Yago - a core of semantic knowledge : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-670A-A %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-006 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 39 p. %X We present YAGO,
a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts. This includes the
Is-A hierarchy as well as non-taxonomic relations between entities (such as relation{hasWonPrize}). The facts have been automatically extracted from the unification of Wikipedia and WordNet, using a
carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about
individuals like persons, organizations, products, etc. with their semantic relationships -- and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical
evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be
further extended by state-of-the-art information extraction techniques. %B Research Report / Max-Planck-Institut für Informatik
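The entity/relation structure described above can be pictured with a toy fact store; all entities and facts below are hypothetical examples, not actual YAGO content, and the transitive Is-A lookup is only a sketch of the kind of query such an ontology supports:

```python
# Toy fact store in the spirit of an entity-relation ontology; the entities
# and facts below are invented examples, not real YAGO data.
from collections import defaultdict

facts = defaultdict(set)          # (subject, relation) -> set of objects
def add_fact(s, r, o):
    facts[(s, r)].add(o)

add_fact("Albert_Einstein", "isA", "physicist")
add_fact("physicist", "isA", "scientist")
add_fact("Albert_Einstein", "hasWonPrize", "Nobel_Prize_in_Physics")

def is_a(entity, cls):
    """Transitive Is-A query over the taxonomy."""
    frontier, seen = [entity], set()
    while frontier:
        e = frontier.pop()
        if e == cls:
            return True
        if e in seen:
            continue
        seen.add(e)
        frontier.extend(facts[(e, "isA")])
    return False

print(is_a("Albert_Einstein", "scientist"))        # True
print(facts[("Albert_Einstein", "hasWonPrize")])   # {'Nobel_Prize_in_Physics'}
```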
Division-free computation of subresultants using bezout matrices
M. Kerber
Technical Report, 2006
We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al.\ (see
Abdeljaoued et al.: Minors of Bezout Matrices\ldots, Int.\ J.\ of Comp.\ Math.\ 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of
Berkowitz. Experiments show that our algorithm is superior compared to pseudo-division approaches for moderate degrees if the domain contains indeterminates.
@techreport{, TITLE = {Division-free computation of subresultants using bezout matrices}, AUTHOR = {Kerber, Michael}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2006-1-006}, NUMBER = {MPI-I-2006-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We present an
algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al.\ (see Abdeljaoed et al.:
Minors of Bezout Matrices\ldots, Int.\ J.\ of Comp.\ Math.\ 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments
show that our algorithm is superior compared to pseudo-division approaches for moderate degrees if the domain contains indeterminates.}, TYPE = {Research Report / Max-Planck-Institut für
Informatik}, }
%0 Report %A Kerber, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Division-free computation of subresultants using bezout matrices : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-0014-681D-7 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-006 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 20
p. %X We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al.\ (see
Abdeljaoed et al.: Minors of Bezout Matrices\ldots, Int.\ J.\ of Comp.\ Math.\ 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of
Berkowitz. Experiments show that our algorithm is superior compared to pseudo-division approaches for moderate degrees if the domain contains indeterminates. %B Research Report / Max-Planck-Institut
für Informatik
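A minimal SymPy sketch of the two ingredients named in the abstract above, a Bezout matrix and a division-free (Berkowitz) determinant; it only reproduces the resultant of two example polynomials with an indeterminate coefficient, not the manipulated minors that yield the full subresultant sequence in the report:

```python
import sympy as sp

def bezout_matrix(p, q, x):
    """Bezout matrix of p and q in x, built from the Cayley quotient
    (p(x)q(y) - p(y)q(x)) / (x - y), which is a polynomial in x and y."""
    y = sp.Dummy("y")
    n = int(max(sp.degree(p, x), sp.degree(q, x)))
    cayley = sp.cancel((p * q.subs(x, y) - p.subs(x, y) * q) / (x - y))
    poly = sp.Poly(sp.expand(cayley), x, y)
    return sp.Matrix(n, n, lambda i, j: poly.coeff_monomial(x**i * y**j))

x, t = sp.symbols("x t")            # t plays the role of an indeterminate coefficient
p = x**3 + t * x + 1
q = 3 * x**2 + t
B = bezout_matrix(p, q, x)
# Berkowitz's method evaluates the determinant without divisions in the ground domain.
print(B.det(method="berkowitz"))
# For comparison: the resultant (they agree up to sign and a leading-coefficient factor).
print(sp.expand(sp.resultant(p, q, x)))
```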
Exploiting Community Behavior for Enhanced Link Analysis and Web Search
J. Luxenburger and G. Weikum
Technical Report, 2006
Methods for Web link analysis and authority ranking such as PageRank are based on the assumption that a user endorses a Web page when creating a hyperlink to this page. There is a wealth of
additional user-behavior information that could be considered for improving authority analysis, for example, the history of queries that a user community posed to a search engine over an extended
time period, or observations about which query-result pages were clicked on and which ones were not clicked on after a user saw the summary snippets of the top-10 results. This paper enhances link
analysis methods by incorporating additional user assessments based on query logs and click streams, including negative feedback when a query-result page does not satisfy the user demand or is even
perceived as spam. Our methods use various novel forms of advanced Markov models whose states correspond to users and queries in addition to Web pages and whose links also reflect the relationships
derived from query-result clicks, query refinements, and explicit ratings. Preliminary experiments are presented as a proof of concept.
@techreport{TechReportDelis0447_2006, TITLE = {Exploiting Community Behavior for Enhanced Link Analysis and Web Search}, AUTHOR = {Luxenburger, Julia and Weikum, Gerhard}, LANGUAGE = {eng}, NUMBER =
{DELIS-TR-0447}, INSTITUTION = {University of Paderborn, Heinz Nixdorf Institute}, ADDRESS = {Paderborn, Germany}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Methods for Web link analysis and
authority ranking such as PageRank are based on the assumption that a user endorses a Web page when creating a hyperlink to this page. There is a wealth of additional user-behavior information that
could be considered for improving authority analysis, for example, the history of queries that a user community posed to a search engine over an extended time period, or observations about which
query-result pages were clicked on and which ones were not clicked on after a user saw the summary snippets of the top-10 results. This paper enhances link analysis methods by incorporating
additional user assessments based on query logs and click streams, including negative feedback when a query-result page does not satisfy the user demand or is even perceived as spam. Our methods use
various novel forms of advanced Markov models whose states correspond to users and queries in addition to Web pages and whose links also reflect the relationships derived from query-result clicks,
query refinements, and explicit ratings. Preliminary experiments are presented as a proof of concept.}, }
%0 Report %A Luxenburger, Julia %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck
Society %T Exploiting Community Behavior for Enhanced Link Analysis and Web Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-BC47-9 %Y University of Paderborn, Heinz Nixdorf Institute
%C Paderborn, Germany %D 2006 %X Methods for Web link analysis and authority ranking such as PageRank are based on the assumption that a user endorses a Web page when creating a hyperlink to this
page. There is a wealth of additional user-behavior information that could be considered for improving authority analysis, for example, the history of queries that a user community posed to a search
engine over an extended time period, or observations about which query-result pages were clicked on and which ones were not clicked on after a user saw the summary snippets of the top-10 results.
This paper enhances link analysis methods by incorporating additional user assessments based on query logs and click streams, including negative feedback when a query-result page does not satisfy the
user demand or is even perceived as spam. Our methods use various novel forms of advanced Markov models whose states correspond to users and queries in addition to Web pages and whose links also
reflect the relationships derived from query-result clicks, query refinements, and explicit ratings. Preliminary experiments are presented as a proof of concept.
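A much simplified version of the idea above is a random walk over a heterogeneous graph that mixes pages and queries, with click and refinement edges added to the hyperlinks; the node names and edges below are hypothetical, and negative feedback and the report's advanced Markov models are not modeled:

```python
import numpy as np

# Nodes mix Web pages and queries; hyperlinks and query->clicked-page edges
# both contribute to the random walk (all names below are made up).
nodes = ["pageA", "pageB", "pageC", "query:laptop", "query:laptop deals"]
edges = [("pageA", "pageB"), ("pageB", "pageC"),                 # hyperlinks
         ("query:laptop", "pageA"), ("query:laptop", "pageC"),   # result clicks
         ("query:laptop deals", "query:laptop")]                 # query refinement

idx = {v: i for i, v in enumerate(nodes)}
n = len(nodes)
A = np.zeros((n, n))
for u, v in edges:
    A[idx[u], idx[v]] = 1.0

# Row-stochastic transition matrix; dangling nodes jump uniformly.
row_sums = A.sum(axis=1, keepdims=True)
P = np.where(row_sums > 0, A / np.maximum(row_sums, 1e-12), 1.0 / n)

damping = 0.85
r = np.full(n, 1.0 / n)
for _ in range(100):                       # power iteration
    r = damping * r @ P + (1 - damping) / n
print(dict(zip(nodes, np.round(r, 3))))
```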
Feature-preserving non-local denoising of static and time-varying range data
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2006
We present a novel algorithm for accurately denoising static and time-varying range data. Our approach is inspired by similarity-based non-local image filtering. We show that our proposed method is
easy to implement and outperforms recent state-of-the-art filtering approaches. Furthermore, it preserves fine shape features and produces an accurate smoothing result in the spatial and along the
time domain.
@techreport{SchallBelyaevSeidel2006, TITLE = {Feature-preserving non-local denoising of static and time-varying range data}, AUTHOR = {Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter},
LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-007}, NUMBER = {MPI-I-2006-4-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS =
{Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We present a novel algorithm for accurately denoising static and time-varying range data. Our approach is inspired by similarity-based
non-local image filtering. We show that our proposed method is easy to implement and outperforms recent state-of-the-art filtering approaches. Furthermore, it preserves fine shape features and
produces an accurate smoothing result in the spatial and along the time domain.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Schall, Oliver %A Belyaev, Alexander %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T Feature-preserving non-local denoising of static and time-varying range data : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-673D-7 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-007 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 22 p. %X We present a
novel algorithm for accurately denoising static and time-varying range data. Our approach is inspired by similarity-based non-local image filtering. We show that our proposed method is easy to
implement and outperforms recent state-of-the-art filtering approaches. Furthermore, it preserves fine shape features and produces an accurate smoothing result in the spatial and along the time
domain. %B Research Report / Max-Planck-Institut für Informatik
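The similarity-based non-local filtering that inspires the report can be sketched for a single 2D range image as follows; patch size, search window, and the filtering parameter h are illustrative assumptions, and the report's spatio-temporal extension is not reproduced:

```python
import numpy as np

def nonlocal_means_depth(depth, patch=3, search=7, h=0.05):
    """Denoise a 2D range image: each pixel becomes a weighted average of
    pixels whose surrounding patches look similar (weights exp(-d^2/h^2))."""
    pr, sr = patch // 2, search // 2
    pad = np.pad(depth, pr + sr, mode="reflect")
    out = np.zeros_like(depth)
    H, W = depth.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr
            ref = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = pad[ci + di - pr:ci + di + pr + 1,
                               cj + dj - pr:cj + dj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(pad[ci + di, cj + dj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

# Noisy synthetic range scan of a step edge.
rng = np.random.default_rng(0)
clean = np.tile(np.r_[np.zeros(16), np.ones(16)], (32, 1))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
print(np.abs(nonlocal_means_depth(noisy) - clean).mean())
```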
Combining linguistic and statistical analysis to extract relations from web documents
F. Suchanek, G. Ifrim and G. Weikum
Technical Report, 2006
Search engines, question answering systems and classification systems alike can greatly profit from formalized world knowledge. Unfortunately, manually compiled collections of world knowledge (such
as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer from low coverage, high assembling costs and fast aging. In contrast, the World Wide Web provides an endless source of knowledge,
assembled by millions of people, updated constantly and available for free. In this paper, we propose a novel method for learning arbitrary binary relations from natural language Web documents,
without human interaction. Our system, LEILA, combines linguistic analysis and machine learning techniques to find robust patterns in the text and to generalize them. For initialization, we only
require a set of examples of the target relation and a set of counterexamples (e.g. from WordNet). The architecture consists of 3 stages: Finding patterns in the corpus based on the given examples,
assessing the patterns based on probabilistic confidence, and applying the generalized patterns to propose pairs for the target relation. We prove the benefits and practical viability of our approach
by extensive experiments, showing that LEILA achieves consistent improvements over existing comparable techniques (e.g. Snowball, TextToOnto).
@techreport{Suchanek2006, TITLE = {Combining linguistic and statistical analysis to extract relations from web documents}, AUTHOR = {Suchanek, Fabian and Ifrim, Georgiana and Weikum, Gerhard},
LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-004}, NUMBER = {MPI-I-2006-5-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS =
{Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Search engines, question answering systems and classification systems alike can greatly profit from formalized world knowledge.
Unfortunately, manually compiled collections of world knowledge (such as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer from low coverage, high assembling costs and fast aging. In
contrast, the World Wide Web provides an endless source of knowledge, assembled by millions of people, updated constantly and available for free. In this paper, we propose a novel method for learning
arbitrary binary relations from natural language Web documents, without human interaction. Our system, LEILA, combines linguistic analysis and machine learning techniques to find robust patterns in
the text and to generalize them. For initialization, we only require a set of examples of the target relation and a set of counterexamples (e.g. from WordNet). The architecture consists of 3 stages:
Finding patterns in the corpus based on the given examples, assessing the patterns based on probabilistic confidence, and applying the generalized patterns to propose pairs for the target relation.
We prove the benefits and practical viability of our approach by extensive experiments, showing that LEILA achieves consistent improvements over existing comparable techniques (e.g. Snowball,
TextToOnto).}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Suchanek, Fabian %A Ifrim, Georgiana %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Combining linguistic and statistical analysis to extract relations from web documents :
%G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6710-9 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-004 %Y Max-Planck-Institut für Informatik %C Saarbrü
cken %D 2006 %P 37 p. %X Search engines, question answering systems and classification systems alike can greatly profit from formalized world knowledge. Unfortunately, manually compiled collections
of world knowledge (such as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer from low coverage, high assembling costs and fast aging. In contrast, the World Wide Web provides an
endless source of knowledge, assembled by millions of people, updated constantly and available for free. In this paper, we propose a novel method for learning arbitrary binary relations from natural
language Web documents, without human interaction. Our system, LEILA, combines linguistic analysis and machine learning techniques to find robust patterns in the text and to generalize them. For
initialization, we only require a set of examples of the target relation and a set of counterexamples (e.g. from WordNet). The architecture consists of 3 stages: Finding patterns in the corpus based
on the given examples, assessing the patterns based on probabilistic confidence, and applying the generalized patterns to propose pairs for the target relation. We prove the benefits and practical
viability of our approach by extensive experiments, showing that LEILA achieves consistent improvements over existing comparable techniques (e.g. Snowball, TextToOnto). %B Research Report /
Max-Planck-Institut für Informatik
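The three-stage architecture described above (find patterns around example pairs, score them against counterexamples, apply the survivors) can be caricatured with plain string patterns; the sentences and seed pairs below are made up, and the linguistic analysis and statistical learning that LEILA actually uses are replaced here by naive regular expressions:

```python
import re

# Stage 0: seed examples / counterexamples for a target relation (hypothetical).
examples = {("Paris", "France"), ("Tokyo", "Japan")}
counterexamples = {("Paris", "Japan")}

corpus = [
    "Paris is the capital of France.",
    "Tokyo is the capital of Japan.",
    "Paris is far from Japan.",
    "Berlin is the capital of Germany.",
]

def pattern_between(sentence, pair):
    a, b = pair
    m = re.search(re.escape(a) + r"(.+?)" + re.escape(b), sentence)
    return m.group(1).strip() if m else None

# Stage 1: collect textual patterns observed between known pairs.
found = {}
for s in corpus:
    for pair in examples | counterexamples:
        pat = pattern_between(s, pair)
        if pat:
            found.setdefault(pat, {"pos": 0, "neg": 0})
            found[pat]["pos" if pair in examples else "neg"] += 1

# Stage 2: keep patterns with high confidence (here: no counterexample hits).
good = {p for p, c in found.items() if c["pos"] > 0 and c["neg"] == 0}

# Stage 3: apply the surviving patterns to propose new pairs.
proposals = set()
for s in corpus:
    for p in good:
        m = re.search(r"(\w+) " + re.escape(p) + r" (\w+)", s)
        if m:
            proposals.add((m.group(1), m.group(2)))
print(good, proposals - examples)
```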
Enhanced dynamic reflectometry for relightable free-viewpoint video
C. Theobalt, N. Ahmed, H. P. A. Lensch, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
Free-Viewpoint Video of Human Actors allows photo-realistic rendering of real-world people under novel viewing conditions. Dynamic Reflectometry extends the concept of free-viewpoint video and
allows rendering in addition under novel lighting conditions. In this work, we present an enhanced method for capturing human shape and motion as well as dynamic surface reflectance properties from a
sparse set of input video streams. We augment our initial method for model-based relightable free-viewpoint video in several ways. Firstly, a single-skin mesh is introduced for the continuous
appearance of the model. Moreover an algorithm to detect and compensate lateral shifting of textiles in order to improve temporal texture registration is presented. Finally, a structured resampling
approach is introduced which enables reliable estimation of spatially varying surface reflectance despite a static recording setup. The new algorithm ingredients along with the Relightable 3D Video
framework enables us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people,
e.g. for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware.
@techreport{TheobaltAhmedLenschMagnorSeidel2006, TITLE = {Enhanced dynamic reflectometry for relightable free-viewpoint video}, AUTHOR = {Theobalt, Christian and Ahmed, Naveed and Lensch, Hendrik P.
A. and Magnor, Marcus A. and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-006}, NUMBER = {MPI-I-2006-4-006}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Free-Viewpoint Video of Human Actors allows photo- realistic rendering of real-world
people under novel viewing conditions. Dynamic Reflectometry extends the concept of free-view point video and allows rendering in addition under novel lighting conditions. In this work, we present an
enhanced method for capturing human shape and motion as well as dynamic surface reflectance properties from a sparse set of input video streams. We augment our initial method for model-based
relightable free-viewpoint video in several ways. Firstly, a single-skin mesh is introduced for the continuous appearance of the model. Moreover an algorithm to detect and compensate lateral shifting
of textiles in order to improve temporal texture registration is presented. Finally, a structured resampling approach is introduced which enables reliable estimation of spatially varying surface
reflectance despite a static recording setup. The new algorithm ingredients along with the Relightable 3D Video framework enables us to realistically reproduce the appearance of animated virtual
actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution can be used to create 3D renditions of
real-world people under arbitrary novel lighting conditions on standard graphics hardware.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Theobalt, Christian %A Ahmed, Naveed %A Lensch, Hendrik P. A. %A Magnor, Marcus A. %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics,
MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Graphics - Optics - Vision, MPI for Informatics, Max Planck Society Computer Graphics, MPI for
Informatics, Max Planck Society %T Enhanced dynamic reflectometry for relightable free-viewpoint video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-67F4-B %U http://domino.mpi-inf.mpg.de
/internet/reports.nsf/NumberView/2006-4-006 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 37 p. %X Free-Viewpoint Video of Human Actors allows photo- realistic rendering
of real-world people under novel viewing conditions. Dynamic Reflectometry extends the concept of free-view point video and allows rendering in addition under novel lighting conditions. In this work,
we present an enhanced method for capturing human shape and motion as well as dynamic surface reflectance properties from a sparse set of input video streams. We augment our initial method for
model-based relightable free-viewpoint video in several ways. Firstly, a single-skin mesh is introduced for the continuous appearance of the model. Moreover an algorithm to detect and compensate
lateral shifting of textiles in order to improve temporal texture registration is presented. Finally, a structured resampling approach is introduced which enables reliable estimation of spatially
varying surface reflectance despite a static recording setup. The new algorithm ingredients along with the Relightable 3D Video framework enables us to realistically reproduce the appearance of
animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution can be used to create 3D
renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware. %B Research Report / Max-Planck-Institut für Informatik
GPU point list generation through histogram pyramids
G. Ziegler, A. Tevs, C. Theobalt and H.-P. Seidel
Technical Report, 2006
Image Pyramids are frequently used in porting non-local algorithms to graphics hardware. A Histogram pyramid (short: HistoPyramid), a special version of image pyramid, sums up the number of active
entries in a 2D image hierarchically. We show how a HistoPyramid can be utilized as an implicit indexing data structure, allowing us to convert a sparse matrix into a coordinate list of active cell
entries (a point list) on graphics hardware. The algorithm reduces a highly sparse matrix with N elements to a list of its M active entries in O(N) + M (log N) steps, despite the restricted graphics
hardware architecture. Applications are numerous, including feature detection, pixel classification and binning, conversion of 3D volumes to particle clouds and sparse matrix compression.
@techreport{OhtakeBelyaevSeidel2004, TITLE = {{GPU} point list generation through histogram pyramids}, AUTHOR = {Ziegler, Gernot and Tevs, Art and Theobalt, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-002}, NUMBER = {MPI-I-2006-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS =
{Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Image Pyramids are frequently used in porting non-local algorithms to graphics hardware. A Histogram pyramid (short: HistoPyramid), a
special version of image pyramid, sums up the number of active entries in a 2D image hierarchically. We show how a HistoPyramid can be utilized as an implicit indexing data structure, allowing us to
convert a sparse matrix into a coordinate list of active cell entries (a point list) on graphics hardware . The algorithm reduces a highly sparse matrix with N elements to a list of its M active
entries in O(N) + M (log N) steps, despite the restricted graphics hardware architecture. Applications are numerous, including feature detection, pixel classification and binning, conversion of 3D
volumes to particle clouds and sparse matrix compression.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Ziegler, Gernot %A Tevs, Art %A Theobalt, Christian %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck
Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T GPU point list generation through histogram pyramids : %G eng %U http:
//hdl.handle.net/11858/00-001M-0000-0014-680E-9 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 13
p. %X Image Pyramids are frequently used in porting non-local algorithms to graphics hardware. A Histogram pyramid (short: HistoPyramid), a special version of image pyramid, sums up the number of
active entries in a 2D image hierarchically. We show how a HistoPyramid can be utilized as an implicit indexing data structure, allowing us to convert a sparse matrix into a coordinate list of active
cell entries (a point list) on graphics hardware . The algorithm reduces a highly sparse matrix with N elements to a list of its M active entries in O(N) + M (log N) steps, despite the restricted
graphics hardware architecture. Applications are numerous, including feature detection, pixel classification and binning, conversion of 3D volumes to particle clouds and sparse matrix compression. %B
Research Report / Max-Planck-Institut für Informatik
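A CPU/NumPy illustration of the HistoPyramid idea, building levels of partial sums over a binary activity mask and descending the pyramid once per output index to locate each active cell; the report performs these steps in texture memory on the GPU, which this sketch does not attempt:

```python
import numpy as np

def build_histopyramid(mask):
    """Levels of 2x2 sums over a binary mask (side length a power of two)."""
    levels = [mask.astype(np.int64)]
    while levels[-1].shape[0] > 1:
        m = levels[-1]
        levels.append(m[0::2, 0::2] + m[0::2, 1::2] +
                      m[1::2, 0::2] + m[1::2, 1::2])
    return levels  # levels[-1][0, 0] == number of active cells

def extract_point_list(levels):
    """Traverse the pyramid top-down once per output index k."""
    total = int(levels[-1][0, 0])
    points = []
    for k in range(total):
        i = j = 0
        key = k
        for level in reversed(levels[:-1]):            # descend towards the base
            i, j = 2 * i, 2 * j
            for di, dj in ((0, 0), (0, 1), (1, 0), (1, 1)):
                c = int(level[i + di, j + dj])
                if key < c:                            # the k-th entry lies in this child
                    i, j = i + di, j + dj
                    break
                key -= c
        points.append((i, j))
    return points

mask = np.zeros((8, 8), dtype=bool)
mask[1, 2] = mask[4, 7] = mask[6, 0] = True
levels = build_histopyramid(mask)
print(extract_point_list(levels))   # active cells, in the pyramid's traversal order
```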
Improved algorithms for all-pairs approximate shortest paths in weighted graphs
S. Baswana and K. Telikepalli
Technical Report, 2005
The all-pairs approximate shortest-paths problem is an interesting variant of the classical all-pairs shortest-paths problem in graphs. The problem aims at building a data-structure for a given graph
with the following two features. Firstly, for any two vertices, it should report an {\emph{approximate}} shortest path between them, that is, a path which is longer than the shortest path by some {\
emph{small}} factor. Secondly, the data-structure should require less preprocessing time (strictly sub-cubic) and occupy optimal space (sub-quadratic), at the cost of this approximation. In this
paper, we present algorithms for computing all-pairs approximate shortest paths in a weighted undirected graph. These algorithms significantly improve the existing results for this problem.
@techreport{, TITLE = {Improved algorithms for all-pairs approximate shortest paths in weighted graphs}, AUTHOR = {Baswana, Surender and Telikepalli, Kavitha}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-003}, NUMBER = {MPI-I-2005-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005},
DATE = {2005}, ABSTRACT = {The all-pairs approximate shortest-paths problem is an interesting variant of the classical all-pairs shortest-paths problem in graphs. The problem aims at building a
data-structure for a given graph with the following two features. Firstly, for any two vertices, it should report an {\emph{approximate}} shortest path between them, that is, a path which is longer
than the shortest path by some {\emph{small}} factor. Secondly, the data-structure should require less preprocessing time (strictly sub-cubic) and occupy optimal space (sub-quadratic), at the cost of
this approximation. In this paper, we present algorithms for computing all-pairs approximate shortest paths in a weighted undirected graph. These algorithms significantly improve the existing results
for this problem.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Baswana, Surender %A Telikepalli, Kavitha %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Improved algorithms for all-pairs approximate
shortest paths in weighted graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6854-9 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-003 %Y Max-Planck-Institut für
Informatik %C Saarbrücken %D 2005 %P 26 p. %X The all-pairs approximate shortest-paths problem is an interesting variant of the classical all-pairs shortest-paths problem in graphs. The
problem aims at building a data-structure for a given graph with the following two features. Firstly, for any two vertices, it should report an {\emph{approximate}} shortest path between them, that
is, a path which is longer than the shortest path by some {\emph{small}} factor. Secondly, the data-structure should require less preprocessing time (strictly sub-cubic) and occupy optimal space
(sub-quadratic), at the cost of this approximation. In this paper, we present algorithms for computing all-pairs approximate shortest paths in a weighted undirected graph. These algorithms
significantly improve the existing results for this problem. %B Research Report / Max-Planck-Institut für Informatik
STXXL: Standard Template Library for XXL Data Sets
R. Dementiev, L. Kettner and P. Sanders
Technical Report, 2005
@techreport{Kettner2005StxxlReport, TITLE = {{STXXL}: Standard Template Library for {XXL} Data Sets}, AUTHOR = {Dementiev, Roman and Kettner, Lutz and Sanders, Peter}, LANGUAGE = {eng}, NUMBER =
{2005/18}, INSTITUTION = {Fakult{\"a}t f{\"u}r Informatik, University of Karlsruhe}, ADDRESS = {Karlsruhe, Germany}, YEAR = {2005}, DATE = {2005}, }
%0 Report %A Dementiev, Roman %A Kettner, Lutz %A Sanders, Peter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T STXXL: Standard Template Library for XXL Data Sets : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-E689-4 %Y
Fakultät für Informatik, University of Karlsruhe %C Karlsruhe, Germany %D 2005
An empirical model for heterogeneous translucent objects
C. Fuchs, M. Gösele, T. Chen and H.-P. Seidel
Technical Report, 2005
We introduce an empirical model for multiple scattering in heterogeneous translucent objects for which classical approximations such as the dipole approximation to the diffusion equation are no longer
valid. Motivated by the exponential fall-off of scattered intensity with distance, diffuse subsurface scattering is represented as a sum of exponentials per surface point plus a modulation texture.
Modeling quality can be improved by using an anisotropic model where exponential parameters are determined per surface location and scattering direction. We validate the scattering model for a set of
planar object samples which were recorded under controlled conditions and quantify the modeling error. Furthermore, several translucent objects with complex geometry are captured and compared to the
real object under similar illumination conditions.
@techreport{FuchsGoeseleChenSeidel, TITLE = {An emperical model for heterogeneous translucent objects}, AUTHOR = {Fuchs, Christian and G{\"o}sele, Michael and Chen, Tongbo and Seidel, Hans-Peter},
LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-006}, NUMBER = {MPI-I-2005-4-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS =
{Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {We introduce an empirical model for multiple scattering in heterogeneous translucent objects for which classical approximations such as
the dipole approximation to the di usion equation are no longer valid. Motivated by the exponential fall-o of scattered intensity with distance, di use subsurface scattering is represented as a sum
of exponentials per surface point plus a modulation texture. Modeling quality can be improved by using an anisotropic model where exponential parameters are determined per surface location and
scattering direction. We validate the scattering model for a set of planar object samples which were recorded under controlled conditions and quantify the modeling error. Furthermore, several
translucent objects with complex geometry are captured and compared to the real object under similar illumination conditions.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Fuchs, Christian %A Gösele, Michael %A Chen, Tongbo %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max
Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T An emperical model for heterogeneous translucent objects : %G
eng %U http://hdl.handle.net/11858/00-001M-0000-0014-682F-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-006 %Y Max-Planck-Institut für Informatik %C Saarbrücken
%D 2005 %P 20 p. %X We introduce an empirical model for multiple scattering in heterogeneous translucent objects for which classical approximations such as the dipole approximation to the di usion
equation are no longer valid. Motivated by the exponential fall-o of scattered intensity with distance, di use subsurface scattering is represented as a sum of exponentials per surface point plus a
modulation texture. Modeling quality can be improved by using an anisotropic model where exponential parameters are determined per surface location and scattering direction. We validate the
scattering model for a set of planar object samples which were recorded under controlled conditions and quantify the modeling error. Furthermore, several translucent objects with complex geometry are
captured and compared to the real object under similar illumination conditions. %B Research Report / Max-Planck-Institut für Informatik
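The exponential falloff model at the heart of the abstract above can be illustrated by fitting a sum of two exponentials to synthetic intensity-versus-distance samples; the data and initial guesses are invented, and the per-point, anisotropic, texture-modulated model of the report is far richer:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exponentials(r, a1, s1, a2, s2):
    """Scattered intensity vs. distance r as a sum of two exponential lobes."""
    return a1 * np.exp(-s1 * r) + a2 * np.exp(-s2 * r)

# Synthetic falloff measurements for one surface point (illustrative numbers).
r = np.linspace(0.0, 5.0, 60)
rng = np.random.default_rng(1)
truth = two_exponentials(r, 0.8, 2.5, 0.2, 0.4)
measured = truth * (1.0 + 0.02 * rng.standard_normal(r.size))

params, _ = curve_fit(two_exponentials, r, measured,
                      p0=[1.0, 1.0, 0.1, 0.1], maxfev=10000)
print("fitted (a1, s1, a2, s2):", np.round(params, 3))
```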
Reflectance from images: a model-based approach for human faces
M. Fuchs, V. Blanz, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2005
In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system
estimates the 3D shape, and establishes point-to-point correspondence across images taken from different viewpoints, and across different individuals' faces. This provides a common parameterization
of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates deformations of the face during the measurement
process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a-priori. We apply analytical BRDF models to express the
reflectance properties of each region, and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally
refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.
@techreport{FuchsBlanzLenschSeidel2005, TITLE = {Reflectance from images: a model-based approach for human faces}, AUTHOR = {Fuchs, Martin and Blanz, Volker and Lensch, Hendrik P. A. and Seidel,
Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-001}, NUMBER = {MPI-I-2005-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the
face is not required. Based on a morphable face model, the system estimates the 3D shape, and establishes point-to-point correspondence across images taken from different viewpoints, and across
different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from
images compensates deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be
defined a-priori. We apply analytical BRDF models to express the reflectance properties of each region, and we estimate their parameters in a least-squares fit from the image data. For each of the
surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novelorientations and lighting
conditions.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Fuchs, Martin %A Blanz, Volker %A Lensch, Hendrik P. A. %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max
Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Reflectance from images: a model-based approach for human faces
: %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-683F-C %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-001 %Y Max-Planck-Institut für Informatik %C Saarbrü
cken %D 2005 %P 33 p. %X In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable
face model, the system estimates the 3D shape, and establishes point-to-point correspondence across images taken from different viewpoints, and across different individuals' faces. This provides a
common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates deformations of the face
during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a-priori. We apply analytical BRDF
models to express the reflectance properties of each region, and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the
BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novelorientations and lighting conditions. %B Research Report /
Max-Planck-Institut für Informatik
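The per-region least-squares BRDF fit mentioned above can be sketched with a Lambert-plus-Phong model and synthetic observations; geometry, sample counts, and parameter values are assumptions for illustration, not the report's setup:

```python
import numpy as np
from scipy.optimize import least_squares

def shade(params, n, l, v):
    """Lambert + Phong: kd * (n.l) + ks * (r.v)^m, all directions unit length."""
    kd, ks, m = params
    ndotl = np.clip(np.sum(n * l, axis=1), 0.0, None)
    r = 2.0 * ndotl[:, None] * n - l                  # mirror reflection of l about n
    rdotv = np.clip(np.sum(r * v, axis=1), 0.0, None)
    return kd * ndotl + ks * rdotv ** m

rng = np.random.default_rng(2)
def unit(x): return x / np.linalg.norm(x, axis=1, keepdims=True)
n = np.tile([0.0, 0.0, 1.0], (200, 1))                # flat patch, varying light/view
l = unit(rng.uniform([-1, -1, 0.2], [1, 1, 1], (200, 3)))
v = unit(rng.uniform([-1, -1, 0.2], [1, 1, 1], (200, 3)))

observed = shade([0.6, 0.3, 20.0], n, l, v)           # synthetic "image" intensities
fit = least_squares(lambda p: shade(p, n, l, v) - observed,
                    x0=[0.5, 0.5, 5.0], bounds=([0, 0, 1], [1, 1, 200]))
print(np.round(fit.x, 3))                             # should roughly recover kd, ks, m
```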
Cycle bases of graphs and sampled manifolds
C. Gotsman, K. Kaligosi, K. Mehlhorn, D. Michail and E. Pyrga
Technical Report, 2005
Point samples of a surface in $\R^3$ are the dominant output of a multitude of 3D scanning devices. The usefulness of these devices rests on being able to extract properties of the surface from the
sample. We show that, under certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of the sample encodes topological information about the surface and yields bases for the
trivial and non-trivial loops of the surface. We validate our results by experiments.
@techreport{, TITLE = {Cycle bases of graphs and sampled manifolds}, AUTHOR = {Gotsman, Craig and Kaligosi, Kanela and Mehlhorn, Kurt and Michail, Dimitrios and Pyrga, Evangelia}, LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-008}, NUMBER = {MPI-I-2005-1-008}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR
= {2005}, DATE = {2005}, ABSTRACT = {Point samples of a surface in $\R^3$ are the dominant output of a multitude of 3D scanning devices. The usefulness of these devices rests on being able to extract
properties of the surface from the sample. We show that, under certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of the sample encodes topological information about the
surface and yields bases for the trivial and non-trivial loops of the surface. We validate our results by experiments.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Gotsman, Craig %A Kaligosi, Kanela %A Mehlhorn, Kurt %A Michail, Dimitrios %A Pyrga, Evangelia %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max
Planck Society %T Cycle bases of graphs and sampled manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-684C-E %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2005-1-008 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 30 p. %X Point samples of a surface in $\R^3$ are the dominant output of a multitude of 3D scanning devices. The
usefulness of these devices rests on being able to extract properties of the surface from the sample. We show that, under certain sampling conditions, the minimum cycle basis of a nearest neighbor
graph of the sample encodes topological information about the surface and yields bases for the trivial and non-trivial loops of the surface. We validate our results by experiments. %B Research Report
/ Max-Planck-Institut für Informatik
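For a sufficiently dense sample, the construction above can be tried directly with NetworkX: build a nearest-neighbor graph of the sample and compute its minimum cycle basis. The uniform circle sample below is a hypothetical toy case whose single basis cycle is the curve's non-trivial loop:

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

# Uniform point sample of a circle; its 2-nearest-neighbor graph is the polygon.
m = 40
theta = np.linspace(0, 2 * np.pi, m, endpoint=False)
points = np.c_[np.cos(theta), np.sin(theta)]

tree = cKDTree(points)
G = nx.Graph()
G.add_nodes_from(range(m))
_, nbrs = tree.query(points, k=3)          # each row: the point itself + two nearest
for i, row in enumerate(nbrs):
    G.add_edge(i, int(row[1]))
    G.add_edge(i, int(row[2]))

basis = nx.minimum_cycle_basis(G)
print([len(c) for c in basis])             # one cycle of length 40: the circle's loop
```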
Reachability substitutes for planar digraphs
I. Katriel, M. Kutz and M. Skutella
Technical Report, 2005
Given a digraph $G = (V,E)$ with a set $U$ of vertices marked ``interesting,'' we want to find a smaller digraph $\RS{} = (V',E')$ with $V' \supseteq U$ in such a way that the reachabilities amongst
those interesting vertices in $G$ and \RS{} are the same. So with respect to the reachability relations within $U$, the digraph \RS{} is a substitute for $G$. We show that while almost all graphs do
not have reachability substitutes smaller than $\Ohmega(|U|^2/\log |U|)$, every planar graph has a reachability substitute of size $\Oh(|U| \log^2 |U|)$. Our result rests on two new structural
results for planar dags, a separation procedure and a reachability theorem, which might be of independent interest.
@techreport{, TITLE = {Reachability substitutes for planar digraphs}, AUTHOR = {Katriel, Irit and Kutz, Martin and Skutella, Martin}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2005-1-002}, NUMBER = {MPI-I-2005-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {Given a
digraph $G = (V,E)$ with a set $U$ of vertices marked ``interesting,'' we want to find a smaller digraph $\RS{} = (V',E')$ with $V' \supseteq U$ in such a way that the reachabilities amongst those
interesting vertices in $G$ and \RS{} are the same. So with respect to the reachability relations within $U$, the digraph \RS{} is a substitute for $G$. We show that while almost all graphs do not
have reachability substitutes smaller than $\Ohmega(|U|^2/\log |U|)$, every planar graph has a reachability substitute of size $\Oh(|U| \log^2 |U|)$. Our result rests on two new structural results
for planar dags, a separation procedure and a reachability theorem, which might be of independent interest.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Katriel, Irit %A Kutz, Martin %A Skutella, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Reachability substitutes for planar digraphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6859-0 %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 24 p. %X Given a digraph $G = (V,E)$ with a set $U$ of
vertices marked ``interesting,'' we want to find a smaller digraph $\RS{} = (V',E')$ with $V' \supseteq U$ in such a way that the reachabilities amongst those interesting vertices in $G$ and \RS{}
are the same. So with respect to the reachability relations within $U$, the digraph \RS{} is a substitute for $G$. We show that while almost all graphs do not have reachability substitutes smaller
than $\Ohmega(|U|^2/\log |U|)$, every planar graph has a reachability substitute of size $\Oh(|U| \log^2 |U|)$. Our result rests on two new structural results for planar dags, a separation procedure
and a reachability theorem, which might be of independent interest. %B Research Report / Max-Planck-Institut für Informatik
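The defining property of a reachability substitute is easy to check by brute force (the report's contribution is proving that small substitutes exist for planar digraphs, not this check); a minimal sketch with a hypothetical example:

```python
import networkx as nx
from itertools import permutations

def is_reachability_substitute(G, RS, U):
    """RS is a substitute for G w.r.t. U iff reachability within U coincides."""
    return all(nx.has_path(G, u, v) == nx.has_path(RS, u, v)
               for u, v in permutations(U, 2))

# Tiny example: a path a -> x -> y -> b with U = {a, b}; the two-vertex
# digraph with the single edge a -> b preserves all reachabilities within U.
G = nx.DiGraph([("a", "x"), ("x", "y"), ("y", "b")])
RS = nx.DiGraph([("a", "b")])
print(is_reachability_substitute(G, RS, ["a", "b"]))   # True
```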
A faster algorithm for computing a longest common increasing subsequence
I. Katriel and M. Kutz
Technical Report, 2005
Let $A=\langle a_1,\dots,a_n\rangle$ and $B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$, whose elements are drawn from a totally ordered set. We present an algorithm that finds a
longest common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$ time and $O(m + n\ell)$ space, where $\ell$ is the length of the output. A previous algorithm by Yang et al. needs $\
Theta(mn)$ time and space, so ours is faster for a wide range of values of $m,n$ and $\ell$.
@techreport{, TITLE = {A faster algorithm for computing a longest common increasing subsequence}, AUTHOR = {Katriel, Irit and Kutz, Martin}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/
internet/reports.nsf/NumberView/2005-1-007}, NUMBER = {MPI-I-2005-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT =
{Let $A=\langle a_1,\dots,a_n\rangle$ and $B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$, whose elements are drawn from a totally ordered set. We present an algorithm that finds a
longest common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$ time and $O(m + n\ell)$ space, where $\ell$ is the length of the output. A previous algorithm by Yang et al. needs $\
Theta(mn)$ time and space, so ours is faster for a wide range of values of $m,n$ and $\ell$.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Katriel, Irit %A Kutz, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A faster
algorithm for computing a longest common increasing subsequence : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-684F-8 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2005-1-007 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 13 p. %X Let $A=\langle a_1,\dots,a_n\rangle$ and $B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge
n$, whose elements are drawn from a totally ordered set. We present an algorithm that finds a longest common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$ time and $O(m + n\ell)$
space, where $\ell$ is the length of the output. A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space, so ours is faster for a wide range of values of $m,n$ and $\ell$. %B Research
Report / Max-Planck-Institut für Informatik
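For reference, the Theta(mn)-time dynamic program of Yang et al. that the report improves upon can be written in a few lines; this is the baseline, not the $O(m\log m+n\ell\log n)$ algorithm of the report:

```python
def lcis_length(A, B):
    """Length of a longest common increasing subsequence of A and B.
    Classical dynamic program: Theta(len(A)*len(B)) time, Theta(len(B)) space."""
    dp = [0] * len(B)          # dp[j] = best LCIS length ending with B[j]
    for a in A:
        best = 0               # best dp[j'] over j' < j with B[j'] < a
        for j, b in enumerate(B):
            if a == b and best + 1 > dp[j]:
                dp[j] = best + 1
            elif b < a and dp[j] > best:
                best = dp[j]
    return max(dp, default=0)

print(lcis_length([2, 3, 1, 6, 5, 4, 6], [1, 3, 5, 6]))   # 3, e.g. (3, 5, 6)
```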
Photometric calibration of high dynamic range cameras
G. Krawczyk, M. Gösele and H.-P. Seidel
Technical Report, 2005
@techreport{KrawczykGoeseleSeidel2005, TITLE = {Photometric calibration of high dynamic range cameras}, AUTHOR = {Krawczyk, Grzegorz and G{\"o}sele, Michael and Seidel, Hans-Peter}, LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-005}, NUMBER = {MPI-I-2005-4-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR
= {2005}, DATE = {2005}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Krawczyk, Grzegorz %A Gösele, Michael %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T Photometric calibration of high dynamic range cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6834-2 %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-005 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 21 p. %B Research Report / Max-Planck-Institut für Informatik
Analysis and design of discrete normals and curvatures
T. Langer, A. Belyaev and H.-P. Seidel
Technical Report, 2005
Accurate estimations of geometric properties of a surface (a curve) from its discrete approximation are important for many computer graphics and computer vision applications. To assess and improve
the quality of such an approximation we assume that the smooth surface (curve) is known in general form. Then we can represent the surface (curve) by a Taylor series expansion and compare its
geometric properties with the corresponding discrete approximations. In turn we can either prove convergence of these approximations towards the true properties as the edge lengths tend to zero, or
we can get hints how to eliminate the error. In this report we propose and study discrete schemes for estimating the curvature and torsion of a smooth 3D curve approximated by a polyline. Thereby we
make some interesting findings about connections between (smooth) classical curves and certain estimation schemes for polylines. Furthermore, we consider several popular schemes for estimating the
surface normal of a dense triangle mesh interpolating a smooth surface, and analyze their asymptotic properties. Special attention is paid to the mean curvature vector, which approximates both normal
direction and mean curvature. We evaluate a common discrete approximation and show how asymptotic analysis can be used to improve it. It turns out that the integral formulation of the mean curvature
\begin{equation*} H = \frac{1}{2 \pi} \int_0^{2 \pi} \kappa(\phi) d\phi, \end{equation*} can be computed by an exact quadrature formula. The same is true for the integral formulations of Gaussian
curvature and the Taubin tensor. The exact quadratures are then used to obtain reliable estimates of the curvature tensor of a smooth surface approximated by a dense triangle mesh. The proposed
method is fast and often demonstrates a better performance than conventional curvature tensor estimation approaches. We also show that the curvature tensor approximated by our approach converges
towards the true curvature tensor as the edge lengths tend to zero.
@techreport{LangerBelyaevSeidel2005, TITLE = {Analysis and design of discrete normals and curvatures}, AUTHOR = {Langer, Torsten and Belyaev, Alexander and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL
= {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-003}, NUMBER = {MPI-I-2005-4-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR =
{2005}, DATE = {2005}, ABSTRACT = {Accurate estimations of geometric properties of a surface (a curve) from its discrete approximation are important for many computer graphics and computer vision
applications. To assess and improve the quality of such an approximation we assume that the smooth surface (curve) is known in general form. Then we can represent the surface (curve) by a Taylor
series expansion and compare its geometric properties with the corresponding discrete approximations. In turn we can either prove convergence of these approximations towards the true properties as
the edge lengths tend to zero, or we can get hints how to eliminate the error. In this report we propose and study discrete schemes for estimating the curvature and torsion of a smooth 3D curve
approximated by a polyline. Thereby we make some interesting findings about connections between (smooth) classical curves and certain estimation schemes for polylines. Furthermore, we consider
several popular schemes for estimating the surface normal of a dense triangle mesh interpolating a smooth surface, and analyze their asymptotic properties. Special attention is paid to the mean
curvature vector, that approximates both, normal direction and mean curvature. We evaluate a common discrete approximation and show how asymptotic analysis can be used to improve it. It turns out
that the integral formulation of the mean curvature \begin{equation*} H = \frac{1}{2 \pi} \int_0^{2 \pi} \kappa(\phi) d\phi, \end{equation*} can be computed by an exact quadrature formula. The same
is true for the integral formulations of Gaussian curvature and the Taubin tensor. The exact quadratures are then used to obtain reliable estimates of the curvature tensor of a smooth surface
approximated by a dense triangle mesh. The proposed method is fast and often demonstrates a better performance than conventional curvature tensor estimation approaches. We also show that the
curvature tensor approximated by our approach converges towards the true curvature tensor as the edge lengths tend to zero.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Langer, Torsten %A Belyaev, Alexander %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T Analysis and design of discrete normals and curvatures : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6837-B %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-003 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 42 p. %X Accurate estimations of geometric properties of a
surface (a curve) from its discrete approximation are important for many computer graphics and computer vision applications. To assess and improve the quality of such an approximation we assume that
the smooth surface (curve) is known in general form. Then we can represent the surface (curve) by a Taylor series expansion and compare its geometric properties with the corresponding discrete
approximations. In turn we can either prove convergence of these approximations towards the true properties as the edge lengths tend to zero, or we can get hints how to eliminate the error. In this
report we propose and study discrete schemes for estimating the curvature and torsion of a smooth 3D curve approximated by a polyline. Thereby we make some interesting findings about connections
between (smooth) classical curves and certain estimation schemes for polylines. Furthermore, we consider several popular schemes for estimating the surface normal of a dense triangle mesh
interpolating a smooth surface, and analyze their asymptotic properties. Special attention is paid to the mean curvature vector, that approximates both, normal direction and mean curvature. We
evaluate a common discrete approximation and show how asymptotic analysis can be used to improve it. It turns out that the integral formulation of the mean curvature \begin{equation*} H = \frac{1}{2
\pi} \int_0^{2 \pi} \kappa(\phi) d\phi, \end{equation*} can be computed by an exact quadrature formula. The same is true for the integral formulations of Gaussian curvature and the Taubin tensor. The
exact quadratures are then used to obtain reliable estimates of the curvature tensor of a smooth surface approximated by a dense triangle mesh. The proposed method is fast and often demonstrates a
better performance than conventional curvature tensor estimation approaches. We also show that the curvature tensor approximated by our approach converges towards the true curvature tensor as the
edge lengths tend to zero. %B Research Report / Max-Planck-Institut für Informatik
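One way to see why the integral formulation above admits an exact quadrature (not necessarily the derivation used in the report) is Euler's formula for the normal curvature in a tangent direction at angle $\phi$ from a principal direction:
\begin{equation*} \kappa(\phi) = \kappa_1 \cos^2\phi + \kappa_2 \sin^2\phi \quad\Longrightarrow\quad \frac{1}{2\pi} \int_0^{2\pi} \kappa(\phi)\, d\phi = \frac{\kappa_1 + \kappa_2}{2} = H, \qquad \frac{\kappa(\phi) + \kappa(\phi + \tfrac{\pi}{2})}{2} = H \ \text{for every } \phi, \end{equation*}
so averaging the normal curvatures of any two orthogonal tangent directions already reproduces $H$ exactly, i.e. a two-sample quadrature of the integral is exact.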
Rank-maximal through maximum weight matchings
D. Michail
Technical Report, 2005
Given a bipartite graph $G( V, E)$, $ V = A \disjointcup B$ where $|V|=n, |E|=m$ and a partition of the edge set into $r \le m$ disjoint subsets $E = E_1 \disjointcup E_2 \disjointcup \dots \
disjointcup E_r$, which are called ranks, the {\em rank-maximal matching} problem is to find a matching $M$ of $G$ such that $|M \cap E_1|$ is maximized, and subject to that, $|M \cap E_2|$ is maximized, and so on. Such
a problem arises as an optimization criteria over a possible assignment of a set of applicants to a set of posts. The matching represents the assignment and the ranks on the edges correspond to a
ranking on the posts submitted by the applicants. The rank-maximal matching problem has been previously studied where a $O( r \sqrt n m )$ time and linear space algorithm~\cite{IKMMP} was presented.
In this paper we present a new simpler algorithm which matches the running time and space complexity of the above algorithm. The new algorithm is based on a different approach, by exploiting that the
rank-maximal matching problem can be reduced to a maximum weight matching problem where the weight of an edge of rank $i$ is $2^{ \ceil{\log n} (r-i)}$. By exploiting that these edge weights are
steeply distributed we design a scaling algorithm which scales by a factor of $n$ in each phase. We also show that in each phase one maximum cardinality computation is sufficient to get a new optimal
solution. This algorithm answers an open question raised on the same paper on whether the reduction to the maximum-weight matching problem can help us derive an efficient algorithm.
@techreport{, TITLE = {Rank-maximal through maximum weight matchings}, AUTHOR = {Michail, Dimitrios}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2005-1-001}, NUMBER = {MPI-I-2005-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {Given a bipartite graph $G( V,
E)$, $ V = A \disjointcup B$ where $|V|=n, |E|=m$ and a partition of the edge set into $r \le m$ disjoint subsets $E = E_1 \disjointcup E_2 \disjointcup \dots \disjointcup E_r$, which are called
ranks, the {\em rank-maximal matching} problem is to find a matching $M$ of $G$ such that $|M \cap E_1|$ is maximized and given that $|M \cap E_2|$, and so on. Such a problem arises as an
optimization criteria over a possible assignment of a set of applicants to a set of posts. The matching represents the assignment and the ranks on the edges correspond to a ranking on the posts
submitted by the applicants. The rank-maximal matching problem has been previously studied where a $O( r \sqrt n m )$ time and linear space algorithm~\cite{IKMMP} was presented. In this paper we
present a new simpler algorithm which matches the running time and space complexity of the above algorithm. The new algorithm is based on a different approach, by exploiting that the rank-maximal
matching problem can be reduced to a maximum weight matching problem where the weight of an edge of rank $i$ is $2^{ \ceil{\log n} (r-i)}$. By exploiting that these edge weights are steeply
distributed we design a scaling algorithm which scales by a factor of $n$ in each phase. We also show that in each phase one maximum cardinality computation is sufficient to get a new optimal
solution. This algorithm answers an open question raised on the same paper on whether the reduction to the maximum-weight matching problem can help us derive an efficient algorithm.}, TYPE =
{Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Michail, Dimitrios %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Rank-maximal through maximum weight matchings : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-685C-A %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 22 p. %X Given a
bipartite graph $G( V, E)$, $ V = A \disjointcup B$ where $|V|=n, |E|=m$ and a partition of the edge set into $r \le m$ disjoint subsets $E = E_1 \disjointcup E_2 \disjointcup \dots \disjointcup
E_r$, which are called ranks, the {\em rank-maximal matching} problem is to find a matching $M$ of $G$ such that $|M \cap E_1|$ is maximized and given that $|M \cap E_2|$, and so on. Such a problem
arises as an optimization criteria over a possible assignment of a set of applicants to a set of posts. The matching represents the assignment and the ranks on the edges correspond to a ranking on
the posts submitted by the applicants. The rank-maximal matching problem has been previously studied where a $O( r \sqrt n m )$ time and linear space algorithm~\cite{IKMMP} was presented. In this
paper we present a new simpler algorithm which matches the running time and space complexity of the above algorithm. The new algorithm is based on a different approach, by exploiting that the
rank-maximal matching problem can be reduced to a maximum weight matching problem where the weight of an edge of rank $i$ is $2^{ \ceil{\log n} (r-i)}$. By exploiting that these edge weights are
steeply distributed we design a scaling algorithm which scales by a factor of $n$ in each phase. We also show that in each phase one maximum cardinality computation is sufficient to get a new optimal
solution. This algorithm answers an open question raised on the same paper on whether the reduction to the maximum-weight matching problem can help us derive an efficient algorithm. %B Research
Report / Max-Planck-Institut für Informatik
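The reduction stated in the abstract of the preceding entry is easy to try out directly: give each edge of rank $i$ the weight $2^{\lceil \log n \rceil (r-i)}$ and compute a single maximum-weight matching. The sketch below is only an illustration of that reduction, not the report's scaling algorithm; it assumes the networkx library is available, takes the logarithm base 2 (an assumption consistent with the powers-of-two weights), and relies on Python's arbitrary-precision integers to absorb the large weights.

```python
# Illustrative reduction from rank-maximal matching to maximum-weight matching:
# an edge of rank i gets weight 2^(ceil(log2 n) * (r - i)).  Not the report's
# scaling algorithm; networkx is assumed to be installed.
import math
import networkx as nx

def rank_maximal_via_max_weight(n, ranked_edges, r):
    """ranked_edges: iterable of (u, v, rank) with rank in 1..r over n nodes."""
    shift = max(1, math.ceil(math.log2(n)))
    G = nx.Graph()
    for u, v, rank in ranked_edges:
        G.add_edge(u, v, weight=2 ** (shift * (r - rank)))
    return nx.max_weight_matching(G)

# Applicants a0, a1 and posts p0, p1: a0 ranks p0 first and p1 second, a1 ranks p0 first.
edges = [("a0", "p0", 1), ("a0", "p1", 2), ("a1", "p0", 1)]
print(rank_maximal_via_max_weight(4, edges, r=2))  # pairs a1 with p0 (rank 1) and a0 with p1 (rank 2)
```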
Sparse meshing of uncertain and noisy surface scattered data
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2005
In this paper, we develop a method for generating a high-quality approximation of a noisy set of points sampled from a smooth surface by a sparse triangle mesh. The main idea of the method consists
of defining an appropriate set of approximation centers and use them as the vertices of a mesh approximating given scattered data. To choose the approximation centers, a clustering procedure is used.
With every point of the input data we associate a local uncertainty measure which is used to estimate the importance of the point contribution to the reconstructed surface. Then a global uncertainty
measure is constructed from local ones. The approximation centers are chosen as the points where the global uncertainty measure attains its local minima. It allows us to achieve a high-quality
approximation of uncertain and noisy point data by a sparse mesh. An interesting feature of our approach is that the uncertainty measures take into account the normal directions estimated at the
scattered points. In particular it results in accurate reconstruction of high-curvature regions.
@techreport{SchallBelyaevSeidel2005, TITLE = {Sparse meshing of uncertain and noisy surface scattered data}, AUTHOR = {Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter}, LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-002}, NUMBER = {MPI-I-2005-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR
= {2005}, DATE = {2005}, ABSTRACT = {In this paper, we develop a method for generating a high-quality approximation of a noisy set of points sampled from a smooth surface by a sparse triangle mesh.
The main idea of the method consists of defining an appropriate set of approximation centers and use them as the vertices of a mesh approximating given scattered data. To choose the approximation
centers, a clustering procedure is used. With every point of the input data we associate a local uncertainty measure which is used to estimate the importance of the point contribution to the
reconstructed surface. Then a global uncertainty measure is constructed from local ones. The approximation centers are chosen as the points where the global uncertainty measure attains its local
minima. It allows us to achieve a high-quality approximation of uncertain and noisy point data by a sparse mesh. An interesting feature of our approach is that the uncertainty measures take into
account the normal directions estimated at the scattered points. In particular it results in accurate reconstruction of high-curvature regions.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Schall, Oliver %A Belyaev, Alexander %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T Sparse meshing of uncertain and noisy surface scattered data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-683C-1 %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 20 p. %X In this paper, we develop a method for generating a
high-quality approximation of a noisy set of points sampled from a smooth surface by a sparse triangle mesh. The main idea of the method consists of defining an appropriate set of approximation
centers and use them as the vertices of a mesh approximating given scattered data. To choose the approximation centers, a clustering procedure is used. With every point of the input data we associate
a local uncertainty measure which is used to estimate the importance of the point contribution to the reconstructed surface. Then a global uncertainty measure is constructed from local ones. The
approximation centers are chosen as the points where the global uncertainty measure attains its local minima. It allows us to achieve a high-quality approximation of uncertain and noisy point data by
a sparse mesh. An interesting feature of our approach is that the uncertainty measures take into account the normal directions estimated at the scattered points. In particular it results in accurate
reconstruction of high-curvature regions. %B Research Report / Max-Planck-Institut für Informatik
Automated retraining methods for document classification and their parameter tuning
S. Siersdorfer and G. Weikum
Technical Report, 2005
This paper addresses the problem of semi-supervised classification on document collections using retraining (also called self-training). A possible application is focused Web crawling which may start
with very few, manually selected, training documents but can be enhanced by automatically adding initially unlabeled, positively classified Web pages for retraining. Such an approach is by itself not
robust and faces tuning problems regarding parameters like the number of selected documents, the number of retraining iterations, and the ratio of positive and negative classified samples used for
retraining. The paper develops methods for automatically tuning these parameters, based on predicting the leave-one-out error for a re-trained classifier and avoiding that the classifier is diluted
by selecting too many or weak documents for retraining. Our experiments with three different datasets confirm the practical viability of the approach.
@techreport{SiersdorferWeikum2005, TITLE = {Automated retraining methods for document classification and their parameter tuning}, AUTHOR = {Siersdorfer, Stefan and Weikum, Gerhard}, LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-5-002}, NUMBER = {MPI-I-2005-5-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR
= {2005}, DATE = {2005}, ABSTRACT = {This paper addresses the problem of semi-supervised classification on document collections using retraining (also called self-training). A possible application is
focused Web crawling which may start with very few, manually selected, training documents but can be enhanced by automatically adding initially unlabeled, positively classified Web pages for
retraining. Such an approach is by itself not robust and faces tuning problems regarding parameters like the number of selected documents, the number of retraining iterations, and the ratio of
positive and negative classified samples used for retraining. The paper develops methods for automatically tuning these parameters, based on predicting the leave-one-out error for a re-trained
classifier and avoiding that the classifier is diluted by selecting too many or weak documents for retraining. Our experiments with three different datasets confirm the practical viability of the
approach.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Siersdorfer, Stefan %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck
Society %T Automated retraining methods for document classification and their parameter tuning : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6823-8 %U http://domino.mpi-inf.mpg.de/
internet/reports.nsf/NumberView/2005-5-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 23 p. %X This paper addresses the problem of semi-supervised classification on
document collections using retraining (also called self-training). A possible application is focused Web crawling which may start with very few, manually selected, training documents but can be
enhanced by automatically adding initially unlabeled, positively classified Web pages for retraining. Such an approach is by itself not robust and faces tuning problems regarding parameters like the
number of selected documents, the number of retraining iterations, and the ratio of positive and negative classified samples used for retraining. The paper develops methods for automatically tuning
these parameters, based on predicting the leave-one-out error for a re-trained classifier and avoiding that the classifier is diluted by selecting too many or weak documents for retraining. Our
experiments with three different datasets confirm the practical viability of the approach. %B Research Report / Max-Planck-Institut für Informatik
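For readers unfamiliar with retraining, the loop that the preceding abstract builds on looks roughly like the sketch below: train on the labeled documents, classify the unlabeled pool, move the most confidently classified documents into the training set with their predicted labels, and retrain. The report's actual contribution, tuning the number of added documents and the number of iterations via a predicted leave-one-out error, is not reproduced here; scikit-learn and scipy are assumed to be available, and the classifier choice is arbitrary.

```python
# Generic self-training (retraining) loop of the kind described in the abstract.
# The report's parameter tuning via predicted leave-one-out error is NOT implemented;
# scikit-learn and scipy are assumed.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_train(labeled_docs, labels, unlabeled_docs, iterations=3, per_round=10):
    vec = TfidfVectorizer()
    X_all = vec.fit_transform(list(labeled_docs) + list(unlabeled_docs))
    X_lab, X_unlab = X_all[:len(labeled_docs)], X_all[len(labeled_docs):]
    y = np.asarray(labels)
    pool = list(range(X_unlab.shape[0]))                    # still-unlabeled documents
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y)
    for _ in range(iterations):
        if not pool:
            break
        proba = clf.predict_proba(X_unlab[pool])
        order = np.argsort(-proba.max(axis=1))[:per_round]  # most confident first
        picked = [pool[i] for i in order]
        X_lab = vstack([X_lab, X_unlab[picked]])            # add pseudo-labeled documents
        y = np.concatenate([y, clf.classes_[proba[order].argmax(axis=1)]])
        pool = [p for p in pool if p not in set(picked)]
        clf = LogisticRegression(max_iter=1000).fit(X_lab, y)  # retraining step
    return clf, vec
```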
Joint Motion and Reflectance Capture for Creating Relightable 3D Videos
C. Theobalt, N. Ahmed, E. de Aguiar, G. Ziegler, H. Lensch, M. Magnor and H.-P. Seidel
Technical Report, 2005
Passive optical motion capture is able to provide authentically animated, photo-realistically and view-dependently textured models of real people. To import real-world characters into virtual environments, however, surface reflectance properties must also be known. We describe a video-based modeling approach that captures human motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method is able to recover spatially varying reflectance properties of clothes by exploiting the time-varying orientation of each surface point with respect to camera and light direction. The resulting model description enables us to match animated subject appearance to different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution allows populating virtual worlds with correctly relit, real-world people.
@techreport{TheobaltTR2005, TITLE = {Joint Motion and Reflectance Capture for Creating Relightable {3D} Videos}, AUTHOR = {Theobalt, Christian and Ahmed, Naveed and de Aguiar, Edilson and Ziegler,
Gernot and Lensch, Hendrik and Magnor, Marcus and Seidel, Hans-Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-2005-4-004}, LOCALID = {Local-ID:
C1256BDE005F57A8-5B757D3AA9584EEBC12570A7003C813D-TheobaltTR2005}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {\begin{abstract} Passive optical motion capture is able to provide authentically animated,
photo-realistically and view-dependently textured models of real people. To import real-world characters into virtual environments, however, also surface reflectance properties must be known. We
describe a video-based modeling approach that captures human motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method is able to recover
spatially varying reflectance properties of clothes % dynamic objects ? by exploiting the time-varying orientation of each surface point with respect to camera and light direction. The resulting
model description enables us to match animated subject appearance to different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our
contribution allows populating virtual worlds with correctly relit, real-world people.\\ \end{abstract}}, }
%0 Report %A Theobalt, Christian %A Ahmed, Naveed %A de Aguiar, Edilson %A Ziegler, Gernot %A Lensch, Hendrik %A Magnor, Marcus %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max
Planck Society Programming Logics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Graphics - Optics - Vision, MPI for Informatics, Max Planck Society Computer
Graphics, MPI for Informatics, Max Planck Society %T Joint Motion and Reflectance Capture for Creating Relightable 3D Videos : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2879-B %F EDOC:
520731 %F OTHER: Local-ID: C1256BDE005F57A8-5B757D3AA9584EEBC12570A7003C813D-TheobaltTR2005 %D 2005 %X \begin{abstract} Passive optical motion capture is able to provide authentically animated,
photo-realistically and view-dependently textured models of real people. To import real-world characters into virtual environments, however, also surface reflectance properties must be known. We
describe a video-based modeling approach that captures human motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method is able to recover
spatially varying reflectance properties of clothes % dynamic objects ? by exploiting the time-varying orientation of each surface point with respect to camera and light direction. The resulting
model description enables us to match animated subject appearance to different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our
contribution allows populating virtual worlds with correctly relit, real-world people.\\ \end{abstract}
Filtering algorithms for the Same and UsedBy constraints
N. Beldiceanu, I. Katriel and S. Thiel
Technical Report, 2004
@techreport{, TITLE = {Filtering algorithms for the Same and {UsedBy} constraints}, AUTHOR = {Beldiceanu, Nicolas and Katriel, Irit and Thiel, Sven}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-01}, TYPE = {Research Report}, }
%0 Report %A Beldiceanu, Nicolas %A Katriel, Irit %A Thiel, Sven %+ Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for
Informatics, Max Planck Society %T Filtering algorithms for the Same and UsedBy constraints : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-290C-C %F EDOC: 237881 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 33 p. %B Research Report
EXACUS: Efficient and Exact Algorithms for Curves and Surfaces
E. Berberich, A. Eigenwillig, M. Hemmer, S. Hert, L. Kettner, K. Mehlhorn, J. Reichel, S. Schmitt, E. Schömer, D. Weber and N. Wolpert
Technical Report, 2004
@techreport{Berberich_ECG-TR-361200-02, TITLE = {{EXACUS} : Efficient and Exact Algorithms for Curves and Surfaces}, AUTHOR = {Berberich, Eric and Eigenwillig, Arno and Hemmer, Michael and Hert,
Susan and Kettner, Lutz and Mehlhorn, Kurt and Reichel, Joachim and Schmitt, Susanne and Sch{\"o}mer, Elmar and Weber, Dennis and Wolpert, Nicola}, LANGUAGE = {eng}, NUMBER = {ECG-TR-361200-02},
INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, }
%0 Report %A Berberich, Eric %A Eigenwillig, Arno %A Hemmer, Michael %A Hert, Susan %A Kettner, Lutz %A Mehlhorn, Kurt %A Reichel, Joachim %A Schmitt, Susanne %A Schömer, Elmar %A Weber, Dennis
%A Wolpert, Nicola %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for
Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI
for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity,
MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T EXACUS : Efficient and
Exact Algorithms for Curves and Surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B89-6 %F EDOC: 237751 %Y INRIA %C Sophia Antipolis %D 2004 %Z name of event: Untitled Event %Z date
of event: - %Z place of event: %P 8 p. %B ECG Technical Report
An empirical comparison of software for constructing arrangements of curved arcs
E. Berberich, A. Eigenwillig, I. Emiris, E. Fogel, M. Hemmer, D. Halperin, A. Kakargias, L. Kettner, K. Mehlhorn, S. Pion, E. Schömer, M. Teillaud, R. Wein and N. Wolpert
Technical Report, 2004
@techreport{Berberich_ECG-TR-361200-01, TITLE = {An empirical comparison of software for constructing arrangements of curved arcs}, AUTHOR = {Berberich, Eric and Eigenwillig, Arno and Emiris, Ioannis
and Fogel, Efraim and Hemmer, Michael and Halperin, Dan and Kakargias, Athanasios and Kettner, Lutz and Mehlhorn, Kurt and Pion, Sylvain and Sch{\"o}mer, Elmar and Teillaud, Monique and Wein, Ron and
Wolpert, Nicola}, LANGUAGE = {eng}, NUMBER = {ECG-TR-361200-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective
Computational Geometry for Curves and Surfaces}}, }
%0 Report %A Berberich, Eric %A Eigenwillig, Arno %A Emiris, Ioannis %A Fogel, Efraim %A Hemmer, Michael %A Halperin, Dan %A Kakargias, Athanasios %A Kettner, Lutz %A Mehlhorn, Kurt %A Pion, Sylvain
%A Schömer, Elmar %A Teillaud, Monique %A Wein, Ron %A Wolpert, Nicola %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max
Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics,
Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for
Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T An empirical comparison of software for constructing arrangements of curved arcs : %G eng %U
http://hdl.handle.net/11858/00-001M-0000-000F-2B87-A %F EDOC: 237743 %Y INRIA %C Sophia Antipolis %D 2004 %Z name of event: Untitled Event %Z date of event: - %Z place of event: %P 11 p. %B ECG
Technical Report
On the Hadwiger’s Conjecture for Graphs Products
L. S. Chandran and N. Sivadasan
Technical Report, 2004a
@techreport{TR2004, TITLE = {On the {Hadwiger's} Conjecture for Graphs Products}, AUTHOR = {Chandran, L. Sunil and Sivadasan, N.}, LANGUAGE = {eng}, ISBN = {0946-011X}, NUMBER = {MPI-I-2004-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken, Germany}, YEAR = {2004}, DATE = {2004}, TYPE = {Research Report}, }
%0 Report %A Chandran, L. Sunil %A Sivadasan, N. %+ Discrete Optimization, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the
Hadwiger's Conjecture for Graphs Products : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-0C8F-A %@ 0946-011X %Y Max-Planck-Institut für Informatik %C Saarbrücken, Germany %D 2004
%B Research Report
On the Hadwiger’s conjecture for graph products
L. S. Chandran and N. Sivadasan
Technical Report, 2004b
@techreport{, TITLE = {On the Hadwiger's conjecture for graph products}, AUTHOR = {Chandran, L. Sunil and Sivadasan, Naveen}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-006}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report}, EDITOR =
{{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
%0 Report %A Chandran, L. Sunil %A Sivadasan, Naveen %+ Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Hadwiger's conjecture for graph products : %G
eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2BA6-4 %F EDOC: 241593 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 10 p. %B Max-Planck-Institut für Informatik
<Saarbrücken>: Research Report
Faster ray tracing with SIMD shaft culling
K. Dmitriev, V. Havran and H.-P. Seidel
Technical Report, 2004
@techreport{, TITLE = {Faster ray tracing with {SIMD} shaft culling}, AUTHOR = {Dmitriev, Kirill and Havran, Vlastimil and Seidel, Hans-Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-12}, TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research
Report}, EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
%0 Report %A Dmitriev, Kirill %A Havran, Vlastimil %A Seidel, Hans-Peter %+ Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max
Planck Society %T Faster ray tracing with SIMD shaft culling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-28BB-A %F EDOC: 237860 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 13 p. %B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
The LEDA class real number - extended version
S. Funke, K. Mehlhorn, S. Schmitt, C. Burnikel, R. Fleischer and S. Schirra
Technical Report, 2004
@techreport{Funke_ECG-TR-363110-01, TITLE = {The {LEDA} class real number -- extended version}, AUTHOR = {Funke, Stefan and Mehlhorn, Kurt and Schmitt, Susanne and Burnikel, Christoph and Fleischer,
Rudolf and Schirra, Stefan}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363110-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR =
{{Effective Computational Geometry for Curves and Surfaces}}, }
%0 Report %A Funke, Stefan %A Mehlhorn, Kurt %A Schmitt, Susanne %A Burnikel, Christoph %A Fleischer, Rudolf %A Schirra, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T The LEDA class real number - extended version : %G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B8C-F %F EDOC: 237780 %Y INRIA %C Sophia Antipolis %D 2004 %Z name of event: Untitled Event %Z date of event: - %Z place of event: %P 2 p. %B ECG
Technical Report
Modeling hair using a wisp hair model
J. Haber, C. Schmitt, M. Koster and H.-P. Seidel
Technical Report, 2004
@techreport{, TITLE = {Modeling hair using a wisp hair model}, AUTHOR = {Haber, J{\"o}rg and Schmitt, Carina and Koster, Martin and Seidel, Hans-Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-05}, TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research
Report}, EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
%0 Report %A Haber, Jörg %A Schmitt, Carina %A Koster, Martin %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for
Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Modeling hair using a wisp hair model : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-28F6-4 %F
EDOC: 237864 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 38 p. %B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Effects of a modular filter on geometric applications
M. Hemmer, L. Kettner and E. Schömer
Technical Report, 2004
@techreport{Hemmer_ECG-TR-363111-01, TITLE = {Effects of a modular filter on geometric applications}, AUTHOR = {Hemmer, Michael and Kettner, Lutz and Sch{\"o}mer, Elmar}, LANGUAGE = {eng}, NUMBER =
{ECG-TR-363111-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and
Surfaces}}, }
%0 Report %A Hemmer, Michael %A Kettner, Lutz %A Schömer, Elmar %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Effects of a modular filter on geometric applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B8F-9 %F
EDOC: 237782 %Y INRIA %C Sophia Antipolis %D 2004 %Z name of event: Untitled Event %Z date of event: - %Z place of event: %P 7 p. %B ECG Technical Report
Neural meshes: surface reconstruction with a learning algorithm
I. Ivrissimtzis, W.-K. Jeong, S. Lee, Y. Lee and H.-P. Seidel
Technical Report, 2004
@techreport{, TITLE = {Neural meshes: surface reconstruction with a learning algorithm}, AUTHOR = {Ivrissimtzis, Ioannis and Jeong, Won-Ki and Lee, Seungyong and Lee, Yunjin and Seidel, Hans-Peter},
LANGUAGE = {eng}, NUMBER = {MPI-I-2004-4-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-10}, TYPE = {Research Report}, }
%0 Report %A Ivrissimtzis, Ioannis %A Jeong, Won-Ki %A Lee, Seungyong %A Lee, Yunjin %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for
Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max
Planck Society %T Neural meshes: surface reconstruction with a learning algorithm : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-28C9-A %F EDOC: 237862 %Y Max-Planck-Institut für
Informatik %C Saarbrücken %D 2004 %P 16 p. %B Research Report
On algorithms for online topological ordering and sorting
I. Katriel
Technical Report, 2004
@techreport{, TITLE = {On algorithms for online topological ordering and sorting}, AUTHOR = {Katriel, Irit}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r
Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-02}, TYPE = {Research Report}, }
%0 Report %A Katriel, Irit %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On algorithms for online topological ordering and sorting : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-000F-2906-7 %F EDOC: 237878 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 12 p. %B Research Report
Classroom examples of robustness problems in geometric computations
L. Kettner, K. Mehlhorn, S. Pion, S. Schirra and C. Yap
Technical Report, 2004
@techreport{Kettner_ECG-TR-363100-01, TITLE = {Classroom examples of robustness problems in geometric computations}, AUTHOR = {Kettner, Lutz and Mehlhorn, Kurt and Pion, Sylvain and Schirra, Stefan
and Yap, Chee}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363100-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective
Computational Geometry for Curves and Surfaces}}, VOLUME = {3221}, }
%0 Report %A Kettner, Lutz %A Mehlhorn, Kurt %A Pion, Sylvain %A Schirra, Stefan %A Yap, Chee %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for
Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI
for Informatics, Max Planck Society %T Classroom examples of robustness problems in geometric computations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B92-0 %F EDOC: 237797 %Y INRIA %C
Sophia Antipolis %D 2004 %Z name of event: Untitled Event %Z date of event: - %Z place of event: %P 12 p. %B ECG Technical Report %N 3221
A fast root checking algorithm
C. Klein
Technical Report, 2004
@techreport{Klein_ECG-TR-363109-02, TITLE = {A fast root checking algorithm}, AUTHOR = {Klein, Christian}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363109-02}, INSTITUTION = {INRIA}, ADDRESS = {Sophia
Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective computational geometry for curves and surfaces}}, }
%0 Report %A Klein, Christian %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A fast root checking algorithm : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B96-8
%F EDOC: 237826 %Y INRIA %C Sophia Antipolis %D 2004 %Z name of event: Untitled Event %Z date of event: - %Z place of event: %P 11 p. %B ECG Technical Report
New bounds for the Descartes method
W. Krandick and K. Mehlhorn
Technical Report, 2004
@techreport{Krandick_DU-CS-04-04, TITLE = {New bounds for the Descartes method}, AUTHOR = {Krandick, Werner and Mehlhorn, Kurt}, LANGUAGE = {eng}, NUMBER = {DU-CS-04-04}, INSTITUTION = {Drexel
University}, ADDRESS = {Philadelphia, Pa.}, YEAR = {2004}, DATE = {2004}, TYPE = {Drexel University / Department of Computer Science:Technical Report}, EDITOR = {{Drexel University {\textless}
Philadelphia, Pa.{\textgreater} / Department of Computer Science}}, }
%0 Report %A Krandick, Werner %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T New bounds
for the Descartes method : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B99-2 %F EDOC: 237829 %Y Drexel University %C Philadelphia, Pa. %D 2004 %P 18 p. %B Drexel University / Department
of Computer Science:Technical Report
A simpler linear time 2/3-epsilon approximation
P. Sanders and S. Pettie
Technical Report, 2004a
@techreport{, TITLE = {A simpler linear time 2/3-epsilon approximation}, AUTHOR = {Sanders, Peter and Pettie, Seth}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-002}, INSTITUTION = {Max-Planck-Institut
f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-01}, TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report}, EDITOR = {{Max-Planck-Institut
f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
%0 Report %A Sanders, Peter %A Pettie, Seth %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A simpler
linear time 2/3-epsilon approximation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2909-1 %F EDOC: 237880 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 7 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
A simpler linear time 2/3 - epsilon approximation for maximum weight matching
P. Sanders and S. Pettie
Technical Report, 2004b
We present two $\twothirds - \epsilon$ approximation algorithms for the maximum weight matching problem that run in time $O(m\log\frac{1}{\epsilon})$. We give a simple and practical randomized
algorithm and a somewhat more complicated deterministic algorithm. Both algorithms are exponentially faster in terms of $\epsilon$ than a recent algorithm by Drake and Hougardy. We also show that our
algorithms can be generalized to find a $1-\epsilon$ approximation to the maximum weight matching, for any $\epsilon>0$.
@techreport{, TITLE = {A simpler linear time 2/3 -- epsilon approximation for maximum weight matching}, AUTHOR = {Sanders, Peter and Pettie, Seth}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-002}, NUMBER = {MPI-I-2004-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004},
DATE = {2004}, ABSTRACT = {We present two $\twothirds -- \epsilon$ approximation algorithms for the maximum weight matching problem that run in time $O(m\log\frac{1}{\epsilon})$. We give a simple and
practical randomized algorithm and a somewhat more complicated deterministic algorithm. Both algorithms are exponentially faster in terms of $\epsilon$ than a recent algorithm by Drake and Hougardy.
We also show that our algorithms can be generalized to find a $1-\epsilon$ approximation to the maximum weight matching, for any $\epsilon>0$.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Sanders, Peter %A Pettie, Seth %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A simpler
linear time 2/3 - epsilon approximation for maximum weight matching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6862-B %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2004-1-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 10 p. %X We present two $\twothirds - \epsilon$ approximation algorithms for the maximum weight matching problem
that run in time $O(m\log\frac{1}{\epsilon})$. We give a simple and practical randomized algorithm and a somewhat more complicated deterministic algorithm. Both algorithms are exponentially faster in
terms of $\epsilon$ than a recent algorithm by Drake and Hougardy. We also show that our algorithms can be generalized to find a $1-\epsilon$ approximation to the maximum weight matching, for any $
\epsilon>0$. %B Research Report / Max-Planck-Institut für Informatik
Common subexpression search in LEDA_reals: a study of the diamond-operator
S. Schmitt
Technical Report, 2004a
@techreport{Schmitt_ECG-TR-363109-01, TITLE = {Common subexpression search in {LEDA}{\textunderscore}reals : a study of the diamond-operator}, AUTHOR = {Schmitt, Susanne}, LANGUAGE = {eng}, NUMBER =
{ECG-TR-363109-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and
Surfaces}}, }
%0 Report %A Schmitt, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Common subexpression search in LEDA_reals : a study of the diamond-operator : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-000F-2B9C-B %F EDOC: 237830 %Y INRIA %C Sophia Antipolis %D 2004 %Z name of event: Untitled Event %Z date of event: - %Z place of event: %P 5 p. %B ECG Technical Report
Improved separation bounds for the diamond operator
S. Schmitt
Technical Report, 2004b
@techreport{Schmitt_ECG-TR-363108-01, TITLE = {Improved separation bounds for the diamond operator}, AUTHOR = {Schmitt, Susanne}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363108-01}, INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, }
%0 Report %A Schmitt, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Improved separation bounds for the diamond operator : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-000F-2B9F-5 %F EDOC: 237831 %Y INRIA %C Sophia Antipolis %D 2004 %Z name of event: Untitled Event %Z date of event: - %Z place of event: %P 13 p. %B ECG Technical Report
A comparison of polynomial evaluation schemes
S. Schmitt and L. Fousse
Technical Report, 2004
@techreport{, TITLE = {A comparison of polynomial evaluation schemes}, AUTHOR = {Schmitt, Susanne and Fousse, Laurent}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-005}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-06}, TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report}, EDITOR =
{Becker and {Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
%0 Report %A Schmitt, Susanne %A Fousse, Laurent %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A
comparison of polynomial evaluation schemes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-28EC-B %F EDOC: 237875 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P
16 p. %B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Goal-oriented methods and meta methods for document classification and their parameter tuning
S. Siersdorfer, S. Sizov and G. Weikum
Technical Report, 2004
@techreport{, TITLE = {Goal-oriented methods and meta methods for document classification and their parameter tuning}, AUTHOR = {Siersdorfer, Stefan and Sizov, Sergej and Weikum, Gerhard}, LANGUAGE =
{eng}, NUMBER = {MPI-I-2004-5-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-05}, TYPE = {Research Report}, }
%0 Report %A Siersdorfer, Stefan %A Sizov, Sergej %A Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for
Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Goal-oriented methods and meta methods for document classification and their parameter
tuning : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-28F3-A %F EDOC: 237842 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 36 p. %B Research Report
On scheduling with bounded migration
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004a
@techreport{, TITLE = {On scheduling with bounded migration}, AUTHOR = {Sivadasan, Naveen and Sanders, Peter and Skutella, Martin}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-004}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-05}, TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report}, EDITOR =
{{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
%0 Report %A Sivadasan, Naveen %A Sanders, Peter %A Skutella, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On scheduling with bounded migration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-28F9-D %F EDOC: 237877 %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 22 p. %B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Online scheduling with bounded migration
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004b
@techreport{, TITLE = {Online scheduling with bounded migration}, AUTHOR = {Sivadasan, Naveen and Sanders, Peter and Skutella, Martin}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2004-1-004}, NUMBER = {MPI-I-2004-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, TYPE = {Research
Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Sivadasan, Naveen %A Sanders, Peter %A Skutella, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Online scheduling with bounded migration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-685F-4 %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-004 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 21 p. %B Research Report / Max-Planck-Institut für Informatik
r-Adaptive parameterization of surfaces
R. Zayer, C. Rössl and H.-P. Seidel
Technical Report, 2004
@techreport{, TITLE = {r-Adaptive parameterization of surfaces}, AUTHOR = {Zayer, Rhaleb and R{\"o}ssl, Christian and Seidel, Hans-Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-4-004}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-06}, TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report}, EDITOR =
{{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
%0 Report %A Zayer, Rhaleb %A Rössl, Christian %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society %T r-Adaptive parameterization of surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-28E9-2 %F EDOC: 237863 %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 10 p. %B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Improving linear programming approaches for the Steiner tree problem
E. Althaus, T. Polzin and S. Daneshmand
Technical Report, 2003
We present two theoretically interesting and empirically successful techniques for improving the linear programming approaches, namely graph transformation and local cuts, in the context of the
Steiner problem. We show the impact of these techniques on the solution of the largest benchmark instances ever solved.
@techreport{MPI-I-2003-1-004, TITLE = {Improving linear programming approaches for the Steiner tree problem}, AUTHOR = {Althaus, Ernst and Polzin, Tobias and Daneshmand, Siavash}, LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We present two theoretically interesting
and empirically successful techniques for improving the linear programming approaches, namely graph transformation and local cuts, in the context of the Steiner problem. We show the impact of these
techniques on the solution of the largest benchmark instances ever solved.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Althaus, Ernst %A Polzin, Tobias %A Daneshmand, Siavash %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Improving linear programming approaches for the Steiner tree problem : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-6BB9-F %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 19 p. %X We present two theoretically interesting and empirically successful techniques for
improving the linear programming approaches, namely graph transformation and local cuts, in the context of the Steiner problem. We show the impact of these techniques on the solution of the largest
benchmark instances ever solved. %B Research Report / Max-Planck-Institut für Informatik
Random knapsack in expected polynomial time
R. Beier and B. Vöcking
Technical Report, 2003
In this paper, we present the first average-case analysis proving an expected polynomial running time for an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input
distributions, that the number of {\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings) to this problem is polynomially bounded in the number of available items. An algorithm by
Nemhauser and Ullmann can enumerate these solutions very efficiently so that a polynomial upper bound on the number of dominating solutions implies an algorithm with expected polynomial running time.
The random input model underlying our analysis is very general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our
analysis covers general probability distributions with finite mean, and, in its most general form, can even handle different probability distributions for the profits of different items. This feature
enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to
solve than weakly correlated ones.
@techreport{, TITLE = {Random knapsack in expected polynomial time}, AUTHOR = {Beier, Ren{\'e} and V{\"o}cking, Berthold}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2003-1-003}, NUMBER = {MPI-I-2003-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {In this paper, we
present the first average-case analysis proving an expected polynomial running time for an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that
the number of {\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings) to this problem is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can
enumerate these solutions very efficiently so that a polynomial upper bound on the number of dominating solutions implies an algorithm with expected polynomial running time. The random input model
underlying our analysis is very general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general
probability distributions with finite mean, and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the
effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly
correlated ones.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Beier, René %A Vöcking, Berthold %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T
Random knapsack in expected polynomial time : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6BBC-9 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-003 %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 22 p. %X In this paper, we present the first average-case analysis proving an expected polynomial running time for an exact
algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of {\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings) to this
problem is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently so that a polynomial upper bound on the number
of dominating solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is very general and not restricted to a particular input
distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean, and, in its most general form, can even
handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and
explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones. %B Research Report / Max-Planck-Institut für Informatik
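The Nemhauser-Ullmann algorithm the preceding abstract relies on is simple to state: add the items one at a time and keep only the Pareto-optimal (weight, profit) pairs, whose count is exactly the number of dominating solutions that the report's analysis bounds. The sketch below is a plain textbook rendering of that enumeration, not code from the report, and the example items are invented.

```python
# Textbook sketch of the Nemhauser-Ullmann enumeration of dominating (Pareto-optimal)
# knapsack fillings referred to in the abstract.  The example items are invented.
def nemhauser_ullmann(items):
    """items: list of (weight, profit) pairs; returns Pareto-optimal (weight, profit) sums."""
    pareto = [(0, 0)]                                  # the empty filling
    for w, p in items:
        candidates = pareto + [(wt + w, pr + p) for wt, pr in pareto]
        candidates.sort(key=lambda t: (t[0], -t[1]))   # by weight, best profit first on ties
        pruned, best = [], float("-inf")
        for wt, pr in candidates:                      # keep only strictly improving profits
            if pr > best:
                pruned.append((wt, pr))
                best = pr
        pareto = pruned
    return pareto

print(nemhauser_ullmann([(2, 3), (3, 4), (4, 5)]))
# [(0, 0), (2, 3), (3, 4), (4, 5), (5, 7), (6, 8), (7, 9), (9, 12)]
```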
A custom designed density estimation method for light transport
P. Bekaert, P. Slusallek, R. Cools, V. Havran and H.-P. Seidel
Technical Report, 2003
We present a new Monte Carlo method for solving the global illumination problem in environments with general geometry descriptions and light emission and scattering properties. Current Monte Carlo
global illumination algorithms are based on generic density estimation techniques that do not take into account any knowledge about the nature of the data points --- light and potential particle hit
points --- from which a global illumination solution is to be reconstructed. We propose a novel estimator, especially designed for solving linear integral equations such as the rendering equation.
The resulting single-pass global illumination algorithm promises to combine the flexibility and robustness of bi-directional path tracing with the efficiency of algorithms such as photon mapping.
@techreport{BekaertSlusallekCoolsHavranSeidel, TITLE = {A custom designed density estimation method for light transport}, AUTHOR = {Bekaert, Philippe and Slusallek, Philipp and Cools, Ronald and
Havran, Vlastimil and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-004}, NUMBER = {MPI-I-2003-4-004}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We present a new Monte Carlo method for solving the global illumination problem in
environments with general geometry descriptions and light emission and scattering properties. Current Monte Carlo global illumination algorithms are based on generic density estimation techniques
that do not take into account any knowledge about the nature of the data points --- light and potential particle hit points --- from which a global illumination solution is to be reconstructed. We
propose a novel estimator, especially designed for solving linear integral equations such as the rendering equation. The resulting single-pass global illumination algorithm promises to combine the
flexibility and robustness of bi-directional path tracing with the efficiency of algorithms such as photon mapping.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Bekaert, Philippe %A Slusallek, Philipp %A Cools, Ronald %A Havran, Vlastimil %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Cluster of Excellence
Multimodal Computing and Interaction External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A custom designed
density estimation method for light transport : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6922-2 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-004 %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 28 p. %X We present a new Monte Carlo method for solving the global illumination problem in environments with general geometry
descriptions and light emission and scattering properties. Current Monte Carlo global illumination algorithms are based on generic density estimation techniques that do not take into account any
knowledge about the nature of the data points --- light and potential particle hit points --- from which a global illumination solution is to be reconstructed. We propose a novel estimator,
especially designed for solving linear integral equations such as the rendering equation. The resulting single-pass global illumination algorithm promises to combine the flexibility and robustness of
bi-directional path tracing with the efficiency of algorithms such as photon mapping. %B Research Report / Max-Planck-Institut für Informatik
Girth and treewidth
S. Chandran Leela and C. R. Subramanian
Technical Report, 2003
@techreport{, TITLE = {Girth and treewidth}, AUTHOR = {Chandran Leela, Sunil and Subramanian, C. R.}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2003-NWG2-001}, NUMBER = {MPI-I-2003-NWG2-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, TYPE = {Research Report /
Max-Planck-Institut für Informatik}, }
%0 Report %A Chandran Leela, Sunil %A Subramanian, C. R. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T
Girth and treewidth : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6868-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-NWG2-001 %Y Max-Planck-Institut für
Informatik %C Saarbrücken %D 2003 %P 11 p. %B Research Report / Max-Planck-Institut für Informatik
On the Bollobás–Eldridge conjecture for bipartite graphs
B. Csaba
Technical Report, 2003
Let $G$ be a simple graph on $n$ vertices. A conjecture of Bollob\'as and Eldridge~\cite{be78} asserts that if $\delta(G) \ge {kn-1 \over k+1}$ then $G$ contains any $n$-vertex graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we prove that if $H$ is bipartite, $3 \le \Delta(H)$ is bounded and $n$ is sufficiently large, then there exists $\beta > 0$ such that if $\delta(G) \ge {\Delta \over {\Delta+1}}(1-\beta)n$, then $H \subset G$.
@techreport{Csaba2003, TITLE = {On the Bollob{\'a}s -- Eldridge conjecture for bipartite graphs}, AUTHOR = {Csaba, Bela}, LANGUAGE = {eng}, NUMBER = {MPI-I-2003-1-009}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Let $G$ be a simple graph on $n$ vertices. A conjecture of Bollob\'as and Eldridge~\
cite{be78} asserts that if $\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we prove that if $H$ is bipartite, $3 \le \
Delta(H)$ is bounded and $n$ is sufficiently large , then there exists $\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$.}, TYPE = {Research Report /
Max-Planck-Institut für Informatik}, }
%0 Report %A Csaba, Bela %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Bollob\'as -- Eldridge conjecture for bipartite graphs : %G eng %U http://hdl.handle.net/11858
/00-001M-0000-0014-6B3A-F %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 29 p. %X Let $G$ be a simple graph on $n$ vertices. A conjecture of Bollob\'as and Eldridge~\cite
{be78} asserts that if $\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we prove that if $H$ is bipartite, $3 \le \
Delta(H)$ is bounded and $n$ is sufficiently large , then there exists $\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$. %B Research Report /
Max-Planck-Institut für Informatik
On the probability of rendezvous in graphs
M. Dietzfelbinger and H. Tamaki
Technical Report, 2003
In a simple graph $G$ without isolated nodes the following random experiment is carried out: each node chooses one of its neighbors uniformly at random. We say a rendezvous occurs if there are
adjacent nodes $u$ and $v$ such that $u$ chooses $v$ and $v$ chooses $u$; the probability that this happens is denoted by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that $s(G)
\ge s(K_n)$ for all $n$-node graphs $G$, where $K_n$ is the complete graph on $n$ nodes. We show that this is the case. Moreover, we show that evaluating $s(G)$ for a given graph $G$ is a #P-complete problem, even if only $d$-regular graphs are considered, for any $d \ge 5$.
@techreport{MPI-I-94-224, TITLE = {On the probability of rendezvous in graphs}, AUTHOR = {Dietzfelbinger, Martin and Tamaki, Hisao}, LANGUAGE = {eng}, NUMBER = {MPI-I-2003-1-006}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {In a simple graph $G$ without isolated nodes the following random experiment is
carried out: each node chooses one of its neighbors uniformly at random. We say a rendezvous occurs if there are adjacent nodes $u$ and $v$ such that $u$ chooses $v$ and $v$ chooses $u$; the
probability that this happens is denoted by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that $s(G)\ge s(K_n)$ for all $n$-node graphs $G$, where $K_n$ is the complete graph on
$n$ nodes. We show that this is the case. Moreover, we show that evaluating $s(G)$ for a given graph $G$ is a \numberP-complete problem, even if only $d$-regular graphs are considered, for any $d\
ge5$.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Dietzfelbinger, Martin %A Tamaki, Hisao %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the
probability of rendezvous in graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B83-7 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 30 p. %X In a simple
graph $G$ without isolated nodes the following random experiment is carried out: each node chooses one of its neighbors uniformly at random. We say a rendezvous occurs if there are adjacent nodes $u$
and $v$ such that $u$ chooses $v$ and $v$ chooses $u$; the probability that this happens is denoted by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that $s(G)\ge s(K_n)$ for all
$n$-node graphs $G$, where $K_n$ is the complete graph on $n$ nodes. We show that this is the case. Moreover, we show that evaluating $s(G)$ for a given graph $G$ is a \numberP-complete problem, even
if only $d$-regular graphs are considered, for any $d\ge5$. %B Research Report / Max-Planck-Institut für Informatik
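As an informal illustration of the random experiment described in this entry (and not anything taken from the report), the following Python sketch estimates s(G) by simulation; the graphs, trial count and function name are invented for the example.

import random

def estimate_rendezvous_probability(adj, trials=100_000):
    """Monte Carlo estimate of s(G); adj maps each node to a non-empty list of neighbours."""
    hits = 0
    for _ in range(trials):
        choice = {u: random.choice(nbrs) for u, nbrs in adj.items()}
        # a rendezvous occurs if some node u picks v while v picks u
        if any(choice[choice[u]] == u for u in adj):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    n = 6
    complete = {u: [v for v in range(n) if v != u] for u in range(n)}        # K_6
    path = {u: [v for v in (u - 1, u + 1) if 0 <= v < n] for u in range(n)}  # P_6
    print("s(K_6) approx", estimate_rendezvous_probability(complete))
    print("s(P_6) approx", estimate_rendezvous_probability(path))

On such toy inputs the estimate for the path comes out well above the one for the complete graph, consistent with the report's result that $s(G) \ge s(K_n)$.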
Almost random graphs with simple hash functions
M. Dietzfelbinger and P. Woelfel
Technical Report, 2003
We describe a simple randomized construction for generating pairs of hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1} and W=[m] so that for every key set S\subseteq U with n=|S|
<= m/(1+epsilon) the (random) bipartite (multi)graph with node set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure that is essentially random. The construction combines d-wise
independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h_1 and h_2 at O(n^zeta), for zeta<1
fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes. The main new technique is the
combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs. The construction may be applied
to improve on Pagh and Rodler's "cuckoo hashing" (2001), to obtain a simpler and faster alternative to a recent construction of Östlin and Pagh (2002/03) for simulating uniform hashing on a key
set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing without using polynomials.
@techreport{, TITLE = {Almost random graphs with simple hash functions}, AUTHOR = {Dietzfelbinger, Martin and Woelfel, Philipp}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2003-1-005}, NUMBER = {MPI-I-2003-1-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We
describe a simple randomized construction for generating pairs of hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1} and W=[m] so that for every key set S\subseteq U with n=|S| <=
m/(1+epsilon) the (random) bipartite (multi)graph with node set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure that is essentially random. The construction combines d-wise
independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h_1 and h_2 at O(n^zeta), for zeta<1
fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes. The main new technique is the
combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs. The construction may be applied
to improve on Pagh and Rodler's ``cuckoo hashing'' (2001), to obtain a simpler and faster alternative to a recent construction of {\"O}stlin and Pagh (2002/03) for simulating uniform hashing on a key
set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing without using polynomials.}, TYPE =
{Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Dietzfelbinger, Martin %A Woelfel, Philipp %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Almost random graphs with simple hash functions
: %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6BB3-C %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-005 %Y Max-Planck-Institut für Informatik %C Saarbrü
cken %D 2003 %P 23 p. %X We describe a simple randomized construction for generating pairs of hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1} and W=[m] so that for every key
set S\subseteq U with n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure that is essentially random. The
construction combines d-wise independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h_1 and
h_2 at O(n^zeta), for zeta<1 fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes.
The main new technique is the combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs.
The construction may be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001), to obtain a simpler and faster alternative to a recent construction of Östlin and Pagh (2002/03) for
simulating uniform hashing on a key set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing
without using polynomials. %B Research Report / Max-Planck-Institut für Informatik
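Purely to illustrate the object under analysis (and not the construction of the report), the sketch below builds the bipartite multigraph with one edge (h_1(x), h_2(x)) per key; the salted Python hashes merely stand in for the d-wise independent classes with random offsets, and all names are invented.

import random
from collections import defaultdict

def make_hash(m, salt):
    # illustrative stand-in; the report combines d-wise independent classes with random offsets
    return lambda x: hash((salt, x)) % m

def build_key_graph(keys, h1, h2):
    """One edge (h1(x), h2(x)) per key x; the left and right copies of [m] are kept disjoint."""
    edges = [(("L", h1(x)), ("R", h2(x))) for x in keys]
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return edges, adj

if __name__ == "__main__":
    m = 16
    keys = [f"key{i}" for i in range(12)]      # n <= m/(1+epsilon)
    h1, h2 = make_hash(m, random.random()), make_hash(m, random.random())
    edges, adj = build_key_graph(keys, h1, h2)
    print(len(edges), "edges, maximum degree", max(len(v) for v in adj.values()))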
Specification of the Traits Classes for CGAL Arrangements of Curves
E. Fogel, D. Halperin, R. Wein, M. Teillaud, E. Berberich, A. Eigenwillig, S. Hert and L. Kettner
Technical Report, 2003
@techreport{ecg:fhw-stcca-03, TITLE = {Specification of the Traits Classes for {CGAL} Arrangements of Curves}, AUTHOR = {Fogel, Efi and Halperin, Dan and Wein, Ron and Teillaud, Monique and
Berberich, Eric and Eigenwillig, Arno and Hert, Susan and Kettner, Lutz}, LANGUAGE = {eng}, NUMBER = {ECG-TR-241200-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia-Antipolis}, YEAR = {2003}, DATE =
{2003}, TYPE = {Technical Report}, }
%0 Report %A Fogel, Efi %A Halperin, Dan %A Wein, Ron %A Teillaud, Monique %A Berberich, Eric %A Eigenwillig, Arno %A Hert, Susan %A Kettner, Lutz %+ External Organizations External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for
Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Specification of the Traits
Classes for CGAL Arrangements of Curves : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B4C6-5 %Y INRIA %C Sophia-Antipolis %D 2003 %B Technical Report
The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition
T. Hangelbroek, G. Nürnberger, C. Rössl, H.-P. Seidel and F. Zeilfelder
Technical Report, 2003
We consider the linear space of piecewise polynomials in three variables which are globally smooth, i.e., trivariate $C^1$ splines. The splines are defined on a uniform tetrahedral partition $\Delta$, which is a natural generalization of the four-directional mesh. By using Bernstein-Bézier techniques, we establish formulae for the dimension of the $C^1$ splines of arbitrary degree.
@techreport{HangelbroekNurnbergerRoesslSeidelZeilfelder2003, TITLE = {The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition}, AUTHOR = {Hangelbroek, Thomas
and N{\"u}rnberger, G{\"u}nther and R{\"o}ssl, Christian and Seidel, Hans-Peter and Zeilfelder, Frank}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2003-4-005}, NUMBER = {MPI-I-2003-4-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We consider the linear space
of piecewise polynomials in three variables which are globally smooth, i.e., trivariate $C^1$ splines. The splines are defined on a uniform tetrahedral partition $\Delta$, which is a natural
generalization of the four-directional mesh. By using Bernstein-B{\'e}zier techniques, we establish formulae for the dimension of the $C^1$ splines of arbitrary degree.}, TYPE = {Research Report
/ Max-Planck-Institut für Informatik}, }
%0 Report %A Hangelbroek, Thomas %A Nürnberger, Günther %A Rössl, Christian %A Seidel, Hans-Peter %A Zeilfelder, Frank %+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society
%T The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6887-A %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2003-4-005 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 39 p. %X We consider the linear space of piecewise polynomials in three variables which are globally
smooth, i.e., trivariate $C^1$ splines. The splines are defined on a uniform tetrahedral partition $\Delta$, which is a natural generalization of the four-directional mesh. By using Bernstein-Bézier techniques, we establish formulae for the dimension of the $C^1$ splines of arbitrary degree. %B Research Report / Max-Planck-Institut für Informatik
Fast bound consistency for the global cardinality constraint
I. Katriel and S. Thiel
Technical Report, 2003
We show an algorithm for bound consistency of {\em global cardinality constraints}, which runs in time $O(n+n')$ plus the time required to sort the assignment variables by range endpoints, where $n$
is the number of assignment variables and $n'$ is the number of values in the union of their ranges. We thus offer a fast alternative to R\'egin's arc consistency algorithm~\cite{Regin} which runs in
time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm also achieves bound consistency for the number of occurrences of each value, which has not been done before.
@techreport{, TITLE = {Fast bound consistency for the global cardinality constraint}, AUTHOR = {Katriel, Irit and Thiel, Sven}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2003-1-013}, NUMBER = {MPI-I-2003-1-013}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We show
an algorithm for bound consistency of {\em global cardinality constraints}, which runs in time $O(n+n')$ plus the time required to sort the assignment variables by range endpoints, where $n$ is the
number of assignment variables and $n'$ is the number of values in the union of their ranges. We thus offer a fast alternative to R\'egin's arc consistency algorithm~\cite{Regin} which runs in time
$O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm also achieves bound consistency for the number of occurrences of each value, which has not been done before.}, TYPE = {Research Report /
Max-Planck-Institut für Informatik}, }
%0 Report %A Katriel, Irit %A Thiel, Sven %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fast bound
consistency for the global cardinality constraint : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B1F-D %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-013 %Y
Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 30 p. %X We show an algorithm for bound consistency of {\em global cardinality constraints}, which runs in time $O(n+n')$ plus
the time required to sort the assignment variables by range endpoints, where $n$ is the number of assignment variables and $n'$ is the number of values in the union of their ranges. We thus offer a
fast alternative to R\'egin's arc consistency algorithm~\cite{Regin} which runs in time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm also achieves bound consistency for the number of
occurrences of each value, which has not been done before. %B Research Report / Max-Planck-Institut für Informatik
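To make the propagation goal concrete, here is a brute-force reference for bound consistency of a global cardinality constraint on tiny instances; it enumerates every assignment, so it is exponential and emphatically not the O(n+n') algorithm of the report, and the ranges and occurrence bounds in the example are invented.

from itertools import product
from collections import Counter

def gcc_bound_consistency(domains, occ_bounds):
    """domains: list of (lo, hi) ranges; occ_bounds: {value: (min_occ, max_occ)}."""
    solutions = []
    for assignment in product(*[range(lo, hi + 1) for lo, hi in domains]):
        counts = Counter(assignment)
        if all(lo <= counts.get(v, 0) <= hi for v, (lo, hi) in occ_bounds.items()):
            solutions.append(assignment)
    if not solutions:
        return None  # the constraint is infeasible
    # bound consistency: shrink each range to the extreme values supported by some solution
    return [(min(s[i] for s in solutions), max(s[i] for s in solutions))
            for i in range(len(domains))]

if __name__ == "__main__":
    domains = [(1, 3), (2, 2), (2, 3)]
    occ_bounds = {2: (0, 1)}                  # the value 2 may be used at most once
    print(gcc_bound_consistency(domains, occ_bounds))   # [(1, 3), (2, 2), (3, 3)]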
Sum-Multicoloring on paths
A. Kovács
Technical Report, 2003
The question of whether the preemptive Sum Multicoloring (pSMC) problem is hard on paths was raised by Halldorsson et al. ["Multi-coloring trees", Information and Computation, 180(2):113-129, 2002]. The
pSMC problem is a scheduling problem where the pairwise conflicting jobs are represented by a conflict graph, and the time lengths of jobs by integer weights on the nodes. The goal is to schedule the
jobs so that the sum of their finishing times is minimized. In the paper we give an O(n^3p) time algorithm for the pSMC problem on paths, where n is the number of nodes and p is the largest time
length. The result easily carries over to cycles.
@techreport{, TITLE = {Sum-Multicoloring on paths}, AUTHOR = {Kov{\'a}cs, Annamaria}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-015}, NUMBER =
{MPI-I-2003-1-015}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {The question, whether the preemptive Sum
Multicoloring (pSMC) problem is hard on paths was raised by Halldorsson et al. ["Multi-coloring trees", Information and Computation, 180(2):113-129,2002]. The pSMC problem is a scheduling problem
where the pairwise conflicting jobs are represented by a conflict graph, and the time lengths of jobs by integer weights on the nodes. The goal is to schedule the jobs so that the sum of their
finishing times is minimized. In the paper we give an O(n^3p) time algorithm for the pSMC problem on paths, where n is the number of nodes and p is the largest time length. The result easily carries
over to cycles.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Kovács, Annamaria %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Sum-Multicoloring on paths : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-6B18-C %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-015 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 20 p. %X The question,
whether the preemptive Sum Multicoloring (pSMC) problem is hard on paths was raised by Halldorsson et al. ["Multi-coloring trees", Information and Computation, 180(2):113-129,2002]. The pSMC problem
is a scheduling problem where the pairwise conflicting jobs are represented by a conflict graph, and the time lengths of jobs by integer weights on the nodes. The goal is to schedule the jobs so that
the sum of their finishing times is minimized. In the paper we give an O(n^3p) time algorithm for the pSMC problem on paths, where n is the number of nodes and p is the largest time length. The
result easily carries over to cycles. %B Research Report / Max-Planck-Institut für Informatik
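As a toy illustration of the problem statement only (not the O(n^3 p) algorithm of the report), the exhaustive search below solves preemptive sum multicoloring on a very short path; the demands and palette size are invented for the example.

from itertools import combinations, product

def psmc_on_path_bruteforce(lengths):
    """Each job v needs lengths[v] colours, adjacent jobs share no colour, finish time = largest colour."""
    n = len(lengths)
    palette = range(1, sum(lengths) + 1)      # trivially enough colours
    choices = [list(combinations(palette, l)) for l in lengths]
    best = None
    for colouring in product(*choices):
        if any(set(colouring[i]) & set(colouring[i + 1]) for i in range(n - 1)):
            continue                          # adjacent jobs share a colour
        cost = sum(max(c) for c in colouring)
        if best is None or cost < best[0]:
            best = (cost, colouring)
    return best

if __name__ == "__main__":
    # e.g. the colouring ((1,), (2, 3), (1,)) is optimal with cost 1 + 3 + 1 = 5
    print(psmc_on_path_bruteforce([1, 2, 1]))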
Selfish traffic allocation for server farms
P. Krysta, A. Czumaj and B. Vöcking
Technical Report, 2003
We study the price of selfish routing in non-cooperative networks like the Internet. In particular, we investigate the price of selfish routing using the coordination ratio and other (e.g.,
bicriteria) measures in the recently introduced game theoretic network model of Koutsoupias and Papadimitriou. We generalize this model towards general, monotone families of cost functions and cost
functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows.
@techreport{, TITLE = {Selfish traffic allocation for server farms}, AUTHOR = {Krysta, Piotr and Czumaj, Artur and V{\"o}cking, Berthold}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/
internet/reports.nsf/NumberView/2003-1-011}, NUMBER = {MPI-I-2003-1-011}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT =
{We study the price of selfish routing in non-cooperative networks like the Internet. In particular, we investigate the price of selfish routing using the coordination ratio and other (e.g.,
bicriteria) measures in the recently introduced game theoretic network model of Koutsoupias and Papadimitriou. We generalize this model towards general, monotone families of cost functions and cost
functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Krysta, Piotr %A Czumaj, Artur %A Vöcking, Berthold %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Selfish traffic allocation for server farms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B33-E %U http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-011 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 43 p. %X We study the price of selfish routing in
non-cooperative networks like the Internet. In particular, we investigate the price of selfish routing using the coordination ratio and other (e.g., bicriteria) measures in the recently introduced
game theoretic network model of Koutsoupias and Papadimitriou. We generalize this model towards general, monotone families of cost functions and cost functions from queueing theory. A summary of our
main results for general, monotone cost functions is as follows. %B Research Report / Max-Planck-Institut für Informatik
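For intuition about the coordination ratio, here is a brute-force computation on a tiny instance with identical links and linear latency, in the spirit of the Koutsoupias-Papadimitriou model; it is only a worked example, not anything from the report, and the instance is made up.

from itertools import product

def coordination_ratio(weights, m):
    """Worst pure-Nash makespan divided by the optimal makespan on m identical links."""
    def loads(assign):
        l = [0] * m
        for w, link in zip(weights, assign):
            l[link] += w
        return l
    def is_pure_nash(assign):
        l = loads(assign)
        return not any(l[other] + weights[j] < l[link]
                       for j, link in enumerate(assign)
                       for other in range(m) if other != link)
    assignments = list(product(range(m), repeat=len(weights)))
    opt = min(max(loads(a)) for a in assignments)
    worst_nash = max(max(loads(a)) for a in assignments if is_pure_nash(a))
    return worst_nash / opt

if __name__ == "__main__":
    # worst pure Nash puts the two large tasks together (makespan 4) while the optimum is 3
    print(coordination_ratio([2, 2, 1, 1], 2))   # 1.333...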
Scheduling and traffic allocation for tasks with bounded splittability
P. Krysta, P. Sanders and B. Vöcking
Technical Report, 2003
We investigate variants of the well studied problem of scheduling tasks on uniformly related machines to minimize the makespan. In the $k$-splittable scheduling problem each task can be broken into
at most $k \ge 2$ pieces each of which has to be assigned to a different machine. In the slightly more general SAC problem each task $j$ comes with its own splittability parameter $k_j$, where we
assume $k_j \ge 2$. These problems are known to be NP-hard and, hence, previous research mainly focuses on approximation algorithms. Our motivation to study these scheduling problems is traffic
allocation for server farms based on a variant of the Internet Domain Name Service (DNS) that uses a stochastic splitting of request streams. Optimal solutions for the $k$-splittable scheduling
problem yield optimal solutions for this traffic allocation problem. Approximation ratios, however, do not translate from one problem to the other because of non-linear latency functions. In fact, we
can prove that the traffic allocation problem with standard latency functions from Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior
of these functions. Because of the inapproximability, we turn our attention to fixed-parameter tractable algorithms. Our main result is a polynomial time algorithm computing an exact solution for the
$k$-splittable scheduling problem as well as the SAC problem for any fixed number of machines. The running time of our algorithm increases exponentially with the number of machines but is only linear
in the number of tasks. This result is the first proof that bounded splittability reduces the complexity of scheduling, as unsplittable scheduling is known to be NP-hard already for two
machines. Furthermore, since our algorithm solves the scheduling problem exactly, it also solves the traffic allocation problem that motivated our study.
@techreport{MPI-I-2003-1-002, TITLE = {Scheduling and traffic allocation for tasks with bounded splittability}, AUTHOR = {Krysta, Piotr and Sanders, Peter and V{\"o}cking, Berthold}, LANGUAGE =
{eng}, NUMBER = {MPI-I-2003-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We investigate variants of the well
studied problem of scheduling tasks on uniformly related machines to minimize the makespan. In the $k$-splittable scheduling problem each task can be broken into at most $k \ge 2$ pieces each of
which has to be assigned to a different machine. In the slightly more general SAC problem each task $j$ comes with its own splittability parameter $k_j$, where we assume $k_j \ge 2$. These problems
are known to be $\npc$-hard and, hence, previous research mainly focuses on approximation algorithms. Our motivation to study these scheduling problems is traffic allocation for server farms based on
a variant of the Internet Domain Name Service (DNS) that uses a stochastic splitting of request streams. Optimal solutions for the $k$-splittable scheduling problem yield optimal solutions for this
traffic allocation problem. Approximation ratios, however, do not translate from one problem to the other because of non-linear latency functions. In fact, we can prove that the traffic allocation
problem with standard latency functions from Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior of these functions. Because of the
inapproximability, we turn our attention to fixed-parameter tractable algorithms. Our main result is a polynomial time algorithm computing an exact solution for the $k$-splittable scheduling problem
as well as the SAC problem for any fixed number of machines. The running time of our algorithm increases exponentially with the number of machines but is only linear in the number of tasks. This
result is the first proof that bounded splittability reduces the complexity of scheduling as the unsplittable scheduling is known to be $\npc$-hard already for two machines. Furthermore, since our
algorithm solves the scheduling problem exactly, it also solves the traffic allocation problem that motivated our study.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Krysta, Piotr %A Sanders, Peter %A Vöcking, Berthold %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck
Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Scheduling and traffic allocation for tasks with bounded splittability : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-6BD1-8 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 15 p. %X We investigate variants of the well studied problem of scheduling tasks on uniformly
related machines to minimize the makespan. In the $k$-splittable scheduling problem each task can be broken into at most $k \ge 2$ pieces each of which has to be assigned to a different machine. In
the slightly more general SAC problem each task $j$ comes with its own splittability parameter $k_j$, where we assume $k_j \ge 2$. These problems are known to be $\npc$-hard and, hence, previous
research mainly focuses on approximation algorithms. Our motivation to study these scheduling problems is traffic allocation for server farms based on a variant of the Internet Domain Name Service
(DNS) that uses a stochastic splitting of request streams. Optimal solutions for the $k$-splittable scheduling problem yield optimal solutions for this traffic allocation problem. Approximation
ratios, however, do not translate from one problem to the other because of non-linear latency functions. In fact, we can prove that the traffic allocation problem with standard latency functions from
Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior of these functions. Because of the inapproximability, we turn our attention to
fixed-parameter tractable algorithms. Our main result is a polynomial time algorithm computing an exact solution for the $k$-splittable scheduling problem as well as the SAC problem for any fixed
number of machines. The running time of our algorithm increases exponentially with the number of machines but is only linear in the number of tasks. This result is the first proof that bounded
splittability reduces the complexity of scheduling as the unsplittable scheduling is known to be $\npc$-hard already for two machines. Furthermore, since our algorithm solves the scheduling problem
exactly, it also solves the traffic allocation problem that motivated our study. %B Research Report / Max-Planck-Institut für Informatik
Visualization of volume data with quadratic super splines
C. Rössl, F. Zeilfelder, G. Nürnberger and H.-P. Seidel
Technical Report, 2003
We develop a new approach to reconstruct non-discrete models from gridded volume samples. As a model, we use quadratic, trivariate super splines on a uniform tetrahedral partition $\Delta$. The
approximating splines are determined in a natural and completely symmetric way by averaging local data samples such that appropriate smoothness conditions are automatically satisfied. On each
tetrahedron of $\Delta$, the spline is a polynomial of total degree two, which provides several advantages including the efficient computation, evaluation and visualization of the model. We apply Bernstein-Bézier techniques well known in Computer Aided Geometric Design to compute and evaluate the trivariate spline and its gradient. With this approach the volume data can be visualized efficiently, e.g. with isosurface ray-casting. Along an arbitrary ray the splines are univariate, piecewise quadratics and thus the exact intersection for a prescribed isovalue can be easily determined in an analytic and exact way. Our results confirm the efficiency of the method and demonstrate a high visual quality for rendered isosurfaces.
@techreport{RoesslZeilfelderNurnbergerSeidel2003, TITLE = {Visualization of volume data with quadratic super splines}, AUTHOR = {R{\"o}ssl, Christian and Zeilfelder, Frank and N{\"u}rnberger, G{\"u}
nther and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-4-006}, NUMBER = {MPI-I-2004-4-006}, INSTITUTION = {Max-Planck-Institut f{\
"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We develop a new approach to reconstruct non-discrete models from gridded volume samples. As a model, we use
quadratic, trivariate super splines on a uniform tetrahedral partition $\Delta$. The approximating splines are determined in a natural and completely symmetric way by averaging local data samples
such that appropriate smoothness conditions are automatically satisfied. On each tetrahedron of $\Delta$, the spline is a polynomial of total degree two which provides several advantages including the efficient computation, evaluation and visualization of the model. We apply Bernstein-B{\'e}zier techniques well known in Computer Aided Geometric Design to compute and evaluate the trivariate spline and its gradient. With this approach the volume data can be visualized efficiently, e.g. with isosurface ray-casting. Along an arbitrary ray the splines are univariate, piecewise quadratics and thus the exact intersection for a prescribed isovalue can be easily determined in an analytic and exact way. Our results confirm the efficiency of the method and demonstrate a high visual quality for
rendered isosurfaces.}, TYPE = {Research Report}, }
%0 Report %A Rössl, Christian %A Zeilfelder, Frank %A Nürnberger, Günther %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for
Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Visualization of volume data with quadratic super splines : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-0014-6AE8-D %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-4-006 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 15
p. %X We develop a new approach to reconstruct non-discrete models from gridded volume samples. As a model, we use quadratic, trivariate super splines on a uniform tetrahedral partition $\Delta$. The
approximating splines are determined in a natural and completely symmetric way by averaging local data samples such that appropriate smoothness conditions are automatically satisfied. On each
tetrahedron of $\Delta$, the spline is a polynomial of total degree two which provides several advantages including the efficient computation, evaluation and visualization of the model. We apply Bernstein-Bézier techniques well known in Computer Aided Geometric Design to compute and evaluate the trivariate spline and its gradient. With this approach the volume data can be visualized efficiently, e.g. with isosurface ray-casting. Along an arbitrary ray the splines are univariate, piecewise quadratics and thus the exact intersection for a prescribed isovalue can be easily determined in an analytic and exact way. Our results confirm the efficiency of the method and demonstrate a high visual quality for rendered isosurfaces. %B Research Report
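The ray-casting step described in the abstract reduces, along a ray, to intersecting a univariate quadratic with the isovalue; a minimal sketch of that root computation (coefficients, tolerance and names invented for the example) is:

import math

def first_isosurface_hit(a, b, c, iso, t0, t1):
    """Smallest t in [t0, t1] with a*t^2 + b*t + c = iso, or None if the ray misses."""
    A, B, C = a, b, c - iso
    if abs(A) < 1e-12:                        # degenerate: the segment is linear
        roots = [-C / B] if abs(B) > 1e-12 else []
    else:
        disc = B * B - 4 * A * C
        if disc < 0:
            return None
        sq = math.sqrt(disc)
        roots = [(-B - sq) / (2 * A), (-B + sq) / (2 * A)]
    hits = sorted(t for t in roots if t0 <= t <= t1)
    return hits[0] if hits else None

if __name__ == "__main__":
    # q(t) = t^2 - 2t + 2 crosses the isovalue 1.25 at t = 0.5 and t = 1.5
    print(first_isosurface_hit(1.0, -2.0, 2.0, 1.25, 0.0, 2.0))   # 0.5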
Asynchronous parallel disk sorting
P. Sanders and R. Dementiev
Technical Report, 2003
We develop an algorithm for parallel disk sorting, whose I/O cost approaches the lower bound and that guarantees almost perfect overlap between I/O and computation. Previous algorithms have either
suboptimal I/O volume or cannot guarantee that I/O and computations can always be overlapped. We give an efficient implementation that can (at least) compete with the best practical implementations
but gives additional performance guarantees. For the experiments we have configured a state of the art machine that can sustain full bandwidth I/O with eight disks and is very cost effective.
@techreport{MPI-I-2003-1-001, TITLE = {Asynchronous parallel disk sorting}, AUTHOR = {Sanders, Peter and Dementiev, Roman}, LANGUAGE = {eng}, NUMBER = {MPI-I-2003-1-001}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We develop an algorithm for parallel disk sorting, whose I/O cost approaches the lower
bound and that guarantees almost perfect overlap between I/O and computation. Previous algorithms have either suboptimal I/O volume or cannot guarantee that I/O and computations can always be
overlapped. We give an efficient implementation that can (at least) compete with the best practical implementations but gives additional performance guarantees. For the experiments we have configured
a state of the art machine that can sustain full bandwidth I/O with eight disks and is very cost effective.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Sanders, Peter %A Dementiev, Roman %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T
Asynchronous parallel disk sorting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6C80-5 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 22 p. %X We develop an
algorithm for parallel disk sorting, whose I/O cost approaches the lower bound and that guarantees almost perfect overlap between I/O and computation. Previous algorithms have either suboptimal I/O
volume or cannot guarantee that I/O and computations can always be overlapped. We give an efficient implementation that can (at least) compete with the best practical implementations but gives
additional performance guarantees. For the experiments we have configured a state of the art machine that can sustain full bandwidth I/O with eight disks and is very cost effective. %B Research
Report / Max-Planck-Institut für Informatik
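A minimal sketch of the overlap idea only, nothing like the report's algorithm or its I/O machinery: prefetch the next block on a background thread while the current block is being sorted; the in-memory blocks stand in for disk reads and all names are invented.

from concurrent.futures import ThreadPoolExecutor

def read_block(source, i):
    return list(source[i])                    # stands in for a (slow) disk read

def sorted_runs(blocks):
    """Yield each block sorted, overlapping the read of block i+1 with sorting block i."""
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(read_block, blocks, 0)
        for i in range(len(blocks)):
            current = pending.result()
            if i + 1 < len(blocks):
                pending = io.submit(read_block, blocks, i + 1)   # overlapped read
            yield sorted(current)                                # computation

if __name__ == "__main__":
    for run in sorted_runs([[5, 3, 9], [2, 8, 1], [7, 4, 6]]):
        print(run)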
Polynomial time algorithms for network information flow
P. Sanders
Technical Report, 2003
The famous max-flow min-cut theorem states that a source node $s$ can send information through a network (V,E) to a sink node t at a rate determined by the min-cut separating s and t. Recently it has
been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to reencode the information they receive. We give polynomial time
algorithms for solving this problem. We additionally underline the potential benefit of coding by showing that multicasting without coding sometimes only allows a rate that is a factor Omega(log |V|) smaller.
@techreport{, TITLE = {Polynomial time algorithms for network information flow}, AUTHOR = {Sanders, Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/
2003-1-008}, NUMBER = {MPI-I-2003-1-008}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {The famous max-flow min-cut
theorem states that a source node $s$ can send information through a network (V,E) to a sink node t at a rate determined by the min-cut separating s and t. Recently it has been shown that this rate
can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to reencode the information they receive. We give polynomial time algorithms for solving this
problem. We additionally underline the potential benefit of coding by showing that multicasting without coding sometimes only allows a rate that is a factor Omega(log |V|) smaller.}, TYPE = {Research
Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Sanders, Peter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Polynomial time algorithms for network information flow : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0014-6B4A-B %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-008 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 18 p. %X The famous
max-flow min-cut theorem states that a source node $s$ can send information through a network (V,E) to a sink node t at a rate determined by the min-cut separating s and t. Recently it has been shown
that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to reencode the information they receive. We give polynomial time algorithms for
solving this problem. We additionally underline the potential benefit of coding by showing that multicasting without coding sometimes only allows a rate that is a factor Omega(log |V|) smaller. %B
Research Report / Max-Planck-Institut für Informatik
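The rate in question is the minimum over all sinks of the s-t min-cut; the sketch below computes that bound with networkx on the classic butterfly network (it does not construct the network codes, which is what the report's polynomial time algorithms do), and the graph and names are chosen for the example.

import networkx as nx

def multicast_rate(edges, source, sinks):
    """Min over sinks of the max-flow value; edges are (u, v, capacity) of a directed network."""
    g = nx.DiGraph()
    for u, v, cap in edges:
        g.add_edge(u, v, capacity=cap)
    return min(nx.maximum_flow_value(g, source, t) for t in sinks)

if __name__ == "__main__":
    butterfly = [("s", "a", 1), ("s", "b", 1), ("a", "t1", 1), ("b", "t2", 1),
                 ("a", "c", 1), ("b", "c", 1), ("c", "d", 1),
                 ("d", "t1", 1), ("d", "t2", 1)]
    print(multicast_rate(butterfly, "s", ["t1", "t2"]))   # 2, achievable only with coding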
Cross-monotonic cost sharing methods for connected facility location games
G. Schäfer and S. Leonardi
Technical Report, 2003
We present cost sharing methods for connected facility location games that are cross-monotonic, competitive, and recover a constant fraction of the cost of the constructed solution. The novelty of
this paper is that we use randomized algorithms, and that we share the expected cost among the participating users. As a consequence, our cost sharing methods are simple, and achieve attractive
approximation ratios for various NP-hard problems. We also provide a primal-dual cost sharing method for the connected facility location game with opening costs.
@techreport{, TITLE = {Cross-monotonic cost sharing methods for connected facility location games}, AUTHOR = {Sch{\"a}fer, Guido and Leonardi, Stefano}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-017}, NUMBER = {MPI-I-2003-1-017}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003},
DATE = {2003}, ABSTRACT = {We present cost sharing methods for connected facility location games that are cross-monotonic, competitive, and recover a constant fraction of the cost of the constructed
solution. The novelty of this paper is that we use randomized algorithms, and that we share the expected cost among the participating users. As a consequence, our cost sharing methods are simple, and
achieve attractive approximation ratios for various NP-hard problems. We also provide a primal-dual cost sharing method for the connected facility location game with opening costs.}, TYPE = {Research
Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Schäfer, Guido %A Leonardi, Stefano %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T
Cross-monotonic cost sharing methods for connected facility location games : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B12-7 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2003-1-017 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 10 p. %X We present cost sharing methods for connected facility location games that are
cross-monotonic, competitive, and recover a constant fraction of the cost of the constructed solution. The novelty of this paper is that we use randomized algorithms, and that we share the expected
cost among the participating users. As a consequence, our cost sharing methods are simple, and achieve attractive approximation ratios for various NP-hard problems. We also provide a primal-dual cost
sharing method for the connected facility location game with opening costs. %B Research Report / Max-Planck-Institut für Informatik
Topology matters: smoothed competitive analysis of metrical task systems
G. Schäfer and N. Sivadasan
Technical Report, 2003
We consider online problems that can be modeled as \emph{metrical task systems}: An online algorithm resides in a graph $G$ of $n$ nodes and may move in this graph at a cost equal to the distance.
The algorithm has to service a sequence of \emph{tasks} that arrive online; each task specifies for each node a \emph{request cost} that is incurred if the algorithm services the task in this
particular node. The objective is to minimize the total request cost plus the total travel cost. Several important online problems can be modeled as metrical task systems. Borodin, Linial and Saks \
cite{BLS92} presented a deterministic \emph{work function algorithm} (WFA) for metrical task systems having a tight competitive ratio of $2n-1$. However, the competitive ratio often is an
over-pessimistic estimation of the true performance of an online algorithm. In this paper, we present a \emph{smoothed competitive analysis} of WFA. Given an adversarial task sequence, we smoothen
the request costs by means of a symmetric additive smoothing model and analyze the competitive ratio of WFA on the smoothed task sequence. Our analysis reveals that the smoothed competitive ratio of
WFA is much better than $O(n)$ and that it depends on several topological parameters of the underlying graph $G$, such as the minimum edge length $U_{\min}$, the maximum degree $D$, and the edge
diameter $diam$. Assuming that the ratio between the maximum and the minimum edge length of $G$ is bounded by a constant, the smoothed competitive ratio of WFA becomes $O(diam (U_{\min}/\sigma + \log
(D)))$ and $O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where $\sigma$ denotes the standard deviation of the smoothing distribution. For example, already for perturbations with $\sigma = \Theta
(U_{\min})$ the competitive ratio reduces to $O(\log n)$ on a clique and to $O(\sqrt{n})$ on a line. We also prove that for a large class of graphs these bounds are asymptotically tight. Furthermore,
we provide two lower bounds for any arbitrary graph. We obtain a better bound of $O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on the smoothed competitive ratio of WFA if each adversarial task
contains at most $\beta$ non-zero entries. Our analysis holds for various probability distributions, including the uniform and the normal distribution. We also provide the first average case analysis
of WFA. We prove that WFA has $O(\log(D))$ expected competitive ratio if the request costs are chosen randomly from an arbitrary non-increasing distribution with standard deviation.
@techreport{, TITLE = {Topology matters: smoothed competitive analysis of metrical task systems}, AUTHOR = {Sch{\"a}fer, Guido and Sivadasan, Naveen}, LANGUAGE = {eng}, URL = {http://
domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-016}, NUMBER = {MPI-I-2003-1-016}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003},
DATE = {2003}, ABSTRACT = {We consider online problems that can be modeled as \emph{metrical task systems}: An online algorithm resides in a graph $G$ of $n$ nodes and may move in this graph at a
cost equal to the distance. The algorithm has to service a sequence of \emph{tasks} that arrive online; each task specifies for each node a \emph{request cost} that is incurred if the algorithm
services the task in this particular node. The objective is to minimize the total request cost plus the total travel cost. Several important online problems can be modeled as metrical task systems.
Borodin, Linial and Saks \cite{BLS92} presented a deterministic \emph{work function algorithm} (WFA) for metrical task systems having a tight competitive ratio of $2n-1$. However, the competitive
ratio often is an over-pessimistic estimation of the true performance of an online algorithm. In this paper, we present a \emph{smoothed competitive analysis} of WFA. Given an adversarial task
sequence, we smoothen the request costs by means of a symmetric additive smoothing model and analyze the competitive ratio of WFA on the smoothed task sequence. Our analysis reveals that the smoothed
competitive ratio of WFA is much better than $O(n)$ and that it depends on several topological parameters of the underlying graph $G$, such as the minimum edge length $U_{\min}$, the maximum degree
$D$, and the edge diameter $diam$. Assuming that the ratio between the maximum and the minimum edge length of $G$ is bounded by a constant, the smoothed competitive ratio of WFA becomes $O(diam (U_{\
min}/\sigma + \log(D)))$ and $O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where $\sigma$ denotes the standard deviation of the smoothing distribution. For example, already for perturbations with
$\sigma = \Theta(U_{\min})$ the competitive ratio reduces to $O(\log n)$ on a clique and to $O(\sqrt{n})$ on a line. We also prove that for a large class of graphs these bounds are asymptotically
tight. Furthermore, we provide two lower bounds for any arbitrary graph. We obtain a better bound of $O(\beta \cdot (U_{\min}/\psigma + \log(D)))$ on the smoothed competitive ratio of WFA if each
adversarial task contains at most $\beta$ non-zero entries. Our analysis holds for various probability distributions, including the uniform and the normal distribution. We also provide the first
average case analysis of WFA. We prove that WFA has $O(\log(D))$ expected competitive ratio if the request costs are chosen randomly from an arbitrary non-increasing distribution with standard
deviation.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Schäfer, Guido %A Sivadasan, Naveen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T
Topology matters: smoothed competitive analysis of metrical task systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B15-1 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2003-1-016 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 28 p. %X We consider online problems that can be modeled as \emph{metrical task systems}: An online
algorithm resides in a graph $G$ of $n$ nodes and may move in this graph at a cost equal to the distance. The algorithm has to service a sequence of \emph{tasks} that arrive online; each task
specifies for each node a \emph{request cost} that is incurred if the algorithm services the task in this particular node. The objective is to minimize the total request cost plus the total travel
cost. Several important online problems can be modeled as metrical task systems. Borodin, Linial and Saks \cite{BLS92} presented a deterministic \emph{work function algorithm} (WFA) for metrical task
systems having a tight competitive ratio of $2n-1$. However, the competitive ratio often is an over-pessimistic estimation of the true performance of an online algorithm. In this paper, we present a
\emph{smoothed competitive analysis} of WFA. Given an adversarial task sequence, we smoothen the request costs by means of a symmetric additive smoothing model and analyze the competitive ratio of
WFA on the smoothed task sequence. Our analysis reveals that the smoothed competitive ratio of WFA is much better than $O(n)$ and that it depends on several topological parameters of the underlying
graph $G$, such as the minimum edge length $U_{\min}$, the maximum degree $D$, and the edge diameter $diam$. Assuming that the ratio between the maximum and the minimum edge length of $G$ is bounded
by a constant, the smoothed competitive ratio of WFA becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and $O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where $\sigma$ denotes the standard deviation
of the smoothing distribution. For example, already for perturbations with $\sigma = \Theta(U_{\min})$ the competitive ratio reduces to $O(\log n)$ on a clique and to $O(\sqrt{n})$ on a line. We also
prove that for a large class of graphs these bounds are asymptotically tight. Furthermore, we provide two lower bounds for any arbitrary graph. We obtain a better bound of $O(\beta \cdot (U_{\min}/\
psigma + \log(D)))$ on the smoothed competitive ratio of WFA if each adversarial task contains at most $\beta$ non-zero entries. Our analysis holds for various probability distributions, including
the uniform and the normal distribution. We also provide the first average case analysis of WFA. We prove that WFA has $O(\log(D))$ expected competitive ratio if the request costs are chosen randomly
from an arbitrary non-increasing distribution with standard deviation. %B Research Report / Max-Planck-Institut für Informatik
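For readers unfamiliar with the work function algorithm analysed here, a small unsmoothed sketch of WFA on an explicit metric may help; the metric, the task sequence and the function name are invented, and no smoothing is involved.

def work_function_algorithm(d, tasks, start=0):
    """d: n x n metric; tasks: list of per-state request-cost vectors; returns (total cost, final state)."""
    n = len(d)
    w = [d[start][s] for s in range(n)]                  # work function w_0
    state, total = start, 0
    for r in tasks:
        # w_t(s) = min_{s'} ( w_{t-1}(s') + r_t(s') + d(s', s) )
        w = [min(w[s2] + r[s2] + d[s2][s] for s2 in range(n)) for s in range(n)]
        nxt = min(range(n), key=lambda s: w[s] + d[s][state])
        total += d[state][nxt] + r[nxt]                  # travel cost plus request cost
        state = nxt
    return total, state

if __name__ == "__main__":
    line = [[abs(i - j) for j in range(4)] for i in range(4)]   # path metric on 4 states
    tasks = [[0, 5, 5, 5], [5, 5, 5, 0], [0, 5, 5, 5]]
    print(work_function_algorithm(line, tasks))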
A note on the smoothed complexity of the single-source shortest path problem
G. Schäfer
Technical Report, 2003
Banderier, Beier and Mehlhorn showed that the single-source shortest path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are $K$-bit integers and the last $k$ least significant bits
are perturbed randomly. Their analysis holds if each bit is set to $0$ or $1$ with probability $\frac{1}{2}$. We extend their result and show that the same analysis goes through for a large class of
probability distributions: We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of each edge cost are replaced by some random number chosen from $[0, \dots, 2^k-1]$ according to some
\emph{arbitrary} probability distribution $\pdist$ whose expectation is not too close to zero. We do not require that the edge costs are perturbed independently. The same time bound holds even if the
random perturbations are heterogeneous. If $k=K$ our analysis implies a linear average case running time for various probability distributions. We also show that the running time is $O(m+n(K-k))$
with high probability if the random replacements are chosen independently.
@techreport{, TITLE = {A note on the smoothed complexity of the single-source shortest path problem}, AUTHOR = {Sch{\"a}fer, Guido}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/
reports.nsf/NumberView/2003-1-018}, NUMBER = {MPI-I-2003-1-018}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT =
{Banderier, Beier and Mehlhorn showed that the single-source shortest path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are $K$-bit integers and the last $k$ least significant bits
are perturbed randomly. Their analysis holds if each bit is set to $0$ or $1$ with probability $\frac{1}{2}$. We extend their result and show that the same analysis goes through for a large class of
probability distributions: We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of each edge cost are replaced by some random number chosen from $[0, \dots, 2^k-1]$ according to some
\emph{arbitrary} probability distribution $\pdist$ whose expectation is not too close to zero. We do not require that the edge costs are perturbed independently. The same time bound holds even if the
random perturbations are heterogeneous. If $k=K$ our analysis implies a linear average case running time for various probability distributions. We also show that the running time is $O(m+n(K-k))$
with high probability if the random replacements are chosen independently.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Schäfer, Guido %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A note on the smoothed complexity of the single-source shortest path problem : %G eng %U
http://hdl.handle.net/11858/00-001M-0000-0014-6B0D-6 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-018 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003
%P 8 p. %X Banderier, Beier and Mehlhorn showed that the single-source shortest path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are $K$-bit integers and the last $k$ least
significant bits are perturbed randomly. Their analysis holds if each bit is set to $0$ or $1$ with probability $\frac{1}{2}$. We extend their result and show that the same analysis goes through for
a large class of probability distributions: We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of each edge cost are replaced by some random number chosen from $[0, \dots, 2^k-1]$
according to some \emph{arbitrary} probability distribution $\pdist$ whose expectation is not too close to zero. We do not require that the edge costs are perturbed independently. The same time bound
holds even if the random perturbations are heterogeneous. If $k=K$ our analysis implies a linear average case running time for various probability distributions. We also show that the running time is
$O(m+n(K-k))$ with high probability if the random replacements are chosen independently. %B Research Report / Max-Planck-Institut für Informatik
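A small sketch of the perturbation model from the abstract, followed by an ordinary Dijkstra run on the perturbed costs; the uniform choice of the low-order bits is just one of the distributions the analysis allows, and the tiny graph is invented.

import heapq
import random

def perturb(cost, k):
    """Keep the high-order bits of a K-bit cost and randomise the k least significant ones."""
    return (cost >> k << k) | random.randrange(2 ** k)

def dijkstra(adj, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

if __name__ == "__main__":
    k = 3                                     # randomise the last 3 bits of each 8-bit cost
    edges = [("s", "a", 200), ("a", "t", 40), ("s", "t", 250)]
    adj = {}
    for u, v, c in edges:
        adj.setdefault(u, []).append((v, perturb(c, k)))
        adj.setdefault(v, [])
    print(dijkstra(adj, "s"))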
Average case and smoothed competitive analysis of the multi-level feedback algorithm
G. Schäfer, L. Becchetti, S. Leonardi, A. Marchetti-Spaccamela and T. Vredeveld
Technical Report, 2003
In this paper we introduce the notion of smoothed competitive analysis of online algorithms. Smoothed analysis has been proposed by Spielman and Teng [\emph{Smoothed analysis of algorithms: Why the
simplex algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour of algorithms that work well in practice while performing very poorly from a worst case analysis point of view.
We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of jobs released over time when the processing time of a job is only known at
time of completion. The initial processing times are integers in the range $[1,2^K]$. We use a partial bit randomization model, where the initial processing times are smoothened by changing the $k$
least significant bits under a quite general class of probability distributions. We show that MLF admits a smoothed competitive ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$
denotes the standard deviation of the distribution. In particular, we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any
deterministic algorithm that is run on processing times smoothened according to the partial bit randomization model. For various other smoothening models, including the additive symmetric smoothening
model used by Spielman and Teng, we give a higher lower bound of $\Omega(2^K)$. A direct consequence of our result is also the first average case analysis of MLF. We show a constant expected ratio of
the total flow time of MLF to the optimum under several distributions including the uniform distribution.
@techreport{, TITLE = {Average case and smoothed competitive analysis of the multi-level feedback algorithm}, AUTHOR = {Sch{\"a}fer, Guido and Becchetti, Luca and Leonardi, Stefano and
Marchetti-Spaccamela, Alberto and Vredeveld, Tjark}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-014}, NUMBER = {MPI-I-2003-1-014}, INSTITUTION =
{Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {In this paper we introduce the notion of smoothed competitive analysis of online
algorithms. Smoothed analysis has been proposed by Spielman and Teng [\emph{Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time}, STOC, 2001] to explain the
behaviour of algorithms that work well in practice while performing very poorly from a worst case analysis point of view. We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to
minimize the total flow time on a sequence of jobs released over time when the processing time of a job is only known at time of completion. The initial processing times are integers in the range $
[1,2^K]$. We use a partial bit randomization model, where the initial processing times are smoothened by changing the $k$ least significant bits under a quite general class of probability
distributions. We show that MLF admits a smoothed competitive ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes the standard deviation of the distribution. In particular,
we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic algorithm that is run on processing times smoothened
according to the partial bit randomization model. For various other smoothening models, including the additive symmetric smoothening model used by Spielman and Teng, we give a higher lower bound of $
\Omega(2^K)$. A direct consequence of our result is also the first average case analysis of MLF. We show a constant expected ratio of the total flow time of MLF to the optimum under several
distributions including the uniform distribution.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Schäfer, Guido %A Becchetti, Luca %A Leonardi, Stefano %A Marchetti-Spaccamela, Alberto %A Vredeveld, Tjark %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Average case and smoothed competitive analysis of the
multi-level feedback algorithm : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B1C-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-014 %Y Max-Planck-Institut für
Informatik %C Saarbrücken %D 2003 %P 31 p. %X In this paper we introduce the notion of smoothed competitive analysis of online algorithms. Smoothed analysis has been proposed by Spielman
and Teng [\emph{Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour of algorithms that work well in practice while
performing very poorly from a worst case analysis point of view. We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of jobs
released over time when the processing time of a job is only known at time of completion. The initial processing times are integers in the range $[1,2^K]$. We use a partial bit randomization model,
where the initial processing times are smoothened by changing the $k$ least significant bits under a quite general class of probability distributions. We show that MLF admits a smoothed competitive
ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes the standard deviation of the distribution. In particular, we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \
Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic algorithm that is run on processing times smoothened according to the partial bit randomization model. For various
other smoothening models, including the additive symmetric smoothening model used by Spielman and Teng, we give a higher lower bound of $\Omega(2^K)$. A direct consequence of our result is also the
first average case analysis of MLF. We show a constant expected ratio of the total flow time of MLF to the optimum under several distributions including the uniform distribution. %B Research Report /
Max-Planck-Institut für Informatik
The Diamond Operator for Real Algebraic Numbers
S. Schmitt
Technical Report, 2003
Real algebraic numbers are real roots of polynomials with integral coefficients. They can be represented as expressions whose leaves are integers and whose internal nodes are additions, subtractions,
multiplications, divisions, k-th root operations for integral k, or taking roots of polynomials whose coefficients are given by the value of subexpressions. This last operator is called the diamond
operator. I explain the implementation of the diamond operator in a LEDA extension package.
@techreport{s-doran-03, TITLE = {The Diamond Operator for Real Algebraic Numbers}, AUTHOR = {Schmitt, Susanne}, LANGUAGE = {eng}, NUMBER = {ECG-TR-243107-01}, INSTITUTION = {Effective Computational
Geometry for Curves and Surfaces}, ADDRESS = {Sophia Antipolis, FRANCE}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Real algebraic numbers are real roots of polynomials with integral coefficients.
They can be represented as expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, k-th root operations for integral k, or taking roots
of polynomials whose coefficients are given by the value of subexpressions. This last operator is called the diamond operator. I explain the implementation of the diamond operator in a LEDA extension
package.}, }
%0 Report %A Schmitt, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T The Diamond Operator for Real Algebraic Numbers : %G eng %U http://hdl.handle.net/11858/
00-001M-0000-0019-EBB1-B %Y Effective Computational Geometry for Curves and Surfaces %C Sophia Antipolis, FRANCE %D 2003 %X Real algebraic numbers are real roots of polynomials with integral
coefficients. They can be represented as expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, k-th root operations for integral k,
or taking roots of polynomials whose coefficients are given by the value of subexpressions. This last operator is called the diamond operator. I explain the implementation of the diamond operator in
a LEDA extension package.
A linear time heuristic for the branch-decomposition of planar graphs
H. Tamaki
Technical Report, 2003a
Let $G$ be a biconnected planar graph given together with its planar drawing. A {\em face-vertex walk} in $G$ of length $k$ is an alternating sequence $x_0, \ldots x_k$ of vertices and faces (i.e.,
if $x_{i - 1}$ is a face then $x_i$ is a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident with each other for $1 \leq i \leq k$. For each vertex or face $x$ of $G$, let $\alpha_x$
denote the length of the shortest face-vertex walk from the outer face of $G$ to $x$. Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$. We show that there always exists a
branch-decomposition of $G$ with width $\alpha_G$ and that such a decomposition can be constructed in linear time. We also give experimental results, in which we compare the width of our
decomposition with the optimal width and with the width obtained by some heuristics for general graphs proposed by previous researchers, on test instances used by those researchers. On 56 out of the
total 59 test instances, our method gives a decomposition within additive 2 of the optimum width and on 33 instances it achieves the optimum width.
@techreport{, TITLE = {A linear time heuristic for the branch-decomposition of planar graphs}, AUTHOR = {Tamaki, Hisao}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/
NumberView/2003-1-010}, NUMBER = {MPI-I-2003-1-010}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Let $G$ be a
biconnected planar graph given together with its planar drawing. A {\em face-vertex walk} in $G$ of length $k$ is an alternating sequence $x_0, \ldots x_k$ of vertices and faces (i.e., if $x_{i - 1}
$ is a face then $x_i$ is a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident with each other for $1 \leq i \leq k$. For each vertex or face $x$ of $G$, let $\alpha_x$ denote the
length of the shortest face-vertex walk from the outer face of $G$ to $x$. Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$. We show that there always exists a
branch-decomposition of $G$ with width $\alpha_G$ and that such a decomposition can be constructed in linear time. We also give experimental results, in which we compare the width of our
decomposition with the optimal width and with the width obtained by some heuristics for general graphs proposed by previous researchers, on test instances used by those researchers. On 56 out of the
total 59 test instances, our method gives a decomposition within additive 2 of the optimum width and on 33 instances it achieves the optimum width.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Tamaki, Hisao %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A linear time heuristic for the branch-decomposition of planar graphs : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-0014-6B37-6 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-010 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 18
p. %X Let $G$ be a biconnected planar graph given together with its planar drawing. A {\em face-vertex walk} in $G$ of length $k$ is an alternating sequence $x_0, \ldots x_k$ of vertices and faces
(i.e., if $x_{i - 1}$ is a face then $x_i$ is a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident with each other for $1 \leq i \leq k$. For each vertex or face $x$ of $G$, let $\
alpha_x$ denote the length of the shortest face-vertex walk from the outer face of $G$ to $x$. Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$. We show that there always
exists a branch-decomposition of $G$ with width $\alpha_G$ and that such a decomposition can be constructed in linear time. We also give experimental results, in which we compare the width of our
decomposition with the optimal width and with the width obtained by some heuristics for general graphs proposed by previous researchers, on test instances used by those researchers. On 56 out of the
total 59 test instances, our method gives a decomposition within additive 2 of the optimum width and on 33 instances it achieves the optimum width. %B Research Report / Max-Planck-Institut für Informatik
Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem
H. Tamaki
Technical Report, 2003b
A strategy of merging several traveling salesman tours into a better tour, called ACC (Alternating Cycles Contribution) is introduced. Two algorithms embodying this strategy for geometric instances
are implemented and used to enhance Helsgaun's implementation of his variant of the Lin-Kernighan heuristic. Experiments on the large instances in TSPLIB show that a significant improvement of
performance is obtained. These algorithms were used in September 2002 to find a new best tour for the largest instance pla85900 in TSPLIB.
@techreport{, TITLE = {Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem}, AUTHOR = {Tamaki, Hisao}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de
/internet/reports.nsf/NumberView/2003-1-007}, NUMBER = {MPI-I-2003-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT
= {A strategy of merging several traveling salesman tours into a better tour, called ACC (Alternating Cycles Contribution) is introduced. Two algorithms embodying this strategy for geometric
instances are implemented and used to enhance Helsgaun's implementation of his variant of the Lin-Kernighan heuristic. Experiments on the large instances in TSPLIB show that a significant improvement
of performance is obtained. These algorithms were used in September 2002 to find a new best tour for the largest instance pla85900 in TSPLIB.}, TYPE = {Research Report / Max-Planck-Institut für
Informatik}, }
%0 Report %A Tamaki, Hisao %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem : %G
eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B66-B %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-007 %Y Max-Planck-Institut für Informatik %C Saarbrücken
%D 2003 %P 22 p. %X A strategy of merging several traveling salesman tours into a better tour, called ACC (Alternating Cycles Contribution) is introduced. Two algorithms embodying this strategy for
geometric instances are implemented and used to enhance Helsgaun's implementation of his variant of the Lin-Kernighan heuristic. Experiments on the large instances in TSPLIB show that a significant
improvement of performance is obtained. These algorithms were used in September 2002 to find a new best tour for the largest instance pla85900 in TSPLIB. %B Research Report / Max-Planck-Institut für Informatik
3D acquisition of mirroring objects
M. Tarini, H. P. A. Lensch, M. Gösele and H.-P. Seidel
Technical Report, 2003
Objects with mirroring optical characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that
category of objects, requires only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an
improved environment matte is captured for the mirroring object, using the interference of patterns with different frequencies in order to obtain sub-pixel accuracy. Then, the matte is converted into
a normal and a depth map by exploiting the self coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even smallest surface
details. The acquired depth maps can be further processed using standard techniques to produce a complete 3D mesh of the object.
@techreport{TariniLenschGoeseleSeidel2003, TITLE = {{3D} acquisition of mirroring objects}, AUTHOR = {Tarini, Marco and Lensch, Hendrik P. A. and G{\"o}sele, Michael and Seidel, Hans-Peter}, LANGUAGE
= {eng}, NUMBER = {MPI-I-2003-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Objects with mirroring optical
characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires
only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is
captured for the mirroring object, using the interference of patterns with different frequencies in order to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by
exploiting the self coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even smallest surface details. The acquired depth maps
can be further processed using standard techniques to produce a complete 3D mesh of the object.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
%0 Report %A Tarini, Marco %A Lensch, Hendrik P. A. %A Gösele, Michael %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics,
Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T 3D acquisition of mirroring objects : %G eng %U http://
hdl.handle.net/11858/00-001M-0000-0014-6AF5-F %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 37 p. %X Objects with mirroring optical characteristics are left out of the
scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires only a still camera and a color
monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is captured for the mirroring object,
using the interference of patterns with different frequencies in order to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by exploiting the self coherence of a
surface when integrating the normal map along different paths. The results show very high accuracy, capturing even smallest surface details. The acquired depth maps can be further processed using
standard techniques to produce a complete 3D mesh of the object. %B Research Report / Max-Planck-Institut für Informatik
A flexible and versatile studio for synchronized multi-view video recording
C. Theobalt, M. Li, M. A. Magnor and H.-P. Seidel
Technical Report, 2003
In recent years, the convergence of Computer Vision and Computer Graphics has put forth new research areas that work on scene reconstruction from and analysis of multi-view video footage. In
free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint in real-time from a set of real multi-view input video streams. The analysis of real-world scenes
from multi-view video to extract motion information or reflection models is another field of research that greatly benefits from high-quality input data. Building a recording setup for multi-view
video involves a great effort on the hardware as well as the software side. The amount of image data to be processed is huge, a decent lighting and camera setup is essential for a naturalistic scene
appearance and robust background subtraction, and the computing infrastructure has to enable real-time processing of the recorded material. This paper describes the recording setup for multi-view
video acquisition that enables the synchronized recording of dynamic scenes from multiple camera positions under controlled conditions. The requirements to the room and their implementation in the
separate components of the studio are described in detail. The efficiency and flexibility of the room is demonstrated on the basis of the results that we obtain with a real-time 3D scene
reconstruction system, a system for non-intrusive optical motion capture and a model-based free-viewpoint video system for human actors.
@techreport{TheobaltMingMagnorSeidel2003, TITLE = {A flexible and versatile studio for synchronized multi-view video recording}, AUTHOR = {Theobalt, Christian and Li, Ming and Magnor, Marcus A. and
Seidel, Hans-Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-2003-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {In
recent years, the convergence of Computer Vision and Computer Graphics has put forth new research areas that work on scene reconstruction from and analysis of multi-view video footage. In
free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint in r | {"url":"https://www.mpi-inf.mpg.de/de/publications/research-reports","timestamp":"2024-11-02T15:07:14Z","content_type":"text/html","content_length":"1049101","record_id":"<urn:uuid:8027cd6c-9e32-4418-9d47-3702204467f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00876.warc.gz"} |
AP Physics Resources
“Our greatest weakness lies in giving up. The most certain way to succeed is always to try just one more time.”
– Thomas A. Edison
Today I will give you a free response practice question on fluid mechanics. The question is meant for AP Physics B aspirants. But it will be useful for AP Physics C aspirants also as it highlights
basic principles in hydrostatics. Generally questions meant for AP Physics C will be tougher than those for AP Physics B. So the question I give below can serve only as a part of a more difficult
question in their case. Here is the question:
The adjoining figure shows an empty thin-walled cubical vessel of side 0.4 m and mass 6.4 kg floating on kerosene contained in a large tank (not shown). Density of kerosene is 800 kgm^–3 whereas the
density of water is 1000 kgm^–3. You may take the acceleration due to gravity as 10 ms^–2. Now, answer the following questions:
(a) Calculate the magnitudes of the force of buoyancy and the force of gravity acting on the empty vessel and state their directions.
(b) Calculate the height x of the portion of the empty vessel that is submerged in kerosene.
(c) Water is slowly poured into the vessel so that an additional 0.2 m of the height of the vessel is submerged in kerosene. Calculate the volume of water added to obtain this condition.
Try to answer the above question which carries 10 points. You have about 11 minutes for answering it. I’ll be back shortly with a model answer for your benefit.
You will find a useful post in this section here.
“Men often become what they believe themselves to be. If I believe I cannot do something, it makes me incapable of doing it. But when I believe I can, then I acquire the ability to do it even if I
didn’t have it in the beginning.”
– Mahatma Gandhi
Today we will discuss a few questions (MCQ) involving kinematics and elastic collision. The first four questions are relevant to AP Physics B as well as AP Physics C while the last question is
relevant to AP Physics C.
(1) A particle moves from point A to point B (Fig.) in 2 seconds, covering three quarters of a circle of radius 1 m. What is the magnitude of the average velocity of the particle?
(a) 0.5 ms^–1
(b) 1 ms^–1
(c) √2 ms^–1
(d) 1/√2 ms^–1
(e) 2√2 ms^–1
The displacement of the particle during 2 seconds is equal to the length of the straight line AB. Since OA and OB have the same length of 1 m, AB = √2 m (length of the hypotenuse of the right angled
triangle AOB).
Therefore average velocity = (√2)/2 = 1/√2 ms^–1
(2) A small object initially at rest starts sliding down from point P (Fig.) on a perfectly smooth inclined plane of inclination (θ) 30º and collides normally and elastically with the surface A of a
large fixed block. If the distance PA (measured along the incline) is 2.5 m, what is the time taken by the object to traverse this distance? (g = 10 ms^–2)
(a) 0.25 s
(b) 0.5 s
(c) 1 s
(d) 1.25 s
(e) 1.5 s
The motion of the object down the plane is uniformly accelerated and you can use the equation,
s = ut + ½ at^2 with usual notations.
Here displacement s = 2.5 m, u = 0 and a = g sinθ = 10 sin30º = 5 ms^–2, which is the component of gravitational acceleration down the incline. Therefore we have
2.5 = 0 + ½ ×5 × t^2
This gives t = 1 s.
(3) In the above question, after starting from the point P, the minimum time required for the object to return to P is
(a) 0.5 s
(b) 1 s
(c) 1.5 s
(d) 2 s
(e) 2.5 s
Because of the elastic collision with the block, the velocity of the small object gets reversed. It travels up the incline for 1 second covering the distance of 2.5 metres and momentarily comes to
rest. The times required for the trips down the inclined plane and up the inclined plane are equal since the acceleration is g sinθ throughout the motion. Therefore, after starting from the point P,
the minimum time required for the object to return to P is
1 s +1 s = 2 s.
(4) In question No.2 suppose the inclined plane is not perfectly smooth, but offers a small frictional resistance. The object slides downwards from point P and collides with the block elastically
after time t[1]. It then slides upwards and momentarily comes to rest after an additional time t[2]. Which one among the following statements is correct?
(a) t[1] is less than 1 s
(b) t[1] = t[2] = 1 s
(c) t[1] = t[2]
(d) t[1] is less than t[2]
(e) t[1] is greater than t[2]
During the downward trip the acceleration has magnitude less than g sinθ since the frictional force opposes the motion of the object. In solving question No.2 we have found that the time for the
downward trip is 1 second when the downward acceleration has magnitude g sinθ, appropriate to the case of a perfectly smooth incline. Since the magnitude of the downward acceleration is reduced in
the case of an inclined plane that offers frictional resistance, the time required for the downward trip is increased.
During the upward trip (after colliding with the block) the deceleration has magnitude greater than g sinθ since the frictional force as well as gravity oppose the motion of the object. The object
therefore comes to rest in a shorter time.
Therefore t[1] is greater than t[2] [Option (e)].
[When you project a ball up, the time of ascent will be equal to time of descent only if the air resistance is negligible. If the air resistance is not negligible, you will find that the time of
ascent is less than the time of descent].
The following question is specifically meant for AP Physics C aspirants:
(5) A small object initially at rest at point P (Fig.) on a perfectly smooth inclined plane of inclination (θ) 30º starts sliding down under gravity and collides normally and elastically with the
surface A of a large block that is projected up the incline. Assume that the mass of the small object is negligible compared to the mass of the block. If the distance PA (measured along the incline)
and the velocity of the block up the incline at the instant of collision are 2.5 m and 2 ms^–1 respectively, what will be the velocity of the small object immediately after the collision? (g = 10 ms^–2)
(a) 5 ms^–1
(b) 7 ms^–1
(c) 9 ms^–1
(d) 3 ms^–1
(e) 2 ms^–1
In the case of an elastic collision the relative velocity after the collision is equal and opposite to the relative velocity before the collision:
u[1] – u[2] = –(v[1] – v[2])…………(i)
At the instant of collision the large block moves up the incline with velocity 2 ms^–1. (Let us take this direction as positive). Or, u[1] = 2 ms^–1.
The velocity of the small object at the moment of collision is down the incline and hence negative. Its magnitude is 5 ms^–1 as is obtained from the equation v^2 = u^2 + 2as:
v^2 = 0^2 + 2 g sinθ × 2.5 = 2×10 sin30º × 2.5 = 25 from which v = 5 ms^–1
Therefore, u[2] = – 5 ms^–1
The relative velocity before collision is u[1] – u[2] = 2 – (–5)
The relative velocity after collision is (v[1] – v[2]) = 2 – v[2], where v[2] is the velocity of the small object just after the collision. (The velocity of the large block after collision is
unchanged since its mass is large compared to the mass of the small object. Or, v[1] = u[1])
Therefore, from Eq (i) we have
2 – (–5) = –(2 – v[2])
This gives v[2] = 9 ms^–1.
[You can obtain v[2] by solving the following equations highlighting the conservation of momentum and kinetic energy in the case of elastic collisions:
m[1]u[1] + m[2]u[2] = m[1]v[1] + m[2]v[2]………………………..(i)
½ m[1]u[1]^2 + ½ m[2]u[2]^2 = ½ m[1]v[1]^2 + ½ m[2]v[2]^2…………..(ii)
Equations (i) and(ii) can be solved for the velocities v[1] and v[2] of the block and the small object respectively after the collision. You will get
v[1] = [(m[1]– m[2])u[1] + 2m[2]u[2]] /(m[1]+m[2]) and
v[2] = [(m[2]– m[1])u[2] + 2m[1]u[1]] /(m[1]+m[2])
Here m[1] >> m[2], u[1] = 2 ms^–1 and u[2] = – 5 ms^–1 so that
v[1] ≈ u[1] = 2 ms^–1 and
v[2] ≈ –u[2] + 2u[1] = –(–5) + (2×2) = 9 ms^–1.]
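For readers who would like to verify such results numerically, here is a minimal Python sketch (my illustration, not part of the original working) that plugs the numbers into the general elastic-collision formulas quoted above, taking the block's mass to be much larger than the object's:

```python
# Sketch: 1-D elastic collision, using the formulas for v1 and v2 given above.
# The masses are illustrative; only the ratio m1 >> m2 matters here.
def elastic_collision(m1, m2, u1, u2):
    """Return the final velocities (v1, v2) after a 1-D elastic collision."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

# Up the incline is taken as positive: block at +2 m/s, small object at -5 m/s.
v1, v2 = elastic_collision(m1=1000.0, m2=1.0, u1=2.0, u2=-5.0)
print(round(v1, 2), round(v2, 2))  # ~1.99 and ~8.99, i.e. about 2 and 9 m/s
```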
* * * * * * * * * * * * * * * * * * * * * * *
If you would like just arguments (without using lengthy mathematical steps), you may proceed like this (after obtaining the velocity of the object just before collision as –5 ms^–1):
Before collision the block has velocity 2 ms^–1 where as the small object has velocity –5 ms^–1 (relative to the ground). If the block is taken to be at rest for convenience, you have to imagine that
the small object is moving towards the block with a velocity of –7 ms^–1. We are in fact using a frame of reference in which the block is at rest and are finding the velocities of the block and the
small object in this frame by adding a velocity of –2 ms^–1 to both:
2–2 = 0 and –5–2 = –7.
Just after the elastic collision, the velocity of the object becomes 7 ms^–1 relative to the block which we kept at rest for the convenience of argument. Our frame of reference is to be brought back
to the ground. For this we add a velocity of +2 ms^–1 to the block and the small object and obtain the velocity of the block as 2 ms^–1 (0+2 = 2) and the velocity of the small object as 9 ms^–1 (7+2= | {"url":"http://www.apphysicsresources.com/2011/09/","timestamp":"2024-11-08T04:14:07Z","content_type":"application/xhtml+xml","content_length":"130736","record_id":"<urn:uuid:da75d4c7-36f8-4a8c-a758-fbf6682c1dea>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00572.warc.gz"} |
SciPost Submission Page
Inferring nuclear structure from heavy isobar collisions using Trajectum
by Govert Nijs, Wilke van der Schee
This Submission thread is now published as
Submission summary
Authors (as registered SciPost users): Wilke van der Schee
Submission information
Preprint Link: https://arxiv.org/abs/2112.13771v3 (pdf)
Date accepted: 2023-06-07
Date submitted: 2023-04-05 12:54
Submitted by: van der Schee, Wilke
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties: • Nuclear Physics - Theory
Approach: Theoretical
Nuclei with equal number of baryons but varying proton number (isobars) have many commonalities, but differ in both electric charge and nuclear structure. Relativistic collisions of such isobars
provide unique opportunities to study the variation of the magnetic field, provided the nuclear structure is well understood. In this Letter we simulate collisions using several state-of-the-art
parametrizations of the $^{96}_{40}$Zr and $^{96}_{44}$Ru isobars and show that a comparison with the exciting STAR measurement arXiv:2109.00131 of ultrarelativistic collisions can uniquely identify
the structure of both isobars. This not only provides an urgently needed understanding of the structure of the Zirconium and Ruthenium isobars, but also paves the way for more detailed studies of
nuclear structure using relativistic heavy ion collisions.
Author comments upon resubmission
We wish to thank the referee for their careful reading of the manuscript. We believe we have addressed their questions below. In addition to the referee's questions, we found that in the left panels
of Fig.~6, the ratio was incorrectly labelled as ZrZr/RuRu. We have corrected this to RuRu/ZrZr.
The referee writes:
This manuscript studied how to use measurements in ultra-relativistic heavy-ion collisions to probe the nuclear structure of the colliding nuclei. The authors performed high statistics numerical
simulations for Ru+Ru and Zr+Zr collisions at the top RHIC energy with the \emph{Trajectum} framework. They studied how particle yield, mean transverse momentum, and anisotropic flow coefficients
depend on different nuclear structure configurations parameterized by five sets of Woods-Saxon parameters. The paper was written clearly and contained important physics insights for the RHIC
isobar program. This study also builds connections between low-energy nuclear structures and high-energy relativistic heavy-ion collisions. I would recommend it for publication once the authors
clarify the following questions.
To build a connection between the structure of nuclei and high-energy heavy-ion collisions, the authors should explain the underlying assumptions for how the produced initial-state energy density
profile in the heavy-ion collision is related to the nucleus' structure. For example, will different energy deposition models weaken the sensitivity of the Woods Saxon deformation parameters on
heavy-ion observables?
Our response:
In the ratio between Ru and Zr, dependence on model parameters usually cancels to a large degree. We show an explicit example of this in Fig.~9, where we show that changing the viscosities changes
$v_2$ and $v_3$ for both Ru and Zr, but in the ratio this dependence cancels to within statistical uncertainties. Given that computing isobars is statistically demanding, we did not check explicitly
whether varying parameters related to the initial energy deposition has an effect on the sensitivity of the heavy-ion observables on Woods-Saxon deformation parameters, but any such effects are
similarly expected to cancel out when taking the ratio between Ru and Zr.
We have added the sentence ``More generally, it is expected that the dependence of observables on other model dependencies such as $d_{\rm min}$ in the initial state or other pre-hydrodynamic
parameters mostly cancel when taking a ratio of observables from the two isobars.'' on page 8 to make this clear from the text.
The referee writes:
The Woods-Saxon parameters listed in Table 1 assumed the nucleon were point-like objects. However, in the Trento initial condition model, the nucleons are assumed to have finite sizes. Did the
authors correct the Woods-Saxon parameters for finite nucleon sizes, as discussed in Phys.~Rev.~C 79, 064904 (2009)?
Our response:
Indeed one can make Woods-Saxon parameters which either describe the charge or baryon number density, or describe the point density of the nucleons. As the referee points out, these Woods-Saxon parameters
are only approximately equal. In principle, our calculation requires the parameters for the positions of the nucleons. The parameters for cases 1 and 2, however, come from relatively old references
that likely do not include the effects described in Phys.~Rev.~C 79, 064904 (2009). Cases 3 to 5 are more modern and we think that they describe the point densities of the nucleons.
However, similar to the point made above the small difference in the Woods-Saxon parameters affects Ru and Zr equally and hence in the ratio these differences cancel. Since we were quite clear that
our study is not a precision attempt at describing Ru and Zr separately we decided not to further comment on the charge versus point density in this paper. If the referee is interested we have a more
specific discussion in 2206.13522 about this, but we did not feel it relevant enough for isobars to expand on this in the current work.
The referee writes:
Did the authors consider the short-range hard-core repulsion between nucleons in their nuclear configurations? Would these short-range correlations affect the observable ratios between the two
isobar collisions?
Our response:
The Trento model incorporates a minimal distance requirement for the placement of the nucleons inside the nucleus, where we require nucleons to be at least $d_\text{min}$ apart. In Bayesian analyses
we generally find little dependence on $d_\text{min}$, and it has little effect on observables. As mentioned in our reply to the referee's first question, especially in the isobar ratio any
dependence is expected to largely cancel out.
We have added the following on page 3 to make this clear in the text: ``As in [26], Trento also includes a hard-core repulsion implemented through a minimal inter-nucleon distance $d_\text{min}$.''
Published as SciPost Phys. 15, 041 (2023)
Reports on this Submission | {"url":"https://scipost.org/submissions/2112.13771v3/","timestamp":"2024-11-13T16:29:39Z","content_type":"text/html","content_length":"37089","record_id":"<urn:uuid:05cb72bd-39bb-4723-a688-d8b7cef05900>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00449.warc.gz"} |
Table of Contents
How to Use the Loan Calculator
To use our loan simulator, simply enter the requested data into the forms:
1. First, enter the total loan amount, that is the amount you will request from your bank or creditor.
2. Second, enter the annual interest rate that your creditor will apply to the credit.
3. And third and last, the repayment term, which is the stipulated time to settle the borrowed amount plus interest.
Once you have entered all the data in their respective forms, you only need to click “Calculate” for our tool to generate the calculation of the monthly amount you must pay, as well as the total
interest you should have paid once the repayment term is over.
If you want to learn how to do all these calculations by hand, we explain it below, step by step and with examples so you don't miss any details.
How to Calculate Your Loan Payments
To calculate them you will need to learn to use this mathematical formula:
$$\text{Monthly payment} = \frac{\text{Capital} * i}{1 - (1+i)^{-n}}$$
• Capital = borrowed money.
• i = interest.
• n = total number of payments.
Let's imagine you take a loan of €80,000 with an annual rate of 5%, monthly payment frequency, and a maturity of 25 years.
First, we must find out the “i”
Considering that you have to pay each month, it will suffice to divide the figure into 12 equal parts.
In this case, it is 5%, which fractionally is written as 0.05 therefore:
$$i = \frac{0.05}{12} = 0.0041667$$
Then we must find out the “n”
In this step, we must find out how many payment installments the financial product you have chosen has.
In this case, if you want to find the total number of payments, multiply 25*12 to determine the number of installments which will be 300.
$$n = 25 \times 12 = 300$$
Now we calculate the amount of the monthly payment
Once you have gotten this far, it's time to calculate the amount of the monthly payment of your loan, for which it will suffice to substitute the numbers into the variables of the first formula.
$$\text{Monthly payment} = \frac{80000 * 0.0041667}{1 - (1+0.0041667)^{-300}} = 467.67$$
It may seem complex, but if you follow the procedure calmly, you will solve it more easily.
In the case of our example, the monthly installment payment would be €467.67.
Finally, we calculate the total interest
Now that you know how to calculate the monthly payment, you can calculate the total interest to be paid throughout the loan process.
For this, multiply all the installments by the monthly payment we just calculated. Then, subtract the borrowed capital.
In our case, the formula would be as follows:
$$\text{Interest} = 300 \times 467.67 - 80000 = €60301$$
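If you prefer to script this instead of doing it by hand, the following short Python sketch reproduces the two formulas above. The figures (€80,000 principal, 5% annual rate, 25 years, monthly payments) are simply the article's example:

```python
# Sketch of the payment and total-interest formulas worked through above.
def monthly_payment(principal, annual_rate, years, payments_per_year=12):
    i = annual_rate / payments_per_year      # periodic interest rate
    n = years * payments_per_year            # total number of payments
    return principal * i / (1 - (1 + i) ** -n)

payment = monthly_payment(80_000, 0.05, 25)
total_interest = payment * 25 * 12 - 80_000
print(round(payment, 2), round(total_interest, 2))  # payment ≈ 467.67, interest ≈ 60,301
```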
How to Calculate Loan Interest with Excel
If you prefer not to calculate the interest on your loan by hand because it is very tedious, a good alternative to speed up the process is to do it with Microsoft Excel.
For this, you only need to know three figures:
The borrowed capital, let's suppose that, maintaining the previous example, you borrow €80,000.
The sum of the installments you must pay. Remember that to obtain it you must multiply the number of annual installments by the total duration in years. If, for example, you have to pay monthly
installments for 25 years: 25 *12 =300.
The interest rate, of course, adapted to the number of installments you will pay over 12 months. Following the previous example, if the annual interest is 5%, the monthly rate you must enter in the
equation is 0.0041667 (that is, 0.05/12).
Excel has by default the equation to calculate the monthly payments of a loan. Therefore, it will suffice to give it the necessary information to perform the calculation.
For this, enter in any cell the formula “=PMT” to tell it that you want to calculate the monthly payments of a credit or financing.
Now enter the data we have previously calculated in this order (interest rate, number of monthly payments, the total amount of the loan with a negative sign; 0).
Following the figures calculated for the previous case it would be:
$$\text{=PMT(0.0041667; 300; -80000; 0)}$$
The figure 0 is used to indicate that the total you will have to pay will be equal to 0 once you have made the 300 payments.
If you have entered the function correctly, the installment of each of your monthly payments will appear in cell A1 of your spreadsheet.
In this case, you will see €467.67, matching the result of the manual calculation above.
If for any reason in cell A1 you see the result “#NUM!”, it means that you have entered something incorrectly. Double-check if all the factors are written correctly. If they are not, correct them and
try again.
To calculate the total amount to pay at the end of the loan duration, simply multiply the amount of the monthly installment by the total number of installments.
In this case €467.67 * 300, which will result in €140,301. This is the total amount you will end up paying after 25 years.
If, on the other hand, you want to know how much the interest you will pay once all the loan installments have expired, subtract the initially requested amount from the calculated amount above.
In this case, it would be as follows:
$$€140,301 – €80,000 = €60,301$$
This will be the total amount you will have to pay in interest.
How to Calculate the Interest on a Loan
There are various types of loans, which vary both in terms and benefits. Knowing how to determine a monthly interest rate, or the total interest you will have paid at the end of the loan, are
fundamental pieces of information for acquiring the most convenient financial product.
First, you will have to define the initial capital, that is, the total amount you will obtain and that you will have to request from the bank or the respective creditor.
Second, it is essential to know the interest rate and the cost you will pay for requesting a loan. That is, the interests you will pay on the capital during the duration of the loan.
Consider the Types of Interest
Personal loans usually have fluctuations in interest rates during their validity, and this is determined both by the risk profile of the client and by the contracted period of time.
When choosing a financial product, you must choose between several options.
The first option consists of paying larger installments over a shorter contracted period of time.
This option is much more attractive for users who have greater economic stability or strong liquidity.
Of course, if you decide to pay in fewer installments, each installment will be larger, since the repayment period is shorter.
The second option focuses more on those users with less liquidity or who have a variable monthly income, as could be the case with salespeople who earn on commission.
Subsequently, we will need to know the frequency of capitalization. From a technical point of view, the frequency of capitalization refers to how often the interest you must pay is calculated and added to the balance.
This concept has a strong impact because it will allow knowing how often the installments are due: the greater the frequency of the installments, the lower will be the amount of the installment you
will have to pay.
The frequency with which a loan is capitalized also influences the calculation of compound interest, understanding these interests as those that will make you pay as a surcharge for the precedents.
The shorter the time period in which the interests are capitalized, the higher will be the total amount they will charge you.
Consider APR and EAR
The process for capitalization is the main factor to be able to distinguish the well-known as Annual Percentage Rate (APR) and the Effective Annual Rate (EAR).
Capitalization can be expressed in various ways:
The APR is the rate paid for each time period (without taking into account the frequency with which you capitalize), multiplied by the number of periods per year to reach the annual figure.
The EAR is a somewhat more complex equation, which also takes into consideration interests of another nature, and helps you understand how much you will actually pay annually.
If you check the informative documents you will also find a concept known as AER, which is similar to the EAR but also takes into account the rest of the costs you will pay.
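As a rough illustration of the difference (a sketch using only the basic compounding relationship; exact APR/EAR definitions vary by country and may include fees), a nominal annual rate compounded monthly works out to a slightly higher effective annual rate:

```python
# Sketch: nominal annual rate compounded monthly -> effective annual rate.
def effective_annual_rate(nominal_rate, periods_per_year=12):
    return (1 + nominal_rate / periods_per_year) ** periods_per_year - 1

print(round(effective_annual_rate(0.05) * 100, 2))  # 5.12 (% effective per year vs. 5% nominal)
```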
Consider the Loan Repayment Time
Then, you must take into account the loan repayment time, that is, the time period in which it must be paid.
The duration changes depending on the nature of the financial product, and it is important to choose the one whose maturity suits your needs.
If it lasts longer, normally, it will imply a greater amount of interest paid in total on the loan.
Such duration is linked to the frequency with which the payments will be made. The installments can be monthly, quarterly, or even annually, or they may have a single installment.
Theoretically, any periodization is possible regardless of the required figure or the demanded time period.
In reality, those that are personal in nature and have a rather short duration, are usually paid with monthly installments.
Most personal loans establish penalties for repaying the total credit balance early, this means that you will have to pay more in the case that you prefer to extinguish the debt contracted before the
agreed time.
Final Tips
Learning to calculate the installments of a loan will give you enough information to know which type and conditions are most convenient for you.
If you have liquidity and are looking for a lower-cost product to meet your needs, one with a shorter duration but with higher monthly payments is perhaps the most suitable option for you.
Remember that higher monthly payments translate into lower total interest.
And that is our small contribution on loans and interests, we hope it has helped you understand the whole procedure and has dispelled all the doubts you had before starting to read.
If you have reached this point with a good taste in your mouth, do not forget to share this post on your social networks, that way your contacts can also get to know this community of calculators and
free simulators. | {"url":"https://calcuonline.com/loan-calculator/?widget=yes","timestamp":"2024-11-07T05:38:17Z","content_type":"text/html","content_length":"70706","record_id":"<urn:uuid:ed52ec23-a1c9-4e6a-be71-267dc2b4e47f>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00552.warc.gz"} |
$1000 to $1,000,000,000 | Forex Forum by Myfxbook
Aug 30, 2010 at 08:29
5,811 Views
33 Replies
I wanted to share this information with the new traders, because they often forget about the long-term picture and focus on huge gains from day one. You do not need 100 pip swing trades or huge gains.
All you need is 20 pips consistently. So, yes, some days you may make more and some days you may make less, but in order for you to become a full-time trader, you must make 100 pips a week for years
to come... This means that you do not need to risk your account by trading news or market opens. All you need is 1 good trade a day. That's it.
This is not some far fetched dream, nor is this impossible. We are not getting greedy, and we are not shooting for the stars.
We have realistic goals and we can achieve them. Yes you will need a solid strategy and discipline to follow that strategy day in and day out. But it is all very possible.
This concept of 20 pips a day has been written on my BIG white board so many times. 'How can I forget this simple goal?'
However, when I sit down to trade, I forget that my goal is to ONLY make a few good trades and earn my 20 pips. And this has been the leading factor of my failure for years. Honestly,
this is the main reason. We all feel that we can achieve more, so let's put on another trade... well, in the end, you should have been happy with that 20 pip gain. Because after trading for a few
years, that 20-pips-a-day goal would have turned into a nice portfolio. But greed often gets in the way, and we continue to lose our money.
So new and old traders... Print this out. Hang it on the wall. And remember: 'You don't need all the opportunities, you only need a few that can earn you 20 pips a day.'
This information has been taken from my website, as I thought we should all SEE THE BIG PICTURE
This is for investors as well... Stop trying to find traders that can double your account every month. Find a trader that is consistently making pips... In the long run, you will have grown your
account much further than trying to find the best trader on the block
Compounding Lot Sizes When Your Equity grows.
This technique can grow your account astronomically in a short amount of time. The only way to increase the odds of success with compounding is to improve your accuracy, risk management and
discipline skills. We all must follow the rules that come along with making money quickly. The first objective is to handle drawdown and have a strict guideline for exiting bad trades at a loss.
Especially considering that “Unmanaged” trades are the losing trades that make all (except a few) traders fail. Often enough, it is one specific trade that a trader will not exit from that
consequently ends their trading career prematurely. If the trader does not close their losing trade quickly, their account equity can be severely damaged. You do not want an equity curve that
looks like a martingale: an inevitable crash in the end. We have to secure profits and limit losses quickly.
Account Balance – Lot per Trade – Pip Value in USD
$1000 = .1lot per trade = $1 = 1 pip after spread
$2500 = .25lot per trade = $2.50 = 1 pip after spread
$5000 = .5lot per trade = $5 = 1 pip after spread
$10000 = 1.0lot per trade = $10 = 1 pip after spread
$20000 = 2.0lot per trade = $20 = 1 pip after spread
$50000 = 5.0 lot per trade = $50 = 1 pip after spread
$100000 = 10.0lot per trade = $100 = 1 pip after spread
$500000 = 50.0lot per trade = $500 = 1 pip after spread -MT4 Limit
$1000000 = 100.0lot per trade = $1000 = 1 pip after spread
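As a quick sanity check of the month-by-month schedule below, here is a small Python sketch (my own illustration, not part of the original post) that replays the idea using the tier table above: 100 pips per week, with the pip value taken from the current balance tier.

```python
# Sketch: 20 pips/day (100 pips/week) compounded using the tier table above.
# The tiers are the post's; the loop itself is only illustrative.
TIERS = [  # (minimum balance in USD, pip value in USD)
    (1_000_000, 1000), (500_000, 500), (100_000, 100), (50_000, 50),
    (20_000, 20), (10_000, 10), (5_000, 5), (2_500, 2.5), (1_000, 1),
]

def pip_value(balance):
    for minimum, value in TIERS:
        if balance >= minimum:
            return value
    return 0

balance, weeks = 1_000.0, 0
while balance < 1_000_000 and weeks < 520:
    balance += 100 * pip_value(balance)   # 100 pips gained in the week
    weeks += 1

print(weeks, round(balance))  # about 120 weeks, i.e. a little over two years
```

The loop switches to the bigger lot size as soon as a tier is crossed, so it reaches the target slightly faster than the hand-built schedule, but it lands in the same ballpark as the roughly three years the post describes.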
$1000: Start Compounding
Month 1
20 pips X 5 days a week1 = $100 (100pips x Pip Value)= Balance $1100
20 pips X 5 days a week2 = $100 (100pips x Pip Value)= Balance $1200
20 pips X 5 days a wee3 = $100 (100pips x Pip Value)= Balance $1300
20 pips X 5 days a wee4 = $100 (100pips x Pip Value)= Balance $1400
Month 2
20 pips X 5 days a week1 = $100 (100pips x Pip Value)= Balance $1500
20 pips X 5 days a week2 = $100 (100pips x Pip Value)= Balance $1600
20 pips X 5 days a wee3 = $100 (100pips x Pip Value)= Balance $1700
20 pips X 5 days a wee4 = $100 (100pips x Pip Value)= Balance $1800
Month 3
20 pips X 5 days a week1 = $100 (100pips x Pip Value)= Balance $1900
20 pips X 5 days a week2 = $100 (100pips x Pip Value)= Balance $2000
20 pips X 5 days a wee3 = $100 (100pips x Pip Value)= Balance $2100
20 pips X 5 days a wee4 = $100 (100pips x Pip Value)= Balance $2200
Month 4
20 pips X 5 days a week1 = $100 (100pips x Pip Value)= Balance $2300
20 pips X 5 days a week2 = $100 (100pips x Pip Value)= Balance $2400
20 pips X 5 days a wee3 = $100 (100pips x Pip Value)= Balance $2500
20 pips X 5 days a wee4 = $250 (100pips x Pip Value)= Balance $2750
Month 5
20 pips X 5 days a week1 = $250 (100pips x Pip Value)= Balance $3000
20 pips X 5 days a week2 = $250 (100pips x Pip Value)= Balance $3250
20 pips X 5 days a wee3 = $250 (100pips x Pip Value)= Balance $3500
20 pips X 5 days a week4 = $250 (100pips x Pip Value)= Balance $3750
Month 6
20 pips X 5 days a week1 = $250 (100pips x Pip Value)= Balance $4000
20 pips X 5 days a week2 = $250 (100pips x Pip Value)= Balance $4250
20 pips X 5 days a wee3 = $250 (100pips x Pip Value)= Balance $4500
20 pips X 5 days a wee4 = $250 (100pips x Pip Value)= Balance $4750
Month 7
20 pips X 5 days a week1 = $250 (100pips x Pip Value)= Balance $5000
20 pips X 5 days a week2 = $500 (100pips x Pip Value)= Balance $5500
20 pips X 5 days a wee3 = $500 (100pips x Pip Value)= Balance $6000
20 pips X 5 days a wee4 = $500 (100pips x Pip Value)= Balance $6500
Month 8
20 pips X 5 days a week1 = $500 (100pips x Pip Value)= Balance $7000
20 pips X 5 days a week2 = $500 (100pips x Pip Value)= Balance $7500
20 pips X 5 days a wee3 = $500 (100pips x Pip Value)= Balance $8000
20 pips X 5 days a wee4 = $500 (100pips x Pip Value)= Balance $8500
Month 9
20 pips X 5 days a week1 = $500 (100pips x Pip Value)= Balance $9000
20 pips X 5 days a week2 = $500 (100pips x Pip Value)= Balance $9500
20 pips X 5 days a wee3 = $500 (100pips x Pip Value)= Balance $10000
20 pips X 5 days a week4 = $1000 (100pips x Pip Value)= Balance $11000
Month 10
20 pips X 5 days a week1 = $1000 (100pips x Pip Value)= Balance $12000
20 pips X 5 days a week2 = $1000 (100pips x Pip Value)= Balance $13000
20 pips X 5 days a wee3 = $1000 (100pips x Pip Value)= Balance $14000
20 pips X 5 days a wee4 = $1000 (100pips x Pip Value)= Balance $15000
Month 11
20 pips X 5 days a week1 = $1000 (100pips x Pip Value)= Balance $16000
20 pips X 5 days a week2 = $1000 (100pips x Pip Value)= Balance $17000
20 pips X 5 days a wee3 = $1000 (100pips x Pip Value)= Balance $18000
20 pips X 5 days a wee4 = $1000 (100pips x Pip Value)= Balance $19000
Month 12
20 pips X 5 days a week1 = $1000 (100pips x Pip Value)= Balance $20000
20 pips X 5 days a week2 = $2000 (100pips x Pip Value)= Balance $22000
20 pips X 5 days a week3 = $2000 (100pips x Pip Value)= Balance $24000
20 pips X 5 days a wee4 = $2000 (100pips x Pip Value)= Balance $26000
We will have an opportunity to achieve $2,000 as a weekly income after the first year
Only 20 pips a day
Year 2- Month 2
20 pips X 5 days a week1 = $2000 (100pips x Pip Value)= Balance $28000
20 pips X 5 days a week2 = $2000 (100pips x Pip Value)= Balance $30000
20 pips X 5 days a wee3 = $2000 (100pips x Pip Value)= Balance $32000
20 pips X 5 days a wee4 = $2000 (100pips x Pip Value)= Balance $34000
Year 2- Month 3
20 pips X 5 days a week1 = $2000 (100pips x Pip Value)= Balance $36000
20 pips X 5 days a week2 = $2000 (100pips x Pip Value)= Balance $38000
20 pips X 5 days a wee3 = $2000 (100pips x Pip Value)= Balance $40000
20 pips X 5 days a wee4 = $2000 (100pips x Pip Value)= Balance $42000
Year 2- Month 4
20 pips X 5 days a week1 = $2000 (100pips x Pip Value)= Balance $44000
20 pips X 5 days a week2 = $2000 (100pips x Pip Value)= Balance $46000
20 pips X 5 days a wee3 = $2000 (100pips x Pip Value)= Balance $48000
20 pips X 5 days a wee4 = $2000 (100pips x Pip Value)= Balance $50000
Year 2- Month 5
20 pips X 5 days a week1 = $5000 (100pips x Pip Value)= Balance $55000
20 pips X 5 days a week2 = $5000 (100pips x Pip Value)= Balance $60000
20 pips X 5 days a wee3 = $5000 (100pips x Pip Value)= Balance $65000
20 pips X 5 days a wee4 = $5000 (100pips x Pip Value)= Balance $70000
Year 2- Month 6
20 pips X 5 days a week1 = $5000 (100pips x Pip Value)= Balance $75000
20 pips X 5 days a week2 = $5000 (100pips x Pip Value)= Balance $80000
20 pips X 5 days a wee3 = $5000 (100pips x Pip Value)= Balance $85000
20 pips X 5 days a wee4 = $5000 (100pips x Pip Value)= Balance $90000
Year 2- Month 7
20 pips X 5 days a week1 = $5000 (100pips x Pip Value)= Balance $95000
20 pips X 5 days a week2 = $5000 (100pips x Pip Value)= Balance $100000
20 pips X 5 days a wee3 = $10000 (100pips x Pip Value)= Balance $110000
20 pips X 5 days a wee4 = $10000 (100pips x Pip Value)= Balance $120000
Year 2- Month 8
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $130000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $140000
20 pips X 5 days a wee3 = $10000 (100pips x Pip Value)= Balance $150000
20 pips X 5 days a wee4 = $10000 (100pips x Pip Value)= Balance $160000
Year 2- Month 9
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $170000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $180000
20 pips X 5 days a wee3 = $10000 (100pips x Pip Value)= Balance $190000
20 pips X 5 days a wee4 = $10000 (100pips x Pip Value)= Balance $200000
Year 2- Month 10
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $210000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $220000
20 pips X 5 days a week3 = $10000 (100pips x Pip Value)= Balance $230000
20 pips X 5 days a week4 = $10000 (100pips x Pip Value)= Balance $240000
Year 2- Month 11
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $250000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $260000
20 pips X 5 days a week3 = $10000 (100pips x Pip Value)= Balance $270000
20 pips X 5 days a week4 = $10000 (100pips x Pip Value)= Balance $280000
Year 2- Month 12
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $290000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $300000
20 pips X 5 days a week3 = $10000 (100pips x Pip Value)= Balance $310000
20 pips X 5 days a week4 = $10000 (100pips x Pip Value)= Balance $320000
Year 3- Month 1
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $330000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $340000
20 pips X 5 days a week3 = $10000 (100pips x Pip Value)= Balance $350000
20 pips X 5 days a week4 = $10000 (100pips x Pip Value)= Balance $360000
Year 3- Month 2
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $370000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $380000
20 pips X 5 days a week3 = $10000 (100pips x Pip Value)= Balance $390000
20 pips X 5 days a week4 = $10000 (100pips x Pip Value)= Balance $400000
Year 3- Month 3
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $410000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $420000
20 pips X 5 days a week3 = $10000 (100pips x Pip Value)= Balance $430000
20 pips X 5 days a week4 = $10000 (100pips x Pip Value)= Balance $440000
Year 3- Month 4
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $450000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $460000
20 pips X 5 days a week3 = $10000 (100pips x Pip Value)= Balance $470000
20 pips X 5 days a week4 = $10000 (100pips x Pip Value)= Balance $480000
Year 3- Month 5
20 pips X 5 days a week1 = $10000 (100pips x Pip Value)= Balance $490000
20 pips X 5 days a week2 = $10000 (100pips x Pip Value)= Balance $500000
20 pips X 5 days a week3 = $50000 (100pips x Pip Value)= Balance $550000
20 pips X 5 days a week4 = $50000 (100pips x Pip Value)= Balance $600000
Year 3- Month 6
20 pips X 5 days a week1 = $50000 (100pips x Pip Value)= Balance $650000
20 pips X 5 days a week2 = $50000 (100pips x Pip Value)= Balance $700000
20 pips X 5 days a week3 = $50000 (100pips x Pip Value)= Balance $750000
20 pips X 5 days a week4 = $50000 (100pips x Pip Value)= Balance $800000
Year 3- Month 7
20 pips X 5 days a week1 = $50000 (100pips x Pip Value)= Balance $850000
20 pips X 5 days a week2 = $50000 (100pips x Pip Value)= Balance $900000
20 pips X 5 days a week3 = $50000 (100pips x Pip Value)= Balance $950000
20 pips X 5 days a week4 = $50000 (100pips x Pip Value)= Balance $1000000
Congratulations. You have been promoted to C.E.O and have been awarded an Elite Trader Certificate, granting you access to new liquidity pools and 100 Lot positions.
How does $100,000 as weekly income sound?
New Liquidity Providers
Smaller Commission percentage per trade
Institutional Platforms
Possible Job Opportunities
Year 3- Month 8
20 pips X 5 days a week1 = $100000 (100pips x Pip Value)= Balance $1100000
20 pips X 5 days a week2 = $100000 (100pips x Pip Value)= Balance $1200000
20 pips X 5 days a week3 = $100000 (100pips x Pip Value)= Balance $1300000
20 pips X 5 days a week4 = $100000 (100pips x Pip Value)= Balance $1400000
well said man.. well said ! this for sure is a sermon that everyone needs to listen to....not only the new traders.
winning isnt victory and losing isnt defeat.
i will like to add again.
your post just boils down to compounding interest. a trader can actually turn $1000 to $1 million in the next 3 years if he makes 21.2% compounded monthly.
the 20 pip a day target is cool but it might make the trader indulge in 'over-trading' thinking he must place a trade EVERYDAY and MAKE THAT 20 PIPS ! and when he makes a loss..he TRIES TOO HARD and
places more trades or increases lots to get that daily target. this is not cool ! so i suggest that a MONTHLY target is set. an achievable target like 21% so the trader knows he needs to make 5% this
week... he might make more but the 5% stays in the account. this involves discipline and until there is discipline consistency will NEVER come !
winning isnt victory and losing isnt defeat.
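The 21.2%-per-month figure above is easy to check with a few lines of Python; the rate and the 36-month horizon are simply the numbers quoted in the post, not a claim about what is achievable:

    # Sanity check of the compounding claim: $1,000 at 21.2% per month for 36 months
    balance = 1000.0
    for month in range(36):
        balance *= 1.212
    print(round(balance))   # roughly 1,014,000, i.e. about $1 million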
Aug 30, 2010 at 11:21 (edited Aug 30, 2010 at 11:23)
Member Since May 30, 2010 64 posts
Hahaha when I saw the thread name I was like 'ohh not one of these again....'
I think most people have too high expectations, are too impatient and are not willing to practice and master the 'boring' side of what it takes to become a professional. And I'm referring to money
management, psychology, risk management, amongst many other things that are needed as well.
Also many people don't have their priorities 'right' when trading, they always think of how much they could earn in a trade when they should be focusing on protecting the capital and risk management.
Enough of my ranting..
Best regards, Stefán.
A person is only limited by the thoughts he chooses.
Aug 30, 2010 at 20:12
Member Since Aug 13, 2010 43 posts
good post
"The first rule of forecasting should be that the unforeseen keeps making the future unforeseeable." - David McCasland (January 5,2012, Our Daily Bread)
Aug 30, 2010 at 22:19
Member Since May 18, 2010 47 posts
Will , no offense.. but that essay was not worthy when matched up with Victors summary :)
Most people get caught up in these figures game because the only thing they see is the last number ..i.e, a 1000K account..
They forget that its made in years and they try to get to that goal in months ...one thing is for sure, a real gambler and a real trader is alike....coz both take calculated risk. People say forex is
not gambling...but it really is. Thats why its called speculative market...even if you are an EW or fibb expert , you are still speculating :)
If i had a 1000K i wud invent a time machine and give it for lease, so that you dont have to follow those lagging indicators anymore :D
Sep 04, 2010 at 00:32
Member Since Aug 11, 2010 3 posts
Ah yes..but...gambling is gambling because you have no control over losses. Bet $1,000 and if you lose you have lost the entire $1,000. Speculate with $1,000 and if you lose your loss is limited to
your stop-loss or the point at which you exit the market. Just like owning a home, if the price should begin to fall you would sell-out, right? Is that gambling?
By the way, if you start with $1 and double your money every day, how many days will it take to reach $1-million? It will take 20 days! I think the point being made is that the FX market, and the
leverage available, presents a real opportunity for the average person to become wealthy PROVIDED sensible money management forms part of the traders' tool kit.
Another point of interest before I stop ranting on is the 50/50 nature of FX trading. Like flipping a coin, over time the results will be 50% heads, and 50% tails. Try it! So, roughly 50% of trades
will win, and 50% of trades will lose. So, 2 to 1 odds or better is essential to long term gain. That is, a 100pip stop must be matched by a 200pip or larger take profit. Over 100 trades you lose
5,000pips, and gain 10,000 pips, for a 5,000pip net gain!
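Both back-of-the-envelope figures in this post can be reproduced in a couple of lines (purely arithmetic; nothing here says anything about real win rates or spreads):

    import math
    # Days of doubling needed to turn $1 into $1,000,000
    print(math.ceil(math.log2(1_000_000)))   # 20
    # 100 trades at a 50% win rate, 100-pip stop vs 200-pip take profit
    print(50 * 200 - 50 * 100)               # 5,000 pips net gain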
Having goals in life adds motivation and creates a pathway to success. Without laying out some sort of structure, you cannot justify if you are on track or not. This is only an agenda and forecast of
what we should be focusing on.
Just like a business, planning and building a solid foundation is the most important aspect. Most businesses fail within the first few years because of poor management and lack of capital. Now lack
of capital is normally caused from the first problem- Poor management. In order to have enough capital for long term expenditures and potential growth, we need to plan for the future.
In order to plan for the future, we need goals to keep us in check. These goals are the little stepping stones that get us somewhere in life. Not the idea nor the vision.
Our priority is to make $. In order to make money, we have to conserve our capital, allowing us to make that next trade. Calculated risk = Trading, innovation, and is a necessary procedure to accomplish something on YOUR OWN. Without taking on some risk, you will be an employee for most of your life.
Yes, we are speculators. Thats the name of the game. Some speculators are just more accurate than others.
Ultimately, they can stay solvent long enough to make another trade = Risk management.
Voffi posted:
they always think of how much they could earn in a trade when they should be focusing on protecting the capital and risk management.
Best regards, Stefán.
Exactly. For every trade you put on, you need to determine HOW much you are willing to lose.. If the loss is greater than the return and your accuracy is under 90%, you should not take the trade and
wait for a better opportunity.
Just because we may have a goal of 20 a day, does not mean we have to risk 100 pips to gain the 20 pips. That is foolish, but unfortunately, I see traders exercising this concept. HOPE and HOLD>
Thats not planning, thats ending your career prematurely.. OK back to trading...
Sep 08, 2010 at 11:19
Member Since Jul 21, 2010 111 posts
I just have to say I feel that when trading with these kind of objectives it's foolish to trade with fixed lot size. Easier for sure but trading with percents makes it a lot smoother.
As I see it you strive for 10% gain every week(a huge gain btw).
Starting balance 1000$
20 pips X 5 days a week1 = $100 (100pips x Pip Value)= Balance $1100
That's a 10% gain. If you traded with 10% gain each week(lot size likely varies every trade) you have a very different profit curve:
10% for week1 = Balance $1100
10% for week2 = Balance $1210
10% for week3 = Balance $1331
10% for week4 = Balance $1464
In first month we're 64 dollars ahead. And still you trade with 20 pips gained a day... that doesn't change. Only traded lot-size changes.
To simplify the next months (to make it even simpler, every month calculated here is 4 weeks):
Month 2 end balance = $2144 - $344 more profit
Month 3 end balance = $3138 - $938 more profit
Month 4 end balance = $4595 - $1845 more profit
Month 5 end balance = $6727
Month 6 end balance = $9850
Month 7 end balance = $14,421
Month 8 end balance = $21,114
Month 9 end balance = $30,913
Month 10 end balance = $45,259
Month 11 end balance = $66,264
1 year profit(calculated with 48 weeks only) = $97,017 - a whopping $71,017 difference to a fixed lot-size($26,000) in profits!!!!
Only difference being a percentage of balance at stake instead of trading fixed lot-size.
In three years(48 week year as calculated before) and eight months profit gained this way would be astonishing $198,730,122!! Difference of $197,330,122!!
Too incredible? Yes, definitely.
I mean 10% gain per week is insane be it fixed-lot or percentage. What if we calculated this so that we have the same profit after three years and eight months as the presented in first post but with
a varied lot-size? Our weekly gain would need to be significantly smaller and far more realistic.
Our goal = $1,400,000 or so in 3 years and 8 months. Insert simple math here :) I'll just cut to the result to save us some pain.
We would only need about 5.83% gain per week to achieve this(actually less since I calculated with only a 48 week year(there are times you really don't want to trade so that's the reason for 48 weeks
only)).Far more realistic objective than 10% per week thought it's still a big gain.
Just my thoughts. Feel free to rant about it.
P.S. And nothing really guarantees that one can actually make any profit in the first place. These were just calculations that IMHO prove a point about money management but this doesn't make anyone a
successful trader.
Less effort, better results.
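For anyone who wants to reproduce the percentage-compounding table above, the whole thing is a short loop; the 10%-per-week gain and the 48-week year are the poster's assumptions, not a forecast:

    # $1,000 start, 10% gained per week, 48 trading weeks in the year
    balance = 1000.0
    for week in range(1, 49):
        balance *= 1.10
        if week % 4 == 0:
            print(f"End of month {week // 4}: ${balance:,.0f}")
    # The last line printed is about $97,017, matching the figure quoted above.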
Thanks for spending the time to contribute. I'll get back to you on this... Compounding is Crazy isn't it?
Sep 08, 2010 at 16:04 (edited Sep 08, 2010 at 16:11)
Member Since Aug 23, 2010 34 posts
Sounds great system to make $1000 become $1.000.000.000 in 3 years.
For the Money Managers, maybe it can be use. It has daily target for 20 pip.
But for the part time trader just like me, as long i have chance to make green pips, i will open an order.
even it is not everyday.
For my self, i still try to make my $1000 to become $10.000, but i don't give target about how long i must get it.
About compounding, ever hear but never try before 😀
Nice work Will, looking forward to getting on board with this.
11:15, restate my assumptions: 1. Mathematics is the language of nature. 2. Everything around us can be represented and understood through numbers. 3. If you graph these numbers, patterns emerge.
Therefore: There are patterns everywhere in nature.
stevetrade posted:
Nice work Will, looking forward to getting on board with this.
Thanks Steve.. I have some interesting news too.. Its very exciting. 😄
Excellent, I shall look forward to hearing it.
11:15, restate my assumptions: 1. Mathematics is the language of nature. 2. Everything around us can be represented and understood through numbers. 3. If you graph these numbers, patterns emerge.
Therefore: There are patterns everywhere in nature.
Sep 14, 2010 at 23:11
Member Since Aug 22, 2010 5 posts
I have just opened a new $1000 real account and will be ecstatic if I can hit 100K.
Will be interesting, seeing that I am a part time trader.
Thanks to everyone that has helped me get set up here on myFXbook.com.
Happy Trading,
Trying to increase my portfolio one pip at a time!
Sep 24, 2010 at 06:24
Member Since Apr 09, 2010 11 posts
This idea has some sense. Last month or so, I have been working on letting profits run well beyond I usually did with improved entries. I have been closing half of my position somewhere between 30-50
pips and move stop to BE letting the rest run. I guess the month was not very good for letting profits run as all trends on pairs I am trading are in suspension, reversing or ranging. So, I
calculated that, had I closed all trades completely where I closed only half the position, my returns would have been much better. But over the long term, I guess it still makes sense to let profits run for more pips: just let half the position run and simply continue the same thing, looking for opportunities, closing half the position at some predetermined target, moving the stop to BE and letting the rest run. It should obviously be done only with trending pairs.
I think 20 take profit might be too little as it implies risk of about 20 pips which is too little in my opinion. I guess more reasonable stops are 40-50 pips with similar goals. Position size should
be set accordingly, but I would not risk more than 2% per trade. I tested 3% daily, it can be quite dangerous when 2% is ok and 1% is obviously the safest.
1 %,2% or 3% per trade not per day | {"url":"https://www.myfxbook.com/community/new-traders/1000-1000000000/48916,1","timestamp":"2024-11-08T03:34:48Z","content_type":"text/html","content_length":"321672","record_id":"<urn:uuid:2e5f59a8-3fcd-438e-95fb-85a2ddfa65bb>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00422.warc.gz"} |
How Many Aluminum Atoms Are in 3.78 Grams of Aluminum?
When it comes to understanding the structure of matter, one of the most fundamental concepts is that of atoms. Atoms are the basic building blocks of all matter, and they come in a variety of
different shapes and sizes. The number of atoms present in a given amount of matter can vary greatly, but determining this number is an important part of many scientific fields.
In this article, we will explore the number of aluminum atoms that are present in 3.78 grams of aluminum. We will look at the concept of atoms and examine the atomic mass of aluminum before
calculating the number of atoms present in 3.78 grams. Finally, we will compare our results to other measurements.
Exploring the Atomic Composition of 3.78 Grams of Aluminum
Before we can begin to calculate the number of aluminum atoms in 3.78 grams of aluminum, we must first understand what an atom is. An atom is the smallest unit of matter that retains its chemical
properties, and it consists of a nucleus surrounded by electrons. The nucleus contains protons and neutrons, while the electrons orbit around the nucleus.
Now that we have a basic understanding of atoms, we can begin to explore how many atoms make up 3.78 grams of aluminum. To do this, we must first understand what the atomic mass of aluminum is.
How Many Atoms Are in 3.78 Grams of Aluminum?
The atomic mass of aluminum is 26.981539 amu (atomic mass units). This means that one mole (6.022 x 10^23) of aluminum atoms has a mass of 26.981539 grams. Using this information, we can now
calculate the number of aluminum atoms present in 3.78 grams of aluminum.
Calculation of Aluminum Atoms in 3.78 Grams
To calculate the number of aluminum atoms in 3.78 grams, we must use Avogadro’s number. Avogadro’s number is the number of atoms or molecules in one mole (6.022 x 10^23). By dividing the mass of 3.78
grams of aluminum by the molar mass of aluminum (26.981539 amu), we can calculate the number of moles of aluminum present in 3.78 grams.
Using Avogadro’s number, we can then calculate the number of aluminum atoms present in 3.78 grams. To do this, we simply multiply the number of moles of aluminum by Avogadro’s number (6.022 x 10^23).
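The same calculation can be written as a short script, using the values defined above:

    # Aluminum atoms in 3.78 g of aluminum
    molar_mass = 26.981539      # g/mol
    avogadro = 6.022e23         # atoms/mol
    moles = 3.78 / molar_mass   # about 0.140 mol
    atoms = moles * avogadro    # about 8.4e22 atoms
    print(f"{atoms:.2e}")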
A Look at the Number of Aluminum Atoms in 3.78 Grams
After performing the calculations, we find that there are approximately 8.4 x 10^22 aluminum atoms in 3.78 grams of aluminum. This means that for every gram of aluminum, there are approximately 2.2 x 10^22 atoms of aluminum. This number is quite large, but it is important to remember that atoms are incredibly small, so even a few grams of metal contain an enormous number of them.
To put this number into perspective, let's compare it to another everyday mass. A teaspoon of sugar weighs approximately 4.2 grams; a piece of aluminum of that same mass would contain approximately 9.4 x 10^22 aluminum atoms.
Examining the Atomic Mass of 3.78 Grams of Aluminum
Now that we have calculated the number of aluminum atoms in 3.78 grams of aluminum, we can take a closer look at the atomic mass of aluminum. The atomic mass of aluminum is 26.981539 amu, which means that a single aluminum atom has a mass of about 4.5 x 10^-23 grams (26.981539 g divided by Avogadro's number).
By multiplying the number of atoms in 3.78 grams of aluminum (about 8.4 x 10^22) by the mass of a single atom, we recover the original mass of approximately 3.78 grams, which serves as a useful check on the calculation.
In conclusion, we have explored the number of aluminum atoms that are present in 3.78 grams of aluminum. We have discussed the concept of atoms and examined the atomic mass of aluminum before
calculating the number of atoms present in 3.78 grams. Finally, we have compared our results to other measurements.
The final result of our calculation is that there are approximately 8.4 x 10^22 aluminum atoms in 3.78 grams of aluminum. This number is quite large, but it is important to remember that atoms are
incredibly small and therefore the total mass of atoms present in 3.78 grams is relatively small. | {"url":"https://www.museoinclusivo.com/how-many-aluminum-atoms-are-in-3-78-g-of-aluminum/","timestamp":"2024-11-12T17:21:09Z","content_type":"text/html","content_length":"57155","record_id":"<urn:uuid:8eafaca6-4db2-438d-b7ab-703dffe27e8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00337.warc.gz"} |
Why did the doctor switch jobs? | Thinkmad.in
You may also like..
You walk up to a mountain that has two paths. One leads to the other side of the mountain, and the other will get you lost forever. Two twins know the path that leads to the other side. You can ask
them only one question. Except! One lies and one tells the truth, and you don’t know which is which. So, What do you ask?
Answer: You ask each twin, "What would your brother say?" This works because both twins end up naming the wrong path, so you take the opposite of whatever they answer. Say the correct path is on the left. If you ask the liar, he knows his honest brother would say left, but since he lies, he says right. If you ask the honest twin, he knows his brother would lie and say right, so he truthfully reports right. Either way the answer is "right", so you know the correct path is the left one.
Previous Next | {"url":"https://thinkmad.in/jokes-and-riddles/why-did-the-doctor-switch-jobs/","timestamp":"2024-11-08T14:22:52Z","content_type":"text/html","content_length":"322238","record_id":"<urn:uuid:33ba36ae-3f13-4526-965f-0a4e946e2ef3>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00494.warc.gz"} |
Ortiz, J.M., Avalos, S., Riquelme, A.I., Leuangthong, O., Madani, N. and Frenzel, M., Uncertainty and Value: Optimising Geometallurgical Performance Along the Mining Value Chain. Elements.
To maximise the value of a mining operation and minimise its environmental and social impacts, all processes—from the ore deposit to the final product and waste streams—should be optimised together.
However, mining and metallurgical processes are inherently variable and uncertain due to the natural heterogeneity of ore deposits and the limited information and incomplete models available on ore
behaviour throughout the process chain. Propagating these effects to geometallurgical models is important because they are used to make decisions with potentially large environmental and economic
impacts. In this paper, we describe the need for geometallurgical optimisation routines to account for the effects of uncertainties, and the tools needed to manage them, by summarising the routines
that already exist and those that are still missing.
Link to the article
Jelvez, E., Ortiz, J., Varela, N.M., Askari-Nasab, H. and Nelis, G., 2023. A Multi-Stage Methodology for Long-Term Open-Pit Mine Production Planning under Ore Grade Uncertainty. Mathematics, 11(18),
The strategic planning of open pit operations defines the best strategy for extraction of the mineral deposit to maximize the net present value. The process of strategic planning must deal with
several sources of uncertainty; therefore, many authors have proposed models to incorporate it at each of its stages: Computation of the ultimate pit, optimization of pushbacks, and production
scheduling. However, most works address it at each level independently, with few aiming at the whole process. In this work, we propose a methodology based on new mathematical optimization models and
the application of conditional simulation of the deposit for addressing the geological uncertainty at all stages. We test the method in a real case study and evaluate whether incorporating
uncertainty increases the quality of the solutions. Moreover, we benefit from our integrated framework to evaluate the relative impact of uncertainty at each stage. This could be used by
decision-makers as a guide for detecting risks and focusing efforts.
Link to the article
Riquelme, Á.I. and Ortiz, J.M., 2023. A Riemannian tool for clustering of geo-spatial multivariate data. Mathematical Geosciences, pp.1-21.
Geological modeling is essential for the characterization of natural phenomena and can be done in two steps: (1) clustering the data into consistent groups and (2) modeling the extent of these groups
in space to define domains, honoring the labels defined in the previous step. The clustering step can be based on the information of continuous multivariate data in space instead of relying on the
geological logging provided. However, extracting coherent spatial multivariate information is challenging when the variables show complex relationships, such as nonlinear correlation, heteroscedastic
behavior, or spatial trends. In this work, we propose a method for clustering data, valid for domaining when multiple continuous variables are available and robust enough to deal with cases where
complex relationships are found. The method looks at the local correlation matrix between variables at sample locations inferred in a local neighborhood. Changes in the local correlation between
these attributes in space can be used to characterize the domains. By endowing the space of correlation matrices with a manifold structure, matrices are then clustered by adapting the K-means
algorithm to this manifold context, using Riemannian geometry tools. A real case study illustrates the methodology. This example demonstrates how the clustering methodology proposed honors the
spatial configuration of data delivering spatially connected clusters even when complex nonlinear relationships in the attribute space are shown.
Link to the article
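A minimal sketch of the idea described in the abstract, not the authors' implementation: local correlation matrices are estimated around each sample location and then clustered with K-means after mapping the matrices to a flat space with the matrix logarithm (a log-Euclidean simplification of the Riemannian treatment in the paper; the radius, number of clusters and variable names are illustrative):

    import numpy as np
    from scipy.linalg import logm
    from sklearn.cluster import KMeans

    def local_correlations(coords, values, radius):
        # Correlation matrix of the multivariate samples within `radius` of each location
        # (each neighbourhood needs enough samples for a stable estimate)
        mats = []
        for p in coords:
            mask = np.linalg.norm(coords - p, axis=1) <= radius
            mats.append(np.corrcoef(values[mask].T))
        return np.array(mats)

    def log_euclidean_features(mats, eps=1e-6):
        # Map symmetric positive-definite matrices to vectors via the matrix logarithm
        d = mats.shape[1]
        return np.array([logm(m + eps * np.eye(d)).real[np.triu_indices(d)] for m in mats])

    # coords: (n, 2 or 3) sample locations; values: (n, k) continuous variables
    # mats = local_correlations(coords, values, radius=50.0)
    # labels = KMeans(n_clusters=3, n_init=10).fit_predict(log_euclidean_features(mats))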
Avalos, S., Ortiz, J.M. and Srivastava, R.M., 2023. Geostatistics Toronto 2021. Mathematical Geosciences, pp.1-2.
The International Geostatistics Congress has become a landmark for the geostatistics community to present the best work and most innovative research. The Congress is held once every four years.
The Eleventh Congress was originally planned for the summer of 2020 in Toronto, but the COVID-19 pandemic hit and, as with many other things, plans were completely derailed because of the
impossibility to meet in person. After much discussion by the Geostatistics Executive Committee that supports the organization of the Congress, and given the many unknowns at the time, it was decided
to postpone the Congress to the summer of 2021, hence rebranding it to Toronto 2021, as a reminder of the odd circumstances that we were faced with.
The Congress was held fully online, but with an original format, where all presentations were made available one week in advance, and the actual meeting time was used for brief summaries of each
presentation and a Q &A session with extensive discussions about the topics. Despite being online, the discussions were lively and interesting, which again made this a special occasion. As always,
presentations were split into sessions: Theory, Petroleum, Mining, Earth Science, and Domains. Authors were invited to submit a paper for a special issue on the conference in Mathematical
Geosciences. These papers went through the rigorous review process that the journal has for all its papers. The special issue includes five articles on diverse topics, each of which is briefly
discussed next.
Link to the book
Avalos, S. and Ortiz, J.M., 2023. Spatial Multivariate Morphing Transformation. Mathematical Geosciences, pp.1-37.
Earth science phenomena, in particular mineralization of ore deposits, can be characterized by the spatial and statistical features of multivariate information. The relationships among these
variables are often complex, encountering non-linear features, compositional constraints, and heteroscedasticity. Capturing and reproducing their statistical and spatial distributions is essential
for uncertainty management, allowing for better decision-making and process control. In this work, we present a novel spatial multivariate morphing transformation that maps the initial multivariate
space into a spatially and statistically decorrelated multi-Gaussian space. The spatial structures of the Gaussian random variables are modeled independently, and values are simulated at unsampled
locations using a conventional univariate geostatistical simulation algorithm. Multivariate features and relationships are reintroduced by mapping from the multi-Gaussian distribution into the
initial space. The spaces are paired following the fundamentals of point cloud morphing using discrete optimal transport to minimize the distance between landmark points between spaces. New simulated
values are mapped from the anchored multi-Gaussian space into the multivariate space via thin-plate spline interpolation conditioned to the k-spatially known closest samples. The effectiveness of the
method is demonstrated in a 6-dimensional dataset with strong non-linear relationships and spatial continuity. The resulting multivariate statistical and spatial metrics have been compared with
simulations obtained by projection pursuit multivariate transformation.
Link to the article
Avalos, S. and Ortiz, J.M., 2023. Multivariate Geostatistical Simulation and Deep Q-Learning to Optimize Mining Decisions. Mathematical Geosciences, pp.1-20.
In open pit mines, the long-term scheduling defines how the mine should be developed. Uncertainties in geological attributes makes the search for an optimal scheduling a challenging problem. In this
work, we provide a framework to account for uncertainties in the spatial distribution of grades in long-term mine planning using deep Q-Learning. Mining, processing and metallurgical constraints are
accounted as restrictions in the reinforcement learning environment. Such environment provides a flexible structure to incorporate geometallurgical properties in production scheduling, as part of the
block model. Geometric constraints (block precedence) and operational restrictions have been included as part of the agent-environment interaction. The effectiveness of the method is demonstrated in
a controlled study case using a real multivariate drill-hole dataset, maximizing the net-present value of the project. The present framework can be extended and improved, to meet the particular needs
and requirements of mining operations. We discuss on the current limitations and potential for further research and applications.
Link to the article
Utili, S., Agosti, A., Morales, N., Valderrama, C., Pell, R. and Albornoz, G., 2022. Optimal pitwall shapes to increase financial return and decrease carbon footprint of open Pit mines. Mining,
Metallurgy & Exploration, 39(2), pp.335-355.
The steepness of the slopes of an open pit mine has a substantial influence on the financial return of the mine. The paper proposes a novel design methodology where overall steeper pitwalls are
employed without compromising the safety of the mine. In current design practice, pitwall profiles are often planar in cross-section within each rock layer; i.e., the profile inclination across each
layer tends to be constant. Here instead, a new geotechnical software, OptimalSlope, is employed to determine optimal pitwall profiles of depth varying inclination. OptimalSlope seeks the solution of
a mathematical optimization problem where the overall steepness of the pitwall, from crest to toe, is maximized for an assigned lithology, geotechnical properties, and factor of safety (FoS). Bench
geometries (bench height, face inclination, minimum berm width) are imposed in the optimization as constraints which bind the maximum local inclination of the sought optimal profile together with any
other constraints such as geological discontinuities that may influence slope failure. The obtained optimal profiles are always steeper than their planar counterparts (i.e., the planar profiles
exhibiting the same FoS) up to 8° depending on rock type and severity of constraints on local inclinations. The design of a copper mine is first carried out employing planar pitwalls, secondly
adopting the optimal pitwall profiles determined by OptimalSlope. The adoption of optimal slope profiles leads to a 34% higher net present value and reductions of carbon footprint and energy
consumption of 0.17 Mt CO2 eq and 82.5 million MJ respectively, due to a 15% reduction of rock waste volume.
Link to the article
Moraga, C., Kracht, W. and Ortiz, J.M., 2022. Process simulation to determine blending and residence time distribution in mineral processing plants. Minerals Engineering, 187, p.107807.
Mineral processing plant performance depends on multiple factors, including the feed and the parameters to control the process. In this work, we show how to assess plant performance using
geometallurgical modeling and dynamic simulation. Several models that describe comminution, classification, flotation, and residence time distribution (RTD) are implemented as modules and then
connected to represent generic plant configurations. The estimation of the RTD is used to assess the ore blending generated within the plant through a methodology based on weighting the ore
contribution at the plant discharge. Additionally, the RTD is used to display the ore permanence at different plant stages, which can be used as an operational input to anticipate the consequences of
a perturbation in the feed. Different simulation scenarios are tested using synthetic data, including different plant configurations, time support for blending assessment, and ore feeding sequence.
The results show that the simulation is sensitive to these attributes. Significant differences are detected in the generated product compositions when the plant configuration is changed. Also,
distinct mine plans can be evaluated through simulation, predicting their processing performance. Therefore, the simulation tool developed can be used to evaluate real mineral processing operations
and to test different operative strategies.
Link to the article
Jelvez, E., Morales, N. and Ortiz, J.M., 2021. Stochastic final pit limits: an efficient frontier analysis under geological uncertainty in the open-pit mining industry. Mathematics, 10(1), p.100.
In the context of planning the exploitation of an open-pit mine, the final pit limit problem consists of finding the volume to be extracted so that it maximizes the total profit of exploitation
subject to overall slope angles to keep pit walls stable. To address this problem, the ore deposit is discretized as a block model, and efficient algorithms are used to find the optimal final pit.
However, this methodology assumes a deterministic scenario, i.e., it does not consider that information, such as ore grades, is subject to several sources of uncertainty. This paper presents a model
based on stochastic programming, seeking a balance between conflicting objectives: on the one hand, it maximizes the expected value of the open-pit mining business and simultaneously minimizes the
risk of losses, measured as conditional value at risk, associated with the uncertainty in the estimation of the mineral content found in the deposit, which is characterized by a set of conditional
simulations. This allows generating a set of optimal solutions in the expected return vs. risk space, forming the Pareto front or efficient frontier of final pit alternatives under geological
uncertainty. In addition, some criteria are proposed that can be used by the decision maker of the mining company to choose which final pit best fits the return/risk trade off according to its
objectives. This methodology was applied on a real case study, making a comparison with other proposals in the literature. The results show that our proposal better manages the relationship in
controlling the risk of suffering economic losses without renouncing high expected profit.
Link to the article
Jélvez, E., Morales, N. and Askari-Nasab, H., 2020. A new model for automated pushback selection. Computers & Operations Research, 115, p.104456.
The design of pushbacks is essential to long-term open pit mine scheduling because it partitions the pit space into individual units, controlling ore and waste production. In this paper, a new model
is proposed for the pushback selection procedure, which consists of characterizing the potential pushbacks based on the comprehensive family of nested pits and selecting those ones that meet a set of
criteria, for instance, bounded ore and waste. An advantage of this method is the possibility to automate the pushback selection methodology, applying well-defined criteria for the selection and
reducing the time employed in the planning task.
Link to the article
Avalos, S., Kracht, W. and Ortiz, J.M., 2020. An LSTM approach for SAG mill operational relative-hardness prediction. Minerals, 10(9), p.734.
Ore hardness plays a critical role in comminution circuits. Ore hardness is usually characterized at sample support in order to populate geometallurgical block models. However, the required
attributes are not always available and suffer from a lack of temporal resolution. We propose an operational relative-hardness definition and the use of real-time operational data to train a Long Short-Term Memory network, a deep neural network architecture, to forecast the upcoming operational relative-hardness. We applied the proposed methodology on two SAG mill datasets, each covering a one-year period. Results show accuracies above 80% on both SAG mills over short upcoming time periods and around 1% misclassification between soft and hard characterization. The proposed application can be
extended to any crushing and grinding equipment to forecast categorical attributes that are relevant to downstream processes.
Link to the article
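A heavily simplified sketch of the kind of model the abstract describes, not the authors' code; the window length, number of sensor features and training settings are all assumptions for illustration:

    import numpy as np
    from tensorflow import keras

    # X: (samples, timesteps, features) of operational variables; y: 0 = soft, 1 = hard
    model = keras.Sequential([
        keras.layers.Input(shape=(60, 8)),      # assumed: 60 time steps, 8 sensor channels
        keras.layers.LSTM(32),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(X_train, y_train, epochs=20, validation_data=(X_val, y_val))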
Avalos, S. and Ortiz, J.M., 2020. Recursive convolutional neural networks in a multiple-point statistics framework. Computers & geosciences, 141, p.104522.
This work proposes a new technique for multiple-point statistics simulation based on a recursive convolutional neural network approach coined RCNN. The work focuses on methodology and implementation
rather than performance to demonstrate the potential of deep learning techniques in geosciences. Two and three dimensional case studies are carried out. A sensitivity analysis is presented over the
main RCNN structural parameters using a well-known training image of channel structures in two dimensions. The optimum parameters found are applied into image reconstruction problems using two other
training images. A three dimensional case is shown using a synthetic lithological surface-based model. The quality of realizations is measured by statistical, spatial and accuracy metrics. The RCNN
method is compared to standard MPS techniques and an improving framework is proposed by using the RCNN E-type as secondary information. Strengths and weaknesses of the methodology are discussed by
reviewing the theoretical and practical aspects.
Link to the article
Peredo, O.F., Baeza, D., Ortiz, J.M. and Herrero, J.R., 2018. A path-level exact parallelization strategy for sequential simulation. Computers & Geosciences, 110, pp.10-22.
Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation
(SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws
simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is
performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the
original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains,
with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
Link to the article
Nelis, S.G., Ortiz, J.M. and Morales, V.N., 2018. Antithetic random fields applied to mine planning under uncertainty. Computers & Geosciences, 121, pp.23-29.
Traditional practice in mine planning often relies on estimation techniques that fail to account for the intrinsic uncertainty of geology and grades, which may have significant consequences in the
mine operation. Dealing with this uncertainty has been a major topic in the last years, where different algorithms and stochastic optimization models have been proposed to tackle this issue. However,
the increasing complexity of these stochastic models and the use of several simulations to represent the deposit variability impose a computational challenge in terms of resolution times, making them
difficult to apply in large data or complex mining operations. In this paper we explore the antithetic random fields approach as a variance reduction technique, to solve a stochastic short-term mine
planning problem, aiming to reduce the number of simulations required to obtain a reliable NPV value. The reliability of the result is measured by the variance of the NPV when the problem is
optimized with different sets of realizations. Our results show that this technique produces a significant variance reduction in the inference of the expected NPV value in the stochastic problem for
a copper deposit application, generating a lower dispersion with a smaller sample size, compared to traditional simulation techniques.
Link to the article
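A toy illustration of the antithetic-variates idea, unrelated to the specific mine-planning formulation in the paper (the value function below is made up): each Gaussian draw z is paired with -z, and the pair average has a much lower variance than an average of two independent draws.

    import numpy as np
    rng = np.random.default_rng(0)
    f = lambda z: 100.0 + 15.0 * z + 4.0 * z**2      # hypothetical value vs. grade factor
    z = rng.standard_normal(100_000)
    pair_means = 0.5 * (f(z) + f(-z))                                # antithetic pairs
    indep_means = 0.5 * (f(z) + f(rng.standard_normal(100_000)))     # independent pairs
    print(pair_means.var(), indep_means.var())       # antithetic variance is far smaller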
Baeza, D., Ihle, C.F. and Ortiz, J.M., 2017. A comparison between ACO and Dijkstra algorithms for optimal ore concentrate pipeline routing. Journal of Cleaner Production, 144, pp.149-160.
One of the important aspects pertaining the mining industry is the use of territory. This is especially important when part of the operations are meant to cross regions outside the boundaries of
mines or processing plants. In Chile and other countries there are many long distance pipelines (carrying water, ore concentrate or tailings), connecting locations dozens of kilometers apart. In this
paper, the focus is placed on a methodological comparison between two different implementations of the lowest cost route for this kind of system. One is Ant Colony Optimization (ACO), a metaheuristic
approach belonging to the particle swarm family of algorithms, and the other one is the widely used Dijkstra method. Although both methods converge to solutions in reasonable time, ACO can yield
slightly suboptimal paths; however, it offers the potential to find good solutions to some problems that might be prohibitive using the Dijkstra approach in cases where the cost function must be
dynamically calculated. The two optimization approaches are compared in terms of their computational cost and accuracy in a routing problem including costs for the length and local slopes of the route. In particular, penalizing routes with either steep slopes in the direction of the trajectory or high cross-slopes leads to optimal routes that depart from traditional shortest path solutions.
The accuracy of using ACO in this kind of setting, compared to Dijkstra, is discussed.
Link to the article
Lobos, R., Silva, J.F., Ortiz, J.M., Díaz, G. and Egaña, A., 2016. Analysis and classification of natural rock textures based on new transform-based features. Mathematical Geosciences, 48,
This work develops a mathematical method to extract relevant information about natural rock textures to address the problem of automatic classification. Classical methods of texture analysis cannot
be directly applied in this context, since rock textures are typically characterized by both stationary patterns (a classic kind of texture) and geometric forms, which are not properly captured with
conventional methods. Due to the presence of these two phenomena, a new classification approach is proposed in which each rock texture class is individually analyzed developing a specific
low-dimensional discriminative feature. For this task, multi-scale transform domain representations are adopted, allowing the analysis of the images at several levels of scale and orientation. The
proposed method is applied to a database of digital photographs acquired in a porphyry copper mining project, showing better performance than state-of-the-art techniques, and additionally presenting
a low computational cost.
Link to the article
Peredo, O., Ortiz, J.M. and Leuangthong, O., 2016. Inverse modeling of moving average isotropic kernels for non-parametric three-dimensional gaussian simulation. Mathematical Geosciences, 48(5),
Moving average simulation can be summarized as a convolution between a spatial kernel and a white noise random field. The kernel can be calculated once the variogram model is known. An inverse
approach to moving average simulation is proposed, where the kernel is determined based on the experimental variogram map in a non-parametric way, thus no explicit variogram modeling is required. The
omission of structural modeling in the simulation work-flow may be particularly attractive if spatial inference is challenging and/or practitioners lack confidence in this task. A non-linear inverse
problem is formulated in order to solve the problem of discrete kernel weight estimation. The objective function is the squared euclidean distance between experimental variogram values and the
convolution of a stationary random field with Dirac covariance and the simulated kernel. The isotropic property of the kernel weights is imposed as a linear constraint in the problem, together with
lower and upper bounds for the weight values. Implementation details and examples are presented to demonstrate the performance and potential extensions of this method.
Link to the article
Peredo, O., Ortiz, J.M. and Herrero, J.R., 2015. Acceleration of the Geostatistical Software Library (GSLIB) by code optimization and hybrid parallel programming. Computers & Geosciences, 85,
The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in
use by many practitioners and researchers. Despite its widespread use, few attempts have been reported in order to bring this package to the multi-core era. Using all CPU resources, GSLIB algorithms
can handle large datasets and grids, where tasks are compute- and memory-intensive applications. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and
hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are added, decreasing as much as possible the elapsed execution time of the studied routines. If multi-core processing is available, the user can activate OpenMP directives to speed up the execution using all resources of the CPU. If multi-node processing is available, the execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, sequential Gaussian and indicator simulation. For each
application, three scenarios (small, large and extra large) are tested using a desktop environment with 4 CPU-cores and a multi-node server with 128 CPU-nodes. Elapsed times, speedup and efficiency
results are shown.
Link to the article | {"url":"https://www.apmodtech.com/Publications/Articles","timestamp":"2024-11-09T16:32:50Z","content_type":"text/html","content_length":"67596","record_id":"<urn:uuid:23360eed-b75b-465f-bfb9-d4d6c5190915>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00359.warc.gz"} |
ball mill crusher kw rating
Each ball mill has two 7800 kW motors, giving a total of 44 MW of installed mill power in each of the six grinding lines. The final product from these large regrinding ball mills using small
grinding media is approximately 28 μm and further upgraded by finisher magnetic separators to produce the final concentrate.
UNUSED METSO 22' X 38' (6706MM X 11582MM) BALL MILL, 10,000 KW (13,410 HP) TWIN PINION 50 HZ MOTORS AND GEAR REDUCERS. UNUSED OUTOTEC 24' X 17' ( X ) EGL SAG MILL WITH 6,300 HP (4,750 KW)
VARIABLE SPEED DRIVE. UNUSED SANDVIK MODEL CG850 60 X 113 GYRATORY CRUSHER, 800 KW MOTOR.
UNUSED FLSMIDTH 26' x 43' (8m x 13m) Dual Pinion Ball Mill with 2 ABB 9,000 kW (12,069 HP) Motors for Total Power of 18,000 kW (24,138 HP) Inventory ID: 6CHM01. UNUSED FLSMIDTH 26' x 43' (8m x
13m) Dual Pinion Ball Mill with 2 ABB 9,000 kW (12,069 HP) Motors for Total Power of 18,000 kW (24,138 HP) Manufacturer: FLSMIDTH. Location: North ...
Problem 1: Calculate the operating speed of a ball mill from the following data: (i) diameter of ball mill = 500 mm, (ii) diameter of ball = 50 mm; the operating speed of the ball mill is 35% of critical speed.
Problem 2: Calculate the power required in horsepower to crush 150000 kg of feed, if 80% of feed passes through a 2 inch screen and 80% of product ...
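A worked sketch for Problem 1, using the usual critical-speed relation n_c = 42.3/sqrt(D - d) rpm with D and d in metres (check the constant against your own course notes, since conventions vary slightly):

    import math
    D = 0.500                                 # mill diameter, m
    d = 0.050                                 # ball diameter, m
    n_critical = 42.3 / math.sqrt(D - d)      # about 63 rpm
    n_operating = 0.35 * n_critical           # about 22 rpm
    print(round(n_critical, 1), round(n_operating, 1))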
In Fig. 3, Fig. 4 the data is presented in order to show the effect of the fresh feed size, particularly the % −6″ +1″ (−152 +25 mm), on the SAG mill power consumption (Pc) in kW and on the SAG
mill specific energy consumption (Ecs) in kW h/t, obtained by dividing the power consumed (kW) by the fresh feed rate (t/h).
oversize material too large for a crusher. Crushers are used to reduce particle size enough so that the material can be processed into finer particles in a grinder. A typical processing line at a
mine might consist of a crusher followed by a SAG mill followed by a ball mill. In this context, the SAG mill and ball mill are considered grinders rather
Ball Mills. Ball mills have been the primary piece of machinery in traditional hard rock grinding circuits for 100+ years. They are proven workhorses, with discharge mesh sizes from ~40M to
<200M. Use of a ball mill is the best choice when long term, stationary milling is justified by an operation. Sold individually or as part of our turnkey ...
The AFA range of liquid starters stands out for its high current rating and power range. These starters are rated from 2,000 to 20,000 kW, have a high resistance ratio and a very low final
resistance at their minimal electrolyte concentration. They are often used for the starting of mills, ventilators, pumps, crushers and also in sugar mills
The cement ball mill is a kind of cement grinding equipment. It is mainly used for grinding the clinker and raw materials of the cement plant, and also for grinding various ores in the metallurgical, chemical, and electric power industries. It has the characteristics of strong adaptability to materials, continuous production, a large crushing ratio, and easy adjustment of the fineness of grinding products.
3. Analysis of Variant Ball Mill Drive Systems. The basic element of a ball mill is the drum, in which the milling process takes place ( Figure 1 ). The length of the drum in the analyzed mill
(without the lining) is m, and the internal diameter is m. The mass of the drum without the grinding media is 84 Mg.
The machine was fabricated using locally available materials. The fabricated stone crusher was tested and the actual capacity was found to be 301 kg/h with a throughput efficiency of %. The ...
WhatsApp: +86 18838072829
The hammer mill is the best known and by far the most widely used crushing device employing the impact principle of breaking and grinding stone. Thus far we have described machines which do a
portion of their work by impact, but the only machine described in which this action plays an important role was the sledging roll type and particularly the Edison roll crusher and in these
machines ...
The feed enters one end of the ball mill, and discharges out the other end. Ball mills vary greatly in size, from large industrial ball mills measuring more than 25 ft. in diameter to small mills
used for sample preparation in laboratories. Rod mills are similar to ball mills and use metal rods as the grinding media. Pebble mills use rock ...
Mining Spare Parts: girth gear, support roller, jaw plate, kiln tyre, pinion gear, ball mill assembly, kiln shell, mill gear, etc. Company Introduction: Henan Machinery Factory is one of the largest and most important machinery manufacturing enterprises in Zhengzhou City, founded in 1902. After joint-stock reform, we established the Henan Dajia Mining Machinery Co., Ltd ...
The sample was received crushed appropriately for the ball mill test. The Ball Mill Grindability Test was conducted by standard practice using 100 mesh (150 μm) closing screens. The ball mill work
index is shown below. BM Wi (kWhr/st) = ; BM Wi (kWhr/mt) = ; Bond Ball Mill
A ball mill can accept a feed size of 12 mm or less and deliver a product size in the range of 50 μm. The speed of a ball mill varies between 60 and 70. As the product size becomes finer, the capacity of the mill reduces and the energy requirement increases. The power consumption of a ball mill is in the range of up to 10 kWh/ton.
The acceleration factor of the ball or rod mass is a function of the peripheral speed of the mill. Thus n = c9·np/√D, and the above equation becomes P = f1(D²)·f5(πD·c9·np/√D) = cs·np. As a first approximation, the capacity, T, of a mill may be considered as a function of the force acting inside the mill.
Industrial ball mills can coarsely crush relatively large material, while lab-grade ball mills are suitable for finely milling glass to the micron level and beyond. 'High energy' ball milling
offers users the capacity to reliably grind the material into nanoscale particles. 4. Centrifugal Mill. The majority of centrifugal mills are found ...
Then in ballwear formula (25), T = /K Log10 Da/Db; but from (29), K = Rt/Wt. Then T = /Rt Log10 Da/Db T is 1 day, Wt is the original weight of the ball charge, and Rt is the ball wear for one
day. Then Log10 Da/Db = Rt/ are all known, and it is only necessary to solve for Db, the diameter of the balls to be added.
A ball mill, a type of grinder, is a cylindrical device used in grinding (or mixing) materials like ores, chemicals, ceramic raw materials and paints. Ball mills rotate around a horizontal axis,
partially filled with the material to be ground plus the grinding medium. Different materials are used as media, including ceramic balls, flint pebbles ...
•Typically this includes all crushers, HPGRs, and tumbling mills (AG/SAG, rod, and ball mills) involved in reducing the size of the primary crusher product to that of the final product (usually
the cyclone overflow of the last stage of grinding prior to flotation/leaching). 2. Feed rate to the circuit (dry tonnes/h) 3.
Mills usually have motors in the following range: SAG mill motor: 7,435 kW or an HPGR circuit with motors: Ball mill 1 (4,015 kW) and Ball mill 2 (4,152 kW) for a porphyry copper molybdenum mine.
Comparing mill operations for both open pit mining to underground mining, where the metal is separated from rock ore containing the metal mined, both ...
Modifications to the SABC comminution circuit included an increase in the SAG mill ball charge from 8% to 10% v / v; an increase in the mill ball charge from 23% v / v to 27% v / v; an increase
in the maximum operating power draw in the ball mill to 5800 kW; the replacement of the HP Series pebble crusher with a TC84 crusher; and the addition of...
The ball mill is the key equipment for re-grinding raw materials after they have been crushed. It is widely used in the cement, silicate, new construction materials, refractory materials and fertilizer industries, in black and nonferrous metal ore dressing, in the glass and ceramic industries, etc. ... (kW) Weight (T) Ф900×1800 ...
In this technical article, electrical systems in a cement plant will be touched upon. All machines are driven by electric motors. The majority of the motors are 400 - 440 volts. A selected few motors of higher ratings are MV motors at 3300, 6600 or 11000 volts. Most motors are fixed-speed and unidirectional.
Bituminous coal lumps are needed for boiler feed. CORE Crush labs uses a conventional Ball Mill crusher, such that the average size of coal lumps is reduced from 50 mm to 5 mm with a determined total energy consumption of 23 kW/(kg/s). You need to compute what would be the consumption of energy (in kJ/kg) needed to crush the ...
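The statement above is cut off, but problems of this type are usually handled with Bond's crushing law, E = K_B(1/√d_product − 1/√d_feed). The sketch below (my own illustration, not the exercise's official answer) calibrates the Bond constant from the stated 50 mm → 5 mm data, noting that 23 kW/(kg/s) is numerically 23 kJ/kg, and then evaluates a hypothetical second reduction whose target sizes are assumed, since the original question is truncated.

```python
import math

def bond_energy(k_b: float, d_feed_mm: float, d_prod_mm: float) -> float:
    """Bond's law: specific energy (kJ/kg) for reducing d_feed_mm to d_prod_mm."""
    return k_b * (1.0 / math.sqrt(d_prod_mm) - 1.0 / math.sqrt(d_feed_mm))

# Calibrate the Bond constant from the data given in the exercise:
# 23 kW/(kg/s) = 23 kJ/kg for a 50 mm -> 5 mm reduction.
e_known = 23.0  # kJ/kg
k_b = e_known / (1.0 / math.sqrt(5.0) - 1.0 / math.sqrt(50.0))

# Hypothetical follow-up reduction (assumed sizes; the original text is truncated):
print(round(k_b, 1))                          # Bond constant, kJ*mm^0.5/kg
print(round(bond_energy(k_b, 5.0, 0.5), 1))   # e.g. energy to go from 5 mm to 0.5 mm
```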
Column stack and wall panel classification (column and wall:ACI 318)
Slenderness ratio
For columns: The slenderness ratio, k lu/r, of the restrained length (note: not necessarily the stack length – it will be longer if there is no restraint at either end of the stack) about each axis
is calculated as follows:
(k lu/r)[y] = k * lu[y] / √(I[y] / A[g])
(k lu/r)[z] = k * lu[z] / √(I[z] / A[g])
slenderness ratio = k * lu / r
k is an effective length factor
lu[y] is the unsupported column length in respect of major axis (y axis)
lu[z] is the unsupported column length in respect of minor axis (z axis)
r[y] is the radius of gyration of the column in the y-direction
r[z] is the radius of gyration of the column in the z-direction
I[y] is the second moment of area of the stack section about the major axis (y axis)
I[z] is the second moment of area of the stack section about the minor axis (z axis)
A[g] is the cross-sectional area of the stack section
For unbraced columns
IF (k lu/r)[y] ≤ 22
THEN slenderness can be neglected and column can be designed as short column
ELSE, column is considered as slender
IF (k lu/r)[z] ≤ 22
THEN slenderness can be neglected and column can be designed as short column
ELSE, column is considered as slender
For braced columns
IF (k lu/r)[y] ≤ MIN((34-12*M1/M2), 40)
THEN slenderness can be neglected and column can be designed as short column
ELSE, column is considered as slender
IF (k lu/r)[z] ≤ MIN((34-12*M1/M2), 40)
THEN slenderness can be neglected and column can be designed as short column
ELSE, column is considered as slender
M1 = the smaller factored end moment on the column, to be taken as positive if member is bent in single curvature and negative if bent in double curvature
= MIN [ABS(M[top]), ABS(M[bot])]
M2 = the larger factored end moment on the column always taken as positive
= MAX [ABS(M[top]), ABS(M[bot])] | {"url":"https://support.tekla.com/doc/tekla-structural-designer/2024/ref_columnstackandwallpanelclassificationcolumnandwallaci318","timestamp":"2024-11-09T23:58:15Z","content_type":"text/html","content_length":"58829","record_id":"<urn:uuid:f8c17708-40ef-4cab-affd-aecfd9351a57>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00009.warc.gz"} |
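A minimal numerical sketch of the braced-column check above, in plain Python (not part of Tekla Structural Designer); the section properties in the example call are placeholder values.

```python
import math

def slenderness_ratio(k: float, lu: float, I: float, Ag: float) -> float:
    """k*lu/r with r = sqrt(I/Ag)."""
    r = math.sqrt(I / Ag)
    return k * lu / r

def braced_column_is_short(k: float, lu: float, I: float, Ag: float,
                           M1: float, M2: float) -> bool:
    """Slenderness may be neglected if k*lu/r <= min(34 - 12*M1/M2, 40) (braced column)."""
    limit = min(34.0 - 12.0 * M1 / M2, 40.0)
    return slenderness_ratio(k, lu, I, Ag) <= limit

# Example with placeholder values (mm, mm^4, mm^2, kNm); M1 is negative for double curvature.
print(braced_column_is_short(k=1.0, lu=3000.0, I=675e6, Ag=90000.0, M1=-20.0, M2=50.0))
```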
Learn some machine learning fundamentals in an afternoon
Here is a plan to learn ML fundamentals in an afternoon by watching some videos on youtube:
Follow this plan
Machine learning fundamentals:
[Stop and drink coffee, eat a snack]
How to address bias and variance:
Extra material:
Test your knowledge
• What is bias?
□ A: Bla
□ B: The inability of a machine learning model (e.g. linear regression) to express the true relationship between X and Y
□ C: Bla
• What is variance?
□ A: The difference in how well a model fits different datasets (e.g. training and test)
□ B: Bla
□ C: Bla
• What problem do regularization, bagging and boosting address?
□ A: Bla
□ B: Bla
□ C: Finds the sweet spot between simple and complicated models
• What is regularization?
• What is bagging?
• What is boosting?
□ A: Bla
□ B: Bla
□ C: Bla
• What is bootstrapping?
□ A: Repeat an experiment a bunch of times until we feel certain about the result
□ B: Repeatly random sample n times (with replacement) from a set of n observations and build up a histogram of any statistic, e.g. the mean.
□ C: Augment a small set of observations with synthetic samples to increase sample size
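For option B of the bootstrapping question, here is a tiny NumPy illustration (my own example, not taken from the linked videos): resample the same n observations with replacement many times and look at the spread of the resampled means.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=30)   # the original n observations

# Bootstrap: resample n observations with replacement, many times,
# and build up the distribution of a statistic (here, the mean).
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])

print(round(data.mean(), 2))                             # point estimate
print(np.percentile(boot_means, [2.5, 97.5]).round(2))   # rough 95% interval
```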
Formulas Question - Referencing Another Sheet
Good morning,
I am trying to calculate an average from data being pulled from another sheet. I want to be able to calculate the average duration, by month, of completed projects.
I basically want to say what is the average of the duration fields if the phase is completed and the end date column is in the month of January and the year 2022.
What is the easiest way to do this? Every way I try, I end up with an error.
Thank you,
• You are going to need an AVG/COLLECT.
=AVG(COLLECT({Duration Column}, {Phase Column}, @cell = "Complete", {Date Column}, AND(IFERROR(MONTH(@cell), 0) = 1, IFERROR(YEAR(@cell), 0) = 2022)))
Mod Divmod | HackerRank
One of the built-in functions of Python is divmod, which takes two arguments a and b and returns a tuple containing the quotient of a/b first and then the remainder a%b.
For example:
>>> print divmod(177,10)
(17, 7)
Here, the integer division is 177/10 => 17 and the modulo operator is 177%10 => 7.
Read in two integers, a and b, and print three lines.
The first line is the integer division a//b (While using Python2 remember to import division from __future__).
The second line is the result of the modulo operator: a%b.
The third line prints the divmod of a and b.
Input Format
The first line contains the first integer, a, and the second line contains the second integer, b.
Output Format
Print the result as described above.
Sample Input

177
10

Sample Output

17
7
(17, 7)
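A straightforward solution sketch; the statement above targets Python 2, but the version below assumes Python 3, so no __future__ import is needed.

```python
# Read the two integers, then print a // b, a % b and divmod(a, b).
a = int(input())
b = int(input())

print(a // b)
print(a % b)
print(divmod(a, b))
```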
Solution to Problem B – Candy
IPSC 1999
Solution to Problem B – Candy
This was the easiest problem of the contest. We provide correct output files and a program solving the problem.
The algorithm is very simple. First, read the input into an array and count the total number of candies in all packets. If the total number of candies is not divisible by the number of packets, it is not possible for all packets to be of the same size. Otherwise, compute the number of candies C that should be in one packet. The smallest number of moves can be achieved in such a way that you always remove one candy from any packet that contains more than C candies and put it into a packet that contains less than C candies. Thus, the total number of moves can be computed as the sum of all candies that have to be removed from all the packets containing more than C candies. Note that in the difficult data set the sum of all candies exceeded 32767 and thus you should use 4-byte integers.
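A short Python sketch of the algorithm just described (my own illustration, not the contest's reference program):

```python
def candy_moves(packets):
    """Return -1 if equalizing is impossible, otherwise the minimum number of moves."""
    total = sum(packets)                 # use wide integers: the sum can exceed 32767
    if total % len(packets) != 0:
        return -1
    target = total // len(packets)
    # Every surplus candy has to be moved exactly once, so the answer is the
    # total surplus above the target packet size.
    return sum(p - target for p in packets if p > target)

print(candy_moves([1, 2, 3, 4, 5]))      # 3 moves, target size is 3
print(candy_moves(list(range(3, 104))))  # the 101-packet block from B1 -> 1275
```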
Answers for the B1 data set could be computed easily by hand. This input contained only two very small and trivial blocks of data (with 5 and 3 packets) and one block with 101 packets. Sizes of
packets in this block made an arithmetic progression 3,4,5,...,103. A well-known formula says that 1+2+3+...+n=n(n+1)/2. The total number of candies could be computed as (1+2+...+103)-1-2=103*104/2-3=
103*52-3=5353. Next we get 5353/101=53, thus each packet should contain 53 candies. We should remove 1 candy from a packet that contains 54 candies, 2 candies from a packet that contains 55 candies,
..., 50 candies from a packet that contains 103 candies. Thus the number of candies moved is 1+2+...+50=50*51/2=25*51=1275.
Since there are three team members and only one computer, it is probably clever to compute at least some of the inputs by hand. On the other hand, this problem was so easy that it might take less time to write a program than to do the computation mentioned above by hand.
Professors (75)
Associate Professors (79)
Assistant Professors (58)
Postdocs (91)
Teachers (11)
Non-academics (54)
Others (6)
Unknowns (15)
Data Update
Should you notice any obsolete information about one or more SISSA alumni, please write to webmaster.math@sissa.it.
Name Origin Year PhD Denomination Position Country
Riccardo Iraso Italy 2018 Geometry and Mathematical Physics Postdoc Germany
Daniele Dimonte Italy 2019 Geometry and Mathematical Physics Postdoc Switzerland
Raffaele Scandone Italy 2018 Mathematical Analysis, Modelling and Applications Postdoc Italy
Noe Angelo Caruso Australia 2019 Mathematical Analysis, Modelling and Applications Postdoc Italy
Emanuele Tasso Italy 2019 Mathematical Analysis, Modelling and Applications Postdoc Austria
Maicol Caponi Italy 2019 Mathematical Analysis, Modelling and Applications Postdoc Germany
Zakia Zainib Pakistan 2019 Mathematical Analysis, Modelling and Applications Postdoc Switzerland
Ornela Mulita Albania 2019 Mathematical Analysis, Modelling and Applications Postdoc Germany
William Daniel Montoya Cataño Colombia 2019 Geometry and Mathematical Physics Postdoc Brazil
Konstantin Aleshkin Russia 2019 Geometry and Mathematical Physics Postdoc USA
Matteo Gallone Italy 2019 Geometry and Mathematical Physics Postdoc Italy
Massimo Bagnarol Italy 2019 Geometry and Mathematical Physics Postdoc Germany
Alessandro Carotenuto Italy 2019 Geometry and Mathematical Physics Postdoc Czech Republic
Carlo Scarpa Italy 2020 Geometry and Mathematical Physics Postdoc Italy
Xiao Han China 2020 Geometry and Mathematical Physics Postdoc Poland
Saddam Hijazi Palestine 2020 Mathematical Analysis, Modelling and Applications Postdoc Germany
Filippo Riva Italy 2020 Mathematical Analysis, Modelling and Applications Postdoc Italy
Luca Franzoi Italy 2020 Mathematical Analysis, Modelling and Applications Postdoc United Arab Emirates
Daniele Agostinelli Italy 2020 Mathematical Analysis, Modelling and Applications Postdoc Canada
Monica Nonino Italy 2020 Mathematical Analysis, Modelling and Applications Postdoc Austria
Ekaterina Mukoseeva Russia 2020 Mathematical Analysis, Modelling and Applications Postdoc Finland
Luca Tamanini Italy 2017 Mathematical Analysis, Modelling and Applications Postdoc Germany
Federico Murgante Italy 2023 Mathematical Analysis, Modelling and Applications Postdoc Italy
Tommaso Rossi Italy 2021 Geometry and Mathematical Physics Postdoc Germany
Francesco Boarotto Italy 2016 Applied Mathematics Postdoc France
Ivan Yuri Violo Italy 2021 Mathematical Analysis, Modelling and Applications Postdoc Finland
Francesco Nobili Italy 2021 Mathematical Analysis, Modelling and Applications Postdoc Finland
Emanuele Caputo Italy 2021 Mathematical Analysis, Modelling and Applications Postdoc Finland
Giuliano Klun Italy 2021 Mathematical Analysis, Modelling and Applications Postdoc Italy
Francesco Sapio Italy 2021 Mathematical Analysis, Modelling and Applications Postdoc Austria | {"url":"https://www.math.sissa.it/alumni?order=field_alumni_position&sort=desc&page=4","timestamp":"2024-11-07T17:05:54Z","content_type":"application/xhtml+xml","content_length":"63369","record_id":"<urn:uuid:4f21a897-7b68-41d0-9713-209b2987e9ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00099.warc.gz"} |
Hybrid quantum search with genetic algorithm optimization
Department of Computer and Information Technology, University Politehnica of Timisoara, Timisoara, Timis, Romania
Academic Editor
Subject Areas
Quantum computing, Quantum genetic algorithms, Genetic algorithm optimization, Hybrid quantum genetic algorithm
© 2024 Ardelean and Udrescu
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and
for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
Cite this article
Ardelean and Udrescu. 2024. Hybrid quantum search with genetic algorithm optimization. PeerJ Computer Science 10:e2210 https://doi.org/10.7717/peerj-cs.2210
Quantum genetic algorithms (QGA) integrate genetic programming and quantum computing to address search and optimization problems. The standard strategy of the hybrid QGA approach is to add quantum
resources to classical genetic algorithms (GA), thus improving their efficacy (i.e., quantum optimization of a classical algorithm). However, the extent of such improvements is still unclear.
Conversely, Reduced Quantum Genetic Algorithm (RQGA) is a fully quantum algorithm that reduces the GA search for the best fitness in a population of potential solutions to running Grover’s algorithm.
Unfortunately, RQGA finds the best fitness value and its corresponding chromosome (i.e., the solution or one of the solutions of the problem) in exponential runtime, O(2^(n/2)), where n is the number
of qubits in the individuals’ quantum register. This article introduces a novel QGA optimization strategy, namely a classical optimization of a fully quantum algorithm, to address the RQGA complexity
problem. Accordingly, we control the complexity of the RQGA algorithm by selecting a limited number of qubits in the individuals’ register and fixing the remaining ones as classical values of ‘0’ and
‘1’ with a genetic algorithm. We also improve the performance of RQGA by discarding unfit solutions and bounding the search only in the area of valid individuals. As a result, our Hybrid Quantum
Algorithm with Genetic Optimization (HQAGO) solves search problems in O(2^((n−k)/2)) oracle queries, where k is the number of fixed classical bits in the individuals' register.
Genetic algorithms (GAs) represent a widely used heuristic method for search and optimization problems inspired by evolutionary theory (Spector, 2004; Matoušek, 2009). In their simplest form—without
losing generality—individuals’ chromosomes encode candidate solutions as binary arrays. The GA has four phases: initialization, selection, reproduction & mutation, and termination (After
initialization, the selection and reproduction & mutation phases are repeated in a loop until some condition is met, and the algorithm enters the termination phase.) In the initialization phase, the
GA begins with a randomly generated population of chromosomes; the population evolves over multiple generations (each performing selection and reproduction) in search of an optimal solution (
Lahoz-Beltra, 2016). Accordingly, each generation’s chromosomes are evaluated on the basis of the fitness function to select the best individuals. A new generation evolves from the previous one by
recombining and mutating selected individuals’ chromosomes. Consequently, individuals with higher quality have a higher probability of being copied by the next generation, hence improving the
population’s average fitness.
However, even with sophisticated GA search strategies such as elitism or adaptive parameter control, or dedicated hardware to parallelize and accelerate GAs, classical computation often achieves only
marginal performance improvements over deterministic approaches (Spector, 2004; Udrescu, Prodan & Vlăduţiu, 2006). To further pursue performance, quantum computation emerges as one of the possible GA
implementation solutions due to its specific features, such as entanglement, interference, and exponential parallelism (Nielsen & Chuang, 2002; Spector, 2004). The general approach in trying to
combine genetic algorithms with quantum computing is to optimize genetic operators using quantum features (Lahoz-Beltra, 2023); in this article, we turn the tables by proposing a classical (i.e.,
genetic algorithm) optimization of a purely quantum search (i.e., the RQGA algorithm (Udrescu, Prodan & Vlăduţiu, 2006)).
The remainder of this article is organized as follows: section State of the Art surveys the similar solutions to combining GAs with quantum computing, section Background describes the purely quantum
RQGA search algorithm that we optimize with a genetic algorithm, section Algorithm Design details our proposed HQAGO solution to fixing qubits in the RQGA individual register and analyzes its time
complexity, section Results shows the results obtained by simulating HQAGO in the context of concrete optimization problems (knapsack and graph coloring), and section Conclusions discusses our
findings, their implications, and potential impact. Portions of this text describing the algorithm were previously published as part of a preprint (https://doi.org/10.21203/rs.3.rs-3009060/v1).
State of the art
The literature proposes several quantum-implemented GAs—mostly algorithms that combine classical and quantum operators (Lahoz-Beltra, 2016). GAs have also been used for quantum circuit synthesis, as
presented in Ruican et al. (2007) and Ruican et al. (2008), and as evolutionary strategies that can evolve and scale up small quantum algorithms (Gepp & Stocks, 2009). From an implementation
perspective, these trends are assembled under the term Quantum Evolutionary Programming (QEP), which largely consists of Quantum-Inspired Genetic Algorithms (QIGA) or Hybrid Genetic Algorithms (HGA).
QIGAs and HGAs are algorithms that mix classical computation with quantum operators, using qubits for chromosome representations and quantum gates for operators. We have just a few examples of fully
Quantum Genetic Algorithms (QGA), which focus on implementing genetic algorithms searches on quantum hardware (Giraldi, Portugal & Thess, 2004; Lahoz-Beltra, 2016).
In addition to these developments, recent optimization strategies offer promising avenues for enhancing QEP methodologies. For instance, Escobar-Cuevas et al. (2024b) introduces a novel method that
leverages evolutionary game theory for optimization. The proposed method initializes all individuals using the Metropolis-Hasting technique. The algorithm continuously adapts and refines the
strategies of each individual based on performance—based on the interactions and the competition between individuals—in search of the global optimum or near-optimal solution. Similarly,
Escobar-Cuevas et al. (2024a) presents a method that combines a hybrid search mechanism with the fuzzy optimization approach that shows improvements in terms of solution quality, dimensionality,
similarity, and convergence criteria (Escobar-Cuevas et al., 2024a).
QIGAs start with generating an initial population of n-qubit chromosomes; then, the best solution is selected and stored by observing and evaluating the chromosomes. The algorithm evolves by
performing a classical evaluation of individual chromosomes and generating a new population, using classical and quantum operators (Giraldi, Portugal & Thess, 2004; Lahoz-Beltra, 2016). QGAs also
start with a population of qubit-encoded chromosomes, but the following steps use only quantum operators. A QIGA consisting of a classical genetic algorithm with quantum crossover operation applied
on all chromosomes in parallel can achieve quadratic speedup over its conventional counterpart; the complexity of such a QIGA is $\mathcal{O}(\tilde{N}\,\mathrm{poly}(\log\tilde{N}\log N))$, where $\tilde{N} \le N$, $\tilde{N}$ is the number of individuals in a generation and N is the total number of individual chromosomes (SaiToh, Rahimi & Nakahara, 2014). Quantum
Genetic Optimization Algorithm (QGOA) is a QIGA that combines quantum selection with classical operations performing crossover, mutation, and substitution (Malossini, Blanzieri & Calarco, 2008).
Another QIGA approach introduces a new way of implementing GA operators on quantum hardware to aim for better runtimes; however, the proposed QIGA only converges towards suboptimal solutions, and its
complexity is uncertain (Acampora & Vitiello, 2021).
RQGA is a fully quantum genetic algorithm based on Grover’s quantum search, which does not have genetic operators such as mutation and crossover. Compared to the 4-phases (initialization, fitness
assessment, variation, and selection) QIGAs, the RQGA performs only initialization, fitness assessment, and selection. There is no need for a variation stage in RQGA since the individuals’ register
encodes the entire search space as a superposition of chromosome codes. Therefore, in the initialization phase, the population is generated as a basis-state superposition of all possible binary
combinations (Udrescu, Prodan & Vlăduţiu, 2006). In this way, RQGA provides a solution that consists in finding the best individual/chromosome with a specially designed oracle that works with a
modified version of the maximum finding algorithm (Ahuja & Kapoor, 1999). Overarchingly, RQGA represents a method that reduces any Quantum Genetic Algorithm (QGA) to a Grover search (Grover, 1996).
Therefore, the complexity of RQGA is $\mathcal{O}\left(\sqrt{{n}_{i}}\right)$ Grover iterations (where n[i] is the number of items) in a search space with n[i] = 2^n items (where n is the number of
qubits in the search register), or $\mathcal{O}\left({2}^{n/2}\right)$.
The main objective of this article is to reduce the complexity of the RQGA search by using classical optimization approaches. Consequently, the main contributions of this article are:
The main achievement of our HQAGO approach is that it allows for scanning the space between a pure classical GA (i.e., all positions in the individuals’ register are classical bits, ‘0’ or ‘1’) and
RQGA (where the individuals’ register has only qubits).
Pure classical GA has a moderate probability of finding the best solutions; however, their complexity can be restricted by the termination condition that limits the number of generations. The
pure-quantum RQGA has a very high probability of finding the best solutions (according to Grover’s algorithm); their complexity is exponential with the number n of qubits in the individuals’
register. HQAGO maintains RQGA’s high probability of finding the best solutions while significantly reducing the search complexity by limiting the number of qubits in the individuals’ register.
• A novel GA-based method of reducing the number of qubits required in the individuals’ register of the RQGA. Our classical GA, combined with RQGA, or HQAGO, fixes the value of k bits in the n
-qubit individuals’ register as classical ‘0’ or ‘1’, while the other register positions remain quantum (i.e., qubits). Therefore, considering that Grover’s algorithm delivers the complexity of
the search, our HQAGO is $\mathcal{O}(2^{(n-k)/2})$. By controlling k, we can control the complexity of the search so that the probability of finding a solution remains
high; however, the number of required Grover iterations is reduced because a limited number of qubits means a reduced search space (see Fig. 1).
• A new method to discard unfit solutions and bound the search only in the area of valid individuals. This way, we reduce the number of Grover algorithm runs to find the best fitness value.
• Series of Qiskit simulations of HQAGO implementations for solving the knapsack optimization and graph coloring problems that show that best search solutions can be found even for relatively large
k values, which consequently entail a drastically reduced search space and a much lower computational complexity.
Since HQAGO builds upon the pure-quantum RQGA, this section details the RQGA implementation and analyzes its complexity. RQGA takes a superposition of all possible individual chromosomes
(representing potential solutions for the search problem) in the individuals' register $|u\rangle_{ind}$ and computes the corresponding fitness values in the fitness register $|f(u)\rangle_{fit}$. RQGA uses Grover's algorithm (Grover, 1996) to augment the quantum amplitude of the basis state in $|u\rangle_{ind}$ that corresponds to the best fitness values. Thus, when we measure the fitness register $|f(u)\rangle_{fit}$, we get the best fitness value (or one of the best fitness values) with a high probability. The post-measurement state will have only the individual code (or a
superposition of individual codes) that produces the best fitness. In any of these cases, measuring the individuals’ register will return the solution.
RQGA is a framework built around a modified version of the quantum maximum finding algorithm proposed by Ahuja & Kapoor (1999). This approach reduces the problem of finding the maximum fitness
individual to a Grover search (Udrescu, Prodan & Vlăduţiu, 2006), which requires $\mathcal{O}\left(\sqrt{N}\right)$ Grover iterations (Nielsen & Chuang, 2002). RQGA encodes the search space on n
qubits; therefore, in our case, N = 2^n. Accordingly, as RQGA maintains the number of oracle queries of the quantum maximum finding algorithm, namely $\mathcal{O}\left(\sqrt{N}\right)$, RQGA’s
complexity becomes $\mathcal{O}\left({2}^{n/2}\right)$. In Algorithm 1 , we present the main steps of RQGA.
RQGA’s worth is that it uses Grover’s and maximum finding algorithms to simplify QGAs. However, its main drawback is that it still requires an exponential runtime; for a search space of size 2^n, its
complexity is $\mathcal{O}\left({2}^{n/2}\right)$ Grover iterations (Udrescu, Prodan & Vlăduţiu, 2006). This situation calls for a solution to reduce or control the algorithm’s complexity.
Algorithm 1 The main steps of RQGA (Udrescu et al., 2006)
1: Prepare |ψ⟩₁ as a superposition of all individual–fitness register pairs (|u⟩_ind ⊗ |0⟩_fit), as presented in Equation 1.
2: Choose max ∈ [2^(m+1), 2^(m+2) − 1) randomly, where m is the number of qubits in the fitness register.
3: Apply the unitary operation corresponding to the fitness function f: |ψ⟩₂ = U_fit|ψ⟩₁ = (1/√(2^n)) Σ_{u=0}^{2^n−1} |u⟩_ind ⊗ |f(u)⟩_fit
4: repeat
5:   Use the oracle O to mark (i.e., change to a negative phase) all basis states in the fitness register that correspond to f(u) ≥ max. (|ψ⟩₃ = O|ψ⟩₂)
6:   Use Grover iterations to augment the quantum amplitude corresponding to the marked fitness values. Then measure the fitness register, obtaining |ψ⟩₄ = |u⟩_ind ⊗ |f(u)⟩_fit, with f(u) ≥ max.
7:   max := f(u).
8: until max value is not improved.
9: Return the chromosome value u_max (corresponding to max), namely |u_max⟩_ind ⊗ |f(u_max)⟩_fit, with f(u_max) = max. Therefore, u_max represents the individual/chromosome that generates the highest fitness.
Algorithm design
In this article, we reduce the RQGA exponential runtime by limiting the number of qubits in the search register. Our novel Hybrid Quantum Algorithm with Genetic Optimization (HQAGO) algorithm selects
a bounded number of qubits in the individuals’ register and fixes the remaining ones as classical values of ‘0’ and ‘1’; a classical genetic optimization algorithm selects the qubits’ positions and
determines the values of the fixed bits. Compared to RQGA, where the population contains both valid and non-valid individuals, HQAGO also modifies the initialization step to search only in the valid
individuals’ space. (An individual chromosome is valid if it meets a condition specific to the search or optimization problem; it is non-valid otherwise.) With the HQAGO procedure presented in Fig. 2
, we reduce the number of Grover iterations, thus improving the algorithm’s performance, at the cost of adding complexity—entailed by genetic optimization—to the RQGA design.
The overview of applying genetic algorithm optimization to reduce the number of Grover iterations entailed by running the RQGA algorithm.
The conventional genetic algorithm determines the fixed qubits’ positions (presented with gray background) and their binary values in the individuals’ register of the Reduced Quantum Genetic
Algorithm, thus controlling the number of qubits in the individual/chromosome quantum register and reducing the number of Grover iterations required.
Like RQGA, the HQAGO starts by initializing a superposition of all individual-fitness register pairs as

(1)  $|\psi\rangle_1 = \frac{1}{\sqrt{2^n}} \sum_{u=0}^{2^n-1} |u\rangle_{ind} \otimes |0\rangle_{fit},$

where $|u\rangle_{ind} \otimes |0\rangle_{fit}$ is the individual-fitness register pair and n is the number of qubits in the individuals' quantum register. The individual is encoded on n qubits; therefore, we have 2^n basis states in the superposition.

Given the individual quantum register $|u\rangle_{ind}$, we apply the classical GA to fix a subset of k qubits (i.e., assign them classical values of 0 and 1), 0 ≤ k ≤ n. Before fixing qubits in the individuals' register, in Equation (1) we have $|u\rangle \in S = \{0, 1, 2, \dots, 2^n - 1\}$; u ∈ ℕ is binary-encoded (u = b[0]b[1]…b[n−1], where b[i] ∈ {0, 1}, i = 0, …, n−1). When we assign classical binary values to k of the b[i], we get $|u\rangle \in S_k \subseteq S$; the cardinality of S_k is |S_k| = 2^(n−k) elements. For example, for n = 4 and k = 2, we have u = b[0]b[1]b[2]b[3], and fix b[1] = 1 and b[2] = 0; in this case, S_k = {4, 5, 12, 13} (in binary, {0100, 0101, 1100, 1101}, where the second and third bits are the fixed ones). Thus, we obtain the next state

(2)  $|\psi\rangle_2 = GA|\psi\rangle_1 \longmapsto \frac{1}{\sqrt{2^{n-k}}} \sum_{u \in S_k} |u\rangle_{ind} \otimes |0\rangle_{fit}.$
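To make the reduced search space concrete, the following classical snippet (an illustration only, not part of the quantum circuit) enumerates S_k for the n = 4, k = 2 example above, taking b[0] as the most significant bit:

```python
def reduced_space(n, fixed):
    """Enumerate S_k: all n-bit values u = b_0 b_1 ... b_{n-1} whose fixed positions match."""
    s_k = []
    for u in range(2 ** n):
        bits = format(u, f"0{n}b")            # bits[i] corresponds to b_i (MSB first)
        if all(bits[pos] == str(val) for pos, val in fixed.items()):
            s_k.append(u)
    return s_k

# n = 4, fix b_1 = 1 and b_2 = 0 (the example from the text):
print(reduced_space(4, {1: 1, 2: 0}))   # [4, 5, 12, 13]
```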
We present the complete initialization phase in Algorithm 2 , and the conventional GA chromosome initialization in Algorithm 3 .
Algorithm 2 HQAGO initialization, identical with RQGA
1: Initialize the n-qubit individual quantum register |u⟩ = |0⟩^⊗n
2: Initialize the (m+1)-qubit fitness quantum register |fitness_u⟩ = |0⟩^⊗(m+1)
3: Initialize the oracle workspace 1-qubit quantum register |ws⟩ = |0⟩
4: Create the quantum circuit QC
5: Apply the conventional GA such that GA_solution = GA(|u⟩), where GA_solution encodes the values and the positions of the fixed k qubits.
6: |u⟩ = H^⊗n |0⟩^⊗n ↦ (1/√(2^n)) Σ_{u=0}^{2^n−1} |u⟩
7: |ws⟩ = H|0⟩ ↦ (1/√2)(|0⟩ + |1⟩)
Algorithm 3 Conventional genetic algorithm individual initialization
1: while gene is not generated do
2: Generate a random value that represents the gene value.
3: if gene is not already generated then
4: Randomly generate the sign of the gene.
5: if sign is 0 then
6: Gene value is negated.
7: end if
8: Append gene value.
9: end if
10: end while
The conventional GA searches for the optimal configurations that maximize the fitness, given the search space limitations dictated by fixing qubits in the individuals’ register. The GA is a classical
(i.e., non-quantum) algorithm that starts by generating an initial population according to Algorithm 3 and then calculates the fitness for each chromosome. To define a format that encodes each fixed
qubit’s value and position in the register, we define a constraint on the chromosome format. As such, we consider that the absolute value of the gene v encodes the position of the fixed qubit; the
sign of the gene encodes the fixed qubit’s value. Therefore, a negative v means ‘0’ on position/index v in the individual quantum register (b[v] = 0), while a positive v means ‘1’ on position v in
the individual quantum register (b[v] = 1). In Fig. 3 we present an example of chromosome encoding in the conventional GA’s population.
An example of chromosome encoding.
The absolute value of the gene v encodes the position of the fixed qubit while the gene encodes the fixed qubit’s value. −v[i], $i=\overline{0,k-1}$ means ‘0’ on position v[i] in the individual
quantum register, while +v[i] means ‘1’ on position v[i] in the individual quantum register. N[GA] is the number of individuals in the conventional GA’s population.
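The gene convention of Fig. 3 can be decoded with a few lines; this is an illustrative classical helper, not the article's implementation:

```python
def decode_chromosome(genes):
    """Map a classical-GA chromosome to {qubit position: fixed bit value}."""
    fixed = {}
    for v in genes:
        # +v fixes b_|v| = 1, -v fixes b_|v| = 0.
        # Note: position 0 cannot carry a sign, which is a limitation of this toy sketch.
        fixed[abs(v)] = 1 if v > 0 else 0
    return fixed

# Example: fix qubit 1 to '1' and qubit 2 to '0'
print(decode_chromosome([+1, -2]))   # {1: 1, 2: 0}
```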
The classical GA evolves the population of chromosomes across multiple generations in search of the maximum fitness (which corresponds to the solution). Each generation of chromosomes is evaluated to
select the fittest individuals; we used a probabilistic method where the chances of being selected are proportional to the respective fitness values (Spector, 2004). The percentage of the population
selected for crossover is 32% (similar to other classical GA approaches) (Stanhope & Daida, 1998). Then, we perform fixed point crossover and random mutation (with an adaptive mutation rate) to
obtain a new generation of offspring chromosomes. (We did not use elitism for the fittest individuals.) The termination conditions are met when, as shown in Algorithm 4 , we find an optimal solution
(corresponding to the maximum fitness) or the number of generations exceeds a maximum number (which is given as a parameter). In Fig. 4A, we present the conventional GA operator symbol that we
integrate in the HQAGO design, while Fig. 4B presents the circuit implementation of the operator.
Classical GA circuit applied on the individuals’ quantum register.
(A) We present the GA operator symbol while in (B) we present the gate-level implementation of the circuit.
Algorithm 4 Conventional Genetic Algorithm optimization
1: for each individual in population do
2:   Initialize individual.
3:   Calculate fitness.
4: end for
5: Select fittest individuals from population.
6: while fitness < maximum fitness and maximum number of generations not exceeded do
7:   Save the fittest individuals (selection) in order to form the new population.
8:   Apply crossover operation on selected individuals and save the offsprings.
9:   Mutate the new population resulting from the fittest individuals and the offsprings.
10:  Select the fittest individuals from the new population.
11: end while
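For readers less familiar with the operators named above, here is a compact, generic sketch of roulette-wheel selection and single-point crossover (my own simplification; the article's GA parameters and representation are not reproduced here):

```python
import random

def roulette_select(population, fitness, k):
    """Pick k parents with probability proportional to fitness (roulette wheel)."""
    return random.choices(population, weights=fitness, k=k)

def single_point_crossover(parent_a, parent_b):
    """Cut both parents at the same random point and swap the tails."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

pop = [[-1, 2, 3], [1, -2, 4], [0, 3, -4], [2, -3, 4]]
fit = [5.0, 8.0, 2.0, 6.0]
mom, dad = roulette_select(pop, fit, k=2)
print(single_point_crossover(mom, dad))
```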
The next step in HQAGO is to calculate the superposed fitness values of all individuals in the fitness register. Such a quantum fitness function maintains the correlation between each individual and
its corresponding fitness value; it is applied to valid and non-valid individuals. Thus, as presented in Udrescu, Prodan & Vlăduţiu (2006) the assessment operator U[fit], is a unitary operator
characterized by a Boolean fitness function f[fit]: {0, 1}^n → {0, 1}^(m+1),

(3)  $f_{fit}(x) = \begin{cases} 0 \times \{0,1\}^m, & x \text{ is a non-valid individual} \\ 1 \times \{0,1\}^m, & \text{otherwise}, \end{cases}$

where m represents the number of qubits in the fitness register.
The fitness value is encoded using (m + 1)-qubits with the most significant one indicating the validity of the individual; when the most significant bit is ‘0’, it means a non-valid individual; when
‘1’, it means a valid one. As such, the values returned by f[fit] represented in two’s complement belong to distinct fitness areas corresponding to valid and non-valid individuals (a non-valid
chromosome configuration represents a combination that does not satisfy some given conditions) (Udrescu, Prodan & Vlăduţiu, 2006). Naturally, U[fit], characterized by the fitness function f, is a unitary operator,

(4)  $U_{fit}: |u\rangle_{ind} \otimes |0\rangle_{fit} \longmapsto |u\rangle_{ind} \otimes |f_{fit}(u)\rangle_{fit},$

where $|u\rangle_{ind} \otimes |\cdot\rangle_{fit}$ is the individual-fitness value quantum register pair ($|\cdot\rangle$ stands for either $|0\rangle$ or $|f_{fit}(u)\rangle$).
Explicitly applying the U[fit] operator on all superposed individuals means

(5)  $|\psi\rangle_3 = U_{fit}|\psi\rangle_2: \frac{1}{\sqrt{2^{n-k}}}\sum_{u\in S_k}|u\rangle_{ind}\otimes|0\rangle_{fit} \longmapsto \frac{1}{\sqrt{2^{n-k}}}\sum_{u\in S_k}|u\rangle_{ind}\otimes|f_{fit}(u)\rangle_{fit}.$
In Fig. 5, we present the symbol of the U[fit] operator, with input and output qubits; Fig. 6 shows the gate-level implementation of the operator. Algorithm 5 explains the assessment by fitness
The symbol of the U[fit] circuit, its inputs and outputs.
The gate-level implementation of the U[fit] sub-circuit utilizes n-qubit Controlled-not gates (Nielsen & Chuang, 2002).
The qubits from the individuals’ register are control qubits, and the qubits from the fitness registers are target qubits. v is the valid qubit that indicates the validity of the corresponding
Algorithm 5 Assessment operation
1: for each individual in population do
2: Calculate fitness
3: Apply Ufit operator
4: if fitness value is valid then
5: Mark individual as valid by setting fM = 1.
6: end if
7: end for
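Classically, the word that U[fit] writes into the fitness register can be pictured as an (m + 1)-bit value whose most significant bit flags validity. The sketch below models only that encoding; the validity predicate and raw fitness function are generic placeholders, not the article's operators:

```python
def encoded_fitness(individual, raw_fitness, is_valid, m):
    """Return the (m+1)-bit fitness word: MSB = validity flag, low m bits = fitness."""
    value = raw_fitness(individual) & ((1 << m) - 1)   # keep the fitness on m bits
    flag = 1 if is_valid(individual) else 0
    return (flag << m) | value

# Toy example: fitness = number of set bits, valid = at most 2 bits set, m = 4
raw = lambda u: bin(u).count("1")
valid = lambda u: bin(u).count("1") <= 2
print(format(encoded_fitness(0b0101, raw, valid, m=4), "05b"))  # '10010' -> valid, fitness 2
print(format(encoded_fitness(0b0111, raw, valid, m=4), "05b"))  # '00011' -> non-valid
```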
In the next step, we apply the Oracle and Grover diffuser (i.e., the Grover iteration) $\mathcal{O}\left(\sqrt{{2}^{\left(n-k\right)}}\right)$ times. Like in RQGA, we generate a random value max
∈ ℕ, max > 0 in the interval [2^m+1, 2^m+2 − 1), such that the search for the individual with the highest fitness will occur in the valid individuals’ area (Udrescu, Prodan & Vlăduţiu, 2006). The
oracle 𝕆 operates on the fitness quantum register qubits except for the validity qubit v (see Fig. 7), and uses two’s complement representation for marking the states with a value greater than max.
(By subtracting max from all fitness values, only the fitnesses equal or greater than max will remain positive and will be marked with a negative phase.)
Grover circuit.
The oracle uses 2 two’s complement quantum adders, 2 Hadamard gates, and 1 n-qubit Controlled-not gate. Max value register is the quantum register storing the max value, while c[0] and c[1] are the
carry qubits used in the subtraction and addition circuits; v is the valid qubit that indicates the validity of the corresponding chromosome; ws is the oracle workspace qubit (Udrescu, Prodan &
Vlăduţiu, 2006). The diffuser utilizes Hadamard, Pauli-X, and n-qubit Controlled-not gates.
Accordingly, the oracle $\tilde{\mathbb{O}}_{max}(f_{fit}(u))$ is applied on the register $|\cdot\rangle_{fit}$ from state $|\psi\rangle_3$,

(6)  $|\psi\rangle_4 = \tilde{\mathbb{O}}_{max}|\psi\rangle_3 \longmapsto (-1)^{g(u)} \frac{1}{\sqrt{2^{n-k}}} \sum_{u \in S_k} |u\rangle_{ind} \otimes |f_{fit}(u)\rangle_{fit},$

where

(7)  $g(u) = \begin{cases} 1 & \text{if } |f_{fit}(u)\rangle_{fit} \geqslant max \\ 0 & \text{otherwise.} \end{cases}$
The oracle 𝕆 is implemented using two’s complement quantum adders and subtractors (Udrescu, Prodan & Vlăduţiu, 2006); it is applied on the entire fitness register, except for the validity qubit.
Using two’s complement addition does not affect the correlation between the individual and its corresponding fitness value since addition is a pseudo-classical permutation function. Hence, by
subtracting and adding max + 1 to the fitness register, all basis states for which the fitness value is greater than max + 1 are marked by multiplying their amplitudes with −1. (In other words,
marking a superposed state means its amplitude becomes negative.)
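The marking rule can be checked classically: subtract max from the fitness word in two's complement and read the sign bit; values greater than or equal to max stay non-negative and are exactly the states whose phase the oracle flips. A small sketch with an arbitrary register width (my own illustration):

```python
def is_marked(fitness, max_value, width=8):
    """Emulate the oracle test: fitness - max_value in two's complement, sign bit clear."""
    diff = (fitness - max_value) & ((1 << width) - 1)   # two's complement on `width` bits
    sign_bit = (diff >> (width - 1)) & 1
    return sign_bit == 0                                # non-negative <=> fitness >= max_value

for f in (40, 75, 90):
    print(f, is_marked(f, max_value=75))   # only 75 and 90 get marked
```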
Quantum subtractor design on 8-qubit numbers using QRCA as in (Cuccaro et al., 2004), where a[0], a[1], …, a[7] represent the first operand and b[0], b[1], …, b[7] encode the second operand.
We may consider Quantum Carry Look-Ahead Adder (QCLAA), as presented in Cheng & Tseng (2002), or Quantum Ripple Carry Adder (QRCA), see Cuccaro et al. (2004), as possible implementations for the
quantum adders. Figure 8 presents the gate-level implementation of the subtractor using QRCA. We opted for a ripple-carry adder because it offers an advantage over the Quantum Carry Look-Ahead Adder
(QCLAA) in terms of the number of qubits used. For an n-qubits individuals’ register and $\mathcal{O}\left(\sqrt{{2}^{n-k}}\right)$ Grover iterations, using the QRCA circuit requires $2\sqrt{{2}^
{n-k}}+1$ carry qubits (1 carry-in qubit and 2 qubits for carry-out in each iteration—1 carry-out qubit for each adder). The QCLAA requires a total of $2\left(n-k+1\right)$ carry qubits in each
iteration, namely n − k + 1 carry qubits for each adder. Therefore, from the perspective of the additional required qubits, using QCLAA is not an acceptable solution for our implementation.
Next, we iterate the Grover diffuser $\sqrt{2^{n-k}}$ times to augment the amplitudes of the marked states $|\psi\rangle_i = |f_{fit}(u)\rangle_i$ with $f_{fit}(u) \geqslant max$ in the fitness register; thus, the resulting population becomes

(8)  $|\psi\rangle_5 = \mathbb{G}|\psi\rangle_4.$

In Algorithm 6 we present the effects of using the Grover circuit implemented according to Fig. 7, where $|ws\rangle$ represents the workspace.
Algorithm 6 Grover algorithm
1: Subtract the max value from the fitness values
2: |ws⟩ = H|ws⟩
3: |ws⟩ = CNOT(|fitness_u⟩, |ws⟩)
4: |ws⟩ = H|ws⟩
5: Add the max value to the fitness values
6: Use Grover iteration to find the marked states, |ψ⟩ = |f_fit(u)⟩_i with f_fit(u) ≥ max, in the fitness register.
After iterating the Grover diffuser $\sqrt{{2}^{n-k}}$ times, we measure the fitness register $|•〉$ to obtain (with a high probability) a fitness value ≥max in $|•〉$; thus, in the individual
register, we get a superposition of individuals that generate fitness values ≥max. We then update the max value with the measured fitness value. The entire Grover algorithm procedure is applied
multiple times until the max value is no longer improved, and the measured fitness value corresponds to the solution (or one of the solutions). To find the solution that solves our problem, we need
to measure the individual register (in this state, the individual register is a superposition of individuals that generate the highest fitness). Algorithm 7 presents the entire implementation of our
HQAGO method, and in Fig. 9 the circuit implementation. (In Supplemental Information, Knapsack problem example, we present a step-by-step example of how Algorithm 7 works on an instance of the
Knapsack problem.)
Hybrid quantum algorithm with genetic optimization circuit implementation.
The u qubits make out the individuals’ quantum register, f qubits represent the fitness quantum register, while v is the valid qubit; the val qubits represent the max value (Udrescu, Prodan &
Vlăduţiu, 2006). The carry-in and carry-out qubits used by adder sub-circuits are c[0] and c[1]. For simplicity, we represent only one Grover Iteration and one maximum finding iteration.
Algorithm 7 Hybrid Quantum Algorithm with Genetic Optimization
1: |ψ⟩₁ = (1/√(2^n)) Σ_{u=0}^{2^n−1} |u⟩_ind ⊗ |0⟩_fit.
2: Apply the conventional Genetic Algorithm outcome, |ψ⟩₂ = GA|ψ⟩₁.
3: Apply the unitary operation U_fit corresponding to fitness computation, |ψ⟩₃ = U_fit|ψ⟩₂
4: Randomly generate the real value max ∈ [2^(m+1), 2^(m+2) − 1)
5: repeat   ⊳ Iterates N_mf times, where N_mf represents the number of iterations of the maximum finding algorithm.
6:   Apply the oracle O on the entire fitness register except for the validity qubit. |f_fit(u)⟩_fit basis states are marked if |f_fit(u)⟩_fit ≥ max.
7:   Use Grover iteration to find the marked states, |ψ⟩ = |f_fit(u)⟩_fit with f_fit(u) ≥ max, in the fitness register.
8:   max = |ψ⟩.
9: until max no longer improves.
10: Measure the |u⟩_ind register in order to obtain the corresponding individual which represents the solution.
Space complexity
Solving real-world problems using quantum algorithms requires large numbers of qubits when accounting for error correction. As mentioned in (Tănăsescu, Constantinescu & Popescu, 2022), factoring a
2,048-bit number using Shor’s algorithm (Shor, 1994) requires 400,000 qubits when error correction is accounted for. In our previous work, see (Ardelean & Udrescu, 2022a), we showed that for solving
the knapsack problem, the total number of qubits required by RQGA grows exponentially as the number of qubits used for individual representation grows. Thus, to solve real-world problems using error-corrected qubits, it is necessary to implement hybrid solutions that capitalize on the quantum speed advantage and reduce the number of required logical qubits.
Our HQAGO requires n qubits to encode the individuals’ register, m+1 qubits for the fitness register, and m qubits for the max value representation. Additionally, the algorithm requires 2⋅r + 1
carry-qubits in the oracle architecture and one qubit for the Oracle workspace. Altogether, the space complexity of the algorithm is 2⋅(m + 1 + r) + n + 1.
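For concreteness, the qubit budget can be tallied as follows; in this bookkeeping sketch r is taken to be the number of Grover iterations, following the QRCA discussion above:

```python
import math

def hqago_qubits(n: int, m: int, r: int) -> int:
    """Total logical qubits: individuals (n) + fitness (m + 1) + max value (m)
    + carry qubits (2r + 1) + oracle workspace (1) = 2*(m + 1 + r) + n + 1."""
    return n + (m + 1) + m + (2 * r + 1) + 1

# Knapsack-sized example (n = 5, m = 8) with k = 2 fixed qubits,
# taking r = floor(sqrt(2^(n-k))) Grover iterations.
n, m, k = 5, 8, 2
r = math.isqrt(2 ** (n - k))
print(hqago_qubits(n, m, r))   # 28
```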
In Fig. 10, we compare the pure quantum configuration of HQAGO (i.e., no fixed qubits) with the algorithm configurations with 2 and 3 fixed qubits. The circuit’s critical path length is the same in
all three setups. As shown, the complexity of the quantum circuit decreases as we increase the number of fixed qubits.
Comparison between the pure quantum HQAGO (equivalent to RQGA) and the HQAGO with 2 and 3 fixed qubits, from the perspective of circuit complexity.
(The critical path length is the maximum number of gates between the input and the output in the quantum circuit.).
Time complexity
HQAGO fixes k qubits, meaning that the time complexity of the quantum part is N[gi] × N[mf], where ${N}_{gi}=\sqrt{{2}^{n-k}}$ is the number of Grover iterations and N[mf] is a linear function of n
that represents the number of iterations of the maximum finding algorithm (Ahuja & Kapoor, 1999). This way, we control the time complexity of HQAGO by increasing k. However, the total time complexity
of HQAGO comprises both the quantum and the classical GA parts. The time performance of the classical GA depends on the application domain and implementation parameters (Ankenbrandt, 1991); it can be
predicted as a function of the population size, cardinality of the representation, the complexity of the evaluation function, and the fitness ratio. Nonetheless, the assessment of convergence in the
classical GA is beyond the scope of this study. Still, we note that bounding the number of generations controls the classical GA runtime.
Indeed, HQAGO aims to reduce the algorithm’s complexity by reducing the number of Grover iterations, thus improving the performance at the cost of adding the classical GA. For the individual quantum
register $|u{〉}_{ind}\in S,|S|={2}^{n}$ we fix a subset of k qubits—using classical GA—such that $|u{〉}_{ind}\in {S}_{k}\subseteq S,|{S}_{k}|={2}^{n-k}$. Therefore, we employ Grover’s search
algorithm (Grover, 1996) on a reduced search space S[k], so that HQAGO requires $\mathcal{O}\left(\sqrt{{2}^{n-k}}\right)$ oracle queries for NP-hard problems with unique solution (global optimum),
and $\mathcal{O}\left(\sqrt{\frac{{2}^{n-k}}{M}}\right)$ for problems with M solutions (Nielsen & Chuang, 2002). The best performance of HQAGO is determined experimentally by finding the ”sweet spot”
in which the number of classical GA generations and the number of Grover iterations is minimized. In Fig. 11 we present the complexity reduction both theoretical (according to the $\mathcal{O}$
notation functions) and simulated. As shown, the algorithm’s complexity decreases exponentially as the number of fixed qubits increases.
(A) Presents the theoretical complexity reduction of HQAGO according to the calculated $\mathcal{O}$-notation formulas. We calculate the number of iterations for the maximum finding algorithm ( N[mf
]) using a linear function N[mf](x) = a⋅x + b where x is the number of qubits, and a = 1 and b = 0.3 are the fixed-values parameters approximated after multiple experiments. In (B) we present the
complexity reduction of the algorithm determined after simulating the knapsack problem.
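The theoretical curve of Fig. 11A can be reproduced in a couple of lines; here I read the caption's "number of qubits" as the number of free qubits n − k, which is an assumption:

```python
import math

def hqago_queries(n, k, a=1.0, b=0.3):
    """Theoretical query count: sqrt(2^(n-k)) Grover iterations times N_mf = a*(n-k) + b."""
    free = n - k
    return math.sqrt(2 ** free) * (a * free + b)

for k in range(0, 6):
    print(k, round(hqago_queries(n=5, k=k), 1))
```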
We use the Qiskit toolchain (Javadi-Abhari et al., 2024) to analyze the conventional GA’s convergence and measure the quantum algorithm’s performance. Qiskit is an open-source library for quantum
computing that enables interaction with the IBM Q hardware and fosters the development and simulation of quantum algorithms (Wille, Van Meter & Naveh, 2019). We instantiated HQAGO, as presented in
Fig. 9, using the IBMQ back end, simulator_mps (version 0.1.547 with a configuration of 16 shots) from the ibm-q provider. The simulator is a tensor-network simulator that uses Matrix Product State
representation—limited to 100-qubit circuits. The following basic gates are available on simulator_mps: U1, U2, U3, U, P, CP, CX, CZ, ID, X, Y, Z, H, S, SDG, SX, T, TDG, SWAP, CCX, UNITARY, ROERROR,
To assess HQAGO performance, we propose two applications representing instantiations of the algorithm: one solves the knapsack problem and the other solves graph coloring problems. We also compare the outcome of the graph coloring problem simulation with our previous results (Ardelean & Udrescu, 2022b).
Knapsack problem
The knapsack problem is defined as the task of efficiently filling a fixed-capacity knapsack with items from a finite set. Let W denote the maximum weight the knapsack can accommodate and T the total
number of available items; w[i] represents the weight of the i-th item, and p[i] represents its value. The goal is to load the knapsack in a way that maximizes the total value of the items while
keeping the weight within the capacity limit.
The knapsack problem is a well-studied NP-hard problem with numerous applications in fields such as machine scheduling, space allocation, asset optimization, financial modeling, production and
inventory management, design of network models, and traffic overload control in telecommunication systems (Badiru, 2009; Bretthauer & Shetty, 2002). Other applications focus on scheduling hard
real-time tasks and deterministic cache memory utilization (Nawrocki et al., 2009).
We consider a knapsack with a maximum capacity W = 20 kilograms and the following T = 5 items: Item[0] has 3 kg and a value of 3$, Item[1] has 2 kg and a value of 5$, Item[2] has 4 kg and a value of
10$, Item[3] has 7 kg and a value of 5$, and Item[4] has 9 kg and a value of 15$. Therefore, we define a search space encoded on 5 qubits (one qubit per item, so each 5-qubit basis state represents an individual). We then vary the number of fixed qubits: from 0 (representing a pure quantum solution) to 5 (representing a classical GA). We perform each simulation 100 times and record the number of solutions found—in terms of local and
global maximums—and the average number of Grover iterations and classical GA generations. Under these experimental conditions, we performed 3 types of experiments, implementing distinct strategies
for mutation in the conventional GA that fixes qubits in the individuals’ register: GA with non-adaptive mutation, GA with adaptive mutation probability, and GA with the adaptive percentage of the
mutated genes. In Supplementary Information, Knapsack problem example, we present an example of how to apply HQAGO to solve the Knapsack problem.
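As a classical cross-check of this setup (my own brute force, independent of HQAGO), the 2^5 candidate packings can be enumerated directly; the optimum packs items 0, 1, 2 and 4 for 18 kg and a total value of 33$.

```python
from itertools import product

weights = [3, 2, 4, 7, 9]     # kg, items 0..4 from the text
values  = [3, 5, 10, 5, 15]   # $
W = 20

best = max(
    (sum(v for v, pick in zip(values, bits) if pick), bits)
    for bits in product((0, 1), repeat=5)
    if sum(w for w, pick in zip(weights, bits) if pick) <= W
)
print(best)    # (33, (1, 1, 1, 0, 1)): items 0, 1, 2, 4 -> 18 kg, 33 $
```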
We configured the classical GA algorithm to use roulette-wheel selection, single-point crossover, and random mutation. The crossover probability is 0.6, 2 parents are involved in the crossover, and
the mutation rate for the non-adaptive mutation is 0.00002. We configured a population of 100 individuals that would evolve over 100 generations, with the possibility to stop the evolution after a
saturation point of 30 generations. For the experiments in which we use adaptive mutation probability, the individual with the worst fitness has a 0.15 probability of mutation; in contrast, the
individual with the best fitness has a probability of 0.005. We mutate 21% of the genes of the individual with the worst fitness and 13% of the genes of the individual with the best fitness in the
simulations in which we use mutation with the adaptive percentage of the mutated genes. (We adopted these GA parameter values inspired by previous approaches in using GAs for quantum circuit
synthesis (Ruican et al., 2008).)
In Supplemental Information, Conventional GA with non-adaptive mutation, Fig. S1, the pure quantum HQAGO finds the best solution after 8 RQGA iterations, while in Figs S2 and S3, S4, and S5 we notice
that the number of iterations decreases. Thus, using classical GA to fix genes reduces the number of RQGA (HQAGO with no fixed qubits) iterations because non-valid solutions are discarded. In Fig. S6
from Supplemental Information, Conventional GA with non-adaptive mutation, we present the results—in terms of best and valid solutions—of the HQAGO with all the genes fixed (representing a classic
GA). As presented, the best outcome is achieved after 21 classical GA generations.
We achieved the same expected outcome after using adaptive mutation for the classical GA. In Figs. S7, S8, and S9 from Supplemental Information, Conventional GA with adaptive mutation probabilities,
we show that HQAGO finds the best outcome after eight RQGA iterations. Moreover, by fixing more genes, we significantly decrease the number of iterations, as presented in Figs. S10 and S11. As
illustrated in Fig. S12, the classic HQAGO (i.e., all qubits in the individuals’ register are fixed) requires 25 classical GA generations to find the best outcome.
Changing the percentage of the mutated genes adaptively, the algorithm (as presented in Figs. S14, S15, S16, and S17 from the Supplemental Information, Conventional GA with adaptive percentage of the
mutated genes) requires fewer RQGA iterations than the pure quantum solution (see Fig. S13) or the classic HQAGO (all individual qubits fixed, see Fig. S18). In Table 1 we show a summary of the
results presented in Supplemental Information, Knapsack Problem.
Table 1: Knapsack simulation summary.

Mutation strategy | Fixed qubits | GA generations | RQGA generations | Valid solutions | Best solutions
Conventional GA with non-adaptive mutation | 0 (pure quantum solution) | 0 | 8 | 79 | 19
Conventional GA with non-adaptive mutation | 1 | 10 | 8 | 78 | 21
Conventional GA with non-adaptive mutation | 2 | 10 | 8 | 52 | 43
Conventional GA with non-adaptive mutation | 3 | 10 | 6 | 53 | 34
Conventional GA with non-adaptive mutation | 4 | 10 | 2 | 62 | 28
Conventional GA with non-adaptive mutation | 5 (classical GA) | 21 | 0 | 62 | 38
Conventional GA with adaptive mutation probabilities | 0 (pure quantum solution) | 0 | 8 | 79 | 19
Conventional GA with adaptive mutation probabilities | 1 | 10 | 8 | 74 | 25
Conventional GA with adaptive mutation probabilities | 2 | 10 | 8 | 59 | 36
Conventional GA with adaptive mutation probabilities | 3 | 10 | 6 | 48 | 43
Conventional GA with adaptive mutation probabilities | 4 | 10 | 2 | 64 | 29
Conventional GA with adaptive mutation probabilities | 5 (classical GA) | 22 | 0 | 56 | 41
Conventional GA with adaptive percentage of mutated genes | 0 (pure quantum solution) | 0 | 8 | 80 | 20
Conventional GA with adaptive percentage of mutated genes | 1 | 10 | 8 | 82 | 16
Conventional GA with adaptive percentage of mutated genes | 2 | 10 | 8 | 56 | 41
Conventional GA with adaptive percentage of mutated genes | 3 | 10 | 6 | 37 | 46
Conventional GA with adaptive percentage of mutated genes | 4 | 10 | 2 | 65 | 24
Conventional GA with adaptive percentage of mutated genes | 5 (classical GA) | 21 | 0 | 61 | 35
As presented in Figs. 12A, 12C, and 12E, the average number of Grover iterations decreases as we increase the number of fixed qubits. The experiment confirms our expectations that, by using classical
GA to fix genes, the search space size is reduced (our search space is represented only by valid solutions while non-valid ones are discarded). Therefore, our approach reduces the complexity of the
quantum search algorithm. The average number of Grover iterations—calculated as the product between the number of Grover iterations per RQGA iteration and the average number of RQGA
iterations—decreases as the search space is reduced by fixing genes. In Table 2 we summarize the results presented in Figs. 12A, 12C, and 12E.
(A, C, and E) show the number of Grover iterations to find the solution for the Knapsack problem with m = 8 and n = 5 while in (B, D, and F) we present the relationship between the average number of
Grover iterations and the average number of GA generations.
(A, B) The conventional GA has non-adaptive mutation. In (C, D) the conventional GA has adaptive mutation probability, while (E, F) have adaptive percentages of the mutated genes.
Table 2: Number of Grover iterations by number of fixed qubits, for the GA with non-adaptive mutation, the GA with adaptive mutation probability, and the GA with adaptive percentage of the mutated genes (values shown in Figs. 12A, 12C, and 12E).
In Figs. 12B, 12D, and 12F we present the relationship between the average number of Grover iterations and the average number of classical GA generations. By variating the number of fixed qubits, we
observe a sweet spot in which both the average number of Grover iterations and the average number of classical GA generations are minimized. Table 3 summarizes the results presented in Figs. 12B, 12D
, and 12F.
Table 3: Average number of Grover iterations and average number of GA generations, for 0 to 5 fixed qubits.

Mutation strategy | Avg. Grover iterations (k = 0..5) | Avg. GA generations (k = 0..5)
GA with non-adaptive mutation | 11, 9, 4, 4, 3, 3 | 0, 10, 10, 10, 10, 15
GA with adaptive mutation probability | 9, 6, 4, 4, 2, 2 | 0, 10, 10, 11, 10, 17
GA with adaptive percentage of the mutated genes | 14, 11, 6, 6, 3, 3 | 0, 10, 10, 11, 11, 18
Graph coloring problem
Consider an undirected graph G = (V, E) where V is the set of nodes and E represent the set of edges. We define C as the set of colors. The graph coloring problem is defined as finding the best way
of assigning the colors in C to nodes from V, such that no two adjacent nodes, v[i], v[j] ∈ V, e[ij] ∈ E, have the same color ($c(v_i) \ne c(v_j)$). Titiloye & Crispin (2011) define the coloring of G as a mapping c: V → C, such that $c(v_i) \ne c(v_j)$ if ∃ e[ij] ∈ E. The chromatic number of the graph, χ(G), represents the minimum number of
colors that can color the graph G.
The graph coloring problem has multiple applications, such as timetabling, scheduling, radiofrequency assignment, computer register allocation, printed circuit board testing, and register allocation
(Mahmoudi & Lotfi, 2015; Hennessy & Patterson, 2018). Others identify applications of graph coloring in routing and wavelength assignment, dichotomy-based constrained encoding, frequency assignment
problems, and scheduling (Demange et al., 2015; Orden et al., 2018).
For the graph coloring problem, an example search space is defined by the graph presented in Fig. 13A. We vary the number of fixed qubits in the individuals’ register and perform each simulation
100 times. The classical GA uses non-adaptive mutation with a rate of 0.00002. We use roulette-wheel selection, single-point crossover, and random mutation. The crossover probability is 0.6, with 2
parents involved. We evolve a population of 100 individuals over 100 generations. As presented in Fig. 13B, the algorithm solves the Graph Coloring problem and determines the chromatic number. In
Fig. 13C, we present the relationship between the average number of Grover iterations and the average number of classical GA generations. As observed, the number of Grover Iterations and GA
generations decrease as the search space is reduced by fixing genes.
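For orientation, a minimal classical-GA skeleton with the parameters just listed (100 individuals, 100 generations, roulette-wheel selection, single-point crossover with probability 0.6, random mutation with rate 0.00002, chromosome length n = 10) might look like the sketch below. The bit-string encoding and the fitness function are placeholders, not the paper's actual implementation.

import random

POP_SIZE, GENERATIONS, CHROMO_LEN = 100, 100, 10
CROSSOVER_P, MUTATION_RATE = 0.6, 0.00002

def fitness(individual):
    # Placeholder fitness: a real implementation would score valid graph colorings.
    return sum(individual) + 1

def roulette_select(population):
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=1)[0]

def single_point_crossover(a, b):
    if random.random() < CROSSOVER_P:
        point = random.randrange(1, CHROMO_LEN)
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def mutate(individual):
    return [1 - gene if random.random() < MUTATION_RATE else gene for gene in individual]

population = [[random.randint(0, 1) for _ in range(CHROMO_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    offspring = []
    while len(offspring) < POP_SIZE:
        child1, child2 = single_point_crossover(roulette_select(population), roulette_select(population))
        offspring += [mutate(child1), mutate(child2)]
    population = offspring[:POP_SIZE]

best = max(population, key=fitness)  # genes of the best individual could then be fixed for the quantum search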
Figure 13: (A) and (B) show the Erdös-Rényi graph generated with edge probability 0.7 and 5 nodes, and the solution that colors the graph. (C) depicts the experimental results; after 3 iterations the algorithm produced 9 valid solutions, of which 2 are the best.
Individual’s register size is n = 10 and Fitness register size is m = 8.
In Supplemental Information, Graph coloring problem, Figs. S19A and S19B we present the graph used for coloring and the solution. In Figs. S20, S21, S22, and S23 we present the outcome of the HQAGO
with different numbers of fixed qubits – from 1 fixed gene in Fig. S20 to 4 fixed genes in Fig. S23. In Table 4 we show a summary of the results presented in Supplemental Information, Graph coloring problem.

Conclusions

This article presents a novel quantum genetic algorithm, based on RQGA, that controls the algorithm complexity by reducing the search space. Accordingly, the proposed HQAGO solves NP-hard problems in
$\mathcal{O}\left(\sqrt{{2}^{n-k}}\right)$ oracle queries.
Therefore, the main advantage of our approach is that it speeds up searches over large solution spaces using a limited number of qubits. More precisely, compared to the state of the art, our algorithm enables solving complex problems using fewer qubits, at the cost of adding extra circuitry to instantiate the conventional GA.
The limitation of our approach is that—from a theoretical standpoint—by fixing k of the individual’s chromosome qubits, the conventional genetic algorithm may exclude the maximum-fitness solution(s).
Dealing with such undesired situations may require running the HQAGO several times or optimizing the conventional GA part. Even with an elementary, straightforward approach to designing the
conventional GA in this article’s simulations, we still obtained the best solutions in all HQAGO runs. Further research on more sophisticated conventional GA methods, which may include combining our
approach with similar ones, should lead to even better performance.
Table 4: Results for the graph coloring problem, by number of fixed qubits.

Approach                                     Number of fixed qubits                        GA generations   RQGA generations   Valid solutions   Best solutions
Conventional GA with non-adaptive mutation   0 fixed individuals (pure quantum solution)   0                3                  8                 2
(Ardelean & Udrescu, 2022b)                  1 fixed individual                            10               2                  8                 3
                                             2 fixed individuals                           10               2                  47                7
                                             3 fixed individuals                           10               2                  45                15
                                             4 fixed individuals                           10               2                  62                10
The use cases of the HQAGO are the typical application cases for classical GAs, varying from scheduling problems to molecular docking and neural network optimizations. Consequently, HQAGO can be used
for register allocation as presented in (Hennessy & Patterson, 2018), Wi-Fi channel assignment in Orden et al. (2018), and scheduling applications (e.g., PCBs on a single machine for processing, see
Maimon & Braha (1998), scheduling of hard real-time tasks, see Nawrocki et al. (2009)). HQAGO can also be used in molecular docking to predict the bound conformations of flexible ligands to
macromolecular targets (Westhead, Clark & Murray, 1997; Morris et al., 1998). Searches performed with HQAGO can be effectively employed in RNA secondary structure prediction since GAs are utilized
for the simulation of the RNA folding process and the investigation of possible folding pathways (Van Batenburg, Gultyaev & Pleij, 1995). Neural network optimization may also apply HQAGO due to its
reduced/controlled algorithm complexity. Indeed, classical GAs are already utilized for the Back-Propagation (BP) algorithm optimization, see Ding, Su & Yu (2011). (As mentioned by the authors, the
network trained with GA and BP has better generalization ability and good stabilization performance.) GAs are also used for tuning the structure and parameters of a neural network to reduce the fully
connected neural network to a partially connected network (Leung et al., 2003); thus, HQAGO can be beneficial for artificial intelligence applications as well.
In the mentioned use cases, the search space varies between 2^25, as shown in Nawrocki et al. (2009), and 10^30 for the RNA folding as in Westhead, Clark & Murray (1997). In such instances, HQAGO
requires runtimes of the orders $\mathcal{O}\left(\sqrt{2^{25}}\right)$ and $\mathcal{O}\left(\sqrt{10^{30}}\right)$. Compared to a fully-quantum solution, the HQAGO’s convergence requires fewer generations by marking k qubits and discarding the less-fit individuals, and the circuit complexity decreases due to a reduced number of quantum gates and qubits.
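For a sense of scale (simple arithmetic, not a result reported in the paper), those two search-space sizes translate into roughly the following query counts:

# Simple arithmetic for the two search-space sizes quoted above (illustration only).
for size in (2 ** 25, 10 ** 30):
    print(f"{size:.2e} states -> ~{size ** 0.5:.2e} oracle queries")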
Supplemental Information
Supplementary Information | {"url":"https://peerj.com/articles/cs-2210/","timestamp":"2024-11-04T20:53:31Z","content_type":"text/html","content_length":"333748","record_id":"<urn:uuid:b0e3a135-dcb7-4b0b-91e2-0e3f9bfe36ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00561.warc.gz"} |
HES Assistant Professor
Project description: The safe and reliable operation of power grids relies crucially, among other factors, on an accurate estimation of their current and future states. Whereas our society has been operating larger and larger power grids over the last 150 years, our fundamental understanding thereof is still partial and incomplete. The past decades have seen a tremendous body of work dedicated to deciphering the impact of network structures and parameters on the behavior of voltages and currents. The power flow equations, relating the balance of active and reactive power to the complex voltages, are a fundamental tool for the operation and planning of power grids, as well as for their theoretical analysis (see, e.g., [1, Sec. 6.4] or [2, Sec. 3.5]). Up to this day, there is little analytical understanding of the relation between a system's characteristics (parameters, coupling network) and the properties of the power flow solutions (existence, uniqueness, stability). Events such as the large-scale loop flows around geographic obstacles, like Lake Erie [3] in 2007, are far from fully understood. It is, however, clear that their occurrence is highly related to the network structure of the grid.
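For reference, the AC power flow equations mentioned above are commonly written in the following standard textbook form (not quoted from references [1] and [2]):

$$P_i = \sum_{j} |V_i|\,|V_j|\left(G_{ij}\cos(\theta_i - \theta_j) + B_{ij}\sin(\theta_i - \theta_j)\right)$$

$$Q_i = \sum_{j} |V_i|\,|V_j|\left(G_{ij}\sin(\theta_i - \theta_j) - B_{ij}\cos(\theta_i - \theta_j)\right)$$

where $|V_i|$ and $\theta_i$ are the voltage magnitude and phase angle at bus $i$, $P_i$ and $Q_i$ are the net active and reactive power injections, and $G_{ij} + \mathrm{j}B_{ij}$ are the entries of the nodal admittance matrix.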
Research team within the HES-SO: Délitroz Jim, Delabays Robin
Academic partners: VS - Institut Energie et environnement
Project duration: 01.10.2023 - 30.09.2027
Total project funding: 285'313 CHF
Status: Ongoing | {"url":"https://people.hes-so.ch/fr/profile/3473558431-robin-delabays?view=conferences","timestamp":"2024-11-07T15:16:06Z","content_type":"text/html","content_length":"89991","record_id":"<urn:uuid:d4af8312-278d-4292-9f05-316d9637b3c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00761.warc.gz"}
Lesson 10
Equivalent Fractions
Warm-up: Choral Count: One-halves (10 minutes)
The purpose of this Choral Count is to invite students to practice counting by \(\frac{1}{2}\) and notice patterns in the count. These understandings help students develop fluency and will be helpful
later in this lesson when students recognize and generate equivalent fractions. In the synthesis, students have the opportunity to notice that \(\frac{2}{2}\) and \(\frac{4}{4}\) are both equal to 1
Required Preparation
• Have recording of choral count by one-fourth available, from a previous lesson.
• “Count by \(\frac{1}{2}\), starting at \(\frac{1}{2}\).”
• Record as students count. Record 2 fractions in each row, then start a new row. There will be 4 rows.
• Stop counting and recording at \(\frac{8}{2}\). (The completed record should look like the example after this list.)
• “What patterns do you see?”
• 1–2 minutes: quiet think time
• Record responses.
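One possible completed record, with two fractions in each of the 4 rows (the exact layout is at the teacher's discretion):

\(\frac{1}{2}\qquad\frac{2}{2}\)

\(\frac{3}{2}\qquad\frac{4}{2}\)

\(\frac{5}{2}\qquad\frac{6}{2}\)

\(\frac{7}{2}\qquad\frac{8}{2}\)

Notice that each row ends on a fraction equivalent to a whole number (1, 2, 3, and 4).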
Activity Synthesis
• Display count by \(\frac{1}{4}\) from the previous lesson. There should be 4 rows and 4 fractions in each row with the count ending at \(\frac{16}{4}\).
• “How are these two counts the same? How are they different?” (The denominator stays the same in both counts—4 for the last count, and 2 for today’s count. The numerators change in the same way
because they both count by one. They start a new line at \(\frac{2}{2}\) and \(\frac{4}{4}\), which are both whole numbers.)
• Consider asking:
□ “Who can restate the pattern in different words?”
□ “Does anyone want to add an observation as to why that pattern is happening here?”
□ “Do you agree or disagree? Why?”
Activity 1: Equivalent to $\frac{1}{2}$ (15 minutes)
The purpose of this activity is for students to consider equivalent fractions using diagrams. One half has been chosen to introduce equivalent fractions because there are many ways to see and
represent fractions that are equivalent to \(\frac{1}{2}\). Many students may be familiar with the concept of halves and justify equivalence by saying 2 is half of 4. This reasoning is helpful with 1
half and 2 fourths but may not be generalizable to other cases of equivalence. For this reason, the activity synthesis focuses on justifications about whether or not the shaded parts are the same
size. The idea that \(\frac{1}{2}\) and \(\frac{2}{4}\) are the same size is used to define equivalent fractions as fractions that are the same size.
Students need to use language carefully as they explain why the shaded parts of a shape show \(\frac{1}{2}\) (MP6). For example, they may say that 2 of 4 equal parts in shape D are shaded, but if
they combine those parts, the total shaded amount is the same as in the shape where 1 of 2 equal parts is shaded.
MLR7 Compare and Connect. Synthesis: Lead a discussion comparing, contrasting, and connecting shapes C and D. Ask, ”How are shapes C and D the same?”, “How are they different?”, and “How do these two
different representations show \(\frac{1}{2}\)?”
Advances: Representing, Conversing
Engagement: Provide Access by Recruiting Interest. Synthesis: Invite students to share connections between finding one-half in fractions with more than two equal parts in this activity and when they
might, in their own lives, see one half when there are more than 2 equal parts.
Supports accessibility for: Visual-Spatial Processing
• Groups of 2
• “What do you know about \(\frac{1}{2}\)?” (There are 2 equal parts. The parts have to be the same size. One of the parts would be shaded.)
• 1 minute: quiet think time
• Share and record responses.
• “Now work with your partner to select all the shapes where the shaded portion represents \(\frac{1}{2}\) of the shape and explain how there are more than one shape where this is the case.”
• 5–7 minutes: partner work time
• Monitor for students who explain that the shading in A and D both represents \(\frac{1}{2}\) of the shape.
Student Facing
1. For which shapes is the shaded portion \(\frac{1}{2}\) of the shape? Be prepared to share your reasoning.
2. How can there be more than one way of shading a shape to show \(\frac{1}{2}\)?
Activity Synthesis
• Invite students to share their responses.
• Display C and D.
• “How can the shaded portion in each show \(\frac{1}{2}\) when the squares have been partitioned into a different number of equal parts?” (The shaded part is the same size even though they look
different. The same amount of the square is shaded.)
• “Even though C is partitioned into halves and D is partitioned into fourths, we can say that \(\frac{1}{2}\) of each square is shaded because the same amount is shaded in squares C and D, which means the two fractions are the same size.”
• “Two numbers that are the same size are equivalent, so the fractions \(\frac{2}{4}\) and \(\frac{1}{2}\) are equivalent fractions.”
Activity 2: Find Equivalent Fractions (20 minutes)
The purpose of this activity is for students to use fraction strips to identify equivalent fractions and explain why they are equivalent. Highlight explanations that make clear that the parts that
represent the fractions are the same size and the parts of the fractions refer to the same whole.
Required Preparation
• Students need the fraction strips they made in a previous lesson.
• Groups of 2
• Ask students to refer to the fraction strips they made in an earlier lesson.
• “Use your fraction strips to find as many fractions as you can that are equivalent to the listed fractions.”
• 5–7 minutes: independent work time
• If students have extra time, encourage them to use their fraction strips to find other pairs of fractions that are equivalent.
• “Now, share the equivalent fractions you found with your partner. Be sure to share your reasoning.”
• 3–5 minutes: partner discussion
• Monitor for students who explain equivalence by saying that the fractions are the same size.
Student Facing
Use your fraction strips from an earlier lesson to find as many equivalent fractions as you can that are equivalent to:
1. \(\frac{1}{2}\)
2. \(\frac{2}{3}\)
3. \(\frac{6}{6}\)
4. \(\frac{3}{4}\)
Be prepared to show how you know the fractions are equivalent.
Advancing Student Thinking
If students don’t generate an equivalent fraction for one of the given fractions, consider asking:
• “How did you represent the fraction with the fraction strips?”
• “How could you use the fraction strips to make an equivalent fraction?”
Activity Synthesis
• Invite students to share pairs of equivalent fractions and why they are equivalent. Highlight that the fractions are equivalent because the part of the strips that represent the fractions are the
same size.
• Display a set of fraction strip diagram for all to see.
• As students share, mark up the fraction strip diagram to illustrate the equal size of the parts (for example, by drawing lines or circling the parts). Then, record pairs of equivalent fractions
using the equal sign like: \(\frac{1}{2} = \frac{3}{6}\).
Lesson Synthesis
“If you were given two fractions, how could you determine whether they are equivalent?” (I would look at diagrams of them to see if the fractions are the same size. I would use fraction strips to see
if the fractions were the same size.)
Cool-down: Find the Equivalent Fractions (5 minutes) | {"url":"https://im.kendallhunt.com/k5/teachers/grade-3/unit-5/lesson-10/lesson.html","timestamp":"2024-11-06T02:48:59Z","content_type":"text/html","content_length":"98135","record_id":"<urn:uuid:0c64f71e-0c42-4438-973a-7bafff296ffa>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00884.warc.gz"} |
Granular CA Synthesis
Have you ever wondered what a cellular automaton would sound like? This Demonstration converts cellular automata to sound using granular synthesis and a set of harmonically related frequencies. Each
row of the automata is represented as one granule of sound consisting of superimposed sine waves whose frequencies are determined by the relative positions of values in the row. The set of possible frequencies is determined in this case by using Farey sequences and an arbitrary base frequency. This Demonstration explores the 3-color range 2 totalistic rule space, chosen for its bountiful variety
of complexity. | {"url":"https://www.wolframcloud.com/obj/de85e9a5-eb16-4101-a31f-3076a36448e4","timestamp":"2024-11-12T02:40:49Z","content_type":"text/html","content_length":"181337","record_id":"<urn:uuid:bf1f930d-9add-4b51-aa07-b20d15aa96b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00506.warc.gz"} |
What possible values can the difference of squares of two Gaussian integers take?
The difference of squares of any two integers can take the form $4 n + k$ for any integer $n$ and $k \in \left\{0 , 1 , 3\right\}$. Specifically, the difference of squares of two integers cannot be
of the form $4 n + 2$.
Is there a simple characterisation of possible differences of squares for Gaussian integers, i.e. complex numbers of the form $m + n i$, where $m , n$ are integers ?
Conjecture: Any Gaussian integer of the form $m + 2 n i$ where $m , n$ are integers is expressible as the difference of two squares of Gaussian integers.
Answer 1
Here are some possibilities:
#(n+1)^2-n^2 = 2n+1#
#(n+1)^2-(n-1)^2 = 4n#
#((n+1)+ni)^2 - (n+(n+1)i)^2 = 4n+2#
So we can get any (real) integer as a difference of squares of Gaussian integers.
#(a+bi)^2-(c+di)^2 = (a^2-b^2-c^2+d^2)+2(ab-cd)i#
Consider the various possible combinations of odd and even #a, b, c, d# (written below as #0# for even and #1# for odd) and the resulting values of #(a^2-b^2-c^2+d^2)# and #(ab-cd)# modulo #4# and #2# respectively:

a  b  c  d  |  (a^2-b^2-c^2+d^2) mod 4  |  (ab-cd) mod 2
0  0  0  0  |  0                        |  0
0  0  0  1  |  1                        |  0
0  0  1  0  |  3                        |  0
0  0  1  1  |  0                        |  1
0  1  0  0  |  3                        |  0
0  1  0  1  |  0                        |  0
0  1  1  0  |  2                        |  0
0  1  1  1  |  3                        |  1
1  0  0  0  |  1                        |  0
1  0  0  1  |  2                        |  0
1  0  1  0  |  0                        |  0
1  0  1  1  |  1                        |  1
1  1  0  0  |  0                        |  1
1  1  0  1  |  1                        |  1
1  1  1  0  |  3                        |  1
1  1  1  1  |  0                        |  0

So if #(ab-cd)# is odd, then #(a^2-b^2-c^2+d^2) = 0, 1# or #3# modulo #4#.
So the conjecture in the question is false: If the difference of squares of two Gaussian integers has an imaginary part of the form #4k+2# then the real part is of the form #4k+0#, #4k+1# or #4k+3#.
Specifically not of the form #4k+2#.
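A quick brute-force check of this pattern (a sketch added for illustration; the search bound N is an arbitrary choice, not part of the original answer):

N = 10  # arbitrary search radius (assumption)
seen = set()
for a in range(-N, N + 1):
    for b in range(-N, N + 1):
        for c in range(-N, N + 1):
            for d in range(-N, N + 1):
                re = a * a - b * b - c * c + d * d   # real part of (a+bi)^2 - (c+di)^2
                im = 2 * (a * b - c * d)             # imaginary part
                seen.add((re % 4, im % 4))

print((2, 2) in seen)   # False: real and imaginary parts are never both 2 mod 4
print(sorted(seen))     # the residue pairs that do occur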
Answer 2
The difference of squares of two Gaussian integers is itself a Gaussian integer, and it is not restricted to non-negative values. As the identities in Answer 1 show, every ordinary integer (positive, negative, or zero) arises as such a difference. Not every Gaussian integer does, however: by the parity argument above, a Gaussian integer whose real and imaginary parts are both of the form #4k+2# can never be written as a difference of two squares of Gaussian integers.
| {"url":"https://tutor.hix.ai/question/what-possible-values-can-the-difference-of-squares-of-two-gaussian-integers-take-5557f63f50","timestamp":"2024-11-06T21:33:44Z","content_type":"text/html","content_length":"582553","record_id":"<urn:uuid:53d0b92e-85ab-4865-b2ab-01619d513778>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00002.warc.gz"}
Physics B.S.
Information and Policies
Program Learning Outcomes
Learning outcomes summarize the most important knowledge, skills, abilities, and attitudes that students are expected to develop over the course of their studies. The program learning outcomes
communicate the faculty’s expectations to students, provide a framework for faculty evaluation of the curriculum based on empirical data, and help improve and measure the impact of implemented
changes. Students graduating with a B.S. in physics will demonstrate:
PLO 1. Ability to solve problems using concepts in classical and quantum mechanics, statistical mechanics and electromagnetism.
PLO 2. Proficiency in mathematics and the mathematical concepts needed for a proper understanding of physics.
PLO 3. Ability to take measurements in a physics laboratory and analyze the measurements to draw valid conclusions.
PLO 4. Ability to communicate scientific content effectively, both orally and in writing.
Academic Advising for the Program
The department undergraduate adviser (physicsadvising@ucsc.edu) works closely with students interested in pursuing the major to ensure that they begin the program immediately and follow the
appropriate steps toward its completion.
Getting Started in the Major: Frosh
Before coming to the University of California, Santa Cruz:
High school students coming to UC Santa Cruz as frosh should emphasize their mathematics preparation with the expectation that they will take the first calculus course, MATH 19A, before their second
quarter at UCSC. Students who come to UC Santa Cruz with credit for MATH 19A will be able to start the Physics 5 series in the first quarter. PHYS 5A is offered in the fall and winter quarters each
year. Students with a score of 5 on the AP Physics C Mechanics and AP Physics C Electricity and Magnetism examinations are exempt from taking PHYS 5A and PHYS 5C respectively, and the associated lab courses.
After coming to UC Santa Cruz:
This major is highly course intensive and sequential; students who intend to pursue this major must begin taking classes for the major in their first quarter at UCSC.
Incoming students in the physics major should complete the Math Placement process as early as possible, so that the placement is posted before enrollment begins. For more information, please review
the Math Placement website.
In their first term, students should enroll in the highest course in the following sequence that they are eligible for: MATH 2, MATH 3, MATH 19A, MATH 19B, MATH 23A, MATH 23B. Students should not
take MATH 11A or MATH 11B. Graduating in four years is still possible for a student who places into MATH 2 or MATH 3; the department undergraduate adviser and the department's alternative paths
webpage should be consulted.
Students who come to UC Santa Cruz with credit for MATH 19A, and have room in their schedule, should take PHYS 5A and PHYS 5L (unless they have a score of 5 on the AP Physics C Mechanics examination)
in their first term. Those who come to UCSC with credit for MATH 19B and PHYS 5A and PHYS 5L, and have room in their schedule, should take PHYS 5C and PHYS 5N (unless they have a score of 5 on the AP
Physics C Electricity & Magnetism examination) in their first term. The Physics Department tries to match incoming students who are interested with research opportunities, if they are available.
Students who for some reason do not start the courses for the major in their first term should consult the department undergraduate adviser and the alternative paths webpage. Students who take PHYS
6A instead of PHYS 5A, and do very well in it, may contact the department chair for permission to enter the major. Students who do not begin the lower-division requirements during their first year
will have difficulty completing the program within four years.
Transfer Information and Policy
Transfer Admission Screening Policy
The following courses or their equivalents are required prior to transfer, by the end of the spring term for students planning to enter in the fall:
PHYS 5A Introduction to Physics I 5
PHYS 5B Introduction to Physics II 5
PHYS 5C Introduction to Physics III 5
MATH 19A Calculus for Science, Engineering, and Mathematics 5
MATH 19B Calculus for Science, Engineering, and Mathematics 5
MATH 23A Vector Calculus 5
A minimum GPA of 2.7 must be obtained in the following courses
PHYS 5A Introduction to Physics I 5
PHYS 5B Introduction to Physics II 5
PHYS 5C Introduction to Physics III 5
In addition, the following course is recommended prior to transfer to ensure timely graduation:
PHYS 5D Introduction to Physics IV 5
Prospective students are also encouraged to complete the Intersegmental General Education Transfer Curriculum (IGETC) or to complete all UC Santa Cruz general education requirements before
Students entering UC Santa Cruz in the winter quarter must complete
PHYS 5D Introduction to Physics IV 5
MATH 23B Vector Calculus 5
in addition to the requirements for students entering in the fall quarter. (This is true for years when winter admission is open.)
Getting Started in the Major: Transfer Students
Transfer students admitted to UC Santa Cruz in the physics major who have satisfied the above screening requirements may declare the major immediately upon arrival at UC Santa Cruz. They
should contact the undergraduate advisor to draw up an academic plan.
Incoming transfer students should enroll in the following courses in their first term:
• PHYS 5D, unless they have credit for the course, in which case PHYS 102;
• MATH 23B, unless they have credit for the course, in which case they may enroll in PHYS 116A or an elective or general education course;
• ASTR 119, unless they have knowledge of the Python programming language, in which case they may enroll in PHYS 133 after obtaining a permission code.
Students who have completed courses that should be equivalent to PHYS 5D or MATH 23B but are not formally articulated as such should contact the undergraduate adviser to have their courses evaluated.
Transfer students entering UC Santa Cruz in the winter quarter should meet with the undergraduate adviser upon arrival to draw up an academic plan.
Students who are proposed in a different major (other than applied physics or physics [astrophysics]) and have advanced standing when they come to UC Santa Cruz require permission from the department
to change into the major.
Major Qualification Policy and Declaration Process
Major Qualification
To qualify to declare the physics major, students must achieve a cumulative grade point average (GPA) of 2.70 or greater in the following courses, or their equivalents:
PHYS 5A Introduction to Physics I 5
PHYS 5B Introduction to Physics II 5
PHYS 5C Introduction to Physics III 5
When determining qualification to declare the major:
• All courses must be taken for a letter grade.
• If PHYS 5A is satisfied with AP credit based on an AP examination score of 5, students may substitute a grade of A for PHYS 5A when calculating their cumulative GPA.
• If PHYS 5C is satisfied with AP credit based on an AP examination score of 5, students may substitute a grade of A for PHYS 5C when calculating their cumulative GPA.
• Students with two or more grades of NP, C-, D+, D, D-, or F in the major qualification policy courses are not eligible to declare even if the courses are retaken and the grades replaced.
Students who achieve a GPA of 2.66 or higher (but less than 2.70) in the three courses may declare the major if they receive a B or better in PHYS 5D.
Appeal Process
Students who are informed that they are not eligible to declare the major may appeal this decision by submitting a letter to the department chair by the later date of either 15 days from the date the
notification was sent, or one week after the start of instruction during the quarter after the final relevant grade was received (generally in PHYS 5C or PHYS 5D). They also must arrange to meet with
one of the faculty mentors listed for Declaring the Major. Within 15 days of receipt of the appeal, after consulting with the faculty mentor, the department chair will either finalize the denial of
admission or specify further conditions for admission or approve admission to the major, and will notify the student and their college of the decision. For more information about the appeal process,
see Appeal Process.
How to Declare a Major
Students should submit a petition to declare as soon as they complete the major qualification requirements or reach their declaration deadline quarter (whichever comes first).
Students petitioning when the campus declaration deadline is imminent (i.e., in their sixth quarter, for students admitted as frosh), will either be approved, denied, or provided with conditions
(e.g., completion of some courses with certain grades) that will be resolved within at most one more enrolled quarter, even if they have not completed major qualification courses.
All students are required to review their academic plan with a faculty mentor prior to declaring the major. For instructions on petitioning to declare, go to Declaring Your Major.
Letter Grade Policy
All courses used to satisfy the physics major requirements must be taken for a letter grade.
Double Majors and Major/Minor Combinations Policy
Students who complete a major sponsored by the Physics Department cannot complete a second major sponsored by the Physics Department or a physics minor.
Students who complete the Physics B.S. and the astrophysics minor cannot use any courses that satisfy the requirements for the minor as electives for the major.
The department awards "honors" (3.5 grade point average or better) and "highest honors" (3.8 grade point average or better) to top graduating students each year. The department also awards "honors"
for outstanding work on the senior thesis, made upon the recommendation of the faculty thesis adviser.
Timely Graduation and Alternative Plans
• Students planning a senior thesis should find a faculty thesis adviser as early as possible, but no later than the beginning of the senior year for four-year students or the beginning of the
second year for transfer students. For further information about the senior thesis, see Senior Thesis.
• Students who join a major program of the Physics Department with some of the required courses completed, or have room in their program for additional courses, should consult with the Physics
Department undergraduate adviser.
• Students who fall behind the planners should consult the Physics Department undergraduate adviser and Alternatives.
• All the transfer major planners assume that the Intersegmental General Education Transfer Curriculum (IGETC) has been completed in community college, or has been partially completed and can be
finished while at UC Santa Cruz (including summers).
Requirements and Planners
Course Requirements
Lower-Division Courses
Choose one of the following courses:
MATH 19A Calculus for Science, Engineering, and Mathematics 5
MATH 20A Honors Calculus 5
Plus one of the following courses:
MATH 19B Calculus for Science, Engineering, and Mathematics 5
MATH 20B Honors Calculus 5
Plus all of the following courses:
MATH 23A Vector Calculus 5
MATH 23B Vector Calculus 5
PHYS 5A Introduction to Physics I 5
PHYS 5L Introduction to Physics I Laboratory 1
PHYS 5B Introduction to Physics II 5
PHYS 5M Introduction to Physics II Laboratory 1
PHYS 5C Introduction to Physics III 5
PHYS 5N Introduction to Physics Laboratory III 1
PHYS 5D Introduction to Physics IV 5
Plus one of the following courses or equivalent:
ASTR 119 Introduction to Scientific Computing 5
CSE 20 Beginning Programming in Python 5
Upper-Division Courses
All of the following courses:
PHYS 102 Modern Physics 5
PHYS 116A Mathematical Methods in Physics 5
PHYS 116C Mathematical Methods in Physics 5
PHYS 105 Mechanics 5
PHYS 110A Electricity, Magnetism, and Optics 5
PHYS 112 Thermodynamics and Statistical Mechanics 5
PHYS 133 Intermediate Laboratory 5
PHYS 134 Physics Advanced Laboratory 5
PHYS 139A Quantum Mechanics I 5
PHYS 133 is offered all three terms. PHYS 134 is offered in the winter and spring terms. Capacity in the lab courses is limited, and they should be taken as early as possible.
MATH 21 and MATH 24 can substitute for PHYS 116A.
PHYS 116C is waived for students who are pursuing a dual major in physics and a mathematics B.A. or B.S., and take MATH 107 in the year 2017 or later.
And one of these two courses:
PHYS 110B Electricity, Magnetism, and Optics 5
PHYS 139B Quantum Mechanics II 5
Three courses chosen from upper-division elective courses offered by the Physics Department or ASTR 111 - ASTR 118. In some cases, with the approval of the department, one of the elective
requirements may be satisfied by an upper-division science or engineering course.
Students who wish to go to graduate school in physics after the Physics B.S. are recommended to complete both PHYS 110B and PHYS 139B instead of one of them, and complete PHYS 116D.
Disciplinary Communication (DC) Requirement
Students of every major must satisfy the upper-division disciplinary communication (DC) requirement. Students in the physics major satisfy the DC requirement by completing one of the following
Either this course
PHYS 182 Scientific Communication for Physicists 5
or these courses
PHYS 195A Senior Thesis I 5
PHYS 195B Senior Thesis II 5
Students interested in doing a senior thesis should have found a faculty thesis advisor by the beginning of their senior year. They should contact physicsadvising@ucsc.edu or their faculty mentor if
they need assistance.
Comprehensive Requirement
The comprehensive requirement is satisfied by completing the following course:
PHYS 134 Physics Advanced Laboratory 5
The tables below are for informational purposes and do not reflect all university, general education, and credit requirements. See Undergraduate Graduation Requirements for more information.
Physics B.S.: Freshman Academic Plan
Year            Fall                        Winter                      Spring
1st (frosh)     MATH 19A (or MATH 20A)      MATH 19B (or MATH 20B)      MATH 23A
                                            PHYS 5A & PHYS 5L*          PHYS 5B & PHYS 5M
2nd (soph)      PHYS 5C & PHYS 5N           ASTR 119                    PHYS 105
                PHYS 5D                     PHYS 116A                   PHYS 116C
                MATH 23B
3rd (junior)    PHYS 102                    PHYS 112                    PHYS 133
                PHYS 110A                   PHYS 110B
4th (senior)    PHYS 139A                   PHYS 182**                  Elective
                Elective                    PHYS 134
*Students who complete the equivalent of MATH 19A before coming to UCSC can take the PHYS 5A, PHYS 5B, PHYS 5C courses and the MATH 19B, MATH 23A, MATH 23B courses in their first year.
**Students writing a senior thesis should replace PHYS 182 with the two-quarter sequence PHYS 195A and PHYS 195B.
In addition to the specific courses shown in this planner, a student must complete courses satisfying the ER, CC, IM, TA, PR and PE general education requirements.
Students looking for an alternative pathway through the major should consult the physics adviser.
Physics B.S. Transfer Academic Plan One
Year            Fall            Winter          Spring
1st (junior)    MATH 23B        PHYS 116A       PHYS 116C
                ASTR 119        PHYS 133        PHYS 105
                PHYS 102                        PHYS 134
2nd (senior)    PHYS 110A       PHYS 110B       Elective
                PHYS 139A       PHYS 112        PHYS 182*
                Elective        Elective
*Students writing a senior thesis should replace PHYS 182 with the two-quarter sequence PHYS 195A and PHYS 195B.
This planner assumes that a student has completed PHYS 5D and general education requirements.
Physics B.S. Transfer Academic Plan Two
For students who have not completed the equivalent of PHYS 5D:
Year        Fall            Winter          Spring
1st Year    MATH 23B        PHYS 133        PHYS 105
            PHYS 5D         PHYS 102        Elective
            ASTR 119        PHYS 116A       PHYS 116C
2nd Year    PHYS 110A       PHYS 110B       PHYS 134
            PHYS 139A       PHYS 112        Elective
            Elective                        PHYS 182*
*Students writing a senior thesis should replace PHYS 182 with the two-quarter sequence PHYS 195A and PHYS 195B. | {"url":"https://catalog.ucsc.edu/en/2022-2023/general-catalog/academic-units/physical-and-biological-sciences-division/physics/physics-bs/","timestamp":"2024-11-06T01:01:13Z","content_type":"application/xhtml+xml","content_length":"109403","record_id":"<urn:uuid:077d2411-0071-466b-946a-f2bde7ba067a>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00120.warc.gz"} |
Long Division - Steps, Examples
Long division is a crucial mathematical concept that has several practical utilizations in different domains. One of its main uses is in finance, where it is applied to figure out interest rates and
determine loan payments. It is further applied to figure out taxes, investments, and budgeting, making it a crucial skill for anybody working in finance.
In engineering, long division is utilized to figure out complicated challenges in connection to development, construction, and design. Engineers utilize long division to determine the loads that
structures can bear, assess the strength of materials, and design mechanical systems. It is further utilized in electrical engineering to calculate circuit parameters and design complex circuits.
Long division is also crucial in science, where it is used to calculate measurements and perform scientific workings. For example, astronomers utilize long division to calculate the distances between
stars, and physicists use it to calculate the velocity of objects.
In algebra, long division is used to factor polynomials and solve equations. It is an important tool for working out complex problems which consist of huge numbers and need precise calculations. It
is further applied in calculus to figure out integrals and derivatives.
As a whole, long division is an essential math theory which has many practical utilizations in various fields. It is a rudimental arithmetic operation which is applied to work out complicated
problems and is an important skill for everyone interested in engineering, science, finance, or mathematics.
Why is Long Division Important?
Long division is an important mathematical theory that has many utilization in various domains, including engineering, science and finance. It is a fundamental arithmetic operation which is used to
work out a broad array of problems, for example, figuring out interest rates, determining the length of time required to finish a project, and figuring out the distance traveled by an object.
Long division is further used in algebra to factor polynomials and figure out equations. It is an essential tool for figuring out complicated challenges which consist of enormous values and requires
accurate calculations.
Procedures Involved in Long Division
Here are the procedures involved in long division:
Step 1: Write the dividend (the number being divided) on the right, under the division bracket, and the divisor (the number you are dividing by) on the left.
Step 2: Determine how many times the divisor can be divided into the first digit or set of digits of the dividend. Note down the quotient (the result of the division) above the digit or set of
Step 3: Multiply the quotient by the divisor and write the result below the digit or set of digits.
Step 4: Subtract the outcome obtained in step 3 from the digit or set of digits in the dividend. Write the remainder (the value left over after the division) underneath the subtraction.
Step 5: Bring down the next digit or set of digits from the dividend and append it to the remainder.

Step 6: Repeat steps 2 to 5 until all the digits in the dividend have been processed.
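The same procedure can be expressed as a short program. The sketch below (an illustration, with a function name chosen here for clarity) follows the steps above digit by digit:

def long_division(dividend, divisor):
    quotient = 0
    remainder = 0
    for digit in str(dividend):                    # work through the dividend one digit at a time
        remainder = remainder * 10 + int(digit)    # "bring down" the next digit
        q_digit = remainder // divisor             # how many times the divisor fits
        remainder -= q_digit * divisor             # subtract and keep what is left over
        quotient = quotient * 10 + q_digit         # append the new quotient digit
    return quotient, remainder

print(long_division(562, 4))     # (140, 2)
print(long_division(1789, 21))   # (85, 4)
print(long_division(3475, 83))   # (41, 72)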
Examples of Long Division
Here are a few examples of long division:

Example 1: Divide 562 by 4.
4 | 562
4 goes into 5 once (write 1), leaving 1; bring down the 6 to make 16. 4 goes into 16 four times (write 4), leaving 0; bring down the 2. 4 goes into 2 zero times (write 0), leaving 2.
As a result, 562 divided by 4 is 140 with a remainder of 2.

Example 2: Divide 1789 by 21.
21 | 1789
21 goes into 178 eight times (8 × 21 = 168), leaving 10; bring down the 9 to make 109. 21 goes into 109 five times (5 × 21 = 105), leaving 4.
Thus, 1789 divided by 21 is 85 with a remainder of 4.

Example 3: Divide 3475 by 83.
83 | 3475
83 goes into 347 four times (4 × 83 = 332), leaving 15; bring down the 5 to make 155. 83 goes into 155 once (1 × 83 = 83), leaving 72.
Therefore, 3475 divided by 83 is 41 with a remainder of 72.
Common Mistakes in Long Division
Long division can be a challenging skill to master, and there are several common errors that students make when performing it. One common mistake is to forget to write down the remainder when dividing. Another is to misplace the decimal point when dividing decimal numbers. Learners might also forget to borrow when subtracting the product from the dividend.
To prevent these errors, it is important to practice long division regularly and pay close attention to every stage of the process. It can further be helpful to check your calculations using
a calculator or by performing the division in reverse to make sure that your solution is right.
In addition, it is essential to get a grasp of the fundamental principles regarding long division, for example, the relationship between the quotient, dividend, divisor, and remainder. By conquering
the basics of long division and preventing common errors, everyone can better their skills and gain self-esteem in their skill to solve complicated challenges.
Finally, long division is an essential math idea which is important for working out complicated challenges in many domains. It is used in science, finance, engineering, and mathematics, making it a
crucial skill for professionals and learners alike. By mastering the stages involved in long division and getting a grasp of how to apply them to real-world problems, anyone can gain a deeper grasp
of the complicated workings of the world around us.
If you require help understanding long division or any other arithmetic idea, Grade Potential Tutoring is here to help. Our experienced teachers are accessible online or face-to-face to give
personalized and effective tutoring services to guide you succeed. Our teachers can assist you across the stages in long division and other arithmetic concepts, support you figure out complex
challenges, and provide the tools you need to excel in your studies. Connect with us right now to schedule a tutoring class and take your math skills to the next level. | {"url":"https://www.clearwaterinhometutors.com/blog/long-division-steps-examples","timestamp":"2024-11-02T15:36:05Z","content_type":"text/html","content_length":"76318","record_id":"<urn:uuid:01194f47-75b7-4945-8bd2-800ec9f3b422>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00553.warc.gz"} |
Chapter 5: Hypothesis Testing and Statistical Significance Notes
5.1 Another Way of Applying Probabilities to Research: Hypothesis Testing
• Due to sampling error, the samples we select might not be a true reflection of the underlying population.
• One of the problems we face when conducting research is that we do not know the pattern of scores in the underlying population. In fact, our reason for conducting the research in the first place
is to try to establish the pattern in the underlying population. We are trying to draw conclusions about the populations from our samples.
p-value: the probability of obtaining the pattern of results we found in our study if there were no relationship between the variables of interest in the population
• p-value is a conditional probability.
• Hypothesis testing is often seen as a competition between two hypotheses. It is seen as a competition between our research hypothesis and null hypothesis.
5.2 Null Hypothesis
Null Hypothesis: always states that there is no effect in the underlying population; by effect we mean a relationship between two or more variables, a difference between two or more different
populations or a difference in the responses of one population under two or more different conditions
Research Hypothesis: our prediction of how two variables might be related to each other; alternatively, it might our prediction of how specified groups of participants might be different from each
other or how one group of participants might be different when performing under two or more conditions
• Research hypothesis is often called the experimental or alternate hypothesis.
• If the researcher suggests that the null hypothesis could not be rejected, this simply indicates that the statistical probability they calculated meant that it was likely that the null hypothesis
was the more sensible conclusion.
• If the researcher rejects the null hypothesis, it means that the probability of obtaining their findings if the null hypothesis were true is so small that it makes more sense to believe in the
research hypothesis.
5.3 Logic of Null Hypothesis Testing
• If there is no real relationship in the population, you are unlikely to find a relationship in your randomly selected sample. Therefore, if you do find a relationship in your sample, it is likely
to reflect a relationship in your population.
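A small simulation can make this logic concrete (an illustration only; the sample size, cut-off, and normal distribution are arbitrary choices, not values from the notes). Both groups are drawn from the same population, so the null hypothesis is true, and a difference as large as the chosen cut-off arises only rarely through sampling error:

import random

random.seed(1)

def mean(values):
    return sum(values) / len(values)

n, trials = 30, 10_000
count_large = 0
for _ in range(trials):
    group_a = [random.gauss(0, 1) for _ in range(n)]   # null hypothesis: same population
    group_b = [random.gauss(0, 1) for _ in range(n)]
    if abs(mean(group_a) - mean(group_b)) >= 0.5:      # an "effect" at least this big
        count_large += 1

print(count_large / trials)   # roughly 0.05 for these particular choices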
5.4 The Significance Level
Alpha (α): the criterion for statistical significance that we set for our analyses; it is the probability level that we use as a cut-off below which we are happy to assume that our pattern of results
is so unlikely as to render our research hypothesis as more plausible than the null hypothesis
• On the assumption that the null hypothesis is true, if the probability of obtaining an effect due to sampling error is less than 5%, then the findings are said to be ‘significant.’ If this probability is greater than 5%, then the findings are said to be ‘nonsignificant.’
Statistically Significant: our findings when we find that our pattern of research results is so unlikely as to suggest that our research hypothesis is more plausible than the null hypothesis
Not Significant: our findings when we find that our pattern of data is highly probable if the null hypothesis were true
5.5 Statistical Significance
• Just because a statistically significant difference is found between two samples of scores, it does not mean that it is necessarily a large or psychologically significant difference.
• The probability we calculate in inferential statistics is simply the probability that such an effect would arise if there were no difference between the underlying populations. This does not
necessarily have any bearing on the psychological importance of the finding. The psychological importance of a finding will be related to the research question and the theoretical basis of the
• Statistical significance does not equal psychological significance.
5.6 Correct Interpretation of the p-value
• It is important to understand that the p-value is a conditional probability. That is, you are assessing the probability of an event’s occurrence given that the null hypothesis is true.
• Alpha simply gives an indication of the likelihood of finding such a relationship if the null hypothesis were true. It is perhaps true that the stronger the relationship, the lower the
probability that such a relationship would be found if the null hypothesis were true, but this is not necessarily so.
• Alpha is the probability that we will get a relationship of an obtained magnitude if the null hypothesis were true. It is not the probability of the null hypothesis being true.
5.7 Statistical Tests
• Converting the data from our samples into scores from probability distributions enables us to work out the probability of obtaining such data by chance factors alone. We can then use this
probability to decide which of the null and experimental hypotheses is the more sensible conclusion. It should be emphasized here that these probabilities we calculate are based upon the
assumption that our samples are randomly selected from the population.
• If we were investigating differences between groups we could use probability distributions to find out the probability of finding differences of the size we observe by chance factors alone if the
null hypothesis were true. In such a case, we would convert the difference between the two groups of the independent variable into a score from a probability distribution. We could then find out
the probability of obtaining such a score by sampling error if no difference existed in the population.
5.8 Type I Error
Type I Error: where you decide to reject the null hypothesis when it is in fact true in the underlying population; you conclude that there is an effect in the population when no such effect really
• If your p-value (α) is 5% then you will have a 1 in 20 chance of making a Type I error. This is because the p-value is the probability of obtaining an observed effect, given that the null
hypothesis is true. It is the probability of obtaining an effect as a result of sampling error alone if the null hypothesis is true.
5.8.1 Replication
• Replication is one of the cornerstones of science.
• If you observe a phenomenon once, it may be a chance occurrence; if you see it on two, three, four or more occasions, you can be more certain that it is a genuine phenomenon.
5.9 Type II Error
Type II Error: where you conclude that there is no effect in the population when in reality there is an effect in the population; it represents the case when you do not reject the null hypothesis
when in fact you should do because in the underlying population the null hypothesis is not true
5.10 Why set Alpha at 0.05?
• If we set α at 0.2, we would be tolerating a Type I error in one case in every five. In one case in every five we would reject the null hypothesis when it is in fact true.
• If we set α at 0.001, we are much less likely to make a Type I error. We are only likely to reject the null hypothesis when it is true at one time in every thousand. On the face of it, this would
appear to be a very good thing. The problem here is that, although we reduce the probability of making a Type I error, we also increase the probability of not rejecting the null hypothesis when
it is false. We increase the probability of making a Type II error.
• In most situations an α of 0.05 provides a balance between making Type I and Type II errors.
5.11 One-Tailed and Two-Tailed Hypothesis
One-tailed Hypothesis: on where you have specified the direction of the relationship between variables or the difference between 2 conditions; also called a directional hypothesis
Two-tailed Hypothesis: one where you have predicted that there will be a relationship between variables or a difference between conditions, but you have no predicted the direction of the relationship
between the variables or the difference between the conditions; also called a bi-directional hypothesis
• If you make a two-tailed prediction, the calculated score can fall in either tail. If we use a 5% significance level as our cut-off for rejecting the null hypothesis, we take calculated scores
that have a 2.5% probability of being obtained; that is, 5% divided by the two tails.
• If we make a one-tailed prediction, we accept scores in only one of the tails and therefore our 5% probability region is all in the one tail; that is, it is not divided between the two tails.
• Only the p-value is affected by the one-tailed/two-tailed distinction. The test statistic (e.g., a correlation coefficient or t-value) remains the same for both one- and two-tailed tests on the same set of data (a short worked example follows this list).
• When making a two-tailed prediction about differences between two conditions, we have only to specify that a difference exists between them. We do not specify which condition will have the higher scores.
• If we make a one-tailed prediction, we would predict which of the above scenarios is most appropriate: that is, which condition will have the higher scores.
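To make the halving of the probability region concrete, here is a small sketch (the z statistic, its value, and the normal approximation are illustrative assumptions, not taken from the notes):

from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

z = 1.80                                   # the calculated statistic is the same either way
p_one_tailed = 1 - normal_cdf(z)           # the whole 5% region sits in one tail
p_two_tailed = 2 * (1 - normal_cdf(z))     # the 5% is split as 2.5% in each tail

print(round(p_one_tailed, 3))   # about 0.036 -> significant at the 0.05 level
print(round(p_two_tailed, 3))   # about 0.072 -> not significant at the 0.05 level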
5.12 Assumptions Underlying the Use of Statistical Tests
• Many statistical tests that we use require that our data have certain characteristics. These characteristics are called assumptions.
• Many statistical tests are based upon the estimation of certain parameters relating to the underlying populations in which we are interested. These sorts of tests are called parametric tests.
These tests make assumptions that our samples are similar to underlying probability distributions such as the standard normal distribution.
Non-Parametric or Distribution-Free Tests: where statistical tests do not make assumptions about the underlying distributions or estimate the particular population parameters
5.12.1 Assumptions Underlying Parametric Tests
1. The scale upon which we measure the outcome or dependent variable should be at least interval level. This assumption means that any dependent variables that we have should be measured on an
interval- or ratio-level scale or, if we are interested in relationships between variables, the variables of interest need to be measured using either interval- or ratio-level scales of measurement.
2. The populations from which the samples are drawn should be normally distributed. Parametric tests assume that we are dealing with normally distributed data. Essentially this assumption means that
we should always check that the data from our samples are roughly normally distributed before deciding to use parametric tests. We have already told you how to do this using box plots, histograms
or stem and leaf plots. If you find that you have a large violation of this assumption, there are ways to transform your data legitimately so that you can still make use of parametric tests. For
example, if you have positively skewed data you can transform all the scores in your skewed variable by calculating the square-root of each score. It has been shown that when we do this it can
eliminate positive skew and leave your variable much more normally distributed. Some students think that this is simply changing your data and so cheating. However, this is not the case. All you
are doing is converting the variable to a different scale of measurement. It is akin to converting temperature scores from Centigrade to Fahrenheit. As you are doing the same transformation for
all scores on the variable it is entirely legitimate.
3. The third assumption that we cover here is only relevant for designs where you are looking at differences between conditions. This assumption is that the variances of the populations should be
approximately equal. This is sometimes referred to as the assumption of homogeneity of variances. We informed you that the standard deviation is the square root of the variance. In practice, we
cannot check to see if our populations have equal variances and so we have to be satisfied with ensuring that the variances of our samples are approximately equal. You might ask: what do you mean
by approximately equal? The general rule of thumb for this is that, as long as the largest variance that you are testing is not more than three times the smallest, we have roughly equal
variances. Generally, a violation of this assumption is not considered to be too catastrophic as long as you have equal numbers of participants in each condition. If you have unequal sample sizes
and a violation of the assumption of homogeneity of variance, you should definitely use a distribution-free test. (A quick check of this rule of thumb is sketched after this list.)
4. The final assumption is that we have no extreme scores. The reason for this assumption is easy to understand when you consider that many parametric tests involve the calculation of the mean as a
measure of central tendency. If extreme scores distort the mean, it follows that any parametric test that uses the mean will also be distorted. We thus need to ensure that we do not have extreme scores.
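Here is the quick check of the homogeneity-of-variance rule of thumb mentioned in assumption 3 (a sketch; the scores are made up for illustration):

def variance(scores):
    m = sum(scores) / len(scores)
    return sum((x - m) ** 2 for x in scores) / (len(scores) - 1)   # sample variance

def roughly_equal_variances(groups, max_ratio=3.0):
    variances = [variance(g) for g in groups]
    return max(variances) <= max_ratio * min(variances)

condition_a = [12, 15, 14, 10, 13, 16, 11]
condition_b = [9, 20, 14, 25, 7, 18, 12]
print(roughly_equal_variances([condition_a, condition_b]))   # False: these variances differ by more than a factor of 3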
• Parametric tests are used very often in psychological research because they are more powerful tests. That is, if there is a difference in your populations, or a relationship between two
variables, the parametric tests are more likely to find it, provided that the assumptions for their use are met.
• Parametric tests are more powerful because they use more of the information from your data. Their formulae involve the calculation of means, standard deviations, and some measure of error
• Distribution-free or non-parametric tests are based upon the rankings or frequency of occurrence of your data rather than the actual data themselves.
• Because of their greater power, parametric tests are preferred whenever the assumptions have not been grossly violated. | {"url":"https://knowt.com/note/2d03809e-fe38-4ae9-bfdb-5baa5f46cc01/Chapter-5-Hypothesis-Testing-and-Statis","timestamp":"2024-11-12T02:15:20Z","content_type":"text/html","content_length":"214399","record_id":"<urn:uuid:5e62b8e7-cebd-46c6-9135-27699de0487b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00002.warc.gz"} |
Modeling and Experimental Tools with Prof. Magnes
Magnetic Field Conclusions
When I started this project I initially had the intention to model the magnetic fields due to a cylinder, bar magnet, and sphere. Little did I realize that while I knew what these fields should look like theoretically, modelling them would have been a great undertaking. The field due to a bar magnet is that of a magnetic dipole, and that in itself seemed as though it would have been a project. The sphere could have been modeled in two different ways: as a rotating sphere of charge, or a collection of current-carrying loops. Both were very difficult to find the magnetic field for at any given point, and so I was at a loss for things to model.
I was only able to successfully plot the magnetic field due to a line of charge rather than a cylinder since I was having trouble making Mathematica plot piecewise vector functions. The plot below
was all that I had to work with.
After some discussion with Professor Magnes, I decided I would take what I had, and make more complicated systems with it.
As seen in my previous Final Data post, I was able to show that if identical current-carrying wires were aligned next to each other, their resulting magnetic field would resemble that of the field
due to a plane of current as the distance between them decreases.
The only problem I faced was that I could not find a way to superimpose the vector fields from my aligned wires. While this would have made my model look nicer, it is still relatively clear to
understand how the field lines add together.
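As an aside (the original post works in Mathematica), the superposition described here can also be illustrated numerically. The short Python/NumPy sketch below sums the planar fields of several parallel wires and plots the result; the wire positions, currents, and grid are illustrative assumptions, not values from the post.

```python
import numpy as np
import matplotlib.pyplot as plt

def wire_field(x, y, x0, y0, current):
    """Planar field of an infinite straight wire along z through (x0, y0).

    The field circulates around the wire with magnitude proportional to I / distance;
    the constant mu_0 / (2*pi) is dropped since only the field pattern matters here.
    """
    dx, dy = x - x0, y - y0
    r2 = dx**2 + dy**2 + 1e-9            # avoid division by zero on the wire itself
    return -current * dy / r2, current * dx / r2

# Several identical wires side by side, roughly approximating a plane of current
x, y = np.meshgrid(np.linspace(-4, 4, 40), np.linspace(-4, 4, 40))
Bx, By = np.zeros_like(x), np.zeros_like(y)
for x0 in (-1.0, -0.5, 0.0, 0.5, 1.0):
    bx, by = wire_field(x, y, x0, 0.0, current=1.0)
    Bx += bx                              # superposition: the fields simply add
    By += by

plt.streamplot(x, y, Bx, By, density=1.2)
plt.show()
```

Flipping the sign of one wire's current in the same sketch produces the dipole-like pattern the post turns to next.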
Next I decided I would use the same method that I used to mimic a plane of current and attempt to model a magnetic dipole. Rather than placing two identical current-carrying wires next to each other,
I made one of them have a negative current. I would then plot their resulting vector fields and change the viewpoint such that only the x,y plane was seen.
Again I was faced with the issue of superimposing my two vector fields. However, I suspect that if I found a function in Mathematica that would do this for me, I would have indeed modeled a magnetic dipole.
While the topic of my project was by no means a very complicated one, it would be false to say that I did not learn anything from it. My understanding of how Mathematica functions as a program has
grown and I have come to appreciate its capabilities. I also learned that while something may seem simple at first in theory, it can be very complicated to achieve in reality.
1 thought on “Magnetic Field Conclusions”
1. Derek Parrott
Overall, I think you did a very good job of adapting the project as you worked through it and of discussing why you did what you did/what it means. I think that is very important to be able to
do, from the perspective of general problem-solving processes. I also think you did a good job of talking about the implications of your model – specifically how the system of several wires could
be expanded to approximate a plane of current.
By the way, I think that Mathematica will add the vector fields if you define a new field which is the sum of two fields. For example, using your names, if the field from one wire was “bCart1”
and from the other was “bCart2”, defining “fieldtotal = bCart1 + bCart2” and then using Vectorplot with fieldtotal as the argument should do the trick, I think. At least that’s how I got the
field outside the torus of my model to be zero (field1 was the torus, bCart2 was piece-wise defined as negative bCart1 or zero, depending on the location).
Your presentation both in class and on the blog were organized and clear, and your Mathematica file was sufficiently clear and explained. I think the only thing that I would have done differently
is to have still compared to Peter’s results, even though your project parameters changed. It would have been a nice followup, and highlight to readers how you adapted to change in your project.
I enjoyed reading it, and I think you did a great job! Thanks!
ECG Signal Processing
After reading (most of) "The Scientists and Engineers Guide to Digital Signal Processing" by Steven W. Smith, PhD, I decided to take a second crack at the ECG data. I wrote a set of R functions that
implement a windowed (Blackman) sinc low-pass filter. The convolution of the filter kernel with the input signal is conducted in the frequency domain using the fast Fourier transform, which is much of the focus of Smith's book. You can check out the complete R script. Also, you can reproduce the analysis and the image below in R by running the following command
The low-pass filter was first applied to eliminate the high frequency noise, anything greater than 30Hz. I next applied the filter at a cutoff frequency of 1Hz in order to isolate the slow wave that
corresponds to respirations. The image below gives the sequence of filtering.
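The post's filtering is done in R; purely as an illustration of the same idea, here is a small Python sketch of a Blackman-windowed sinc low-pass filter applied by FFT convolution. The kernel length, cutoff frequencies, and sample rate are assumptions, not values from the original script.

```python
import numpy as np
from scipy.signal import fftconvolve

def lowpass_windowed_sinc(signal, cutoff_hz, fs, num_taps=1001):
    """Blackman-windowed sinc low-pass filter applied via FFT convolution."""
    fc = cutoff_hz / fs                                   # normalized cutoff (cycles/sample)
    n = np.arange(num_taps)
    kernel = np.sinc(2 * fc * (n - (num_taps - 1) / 2))   # ideal low-pass impulse response
    kernel *= np.blackman(num_taps)                       # Blackman window tames the ripple
    kernel /= kernel.sum()                                # unity gain at DC
    return fftconvolve(signal, kernel, mode="same")

# Illustrative use on a synthetic trace sampled at 360 Hz
fs = 360.0
t = np.arange(0, 10, 1 / fs)
trace = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)

denoised = lowpass_windowed_sinc(trace, cutoff_hz=30.0, fs=fs)    # drop noise above 30 Hz
respiration = lowpass_windowed_sinc(trace, cutoff_hz=1.0, fs=fs)  # keep only the slow wave
```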
Scaling Limits of Neural Networks - MIT Statistics and Data Science Center
Stochastics and Statistics Seminar
Scaling Limits of Neural Networks
November 8 @ 11:00 am - 12:00 pm
Boris Hanin, Princeton University
Abstract: Neural networks are often studied analytically through scaling limits: regimes in which taking to infinity structural network parameters such as depth, width, and number of training
datapoints results in simplified models of learning. I will survey several such approaches with the goal of illustrating the rich and still not fully understood space of possible behaviors when some
or all of the network’s structural parameters are large.
Boris Hanin is an Assistant Professor at Princeton Operations Research and Financial Engineering working on deep learning, probability, and spectral asymptotics. Prior to Princeton, he was an
Assistant Professor in Mathematics at Texas A&M and an NSF Postdoc at MIT Math. He is also an advisor and member of the technical staff at Foundry, an AI and computing startup.
[Updated] CBSE Class 1 Maths Holiday Homework 2024-25
[Updated] CBSE Class 1 Maths Holiday Homework 2024-25 Session in PDF
CBSE Class 1 Maths is an important subject that helps students to understand the basic mathematical concepts that they will need throughout their academic journey. During holidays, students are often
given homework to complete so that they can revise and practice the concepts they have learned in class. In this article, we will discuss the CBSE Class 1 Maths Holiday Homework.
Upon returning from a holiday, teachers probably have a handful of students saying the dog ate their homework or that it got blown away in a winter storm. But as a parent, you need to understand that holiday homework is good practice for your student, because children get so busy enjoying their holidays that they forget to study.
Before discussing the CBSE Class 1 Maths Holiday Homework, let’s check the short summary.
Particulars Description
Class: 1st
Board: CBSE
Subject: Maths
Category: Holiday Homework
Study Materials: Class 1 Study Materials
E-Book: Class 1 eBooks
Youtube Channel: Subscribe now
Class 1 Maths Holiday Homework
Below we have mentioned the updated Maths holiday homework for CBSE Class 1. Students can download this complete holiday homework in PDF format for practice purposes.
Example of Maths Holiday Homework
Class 1 Maths Holiday Homework
NOTE: The links given below are for downloading the Class 1 Maths Holiday Homework in PDF format.
CBSE Class 1 Holiday Homework
Below we have provided the updated holiday homework for CBSE Class 1 season-wise (autumn, winter, and summer). Students can download the complete subject-wise holiday homework in PDF format for practice.
CBSE Class 1 Maths Syllabus 2024-25
Class 1 Mathematics is concerned with understanding numbers, basic mathematical questions, and other simple mathematical operations. Let us now discuss the CBSE Class 1 Maths NCERT Books syllabus, with the topics to be covered and the month assigned to each.
Class 1 Maths New Syllabus (Joyful) 2024-25
Chapter 1 Finding the Furry Cat! (Pre-number Concepts)
Chapter 2 What is Long? What is Round? (Shapes)
Chapter 3 Mango Treat (Numbers 1 to 9)
Chapter 4 Making 10 (Numbers 10 to 20)
Chapter 5 How Many? (Addition and Subtraction of Single Digit Numbers)
Chapter 6 Vegetable Farm (Addition and Subtraction up to 20)
Chapter 7 Lina’s Family (Measurement)
Chapter 8 Fun with Numbers (Numbers 21 to 99)
Chapter 9 Utsav (Patterns)
Chapter 10 How do I Spend my Day? (Time)
Chapter 11 How Many Times? (Multiplication)
Chapter 12 How Much Can We Spend? (Money)
Chapter 13 So Many Toys (Data Handling)
Class 1 Maths Syllabus Explanation in Video
Below we have provided the Class 1 Maths syllabus explained in video. Parents can check the complete Maths syllabus video to help their child score well in the final examination.
Class 1 Maths Useful Resources
Below, we have mentioned the updated CBSE Class 1 Maths study material for the academic year 2024-25. Students can download the complete study material in PDF format for practice purposes.
Multiplication Chart 1 12 Worksheet | Multiplication Chart Printable
Multiplication Chart 1 12 Worksheet
Multiplication Chart 1 12 Worksheet – A multiplication chart is a useful tool for kids learning how to multiply, divide, and find the smallest number. There are many uses for a multiplication chart. These handy tools help children understand the process behind multiplication by using colored paths and filling in the missing products. These charts are free to print and download.
What is Multiplication Chart Printable?
A multiplication chart can be used to help kids learn their multiplication facts. Multiplication charts come in many forms, from full-page times tables to single-page ones. While individual tables are useful for presenting chunks of information, a full-page chart makes it much easier to review facts that have already been learned.
The multiplication chart will generally feature a left column and a top row. When you want to find the product of two numbers, select the first number from the left column and the second number from the top row; the product sits in the square where that row and column meet, as in the short sketch below.
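As a quick illustration of that lookup (not part of the original page), the snippet below builds a 1-12 chart in Python and reads off a product exactly the way the text describes.

```python
# Rows are indexed by the number in the left column, columns by the number in the top row.
chart = {row: {col: row * col for col in range(1, 13)} for row in range(1, 13)}

def product_from_chart(first, second):
    """Pick the first number from the left column and the second from the top row."""
    return chart[first][second]

print(product_from_chart(7, 8))  # 56 - the square where row 7 meets column 8
```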
Multiplication charts are useful learning tools for both children and adults. Children can use them at home or in school. Free printable multiplication worksheets (1-12) are available on the internet and can be printed out and laminated for durability. They are a great tool to use in math class or homeschooling, and they give children a visual reminder as they learn their multiplication facts.
Why Do We Use a Multiplication Chart?
A multiplication chart is a diagram that shows how to multiply two numbers. It generally contains a top row and a left column. Each cell holds the product of two numbers. You pick the first number in the left column, follow its row across, and then pick the second number from the top row. The product will be in the square where the two meet.
Multiplication charts are handy for several reasons, including helping children learn how to divide and simplify fractions. Multiplication charts can also be valuable as desk resources because they serve as a constant reminder of the student's progress.
Multiplication charts are also useful for helping students memorize their times tables. They help them learn the numbers by reducing the number of steps needed to complete each operation. One method for memorizing these tables is to focus on a single row or column at a time, and then move on to the next one. Eventually, the whole chart will be committed to memory. As with any skill, memorizing multiplication tables takes time and practice.
Free Printable Multiplication Worksheets 1 12
If you're looking for free printable multiplication worksheets (1-12), you've come to the right place. Multiplication charts are available in various formats, including full size, half size, and a range of cute designs.
Multiplication charts and tables are essential tools for children's education. You can download and print them to use as a teaching aid in your child's homeschool or classroom. You can also laminate them for durability. These charts are great for use in homeschool math binders or as classroom posters. They're particularly helpful for children in the second, third, and fourth grades.
A free printable multiplication worksheet (1-12) is a useful tool to reinforce math facts and can help a child learn multiplication quickly. It's also a great tool for skip counting and learning the times tables.
Properties of Equilibrium Constant
Equilibrium constants are called "constants" because they remain the same for a specific reaction at a certain temperature. But if you change the temperature or equation, the value of the equilibrium
constant will change too. Don't worry though, you don't need to work out a new value every time. Instead, you can use the properties of the equilibrium constant to figure out the new value.
This article is all about the properties of Keq (that's the equilibrium constant). We'll talk about how it changes with temperature, concentration, and pressure. Then, we'll explore what happens when
you reverse a reaction, multiply it by a coefficient, or combine two reactions. After that, we'll show you how to use these properties in real-life situations. Finally, we'll explain why the
equilibrium constant is so important. In summary, this article covers everything you need to know about the properties of the equilibrium constant Keq, from how it changes to how to use it in
real-life situations.
What is the equilibrium constant?
In our previous article, "Equilibrium Constant," we learned that when a "reversible reaction" is left in a closed system, it will eventually reach a state of "Dynamic Equilibrium." At this point, the
rate of the forward reaction is equal to the rate of the backward reaction, and the amounts of products and reactants remain constant. We use the equilibrium constant, Keq, to express the ratio
between the amounts of products and reactants in such a system.
Keq is a constant value that tells us the relative amounts of reactants and products in a system at equilibrium for a specific reaction at a certain temperature. It doesn't matter how much of the
products or reactants you start with, as long as you keep the reaction equation and temperature the same, Keq won't change.
However, if you alter the temperature or reaction equation, Keq will change. The equilibrium constant is affected by changes in temperature or equation, and we'll explore how and why that happens in
the following sections.
Discuss some properties of the equilibrium constant
We'll now explore the properties of the equilibrium constant, Keq, and how it responds to changes in the system's conditions or the reaction equation.
Changing conditions
First up, let's look at the effect of changing a system's conditions on the equilibrium constant. We mentioned this in the article "Equilibrium Constant", but we'll remind ourselves of it now. This
section will focus on pressure, concentration, the presence of a catalyst and temperature.
It is quite simple, really - the only external condition that affects the equilibrium constant, Keq, is temperature. Changing the pressure or concentration of a system at equilibrium has no effect on
the equilibrium constant. Adding a catalyst doesn't change its value either:
Neither increasing nor decreasing the pressure of a system at equilibrium has any effect on the equilibrium constant. Likewise, neither increasing nor decreasing the concentration of a system at
equilibrium has any effect on the equilibrium constant. The presence of a catalyst also doesn't affect the equilibrium constant. Changing the temperature of a system at equilibrium does change the
equilibrium constant. Increasing the temperature favors the endothermic reaction. If the forward reaction is endothermic, then Keq will increase. Decreasing the temperature favors the exothermic
reaction. If the backward reaction is exothermic, then Keq will decrease.
Changing the reaction equation
Next, let's look at what happens to the equilibrium constant when you change the reaction equation itself. Remember, the equilibrium constant is only constant for a particular reaction. This means
that by changing the reaction equation, we've created a new reaction. This new reaction will have its own unique equilibrium constant. However, the equilibrium constant changes in predictable ways,
thanks to certain properties.
We'll first look at what happens when you reverse the reaction equation.
Take the reaction A(g) + B(g) ⇌ C(g) + D(g). If we were to write an equation for Kc for this reaction (which we'll call Kc1), we'd get the following:
Kc1 = ([C]eqm [D]eqm) / ([A]eqm [B]eqm)
Check out "Equilibrium Constant" to find out how to write the expression for Kc, a particular type of equilibrium constant. There, you'll also learn that although equilibrium constant measurements
are always taken at equilibrium, we often don't bother writing out the subscript eqm in the expression- the formula looks a lot more simple if you leave it out. We'll therefore omit eqm for the rest
of this article. This turns the expression for Kc1 into the following: Kc1 = ([C] [D]) / ([A] [B]). In addition, you should note that while we've used Kc for these examples, all of the properties that we're
about to explore apply to the equilibrium constant Kp too.
Let's consider what would happen if we reversed this reaction. Our old products become our new reactants, and our old reactants become our new products:
C(g) + D(g) ⇌ A(g) + B(g)
This gives us the following expression for Kc2:
Kc2 = ([A] [B]) / ([C] [D])
Notice something? The expression for Kc2 is the reciprocal of the expression for Kc1. The equilibrium constant of a reaction in one direction is the reciprocal of the equilibrium constant for the
same reaction in the reverse direction. Or, simply put: when you reverse a reaction, you take the reciprocal of its equilibrium constant.
Now let's consider what happens if you multiply the reaction equation by a coefficient. We've seen above that for the reaction A(g) + B(g) ⇌ C(g) + D(g), we get the following expression for Kc1:
Kc1 = ([C] [D]) / ([A] [B])
What if we multiplied the entire equation by three? We'd get the following:
3A(g) + 3B(g) ⇌ 3C(g) + 3D(g)
Note that this equation is still balanced - it is simply three times larger in magnitude than the original. But it means that the expression for Kc changes too:
Kc2 = ([C]^3 [D]^3) / ([A]^3 [B]^3) = (([C] [D]) / ([A] [B]))^3 = (Kc1)^3
This is the same as our original expression for Kc, but cubed. Multiplying a balanced chemical equation by a coefficient raises the equilibrium constant to the power of that coefficient. If you times
an equation by two, you square Keq. If you times an equation by four, you raise Keq to the power of four.
Last of all, let's explore the effect of adding multiple reactions together. Suppose that the products of the reaction A(g) + B(g) ⇌ C(g) + D(g) then react to form two new products, E(g) and F(g).
Here are the two reactions and their expressions for Kc:
A(g) + B(g) ⇌ C(g) + D(g), Kc1 = ([C] [D]) / ([A] [B])
C(g) + D(g) ⇌ E(g) + F(g), Kc2 = ([E] [F]) / ([C] [D])
We can write this as one overall equation, with its own respective expression for Kc:
A(g) + B(g) ⇌ E(g) + F(g), Kc3 = ([E] [F]) / ([A] [B])
What can you see? The expression for Kc3 is simply the product of the expressions for Kc1 and Kc2:
([E] [F]) / ([A] [B]) = (([C] [D]) / ([A] [B])) × (([E] [F]) / ([C] [D]))
Kc3 = Kc1 × Kc2
Therefore, we can deduce that the equilibrium constant for the overall reaction made up of two or more reactions is equal to the product of their individual equilibrium constants. In other words,
when you add up individual reactions, you multiply their equilibrium constants together.
Summary of properties of the equilibrium constant
To help consolidate your learning, we've created a handy table summarizing the properties of the equilibrium constant:
The properties of the equilibrium constant.
Application of properties of the equilibrium constant
Let's now have a go at calculating the equilibrium constant using what we've just learned about its properties.
Use the following information to work out Kc for the reaction 2CO2 + 8H2 ⇌ 2CH4 + 4H2O:
1) CH4 + H2O ⇌ CO + 3H2, Kc1 = 6.5
2) CO + H2O ⇌ CO2 + H2, Kc2 = 0.12
Well, we have been given two equations. With a bit of manipulation, they can be turned into the desired reaction. First of all, notice that whilst we can see CO in both reaction 1 and reaction 2, it
isn't present in the overall reaction. We need to add reactions 1 and 2 together to eliminate CO. Remember that when we add two reactions to each other, we multiply their equilibrium constants
together. Therefore, this new reaction's equilibrium constant, Kc3, equals the product of Kc1 and Kc2:
3) CH4 + 2H2O + CO ⇌ CO + CO2 + 4H2, Kc3 = 6.5 × 0.12
Overall: 3) CH4 + 2H2O ⇌ CO2 + 4H2, Kc3 = 0.78
Reaction 3 looks a little closer to our desired reaction. However, the reactants and products are on the wrong sides. We, therefore, need to reverse reaction 3. Remember that when we do this, we take
the reciprocal of the equilibrium constant:
4) CO2 + 4H2 ⇌ CH4 + 2H2O, Kc4 = 1 / 0.78
Kc4 = 1.28
We're almost there. The last step is to multiply reaction 4 by two. Remember that this means we need to raise the equilibrium constant to the power of two:
5) 2CO2 + 8H2 ⇌ 2CH4 + 4H2O, Kc5 = 1.28^2 = 1.64
This is our final answer.
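As an illustration (not from the original page), the three manipulation rules used above can be written as a few lines of Python and applied to the same numbers:

```python
def reverse(k):
    """Reversing a reaction takes the reciprocal of its equilibrium constant."""
    return 1 / k

def scale(k, n):
    """Multiplying a balanced equation by n raises K to the power n."""
    return k ** n

def combine(*ks):
    """Adding reactions together multiplies their equilibrium constants."""
    result = 1.0
    for k in ks:
        result *= k
    return result

kc1, kc2 = 6.5, 0.12
kc3 = combine(kc1, kc2)   # add reactions 1 and 2 to eliminate CO  -> 0.78
kc4 = reverse(kc3)        # flip reactants and products            -> ~1.28
kc5 = scale(kc4, 2)       # double the whole equation              -> ~1.64
print(round(kc3, 2), round(kc4, 2), round(kc5, 2))
```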
Importance of the equilibrium constant
The equilibrium constant (Keq) has many practical uses. We can use it to determine the direction a reaction is traveling in by comparing it to the reaction quotient. We can also estimate how far the
reaction will go to completion by looking at the magnitude of Keq. Additionally, we can calculate the relative amounts of species in a system at equilibrium using Keq.
If you want to learn more about the reaction quotient, check out our article "Reaction Quotient" and practice working with it in "Using the Reaction Quotient". In "Magnitude of Equilibrium Constant",
you'll see how Keq relates to the extent of the reaction and the position of the equilibrium. And in "Calculating Equilibrium Concentrations", you'll learn how to find equilibrium concentrations
using the equilibrium constant.
To summarize, we've covered the properties of Keq, including how it changes with alterations to the system's conditions or reaction equation. We've also discussed how to apply this knowledge to
real-life reactions. Remember that Keq is only constant for a particular reaction, and changing the reaction equation changes the value of Keq. Reversing the direction of a reaction takes the
reciprocal of Keq, multiplying a reaction by a coefficient raises Keq to the power of the coefficient, and adding two reactions multiplies their respective values of Keq together.
Properties of Equilibrium Constant
State three properties of equilibrium constant.
The equilibrium constant is constant for a certain reaction at a specific temperature. It isn't affected by changes in pressure or concentration, or the presence of a catalyst. However, it is
affected by temperature. If you change the reaction equation, you also change the value of the equilibrium constant - check out this article to find out more.
What kind of property is the equilibrium constant?
The equilibrium constant is a value that tells us the relative amounts of reactants and products in a system at equilibrium.
What are the properties of equilibrium?
At equilibrium, the rate of the forward reaction equals the rate of the backward reaction and the relative amounts of products and reactants don't change.
What are the features of the equilibrium constant?
The equilibrium constant is unaffected by changes in pressure or concentration, or the presence of a catalyst. However, it is affected by temperature. Changing the reaction equation also changes the
equilibrium constant. Reversing the equation takes the reciprocal of Keq, whilst multiplying the reaction by a coefficient raises Keq to the power of that coefficient. On the other hand, adding two
reactions to each other multiplies their respective values of Keq together.
What are the characteristics and applications of equilibrium constant?
The equilibrium constant is unaffected by changes in concentration, pressure, or the presence of a catalyst, but is affected by temperature. The equilibrium constant also changes when you change the
reaction equation, and you can find out how exactly it responds in this article. We can use the equilibrium constant to find out the direction a reaction is travelling, estimate how far a reaction
will go to completion, and calculate the relative amounts of species in a system at equilibrium.
Times Tables 1 100 Chart | Multiplication Chart Printable
Times Tables 1 100 Chart
Times Tables 1 100 Chart – A multiplication chart is a practical tool for kids learning how to multiply, divide, and find the smallest number. There are many uses for a multiplication chart. These handy tools help children understand the process behind multiplication by using colored paths and filling in the missing products. These charts are free to download and print.
What is Multiplication Chart Printable?
A multiplication chart can be used to help kids learn their multiplication facts. Multiplication charts come in many forms, from full-page times tables to single-page ones. While individual tables are useful for presenting chunks of information, a full-page chart makes it much easier to review facts that have already been learned.
The multiplication chart will usually feature a top row and a left column. When you want to find the product of two numbers, select the first number from the left column and the second number from the top row.
Multiplication charts are practical learning tools for both adults and children. Times tables 1-100 charts are readily available on the internet and can be printed out and laminated for durability.
Why Do We Use a Multiplication Chart?
A multiplication chart is a diagram that shows how to multiply two numbers. You select the first number in the left column, then pick the second number from the top row, and read off the product where the row and column meet.
Multiplication charts are helpful for many reasons, including helping kids learn how to divide and simplify fractions. Multiplication charts can also be practical as desk resources because they serve as a constant reminder of the student's progress.
Multiplication charts are likewise useful for helping students memorize their times tables. They help them learn the numbers by reducing the number of steps needed to complete each operation. One approach for memorizing these tables is to focus on a single row or column at a time, and then move on to the next one. Eventually, the whole chart will be committed to memory. As with any skill, memorizing multiplication tables takes time and practice.
Times Tables 1 100 Chart
You've come to the right place if you're looking for a times tables 1-100 chart. Multiplication charts are available in different styles, including full size, half size, and a selection of cute designs. Some are vertical, while others use a horizontal layout. You can also find printable worksheets that include multiplication equations and math facts.
Multiplication charts and tables are important tools for kids' education. You can download and print them to use as a teaching aid in your child's homeschool or classroom. You can also laminate them for durability. These charts are great for use in homeschool math binders or as classroom posters. They're especially useful for children in the second, third, and fourth grades.
A times tables 1-100 chart is a valuable tool to reinforce math facts and can help a child learn multiplication quickly. It's also a great tool for skip counting and learning the times tables.
The Stacks project
Remark 42.56.2. Let $X$ be a scheme such that $2$ is invertible on $X$. Then the Adams operator $\psi ^2$ can be defined on the $K$-group $K_0(X) = K_0(D_{perf}(\mathcal{O}_ X))$ (Derived Categories
of Schemes, Definition 36.38.2) in a straightforward manner. Namely, given a perfect complex $L$ on $X$ we get an action of the group $\{ \pm 1\} $ on $L \otimes ^\mathbf {L} L$ by switching the
factors. Then we can set
\[ \psi ^2(L) = [(L \otimes ^\mathbf {L} L)^+] - [(L \otimes ^\mathbf {L} L)^-] \]
where $(-)^+$ denotes taking invariants and $(-)^-$ denotes taking anti-invariants (suitably defined). Using exactness of taking invariants and anti-invariants one can argue similarly to the proof of
Lemma 42.56.1 to show that this is well defined. When $2$ is not invertible on $X$ the situation is a good deal more complicated and another approach has to be used.
Steady-State Temperature Field and Thermal Deformation Field of Electric Drive Helical Gear
This study aims to analyze the steady-state temperature field and thermal deformation field of electric drive helical gears. Based on the meshing principle of helical gears and the theories of heat
transfer and tribology, the average friction heat flux on the meshing surface and the convective heat transfer coefficients of other surfaces are derived. A parametric thermal analysis model of the
helical gear is established using the APDL language, providing insights into the influence of different gear models on the steady-state temperature field and thermal deformation field. The results
indicate that the temperature field varies gradually, with the highest temperature occurring on the meshing surface and the lowest on the hub. The deformation is most significant at the tooth tip and
least at the hub, with the maximum deformation observed near the ends of the gear. To reduce computational costs, the single-tooth model can be used with high accuracy for steady-state temperature
analysis, but only the full-tooth model provides accurate results for thermal deformation analysis.
Keywords: helical gear, finite element method, steady-state temperature field, thermal deformation field
Helical gears are widely used in electric drive systems due to their excellent transmission performance and load-bearing capacity. However, during operation, significant heat is generated at the
meshing interface due to axial and radial sliding friction, particularly under high-speed and heavy-load conditions. This heat can lead to adhesion failure and thermal deformation, which alters the
involute tooth profile, thereby affecting the gear’s meshing characteristics and potentially causing vibration and noise. Therefore, studying the temperature field and thermal deformation field of
electric drive helical gears is crucial.
Several methods exist for analyzing gear temperature fields and thermal deformation fields, including numerical calculations, experimental analysis, and finite element methods (FEM). This study
employs FEM to establish a parametric thermal analysis model of helical gears using the APDL (ANSYS Parametric Design Language) and investigates the influence of different gear models on the accuracy
of the results.
Theoretical Background
Meshing Principle of Helical Gears
Helical gears transmit power through the gradual engagement of their teeth. As the teeth mesh, heat is generated due to friction at the contact points. This heat is then distributed through the gear
body and dissipated via convection with the surrounding environment and the lubricant.
Heat Transfer Theory
The steady-state temperature field of a helical gear can be described using the heat conduction equation based on Fourier’s law of heat conduction:
∇ · (λ∇T) = 0
where T is the temperature and λ is the thermal conductivity.
Boundary Conditions
The boundary conditions of the gear surfaces are critical for solving the heat conduction equation. These conditions can be expressed using Newton's law of cooling and Fourier's law of heat conduction:
• Meshing Surface: A combination of second- and third-type boundary conditions, given by: −λ(∂T/∂n) = s1(Tc − To) − qw
• Non-Meshing Surfaces, Tips, Roots, and Ends: Third-type boundary conditions, given by: −λ(∂T/∂n) = s2(Tc − To)
where s1 and s2 are the convective heat transfer coefficients, Tc and To are the initial temperatures of the gear and lubricant, respectively, and qw is the average friction heat flux.
Calculation of Friction Heat Flux and Convective Heat Transfer Coefficients
Average Friction Heat Flux
The average friction heat flux on the meshing surface is calculated using:
where kf is the heat flux distribution coefficient, μ is the friction coefficient, PM is the maximum Hertz contact pressure, f is the thermal conversion factor, τ0 is the half-bandwidth of
time-domain contact, v1 and v2 are the tangential velocities of the driving and driven gears, respectively, and T1 is the meshing period of the driving gear.
Convective Heat Transfer Coefficients
• Tooth Tip: hd = 0.664 λo Po^0.333 (ω/νo)^0.5
• Tooth Flank and Root: ha = 0.228 Re^0.731 Po^0.333 (λo/Ld)
• Ends: Depending on the Reynolds number (Re), the convective heat transfer coefficient can be calculated as:
ht = 0.308 λmix (mz + 2)^0.5 Pmix^0.5 (ω/νmix)^0.5, if Re ≤ 2×10^5
ht = 10^−19 λmix (ω/νmix)^4 rn^7, if 2×10^5 < Re ≤ 2.5×10^5
ht = 0.0197 λmix (mz + 2.6)^0.2 (ω/νmix)^0.8 rn^0.6, if Re > 2.5×10^5
where λo, ρo, co, and νo are the thermal conductivity, density, specific heat capacity, and kinematic viscosity of the lubricant, respectively; Re is the Reynolds number; and ω is the angular
velocity of the gear.
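As a small illustration (not taken from the paper), the piecewise end-face coefficient above translates directly into code. The parameter names mirror the symbols in the formulas; any values passed in would have to come from the gear and lubricant data.

```python
def end_face_heat_transfer_coefficient(Re, lam_mix, nu_mix, omega, m_z, P_mix, r_n):
    """Convective heat transfer coefficient at the gear end faces,
    selected by Reynolds number exactly as in the piecewise formula above."""
    if Re <= 2e5:
        return 0.308 * lam_mix * (m_z + 2) ** 0.5 * P_mix ** 0.5 * (omega / nu_mix) ** 0.5
    elif Re <= 2.5e5:
        return 1e-19 * lam_mix * (omega / nu_mix) ** 4 * r_n ** 7
    else:
        return 0.0197 * lam_mix * (m_z + 2.6) ** 0.2 * (omega / nu_mix) ** 0.8 * r_n ** 0.6
```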
Finite Element Model
Gear and Lubricant Parameters
The parameters of the helical gear and lubricant used in this study are listed in Tables 1 and 2, respectively.
Table 1: Helical Gear Parameters
Parameter Value
Driving Gear Speed 2000 rpm
Elastic Modulus 206 GPa
Number of Teeth (z1) 23
Number of Teeth (z2) 30
Poisson’s Ratio 0.3
Modulus (m) 3 mm
Pressure Angle (α) 20°
Thermal Conductivity 46 W/m·K
Specific Heat 465 J/kg·K
Density 7850 kg/m³
Helix Angle (β) 8°
Input Power (P) 50 kW
Table 2: Lubricant Parameters
Parameter Value
Lubricant Type SCH632
Density (ρ_o) 870 kg/m³
Kinematic Viscosity (ν_o) 320 cSt (at 40°C), 38.5 cSt (at 100°C)
Thermal Conductivity (λ_o) 0.14 W/m·K
Specific Heat (c_o) 2000 J/kg·K
Modeling and Meshing
A parametric thermal analysis model of the helical gear was created using the APDL language in ANSYS. The gear was meshed using 8-node hexahedral elements (Solid70), with the friction heat flux
applied using surface effect elements (Surf152). The full-tooth parametric model of the helical gear served as the baseline geometry for the analysis.
Results and Discussion
Steady-State Temperature Field
The steady-state temperature field of the driving gear was obtained using the full-tooth model. The highest temperature (92.4488°C) is observed on the meshing surface, with the lowest temperature on the hub.
Temperature Distribution
• In the Tooth Height Direction: The temperature exhibits an “M”-shaped distribution, with the highest temperatures in the double-tooth meshing regions near the tooth root at the mesh entry and
near the tooth tip at the mesh exit.
• In the Tooth Width Direction: The temperature is asymmetric, with higher temperatures observed near the front end of the gear at the mesh entry and near the rear end at the mesh exit for a
clockwise rotating right-handed driving gear.
Thermal Deformation Field
The thermal deformation field of the driving gear was obtained by applying the steady-state temperature field as a load and constraining the inner hole.
Deformation Distribution
• Maximum Deformation: Occurs at the tooth tips, with the least deformation observed at the hub. The maximum deformation region is located near the ends of the gear.
• In the Meshing Line Direction: The deformation is greater at the rear end than the front end, consistent with the temperature distribution.
Influence of Gear Models
To investigate the influence of different gear models on the accuracy of the results, single-tooth, three-tooth, five-tooth, and full-tooth models were analyzed.
Steady-State Temperature Field
The steady-state temperature fields were computed for the different gear models. The temperature distributions are similar, with the highest temperatures ranging from 92.4708°C (single-tooth model) to 92.7150°C
(five-tooth model).
The relative errors between the single-tooth, three-tooth, and five-tooth models and the full-tooth model are small, indicating that the single-tooth model can be used to accurately predict the
steady-state temperature field with reduced computational cost.
Thermal Deformation Field
The thermal deformation fields were also computed for the different gear models. The deformation patterns differ significantly, particularly for the single-tooth model.
The single-tooth model exhibits the highest deformation at the tooth midsection, whereas the full-tooth model shows maximum deformation near the tooth tips. The errors in the deformation field
predictions are substantial for the partial-tooth models, highlighting the need for the full-tooth model for accurate thermal deformation analysis.
This study investigated the steady-state temperature field and thermal deformation field of electric drive helical gears using finite element analysis. The following key findings were obtained:
1. Temperature Field Distribution: The meshing surface exhibits the highest temperature, with a gradient decreasing towards the hub. The temperature is highest in the double-tooth meshing regions
and exhibits an “M”-shaped distribution in the tooth height direction. The temperature is asymmetric in the tooth width direction.
2. Thermal Deformation: The maximum thermal deformation occurs at the tooth tips, with minimal deformation at the hub. The maximum deformation region is located near the ends of the gear. In the
meshing line direction, the deformation is greater at the rear end than the front end.
3. Model Influence: The single-tooth model can accurately predict the steady-state temperature field with reduced computational cost but is inadequate for thermal deformation analysis. The
full-tooth model is necessary for accurate predictions of both the temperature and deformation fields.
GSoC 2020: Blog 4 - Update on Null Geodesics in Kerr Spacetime
Progress so far...
Support for calculation and graphing of Null Geodesics in Kerr (and by extension, Schwarzschild) spacetime is nearing completion (PR #527). Last week, I hit a serious obstacle, related to maximum
floating point precision and accumulation of numerical errors (which is also the reason for the delayed blog). Since the Geodesic Equations are stiff ODEs, small instabilities can wreak havoc on
step-size control and completely destabilize the solution. I observed this happening with my code for Null Geodesics. As the light ray approaches the black hole, the integrator can no longer choose a
proper step-size and the solution becomes inaccurate. In this blog, I will be discussing this issue and how we are approaching it with the new Null Geodesics module. I also present some of the null
geodesic plots, created using this module.
Stiff ODEs are evil!
Stiff ODEs and Numerical Methods have always been at loggerheads. There is no precise definition for stiff ODEs, but an important feature is that, they are prone to become unstable. The usual
solution is to choose a solver, that can accommodate very small step-sizes, while keeping overall error low. SciPy provides performant wrappers for LSODA/BDF methods, that are usually suitable for
stiff systems, but in our case, these methods are unhelpful, as can be seen in the image below. For comparison, I have used Mathematica to obtain geodesics for the same conditions. The only major
difference, here, is the solver. The plot on the left is Mathematica-generated, while the plot on the right was generated by Python. Note that, all the plots in this post have their axes normalized
to the gravitational radius, or units of $\frac{GM}{c^2}$ .
The initial conditions for the plots above are as follows. The timelike component of the initial velocity was calculated by setting $g_{ab}u^au^b = 0$ .
a = 0.9
end_lambda = 200
max_steps = 200
position = [0, 20., pi / 2, pi / 2]
velocity = [-0.2, 0., 0.002]
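For readers less familiar with the SciPy interface mentioned above, here is a rough sketch of how initial conditions like these could be handed to a stiff-capable solver. The right-hand side is only a stub and the tolerance values are illustrative assumptions; none of this is the project's actual code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(lam, state):
    """Stub for the Kerr geodesic equations, d(state)/d(lambda).

    A real implementation returns the derivatives of the coordinates and velocities,
    built from the Christoffel symbols of the Kerr metric.
    """
    raise NotImplementedError

# Pack position and velocity into one state vector; the dt/dlambda entry below is a
# dummy placeholder - in practice it is fixed by the null condition g_ab u^a u^b = 0.
state0 = np.array([0.0, 20.0, np.pi / 2, np.pi / 2,   # t, r, theta, phi
                   1.0, -0.2, 0.0, 0.002])            # dt, dr, dtheta, dphi per lambda

sol = solve_ivp(geodesic_rhs, (0.0, 200.0), state0,
                method="LSODA",            # stiff-capable integrator
                rtol=1e-10, atol=1e-12,    # tight tolerances for step-size control
                dense_output=True)
```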
Other solvers (that are suited to non-stiff problems) become unstable long before the desired number of integration steps is reached. Given the lack of a proper solver, I wrote my own solver, using a
step-size control scheme from the venerable Numerical Recipes (Press et al, 2007), fine-tuned to the problem. Sadly, this did not produce better results and it even failed for certain pathological
higher-order orbits. Here, "high-order" implies "loopy" orbits, very close to the black hole, while "pathological" can encompass anything from higher-order orbits to orbits that are scattered at large angles
(i.e., orbits, with sharp turning points, à la the plots above).
Then, I set out to find the reason behind the instability. Based on my tests, the stiffness comes from the singular nature of the black hole horizon (in Boyer-Lindquist coordinates), which can force
the solver to choose incredibly small step-sizes, which in turn leads to more and more floating point error and over large intervals, the obtained solution becomes completely unphysical. This is
what "unstable" means here. Apart from the graphical representation of the instability through the plots, we can also see the instability numerically, through the norm of the 4-velocity of the light
ray, as it evolves:
All of these values should be ~0, and on comparing with the plot on the right, it is easy to see that the norm becomes too high as the light ray gets closer to the black hole. This tells us that a correlation exists between the initial conditions and the instability, which is expected.
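The norm check described here is easy to reproduce independently. The Python sketch below evaluates g_ab u^a u^b for the Kerr metric in Boyer-Lindquist coordinates with G = c = M = 1; it is an illustration written for this summary, not the code from the PR.

```python
import numpy as np

def kerr_null_norm(a, r, theta, u):
    """g_ab u^a u^b for the Kerr metric in Boyer-Lindquist coordinates (G = c = M = 1).

    u = (dt/dlam, dr/dlam, dtheta/dlam, dphi/dlam). For a null geodesic this value
    should stay ~0 along the whole trajectory; drift away from zero signals the
    accumulating numerical error discussed above.
    """
    sigma = r**2 + (a * np.cos(theta))**2
    delta = r**2 - 2.0 * r + a**2
    sin2 = np.sin(theta)**2
    g_tt = -(1.0 - 2.0 * r / sigma)
    g_rr = sigma / delta
    g_thth = sigma
    g_phph = (r**2 + a**2 + 2.0 * a**2 * r * sin2 / sigma) * sin2
    g_tph = -2.0 * a * r * sin2 / sigma
    ut, ur, uth, uph = u
    return (g_tt * ut**2 + g_rr * ur**2 + g_thth * uth**2
            + g_phph * uph**2 + 2.0 * g_tph * ut * uph)
```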
In discussions with my mentors, we explored a few solutions, such as, using another system of units or coordinate system. However, we are already using the most suitable unit and coordinate systems
for numerical computation of geodesics - M-Units and Boyer-Lindquist Coordinates. I should note here, that at slightly larger initial radial distances and speeds, the code provides a good
approximation to the actual solution, as can be observed in the plot and table below.
Clearly, the accumulated error over lambda is smaller in this plot.
The initial conditions for the second set of plots are as follows:
a = 0.9
end_lambda = 200
max_steps = 200
position = [0, 30., pi / 2, pi / 2] # Only difference
velocity = [-0.2, 0., 0.002]
Until next time...
Initially, I had planned to develop the Null Geodesics module, such that simulating a photon sheet would be possible through this module itself. The purpose would be applications in radiative
transfer calculations, which require simulation of pathological orbits for better approximations in the strong gravity regime. But the issue of error accumulation has made it difficult to continue
with this strategy. We have decided to make the current code merge-ready, while keeping the PR open, mainly because, the code performs well at larger initial distances. I have already made relevant
changes to ensure the code is merge-ready. The status of the PR can be viewed at PR #527.
I am currently mulling the option of implementing the code in some low level language and then, building a wrapper around it, goal being to achieve better error-control, which I have not been able to
obtain with Python/SciPy. Another approach that I am considering, is to restrict integration near the event horizon, based on step-size changes. Since multiple options are being explored and this is
the last coding period, I have decided to make these blogs weekly. So my next blog should be up, next Friday. Hopefully, I will have solved this by then.
ECCC - Reports tagged with Instance compression
Given an instance of a hard decision problem, a limited goal is to $compress$ that instance into a smaller, equivalent instance of a second problem. As one example, consider the problem where, given
Boolean formulas $\psi^1, \ldots, \psi^t$, we must determine if at least one $\psi^j$ is satisfiable. An $OR-compression ... more >>>
Bulletin 13 2011
I. Poroshkin A.A., Poroshkin A.G. Three counter-examples in analysis
Examples are presented of continuous functions on metric spaces for which the classical theorems of Weierstrass (on boundedness and on attainment of the extremum) and the theorem of Cantor (on uniform continuity) fail.
II. Sidorov V.V. Structure lattice isomorphisms of semirings generated by a one nonnegative function
In this paper we describe isomorphisms of the lattices A[[f]] and A[[g]] of all subalgebras with unit of the semirings of functions [f] and [g] generated by nonnegative real-valued functions f and g, respectively. It is proved that any isomorphism of the lattices A[[f]] and A[[g]] is generated by an isomorphism of the semirings [f] and [g]. A technique of unigenerated subalgebras is applied.
III. Grytczuk A. On the Diophantine equation x^2 – dy^2 = z^n
In this Note we remark that there is some duality connected with the problem of solvability of the Diophantine equation
(*) x^2 – dy^2 = z^n.
Namely, we prove that the equation (*) has no solution in positive integers x, y for every prime z = q^* generated by an arithmetic progression and for every odd positive integer n, if d is a squarefree positive integer such that p|d, where p is an odd prime.
IV. Afonin R.E., Malozemov V.N., Pevnyi A.B. Delsarte bounds for the number of elements of the spherical design
The proof of Delsarte's theorem for the lower bound on the cardinality of a spherical design is given. The exposition is self-contained; all auxiliary theorems are proved.
V. Belyaeva N.A., Dovzhko E.S. Model of the formation of spherical products with the nonzero critical depth conversion of the material
A mathematical model of the solidification of a spherical product in the regime of propagation of a two-sided front is presented. At the boundaries of the fronts, the conditions of coexistence of the solid and liquid layers of the forming product are taken into account. The results of a numerical analysis are given.
VI. Belyaeva N.A., Kuznetsov K.P. The dissipative structure and domain of anomaly structural liquid Couette flow in a flat clearance
A bifurcation study of the Couette flow of a structural liquid in a flat clearance in the superanomaly region was conducted. Bifurcation diagrams and the parameter values corresponding to the superanomaly region were obtained. The bifurcation method made it possible to obtain an analytical approximation of the stationary inhomogeneous solution in the neighborhood of the bifurcation point. A numerical simulation of the flow was carried out.
VII. Belyayev Yu.N. Wave scattering by continuously stratified elastic media
A method for calculating the elements of the second-order matrix which characterizes an elastic continuously layered medium is proposed. A representation of the reflection and transmission coefficients of the layer through the elements of the characteristic matrix is given. A general solution for plane wave reflection and transmission in a periodic continuously stratified medium is found.
VIII. Kotelina N.O. Methods of estimating kissing numbers
Methods for estimating kissing numbers based on linear programming, the corresponding grid linear programming problems, and the results of calculations in Matlab are given. A table of the best known upper bounds for kissing numbers is also given.
IX. Belyaeva N.A., Istomina M.N. Computing system "Bifurcation method in nonlinear models of mechanics"
The computing system includes programs for the branching (bifurcation) method in nonlinear models of mechanics. The article discusses the general structure of the system and describes its constituent programs.
X. Mikhailovskii E.I., Mironov V.V., Podorov V.R. Contact free boundary problem for beams and discrete elastic foundation
The influence of accounting for transverse shear on the solution of the contact problem for beams on supports of unilateral action is studied. The method of enumerating the sets of active supports, based on a proof of the uniqueness of the solution of the nonlinear contact problem and on the equations of an analytical version of the so-called three-moment theorem, is generalized to the case of beams bent according to the Timoshenko theory.
XI. Pevnyi A.B., Istomina M.N. A modification of Delsarte’s theorem for the estimation of kissing numbers
A modification of Delsarte’s theorem is proved.
XII. Odyniec W.P. Two hundred years since the birth of the creators of mechanical calculators recommended for the Demidov Prize, H. Slonimsky and H. Kummer
Some materials on the creation of calculating devices by H. Slonimsky, H. Kummer and H. Ioffe are considered. The theorem of H. Slonimsky, which was the basis of these devices, is presented in detail.
This Theorem, devoted to a property of the Farey sequence, is now widely applied in informatics.
XIII. Professor Alexandr Grigiorievich Poroshkin: 60-th year in mathematics and education
XIV. Valeryan Nikolaevich Isakov (on the occasion of his 65th birthday)
Divisibility Criteria from 2 to 13 and one Example - Elementary Math
Divisibility Criteria from 2 to 13 and an Example
Divisibility criteria are guidelines that help us quickly know whether one number is divisible by another. In other words, they let us know whether a number divides evenly or not.
The criteria for divisibility are very useful. They help us easily find the divisors of a given number. They are especially helpful when we have to break down numbers into prime factors or when we need to know if a number is prime. The criteria give us hints when we have to simplify fractions, and they are useful for many other things as well.
Let’s do an example:
If there are 268 kids at a beach that want to compete in a group sandcastle competition…
Would they be able to form 2 teams with an equal number of kids, while not leaving any kid without a team?
Yes: 268 ends in 8, which is an even digit, so it is divisible by 2 and they can make 2 teams.
Would they be able to form groups of 5?
No, they can’t because the number 268 doesn’t end in 0 or 5, so it isn’t divisible by 5.
What about groups of 10?
No, it’s not possible because the number 268 doesn’t end in 0, so it isn’t divisible by 10.
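As a quick aside (not part of the original post), the three last-digit rules used in this example can be written as one-line checks in Python:

```python
def divisible_by_2(n):  return n % 10 in (0, 2, 4, 6, 8)   # ends in an even digit
def divisible_by_5(n):  return n % 10 in (0, 5)            # ends in 0 or 5
def divisible_by_10(n): return n % 10 == 0                 # ends in 0

kids = 268
print(divisible_by_2(kids), divisible_by_5(kids), divisible_by_10(kids))
# True False False -> two equal teams work, but groups of 5 or 10 do not
```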
If you want to learn much more elementary math, try Smartick for free!
1 Comment:
• Aditya Deshmukh, Nov 12 2018, 9:46 AM
This is good for math
Guided Math Stretch: Real-Life Math: We Need Numbers! Grades 3-5
ISBN 9781425880514
Language English
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom. Perfect for use at home or in the classroom, this lesson and activity support at-home learning.
Read More
Clothesline Math: The Master Number Sense Maker
Implement Clothesline Math and teach number sense in K-12 classrooms! This essential resource includes the materials that teachers need to conduct Clothesline Math lessons, and techniques for
effectively facilitating the ensuing mathematical discourse.
Item Number:100444
Learn More
Guided Math: A Framework for Mathematics Instruction Second Edition
This 2nd edition takes an innovative approach to mathematics, using the same teaching philosophies as guided reading. This instructional framework provides an environment for math that fosters
mathematical thinking and meets the needs of all students.
Item Number:102116
Learn More
180 Days™: Math for Seventh Grade
Help seventh grade students build math skills with effective and meaningful daily practice activities. The daily mathematics practice in this workbook includes a range of complex math concepts,
organized in units focused on standards-based topics.
Item Number:142249
Learn More
180 Days™: Math for Eighth Grade
Help eighth grade students improve math skills with motivating and effective daily practice activities. The daily mathematics practice in this workbook covers a variety of complex math concepts,
organized in units focused on standards-based topics.
Item Number:142250
Learn More
Catch-Up Math: 3rd Grade
Get your child back on track in math class! This book supports third grade students who are struggling in math. The full-color book includes instructional pages, coaching videos, examples, practice,
and reviews to help students master key math concepts.
Item Number:146434
Learn More
Strategies for Implementing Guided Math
In this resource, Laney Sammons, author of Guided Math, delves into the strategies necessary to effectively implement the Guided Math Framework. Included are sample lessons, classroom snapshots, and
templates to support each component of the framework.
Item Number:50531
Learn More
Daily Math Stretches: Building Conceptual Understanding Levels K-2
Jumpstart your students' minds with daily warm-ups that get them thinking mathematically. This resource offers effective step-by-step lessons, assessment information, and snapshots of how to
facilitate these math discussions in your classroom.
Item Number:50636
Learn More
Daily Math Stretches: Building Conceptual Understanding Levels 3-5
Jumpstart your students' minds with daily warm-ups that get them thinking mathematically. This resource offers effective step-by-step lessons, assessment information, and snapshots of how to
facilitate these math discussions in your classroom.
Item Number:50786
Learn More
Daily Math Stretches: Building Conceptual Understanding Levels 6-8
Jumpstart your students' minds with daily warm-ups that get them thinking mathematically. This resource offers effective step-by-step lessons, assessment information, and snapshots of how to
facilitate these math discussions in your classroom.
Item Number:50787
Learn More
Building Mathematical Comprehension: Using Literacy Strategies to Make Meaning
This resource applies familiar reading comprehension strategies and relevant research to mathematics instruction to aid in building students' comprehension in mathematics.
Item Number:50789
Learn More
Guided Math Conferences
Use conferencing successfully within your Guided Math classroom with this resource full of suggestions, tips, management, and implementation methods.
Item Number:51187
Learn More
Implementing Guided Math: Tools for Educational Leaders
Support the implementation of the Guided Math framework with this guide by Laney Sammons. This resource provides school leaders with strategies for supporting teachers as they embark on teaching the
components of the framework in their classrooms.
Item Number:51512
Learn More
Instant downloads available from this book
Guided Math Stretch: Comparing and Ordering: What Comes First? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_01
Extra Info:eBook
Learn More
Guided Math Stretch: How Many Ways Can We Represent This Number? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_02
Extra Info:eBook
Learn More
Guided Math Stretch: Order of Operations: Get Ready to Compute! Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_03
Extra Info:eBook
Learn More
Guided Math Stretch: Number Sequence: What's My Neighbor? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_04
Extra Info:eBook
Learn More
Guided Math Stretch: The Values of Fractions Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_05
Extra Info:eBook
Learn More
Guided Math Stretch: Numerical Patterns: What Comes Next? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_06
Extra Info:eBook
Learn More
Guided Math Stretch: Numerical Patterns: The Power of Zero Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_07
Extra Info:eBook
Learn More
Guided Math Stretch: Pattern Tables: The In/Out Machine Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_08
Extra Info:eBook
Learn More
Guided Math Stretch: Variable Expressions: Write a Story Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_09
Extra Info:eBook
Learn More
Guided Math Stretch: Pattern Tables: Can It Be? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_10
Extra Info:eBook
Learn More
Guided Math Stretch: Flip, Turn, and Slide! Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_11
Extra Info:eBook
Learn More
Guided Math Stretch: 2-D Shapes: Create a Polygon That Has ___ Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_12
Extra Info:eBook
Learn More
Guided Math Stretch: Identifying Angles: The Angle Alphabet! Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_13
Extra Info:eBook
Learn More
Guided Math Stretch: Congruent or Similar: Are They the Same? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_14
Extra Info:eBook
Learn More
Guided Math Stretch: 3-D Properties: I Spy 3-D Shapes! Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_15
Extra Info:eBook
Learn More
Guided Math Stretch: Linear Measurement: How Long Is Your Name? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_16
Extra Info:eBook
Learn More
Guided Math Stretch: Elapsed Time: Time Goes By! Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_17
Extra Info:eBook
Learn More
Guided Math Stretch: Perimeter and Area: Around and Inside! Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_18
Extra Info:eBook
Learn More
Guided Math Stretch: Linear Measurement in Our School Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_19
Extra Info:eBook
Learn More
Guided Math Stretch: Estimating Weight: What Weighs about a Pound? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_20
Extra Info:eBook
Learn More
Guided Math Stretch: Frequency Table Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_21
Extra Info:eBook
Learn More
Guided Math Stretch: Line-Plot Graph Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_22
Extra Info:eBook
Learn More
Guided Math Stretch: Bar Graph Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_23
Extra Info:eBook
Learn More
Guided Math Stretch: Circle Graph Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_24
Extra Info:eBook
Learn More
Guided Math Stretch: Determine Type of Graph: How Will I Show It? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_25
Extra Info:eBook
Learn More
Guided Math Stretch: How Did My Family Use Math Last Night? Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_26
Extra Info:eBook
Learn More
Guided Math Stretch: Real-Life Math: Numbers in the News Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_28
Extra Info:eBook
Learn More
Guided Math Stretch: Real-Life Math: _____ Makes Me Think of . . . Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_29
Extra Info:eBook
Learn More
Guided Math Stretch: Real-Life Math: Know and Want to Know Grades 3-5
Engage your mathematics students at the beginning of class with this whole-class warm-up activity. This product features a step-by-step lesson, assessment information, and a snapshot of what the
warm-up looks like in the classroom.
Item Number:50786_30
Extra Info:eBook
Learn More
Product reviews
In this section you can find reviews from our customers, or you can add your own review for this particular product.
Customer reviews help other visitors to read feedback from users who have already purchased and are using TCM’s products. | {"url":"https://www.teachercreatedmaterials.com/teachers/p/guided-math-stretch-real-life-math-we-need-numbers-grades-3-5/50786_27/?list=Instant%20downloads%20available%20from%20this%20book","timestamp":"2024-11-13T08:59:36Z","content_type":"text/html","content_length":"272540","record_id":"<urn:uuid:99df7b7c-92e8-479d-ac6b-92822996a62a>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00691.warc.gz"} |
Type theory
$(0,1)$-Category theory
An implication may be either an entailment or a conditional statement; these are closely related but not quite the same thing.
1. Entailment is a preorder on propositions within a given context in a given logic.
We say that $p$ entails $q$ syntactically, written as a sequent $p \vdash q$, if $q$ can be proved from the assumption $p$.
We say that $p$ entails $q$ semantically, written $p \vDash q$, if $q$ holds in every model in which $p$ holds.
(These relations are often equivalent, by various soundness and completeness theorems.)
2. A conditional statement is the result of a binary operation on propositions within a given context in a given logic. If $p$ and $q$ are propositions in some context, then so is the conditional
statement $p \to q$, at least if the logic has a notion of conditional.
Notice that $p$, $q$, and $p \to q$ are all statements in the object language (the language that we are talking about), whereas the hypothetical judgements $p \vdash q$ and $p \vDash q$ are
statements in the metalanguage (the language that we are using to talk about the object language).
Relations between the definitions
Depending on what logic one is using, $p \to q$ might be anything, but it's probably not fair to consider it a conditional statement unless it is related to entailment as follows:
If, in some context, $p$ entails $q$ (either syntactically or semantically), then $p \to q$ is a theorem (syntactically) or a tautology (semantically) in that context, and conversely.
In particular, this holds for classical logic and intuitionistic logic.
You can think of entailment as being an external hom (taking values in the poset of truth values) and the conditional as being an internal hom (taking values in the poset of propositions). In
particular, we expect these to be related as in a closed category:
• $q \to r \vdash (p \to q) \to (p \to r)$,
• $p \equiv \top \to p$,
• $\top \vdash p \to p$,
where $\top$ is an appropriate constant statement (often satisfying $p \vdash \top$, although not always, as in linear logic with $\multimap$ for $\to$ and $1$ for $\top$).
Most kinds of logic used in practice have a notion of entailment from a list of multiple premises; then we expect entailment and the conditional to be related as in a closed multicategory.
Just as we may identify the internal and external hom in Set, so we may identify the entailment and conditional of truth values. In the $n$Lab, we tend to write this as $\Rightarrow$, a symbol that
is variously used by other authors in place of $\vdash$, $\vDash$, and $\rightarrow$.
In various formalizations
In Heyting algebras
Although Heyting algebras were first developed as a way to discuss intuitionistic logic, they appear in other contexts; but their characteristic feature is that they have an operation analogous to the conditional operation in logic, usually called Heyting implication and denoted $\rightarrow$ or $\Rightarrow$. If you use $\to$ and replace $\vdash$ above with the Heyting algebra's partial order $\leq$, then everything above applies.
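A minimal computational sketch (not from the nLab article): in a finite lattice, the Heyting implication $p \to q$ is the largest $x$ with $x \wedge p \leq q$. The two-element Boolean lattice is used below purely for illustration.

```python
# Heyting implication on the two-element lattice {0, 1}: meet is min, order is <=.
def heyting_implies(p: int, q: int) -> int:
    # p -> q is the largest x such that (x meet p) <= q
    return max(x for x in (0, 1) if min(x, p) <= q)

assert heyting_implies(1, 0) == 0  # "true -> false" is false
assert heyting_implies(0, 0) == 1  # "false -> false" is true
assert heyting_implies(1, 1) == 1
```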
In natural deduction
In natural deduction the inference rules for implication are given as
$\frac{\Gamma \vdash P \; \mathrm{prop} \quad \Gamma \vdash Q \; \mathrm{prop}}{\Gamma \vdash P \to Q \; \mathrm{prop}} \qquad \frac{\Gamma \vdash P \; \mathrm{prop} \quad \Gamma, P \; \mathrm{true}
\vdash Q \; \mathrm{true}}{\Gamma \vdash P \to Q \; \mathrm{true}} \qquad \frac{\Gamma \vdash P \to Q \; \mathrm{true}}{\Gamma, P \; \mathrm{true} \vdash Q \; \mathrm{true}}$
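For illustration only (this example is not part of the nLab entry), the introduction and elimination rules above correspond to lambda-abstraction and function application in a proof assistant such as Lean 4:

```lean
-- Elimination (modus ponens): from P → Q and P, conclude Q.
example (P Q : Prop) (h : P → Q) (p : P) : Q := h p
-- Introduction: a term of type P → Q is a function from proofs of P to proofs of Q.
example (P Q : Prop) (q : Q) : P → Q := fun _ => q
```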
In type theory
In type theory | {"url":"https://ncatlab.org/nlab/show/implication","timestamp":"2024-11-14T16:49:42Z","content_type":"application/xhtml+xml","content_length":"96773","record_id":"<urn:uuid:dc47da82-1129-4217-bb58-0d0b5c270c5f>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00241.warc.gz"} |
Rocket Equation Calculator - Calculator Wow
Rocket Equation Calculator
In the vast expanse of space exploration, where every ounce of thrust matters, the Rocket Equation Calculator emerges as a celestial navigator. Beyond the glimmering stars, this calculator plays a
pivotal role in understanding the physics of rocketry. Let’s delve into its intricacies, unraveling the magic behind space propulsion and the profound influence it holds in the realm of interstellar travel.
Importance: The rocket equation, encapsulated in the Δv (delta-v) formula Δv = ve · ln(m0/mf), is the backbone of spacecraft propulsion. It defines the change in velocity achievable by a rocket in terms of the exhaust velocity ve and the ratio of the initial mass m0 to the final mass mf. This seemingly simple equation is of paramount importance, as it dictates the feasibility and success of space missions. Whether launching satellites
or embarking on interplanetary journeys, the Rocket Equation Calculator becomes the compass guiding engineers through the cosmic sea.
How to Use: Using the Rocket Equation Calculator is a cosmic dance with numbers. Input the exhaust velocity, initial mass of the rocket, and the final mass after expelling propellant. Click the
button, and the calculator unveils the crucial Δv – the change in velocity that determines the rocket’s capability to navigate the cosmic vastness. It’s not just a calculation; it’s the key to
unlocking the mysteries of propelling spacecraft beyond Earth’s atmosphere.
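The article never writes the formula out explicitly; for reference, the underlying relation is the classical Tsiolkovsky rocket equation, Δv = ve · ln(m0/mf). A rough Python sketch follows (the example numbers are illustrative, not taken from the article):

```python
import math

def delta_v(exhaust_velocity: float, initial_mass: float, final_mass: float) -> float:
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / mf)."""
    return exhaust_velocity * math.log(initial_mass / final_mass)

# e.g. 3,000 m/s exhaust velocity, 500 t wet mass, 100 t dry mass
print(round(delta_v(3000.0, 500_000.0, 100_000.0)))  # ~4828 m/s
```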
10 FAQs and Answers:
Q1: What is the significance of the Rocket Equation in space exploration? A1: The Rocket Equation governs the change in velocity achievable by a rocket, influencing mission design, payload capacity,
and the feasibility of reaching distant celestial bodies.
Q2: How does the exhaust velocity impact the Δv calculation? A2: The exhaust velocity directly influences the rocket’s ability to gain velocity. Higher exhaust velocity allows for more efficient
propulsion, resulting in greater Δv.
Q3: Why is Δv crucial for interplanetary travel? A3: Δv determines the spacecraft’s capability to change its trajectory, crucial for entering orbits, performing maneuvers, and reaching distant
planets in the vastness of space.
Q4: Can a rocket achieve unlimited Δv? A4: No, there are practical limits determined by the rocket’s design, available propellant, and the laws of physics. Achieving high Δv often requires careful
mission planning.
Q5: How does the Rocket Equation impact fuel requirements for space missions? A5: The equation highlights the trade-off between fuel mass and mission objectives. More fuel increases Δv but also adds
mass, influencing overall mission feasibility.
Q6: Are there limitations to the types of propulsion systems the Rocket Equation applies to? A6: The Rocket Equation is broadly applicable to various propulsion systems, including chemical rockets,
ion drives, and others, making it a versatile tool in space mission planning.
Q7: Can Δv be replenished during a space mission? A7: In some cases, yes. Certain mission architectures include techniques like gravity assists or in-space refueling to replenish Δv, extending the
mission’s reach.
Q8: How does Δv influence spacecraft escape velocities? A8: Δv is critical for achieving escape velocities needed to break free from celestial bodies’ gravitational influences, enabling spacecraft to
venture deeper into space.
Q9: Can the Rocket Equation be applied to atmospheric flight? A9: While primarily designed for space missions, elements of the Rocket Equation can be adapted for atmospheric flight, especially in the
context of aerospace engineering.
Q10: How do engineers optimize Δv for specific space missions? A10: Engineers optimize Δv by carefully selecting propulsion systems, mission trajectories, and payload configurations to meet mission
objectives within technical and budgetary constraints.
Conclusion: As we conclude our cosmic odyssey with the Rocket Equation Calculator, it’s clear that this unassuming formula holds the key to the stars. It’s not just about numbers; it’s about
propelling humanity into the cosmic theater. So, whether you’re an aerospace engineer plotting trajectories, a space enthusiast dreaming of exploration, or simply intrigued by the vastness beyond,
let the Rocket Equation Calculator be your guide. It’s more than a tool; it’s a ticket to the cosmos, where each Δv calculation propels us closer to the frontiers of space exploration. Embrace the
numbers, chart your course, and let the allure of the Rocket Equation Calculator inspire your journey into the celestial unknown. | {"url":"https://calculatorwow.com/rocket-equation-calculator/","timestamp":"2024-11-12T16:01:28Z","content_type":"text/html","content_length":"66036","record_id":"<urn:uuid:e7b0847f-3cf9-49e5-8252-861da0a8ec6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00694.warc.gz"} |
Simple Trick to End the Frustration With Subtraction Regrouping
One math concept that often stumps students is subtracting with borrowing (or regrouping-whatever you’d like to call it). There are lots of concrete and hands-on ways to teach this concept so that it
makes sense to kids, rather than expecting them to memorize a procedure. In the past, I have used base ten blocks or dimes and pennies as a model, which worked well, and I highly recommend teaching
this in a conceptual way when introducing it to students. Today, however, I would like to share a trick that can help ease the frustration with subtraction, and actually remove the need to “borrow”
altogether, and I hope you will find it helpful!
Simple Trick to End the Frustration with Subtraction:
Ok, so I’ll be honest, I’m really not a fan of teaching math “tricks.” Often, they are short cuts that only work under particular circumstances (which are not always made clear to students), and thus
cause students confusion later.
Some of these tricks were especially frustrating as a high school teacher, because in the midst of solving a complex algebra problem, my students would need to do some sort of computation with
fractions, but could not remember the “trick” they learned, and would therefore solve the problem incorrectly.
So just to be clear, this is not really a “trick,” per se, just an observation and mathematical truth that students can use to potentially make subtraction easier.
To do this, however, they have to have a clear understanding of place value, and an understanding of subtraction and what it represents. Then they will be able to see that the problem will be
easier, but the solution will be the same (therefore, solve the easier problem).
For example, if you are given the problem 21-11, that is the same as 20-10. Or if you have the problem 47-17, that is equal to 40-10.
Understanding why this is the case requires algebraic thinking, and will provide a good foundation for solving equations later on (i.e. you can do any operation to one side of the equals sign as long
as you do the same thing to the other side).
If you are teaching this tip to your students, start with very simple problems like this, and model why this is true on a number line.
One way to think of subtraction is the distance between two numbers. So essentially, as an example, we’re saying that the distance between 240 and 228 is the same as the distance between 239 and 227.
Therefore, I can subtract 239-227 to get the same answer.
So why does this make subtraction easier?
When you get to large numbers and problems that require borrowing (especially with zeros), this process can make things simpler with one small step: subtract 1 first.
Here’s an example that would require a lot of borrowing, and it can be very easy for students (especially those who struggle) to get bogged down in the details and what borrowed number goes where:
Because there are so many zeros, to regroup would require going all the way to the thousands place, and then working back. Instead, subtract 1 from each number first:
This changes the problem to this (which, remember, will give the same solution):
Now, when subtracting these numbers by hand, there is no borrowing or regrouping required.
Obviously, subtraction problems that have lots of zeros will work best to simply subtract 1 from each number.
If you’ve explained why this works, however, students can use this same logic to simplify any subtraction problem, and either make the regrouping less cumbersome, or remove the need to regroup
Here’s another example that is not so nice and clearcut:
Simply subtracting 1 from each number doesn’t actually change the fact that you have to borrow or regroup. If, however, you subtract 3 from each number:
This changes the problem to this:
Now, there is still some regrouping required, but it’s not as complicated.
The basic goal of this is to create an easier problem and then solve it. By making the digits of the top number (or at least some of them) larger than the digits beneath them, you can reduce or remove the need to regroup and make the subtraction easier.
Not only will this help students increase their number sense and mental math skills, but it will help them to better understand subtraction, as well as build a foundation for later algebra learning.
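For readers who like to verify the idea, here is a tiny check of the "same difference" property (the numbers are my own illustration, not the ones from the post's images):

```python
# Subtracting the same amount k from both numbers never changes the difference:
# a - b == (a - k) - (b - k).
a, b = 3000, 1578
for k in (1, 3, 78):
    assert a - b == (a - k) - (b - k)

print(3000 - 1578, "==", 2999 - 1577)  # 1422 == 1422, and 2999 - 1577 needs no regrouping
```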
I would encourage you to start a discussion with your kids to see how they might explain this “trick” and why it works, as well as let them play with numbers to come up with their own way to simplify
subtraction problems.
As always, there is always more than one way to solve a math problem, and there will always be more than one way to simplify problems like these. So if your students try something different, check it
for accuracy, and then encourage them to “prove” that it will work in all cases.
And most of all, have fun playing with numbers!
What are some helpful ways you have found to teach subtraction? Do you have a trick to end the frustration with subtraction? Share it with us!
Want more tips and encouragement for teaching kids to be confident problem solvers? Subscribe to my weekly email newsletter and get my ebook, Strategies for Problem Solving: Equip Kids to Solve Math
Problems With Confidence, for FREE!
7 Comments
1. I wish I’d known this one when I was in school – I hated that borrowing stuff! Thanks for sharing at the Thoughtful Spot!
2. I love your commentary about not teaching tricks. I’m going to explore how to help fourth graders understand this strategy. It seems on par with how useful it is for student to use the strategy
of doubling and halving in multiplication.
1. Thank you Amy! I hope your students are able to understand and apply this strategy and that it makes things easier for them. 🙂
3. I am a second grade teacher and we introduce 2 digit addition with subtraction. i am behind right now because my students struggled with two digit addition. I am looking for ways to better teach
my kiddos as well as getting them there quicker.
4. I agree with you Bethany. Nothing should be presented as a trick; there is always a solid reasoning behind every trick in mathematics.
For the subtraction problem 352 – 198 , one could increase both numbers by 2 and have an easy subtraction, 354 – 200, that can be done mentally.
1. Thank you! Yes, that’s an excellent example, thanks for sharing! 🙂
5. This is not a trick. This is a strategy called “compensation”. | {"url":"https://mathgeekmama.com/simple-trick-to-end-the-frustration-with-subtraction-regrouping/","timestamp":"2024-11-14T13:26:30Z","content_type":"text/html","content_length":"197432","record_id":"<urn:uuid:b7ff5ae2-a290-434a-8400-7d73a14008b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00884.warc.gz"} |
Quantity Surveying Archives
Types Of Curves In Surveying Work. What Is A Curve? Curves are the horizontal and/or vertical bends that are used on highways and railways when it is necessary to change the alignment of the route. When two points are located at different levels, it becomes …
Read More » | {"url":"https://rajajunaidiqbal.com/tag/quantity-surveying/","timestamp":"2024-11-11T13:04:23Z","content_type":"text/html","content_length":"246897","record_id":"<urn:uuid:8c4ec330-95a0-4e79-ad08-154b8870bd55>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00686.warc.gz"} |
Comments on God Plays Dice: 51,199,463,116,367: A fuller solution to Tuesday's electoral vote problem
(Blogger Atom comments feed; 9 comments.)

Anonymous (signed "susan"), 2008-06-16: The real issue is not how well Obama or McCain might do in the closely divided battleground states, but that we shouldn't have battleground states and spectator states in the first place. Every vote in every state should be politically relevant in a presidential election. And, every vote should be equal. We should have a national popular vote for President in which the White House goes to the candidate who gets the most popular votes in all 50 states.
The National Popular Vote bill would guarantee the Presidency to the candidate who receives the most popular votes in all 50 states (and DC). The bill would take effect only when enacted, in identical form, by states possessing a majority of the electoral vote -- that is, enough electoral votes to elect a President (270 of 538). When the bill comes into effect, all the electoral votes from those states would be awarded to the presidential candidate who receives the most popular votes in all 50 states (and DC).
The major shortcoming of the current system of electing the President is that presidential candidates have no reason to poll, visit, advertise, organize, campaign, or worry about the voter concerns in states where they are safely ahead or hopelessly behind. The reason for this is the winner-take-all rule which awards all of a state's electoral votes to the candidate who gets the most votes in each separate state. Because of this rule, candidates concentrate their attention on a handful of closely divided "battleground" states. Two-thirds of the visits and money are focused in just six states; 88% on 9 states, and 99% of the money goes to just 16 states. Two-thirds of the states and people are merely spectators to the presidential election.
Another shortcoming of the current system is that a candidate can win the Presidency without winning the most popular votes nationwide.
The National Popular Vote bill has been approved by 18 legislative chambers (one house in Colorado, Arkansas, Maine, North Carolina, Rhode Island, and Washington, and two houses in Maryland, Illinois, Hawaii, California, and Vermont). It has been enacted into law in Hawaii, Illinois, New Jersey, and Maryland. These states have 50 (19%) of the 270 electoral votes needed to bring this legislation into effect.
See http://www.NationalPopularVote.com

Michael Lugo, 2008-06-13: Jeff, you're right -- that would be a better way to do the calculation. The solution I present here basically reproduces the way I got to the answer the first time, so it's not surprising that there are some inefficiencies.

Anonymous, 2008-06-13: Never mind, the number is right. I misunderstood what you meant when you said you repeated the process with 4, 5, 6, etc. as the minimum. Obviously for 4 you added in the 273 coefficient as well as the 270, 271 and 272 ones.
That being said, I think you made the process a bit harder than you needed to. Instead of doing the subtraction, just add the 270, 271, 272 coefficients for the full generating polynomial, then the 273 coefficient for the 4+ EV polynomial, the 274 coefficient for the 5+ EV polynomial, and so on.

Anonymous, 2008-06-13: Ok, now I do actually have a disagreement with the result. What you're computing is not the number of minimal winning sets.
For the 270, 271, or 272 EV totals, it doesn't matter what the smallest state is, because if you remove any of those you lose. Instead you want to add up:
1) All ways to get 270
2) All ways to get 271
3) All ways to get 272
4) All ways to get 273 without any 3 EV states
5) All ways to get 274 without any 3 or 4 EV states
etc...
The generating function approach works very nicely for this as well, but the total is going to be different.

Michael Lugo, 2008-06-13: Jeff, the number of EVs needed to win is 270, and no state has less than 3 EV. So 270, 271, and 272 are all essentially the same, in that if a candidate gets any of those numbers they need every state they got.

Anonymous, 2008-06-13: Most of this looks good, but I'm a little confused by something. I get why you want to remove the solutions for 271 and 272 with no 3-EV states because you could then swap a state for a lower valued one. But why are you removing those solutions for the 270 total? If you're exactly at 270, pretty much by definition there are no excess votes no matter what the combination of states is.

rsc, 2008-06-13: Thanks for the post. I've written up the dynamic programming approach here: http://research.swtch.com/2008/06/electoral-programming.html

CarlBrannen, 2008-06-13: Congratulations to all! I would have guessed a much smaller number, until the comment on the size of 2^52.

Anonymous, 2008-06-12: I'm the anonymous author of the Perl dynamic programming solution posted as a comment to your earlier post. Just wanted to add two points: I think it's surprising how quick a good dynamic solution can run. The Perl runs nearly instantaneously on any modern computer. Second, it is critical to memoize (which is, of course, sound dynamic programming practice). I don't think everyone who attempted a solution remembered to do that, even if they were otherwise on the right track in terms of how to divide and conquer the problem.
Oh, and, my first thought about your solution, as a programmer, was exactly what you expected: that the polynomial calculation was basically just a framework for employing a dynamic programming tool provided by your computer algebra software of choice. But seriously, it was interesting to see how you approached the problem, and this has been one of my personal favorites of your posts. | {"url":"https://godplaysdice.blogspot.com/feeds/6335600020794916929/comments/default","timestamp":"2024-11-08T21:22:29Z","content_type":"application/atom+xml","content_length":"21877","record_id":"<urn:uuid:52434157-f311-407a-a19b-8feaa77cc62a>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00690.warc.gz"}
With a Velocity of Sound: Conquering Transonics
Long-range passenger airplanes do not fly with a supersonic velocity today. “Supersonic” projects TU-144 and Concorde turned out to be cost-inefficient. The first attempt to break the sonic barrier
for peaceful purposes failed, because when an airplane approaches the sonic barrier, the drag drastically increases, while the lift force decreases. If we don’t have “supersonics”, what is it that we
have? It’s “transonics”.
Transonic velocities cover the range slightly higher and slightly lower than the velocity of sound (approximately from 0.8 to 1.2 of the velocity of sound). The term had to be introduced to describe
the transitional flow regime, with part of the flow becoming supersonic and the other part of the flow remaining in the subsonic regime
In the high and far-off times
Investigations aimed at increasing the flight velocity and lift-to-drag ratio of airplanes were started back in the 1930s. At the turn of the 1930s—1940s, it was experimentally proved that the drag
of wings and other airplane elements drastically increases with increasing flight velocity. Moreover, the behavior of the lift force and the pitching moment also becomes unpredictable. The reason for
these phenomena was found to be the emergence of flow regions where the velocity of air with respect to the body is greater than the velocity of sound. The airplane velocity at which supersonic flows
arise near the airplane surface was called the critical velocity.
Theoretical concepts on the lift and drag forces of the wing at subcritical velocities were shaped under the influence of ideas of N. E. Zhukovskii, an outstanding Russian scientist and the founder
of aerodynamics: the wing possesses a lift force because the velocity and, hence, rarefaction (decrease in pressure) near the upper surface is greater than that near the lower surface. The lift force
is the difference in pressure at the lower and upper surfaces of the wing. The drag force of infinite-span wings consists of two components: friction drag and frontal drag resulting from incomplete
recovery of pressure at the rear part of the wing. These forces are absent in an ideal gas flow around the wings. Finite-span wings also have the so-called inductive drag directly related to the
presence of the lift force.
These classical concepts, however, turned out to be insufficient to explain the phenomena observed at flight velocities higher than the critical value. The physical reason for the coincidence of the
increase in drag and the emergence of supersonic velocity near the wing surface was not understood either.
By the time the problems arising at critical velocities were recognized, investigations taking into account the effect of compressibility (decrease in gas density with increasing flow velocity) on
the pressure distribution over the wing surface were already under way. L. Prandtl, a German physicist, an eminent expert in aerodynamics of that time, suggested that a correction should be
introduced to recalculate the pressure and lift force of the airfoil from data for this airfoil in an incompressible gas flow. Experiments showed, however, that Prandtl’s theory was invalid for flow
velocities higher than the critical value.
The flow pattern around the wing and the pressure distribution in a subcritical air flow are essentially different from the regime formed at supercritical velocities. There are typical plots of
subcritical and supercritical flows. Shock waves arise every time when the particles in a supersonic gas flow hit the body surface or change the direction of their motion by a finite angle within
extremely small distances comparable with the mean free path of gas molecules.
The pictures of airplanes overcoming the supersonic barrier clearly display barrel shocks arising in supercritical flight, which depend on the wing shape. When an air molecule enters a narrow layer
containing a shock wave, part of kinetic energy transforms to thermal energy as a result of inelastic interaction of molecules with each other. As the kinetic energy of gas decreases behind the shock
wave, the total pressure also decreases. In thermodynamics, this process is called irreversible. The entropy S is used as the measure of irreversibility.
The entropy of gas increases in the shock wave. The entropy increment equals the ratio between the amount of kinetic energy transformed to thermal energy owing to inelastic interaction of particles
and the absolute temperature of the gas. Since the total energy of the gas remains unchanged and, hence, the total temperature is T[0]=const, the total pressures behind the shock wave p[02] and ahead
of the shock wave p[01] are related as p[02]= p[01]•exp(-∆S/R), where ∆S is the entropy increment in the shock wave and R is the universal gas constant.
Thus, the total pressure of the gas decreases as it passes through the shock wave. This was used further to explain the reason for the increase in the drag of airfoils in a transonic flow. Shock
waves are also responsible for the “sonic boom” phenomenon observed in supersonic flight.
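To get a quantitative feel for this loss, one can evaluate the standard normal-shock total-pressure ratio (a textbook relation consistent with the entropy argument above, though not derived in this article); air with γ = 1.4 is assumed in the sketch below.

```python
def total_pressure_ratio(M1: float, gamma: float = 1.4) -> float:
    """Total pressure ratio p02/p01 across a normal shock with upstream Mach number M1."""
    a = ((gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * M1**2 - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

for M1 in (1.0, 1.2, 1.5, 2.0):
    print(M1, round(total_pressure_ratio(M1), 3))
# 1.0, 0.993, 0.93, 0.721 -- the loss of total pressure grows quickly as the shock strengthens
```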
TsAGI and problem solution
In 1940, the team headed by Academician S. A. Khristianovich, who worked at that time at TsAGI, the Central Hydroaerodynamics Institute named after Academician N. E. Zhukovskii — the greatest
state-supported research center of aircraft in Russia — calculated the drag force induced by the presence of shock waves generated by a supersonic flow transforming to a subsonic flow. This
phenomenon was called the wave drag.
It turned out that the shock wave leads to a decrease in pressure in the rear part of the airfoil, which increases the drag force of the body. To validate the theory, it was necessary to perform
experiments, for which a wind tunnel with transonic velocities in the test section had to be created.
When working on the wind tunnel, the scientists encountered a considerable physical restriction: in the transonic flow around the airfoil, the shock waves were found to reflect from the test-section
walls and to impinge onto the model surface, substantially changing the flow structure. To avoid this problem, S. Khristianovich developed the theory of “short” waves, which allows one to solve
problems of interaction of shock waves with various surfaces. Semipermeable surfaces were found to attenuate considerably the intensity of reflected waves. Thus, an idea was put forward to perforate
the walls of the test section of the transonic wind tunnel.
The world’s first wind tunnel of this kind was created at TsAGI in 1946. Now wind tunnels with perforated walls are an integral part of the equipment of aerodynamic laboratories all over the world. These wind
tunnels allow the researchers to obtain aerodynamic characteristics of wing and fuselage models in the transonic range of free-stream Mach numbers,* thus providing a continuous transition through the
velocity of sound.
Then, in a short time S. Khristianovich and his team solved the problem of the influence of flow compressibility on the pressure distribution over the wing. The fundamental law of stabilization was
established: as the critical velocity is reached, the growth rate of velocity near the airfoil surface becomes less intense than the growth rate of the free-stream velocity. Then the velocity ceases
to grow, and the distribution of Mach contours over the airfoil surface, between its leading edge and the shock wave, remains constant and independent of the free-stream velocity. This distribution
is called the limiting distribution of Mach numbers, and it is used to calculate the “limiting pressure curve.”
This law is illustrated by the distribution along the airfoil of the ratio of the flow pressure at a point on the airfoil to the pressure at the stagnation point p/p[0]. This ratio is related to the
Mach number by the expression p/p[0] = (1 + (γ − 1)/2 · M²)^(−γ/(γ−1)), where γ is the ratio of the gas specific heats at constant pressure and constant volume. If the Mach number near the surface remains
unchanged, the pressure also retains its constant value, which is shown in the plot with the pressure distribution over the upper surface of the airfoil.
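A few values of this relation, evaluated numerically for air (γ = 1.4 assumed; the snippet is an illustration, not part of the original article):

```python
def p_over_p0(M: float, gamma: float = 1.4) -> float:
    """Isentropic ratio of static to stagnation pressure at local Mach number M."""
    return (1 + (gamma - 1) / 2 * M**2) ** (-gamma / (gamma - 1))

for M in (0.0, 0.5, 1.0, 1.5):
    print(M, round(p_over_p0(M), 3))
# 1.0, 0.843, 0.528 (the critical ratio at M = 1), 0.272
```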
The results obtained allowed S. Khristianovich to develop a method to calculate the aerodynamic characteristics of transonic airfoils, based on their characteristics in an incompressible flow. Using
this method, one could calculate the limiting pressure curve and then the aerodynamic characteristics at a Mach number equal to unity, with subsequent recalculation to other transonic Mach numbers.
(It should be noted that there were no computers at that time, and all calculations were performed with slide rules or adding machines.)
The law of stabilization implies that the rarefaction ceases to increase in the range of supersonic velocities near the airfoil tip, and then the rarefaction decreases with increasing Mach number of
the incoming flow or M∞. An increase in the rarefaction on the upper surface of the airfoil occurs for the reason of expansion of the region with supersonic velocities as the shock wave is shifted
toward the airfoil tail. At the same time, the rarefaction on the lower surface of the airfoil, where the velocity is still subsonic, continues to increase greatly with increasing M∞.
This leads to deceleration of the growth and then to a decrease in the lift force and pitching moment of the wing, as can be seen from the lift force coefficient plotted as a function of the
free-stream Mach number. The drag, vice versa, starts growing because of the decrease in rarefaction in the fore part of the airfoil and the emergence of a rarefaction zone near the airfoil tail.
Understanding of the physical nature of such flow regimes allowed practical design of airfoils and wings with minimized adverse effects. One step in this direction was the use of airfoils with a
smaller relative thickness and swept wings with cross sections along which the flow is directed having a smaller thickness than cross sections located perpendicular to their leading edge.
From the mathematical viewpoint, this is as follows: if the free-stream velocity is broken down into components, one parallel to the leading edge of the wing and the other perpendicular to it, the
component parallel to the wing span does not affect the pressure distribution over the wing. In this case, the flow around the wing proceeds as if the incoming flow has a velocity lower than the
free-stream velocity, which favors the influence of flow compressibility on aerodynamic characteristics. A comprehensive theory of the flow around swept wings was developed by Academician V. V.
Experimental evidence of this theory can be found on the plot of the drag coefficient for swept wings as a function of the Mach number for different sweep angles.
Conquering “transonics”
Later there appeared a possibility of computer modeling of air flows by solving gas-dynamic and boundary-layer equations numerically. This allowed TsAGI to develop the so-called supercritical
airfoils, which ensured a higher flight velocity with a prescribed thickness and a prescribed lift force. The basis for creating such airfoils was attenuation of perturbations introduced into the
flow by the upper surface of the airfoil, which resulted in an increase in MCR. A small curvature of the upper surface of the supercritical airfoil, however, decreases the fraction of the lift force
generated by this surface. To compensate for this phenomenon, the tail part of the lower surface is “clipped off,” which is a typical feature of this class of airfoils.
It is owing to the increase in pressure at the tail part of the airfoil lower surface that the lift force decreasing in the middle part of the upper surface is compensated (the “effect of a flap”). A
low level of velocities on the upper surface of supercritical airfoils leads, in a transonic flow, to the formation of a local supersonic zone with smaller acceleration of the flow and with a
downstream shift of the barrel shock.
All these factors decrease the shock-wave intensity (the pressure difference on the shock wave) and the wave drag. As a result, a supercritical airfoil may ensure a certain gain in terms of flight
velocity: the value of MCR may be increased for a prescribed maximum relative thickness of the airfoil. An important performance property of supercritical airfoils of the second generation is that
MCR is independent of the lift force.
The plots with distributions of the pressure coefficient over the upper surface of various airfoils and dependences of the coefficient of their wave drag on the Mach number illustrate the evolution
of the distributions of the pressure and wave-drag coefficients in passing from conventional to supercritical airfoils. Another aspect of using supercritical airfoils, which are widely used in the
industry producing modern and advanced aircraft, is the possibility of increasing the relative thickness of the airfoil with the value of Mcr being unchanged.
Advanced high-velocity airfoils allow the value of MCR to be increased by 0.05—0.12 or the maximum relative thickness to be increased by 2—5 % of the airfoil chord. The fuel used during the flight is
poured into tanks located in the wings; therefore, the wing thickness is an extremely important structural parameter. The use of supercritical airfoils in combination with swept wings can be
considered as one of the main today’s directions of improving the aerodynamics of passenger and cargo aircraft.
Several series of airfoils characterized by the maximum critical flight Mach number were designed at TsAGI and the Khristianovich Institute of Theoretical and Applied Mechanics of the Siberian Branch
of the Russian Academy of Sciences (ITAM SB RAS). A typical feature of such airfoils is a rather long segment of the airfoil upper surface with a sonic velocity of the flow, i. e., M=1. Hence, the
barrel shock can be shifted to the trailing edge of the wing, and the maximum possible decrease in wave drag is reached.
It should be noted that the problems of aerodynamic design call for a comprehensive approach. Thus, flow problems should be solved accurately and rapidly, though the problem of optimization requires
repeated solving of these problems for different configurations. Optimization methods should allow obtaining solutions with allowance for aerodynamic and geometric restrictions within a reasonable
time. For these reasons, new methods had to be developed.
To meet these requirements, new methods to solve gas-flow equations, to generate the computational grid, and to present the varied boundary geometry; as well as an optimization method, were developed
on their basis. A software package was developed at ITAM SB RAS to design optimal airfoils satisfying the above-mentioned aerodynamic and geometric restrictions. By solving the direct optimization
problem, which was reduced to the problem of nonlinear programming with arbitrary initial conditions, pioneering configurations of subsonic airfoils designed for the maximum critical Mach number were
Using these programs, the researchers were able to achieve certain results in the design of airfoils with a considerable relative thickness (18 % and more), which are characterized by the above-noted
physical features of the flow. For the currently used airfoils with a relative thickness of 9—12 %, cruising velocities of 900—950 km/h were reached.
On “hot” wings
New principles and advanced technological procedures are currently used for flow control (e. g., energy supply to the flow). According to theoretical investigations performed at ITAM SB RAS, one can
halve the wave drag by controlling the flow around conventional (not supercritical) airfoils with the use of pulsed periodic energy supply, which will allow achieving the range of higher flight
Combining laser and microwave radiation can provide such a supply of energy. Laser radiation initiates minor ionization of the flow, which is sufficient for effective absorption of microwave
To elucidate the reasons for such a significant decrease in drag, we have to consider both the process dynamics and the steady-state periodic regime of the air flow. The series of plots for the
variation of the supersonic zone size and barrel shock intensity in the case of energy supply shows the field of Mach numbers in the flow around a symmetric airfoil.
The last graph shows that the resultant barrel shock is stabilized ahead of the energy-supply zone with insignificant streamwise oscillations caused by a periodic supply of energy. The barrel shock
intensity is lower than the shock intensity if the energy is not supplied, because it is formed at lower Mach numbers. For the same reason, the gas passing through the shock wave loses a smaller
amount of kinetic energy, thus providing a higher total pressure in the tail part of the airfoil, which involves a decrease in the frontal drag.
Energy supply favors not only the flow reconstruction described above, but also an independent increase in total pressure of the gas p[01] owing to an instantaneous increase in temperature in the
volume. Our estimates show that the required power of supplied energy is small as compared with the power of the incoming flow. This fact seems to be extremely important, because it guarantees high
effectiveness of this method of controlling the flow around the airfoil.
The physical mechanism of decreasing the wave drag of the airfoil owing to energy supply differs from the mechanism of supercritical airfoils. For supercritical airfoils, the decrease in wave drag is
achieved by shifting the barrel shock to the tail part of the airfoil. The distribution of the pressure coefficient along the airfoil chord with and without energy supply in different zones of the
airfoil demonstrates that much greater values of pressure are obtained in a larger part of the airfoil, beginning from the frontal point of the energy-supply region.
The aerodynamic performance of the object under study is usually estimated by the dependence of the drag coefficient C[x] (otherwise, aerodynamic polar) on the lift force coefficient C[y]. The
aerodynamic polars of the airfoil with asymmetric energy supply at the lower surface only are significantly different from the polars without energy supply for different angles of attack. With such
energy supply, the required lift force can be reached owing to lower wave drag, which increases the lift-to-drag ratio of the airfoil.
It is of interest that the drag coefficient becomes stabilized with a monotonic increase in the energy being supplied. The point corresponding to the fore part of the stabilized segment indicates the
optimal flight regime from the viewpoint of the maximum range and with allowance for the increase in the lift-to-drag ratio and the decrease in fuel consumption for gas heating.
At this point, the lift force coefficient is smaller than at the maximum lift-to-drag ratio without energy supply. Therefore, the cruising flight with energy supply has to be performed at lower
altitudes than the flight without energy supply: this follows from the condition that the aerodynamic lift force should be equal to the airplane weight.
The fact of stabilization of the drag coefficient also allows one to control the lift force with an unchanged value of the wave drag. In the case of supercritical airfoils, it seems reasonable to
supply energy to the gas flow only on the lower surface of the airfoil, because the barrel shock on the upper surface is shifted toward the trailing edge of the wing. Using this approach, one can
increase the flight range up to 15 %!
Problems associated with overcoming the supersonic barrier for peaceful purposes still remain urgent. An important challenge of today’s civilization is to fly faster and further and to spend less
time and money on such flights.
Though transonic velocities have not been “conquered” yet, progress in this direction is obvious, and soon we can expect new approaches to solving the problems formulated long ago by Academician
Khristianovich. Passenger aviation is now facing the sonic barrier, and this barrier will certainly be overcome, like many others in the “technological” history of our civilization.
*The dimensionless characteristic of velocity, which is called the Mach number, is the ratio of flight velocity to velocity of propagation of acoustic waves. The Mach number corresponding to the
critical flight velocity is called the critical Mach number µk.
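In symbols (a restatement of the definition above, not an addition from the original article), the Mach number is

M = V / a,

where V is the flight velocity and a is the speed of sound; the critical Mach number is the value of M at which the local flow over the airfoil first reaches the speed of sound.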
Concord Eye-Q 4363z
Brand: Concord
Model: Eye-Q 4363z
Megapixels: 4.06
Sensor: 1/1.8" (~ 7.11 x 5.33 mm)
Sensor info
Concord Eye-Q 4363z comes with a 1/1.8" (~ 7.11 x 5.33 mm) CCD sensor, which has a diagonal of 8.89 mm (0.35") and a surface area of 37.90 mm².
Pixel density
10.68 MP/cm²
Actual sensor size
This is the actual size of the Eye-Q 4363z sensor: ~7.11 x 5.33 mm
The sensor has a surface area of 37.90 mm². There are approx. 4,060,000 photosites (pixels) on this area. Pixel pitch, which is a measure of the distance between pixels, is 3.06 µm. Pixel pitch tells you the distance from the center of one pixel (photosite) to the center of the next.
Pixel or photosite area is 9.36 µm². The larger the photosite, the more light it can capture and the more information can be recorded.
Pixel density tells you how many million pixels fit or would fit in one square cm of the sensor. Concord Eye-Q 4363z has a pixel density of 10.68 MP/cm².
These numbers are important in terms of assessing the overall quality of a digital camera. Generally, the bigger (and newer) the sensor, pixel pitch and photosite area, and the smaller the pixel density, the better the camera.
Brand: Concord
Model: Eye-Q 4363z
Megapixels: 4.06
Sensor size: 1/1.8" (~ 7.11 x 5.33 mm)
Sensor type: CCD
Sensor resolution: 2324 x 1747
Max. image resolution: 2272 x 1704
Crop factor: 4.87
Optical zoom: Yes
Digital zoom: Yes
ISO: Auto, 100, 200, 400
RAW support:
Manual focus:
Normal focus range: 60 cm
Macro focus range: 10 cm
Focal length (35mm equiv.): 35 - 105 mm
Aperture priority: No
Max aperture: f2.8 - f4.7
Max. aperture (35mm equiv.): f13.6 - f22.9
Depth of field: simulate →
Metering: Matrix, Spot
Exposure Compensation: ±2 EV (in 1/3 EV steps)
Shutter priority: No
Min. shutter speed: 4 sec
Max. shutter speed: 1/2000 sec
Built-in flash:
External flash:
Viewfinder: Optical
White balance presets: 4
Screen size: 1.5"
Screen resolution: 78,000 dots
Video capture:
Storage types: Secure Digital
USB: USB 1.1
Battery: 2x AA
Weight: 150 g
Dimensions: 100 x 61 x 31.5 mm
Year: 2004
The diagonal of the Eye-Q 4363z sensor is not 1/1.8" or 0.56" (14.1 mm) as you might expect, but approximately two thirds of that value: 0.35" (8.89 mm). If you want to know why, see the article on sensor sizes.
Diagonal is calculated by the use of the Pythagorean theorem:
Diagonal = √(w² + h²), where w = sensor width and h = sensor height
Concord Eye-Q 4363z diagonal:
w = 7.11 mm
h = 5.33 mm
Diagonal = √(7.11² + 5.33²) = 8.89 mm
Surface area
Surface area is calculated by multiplying the width and the height of a sensor.
Width = 7.11 mm
Height = 5.33 mm
Surface area = 7.11 × 5.33 = 37.90 mm²
Pixel pitch
Pixel pitch is the distance from the center of one pixel to the center of the next measured in micrometers (µm). It can be calculated with the following formula:
Pixel pitch = (sensor width in mm × 1000) / (sensor resolution width in pixels)
Concord Eye-Q 4363z pixel pitch:
Sensor width = 7.11 mm
Sensor resolution width = 2324 pixels
Pixel pitch = (7.11 × 1000) / 2324 = 3.06 µm
Pixel area
The area of one pixel can be calculated by simply squaring the pixel pitch:
Pixel area = pixel pitch²
You could also divide the sensor surface area by the effective megapixels:
Pixel area = sensor surface area in mm² / effective megapixels
Concord Eye-Q 4363z pixel area:
Pixel pitch = 3.06 µm
Pixel area = 3.06² = 9.36 µm²
Pixel density
Pixel density can be calculated with the following formula:
Pixel density = (sensor resolution width in pixels / sensor width in cm)² / 1,000,000
You could also use this formula:
Pixel density = (effective megapixels × 1,000,000 / sensor surface area in mm²) / 10,000
Concord Eye-Q 4363z pixel density:
Sensor resolution width = 2324 pixels
Sensor width = 0.711 cm
Pixel density = (2324 / 0.711)² / 1000000 = 10.68 MP/cm²
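As a cross-check, the three formulas above can be reproduced with a few lines of Python (this snippet is illustrative only and is not part of the original page):

    sensor_width_mm = 7.11
    resolution_width_px = 2324
    pixel_pitch_um = sensor_width_mm * 1000 / resolution_width_px                # ~3.06 µm
    pixel_area_um2 = pixel_pitch_um ** 2                                         # ~9.36 µm²
    pixel_density = (resolution_width_px / (sensor_width_mm / 10)) ** 2 / 1e6    # ~10.68 MP/cm²
    print(round(pixel_pitch_um, 2), round(pixel_area_um2, 2), round(pixel_density, 2))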
Sensor resolution
Sensor resolution is calculated from sensor size and effective megapixels. It's slightly higher than maximum (not interpolated) image resolution which is usually stated on camera specifications.
Sensor resolution is used in pixel pitch, pixel area, and pixel density formula. For sake of simplicity, we're going to calculate it in 3 stages.
1. First we need to find the ratio between horizontal and vertical length by dividing the former by the latter (aspect ratio). It's usually 1.33 (4:3) or 1.5 (3:2), but not always.
2. With the ratio (r) known we can calculate X from the formula below, where X is the vertical number of pixels:
(X × r) × X = effective megapixels × 1,000,000 → X = √(effective megapixels × 1,000,000 / r)
3. To get the sensor resolution we then multiply X with the corresponding ratio:
Resolution horizontal: X × r
Resolution vertical: X
Concord Eye-Q 4363z sensor resolution:
Sensor width = 7.11 mm
Sensor height = 5.33 mm
Effective megapixels = 4.06
r = 7.11 / 5.33 = 1.33
X = √(4.06 × 1,000,000 / 1.33) = 1747
Resolution horizontal: X × r = 1747 × 1.33 = 2324
Resolution vertical: X = 1747
Sensor resolution =
2324 x 1747
Crop factor
Crop factor or focal length multiplier is calculated by dividing the diagonal of 35 mm film (43.27 mm) with the diagonal of the sensor.
Crop factor = 43.27 mm / sensor diagonal in mm
Concord Eye-Q 4363z crop factor:
Sensor diagonal = 8.89 mm
Crop factor = 43.27 / 8.89 = 4.87
35 mm equivalent aperture
Equivalent aperture (in 135 film terms) is calculated by multiplying lens aperture with crop factor (a.k.a. focal length multiplier).
Concord Eye-Q 4363z equivalent aperture:
Crop factor = 4.87
Aperture = f2.8 - f4.7
35-mm equivalent aperture = (f2.8 - f4.7) × 4.87 = f13.6 - f22.9
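The remaining derived figures (sensor resolution estimate, crop factor and 35 mm equivalent aperture) can be checked the same way; small differences from the page's numbers come only from how the aspect ratio is rounded (illustrative Python, not part of the original page):

    import math
    sensor_width_mm, sensor_height_mm = 7.11, 5.33
    effective_mp = 4.06
    r = sensor_width_mm / sensor_height_mm                        # aspect ratio, ~1.33
    x = math.sqrt(effective_mp * 1_000_000 / r)                   # vertical pixels, ~1745 (page: 1747)
    print(round(x * r), round(x))                                 # ~2327 x 1745 (page: 2324 x 1747)
    diagonal_mm = math.hypot(sensor_width_mm, sensor_height_mm)   # ~8.89 mm
    crop_factor = 43.27 / diagonal_mm                             # ~4.87
    print(round(crop_factor, 2))
    print(tuple(round(f * crop_factor, 1) for f in (2.8, 4.7)))   # ~(13.6, 22.9)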
The Stochastics Group
Probability theory and stochastic processes
Associate professor
Operations research, optimization, logistics
Machine learning, statistical learning, statistical methods in molecular biology, stochastic processes.
Associate professor
Topological data analysis, large deviations theory, spatial random networks, stochastic geometry, spatial statistics
Associate professor (on leave)
Nonparametric statistics, stochastic processes, statistical learning
Dirichlet forms, Differential geometry, Random matrices, Stochastic analysis
Associate professor
Probability theory, stochastic processes, asymptotic theory
Statistics, asymptotic theory, inference for high dimensional data
Associate professor (on leave)
Applied probability, stochastic processes, extremes, distributional robustness, simulation, high-frequency statistics
Tenure Track Assistant Professor
Insurance mathematics, applied probability
Associate professor
Mathematical optimization
Associate professor
Statistics, Statistical Learning / Machine Learning, Deep learning for medical images, Monte Carlo Simulation, Bioinformatics
Associate professor
Stochastic geometry, convex geometry, geometric tomography, spatial statistics
Tenure Track Assistant Professor
high-dimensional statistics, random matrix theory
Associate professor, head of the Applied Statistics Laboratory (aStatLab)
Applied statistics
Associate professor
Free probability theory, random matrices, Lévy processes, operator algebras.
Associate professor
Statistics – stochastic geometry, stereology, computational and nonparametric statistics, multiple testing
Type List Compile Time Performance
Soon after writing my first meta programs with C++ templates, I realized that certain programming patterns lead to skyrocketing compile times. I came up with rules of thumb like “Prefer pattern matching over if_else_t” and “Prefer nested type lists over variadic type lists”. But I did not know how much faster each pattern is, I just knew about tendencies. Finally, I sat down to write some compile time benchmarks, and this blog post presents the results.
Creating Lists
A fundamental thing to measure is lists. Everything else which can grow to arbitrary sizes will somehow be implemented using lists. There are different possible ways to implement lists; I present measurements of nested and variadic type lists. This article does not explain how to implement them, but there is already an article about C++ template type lists.
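For readers who have not seen them, here is a rough sketch of the two shapes of type list being benchmarked (simplified; the real definitions live in the linked article, and the namespace names rec_tl and var_tl simply mirror the code shown later in this post):

namespace rec_tl {
    // Nested ("recursive") type list: one template instantiation per element.
    struct null_t {};                        // end-of-list marker
    template <typename Head, typename Tail>
    struct tl {
        using head = Head;
        using tail = Tail;
    };
}

namespace var_tl {
    // Variadic type list: a single template carries all elements.
    template <typename ... Ts>
    struct tl {};
}

// The list (1, 2, 3) of integral-constant-like types T1, T2, T3 would be
//   rec_tl::tl<T1, rec_tl::tl<T2, rec_tl::tl<T3, rec_tl::null_t>>>
//   var_tl::tl<T1, T2, T3>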
The first benchmark just creates lists of rising sizes, and measures how much time that takes. The lists are generated from integer sequences, just like those from this article which explains how to
generate integer sequences at compile time.
I present graphs for doing this inside Metashell, and also using real compilers. Metashell is a great tool for debugging meta programs, or playing around with expressions. It is basically what ghci
is to Haskell, an interactive shell for programming.
Since Metashell also provides a profiling feature, it is tempting to measure performance with it. This turns out to be a bad idea when comparing such performance numbers with real compiler performance: not only are compilers significantly faster than Metashell, but they also generate completely different numbers.
It is generally fine that Metashell instantiates templates more slowly than compilers do. Metashell is meant as a development tool, not as a high performance compiler. However, using it to compare the performance of different algorithms can result in very misleading numbers.
The graphs both have fitted polygonal function overlays. The runtime of generating lists, both nested and variadic types, is obviously within \(\mathcal{O}(n^2)\). This is usually something which
would be considered having linear runtime, because the lists grow linearly.
These numbers turn out to be completely different when measured on real compilers like Clang and GCC:
To my knowledge it is not possible to measure only the actual template instantiation time when using a compiler. Hence i just measured how long it takes the compiler to start, instantiate the
template code, and exit again. These numbers are inherently more noisy than the metashell numbers.
Both GCC and Clang are much faster in instantiating large variadic lists, compared to Metashell. But what is most obvious on this graph, is that nested type lists in turn are much faster than
variadic type lists.
Variadic type lists are easier to read and write for programmers, but this performance penalty makes their use impractical for algorithms. Hence, variadic type lists can nicely be used as the input/output interface to the user (the coder). But in between, they should be converted to nested type lists in order to work efficiently on the data they convey. This article explains how to convert between different type list formats.
The performance numbers of GCC and Clang when instantiating nested type lists, look really similar in this diagram. They actually are, and i do not provide another diagram showing only these two
graphs. A comparison between them based on this data would not be too fair, as these graphs are really noisy. It would be easier to compare with even larger type lists, but i experienced compiler
crashes with even higher numbers.
Filtering Lists
The next thing to measure after creating lists is applying operations on them. I chose to measure how long it takes to apply a filter to a list. The filter itself is rather cheap: I implemented functions which take a list of integer types and return a list of integer types, but in between remove all even numbers.
I wrote one benchmark measuring different implementations (a code snippet appendix follows at the end of the article):
• Filtering the even numbers out using the if_else_t construct
• Filtering the even numbers out using pattern matching
• Generating lists which do not contain even numbers from the beginning
Comparing the same algorithm using if_else_t vs. pattern matching is interesting, because there are obvious performance differences.
I tried to do a fair comparison between filtering nested and variadic type lists. To ensure this, i implemented the if-else/pattern-matching variants once in a way that the same implementation works
on both kinds of lists.
All these algorithms are applied to both nested and variadic type lists. As the list creation benchmark already suggests, the nested variants of these algorithms will be faster. This time, the
differences between Clang and GCC are more significant when looking at the nested variants, hence i present another diagram plotting only these.
There are three obvious clusters in this diagram:
Variadic list operations on GCC
This cluster shows very nicely, that the performance using if_else_t for filtering items is worst, compared to all other variants. Applying pattern matching is indeed faster.
The most performant variant is assembling an already filtered list. This effectively removes the overhead of at first generating a full sequence, and filtering it afterwards.
Variadic list operations on Clang
Here, we see a generally similar pattern compared to the variadic-GCC-cluster before, but it is a bit faster with this compiler. Clang handles variadic type lists faster than GCC does.
Apart from that, the pattern matching style filter operation on the type list is faster, than creating an already filtered list. For some reason. I don’t know.
Nested list operations on Clang/GCC
All of these transformations on nested type lists are generally faster, and they are much faster.
Because the differences are not obvious on the first diagram, they are extracted and plotted on a nested-only diagram:
These numbers are very noisy, because they are near to the general process start time of the compiler executable in the shell.
Apart from that, the two different implementations of list filter operations, and manual filtered list creation have the same performance characteristics like before, when compared to each other.
Interestingly, clang seems to be slower for small type lists, because the time it takes to launch and return to shell is larger. For large type lists (which means they contain about 500 and more
items), clang takes over and compiles faster.
The most important observation here is that the runtime of these algorithms on nested type lists seems to be within \(\mathcal{O}(n)\). Creating variadic type lists alone is already \(\mathcal{O}(n^2)\).
The Implementations
This section shows the implementations of what i actually measured. There’s not much explanation how this works, because i wrote other articles covering that:
Both list filter implementations remove even numbers from the input type list. They are implemented in a way that they can handle both variadic and nested type lists.
Only for creating already filtered lists, there are two different implementations for the different types of type lists.
For all functions, at the very bottom of every example, there is an odds_t using clause, which represents the actual user interface.
template <typename List>
struct odds
{
    static constexpr const int val {head_t<List>::value};
    static constexpr const bool is_odd {(val % 2) != 0};
    using next = typename odds<tail_t<List>>::type;

    // If odd, prepend value to list. Else, skip it:
    using type = if_else_t<is_odd,
                     prepend_t<next, head_t<List>>,
                     next>;
};

// Recursion terminator for nested type lists
template <>
struct odds<rec_tl::null_t>
{
    using type = rec_tl::null_t;
};

// Recursion terminator for variadic type lists
template <>
struct odds<var_tl::tl<>>
{
    using type = var_tl::tl<>;
};

template <typename List>
using odds_t = typename odds<List>::type;
Pattern Matching
// is_odd = true: Prepend item to list
// This is not a template specialization, but there is a template
// specialization afterwards, which assumes is_odd=false.
// Hence, this is an implicit specialization on is_odd=true cases.
template <bool is_odd, typename Head, typename List>
struct odds
{
    using next = typename odds<
                     (head_t<List>::value % 2) != 0,
                     head_t<List>,
                     tail_t<List>>::type;
    using type = prepend_t<next, Head>;
};

// is_odd = false: Skip item
template <typename Head, typename List>
struct odds<false, Head, List>
{
    using type = typename odds<
                     (head_t<List>::value % 2) != 0,
                     head_t<List>,
                     tail_t<List>>::type;
};

// Recursion terminator for nested type lists
// Last element: is_odd = true
template <typename Head>
struct odds<true, Head, rec_tl::null_t>
{
    using type = rec_tl::tl<Head, rec_tl::null_t>;
};

// Recursion terminator for nested type lists
// Last element: is_odd = false
template <typename Head>
struct odds<false, Head, rec_tl::null_t>
{
    using type = rec_tl::null_t;
};

// Recursion terminator for variadic type lists
// Last element: is_odd = true
template <typename Head>
struct odds<true, Head, var_tl::tl<>>
{
    using type = var_tl::tl<Head>;
};

// Recursion terminator for variadic type lists
// Last element: is_odd = false
template <typename Head>
struct odds<false, Head, var_tl::tl<>>
{
    using type = var_tl::tl<>;
};

template <typename List>
using odds_t = typename odds<
                   (head_t<List>::value % 2) != 0,
                   head_t<List>,
                   tail_t<List>>::type;
Filtered List Generation
template <bool is_odd, typename Head, typename List>
struct odds;

template <typename Head, typename TailHead, typename TailTail>
struct odds<true, Head, rec_tl::tl<TailHead, TailTail>>
{
    using next = typename odds<
                     (TailHead::value % 2) != 0,
                     TailHead,
                     TailTail>::type;
    using type = rec_tl::tl<Head, next>;
};

template <typename Head, typename TailHead, typename TailTail>
struct odds<false, Head, rec_tl::tl<TailHead, TailTail>>
{
    using type = typename odds<
                     (TailHead::value % 2) != 0,
                     TailHead,
                     TailTail>::type;
};

template <typename Head>
struct odds<true, Head, rec_tl::null_t>
{
    using type = rec_tl::tl<Head, rec_tl::null_t>;
};

template <typename Head>
struct odds<false, Head, rec_tl::null_t>
{
    using type = rec_tl::null_t;
};

template <typename List>
using odds_t = typename odds<
                   (head_t<List>::value % 2) != 0,
                   head_t<List>,
                   tail_t<List>>::type;
template <bool is_odd, typename Current, typename InList, typename OutList>
struct odds;

template <typename Current, typename InHead,
          typename ... InTail, typename ... Outs>
struct odds<true, Current, var_tl::tl<InHead, InTail...>, var_tl::tl<Outs...>>
{
    using type = typename odds<
                     (InHead::value % 2) != 0,
                     InHead,
                     var_tl::tl<InTail...>,
                     var_tl::tl<Outs..., Current>>::type;
};

template <typename Current, typename InHead,
          typename ... InTail, typename ... Outs>
struct odds<false, Current, var_tl::tl<InHead, InTail...>, var_tl::tl<Outs...>>
{
    using type = typename odds<
                     (InHead::value % 2) != 0,
                     InHead,
                     var_tl::tl<InTail...>,
                     var_tl::tl<Outs...>>::type;
};

template <typename Current, typename ... Outs>
struct odds<true, Current, var_tl::tl<>, var_tl::tl<Outs...>>
{
    using type = var_tl::tl<Outs..., Current>;
};

template <typename Current, typename ... Outs>
struct odds<false, Current, var_tl::tl<>, var_tl::tl<Outs...>>
{
    using type = var_tl::tl<Outs...>;
};

template <typename List>
using odds_t = typename odds<
                   (head_t<List>::value % 2) != 0,
                   head_t<List>,
                   tail_t<List>,
                   var_tl::tl<>>::type;
What I learned from my previous C++ TMP experience, and from these benchmarks:
• Branching using pattern matching is generally faster than if_else_t
• Modifying nested type lists is generally faster than variadic type lists.
• Metashell is fine for debugging C++ TMP code, but not for actual measuring
I hope these insights are also useful for others!
Formatting merge fields to european format | Microsoft Community Hub
Formatting merge fields to european format
I have some numbers, coming from an Excel file, which I want to show in a Word document. I want the numbers to be formatted in a European way, with a comma separating the decimals and the integers.
I found online that it can be done like this: { MERGEFIELD totaal_inclusief\# "0,00"}
However, it doesn't work. It keeps on using the comma to separate thousands from the hundreds.
I modified the numbers in Excel, which doesn't help. Can anybody help?
Help with formula
I am working to port a process that is managed in Excel into Smartsheet.
I want to create a formula that will check two cells in the row; if they both show Filled, I want to check the checkbox. If either or both are any other status, I want the checkbox to remain unchecked.
I want to convert this to a column formula. When I enter this into the cell, it works as expected:
=IF([Status One]1="Filled", IF([Status Two]1 = "Filled", 1, 0))
However, that formula will not convert to a column formula. When I change it to the formula below, I get #Invalid Operation.
=IF([Status One]:[Status One]="Filled", IF([Status Two]:[Status Two] = "Filled", 1, 0))
Can someone point out my error?
Best Answer
• Column formulas can't have row number references. Use @row instead:
=IF([Status One]@row="Filled", IF([Status Two]@row = "Filled", 1, 0))
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• Thank you, @Jeff_Reisman
Much appreciated!
RE: st: Re: hours:minutes:seconds
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
RE: st: Re: hours:minutes:seconds
From "Nick Cox" <[email protected]>
To <[email protected]>
Subject RE: st: Re: hours:minutes:seconds
Date Tue, 17 Dec 2002 10:29:38 -0000
Gary Longton replied to Christa Scholtz and Radu Ban
> > I have a data field where the duration of an event is
> recorded in this
> > format:
> >
> > hour:minute:second
> >
> > eg: an event that lasts 5 hours, 27 minutes and 13
> seconds is 5:27:13.
> >
> > How do I get Stata to convert this into total number of seconds?
> and Radu Ban replied:
> > you can try sth like:
> > if your_time is the variable you have for time
> >
> > gen str8 stime = your_time
> > gen shours = substr(stime, 1, 2) *takes the first two digits
> > gen hours = real(shour) *reads first two digits as number
> > gen sminutes = substr(stime, 4, 2) *takes digits 4 and 5
> > gen minutes = real(sminutes)
> > gen sseconds = substr(stime, 7, 2)
> > gen seconds = real(sseconds)
> >
> > *now add up
> > gen totsecs = 3600*hours + 60*minutes + seconds
> Radu's approach assumes that the original time string will
> always have
> 2-digit hours, which will often be too restrictive, and
> won't work for
> Christa's example.
> An easier one-step approach for parsing the time string into the 3
> component numeric variables would be to use Nick Cox's
> -split- program
> (available on SSC), which could be followed with Radu's
> expression for
> total seconds.
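A minimal sketch of that combination (the exact options of the SSC version of -split- may differ slightly, and your_time stands for the original string variable):

    split your_time, parse(":") destring
    gen totsecs = 3600*your_time1 + 60*your_time2 + your_time3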
Anyone in this territory might want to know of
various -egen- functions in the -egenmore- package
on SSC.
dhms(d h m s) [ , format(format) ] creates a date
variable from Stata date variable or date d with a fractional part
reflecting the number of hours, minutes and seconds past midnight.
h can be a variable containing integers between 0 and 23
inclusive or a single integer in that range. m and s can be variables
containing integers between 0 and 59 or single integer(s) in that
range. Optionally a format, usually but not necessarily a date format,
can be specified. The resulting variable, which is by default stored
as a double, may be used in date and time arithmetic in which the
time of day is taken into account.
elap(time) [ , format(format) ] creates a string variable
which contains the number of days, hours, minutes and seconds
associated with an integer variable containing a number of
elapsed seconds. Such a variable might be the result of date/time
arithmetic, where a time interval between two timestamps has been
expressed in terms of elapsed seconds. Leading zeroes are included
in the hours, minutes, and seconds fields. Optionally, a format
can be specified.
elap2(time1 time2) [ , format(format) ] creates a string variable
which contains the number of days, hours, minutes and seconds
associated with a pair of time values, expressed as fractional days,
where time1 is no greater than time2. Such time values may be produced
by function dhms(). elap2() expresses the interval between these
time values in readable form. Leading zeroes are included in the hours,
minutes, and seconds fields. Optionally, a format can be specified.
hmm(timevar) generates a string variable showing timevar, interpreted
as indicating time in minutes, represented as hours and minutes in
the form "[...h]h:mm". For example, times of 9, 90, 900 and
9000 minutes would be represented as "0:09","1:30", "15:00"
and "150:00". The option round(#) rounds the result: round(1)
rounds the time to the nearest minute. The option trim trims the
result of leading zeros and colons, except that an isolated 0 is
not trimmed. With trim "0:09" is trimmed to "9" and "0:00"
is trimmed to "0".
hmm() serves equally well for representing times in seconds in
minutes and seconds in the form "[...m]m:ss".
hmmss(timevar) generates a string variable showing timevar, interpreted
as indicating time in seconds, represented as hours, minutes and seconds
in the form "[...h:]mm:ss". For example, times of 9, 90, 900 and
9000 seconds would be represented as "00:09","01:30", "15:00"
and "2:30:00". The option round(#) rounds the result: round(1)
rounds the time to the nearest second. The option trim trims the
result of leading zeros and colons, except that an isolated 0 is
not trimmed. With trim "00:09" is trimmed to "9" and "00:00"
is trimmed to "0".
hms(h m s) [ , format(format) ] creates an elapsed
time variable containing the number of seconds past midnight.
h can be a variable containing integers between 0 and 23
inclusive or a single integer in that range. m and s can be variables
containing integers between 0 and 59 or single integer(s) in that
range. Optionally a format can be specified.
tod(time) [ , format(format) ] creates a string
variable which contains the number of hours, minutes and seconds
associated with an integer in the range 0 to 86399, one less than
the number of seconds in a day. Such a variable is produced by
hms(), which see above. Leading zeroes are included in the hours,
minutes, and seconds fields. Colons are used as separators.
Optionally a format can be specified.
Kit Baum ([email protected]) is the author of dhms(), elap(), elap2(),
hms() and tod().
[email protected]
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Table Highlight Using CSS and jQuery: Part 1
In this article I will show you how to add a table highlighting feature to an HTML table. I assume that you have some basic knowledge of HTML, CSS3 and jQuery. If you are not familiar with jQuery then check the links provided at the end of the article. In this article I will show three versions of table highlighting: one is a single-row highlight, the second is an even-row highlight, and the last is an odd-row highlight. So let's start.
Preparing the workspace
Before we proceed, let's first set up our base as in the following:
• Create a new text file and paste the following code in it.
<html>
<head>
<link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1/themes/smoothness/jquery-ui.min.css" rel="stylesheet" type="text/css" />
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1/jquery-ui.min.js"></script>
<meta charset="utf-8" />
<title>Table Highlight by Arpit</title>
<style>
/* we will use this section for adding css classes or any styling */
</style>
</head>
<body>
<!-- HTML will go here -->
<script>
$(document).ready(function () {
// We will use this for adding our jQuery or scripts
});
</script>
</body>
</html>
• Save this file as a HTML file.
Single-row highlighting
Sometimes you have seen on websites that, when you move your cursor over the table rows, the row's color changes or is highlighted. This is what we will do in this section. We don't need any jQuery for it; this effect can be created using a CSS 3.0 transition property.
To create a sample table, copy the following HTML and paste it in our newly created HTML file's HTML section.
<table id="tbl" border="1">
<tr class="row">
<td class="col1">1</td><td class="col2">2</td><td class="col3">3</td>
</tr>
<tr class="row">
<td class="col1"></td><td class="col2"></td><td class="col3"></td>
</tr>
<tr class="row">
<td class="col1"></td><td class="col2"></td><td class="col3"></td>
</tr>
<tr class="row">
<td class="col1">1</td><td class="col2">2</td><td class="col3">3</td>
</tr>
<tr class="row">
<td class="col1"></td><td class="col2"></td><td class="col3"></td>
</tr>
<tr class="row">
<td class="col1"></td><td class="col2"></td><td class="col3"></td>
</tr>
<tr class="row">
<td class="col1">1</td><td class="col2">2</td><td class="col3">3</td>
</tr>
<tr class="row">
<td class="col1"></td><td class="col2"></td><td class="col3"></td>
</tr>
<tr class="row">
<td class="col1"></td><td class="col2"></td><td class="col3"></td>
</tr>
</table>
The code above will create a simple 9×3 (mostly empty) table. To give it some dimensions, add the following styling in the CSS section of our HTML file.
td[class*=col]{
    -webkit-transition:all 0.5s;
}
In the code above the line td[class*=col] is responsible for selecting all the td (cells) using a * wildcard. It will select all "td" tags with a class of col1, col2, col3 and so on. The width and height of each cell are set to 100px and 20px respectively (a fuller version of the CSS, including the hover highlight, is sketched after the notes below).
• The ".row" class is used for setting the default properties of the row.
• "-webkit-transition:all 0.5s" is used for adding the transition effect on each row. -webkit- is a browser-specific prefix for Chrome since CSS3.0 is not fully supported by all browsers (the
Transition effect is supported in most of the browsers so the prefix can be removed).
• Here "all" stands for transition of all CSS properties.
• "0.5s" stands for 0.5 seconds. That is a transition that will take 0.5 seconds to complete.
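Note that the snippet above only shows the transition; the cell dimensions mentioned in these notes and the rule that actually changes the color on hover are not included in the extract. A minimal sketch of the missing pieces could look like this (the highlight color is an arbitrary choice, not necessarily the article's original):

td[class*=col]{
    width:100px;
    height:20px;
}
.row:hover td{
    background-color:#9ecfff; /* any highlight color works here */
}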
You have seen how easy it is to do this highlighting. We didn't use any JavaScript or jQuery. It's a pure HTML and CSS 3.0 based solution. In my next article I'll show you how to highlight each cell individually. So stay tuned. Thanks for reading this article. Don't forget to comment and share.
Design of Safety, High Step-Up DC-DC Converter for AC PV Module Application
Volume 03, Issue 01 (January 2014)
Design of Safety, High Step-Up DC-DC Converter for AC PV Module Application
DOI : 10.17577/IJERTV3IS10461
Download Full-Text PDF Cite this Publication
B. Ashok, J. Mohan, 2014, Design of Safety, High Step-Up DC-DC Converter for AC PV Module Application, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 03, Issue 01 (January
• Open Access
• Total Downloads : 337
• Authors : B. Ashok, J. Mohan
• Paper ID : IJERTV3IS10461
• Volume & Issue : Volume 03, Issue 01 (January 2014)
• Published (First Online): 18-01-2014
• ISSN (Online) : 2278-0181
• Publisher Name : IJERT
• License: This work is licensed under a Creative Commons Attribution 4.0 International License
Text Only Version
Design of Safety, High Step-Up DC-DC Converter for AC PV Module Application
B. Ashok1, J. Mohan2
1PG Student (Power Electronics &Drives), Dept of EEE, Ranganathan Engineering College, Coimbatore,
2Assistant Professor, Dept of EEE, Ranganathan Engineering College, Coimbatore, Tamilnadu. India.
Abstract: In the current scenario, the world is giving importance to renewable energy power generation to meet electricity demand. The power generated by a Photovoltaic (PV) panel is connected to the application through a DC-DC converter and a DC-AC inverter. This paper proposes a new high step-up DC-DC converter whose floating active switch isolates the energy from the PV panel when the AC module is off and also acts as the high state drive. It also regulates the DC interface to the DC-AC inverter. The high step-up voltage conversion ratio is obtained using the turns ratio of a coupled inductor together with an appropriate duty ratio. The energy stored in the leakage inductor is recycled by the magnetizing inductor LM of the coupled inductor to the R load through the output capacitor C3. With a 15 V input voltage, a 200 V output voltage is obtained, and the designed converter circuit attains 98 W of output power. This is better than the efficiency of the conventional converter model.
Key words: Active floating switch, AC module, Coupled inductor, high step up Voltage conversion ratio .
1. Introduction
The uses of non conventional energy sources like solar energy requires a large step up conversion of their low voltage level to the required amount of voltage. A higher DC-link voltage is
obtained by serial connection of large number of PV arrays.
Through the DC-AC inverter the DC voltage can be utilized for the main electricity [4], [5]. An AC module is a micro inverter configured on the rear bezel of PV panel, this will immunizes
against the yield loss by shadow effect. The prior works have proposed the converter shown in fig 1 with single switch and fewer components to fit the dimensions of the bezel of the ac
module, but their efficiency levels are low. The maximum power point voltage (MPPT) range is 12V to 40V which will give as input for the output power capacity range of 50W to 250W.
In case if low voltage derived from the PV panel, then it is difficult for the AC module to reach the high efficiency [8]. Employing a high step-up DC-DC converter in front of the inverter,
this provides the stable DC link to the inverter & power conversion efficiency from one level to another level.
Fig 1 conventional method without floating switch
During daylight installation of a PV panel generation system [1], [2], the potential difference could pose hazards to both the worker and the facilities while installing the AC module. When the AC module is off, a floating active switch is designed to isolate the DC from the PV panel, for grid-connected as well as non-operating conditions. This isolation ensures the operation of the internal components without any residual energy being transferred to the terminals, which could be unsafe. Use of the active clamp technique [25] not only recycles the leakage inductor energy but also constrains the voltage stress across the switch. This also applies to the coupled inductor employed in the voltage-lift or voltage multiplier technique in a circuit [11].
The DC-DC converter requires a large step-up voltage conversion from low voltage obtained from the panel low voltage to the required voltage level for the application. In the previous
research on various converters employed with the switched capacitor type,[3] the voltage-lift type,[4] the capacitor-diode voltage multiplier [5],[6] and the boost type integrated with
coupled inductor [23] these converters by increasing turns ratio of coupled inductor obtain higher voltage gain than conventional boost converter for high step-up applications. Some
converters successfully combined boost and flyback converters [21],[22], some converters, since various converter combinations are developed to carryout high step up voltage gain by using the
coupled-inductor technique [12],[19]. The efficiency and voltage gain of the DC-DC boost converter are constrained by either switches or the reverse recovery issues of the diodes [21], [22].
Fig 2 circuit diagram of proposed converter
The circuit diagram of the proposed converter is shown in Fig. 2. The primary winding N1 of a coupled inductor T1, together with capacitor C1 and diode D1, receives the leakage inductor energy from N1. The secondary winding N2 of the coupled inductor T1 is connected with another pair, capacitor C2 and diode D2, which are in series with N1 in order to further enlarge the boost voltage. The floating active switch S1 is connected to T1. The diode D3 is a rectifier diode which is connected to the output capacitor C3 and the load R.
2. Operation of the Proposed Converter
The following assumptions are made to analysis the circuit diagram of the proposed converter.
1. Expect the leakage inductance of coupled inductor T1, all the components are ideal.
2. The snubber capacitance of S1, The on-state resistance RDS(on) are neglected.
3. The capacitors C1, C2, C3 are sufficiently large that the voltages across them are considered to be
4. The coupled inductor T1 & ESR of capacitors C1 C2 C3 and the parasitic resistance are neglected.
5. The turn ratio n of the coupled inductor T1 windings is equal to N2 /N1.
The proposed converter will be worked in two modes of operation, they are
1. Continuous conduction mode (CCM)
2. Discontinuous conduction mode (DCM).
2.1 Continuous Conduction Mode Operation (CCM) of proposed converter
Mode I [t0 t t1]: In this interval, when S1 is turned ON, the magnetizing inductor Lm continuously charges capacitor C2 through T1. Switch S1 and diode D2 are conducting. The source voltage
Vin crosses magnetizing inductor Lm and primary leakage inductor Lk1 due to the current iLm is decreasing; magnetizing inductor Lm is still transferring its energy through coupled inductor T1
to charge switched capacitor C2, but the energy is decreasing. The charging current iD2 and iC2 are decreasing. The secondary leakage inductor current iLK2 is declining as equal to iLm / n.
Once the increasing iLk1 equals decreasing iLm at t = t1, this mode ends.
Mode II [t1 t t2]: In this interval, N2 is series connected with source energy Vin. C1, and C2 to charge output capacitor C3 and load R, and also magnetizing inductor Lm is also receiving
energy from Vin. Where switch S1 remains ON, only diode D3 is conducting. The iLm, iLk1, and iD3 are increasing because the Vin is crossing Lk1, Lm, and primary winding N1. Lm and Lk1 are
storing energy from Vin, and also Vin is also serially connected with secondary winding N2 of coupled inductor T1, capacitors C1, and C2, and then discharges their energy to capacitor C3 and
load R. The iin, iD3 and discharging current iC1 and iC2 are increasing. This mode ends when switch S1 is turned OFF at t = t2.
Mode III [t2 t t3]: In this interval, when switch S1 is OFF, secondary leakage inductor Lk2 will charge C3. Only diode D1 and D3 are conducting. The energy stored in leakage inductor Lk1
flows through diode D1 to charge capacitor C1 instantly when S1 is OFF and also the energy of secondary leakage inductor Lk2 is series connected with C2 to charge output capacitor C3 and the
load. Because leakage inductance Lk1 andLK2 are smaller than Lm, iLk2 rapidly decreases, but iLm is increasing due to magnetizng inductor Lm is receiving energy from Lk1. Current iLk2
decreases until it reaches zero, this mode ends at t = t3.
Mode IV [t3 t t4]: In this interval, the energy stored in magnetizing inductor Lm is released to C1 and C2 simultaneously. Only diodes D1 and D2 are conducting. Currents iLk1 and iD1 are
continually decreased due to the leakage energy still flowing through diode D1 is charging capacitor C1. The Lm is delivers energy through T1 and D2 to charge capacitor C2. The energy stored
in capacitor C3 is constantly discharges to the load R. These energy transfers result in decreases in iLk1 and iLm but increases in iLk2. This mode ends when current iLk1 is zero, at t = t4.
Mode V [t4 t t5]: In this interval, only magnetizing inductor Lm is constantly discharges its energy to C2 in which only diode D2 is conducting. The iLm is decreasing due to the magnetizing
inductor energy flowing through the coupled inductor T1 to secondary winding N2, so D2 continues to charge capacitor C2. The energy stored in capacitor C3 is constantly discharges to the load
R. This mode ends when switch S1 is turned ON at the beginning of the next switching period.
2.2. Dis-Continuous Conduction Mode Operation of proposed converter
Mode I [t0 t t1]: In this interval, N2, C1, and C2 to charge output capacitor C3 and load R are series connected with source energy VIN, and also magnetizing inductor Lm is also receiving
energy from Vin .which depicts that switch S1 remains ON, and only diode D3 is conducting. The iLm, iLk1, and iD3 are increasing because the Vin is crossing Lk1, Lm, and primary winding N1.
Lm and Lk1 are storing energy from Vin, meanwhile, Vin also is serially connected with secondary winding N2 of coupled inductor T1, capacitors C1, and C2, then they all discharge their energy
to capacitor C3 and load R. The iin, iD3 and discharging current iC1 and iC2 are increasing. This mode ends when Switch S1 is turned OFF at t = t1.
Mode II [t1 t t2]: In this interval, when switch S1 is OFF, secondary leakage inductor Lk2 keeps charging C3. And only diode D2 and D3 are conducting. When S1 is OFF, The energy stored in
leakage inductor Lk1 flows through diode D1 to charge capacitor C1.in the meantime, the energy of secondary leakage inductor Lk2 is series-connected with C2 to charge output capacitor C3 and
the load. as leakage inductance Lk1 andLK2 are lesser than Lm, iLk2 decreases rapidly, but iLm is increasing because magnetizing inductor Lm is receiving energy from Lk1. Current iLk2 reduces
down to zero, and this mode ends at t = t2 .
Mode III [t2 t t3]: In this interval, only diodes D1 and D2 are conducting, the energy stored in coupled inductor T1 is release to C1 and C2. Currents iLk1 and iD1 are constantly decreased as
leakage energy still flowing through diode D1 keeps charging capacitor C1. The Lm is deliver its energy through T1 and D2 to charge capacitor C2.The energy stored in capacitor C3 is
constantly discharged to the load R. These energy transfers cause decreases in iLk1 and iLm but increases in iLk2. This mode ends when current iLk1 reaches zero at t = t3.
Mode IV [t3 t t4]: In this interval, only magnetizing inductor Lm is continually release its energy to C2, also only diode D2 is conducting. The iLm is decreasing due to the magnetizing
inductor energy flowing through the coupled inductor T1 to secondary winding N2, and D2 continues to charge capacitor C2. The energy
stored in capacitor C3 is constantly discharged to the load R. This mode ends when current iLm reach zero at t = t4.
Mode V [t4 t t5]: In this interval, all components are turned OFF, only the energy stored in capacitor C3 is constant to be discharged to the load R. When switch S1 is turned ON this modes
ends and the beginning of the next switching period.
3. Steady state analysis of Proposed Converter
1. CCM Operation
To simplify the analysis, the leakage inductances on the secondary and primary sides are neglected. When S1 is turned ON the voltage across magnetizing inductance LM & N2
VLm = Vin (1)
VN 2 = nVin. (2)
When S1 is turned OFF, the voltages across the magnetizing inductance Lm and N2 are
VLm = VC1 (3)
VN 2 = VC2. (4)
The voltages across capacitors C1 and C2 are obtained as
VC1 = D / (1 - D) Vin (5)
VC2 = nD / (1 - D) Vin. (6)
VO = Vin + VN2 + VC2 + VC1 (7)
The DC voltage gain (MCCM) is:
MCCM = VO / Vin = (1 + n) / (1 - D). (8)
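As a quick numerical check, using the turns ratio and duty cycle quoted later in the paper (n = 5, D = 55 %) together with Vin = 15 V: MCCM = (1 + 5) / (1 - 0.55) ≈ 13.3, so VO ≈ 13.3 × 15 V ≈ 200 V, which is consistent with the simulated CCM output voltage of about 201 V.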
2. DCM Operation
To simplify the analysis, the leakage inductances on the secondary and primary sides are neglected. When S1 is turned ON the voltage across magnetizing inductance LM & N2
VLm = Vin (9)
VN 2 = nVin . (10)
When S1 is turned OFF, the voltages across the magnetizing inductance Lm and N2 are
VLm = VC1 (11)
VN 2 = VC2. (12)
The voltages across capacitors C1 and C2 are obtained as
VC1 = (D / DL) Vin (13)
VC2 = (nD / DL) Vin (14)
VO = ((n + 1)(D + DL) / DL) Vin. (16)
MDCM = VO / Vin = [(n + 1) + √((n + 1)² + 2D²/τL)] / 2 (17)
1. Boundary condition mode (BCM)
The normalized magnetizing inductor time constant τLB is defined as τLB = D(1 - D)² / (2(1 + n)²) = D / (2 MCCM²).
Once the Lm is higher than boundary curve LmB, the proposed converter operates in CCM.
1. Simulation Result
The simulation is done using MATLAB software. The results are obtained from the proposed converter with the following electrical specifications of the circuit components: an applied input DC voltage of Vin = 15 V, and an output DC voltage of Vout = 201 V in CCM and Vout = 287 V in DCM. The output current is Iout = 0.5 A, the switching frequency f = 50 kHz, and the load resistance R = 400 Ω. The capacitor values are C1 = C2 = 47 µF and C3 = 220 µF; the switch S1 used for the simulation is an IGBT, and diodes are used for recycling and rectifying. The turns ratio of the coupled inductor is n = 5, and the duty ratio D is derived as 50%. The magnetizing inductance Lm of the coupled inductor is greater than 30.54 for the full load. The proposed converter maintains a wide range of efficiency when fading of sunlight occurs. The maximum full-load efficiency of the proposed converter in continuous conduction mode is 98%, which is higher than that of the conventional converter.
The fig 3 shows output current & output voltage waveform the output voltage Vout=201V & Iout=0.5A in continuous conduction mode. Fig 4 shows output voltage ¤t Vout=
287V,Iout= 0.5A in discontinuous mode.
The fig 5 shows, the voltage & current waveform of the switch S1, current in capacitor C1, C2 the current in diode D1, D2, D3 in DCM mode of operation and also the fig 6 shows the
voltage & current waveform of switch S1, current capacitor C1,C2 the current in D1,D2,D3 in CCM mode of operation.
Fig 5 output wave form in DCM mode Fig 6 output wave form in CCM mode
2. CONCLUSION
The voltage stress across the switch is controlled because the energy of the leakage inductor is effectively recycled through the coupled inductor. During the OFF state of the AC module, the floating switch protects the users as well as the components from residual energy and electrical hazards. A high step-up voltage gain is obtained without an excessive turns ratio or extreme duty ratio: the coupled inductor uses n = 5 and the converter a duty ratio of D = 55%. During fading sunlight the PV module harvests more energy owing to the small efficiency variation.
1. J. J. Bzura, The ac module: An overview and update on self-contained modular PV systems, in Proc. IEEE Power Eng. Soc. Gen. Meeting, Jul. 2010, pp. 1–3.
2. B. Jablonska, A. L. Kooijman-van Dijk, H. F. Kaan, M. van Leeuwen, G. T. M. de Boer, and H. H. C. de Moor, PV-PRIVÉ project at ECN, five years of experience with small-scale ac module PV systems, in Proc. 20th Eur. Photovoltaic Solar Energy Conf., Barcelona, Spain, Jun. 2005, pp. 2728–2731.
3. T. Umeno, K. Takahashi, F. Ueno, T. Inoue, and I. Oota, A new approach to low-ripple-noise switching converters on the basis of switched-capacitor converters, in Proc. IEEE Int. Symp. Circuits Syst., Jun. 1991, pp. 1077–1080.
4. B. Axelrod, Y. Berkovich, and A. Ioinovici, Transformerless dc-dc converters with a very high dc line-to-load voltage ratio, in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), 2003, vol. 3, pp. 435–438.
5. H. Chung and Y. K. Mok, Development of a switched-capacitor dc-dc boost converter with continuous input current waveform, IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 46, no. 6, pp. 756–759, Jun. 1999.
6. T. J. Liang and K. C. Tseng, Analysis of integrated boost-flyback step-up converter, IEE Proc. Electrical Power Appl., vol. 152, no. 2, pp. 217–225, Mar. 2005.
7. T. Shimizu, K. Wada, and N. Nakamura, Flyback-type single-phase utility interactive inverter with power pulsation decoupling on the dc input for an ac photovoltaic module system, IEEE Trans. Power Electron., vol. 21, no. 5, pp. 1264–1272, Jan. 2006.
8. C. Rodriguez and G. A. J. Amaratunga, Long-lifetime power inverter for photovoltaic ac modules, IEEE Trans. Ind. Electron., vol. 55, no. 7, pp. 2593–2601, Jul. 2008.
9. S. B. Kjaer, J. K. Pedersen, and F. Blaabjerg, A review of single-phase grid-connected inverters for photovoltaic modules, IEEE Trans. Ind. Appl., vol. 41, no. 5, pp. 1292–1306, Sep./Oct. 2005.
10. M. Zhu and F. L. Luo, Voltage-lift-type Cuk converters: Topology and analysis, IET Power Electron., vol. 2, no. 2, pp. 178–191, Mar. 2009.
11. J. W. Baek, M. H. Ryoo, T. J. Kim, D. W. Yoo, and J. S. Kim, High boost converter using voltage multiplier, in Proc. IEEE Ind. Electron. Soc. Conf. (IECON), 2005, pp. 567–572.
12. J. Xu, Modeling and analysis of switching dc-dc converter with coupled inductor, in Proc. IEEE 1991 Int. Conf. Circuits Syst. (CICCAS), 1991, pp. 717–720.
13. S. H. Park, S. R. Park, J. S. Yu, Y. C. Jung, and C. Y. Won, Analysis and design of a soft-switching boost converter with an HI-Bridge auxiliary resonant circuit, IEEE Trans. Power Electron., vol. 25, no. 8, pp. 2142–2149, Aug. 2010.
14. G. Yao, A. Chen, and X. He, Soft switching circuit for interleaved boost converters, IEEE Trans. Power Electron., vol. 22, no. 1, pp. 80–86, Jan. 2007.
15. Y. Park, S. Choi, W. Choi, and K. B. Lee, Soft-switched interleaved boost converters for high step-up and high power applications, IEEE Trans. Power Electron., vol. 26, no. 10, pp. 2906–2914, Oct. 2011.
16. Y. Zhao, W. Li, Y. Deng, and X. He, Analysis, design, and experimentation of an isolated ZVT boost converter with coupled inductors, IEEE Trans. Power Electron., vol. 26, no. 2, pp. 541–550, Feb. 2011.
17. T. J. Liang, S. M. Chen, L. S. Yang, J. F. Chen, and A. Ioinovici, Ultra large gain step-up switched-capacitor dc-dc converter with coupled inductor for alternative sources of energy, IEEE Trans. Circuits Syst. I, to be published.
18. L. S. Yang and T. J. Liang, Analysis and implementation of a novel bidirectional dc-dc converter, IEEE Trans. Ind. Electron., vol. 59, no. 1, pp. 422–434, Jan. 2012.
19. W. Li and X. He, Review of non-isolated high-step-up dc/dc converters in photovoltaic grid-connected applications, IEEE Trans. Ind. Electron., vol. 58, no. 4, pp. 1239–1250, Apr. 2011.
20. C. Restrepo, J. Calvente, A. Cid, A. El Aroudi, and R. Giral, A noninverting buck-boost dc-dc switching converter with high efficiency and wide bandwidth, IEEE Trans. Power Electron., vol. 26, no. 9, pp. 2490–2503, Sep. 2011.
21. K. B. Park, G. W. Moon, and M. J. Youn, Nonisolated high step-up boost converter integrated with sepic converter, IEEE Trans. Power Electron., vol. 25, no. 9, pp. 2266–2275, Sep. 2010.
22. L. S. Yang, T. J. Liang, and J. F. Chen, Transformerless dc-dc converters with high step-up voltage gain, IEEE Trans. Ind. Electron., vol. 56, no. 8, pp. 3144–3152, Aug. 2009.
23. N. Pogaku, M. Prodanovic, and T. C. Green, Modeling, analysis and testing of autonomous operation of an inverter-based microgrid, IEEE Trans. Power Electron., vol. 22, no. 2, pp. 613–625, Mar. 2007.
24. H. Mao, O. Abdel Rahman, and I. Batarseh, Zero-voltage-switching dc-dc converters with synchronous rectifiers, IEEE Trans. Power Electron., vol. 23, no. 1, pp. 369–378, Jan. 2008.
25. J. M. Kwon and B. H. Kwon, High step-up active-clamp converter with input-current doubler and output-voltage doubler for fuel cell power systems, IEEE Trans. Power Electron., vol. 24, no. 1, pp. 108–115, Jan. 2009.
26. S. Dwari and L. Parsa, An efficient high-step-up interleaved dc-dc converter with a common active clamp, IEEE Trans. Power Electron., vol. 26, no. 1, pp. 66–78, Jan. 2011.
B.ASHOK presently pursuing his M.E in power electronics & drives, in Ranganathan Engineering College, Coimbatore. His area of interest is power electronic inverters & converters.
J.MOHAN presently working as assistant professor in Ranganathan Engineering College,Coimbatore. His area of interest is power electronic converter & inverters, AC & DC drives, power quality.
Adding graphical touches to 8bomb
Project Page
Today I added graphical rocks and bomb explosions to 8bomb. These don't affect gameplay at all, but do a good job of making the game more visually impactful and interesting. I'll jump right in.
The basic idea here is to add visual rocks to the terrain panels. These won't affect gameplay in any way, but they give a sense of motion that can be missing if the terrain is all one color.
First step was to randomly place stones in each panel. I give each panel 10 chances, each with 40% chance to spawn a stone. Then each stone is given a random x and y value within the panel and a
random radius from 0 to 5. Lastly they are each given a color which is random from 4 to 7;
function createPanel() {
    let panel = [];
    let stones = [];
    for (let y = 0; y < 100; y++) {
        let row = [];
        for (let x = 0; x < 128; x++) {
            row.push(1); // fill value for solid terrain (assumed; the original fill was lost in extraction)
        }
        panel.push(row);
    }
    for (let i = 0; i < 10; i++) {
        if (Math.random() * 100 < 40) {
            stones.push({
                x: Math.random() * 128,
                y: Math.random() * panelHeight,
                r: Math.random() * 5,
                c: Math.floor(Math.random() * 3) + 4
            });
        }
    }
    panel.stones = stones;
    return panel;
}
Note that since JavaScript objects are dynamic, I can add a stones property to the panel even though it is nominally an array. This is a little weird, but works well for my purposes so I'm leaving it.
I draw the stones by pulling the color calculation functionality out of the drawTerrain function and into a centralized colorAt(x, y) function which checks if a rock is near enough and returns a
random rock color instead of the expected ground color.
export function colorAt(x, y) {
    let panelNumber = Math.floor(y / panelHeight);
    let panel = terrain[panelNumber];
    let panelY = y - (panelNumber * panelHeight);
    if (!panel) return 7;
    for (let stone of panel.stones) {
        let dx = stone.x - x;
        let dy = stone.y - panelY;
        let distance = Math.sqrt(dx * dx + dy * dy);
        if (distance < stone.r) {
            return stone.c;
        }
    }
    let color = 1;
    if (!terrainAt(x, y - 1)) {
        color -= 1;
    } else if (!terrainAt(x, y + 1)) {
        color += 1;
    }
    return color;
}
Since the stones can be at any floating point from 0 to the panel width and 0 to the panel height, the rocks tend to have slightly irregular shapes which improves the effect.
The explosions were pretty simple as well. I created a newExplosion function which takes an x and y and creates a new explosion object which contains the x and y passed in, as well as a standard explosion radius r, a color c initialized at 0, and a delay initialized to the animation speed. The new explosion gets added to a list managing the active explosions.
const startingRadius = 30;
const animationSpeed = 3;
let explosions = [];

export function newExplosion(x, y) {
    explosions.push({
        x: x,
        y: y,
        r: startingRadius,
        c: 0,
        delay: animationSpeed
    });
}
Then each frame I run a new updateExplosions function which loops over every active explosion, decrements the delay if it is greater than zero, or resets the delay and increments the color.
export function updateExplosions() {
    let remainingExplosions = [];
    for (let explosion of explosions) {
        if (explosion.delay > 0) {
            explosion.delay -= 1;
        } else {
            if (explosion.c == 7) continue; // fully faded explosions are dropped
            explosion.c += 1;
            explosion.r *= 0.8;
            explosion.delay = animationSpeed;
        }
        remainingExplosions.push(explosion);
    }
    explosions = remainingExplosions;
}
If the color is equal to 7, the explosion is dropped. Otherwise the remaining explosions become the active explosions.
Finally drawing the explosions is as simple as setting the pixels that are within r distance from the explosion center to the color c.
export function drawExplosions() {
    for (let explosion of explosions) {
        for (let x = explosion.x - explosion.r; x < explosion.x + explosion.r; x++) {
            for (let y = explosion.y - explosion.r; y < explosion.y + explosion.r; y++) {
                let dx = x - explosion.x;
                let dy = y - explosion.y;
                let dist = Math.sqrt(dx * dx + dy * dy);
                if (dist <= explosion.r) {
                    setPixel(x, y, explosion.c);
                }
            }
        }
    }
}
And that's it!
These are very simple effects, but go a long way towards improving the feel of the game. Next up I will look into implementing a game over screen.
Till tomorrow, Kaylee | {"url":"https://kaylees.dev/trio/oak/day23-rocks-and-explosions/","timestamp":"2024-11-07T09:05:07Z","content_type":"text/html","content_length":"8185","record_id":"<urn:uuid:790d5d2d-6199-4524-a86e-8cdde3e18db7>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00405.warc.gz"} |
What is a Symmetric Histogram? (Definition & Examples)
by Tutor Aspire
A histogram is a type of chart that helps us visualize the frequency of values in a dataset.
A symmetric histogram is a type of histogram that has perfectly identical halves if we were to draw a line down the center of it.
There are two common types of symmetric histograms:
• Unimodal symmetric histogram: A histogram with one peak
• Bimodal symmetric histogram: A histogram with two peaks
The following examples show what each of these histograms looks like.
Example 1: Unimodal Symmetric Histogram
The following histogram is an example of a unimodal symmetric histogram:
If we were to draw a line down the center of the histogram, both the left and right sides would look the exact same:
We refer to this as a unimodal symmetric histogram because “uni” means “one” and this histogram only has one peak directly in the middle.
Example 2: Bimodal Symmetric Histogram
The following histogram is an example of a bimodal symmetric histogram:
If we were to draw a line down the center of the histogram, both the left and right sides would look the exact same:
We refer to this as a bimodal symmetric histogram because “bi” means “two” and this histogram has two peaks.
Related: An Introduction to Bimodal Distributions
What is a Roughly Symmetric Histogram?
In the real world, there are rarely perfectly symmetrical histograms but there are often roughly symmetrical histograms.
These are histograms that are “roughly” symmetrical, meaning the two sides look roughly the same if you draw a line down the center of the histogram.
One example of this would be the distribution of the weights of newborn babies.
It’s well known that newborn weights follow a unimodal distribution with an average around 7.5 lbs.
If we create a histogram of baby weights, we’ll see a “peak” at 7.5 lbs with some babies weighing more and some weighing less:
This is a roughly symmetrical histogram. If we drew a vertical line down the center, each side would look roughly the same.
Another real-world example is the distribution of ACT scores for high school students in the U.S.
The average score is about 21 with some students scoring less and some scoring higher. If we create a histogram of ACT scores for all students in the U.S. we’ll see a single “peak” at 21 with some
students scoring higher and some scoring lower:
This is also a roughly symmetrical histogram. If we drew a vertical line down the center, each side would look roughly the same.
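To see this for yourself, here is a short illustrative Python sketch (the sample size, mean, and standard deviation are made-up values for the demonstration) that simulates newborn weights and plots a roughly symmetric, unimodal histogram:

import numpy as np
import matplotlib.pyplot as plt

# Simulate 1,000 newborn weights around an assumed mean of 7.5 lbs (sd 1.1 lbs)
rng = np.random.default_rng(0)
weights = rng.normal(loc=7.5, scale=1.1, size=1000)

# With a symmetric generating distribution, the two halves of the histogram
# around the center should look roughly the same
plt.hist(weights, bins=30, edgecolor="black")
plt.xlabel("Weight (lbs)")
plt.ylabel("Frequency")
plt.title("Roughly symmetric histogram of simulated newborn weights")
plt.show()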
When working with real-world datasets, you’ll rarely encounter perfectly symmetrical histograms but you will often encounter roughly symmetrical histograms.
Additional Resources
The following tutorials provide additional information about histograms:
How to Describe the Shape of Histograms
How to Compare Histograms
How to Estimate the Mean and Median of Any Histogram
| {"url":"https://tutoraspire.com/symmetric-histogram/","timestamp":"2024-11-02T04:23:52Z","content_type":"text/html","content_length":"351668","record_id":"<urn:uuid:30ba82ad-430f-4fb7-a9b1-e0e49160fc52>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00725.warc.gz"} |
16 Times Table Worksheet [16 Multiplication Table] Printable
16 Times Table Worksheet - The 16 times table can be helpful for basic math calculations. It can be handy to have a worksheet to help keep track of the times tables during practice. Additionally, the worksheets provide examples of how to use times tables in various situations.
16 Times Table Worksheet PDF
The use of the 16 Times Table Worksheet PDF can help children learn and remember the multiplication table. This PDF worksheet is a printable PDF that can be used at home or in a classroom. It
includes the following information: The table and its corresponding labels are organized by row and column to provide an easy-to-follow method for looking up the information.
Sixteen Times Table Worksheet
Use this 16 times table worksheet to help students in their studies. This sheet can be used as a study aid, to find solutions to equations, or for any other purpose that requires a basic
understanding of the multiplication and division of time. The worksheet can be printed out and kept in a student’s math notebook or workbook for quick reference.
Printable 16 Multiplication Table Worksheet
This Printable 16 Multiplication Table Worksheet is perfect for practising your multiplication tables. Practice your multiplication tables by multiplying the numbers in the table.
This 16 multiplication table worksheet is great for your learning and knowledge. This 16 Multiplication Table Worksheet Free Printable can be used in the classroom or at home. It can also be a
helpful tool for homework. This worksheet is a great way to practice multiplication and understanding the order of operations. It has easy-to-follow directions and is perfect for individuals learning
math or those who need review. Multiple choices are given so that you can complete the worksheet in different ways.
Looking for a worksheet that will help students learn multiplication? Check out our 16 Multiplication Chart Worksheet! This sheet can be used in classrooms, homes, or even as a fun activity to do on your own. The chart includes both standard and expanded notation, making it easy for students to understand. Additionally, the worksheet is printable so you can have it handy when needed.
Free 16 Multiplication Chart Worksheets
The benefits of understanding the concept of multiplication can be seen in 16 times table worksheets. Working out multiples is important for a number of reasons, including being able to do basic math calculations quickly and accurately, as well as gaining an understanding of the concepts behind math.
Understanding how multiplication works is an essential skill for students at all levels. By practising multiplication tables and using 16 times table worksheets, students are better equipped to solve problems and understand what they practise. This will also improve their general math skills.
The need for good performance is evident in any classroom where students are studying mathematics. This is especially true when it comes to the 16 times table. A good way to improve student math performance and retention is to provide them with a worksheet that can be printed out and used during class. The Times Table 16 Worksheet Free Printable can be downloaded and printed out for use with your students.
Multiplication 16 Times Table Worksheet PDF can be a great confidence booster for students. Memorizing multiplication tables up to 16x can help with basic math skills and also boost confidence. The
worksheet provides practice multiplying two-digit numbers together, as well as working with three-digit numbers. The sheet also has questions to check your understanding. | {"url":"https://multiplicationtable.org/16-times-table-worksheet/","timestamp":"2024-11-06T23:59:53Z","content_type":"text/html","content_length":"163775","record_id":"<urn:uuid:fe6df4a7-51d4-459f-9d46-64739a35a943>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00586.warc.gz"} |
The Stacks project
Definition 25.3.1. Let $\mathcal{C}$ be a site. Let $f = (\alpha , f_ i) : \{ U_ i\} _{i \in I} \to \{ V_ j\} _{j \in J}$ be a morphism in the category $\text{SR}(\mathcal{C})$. We say that $f$ is a
covering if for every $j \in J$ the family of morphisms $\{ U_ i \to V_ j\} _{i \in I, \alpha (i) = j}$ is a covering for the site $\mathcal{C}$. Let $X$ be an object of $\mathcal{C}$. A morphism $K
\to L$ in $\text{SR}(\mathcal{C}, X)$ is a covering if its image in $\text{SR}(\mathcal{C})$ is a covering.
Comments (3)
Comment #478 by a on
Def. 24.2.2 say $F$ associates a "sheaf", but does one mean pre sheaf?
Lemma 24.2.3 proof of part 1 says the coproduct of $\{ U_i\}$ and $\{ V_j\}$ is $\{ U_i\} \coprod \{ V_j\}$ ... what does this last expression mean? Should it be $(\coprod U_i) \coprod (\coprod
proof of part 3 of same lemma: line 222 of the code says $k = \alpha(i) = \beta(j)$ but should it be $j=\alpha(i)=\beta(k)$?
line 289: $\text{SR}(\mathcal{C}$ missing right parenthesis )
In tag 017Z the definition of coskelet says it goes from $Simp(C) \to Simp_n(C)$ but I think it should go the other way.
Comment #490 by Johan on
Thanks very much! Made the corresponding edits here.
If you'd like to be listed among the contributors, then please sign off your comments with your name.
Comment #1024 by correction_bot on
In the comment before Definition 24.2.6 (tag 01G5), perhaps note that the existence of coskeleton functors for $\text{SR}(\mathcal{C}, X)$ in case $\mathcal{C}$ has fibre products follows from
Lemma 24.2.3 (tag 01G2) and Lemma 14.17.3 (tag 0183).
Directly after Definition 24.2.6, in the sentence "Condition (2) makes sense since…" it looks like (2) should be changed to (1).
| {"url":"https://stacks.math.columbia.edu/tag/01FZ","timestamp":"2024-11-12T12:18:18Z","content_type":"text/html","content_length":"27917","record_id":"<urn:uuid:344072d4-61a7-45da-944e-8079a3913d90>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00772.warc.gz"} |
A Threshold Network for “Human Keys” to solve privacy and custody issues
In blockchain and PKI more generally, people are represented by keys. A somewhat strange question to ask might be “why don’t keys represent people?” I will argue this is actually an important
question and the crux of major privacy and onboarding challenges. We present a threshold network design dubbed Mishti Network to derive keys from people rather than arbitrary randomness. This
network solves a number of problems in ZK identity, compliance, and onboarding.
What does it mean for a key to be a representation of a person? There are two conditions that should be met:
• A person’s knowledge and/or attributes can always map to the private key
• This person is the sole controller of the key
In other words, it is a collision-resistant map of personal data and attributes to a high-entropy pseudorandom number. Without collision resistance, multiple people could have the same key. Without
high entropy, the key is not secure. Keys can be both standard private keys or also a nullifier that’s useful for secure ZK credentials.
Human keys are not solely biometrics. They could be from human-friendly data such as security questions, passwords, or any unique knowledge belonging to an individual rather than arbitrary randomness.
Solution: Oblivious Pseudorandom Function
This solution is based on a threshold verifiable oblivious pseudorandom function (tVOPRF) on private data. An oblivious pseudorandom function (OPRF) takes a private input and computes a pseudorandom
function (PRF). PRFs take low-entropy input and create high-entropy output. Adding verifiability via a ZKP makes it into a VOPRF. Verifying individual node contributions is important to
decentralizing the network.
Why it is helpful to Ethereum + PKI
Some of the outstanding issues in Ethereum are onboarding and privacy. Onboarding requires not just simplicity but also self-custody, and recovery. Current onboarding solutions such as social logins
and passkeys do not have self-custody (as they can be recovered by web2 accounts), while self-custodial solutions can't have recovery without an extra onboarding step like electing guardians.
A similar need is for ZK identity applications that need to derive nullifiers from their users’ identities, in a way nobody can trace back to the user. This is a common need in proof-of-personhood
solutions to ensure that each person only has one corresponding nullifier without a central database or key that links users to their nullifiers.
Furthermore, the underlying cryptography and network can be repurposed to tackle another pressing challenge: that of satisfying compliance rules with ZK identity. The same underlying elliptic curve
multiplication primitive that underlies this design can be used to construct threshold ElGamal decryption over ZK-friendly curves, which can allow ZK proofs to contain encrypted data with flexible
access control.
Oblivious Pseudorandom Function
To generate keys from identities, an oblivious pseudorandom function (OPRF) can be constructed with distributed EC scalar multiplication. This allows private user data such as security questions,
biometrics, passwords, or social security numbers, etc. to deterministically generate secret keys. The resulting pseudorandom value is computationally impractical to reverse despite it being from
low-entropy input. One can thereby create a wallet or nullifier from any (or a combination) of these low-entropy "human" factors. In the 2HashDH OPRF [1], a server or network's secret is used to give randomness to the client's input. The oblivious property prevents any server or set of nodes from seeing this input.
2HashDH is the following algorithm between a user with a private input x and a server (or network) with a private key s. For a subgroup G of an elliptic curve there are two hash functions:
hashToCurve: \{0,1\}^* \rightarrow G
hashToScalar: G \rightarrow F_q.
The 2HashDH OPRF proceeds as follows
1. User samples a random mask r and sends M = r * hashToCurve(x)
2. Server multiplies by its secret, returning s * M
3. User computes the output by unmasking the server’s response and hashing it: o = HashToScalar(r^{-1} * s * M)
o is uniformly pseudorandom in F_q, and the server is information-theoretically blinded from the user’s input.
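To make the blinding and unblinding mechanics concrete, here is a toy Python sketch of the same three steps. It is illustrative only: it substitutes a tiny Schnorr-style subgroup of integers modulo a prime for the elliptic curve, and simple SHA-256 constructions for hashToCurve and hashToScalar, so it carries the algebra but none of the security.

import hashlib
import secrets

# Toy group: p = 2q + 1, and we work in the order-q subgroup of squares mod p
q = 1019
p = 2 * q + 1

def hash_to_group(data: bytes) -> int:
    # Stand-in for hashToCurve: map the input to a subgroup element (a square mod p)
    t = int.from_bytes(hashlib.sha256(data).digest(), "big") % p
    return pow(t, 2, p)

def hash_to_scalar(elem: int) -> int:
    # Stand-in for hashToScalar: map a group element to F_q
    return int.from_bytes(hashlib.sha256(str(elem).encode()).digest(), "big") % q

s = secrets.randbelow(q - 1) + 1          # server/network secret (threshold-shared in practice)

def client_blind(x: bytes):
    r = secrets.randbelow(q - 1) + 1
    M = pow(hash_to_group(x), r, p)       # step 1: send the masked element M
    return r, M

def server_evaluate(M: int) -> int:
    return pow(M, s, p)                   # step 2: apply the secret (the "s * M" step)

def client_unblind(r: int, response: int) -> int:
    r_inv = pow(r, -1, q)                 # undo the mask in the exponent group of order q
    return hash_to_scalar(pow(response, r_inv, p))   # step 3: unmask and hash

r, M = client_blind(b"my security answers")
o = client_unblind(r, server_evaluate(M))
# o is the OPRF output; the server only ever saw the masked value M, never x.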
Decentralizing the server
To decentralize the OPRF server, only the step with a server must be decentralized:
2. Server multiplies by its secret, returning s * M
For threshold elliptic curve multiplication, first a linear secret sharing, such as Shamir’s scheme, must be used. The secret key is generated through distributed key generation (DKG) such that each
node with index i receives share f(i) for some secret polynomial f known to nobody. There is no node at the 0 index and f(0) is the secret key of the network. The secret key f(0) can be computed by a
set Q of t nodes where t is one more than the degree of f.
f(0) = \sum_{i \in Q}{L_{0, Q}(i)*f(i)}
where L_{0,Q}(i) is the Lagrange basis for index i in set Q evaluated at zero.
Instead of reconstructing f(0), the nodes can collaborate to construct f(0) * M
f(0) * M = \sum_{i \in Q}{L_{0, Q}(i)*f(i) * M}
This is sufficient for step
2. Server multiplies by its secret, returning s * M
if the nodes are honest. But if one lies, the result will be wrong and there will be no way of knowing who lied. Thus, each node should prove their individual multiplication using a lightweight
zero-knowledge DLEQ proof.
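The recombination identity above is easy to check in a few lines of Python (an illustration with arbitrary toy parameters, not the network's code): any t shares, weighted by the Lagrange basis evaluated at zero, sum to f(0), and therefore their point-multiplied contributions sum to f(0) * M.

from random import randrange

q = 2**61 - 1                      # a prime field for the shares (toy choice)
t, n = 3, 5                        # threshold and number of nodes

def lagrange_at_zero(i, Q):
    # L_{0,Q}(i): the Lagrange basis for index i over the set Q, evaluated at 0
    num, den = 1, 1
    for j in Q:
        if j != i:
            num = (num * -j) % q
            den = (den * (i - j)) % q
    return (num * pow(den, -1, q)) % q

coeffs = [randrange(q) for _ in range(t)]     # secret polynomial f; coeffs[0] = f(0)

def f(x):
    return sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q

shares = {i: f(i) for i in range(1, n + 1)}   # node i holds f(i) after the DKG

Q = [1, 3, 5]                                  # any t nodes
recombined = sum(lagrange_at_zero(i, Q) * shares[i] for i in Q) % q
assert recombined == coeffs[0]                 # equals f(0)
# In the OPRF, each node instead returns L_{0,Q}(i) * f(i) * M on the curve, and the
# client simply adds the returned points; f(0) itself is never reconstructed anywhere.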
Other interesting use case: Provable encryption with programmable privacy
The same decentralized EC scalar primitive can be used not just for VOPRF but also for ElGamal decryption over ZK-friendly curves. This is helpful when identities must be revealed in certain circumstances.
For example, many private DeFi protocols are interested in ensuring that bad actors do not get the benefits of anonymity, while the average user typically does. Governments are not satisfied with
solely ZK because they need access to user data, but currently the only alternative is honeypots where all user data is stored to be turned over to authorities if needed.
Another use of revealing provably encrypted identities under certain conditions is undercollateralized lending – what if you want an identity or private key to be revealed if a DeFi loan is defaulted
on? In this case, you need to prove the proper data is encrypted correctly, then have a smart contract control decryption rights.
To modify this threshold EC point multiplication to such use cases, little is needed.
ElGamal encryption is client-side:
1. Create an ephemeral keypair (a, A = aG)
2. Encode the message as an EC point P
3. Compute Diffie-Hellman shared secret with network public key: aB
4. Compute the ciphertext (A, aB+P)
Unlike encryption, decryption requires a server or decentralized network.
1. Server/network multiply ephemeral public key A by its secret key b to get bA = aB
2. Decryptor subtracts this value from aB+P to get P
The server/network’s step can be handled by the same threshold multiplication protocol as before!
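For the algebra only, here is a compact illustrative Python sketch of that ElGamal flow, written additively over integers mod a prime (a stand-in with no security, chosen so that "scalar times point" is just modular multiplication):

import secrets

q = 1019                         # toy group order
G = 5                            # toy "generator"

def mul(k, P):                   # stand-in for EC scalar multiplication
    return (k * P) % q

b = secrets.randbelow(q - 1) + 1 # network secret key (threshold-shared in practice)
B = mul(b, G)                    # network public key

def encrypt(P_msg):
    a = secrets.randbelow(q - 1) + 1      # ephemeral key
    A = mul(a, G)
    return A, (mul(a, B) + P_msg) % q     # ciphertext (A, aB + P)

def decrypt(A, C):
    shared = mul(b, A)                    # the step the threshold network performs: b * A = aB
    return (C - shared) % q               # subtract to recover P

A, C = encrypt(123)
assert decrypt(A, C) == 123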
Network Setup and Collusion Protection
The team at Holonym has implemented this as an AVS on Eigenlayer called Mishti Network. High reputation is common among Eigenlayer operators despite the permissionless nature, so it is ideal for
threshold networks where collusion is a concern. To further mitigate collusion risk, there is the idea of parallel networks:
The asynchronous and homomorphic nature of the computations means users can permissionlessly add nodes outside of Mishti Network that they trust to not collude with Mishti Network. E.g. instead of
splitting a secret among Mishti Network alone, half of the secret can sit with Mishti Network and the other half with a semi-trusted node elected by the user. Since the whole network just does an EC multiplication, exactly what its individual nodes do, nodes and networks can be treated the same. A 2/2 scheme could be done between a semi-trusted node and Mishti Network, simply by
• Adding their public keys to get the joint public key
• Adding their responses to get a joint response to the computation
Note this requires no consent from the network and is not limited to 2/2 schemes; it can be done with any combination of semi-trusted nodes and/or independent networks via threshold schemes.
[1] S. Jarecki, A. Kiayias, and H. Krawczyk, “Round-optimal
password-protected secret sharing and T-PAKE in the password only model,” in International Conference on the Theory and Application of Cryptology and Information Security. Springer, 2014 pp. 233–253
Concluding Notes
If you have any ideas on how to improve or elaborate on this network design for either ZK identity, self-custody, or any other relevant use cases, please reply or reach out.
Linking private keys to users in a reliable way is definitely a hard problem, and I like how your solution just relies on one EC operation without additional fuss.
It seems like the challenge with this kind of design is that the threshold secret has to be persistent across long periods time. This is very different from most MPC/TSS systems in which an operation
is performed once. In particular:
• No rotation is possible, as the threshold secret must be persistent for nullifiers/private keys to stay fixed. It’s possible to add nodes but not to remove them. This means that if a node wants
to leave the network, either the threshold secret stops being recoverable, or new shares are issued and now a new share is in the wild, with no incentive to prevent its leakage.
• Secret share leakage is not detectable. This can be partially mitigated by letting someone obtaining the secret share of a node slash it and getting part of the stake, but that assumes that the
stakes are high enough and the colluder is money-driven, i.e. not a state actor for instance.
I’m curious of possible mitigations because this is definitely something we’re looking for in the context of nullifiers for OpenPassport.
3 Likes
Thanks, glad to hear this could be useful to your nullifier scheme at OpenPassport! To answer your questions:
It is possible to add and remove nodes via a resharing protocol! We designed a resharing protocol to be run at each epoch, upon which active nodes and nodes waiting to join can form the new n. All
old shares are invalidated at each epoch, and the new shares are shares of the same private key. In this protocol, similar to the DKG, new n and t values are chosen, and corresponding new shares (k_i
, K_i) are chosen for the i nodes in a set Q, for a new epoch e+1.
The bad news: the economic strategy actually doesn’t work because of resharing – for any secret sharing protocol where “standard” resharing exists, incl. Shamir’s, t nodes can just run a resharing
protocol to get new shares which aren’t linked to a particular node. They can even frame nodes though, since by knowing the polynomial they can derive any node’s keyshare and thus “frame” innocent
nodes for the collusion!
The good news: there are non-economic ways of protecting against collusion. My favorite is the idea of a semi-trusted node or a paranet. Say you have two parallel Mishti networks with keys k_1 and
k_2 respectively. Recall their goal is just multiplying their key by an input point.
You request to both networks with input point P and receive (k_1*P, k_2*P).
Even though the networks have not communicated, you can treat their output as if it came from a single network with public key K_1 + K_2 by adding them to get (k_1 + k_2) * P.
By treating both independent networks as a joint network, both must be corrupt and collude with each other. When one network is a single node, you have a case we are calling a “semi-trusted node.” It
cannot see any secret but is trusted to not collude. Even if the decentralized network colludes, as long as this node doesn’t the collusion can’t do damage. Instead of a single node it could instead
be a collection of credibly neutral organizations, like how drand is set up.
Now there are other ways too of preventing collusion, like enclaves, but I like this more.
3 Likes | {"url":"https://ethresear.ch/t/a-threshold-network-for-human-keys-to-solve-privacy-and-custody-issues/20276?ref=blog.silk.sc","timestamp":"2024-11-10T22:13:18Z","content_type":"text/html","content_length":"35071","record_id":"<urn:uuid:13c25e9e-ab38-405e-ab24-61c4346c895d>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00403.warc.gz"} |
In Geometry, a square is a two-dimensional plane figure with four equal sides, and all four angles are equal to 90 degrees. The properties of a rectangle are somewhat similar to those of a square, but the difference between the two is that a rectangle has only its opposite sides equal. Therefore, a rectangle is a square only if all four of its sides are of equal length.
• Number of sides = 4
• Number of vertices = 4
• Area = Side^2
• Perimeter = 4(Side)
The other properties of the square such as area and perimeter also differ from that of a rectangle. Let us learn here in detail, what is a square and its properties along with solved examples.
Square is a regular quadrilateral, which has all the four sides of equal length and all four angles are also equal. The angles of the square are at right-angle or equal to 90-degrees. Also, the
diagonals of the square are equal and bisect each other at 90 degrees.
A square can also be defined as a rectangle where two adjacent sides have equal length.
The above figure represents a square where all the sides are equal and each angle equals 90 degrees.
Just like a rectangle, we can also consider a rhombus (which is also a convex quadrilateral and has all four sides equal), as a square, if it has a right vertex angle.
In the same way, a parallelogram with two adjacent sides equal and one right vertex angle is a square.
Shape of Square
A square is a four-sided polygon which has all its sides equal in length, and each of its angles measures 90 degrees. The shape of the square is such that if it is cut by a plane through its center, the two halves are symmetrical. Each half of the square then looks like a rectangle with opposite sides equal.
Properties of a Square
The most important properties of a square are listed below:
• All four interior angles are equal to 90°
• All four sides of the square are congruent or equal to each other
• The opposite sides of the square are parallel to each other
• The diagonals of the square bisect each other at 90°
• The two diagonals of the square are equal to each other
• The square has 4 vertices and 4 sides
• The diagonal of the square divide it into two similar isosceles triangles
• The length of diagonals is greater than the sides of the square
Area and Perimeter of Square
The area and perimeter are two main properties that define a square as a square. Let us learn them one by one:
Area of the square is the region covered by it in a two-dimensional plane. The area here is equal to the square of the sides or side squared. It is measured in square unit.
Area = side^2 per square unit
If ‘a’ is the length of the side of square, then;
Area = a^2 sq.unit
Also, learn to find Area Of Square Using Diagonals.
The perimeter of the square is equal to the sum of all its four sides. The unit of the perimeter remains the same as that of side-length of square.
Perimeter = Side + Side + Side + Side = 4 Side
Perimeter = 4 × side of the square
If ‘a’ is the length of side of square, then perimeter is:
Perimeter = 4a unit
Length of Diagonal of Square
The length of each diagonal of the square is equal to s√2, where s is the side of the square. As we know, the two diagonals are equal in length. By the Pythagoras theorem, the diagonal is the hypotenuse of the right triangle formed by the diagonal and two sides of the square, which act as the base and the perpendicular.
Since, Hypotenuse^2 = Base^2 + Perpendicular^2
Hence, Diagonal^2 = Side^2 + Side^2
Diagonal = \(\sqrt{2side^2}\)
d = s√2
Where d is the length of the diagonal of a square and s is the side of the square.
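As a quick illustration, a few lines of Python compute all three measures for a given side length (the side value below is just an example):

import math

def square_properties(side):
    # area, perimeter and diagonal of a square with the given side length
    area = side ** 2
    perimeter = 4 * side
    diagonal = side * math.sqrt(2)
    return area, perimeter, diagonal

print(square_properties(6))   # (36, 24, about 8.49) - matches Problem 1 below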
Diagonal of square
Diagonal of square is a line segment that connects two opposite vertices of the square. As we have four vertices of a square, thus we can have two diagonals within a square. Diagonals of the square
are always greater than its sides.
Below given are some important relation of diagonal of a square and other terms related to the square.
Relation between Diagonal ‘d’ and side ‘a’ of a square \(d = a \sqrt{2}\)
Relation between Diagonal ‘d’ and Area ‘A’ of a Square- \(d = \sqrt{2A}\)
Relation between Diagonal ‘d’ and Perimeter ‘P’ of a Square- \(d = \frac{P}{2 \sqrt {2}}\)
Relation between Diagonal ‘d’ and Circumradius ‘R’ of a square: d = 2R
Relation between Diagonal ‘d’ and diameter of the Circumcircle \(d = D_{c}\)
Relation between Diagonal ‘d’ and In-radius (r) of a circle- \(d = 2\sqrt {2}r\)
Relation between Diagonal ‘d’ and diameter of the In-circle \(d = \sqrt {2}D_{i}\)
Relation between diagonal and length of the segment l- \(d = l \frac{2\sqrt {10}}{5}\)
Solved Examples
Problem 1: Let a square have side equal to 6 cm. Find out its area, perimeter and length of diagonal.
Solution: Given, side of the square, s = 6 cm
Area of the square = s^2 = 6^2 = 36 cm^2
Perimeter of the square = 4 × s = 4 × 6 cm = 24cm
Length of the diagonal of square = s√2 = 6 × 1.414 = 8.484
Problem 2: If the area of the square is 16 sq.cm., then what is the length of its sides. Also find the perimeter of square.
Solution: Given, Area of square = 16 sq.cm.
As we know,
area of square =side^2
Therefore, by substituting the value of area, we get;
16 = side^2
side = √16 = √(4×4) = 4 cm
Hence, the length of the side of square is 4 cm.
Now, the perimeter of square is:
P = 4 x side = 4 x 4 = 16 cm.
Frequently Asked Questions – FAQs
What is the shape of a square?
A square is a four-sided polygon whose sides are all equal in length and whose opposite sides are parallel to each other. Also, each vertex of the square has an angle equal to 90 degrees.
How is a square different from a rectangle?
A square has all its sides equal in length whereas a rectangle has only its opposite sides equal in length.
What is the area and perimeter of a square?
The area of square is the region occupied by it in a two-dimensional space. It is equal to square of its sides.
Area = side^2
Perimeter of a square is equal to sum of all its sides.
Perimeter = 4 x side.
Is square a polygon?
Square is a four-sided polygon, which has all its sides equal in length. It is also a type of quadrilateral.
What are the examples of square?
There are many examples of square shape in real-life such as a square plot or field, a square-shaped ground, square-shaped table cloth, the tiles of the floor in square shape, etc. | {"url":"https://mathlake.com/Square","timestamp":"2024-11-13T08:31:27Z","content_type":"text/html","content_length":"19020","record_id":"<urn:uuid:acefb4ba-5e54-4fc5-83e2-1d22af5c5d45>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00030.warc.gz"} |
Bounding and computing obstacle numbers of graphs
An obstacle representation of a graph G consists of a set of pairwise disjoint simply-connected closed regions and a one-to-one mapping of the vertices of G to points such that two vertices are
adjacent in G if and only if the line segment connecting the two corresponding points does not intersect any obstacle. The obstacle number of a graph is the smallest number of obstacles in an
obstacle representation of the graph in the plane such that all obstacles are simple polygons. It is known that the obstacle number of each n-vertex graph is O(n log n) [Balko, Cibulka, and Valtr,
2018] and that there are n-vertex graphs whose obstacle number is Ω(n/(loglog n)^2) [Dujmović and Morin, 2015]. We improve this lower bound to Ω(n/loglog n) for simple polygons and to Ω(n) for convex
polygons. To obtain these stronger bounds, we improve known estimates on the number of n-vertex graphs with bounded obstacle number, solving a conjecture by Dujmović and Morin. We also show that if
the drawing of some n-vertex graph is given as part of the input, then for some drawings Ω(n^2) obstacles are required to turn them into an obstacle representation of the graph. Our bounds are
asymptotically tight in several instances. We complement these combinatorial bounds by two complexity results. First, we show that computing the obstacle number of a graph G is fixed-parameter
tractable in the vertex cover number of G. Second, we show that, given a graph G and a simple polygon P, it is NP-hard to decide whether G admits an obstacle representation using P as the only | {"url":"https://deepai.org/publication/bounding-and-computing-obstacle-numbers-of-graphs","timestamp":"2024-11-11T21:46:37Z","content_type":"text/html","content_length":"155998","record_id":"<urn:uuid:18a11322-48f2-48ae-affd-b87f6f6646e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00802.warc.gz"} |
What are some practical examples of calculations we can do in Python?
Python can perform basic arithmetic operations like addition, subtraction, multiplication, and division using simple operators.
For example, `2 + 2` gives `4`, and `10 / 2` results in `5.0`, illustrating how straightforward numerical computations can be.
Python's math module allows for the calculation of more complex mathematical functions such as square root and trigonometric functions.
Using `math.sqrt(16)` produces `4.0`, and `math.sin(math.pi / 2)` evaluates to `1.0`, showcasing Python's functionality beyond basic arithmetic.
You can easily convert between units in Python using custom functions.
For instance, converting temperature from Celsius to Fahrenheit can be done with the formula `(C * 9/5) + 32`.
Using a function like `def celsius_to_fahrenheit(c): return (c * 9/5) + 32` allows for flexible calculations.
List comprehensions in Python enable quick calculations across a collection of numbers.
For example, `[x ** 2 for x in range(5)]` generates a list of squares from 0 to 4, yielding `[0, 1, 4, 9, 16]`, which highlights Python’s power in handling sequences efficiently.
Matrix operations can be performed using the numpy library.
For example, multiplying two matrices can be represented as `numpy.dot(A, B)` where `A` and `B` are numpy arrays, allowing engineers and scientists to work with complex numerical data more efficiently.
Python supports the calculation of statistical functions, such as mean, median, and standard deviation using libraries like statistics and numpy.
For instance, calling `numpy.mean([1, 2, 3, 4])` results in `2.5`, demonstrating how Python is heavily used in data analysis.
Python enables the solving of equations symbolically using the sympy library.
For example, `from sympy import symbols, Eq, solve; x = symbols('x'); eq = Eq(x + 2, 5); solve(eq)` returns `[3]`, showcasing its use in algebraic problem-solving.
Python can be used for calculus operations, such as differentiation and integration, using the sympy library.
For instance, `from sympy import symbols, diff; x = symbols('x'); diff(x**2, x)` computes the derivative of \(x^2\) with respect to \(x\), yielding `2*x`.
Financial calculations, like compound interest, can be easily programmed in Python.
The formula for compound interest `A = P(1 + r/n)^(nt)` can be applied through a function that calculates the future value, making Python useful in economic modeling.
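For instance, a short illustrative helper (the rate, compounding frequency, and horizon below are made-up example values):

def compound_interest(principal, annual_rate, periods_per_year, years):
    # Future value A = P * (1 + r/n) ** (n * t)
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# 1,000 invested at 5% compounded monthly for 10 years
print(round(compound_interest(1000, 0.05, 12, 10), 2))   # -> 1647.01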
Python supports data visualization, allowing for graphical representation of calculations.
Using matplotlib, plotting a function such as `y = x ** 2` can be done with `import matplotlib.pyplot as plt; plt.plot(x_values, y_values); plt.show()`, illustrating function growth visually.
Python can be used for simulations, utilizing random number libraries to model complex systems.
The `random` module allows for the generation of random variables, enabling users to conduct Monte Carlo simulations and analyze probabilistic scenarios.
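As a minimal illustration, the classic Monte Carlo estimate of pi only needs the `random` module:

import random

def estimate_pi(samples=100_000):
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1:        # the point falls inside the quarter circle
            inside += 1
    return 4 * inside / samples

print(estimate_pi())   # roughly 3.14, varying a little from run to run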
For optimization problems, Python provides libraries like scipy.optimize that help in minimizing or maximizing functions.
Using `from scipy.optimize import minimize; result = minimize(objective_function, initial_guess)` demonstrates its application in engineering and research for efficient solutions.
Python's support for handling large datasets is exemplified by the pandas library, which allows for powerful data manipulation and analysis.
You can calculate the mean of a dataset using `df['column_name'].mean()`, making it essential in data science and analytics.
The use of lambda functions for quick calculations in Python allows for inline execution of small functions.
For example, `multiply = lambda x, y: x * y; multiply(5, 2)` evaluates to `10` for simple expressions quickly.
Python's capability for web scraping and data extraction can be applied to gather and analyze large sets of data from the internet.
Libraries like BeautifulSoup and requests enable users to programmatically extract and process information, turning it into actionable insights.
Python can automate repetitive tasks, such as sending emails or organizing files, through scripts which can execute a series of calculations or processes, thereby saving time in mundane activities.
This is particularly useful in engineering project management.
Using Python for machine learning requires mathematical foundations such as linear algebra, statistics, and probability.
You can implement algorithms like regression or classification via libraries such as scikit-learn, illustrating Python's flexibility in handling predictive analytics.
Python has tools for parsing and processing XML and JSON data formats, vital for working with web APIs.
For example, converting JSON data into usable structures can be achieved using `json.loads(data)`, which enables seamless interaction with web services.
The principle of recursion can be demonstrated in Python, where a function calls itself to solve complex problems.
This is evident in calculating factorials, where `def factorial(n): return 1 if n <= 1 else n * factorial(n - 1)` (note the base case, without which the recursion would never terminate) can replace an explicit loop.
Advanced calculations, such as those required in physical simulations, can employ libraries like NumPy and SciPy to solve differential equations efficiently.
For example, using `scipy.integrate.odeint` allows for integrating ordinary differential equations using numerical methods, which is critical in engineering and physics.
| {"url":"https://ai-videoupscale.com/knowledge/what_are_some_practical_examples_of_calculations_we_can_do_in_python.php","timestamp":"2024-11-12T08:45:37Z","content_type":"text/html","content_length":"31062","record_id":"<urn:uuid:7dd88946-ef5f-431c-b488-9900f1dfdf45>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00288.warc.gz"} |
C3STREAM Land Designs
For the first few weeks I decided to teach and learn with the 5th graders at Isai Ambalam middle school; in class we worked out many logic puzzles. The class was at the stage of working out multiplication and division. Speed, distance and time, and how they interrelate to form stories of multiplication and division, were introduced…
e.g.: if a car travels at a speed of 25 km/hr, what will be the distance covered by the car in 4 hours?
The students multiplied and came up with a solution, by saying that the car would cover a distance of 100kms. The same story was made into division
e.g.: if a car covers 100km in 4hours, what is the speed of the car? The speed of the car is 25km/hr
The students were able to make out the stories. As we got into creating many different stories, I started to realize that not all students understood what they were making up; some got confused or did not understand the units.
I asked the students how far their home was from school. Well, there were many loud answers, but each answer terminated with a unit of distance (the lesson learnt was to never ask a general question to the whole class, or else expect a chorus answer; if you don't want that to happen, be more specific and select a random child to question). They all knew that time was in hours, but the units of speed and distance always seemed to confuse them until they came to realize that speed is the amount of distance covered over a certain period of time…
The real reason was that the majority of the students did not understand English very well; being bilingual is very, very important. But once the conversation starts in Tamil (the mother tongue), it is easy to forget to consciously switch back to English…
By the time these few weeks had passed the relationship with the class had grown stronger, and I came to realize that there were some four students who seriously did not know what they were doing in mathematics class… Every class has a few slow learners, but having students who could not multiply, or could only add by using their fingers, seemed a little bit odd. I decided to take these four members of the classroom into consideration.
The five of us would sit together and start to solve some multiplication problems; after a while they seemed to get a handle on the multiplication… but the problem showed up in the addition part after multiplying two two-digit numbers… Simple additions required finger counting; they could not do mental calculations. If asked what is five plus six, they would take five fingers and then start to count six fingers; seeing that one was left after adding five plus five, and that the answer was 11, was not that obvious.
To create a change the abacus was introduced, and still it was a little hard to see the fives, as all ten beads in a row were the same colour. Then the abacus was taken apart and every five beads were altered so as to see the simple pattern of fives. After this, the pattern was seen that when one adds five to seven, the remaining number of beads is two and the answer is twelve. To have broken the pattern of finger calculation which they were stuck with for all these years felt awesomely superb. But after a few days with the abacus, rigorous training was difficult and the students seemed to be getting a little tired of bead counting…
That is when introducing Scratch programming seemed to ignite a little spark; as a team we came together and built a small script that added two-digit numbers. The children were far more excited and eager to solve the sums in Scratch than before with the abacus. Now doing mental calculations seemed easier than before.
Scratch stories with Udavi 8th graders
Scratch stories with Udavi 8^th graders
The 8th graders at Udavi had made stories on the theme 'if I had wings', and they wanted to depict their stories with Scratch. The time offered was four English class hours that week, and so the fun began… On the first day all the students had worked through an idea of what they wanted to depict with Scratch programming, and when asked to pair up as a team and do their work, the usual boy-boy and girl-girl teams appeared. To make things more collaborative and interesting we mixed up the pairs into boy-girl, and asked on what aspects and criteria they were willing to evaluate the work. They came up with the following:
-Using Proper Language
The class was just so amazing; all the children were totally focused on their work as a team and there was no cross talk between teams.
The students used the Internet to get their characters and backgrounds. GIMP was introduced to them as an editing tool.
On the 2nd and 3rd day the children continued working on their scripts. They were also engaged in giving feedback on the mentioned aspects. Towards the end of the last assigned day it was decided to merge all the projects into a single video. The students coordinated amongst themselves, went along to other teams that were still engaged in their scripting, and started to explain the process of merging files.
Incremental Backup for Ubuntu
The command :
tar -czv -X excludeList -g 20150407.snar -f 20150407-full.tar.gz ~
tar -czv – create (-c) a gzip-compressed (-z) archive with verbose output (-v).
-X excludeList – read from the file excludeList the paths/patterns that need not be backed up.
-g 20150407.snar – the snapshot file GNU tar uses for listed-incremental backups; this is what makes the backup incremental.
-f 20150407-full.tar.gz ~ – write the archive to this .tar.gz file (which can be extracted later), backing up the home directory (~).
• At first do a full blind backup of all your files.
• Then for the next backup, create a new .snar file and copy the old .snar to the new one.
Calculating ESL of a Capacitor
Impedance of Various 100μF Capacitors :
• The figure tells us that the impedance of a capacitor will decrease monotonically as frequency is increased.
• In actual practice, the ESR causes the impedance plot to flatten out.
• As we continue up in frequency, the impedance will start to rise due to the ESL of the capacitor.
• The location and width of the “knee” will vary with capacitor construction, dielectric and value.
• This is why we often see larger value capacitors paralleled with smaller values. The smaller value capacitor will typically have lower ESL and continue to "look" like a capacitor higher in frequency.
• This extends the overall performance of the parallel combination over a wider frequency range.
Reference : From Analog Devices Tutorial
Frequency Characteristics of a 0.1 uf Capacitor :
The impedance matches the ESR at around 20 MHz.
ESL Analysis :
Frequency = 20 Mhz
Capacitance = 0.1 uF
From the frequency equation , ESL = 0.63 nH
Frequency Characteristics of a 1 uf Capacitor :
The impedance matches the ESR at around 8 MHz.
ESL Analysis :
Frequency = 8 Mhz
Capacitance = 1 uF
From the frequency equation , ESL = 0.39 nH
Frequency Characteristics of a 10uf Capacitor :
The impedance matches the ESR at around 2 MHz.
ESL Analysis :
Frequency = 2 Mhz
Capacitance = 10 uF
From the frequency equation , ESL = 0.63 nH
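Working backwards from the self-resonance relation f = 1 / (2π·√(ESL·C)), i.e. ESL = 1 / ((2πf)²·C) (this appears to be the "frequency equation" used above, since it reproduces all three values), a few lines of Python confirm the numbers:

import math

def esl_from_resonance(f_hz, c_farads):
    # ESL implied by the self-resonant frequency: ESL = 1 / ((2*pi*f)^2 * C)
    return 1 / ((2 * math.pi * f_hz) ** 2 * c_farads)

for f, c in [(20e6, 0.1e-6), (8e6, 1e-6), (2e6, 10e-6)]:
    print(f"{c * 1e6:g} uF at {f / 1e6:g} MHz -> {esl_from_resonance(f, c) * 1e9:.2f} nH")
# prints roughly 0.63 nH, 0.40 nH and 0.63 nH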
Capacitor Graph Reference : Datasheets from Digikey
Let's analyse the following circuit.
This circuit consists of two nodes, named v1 and v2.
Consider the input voltage to be vin.
Let's say the current 'i' is passing through the node v2.
Then the voltage at the node v2 is given by
Voltage at the node V1 is given by
Then the input voltage vin is given by
Substituting the value of V1 in equation 3 we get,
(Equation 4)
Then the current ‘i’ passing through the node V1 is,
(Equation 5)
The transfer function (by substituting the value of ‘i’ in equations 1 & 2)
Circuit Analysis
Circuit Analysis pdf
let us take the above circuit into consideration, with three nodes namely:
and i is the current passing through them.
1. Considering node v2 :
2. Now at node v1 :
3. To determine vin :
4. The current i can written as :
5. v1 and v2 in terms of vin
Stewardship for New Emergence
To me, a workshop simply meant learning new technologies and technical tools (as I had only attended technical workshops). Stewardship for a New Emergence (by Monica Sharma) was totally new for me and gave me a different experience.
Sometimes I failed to listen to others due to background conversations in my head. These conversations created misunderstandings in communication. But I never thought of how I could let them go; I didn't even notice this. This workshop gave me a way to think about that. Yes, the primary step to solve the problem is to start noticing these conversations and making a conscious choice to let them go.
Before attending the workshop, I thought someone would come and give lectures. But what I experienced was something higher. They taught the tools as well as created the environment to practice them with a peer (co-participant).
In most cases I used to avoid pinpointing the mistakes in someone's work even though it is helpful for them to grow, because I believed that I am not good enough to give feedback. This workshop gave a procedure for giving feedback that helps others grow. It also helped me to see the commitments behind the complaints of others, and I learnt that growth happens beyond the comfort zone.
"The world is extraordinary and filled with many opportunities. It's all about our perspective of seeing the world. So stand up and open your window to get to where you wish to reach, in spite of the difficulties that may surround you. And remember, confusion and mistakes are the birthplace of knowledge and perfection. In this world no one has the power to make me feel bad without my permission. If I am bound up in emotion, it will reduce my energy and not allow me to take further action." These are a few of the things I absorbed intensely at the workshop and am planning to practice in my life.
Most of the things which I learnt in the workshop are not completely new to me or to anyone. But the thing is, it stimulated me to think about them consciously, which I never did in the past.
Powering up an LED
PoweringUpLED pdf
Powering up an LED
ledcalc.com ; A useful tool to determine the value of resistors to be used in the circuit.
V = I × R (Ohm’s law)
supply voltage = 5V
resistor used = 2 x 33ohm resistor connected in parallel = 16.5 ohm
to determine current: I = V / R
5 / 16.5 ≈ 0.3 A (about 300 mA)
(Power supply voltage − LED voltage) / current (in amps) = desired resistor value (in ohms)
To calculate the amount of power that the resistor will dissipate;
Power Rule: P = (I × V ) W
If a current I flows through a given element in your circuit, losing voltage V in the process, then the power dissipated by that circuit element is the product of that current and voltage: P = I × V.
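A small Python helper ties the two formulas together (the 2 V forward voltage and 20 mA current below are assumed example values, not figures from this post):

def led_resistor(supply_v, led_v, current_a):
    # series resistor value and the power it must dissipate
    r = (supply_v - led_v) / current_a     # (Vsupply - Vled) / I
    p = (supply_v - led_v) * current_a     # P = V * I across the resistor
    return r, p

r, p = led_resistor(5, 2, 0.020)           # 5 V supply, 2 V LED, 20 mA target
print(r, p)                                # about 150 ohms and 0.06 W (60 mW)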
Speede V.2
SpeeDe V.1 -pdf
upgraded features ;
– 12 v battery
– improved LDR sensitivity
changes made ;
– 12 v battery
On Speede v.1 we used 2x 9 V batteries, one for driving the Arduino board and another for powering the laser. This did not suit the device as it kept draining the batteries, which made us upgrade the battery source to a 12 V rechargeable battery. But the laser and the Arduino kit could only handle a 9 V supply, so we connected an IC 7809 to the battery source and then supplied the components from it (the IC 7809 simply burns off the excess voltage, i.e. 3 V, and provides a regulated 9 V output).
– improved LDR sensitivity
The LDRs sensing the laser were affected by external light sources, which compromised Speede's ability to work in a brighter environment. To eliminate this factor, two PVC tubes were fitted
around each LDR. | {"url":"https://www.auraauro.com/page/72/","timestamp":"2024-11-12T09:53:15Z","content_type":"text/html","content_length":"157583","record_id":"<urn:uuid:51e2e8cf-d64a-46f1-b270-81434123978e>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00657.warc.gz"} |
Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One
06 Feb 2020
• The paper proposed a framework for joint modeling of labels and data by interpreting a discriminative classifier p(y|x) as an energy-based model p(x, y).
• Joint modeling provides benefits like improved calibration (i.e., the predictive confidence should align with the misclassification rate), robustness, and out-of-distribution detection.
• Consider a standard classifier $f_{\theta}(x)$ which produces a k-dimensional vector of logits.
• $p_{\theta}(y | x) = softmax(f_{\theta}(x)[y])$
• Using concepts from energy based models, we write $p_{\theta}(x, y) = \frac{exp(-E_{\theta}(x, y))}{Z_{\theta}}$ where $E_{\theta}(x, y) = -f_{\theta}(x)[y]$
• $p_{\theta}(x) = \sum_{y}{ \frac{exp(-E_{\theta}(x, y))}{Z_{\theta}}}$
• $E_{\theta}(x) = -LogSumExp_y(f_{\theta}(x)[y])$
• Note that in the standard discriminative setup, shifting the logits $f_{\theta}(x)$ does not affect the model but it affects $p_{\theta}(x)$.
• Computing $p_{\theta}(y | x)$ using $p_{\theta}(x, y)$ and $p_{\theta}(x)$ gives back the same softmax parameterization as before.
• This reinterpreted classifier is referred to as a Joint Energy-based Model (JEM).
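• A tiny numpy sketch of this reinterpretation (mine, with made-up logits, not from the paper's code): the same logit vector yields both the softmax classifier and the unnormalized density score -E(x).

import numpy as np

def logsumexp(v):
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

logits = np.array([2.0, -1.0, 0.5])                 # f_theta(x): one logit per class

p_y_given_x = np.exp(logits - logsumexp(logits))    # the usual softmax classifier
energy_xy = -logits                                 # E(x, y) = -f_theta(x)[y]
energy_x = -logsumexp(logits)                       # E(x) = -LogSumExp_y f_theta(x)[y]

# log p(x) = -E(x) - log Z, so -E(x) ranks inputs by unnormalized density
print(p_y_given_x, energy_x)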
• The log-likelihood of the data can be factorized as $log p_{\theta}(x, y) = log p_{\theta}(x) + log p_{\theta}(y | x)$.
• The second factor can be trained using the standard CE loss. In contrast, the first factor can be trained using a sampler based on Stochastic Gradient Langevin Dynamics.
Hybrid Modelling
• Datasets: CIFAR10, CIFAR100, SVHN.
• Metrics: Inception Score, Frechet Inception Distance
• JEM outperforms generative, discriminative, and hybrid models on both generative and discriminative tasks.
• A calibrated classifier is the one where the predictive confidence aligns with the misclassification rate.
• Dataset: CIFAR100
• JEM improves calibration while retaining high accuracy.
Out of Distribution (OOD) Detection
• One way to detect OOD samples is to learn a density model that assigns a higher likelihood to in-distribution examples and lower likelihood to out of distribution examples.
• JEM consistently assigns a higher likelihood to in-distribution examples.
• The paper also proposes an alternate metric called approximate mass to detect OOD examples.
• The intuition is that a point could have a high likelihood but be impossible to sample because its surroundings have a very low density.
• On the other hand, the in-distribution data points would lie in a region of high probability mass.
• Hence the norm of the gradient of log density could provide a useful signal to detect OOD examples.
• JEM is more robust to adversarial attacks as compared to discriminative classifiers. | {"url":"https://shagunsodhani.com/papers-I-read/Your-Classifier-is-Secretly-an-Energy-Based-Model,-and-You-Should-Treat-it-Like-One","timestamp":"2024-11-02T21:31:23Z","content_type":"text/html","content_length":"14005","record_id":"<urn:uuid:c15d89f3-606f-4876-b31a-2a56793b8b36>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00786.warc.gz"} |
8.E: Testing Hypotheses (Exercises)
Last updated
Page ID
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang.
8.1: The Elements of Hypothesis Testing
State the null and alternative hypotheses for each of the following situations. (That is, identify the correct number \(\mu _0\) and write \(H_0:\mu =\mu _0\) and the appropriate analogous expression
for \(H_a\).)
1. The average July temperature in a region historically has been \(74.5^{\circ}F\). Perhaps it is higher now.
2. The average weight of a female airline passenger with luggage was \(145\) pounds ten years ago. The FAA believes it to be higher now.
3. The average stipend for doctoral students in a particular discipline at a state university is \(\$14,756\). The department chairman believes that the national average is higher.
4. The average room rate in hotels in a certain region is \(\$82.53\). A travel agent believes that the average in a particular resort area is different.
5. The average farm size in a predominately rural state was \(69.4\) acres. The secretary of agriculture of that state asserts that it is less today.
State the null and alternative hypotheses for each of the following situations. (That is, identify the correct number \(\mu _0\) and write \(H_0:\mu =\mu _0\) and the appropriate analogous expression
for \(H_a\).)
1. The average time workers spent commuting to work in Verona five years ago was \(38.2\) minutes. The Verona Chamber of Commerce asserts that the average is less now.
2. The mean salary for all men in a certain profession is \(\$58,291\). A special interest group thinks that the mean salary for women in the same profession is different.
3. The accepted figure for the caffeine content of an \(8\)-ounce cup of coffee is \(133\) mg. A dietitian believes that the average for coffee served in a local restaurants is higher.
4. The average yield per acre for all types of corn in a recent year was \(161.9\) bushels. An economist believes that the average yield per acre is different this year.
5. An industry association asserts that the average age of all self-described fly fishermen is \(42.8\) years. A sociologist suspects that it is higher.
Describe the two types of errors that can be made in a test of hypotheses.
Under what circumstance is a test of hypotheses certain to yield a correct decision?
Answers

1. \(H_0:\mu =74.5\; vs\; H_a:\mu >74.5\)
2. \(H_0:\mu =145\; vs\; H_a:\mu >145\)
3. \(H_0:\mu =14756\; vs\; H_a:\mu >14756\)
4. \(H_0:\mu =82.53\; vs\; H_a:\mu \neq 82.53\)
5. \(H_0:\mu =69.4\; vs\; H_a:\mu <69.4\)
3. A Type I error is made when a true \(H_0\) is rejected. A Type II error is made when a false \(H_0\) is not rejected.
8.2: Large Sample Tests for a Population Mean
1. Find the rejection region (for the standardized test statistic) for each hypothesis test.
1. \(H_0:\mu =27\; vs\; H_a:\mu <27\; @\; \alpha =0.05\)
2. \(H_0:\mu =52\; vs\; H_a:\mu \neq 52\; @\; \alpha =0.05\)
3. \(H_0:\mu =-105\; vs\; H_a:\mu >-105\; @\; \alpha =0.10\)
4. \(H_0:\mu =78.8\; vs\; H_a:\mu \neq 78.8\; @\; \alpha =0.10\)
2. Find the rejection region (for the standardized test statistic) for each hypothesis test.
1. \(H_0:\mu =17\; vs\; H_a:\mu <17\; @\; \alpha =0.01\)
2. \(H_0:\mu =880\; vs\; H_a:\mu \neq 880\; @\; \alpha =0.01\)
3. \(H_0:\mu =-12\; vs\; H_a:\mu >-12\; @\; \alpha =0.05\)
4. \(H_0:\mu =21.1\; vs\; H_a:\mu \neq 21.1\; @\; \alpha =0.05\)
3. Find the rejection region (for the standardized test statistic) for each hypothesis test. Identify the test as left-tailed, right-tailed, or two-tailed.
1. \(H_0:\mu =141\; vs\; H_a:\mu <141\; @\; \alpha =0.20\)
2. \(H_0:\mu =-54\; vs\; H_a:\mu <-54\; @\; \alpha =0.05\)
3. \(H_0:\mu =98.6\; vs\; H_a:\mu \neq 98.6\; @\; \alpha =0.05\)
4. \(H_0:\mu =3.8\; vs\; H_a:\mu >3.8\; @\; \alpha =0.001\)
4. Find the rejection region (for the standardized test statistic) for each hypothesis test. Identify the test as left-tailed, right-tailed, or two-tailed.
1. \(H_0:\mu =-62\; vs\; H_a:\mu \neq -62\; @\; \alpha =0.005\)
2. \(H_0:\mu =73\; vs\; H_a:\mu >73\; @\; \alpha =0.001\)
3. \(H_0:\mu =1124\; vs\; H_a:\mu <1124\; @\; \alpha =0.001\)
4. \(H_0:\mu =0.12\; vs\; H_a:\mu \neq 0.12\; @\; \alpha =0.001\)
5. Compute the value of the test statistic for the indicated test, based on the information given.
1. Testing \(H_0:\mu =72.2\; vs\; H_a:\mu >72.2,\; \sigma \; \text{unknown}\; n=55,\; \bar{x}=75.1,\; s=9.25\)
2. Testing \(H_0:\mu =58\; vs\; H_a:\mu >58,\; \sigma =1.22\; n=40,\; \bar{x}=58.5,\; s=1.29\)
3. Testing \(H_0:\mu =-19.5\; vs\; H_a:\mu <-19.5,\; \sigma \; \text{unknown}\; n=30,\; \bar{x}=-23.2,\; s=9.55\)
4. Testing \(H_0:\mu =805\; vs\; H_a:\mu \neq 805,\; \sigma =37.5\; n=75,\; \bar{x}=818,\; s=36.2\)
6. Compute the value of the test statistic for the indicated test, based on the information given.
1. Testing \(H_0:\mu =342\; vs\; H_a:\mu <342,\; \sigma =11.2\; n=40,\; \bar{x}=339,\; s=10.3\)
2. Testing \(H_0:\mu =105\; vs\; H_a:\mu >105,\; \sigma =5.3\; n=80,\; \bar{x}=107,\; s=5.1\)
3. Testing \(H_0:\mu =-13.5\; vs\; H_a:\mu \neq -13.5,\; \sigma \; \text{unknown}\; n=32,\; \bar{x}=-13.8,\; s=1.5\)
4. Testing \(H_0:\mu =28\; vs\; H_a:\mu \neq 28,\; \sigma \; \text{unknown}\; n=68,\; \bar{x}=27.8,\; s=1.3\)
7. Perform the indicated test of hypotheses, based on the information given.
1. Test \(H_0:\mu =212\; vs\; H_a:\mu <212\; @\; \alpha =0.10,\; \sigma \; \text{unknown}\; n=36,\; \bar{x}=211.2,\; s=2.2\)
2. Test \(H_0:\mu =-18\; vs\; H_a:\mu >-18\; @\; \alpha =0.05,\; \sigma =3.3\; n=44,\; \bar{x}=-17.2,\; s=3.1\)
3. Test \(H_0:\mu =24\; vs\; H_a:\mu \neq 24\; @\; \alpha =0.02,\; \sigma \; \text{unknown}\; n=50,\; \bar{x}=22.8,\; s=1.9\)
8. Perform the indicated test of hypotheses, based on the information given.
1. Test \(H_0:\mu =105\; vs\; H_a:\mu >105\; @\; \alpha =0.05,\; \sigma \; \text{unknown}\; n=30,\; \bar{x}=108,\; s=7.2\)
2. Test \(H_0:\mu =21.6\; vs\; H_a:\mu <21.6\; @\; \alpha =0.01,\; \sigma \; \text{unknown}\; n=78,\; \bar{x}=20.5,\; s=3.9\)
3. Test \(H_0:\mu =-375\; vs\; H_a:\mu \neq -375\; @\; \alpha =0.01,\; \sigma =18.5\; n=31,\; \bar{x}=-388,\; s=18.0\)
9. In the past the average length of an outgoing telephone call from a business office has been \(143\) seconds. A manager wishes to check whether that average has decreased after the introduction
of policy changes. A sample of \(100\) telephone calls produced a mean of \(133\) seconds, with a standard deviation of \(35\) seconds. Perform the relevant test at the \(1\%\) level of significance.
10. The government of an impoverished country reports the mean age at death among those who have survived to adulthood as \(66.2\) years. A relief agency examines \(30\) randomly selected deaths and
obtains a mean of \(62.3\) years with standard deviation \(8.1\) years. Test whether the agency’s data support the alternative hypothesis, at the \(1\%\) level of significance, that the
population mean is less than \(66.2\).
11. The average household size in a certain region several years ago was \(3.14\) persons. A sociologist wishes to test, at the \(5\%\) level of significance, whether it is different now. Perform the
test using the information collected by the sociologist: in a random sample of \(75\) households, the average size was \(2.98\) persons, with sample standard deviation \(0.82\) person.
12. The recommended daily calorie intake for teenage girls is \(2,200\) calories/day. A nutritionist at a state university believes the average daily caloric intake of girls in that state to be
lower. Test that hypothesis, at the \(5\%\) level of significance, against the null hypothesis that the population average is \(2,200\) calories/day using the following sample data: \(n=36,\; \
bar{x}=2,150,\; s=203\)
13. An automobile manufacturer recommends oil change intervals of \(3,000\) miles. To compare actual intervals to the recommendation, the company randomly samples records of \(50\) oil changes at
service facilities and obtains sample mean \(3,752\) miles with sample standard deviation \(638\) miles. Determine whether the data provide sufficient evidence, at the \(5\%\) level of
significance, that the population mean interval between oil changes exceeds \(3,000\) miles.
14. A medical laboratory claims that the mean turn-around time for performance of a battery of tests on blood samples is \(1.88\) business days. The manager of a large medical practice believes that
the actual mean is larger. A random sample of \(45\) blood samples yielded mean \(2.09\) and sample standard deviation \(0.13\) day. Perform the relevant test at the \(10\%\) level of
significance, using these data.
15. A grocery store chain has as one standard of service that the mean time customers wait in line to begin checking out not exceed \(2\) minutes. To verify the performance of a store the company
measures the waiting time in \(30\) instances, obtaining mean time \(2.17\) minutes with standard deviation \(0.46\) minute. Use these data to test the null hypothesis that the mean waiting time
is \(2\) minutes versus the alternative that it exceeds \(2\) minutes, at the \(10\%\) level of significance.
16. A magazine publisher tells potential advertisers that the mean household income of its regular readership is \(\$61,500\). An advertising agency wishes to test this claim against the alternative
that the mean is smaller. A sample of \(40\) randomly selected regular readers yields mean income \(\$59,800\) with standard deviation \(\$5,850\). Perform the relevant test at the \(1\%\) level
of significance.
17. Authors of a computer algebra system wish to compare the speed of a new computational algorithm to the currently implemented algorithm. They apply the new algorithm to \(50\) standard problems;
it averages \(8.16\) seconds with standard deviation \(0.17\) second. The current algorithm averages \(8.21\) seconds on such problems. Test, at the \(1\%\) level of significance, the alternative
hypothesis that the new algorithm has a lower average time than the current algorithm.
18. A random sample of the starting salaries of \(35\) randomly selected graduates with bachelor’s degrees last year gave sample mean and standard deviation \(\$41,202\) and \(\$7,621\),
respectively. Test whether the data provide sufficient evidence, at the \(5\%\) level of significance, to conclude that the mean starting salary of all graduates last year is less than the mean
of all graduates two years before, \(\$43,589\).
Additional Exercises
19. The mean household income in a region served by a chain of clothing stores is \(\$48,750\). In a sample of \(40\) customers taken at various stores the mean income of the customers was \(\$51,505
\) with standard deviation \(\$6,852\).
1. Test at the \(10\%\) level of significance the null hypothesis that the mean household income of customers of the chain is \(\$48,750\) against the alternative that it is different from \(\$48,750\).
2. The sample mean is greater than \(\$48,750\), suggesting that the actual mean of people who patronize this store is greater than \(\$48,750\). Perform this test, also at the \(10\%\) level of
significance. (The computation of the test statistic done in part (a) still applies here.)
20. The labor charge for repairs at an automobile service center are based on a standard time specified for each type of repair. The time specified for replacement of universal joint in a drive shaft
is one hour. The manager reviews a sample of \(30\) such repairs. The average of the actual repair times is \(0.86\) hour with standard deviation \(0.32\) hour.
1. Test at the \(1\%\) level of significance the null hypothesis that the actual mean time for this repair is one hour against the alternative that it differs from one hour.
2. The sample mean is less than one hour, suggesting that the mean actual time for this repair is less than one hour. Perform this test, also at the \(1\%\) level of significance. (The
computation of the test statistic done in part (a) still applies here.)
Large Data Set Exercises
Large Data Set missing from the original
21. Large \(\text{Data Set 1}\) records the SAT scores of \(1,000\) students. Regarding it as a random sample of all high school students, use it to test the hypothesis that the population mean
exceeds \(1,510\), at the \(1\%\) level of significance. (The null hypothesis is that \(\mu =1510\)).
22. Large \(\text{Data Set 1}\) records the GPAs of \(1,000\) college students. Regarding it as a random sample of all college students, use it to test the hypothesis that the population mean is less
than \(2.50\), at the \(10\%\) level of significance. (The null hypothesis is that \(\mu =2.50\)).
23. Large \(\text{Data Set 1}\) lists the SAT scores of \(1,000\) students.
1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population mean \(\mu\).
2. Regard the first \(50\) students in the data set as a random sample drawn from the population of part (a) and use it to test the hypothesis that the population mean exceeds \(1,510\), at the
\(10\%\) level of significance. (The null hypothesis is that \(\mu =1510\)).
3. Is your conclusion in part (b) in agreement with the true state of nature (which by part (a) you know), or is your decision in error? If your decision is in error, is it a Type I error or a
Type II error?
24. Large \(\text{Data Set 1}\) lists the GPAs of \(1,000\) students.
1. Regard the data as arising from a census of all freshman at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured.
Compute the population mean \(\mu\).
2. Regard the first \(50\) students in the data set as a random sample drawn from the population of part (a) and use it to test the hypothesis that the population mean is less than \(2.50\), at
the \(10\%\) level of significance. (The null hypothesis is that \(\mu =2.50\)).
3. Is your conclusion in part (b) in agreement with the true state of nature (which by part (a) you know), or is your decision in error? If your decision is in error, is it a Type I error or a
Type II error?
Answers

1. \(Z\leq -1.645\)
2. \(Z\leq -1.96\; or\; Z\geq 1.96\)
3. \(Z\geq 1.28\)
4. \(Z\leq -1.645\; or\; Z\geq 1.645\)
1. \(Z\leq -0.84\)
2. \(Z\leq -1.645\)
3. \(Z\leq -1.96\; or\; Z\geq 1.96\)
4. \(Z\geq 3.1\)
1. \(Z = 2.325\)
2. \(Z = 2.592\)
3. \(Z = -2.122\)
4. \(Z = 3.002\)
1. \(Z = -2.18,\; -z_{0.10}=-1.28,\; \text{reject}\; H_0\)
2. \(Z = 1.61,\; z_{0.05}=1.645,\; \text{do not reject}\; H_0\)
3. \(Z = -4.47,\; -z_{0.01}=-2.33,\; \text{reject}\; H_0\)
9. \(Z = -2.86,\; -z_{0.01}=-2.33,\; \text{reject}\; H_0\)
11. \(Z = -1.69,\; -z_{0.025}=-1.96,\; \text{do not reject}\; H_0\)
13. \(Z = 8.33,\; z_{0.05}=1.645,\; \text{reject}\; H_0\)
15. \(Z = 2.02,\; z_{0.10}=1.28,\; \text{reject}\; H_0\)
17. \(Z = -2.08,\; -z_{0.01}=-2.33,\; \text{do not reject}\; H_0\)
1. \(Z =2.54,\; z_{0.05}=1.645,\; \text{reject}\; H_0\)
2. \(Z = 2.54,\; z_{0.10}=1.28,\; \text{reject}\; H_0\)
21. \(H_0:\mu =1510\; vs\; H_a:\mu >1510\). Test Statistic: \(Z = 2.7882\). Rejection Region: \([2.33,\infty )\). Decision: Reject \(H_0\).
1. \(\mu _0=1528.74\)
2. \(H_0:\mu =1510\; vs\; H_a:\mu >1510\). Test Statistic: \(Z = -1.41\). Rejection Region: \([1.28,\infty )\). Decision: Fail to reject \(H_0\).
3. No, it is a Type II error.
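For readers who want to check the arithmetic behind answers like these with software, here is a minimal R sketch of the large-sample test (R is not part of the original exercise set; the numbers are taken from Exercise 9 above):

mu0 <- 143; n <- 100; xbar <- 133; s <- 35; alpha <- 0.01

z <- (xbar - mu0) / (s / sqrt(n))   # test statistic, about -2.86
z_crit <- qnorm(alpha)              # left-tailed critical value, about -2.33
z < z_crit                          # TRUE, so reject H0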
8.3: The Observed Significance of a Test
1. Compute the observed significance of each test.
1. Testing \(H_0:\mu =54.7\; vs\; H_a:\mu <54.7,\; \text{test statistic}\; z=-1.72\)
2. Testing \(H_0:\mu =195\; vs\; H_a:\mu \neq 195,\; \text{test statistic}\; z=-2.07\)
3. Testing \(H_0:\mu =-45\; vs\; H_a:\mu >-45,\; \text{test statistic}\; z=2.54\)
2. Compute the observed significance of each test.
1. Testing \(H_0:\mu =0\; vs\; H_a:\mu \neq 0,\; \text{test statistic}\; z=2.82\)
2. Testing \(H_0:\mu =18.4\; vs\; H_a:\mu <18.4,\; \text{test statistic}\; z=-1.74\)
3. Testing \(H_0:\mu =63.85\; vs\; H_a:\mu >63.85,\; \text{test statistic}\; z=1.93\)
3. Compute the observed significance of each test. (Some of the information given might not be needed.)
1. Testing \(H_0:\mu =27.5\; vs\; H_a:\mu >27.5,\; n=49,\; \bar{x}=28.9,\; s=3.14,\; \text{test statistic}\; z=3.12\)
2. Testing \(H_0:\mu =581\; vs\; H_a:\mu <581,\; n=32,\; \bar{x}=560,\; s=47.8,\; \text{test statistic}\; z=-2.49\)
3. Testing \(H_0:\mu =138.5\; vs\; H_a:\mu \neq 138.5,\; n=44,\; \bar{x}=137.6,\; s=2.45,\; \text{test statistic}\; z=-2.44\)
4. Compute the observed significance of each test. (Some of the information given might not be needed.)
1. Testing \(H_0:\mu =-17.9\; vs\; H_a:\mu <-17.9,\; n=34,\; \bar{x}=-18.2,\; s=0.87,\; \text{test statistic}\; z=-2.01\)
2. Testing \(H_0:\mu =5.5\; vs\; H_a:\mu \neq 5.5,\; n=56,\; \bar{x}=7.4,\; s=4.82,\; \text{test statistic}\; z=2.95\)
3. Testing \(H_0:\mu =1255\; vs\; H_a:\mu >1255,\; n=152,\; \bar{x}=1257,\; s=7.5,\; \text{test statistic}\; z=3.29\)
5. Make the decision in each test, based on the information provided.
1. Testing \(H_0:\mu =82.9\; vs\; H_a:\mu <82.9\; @\; \alpha =0.05\), observed significance \(p=0.038\)
2. Testing \(H_0:\mu =213.5\; vs\; H_a:\mu \neq 213.5\; @\; \alpha =0.01\), observed significance \(p=0.038\)
6. Make the decision in each test, based on the information provided.
1. Testing \(H_0:\mu =31.4\; vs\; H_a:\mu >31.4\; @\; \alpha =0.10\), observed significance \(p=0.062\)
2. Testing \(H_0:\mu =-75.5\; vs\; H_a:\mu <-75.5\; @\; \alpha =0.05\), observed significance \(p=0.062\)
7. A lawyer believes that a certain judge imposes prison sentences for property crimes that are longer than the state average \(11.7\) months. He randomly selects \(36\) of the judge’s sentences and
obtains mean \(13.8\) and standard deviation \(3.9\) months.
1. Perform the test at the \(1\%\) level of significance using the critical value approach.
2. Compute the observed significance of the test.
3. Perform the test at the \(1\%\) level of significance using the \(p\)-value approach. You need not repeat the first three steps, already done in part (a).
8. In a recent year the fuel economy of all passenger vehicles was \(19.8\) mpg. A trade organization sampled \(50\) passenger vehicles for fuel economy and obtained a sample mean of \(20.1\) mpg
with standard deviation \(2.45\) mpg. The sample mean \(20.1\) exceeds \(19.8\), but perhaps the increase is only a result of sampling error.
1. Perform the relevant test of hypotheses at the \(20\%\) level of significance using the critical value approach.
2. Compute the observed significance of the test.
3. Perform the test at the \(20\%\) level of significance using the \(p\)-value approach. You need not repeat the first three steps, already done in part (a).
9. The mean score on a \(25\)-point placement exam in mathematics used for the past two years at a large state university is \(14.3\). The placement coordinator wishes to test whether the mean score
on a revised version of the exam differs from \(14.3\). She gives the revised exam to \(30\) entering freshmen early in the summer; the mean score is \(14.6\) with standard deviation \(2.4\).
1. Perform the test at the \(10\%\) level of significance using the critical value approach.
2. Compute the observed significance of the test.
3. Perform the test at the \(10\%\) level of significance using the \(p\)-value approach. You need not repeat the first three steps, already done in part (a).
10. The mean increase in word family vocabulary among students in a one-year foreign language course is \(576\) word families. In order to estimate the effect of a new type of class scheduling, an
instructor monitors the progress of \(60\) students; the sample mean increase in word family vocabulary of these students is \(542\) word families with sample standard deviation \(18\) word families.
1. Test at the \(5\%\) level of significance whether the mean increase with the new class scheduling is different from \(576\) word families, using the critical value approach.
2. Compute the observed significance of the test.
3. Perform the test at the \(5\%\) level of significance using the \(p\)-value approach. You need not repeat the first three steps, already done in part (a).
11. The mean yield for hard red winter wheat in a certain state is \(44.8\) bu/acre. In a pilot program a modified growing scheme was introduced on \(35\) independent plots. The result was a sample
mean yield of \(45.4\) bu/acre with sample standard deviation \(1.6\) bu/acre, an apparent increase in yield.
1. Test at the \(5\%\) level of significance whether the mean yield under the new scheme is greater than \(44.8\) bu/acre, using the critical value approach.
2. Compute the observed significance of the test.
3. Perform the test at the \(5\%\) level of significance using the \(p\)-value approach. You need not repeat the first three steps, already done in part (a).
12. The average amount of time that visitors spent looking at a retail company’s old home page on the world wide web was \(23.6\) seconds. The company commissions a new home page. On its first day in
place the mean time spent at the new page by \(7,628\) visitors was \(23.5\) seconds with standard deviation \(5.1\) seconds.
1. Test at the \(5\%\) level of significance whether the mean visit time for the new page is less than the former mean of \(23.6\) seconds, using the critical value approach.
2. Compute the observed significance of the test.
3. Perform the test at the \(5\%\) level of significance using the \(p\)-value approach. You need not repeat the first three steps, already done in part (a).
Answers

1. \(p\text{-value}=0.0427\)
2. \(p\text{-value}=0.0384\)
3. \(p\text{-value}=0.0055\)
1. \(p\text{-value}=0.0009\)
2. \(p\text{-value}=0.0064\)
3. \(p\text{-value}=0.0146\)
1. reject \(H_0\)
2. do not reject \(H_0\)
1. \(Z=3.23,\; z_{0.01}=2.33\), reject \(H_0\)
2. \(p\text{-value}=0.0006\)
3. reject \(H_0\)
1. \(Z=0.68,\; z_{0.05}=1.645\), do not reject \(H_0\)
2. \(p\text{-value}=0.4966\)
3. do not reject \(H_0\)
1. \(Z=2.22,\; z_{0.05}=1.645\), reject \(H_0\)
2. \(p\text{-value}=0.0132\)
3. reject \(H_0\)
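The observed significance values in the answers above come straight from the standard normal distribution. As a check, a minimal R sketch (R is not part of the original exercises; the z values are those of Exercise 1):

pnorm(-1.72)             # left-tailed p-value, about 0.0427
2 * pnorm(-abs(-2.07))   # two-tailed p-value, about 0.038
1 - pnorm(2.54)          # right-tailed p-value, about 0.0055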
8.4: Small Sample Tests for a Population Mean
1. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed.
1. \(H_0: \mu =27\; vs\; H_a:\mu <27\; @\; \alpha =0.05,\; n=12,\; \sigma =2.2\)
2. \(H_0: \mu =52\; vs\; H_a:\mu \neq 52\; @\; \alpha =0.05,\; n=6,\; \sigma \; \text{unknown} \)
3. \(H_0: \mu =-105\; vs\; H_a:\mu >-105\; @\; \alpha =0.10,\; n=24,\; \sigma \; \text{unknown} \)
4. \(H_0: \mu =78.8\; vs\; H_a:\mu \neq 78.8\; @\; \alpha =0.10,\; n=8,\; \sigma =1.7\)
2. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed.
1. \(H_0: \mu =17\; vs\; H_a:\mu <17\; @\; \alpha =0.01,\; n=26,\; \sigma =0.94\)
2. \(H_0: \mu =880\; vs\; H_a:\mu \neq 880\; @\; \alpha =0.01,\; n=4,\; \sigma \; \text{unknown} \)
3. \(H_0: \mu =-12\; vs\; H_a:\mu >-12\; @\; \alpha =0.05,\; n=18,\; \sigma =1.1\)
4. \(H_0: \mu =21.1\; vs\; H_a:\mu \neq 21.1\; @\; \alpha =0.05,\; n=23,\; \sigma \; \text{unknown} \)
3. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. Identify the test as left-tailed,
right-tailed, or two-tailed.
1. \(H_0: \mu =141\; vs\; H_a:\mu <141\; @\; \alpha =0.20,\; n=29,\; \sigma \; \text{unknown} \)
2. \(H_0: \mu =-54\; vs\; H_a:\mu <-54\; @\; \alpha =0.05,\; n=15,\; \sigma =1.9\)
3. \(H_0: \mu =98.6\; vs\; H_a:\mu \neq 98.6\; @\; \alpha =0.05,\; n=12,\; \sigma \; \text{unknown} \)
4. \(H_0: \mu =3.8\; vs\; H_a:\mu >3.8\; @\; \alpha =0.001,\; n=27,\; \sigma \; \text{unknown} \)
4. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. Identify the test as left-tailed,
right-tailed, or two-tailed.
1. \(H_0: \mu =-62\; vs\; H_a:\mu \neq -62\; @\; \alpha =0.005,\; n=8,\; \sigma \; \text{unknown} \)
2. \(H_0: \mu =73\; vs\; H_a:\mu >73\; @\; \alpha =0.001,\; n=22,\; \sigma \; \text{unknown} \)
3. \(H_0: \mu =1124\; vs\; H_a:\mu <1124\; @\; \alpha =0.001,\; n=21,\; \sigma \; \text{unknown} \)
4. \(H_0: \mu =0.12\; vs\; H_a:\mu \neq 0.12\; @\; \alpha =0.001,\; n=14,\; \sigma =0.026\)
5. A random sample of size 20 drawn from a normal population yielded the following results: \(\bar{x}=49.2,\; s=1.33\)
1. Test \(H_0: \mu =50\; vs\; H_a:\mu \neq 50\; @\; \alpha =0.01\).
2. Estimate the observed significance of the test in part (a) and state a decision based on the \(p\)-value approach to hypothesis testing.
6. A random sample of size 16 drawn from a normal population yielded the following results: \(\bar{x}=-0.96,\; s=1.07\)
1. Test \(H_0: \mu =0\; vs\; H_a:\mu <0\; @\; \alpha =0.001\).
2. Estimate the observed significance of the test in part (a) and state a decision based on the \(p\)-value approach to hypothesis testing.
7. A random sample of size 8 drawn from a normal population yielded the following results: \(\bar{x}=289,\; s=46\)
1. Test \(H_0: \mu =250\; vs\; H_a:\mu >250\; @\; \alpha =0.05\).
2. Estimate the observed significance of the test in part (a) and state a decision based on the \(p\)-value approach to hypothesis testing.
8. A random sample of size 12 drawn from a normal population yielded the following results: \(\bar{x}=86.2,\; s=0.63\)
1. Test \(H_0: \mu =85.5\; vs\; H_a:\mu \neq 85.5\; @\; \alpha =0.01\).
2. Estimate the observed significance of the test in part (a) and state a decision based on the \(p\)-value approach to hypothesis testing.
9. Researchers wish to test the efficacy of a program intended to reduce the length of labor in childbirth. The accepted mean labor time in the birth of a first child is \(15.3\) hours. The mean
length of the labors of \(13\) first-time mothers in a pilot program was \(8.8\) hours with standard deviation \(3.1\) hours. Assuming a normal distribution of times of labor, test at the \(10\%
\) level of significance whether the mean labor time for all women following this program is less than \(15.3\) hours.
10. A dairy farm uses the somatic cell count (SCC) report on the milk it provides to a processor as one way to monitor the health of its herd. The mean SCC from five samples of raw milk was \(250,000
\) cells per milliliter with standard deviation \(37,500\) cell/ml. Test whether these data provide sufficient evidence, at the \(10\%\) level of significance, to conclude that the mean SCC of
all milk produced at the dairy exceeds that in the previous report, \(210,250\) cell/ml. Assume a normal distribution of SCC.
11. Six coins of the same type are discovered at an archaeological site. If their weights on average are significantly different from \(5.25\) grams then it can be assumed that their provenance is
not the site itself. The coins are weighed and have mean \(4.73\) g with sample standard deviation \(0.18\) g. Perform the relevant test at the \(0.1\%\) (\(\text{1/10th of}\; 1\%\)) level of
significance, assuming a normal distribution of weights of all such coins.
12. An economist wishes to determine whether people are driving less than in the past. In one region of the country the number of miles driven per household per year in the past was \(18.59\)
thousand miles. A sample of \(15\) households produced a sample mean of \(16.23\) thousand miles for the last year, with sample standard deviation \(4.06\) thousand miles. Assuming a normal
distribution of household driving distances per year, perform the relevant test at the \(5\%\) level of significance.
13. The recommended daily allowance of iron for females aged \(19-50\) is \(18\) mg/day. A careful measurement of the daily iron intake of \(15\) women yielded a mean daily intake of \(16.2\) mg with
sample standard deviation \(4.7\) mg.
1. Assuming that daily iron intake in women is normally distributed, perform the test that the actual mean daily intake for all women is different from \(18\) mg/day, at the \(10\%\) level of significance.
2. The sample mean is less than \(18\), suggesting that the actual population mean is less than \(18\) mg/day. Perform this test, also at the \(10\%\) level of significance. (The computation of
the test statistic done in part (a) still applies here.)
14. The target temperature for a hot beverage the moment it is dispensed from a vending machine is \(170^{\circ}F\). A sample of ten randomly selected servings from a new machine undergoing a
pre-shipment inspection gave mean temperature \(173^{\circ}F\) with sample standard deviation \(6.3^{\circ}F\).
1. Assuming that temperature is normally distributed, perform the test that the mean temperature of dispensed beverages is different from \(170^{\circ}F\), at the \(10\%\) level of significance.
2. The sample mean is greater than \(170\), suggesting that the actual population mean is greater than \(170^{\circ}F\). Perform this test, also at the \(10\%\) level of significance. (The
computation of the test statistic done in part (a) still applies here.)
15. The average number of days to complete recovery from a particular type of knee operation is \(123.7\) days. From his experience a physician suspects that use of a topical pain medication might be
lengthening the recovery time. He randomly selects the records of seven knee surgery patients who used the topical medication. The times to total recovery were:\[\begin{matrix} 128 & 135 & 121 &
142 & 126 & 151 & 123 \end{matrix}\]
1. Assuming a normal distribution of recovery times, perform the relevant test of hypotheses at the \(10\%\) level of significance.
2. Would the decision be the same at the \(5\%\) level of significance? Answer either by constructing a new rejection region (critical value approach) or by estimating the \(p\)-value of the
test in part (a) and comparing it to \(\alpha \).
16. A 24-hour advance prediction of a day’s high temperature is “unbiased” if the long-term average of the error in prediction (true high temperature minus predicted high temperature) is zero. The
errors in predictions made by one meteorological station for \(20\) randomly selected days were:\[\begin{matrix} 2 & 0 & -3 & 1 & -2\\ 1 & 0 & -1 & 1 & -1\\ -4 & 1 & 1 & -4 & 0\\ -4 & -3 & -4 & 2
& 2 \end{matrix}\]
1. Assuming a normal distribution of errors, test the null hypothesis that the predictions are unbiased (the mean of the population of all errors is \(0\)) versus the alternative that it is
biased (the population mean is not \(0\)), at the \(1\%\) level of significance.
2. Would the decision be the same at the \(5\%\) level of significance? The \(10\%\) level of significance? Answer either by constructing new rejection regions (critical value approach) or by
estimating the \(p\)-value of the test in part (a) and comparing it to \(\alpha \).
17. Pasteurized milk may not have a standardized plate count (SPC) above \(20,000\) colony-forming bacteria per milliliter (cfu/ml). The mean SPC for five samples was \(21,500\) cfu/ml with sample
standard deviation \(750\) cfu/ml. Test the null hypothesis that the mean SPC for this milk is \(20,000\) versus the alternative that it is greater than \(20,000\), at the \(10\%\) level of
significance. Assume that the SPC follows a normal distribution.
18. One water quality standard for water that is discharged into a particular type of stream or pond is that the average daily water temperature be at most \(18^{\circ}C\). Six samples taken throughout the day gave the data: \[\begin{matrix} 16.8 & 21.5 & 19.1 & 12.8 & 18.0 & 20.7 \end{matrix}\]
The sample mean \(\bar{x}=18.15\) exceeds \(18\), but perhaps this is only sampling error. Determine whether the data provide sufficient evidence, at the \(10\%\) level of significance, to conclude that the mean temperature for the entire day exceeds \(18^{\circ}C\).
Additional Exercises
19. A calculator has a built-in algorithm for generating a random number according to the standard normal distribution. Twenty-five numbers thus generated have mean \(0.15\) and sample standard
deviation \(0.94\). Test the null hypothesis that the mean of all numbers so generated is \(0\) versus the alternative that it is different from \(0\), at the \(20\%\) level of significance.
Assume that the numbers do follow a normal distribution.
20. At every setting a high-speed packing machine delivers a product in amounts that vary from container to container with a normal distribution of standard deviation \(0.12\) ounce. To compare the
amount delivered at the current setting to the desired amount \(64.1\) ounce, a quality inspector randomly selects five containers and measures the contents of each, obtaining sample mean \(63.9
\) ounces and sample standard deviation \(0.10\) ounce. Test whether the data provide sufficient evidence, at the \(5\%\) level of significance, to conclude that the mean of all containers at the
current setting is less than \(64.1\) ounces.
21. A manufacturing company receives a shipment of \(1,000\) bolts of nominal shear strength \(4,350\) lb. A quality control inspector selects five bolts at random and measures the shear strength of
each. The data are:\[\begin{matrix} 4,320 & 4,290 & 4,360 & 4,350 & 4,320 \end{matrix}\]
1. Assuming a normal distribution of shear strengths, test the null hypothesis that the mean shear strength of all bolts in the shipment is \(4,350\) lb versus the alternative that it is less
than \(4,350\) lb, at the \(10\%\) level of significance.
2. Estimate the \(p\)-value (observed significance) of the test of part (a).
3. Compare the \(p\)-value found in part (b) to \(\alpha = 0.10\) and make a decision based on the \(p\)-value approach. Explain fully.
22. A literary historian examines a newly discovered document possibly written by Oberon Theseus. The mean average sentence length of the surviving undisputed works of Oberon Theseus is \(48.72\)
words. The historian counts words in sentences between five successive \(101\) periods in the document in question to obtain a mean average sentence length of \(39.46\) words with standard
deviation \(7.45\) words. (Thus the sample size is five.)
1. Determine if these data provide sufficient evidence, at the \(1\%\) level of significance, to conclude that the mean average sentence length in the document is less than \(48.72\).
2. Estimate the \(p\)-value of the test.
3. Based on the answers to parts (a) and (b), state whether or not it is likely that the document was written by Oberon Theseus.
Answers

1. \(Z\leq -1.645\)
2. \(T\leq -2.571\; or\; T \geq 2.571\)
3. \(T \geq 1.319\)
4. \(Z\leq -1.645\; or\; Z \geq 1.645\)
1. \(T\leq -0.855\)
2. \(Z\leq -1.645\)
3. \(T\leq -2.201\; or\; T \geq 2.201\)
4. \(T \geq 3.435\)
1. \(T=-2.690,\; df=19,\; -t_{0.005}=-2.861,\; \text{do not reject }H_0\)
2. \(0.01<p-value<0.02,\; \alpha =0.01,\; \text{do not reject }H_0\)
1. \(T=2.398,\; df=7,\; t_{0.05}=1.895,\; \text{reject }H_0\)
2. \(0.01<p-value<0.025,\; \alpha =0.05,\; \text{reject }H_0\)
9. \(T=-7.560,\; df=12,\; -t_{0.10}=-1.356,\; \text{reject }H_0\)
11. \(T=-7.076,\; df=5,\; -t_{0.0005}=-6.869,\; \text{reject }H_0\)
1. \(T=-1.483,\; df=14,\; -t_{0.05}=-1.761,\; \text{do not reject }H_0\)
2. \(T=-1.483,\; df=14,\; -t_{0.10}=-1.345,\; \text{reject }H_0\)
1. \(T=2.069,\; df=6,\; t_{0.10}=1.44,\; \text{reject }H_0\)
2. \(T=2.069,\; df=6,\; t_{0.05}=1.943,\; \text{reject }H_0\)
17. \(T=4.472,\; df=4,\; t_{0.10}=1.533,\; \text{reject }H_0\)
19. \(T=0.798,\; df=24,\; t_{0.10}=1.318,\; \text{do not reject }H_0\)
1. \(T=-1.773,\; df=4,\; -t_{0.05}=-2.132,\; \text{do not reject }H_0\)
2. \(0.05<p-value<0.10\)
3. \(\alpha =0.05,\; \text{do not reject }H_0\)
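The small-sample answers above use Student's t distribution. As a check, here is a minimal R sketch using the recovery times from Exercise 15 (R is not part of the original exercises):

x <- c(128, 135, 121, 142, 126, 151, 123)
mu0 <- 123.7; alpha <- 0.10

t_stat <- (mean(x) - mu0) / (sd(x) / sqrt(length(x)))   # about 2.07
t_crit <- qt(1 - alpha, df = length(x) - 1)             # about 1.44
t_stat > t_crit                                         # TRUE, so reject H0

t.test(x, mu = mu0, alternative = "greater")            # built-in equivalent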
8.5: Large Sample Tests for a Population Proportion
On all exercises for this section you may assume that the sample is sufficiently large for the relevant test to be validly performed.
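Throughout this section the test statistic is \(Z=(\hat{p}-p_0)/\sqrt{p_0(1-p_0)/n}\). For readers who want to check their arithmetic, a minimal R sketch (not part of the original exercises; the numbers are those of Exercise 1, part 1):

p0 <- 0.50; n <- 360; phat <- 0.56; alpha <- 0.05

z <- (phat - p0) / sqrt(p0 * (1 - p0) / n)   # about 2.28
z >= qnorm(1 - alpha)                        # TRUE, so reject H0 (right-tailed)
1 - pnorm(z)                                 # right-tailed p-value, about 0.011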
1. Compute the value of the test statistic for each test using the information given.
1. Testing \(H_0:p=0.50\; vs\; H_a:p>0.50,\; n=360,\; \hat{p}=0.56\).
2. Testing \(H_0:p=0.50\; vs\; H_a:p\neq 0.50,\; n=360,\; \hat{p}=0.56\).
3. Testing \(H_0:p=0.37\; vs\; H_a:p<0.37,\; n=1200,\; \hat{p}=0.35\).
2. Compute the value of the test statistic for each test using the information given.
1. Testing \(H_0:p=0.72\; vs\; H_a:p<0.72,\; n=2100,\; \hat{p}=0.71\).
2. Testing \(H_0:p=0.83\; vs\; H_a:p\neq 0.83,\; n=500,\; \hat{p}=0.86\).
3. Testing \(H_0:p=0.22\; vs\; H_a:p<0.22,\; n=750,\; \hat{p}=0.18\).
3. For each part of Exercise 1 construct the rejection region for the test for \(\alpha = 0.05\) and make the decision based on your answer to that part of the exercise.
4. For each part of Exercise 2 construct the rejection region for the test for \(\alpha = 0.05\) and make the decision based on your answer to that part of the exercise.
5. For each part of Exercise 1 compute the observed significance (\(p\)-value) of the test and compare it to \(\alpha = 0.05\) in order to make the decision by the \(p\)-value approach to hypothesis testing.
6. For each part of Exercise 2 compute the observed significance (\(p\)-value) of the test and compare it to \(\alpha = 0.05\) in order to make the decision by the \(p\)-value approach to hypothesis testing.
7. Perform the indicated test of hypotheses using the critical value approach.
1. Testing \(H_0:p=0.55\; vs\; H_a:p>0.55\; @\; \alpha =0.05,\; n=300,\; \hat{p}=0.60\).
2. Testing \(H_0:p=0.47\; vs\; H_a:p\neq 0.47\; @\; \alpha =0.01,\; n=9750,\; \hat{p}=0.46\).
8. Perform the indicated test of hypotheses using the critical value approach.
1. Testing \(H_0:p=0.15\; vs\; H_a:p\neq 0.15\; @\; \alpha =0.001,\; n=1600,\; \hat{p}=0.18\).
2. Testing \(H_0:p=0.90\; vs\; H_a:p>0.90\; @\; \alpha =0.01,\; n=1100,\; \hat{p}=0.91\).
9. Perform the indicated test of hypotheses using the \(p\)-value approach.
1. Testing \(H_0:p=0.37\; vs\; H_a:p\neq 0.37\; @\; \alpha =0.005,\; n=1300,\; \hat{p}=0.40\).
2. Testing \(H_0:p=0.94\; vs\; H_a:p>0.94\; @\; \alpha =0.05,\; n=1200,\; \hat{p}=0.96\).
10. Perform the indicated test of hypotheses using the \(p\)-value approach.
1. Testing \(H_0:p=0.25\; vs\; H_a:p<0.25\; @\; \alpha =0.10,\; n=850,\; \hat{p}=0.23\).
2. Testing \(H_0:p=0.33\; vs\; H_a:p\neq 0.33\; @\; \alpha =0.05,\; n=1100,\; \hat{p}=0.30\).
11. Five years ago \(3.9\%\) of children in a certain region lived with someone other than a parent. A sociologist wishes to test whether the current proportion is different. Perform the relevant
test at the \(5\%\) level of significance using the following data: in a random sample of \(2,759\) children, \(119\) lived with someone other than a parent.
12. The government of a particular country reports its literacy rate as \(52\%\). A nongovernmental organization believes it to be less. The organization takes a random sample of \(600\) inhabitants
and obtains a literacy rate of \(42\%\). Perform the relevant test at the \(0.5\%\) (one-half of \(1\%\)) level of significance.
13. Two years ago \(72\%\) of households in a certain county regularly participated in recycling household waste. The county government wishes to investigate whether that proportion has increased
after an intensive campaign promoting recycling. In a survey of \(900\) households, \(674\) regularly participate in recycling. Perform the relevant test at the \(10\%\) level of significance.
14. Prior to a special advertising campaign, \(23\%\) of all adults recognized a particular company’s logo. At the close of the campaign the marketing department commissioned a survey in which \(311
\) of \(1,200\) randomly selected adults recognized the logo. Determine, at the \(1\%\) level of significance, whether the data provide sufficient evidence to conclude that more than \(23\%\) of
all adults now recognize the company’s logo.
15. A report five years ago stated that \(35.5\%\) of all state-owned bridges in a particular state were “deficient.” An advocacy group took a random sample of \(100\) state-owned bridges in the
state and found \(33\) to be currently rated as being “deficient.” Test whether the current proportion of bridges in such condition is \(35.5\%\) versus the alternative that it is different from
\(35.5\%\), at the \(10\%\) level of significance.
16. In the previous year the proportion of deposits in checking accounts at a certain bank that were made electronically was \(45\%\). The bank wishes to determine if the proportion is higher this
year. It examined \(20,000\) deposit records and found that \(9,217\) were electronic. Determine, at the \(1\%\) level of significance, whether the data provide sufficient evidence to conclude
that more than \(45\%\) of all deposits to checking accounts are now being made electronically.
17. According to the Federal Poverty Measure \(12\%\) of the U.S. population lives in poverty. The governor of a certain state believes that the proportion there is lower. In a sample of size \(1,550\), \(163\) were impoverished according to the federal measure.
1. Test whether the true proportion of the state’s population that is impoverished is less than \(12\%\), at the \(5\%\) level of significance.
2. Compute the observed significance of the test.
18. An insurance company states that it settles \(85\%\) of all life insurance claims within \(30\) days. A consumer group asks the state insurance commission to investigate. In a sample of \(250\)
life insurance claims, \(203\) were settled within \(30\) days.
1. Test whether the true proportion of all life insurance claims made to this company that are settled within \(30\) days is less than \(85\%\), at the \(5\%\) level of significance.
2. Compute the observed significance of the test.
19. A special interest group asserts that \(90\%\) of all smokers began smoking before age \(18\). In a sample of \(850\) smokers, \(687\) began smoking before age \(18\).
1. Test whether the true proportion of all smokers who began smoking before age \(18\) is less than \(90\%\), at the \(1\%\) level of significance.
2. Compute the observed significance of the test.
20. In the past, \(68\%\) of a garage’s business was with former patrons. The owner of the garage samples \(200\) repair invoices and finds that for only \(114\) of them the patron was a repeat customer.
1. Test whether the true proportion of all current business that is with repeat customers is less than \(68\%\), at the \(1\%\) level of significance.
2. Compute the observed significance of the test.
Additional Exercises
21. A rule of thumb is that for working individuals one-quarter of household income should be spent on housing. A financial advisor believes that the average proportion of income spent on housing is
more than \(0.25\). In a sample of \(30\) households, the mean proportion of household income spent on housing was \(0.285\) with a standard deviation of \(0.063\). Perform the relevant test of
hypotheses at the \(1\%\) level of significance. Hint: This exercise could have been presented in an earlier section.
22. Ice cream is legally required to contain at least \(10\%\) milk fat by weight. The manufacturer of an economy ice cream wishes to be close to the legal limit, hence produces its ice cream with a
target proportion of \(0.106\) milk fat. A sample of five containers yielded a mean proportion of \(0.094\) milk fat with standard deviation \(0.002\). Test the null hypothesis that the mean
proportion of milk fat in all containers is \(0.106\) against the alternative that it is less than \(0.106\), at the \(10\%\) level of significance. Assume that the proportion of milk fat in
containers is normally distributed. Hint: This exercise could have been presented in an earlier section.
Large Data Set Exercises
Large Data Sets missing
23. Large \(\text{Data Sets 4 and 4A}\) list the results of \(500\) tosses of a die. Let \(p\) denote the proportion of all tosses of this die that would result in a five. Use the sample data to test
the hypothesis that \(p\) is different from \(1/6\), at the \(20\%\) level of significance.
24. Large \(\text{Data Set 6}\) records results of a random survey of \(200\) voters in each of two regions, in which they were asked to express whether they prefer Candidate \(A\) for a U.S. Senate
seat or prefer some other candidate. Use the full data set (\(400\) observations) to test the hypothesis that the proportion \(p\) of all voters who prefer Candidate \(A\) exceeds \(0.35\). Test
at the \(10\%\) level of significance.
25. Lines \(2\) through \(536\) in Large \(\text{Data Set 11}\) is a sample of \(535\) real estate sales in a certain region in 2008. Those that were foreclosure sales are identified with a \(1\) in
the second column. Use these data to test, at the \(10\%\) level of significance, the hypothesis that the proportion \(p\) of all real estate sales in this region in 2008 that were foreclosure
sales was less than \(25\%\). (The null hypothesis is \(H_0:p=0.25\)).
26. Lines \(537\) through \(1106\) in Large \(\text{Data Set 11}\) is a sample of \(570\) real estate sales in a certain region in 2010. Those that were foreclosure sales are identified with a \(1\)
in the second column. Use these data to test, at the \(5\%\) level of significance, the hypothesis that the proportion \(p\) of all real estate sales in this region in 2010 that were foreclosure
sales was greater than \(23\%\). (The null hypothesis is \(H_0:p=0.23\)).
Answers

1. \(Z = 2.277\)
2. \(Z = 2.277\)
3. \(Z = -1.435\)
1. \(Z \geq 1.645\); reject \(H_0\)
2. \(Z\leq -1.96\; or\; Z \geq 1.96\); reject \(H_0\)
3. \(Z \leq -1.645\); do not reject \(H_0\)
1. \(p-value=0.0116,\; \alpha =0.05\); reject \(H_0\)
2. \(p-value=0.0232,\; \alpha =0.05\); reject \(H_0\)
3. \(p-value=0.0749,\; \alpha =0.05\); do not reject \(H_0\)
1. \(Z=1.74,\; z_{0.05}=1.645\); reject \(H_0\)
2. \(Z=-1.98,\; -z_{0.005}=-2.576\); do not reject \(H_0\)
1. \(Z=2.24,\; p-value=0.025,\alpha =0.005\); do not reject \(H_0\)
2. \(Z=2.92,\; p-value=0.0018,\alpha =0.05\); reject \(H_0\)
11. \(Z=1.11,\; z_{0.025}=1.96\); do not reject \(H_0\)
13. \(Z=1.93,\; z_{0.10}=1.28\); reject \(H_0\)
15. \(Z=-0.523,\; \pm z_{0.05}=\pm 1.645\); do not reject \(H_0\)
1. \(Z=-1.798,\; -z_{0.05}=-1.645\); reject \(H_0\)
2. \(p-value=0.0359\)
1. \(Z=-8.92,\; -z_{0.01}=-2.33\); reject \(H_0\)
2. \(p-value\approx 0\)
21. \(Z=3.04,\; z_{0.01}=2.33\); reject \(H_0\)
23. \(H_0:p=1/6\; vs\; H_a:p\neq 1/6\). Test Statistic: \(Z = -0.76\). Rejection Region: \((-\infty ,-1.28]\cup [1.28,\infty )\). Decision: Fail to reject \(H_0\).
25. \(H_0:p=0.25\; vs\; H_a:p<0.25\). Test Statistic: \(Z = -1.17\). Rejection Region: \((-\infty ,-1.28]\). Decision: Fail to reject \(H_0\). | {"url":"https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/08%3A_Testing_Hypotheses/8.E%3A_Testing_Hypotheses_(Exercises)","timestamp":"2024-11-02T18:50:34Z","content_type":"text/html","content_length":"210840","record_id":"<urn:uuid:a8f34b28-29ed-4414-bdb5-2c6989f19ebb>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00313.warc.gz"} |
Clustering Mixed Data Types in R
[This article was first published on Wicked Good Data - r, and kindly contributed to R-bloggers.]
Clustering allows us to better understand how a sample might be comprised of distinct subgroups given a set of variables. While many introductions to cluster analysis typically review a simple
application using continuous variables, clustering data of mixed types (e.g., continuous, ordinal, and nominal) is often of interest. The following is an overview of one approach to clustering data
of mixed types using Gower distance, partitioning around medoids, and silhouette width.
In total, there are three related decisions that need to be taken for this approach:
• Calculating distance
• Choosing a clustering algorithm
• Selecting the number of clusters
For illustration, the publicly available “College” dataset found in the ISLR package will be used, which has various statistics of US Colleges from 1995 (N = 777). To highlight the challenge of
handling mixed data types, variables that are both categorical and continuous will be used and are listed below:
• Continuous
□ Acceptance rate
□ Out of school tuition
□ Number of new students enrolled
• Categorical
□ Whether a college is public/private
□ Whether a college is elite, defined as having more than 50% of new students who graduated in the top 10% of their high school class
The code was run using R version 3.2.2 with the following packages:
set.seed(1680) # for reproducibility
library(dplyr) # for data cleaning
library(ISLR) # for college dataset
library(cluster) # for gower similarity and pam
library(Rtsne) # for t-SNE plot
library(ggplot2) # for visualization
Before clustering can begin, some data cleaning must be done:
• Acceptance rate is created by dividing the number of acceptances by the number of applications
• isElite is created by labeling colleges with more than 50% of their new students who were in the top 10% of their high school class as elite
college_clean <- College %>%
mutate(name = row.names(.),
accept_rate = Accept/Apps,
isElite = cut(Top10perc,
breaks = c(0, 50, 100),
labels = c("Not Elite", "Elite"),
include.lowest = TRUE)) %>%
mutate(isElite = factor(isElite)) %>%
select(name, accept_rate, Outstate, Enroll,
Grad.Rate, Private, isElite)

glimpse(college_clean)
## Observations: 777
## Variables: 7
## $ name (chr) "Abilene Christian University", "Ad...
## $ accept_rate (dbl) 0.7421687, 0.8801464, 0.7682073, 0....
## $ Outstate (dbl) 7440, 12280, 11250, 12960, 7560, 13...
## $ Enroll (dbl) 721, 512, 336, 137, 55, 158, 103, 4...
## $ Grad.Rate (dbl) 60, 56, 54, 59, 15, 55, 63, 73, 80,...
## $ Private (fctr) Yes, Yes, Yes, Yes, Yes, Yes, Yes,...
## $ isElite (fctr) Not Elite, Not Elite, Not Elite, E...
Calculating Distance
In order for a yet-to-be-chosen algorithm to group observations together, we first need to define some notion of (dis)similarity between observations. A popular choice for clustering is Euclidean
distance. However, Euclidean distance is only valid for continuous variables, and thus is not applicable here. In order for a clustering algorithm to yield sensible results, we have to use a distance
metric that can handle mixed data types. In this case, we will use something called Gower distance.
Gower distance
The concept of Gower distance is actually quite simple. For each variable type, a particular distance metric that works well for that type is used and scaled to fall between 0 and 1. Then, a linear
combination using user-specified weights (most simply an average) is calculated to create the final distance matrix. The metrics used for each data type are described below, and a short hand-computation sketch follows these notes:
• quantitative (interval): range-normalized Manhattan distance
• ordinal: variable is first ranked, then Manhattan distance is used with a special adjustment for ties
• nominal: variables of k categories are first converted into k binary columns and then the Dice coefficient is used
□ pros: Intuitive to understand and straightforward to calculate
□ cons: Sensitive to non-normality and outliers present in continuous variables, so transformations as a pre-processing step might be necessary. Also requires an NxN distance matrix to be
calculated, which is computationally intensive to keep in-memory for large samples
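To make the averaging idea concrete, here is a simplified hand computation of the Gower distance between two rows. It uses plain range-normalized Manhattan distance for every numeric column and a 0/1 mismatch for factors, so it ignores the log transform applied to Enroll below and will only approximate the daisy value; gower_pair is a made-up helper name used for illustration.

gower_pair <- function(df, i, j) {
  per_var <- sapply(names(df), function(v) {
    x <- df[[v]]
    if (is.numeric(x)) abs(x[i] - x[j]) / diff(range(x))   # scaled to [0, 1]
    else as.numeric(x[i] != x[j])                          # 0 if same level, 1 otherwise
  })
  mean(per_var)                                            # simple average across variables
}

gower_pair(college_clean[, -1], 1, 2)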
Below, we see that Gower distance can be calculated in one line using the daisy function. Note that due to positive skew in the Enroll variable, a log transformation is conducted internally via the
type argument. Instructions to perform additional transformations, like for factors that could be considered as asymmetric binary (such as rare events), can be seen in ?daisy.
# Remove college name before clustering
gower_dist <- daisy(college_clean[, -1],
metric = "gower",
type = list(logratio = 3))
# Check attributes to ensure the correct methods are being used
# (I = interval, N = nominal)
# Note that despite logratio being called,
# the type remains coded as "I"

summary(gower_dist)
## 301476 dissimilarities, summarized :
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0018601 0.1034400 0.2358700 0.2314500 0.3271400 0.7773500
## Metric : mixed ; Types = I, I, I, I, N, N
## Number of objects : 777
As a sanity check, we can print out the most similar and dissimilar pair in the data to see if it makes sense. In this case, University of St. Thomas and John Carroll University are rated to be the
most similar given the seven features used in the distance calculation, while University of Science and Arts of Oklahoma and Harvard are rated to be the most dissimilar.
gower_mat <- as.matrix(gower_dist)
# Output most similar pair
college_clean[
  which(gower_mat == min(gower_mat[gower_mat != min(gower_mat)]),
arr.ind = TRUE)[1, ], ]
## name accept_rate Outstate Enroll
## 682 University of St. Thomas MN 0.8784638 11712 828
## 284 John Carroll University 0.8711276 11700 820
## Grad.Rate Private isElite
## 682 89 Yes Not Elite
## 284 89 Yes Not Elite
# Output most dissimilar pair
college_clean[
  which(gower_mat == max(gower_mat[gower_mat != max(gower_mat)]),
        arr.ind = TRUE)[1, ], ]
## name accept_rate
## 673 University of Sci. and Arts of Oklahoma 0.9824561
## 251 Harvard University 0.1561486
## Outstate Enroll Grad.Rate Private isElite
## 673 3687 208 43 No Not Elite
## 251 18485 1606 100 Yes Elite
Choosing a clustering algorithm
Now that the distance matrix has been calculated, it is time to select an algorithm for clustering. While many algorithms that can handle a custom distance matrix exist, partitioning around medoids
(PAM) will be used here.
Partitioning around medoids is an iterative clustering procedure with the following steps:
1. Choose k random entities to become the medoids
2. Assign every entity to its closest medoid (using our custom distance matrix in this case)
3. For each cluster, identify the observation that would yield the lowest average distance if it were to be re-assigned as the medoid. If so, make this observation the new medoid.
4. If at least one medoid has changed, return to step 2. Otherwise, end the algorithm.
If you know the k-means algorithm, this might look very familiar. In fact, both approaches are identical, except k-means has cluster centers defined by Euclidean distance (i.e., centroids), while
cluster centers for PAM are restricted to be the observations themselves (i.e., medoids).
• pros: Easy to understand, more robust to noise and outliers when compared to k-means, and has the added benefit of having an observation serve as the exemplar for each cluster
• cons: Both run time and memory are quadratic (i.e., $O(n^2)$)
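To make steps 1–4 concrete, here is a minimal sketch of the PAM loop in R. It is illustrative only (the cluster package's pam() used below is the implementation to rely on), and it assumes d_mat is a full NxN distance matrix such as as.matrix(gower_dist); ties and empty clusters are handled only crudely.
pam_sketch <- function(d_mat, k, max_iter = 100) {
  n <- nrow(d_mat)
  medoids <- sample(n, k)                               # step 1: random medoids
  for (iter in seq_len(max_iter)) {
    # step 2: assign every observation to its closest medoid
    clusters <- apply(d_mat[, medoids, drop = FALSE], 1, which.min)
    # step 3: within each cluster, pick the member with the lowest average distance
    new_medoids <- sapply(seq_len(k), function(j) {
      members <- which(clusters == j)
      if (length(members) == 0) return(medoids[j])      # keep old medoid if a cluster empties
      members[which.min(rowMeans(d_mat[members, members, drop = FALSE]))]
    })
    # step 4: stop once no medoid changes
    if (setequal(new_medoids, medoids)) break
    medoids <- new_medoids
  }
  list(medoids = medoids, clustering = clusters)
}
# e.g., pam_sketch(as.matrix(gower_dist), k = 3)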
Selecting the number of clusters
A variety of metrics exist to help choose the number of clusters to be extracted in a cluster analysis. We will use silhouette width, an internal validation metric which is an aggregated measure of
how similar an observation is to its own cluster compared its closest neighboring cluster. The metric can range from -1 to 1, where higher values are better. After calculating silhouette width for
clusters ranging from 2 to 10 for the PAM algorithm, we see that 3 clusters yields the highest value.
# Calculate silhouette width for many k using PAM
sil_width <- c(NA)
for(i in 2:10){
pam_fit <- pam(gower_dist,
diss = TRUE,
k = i)
sil_width[i] <- pam_fit$silinfo$avg.width
}

# Plot silhouette width (higher is better)
plot(1:10, sil_width,
xlab = "Number of clusters",
ylab = "Silhouette Width")
lines(1:10, sil_width)
Cluster Interpretation
Via Descriptive Statistics
After running the algorithm and selecting three clusters, we can interpret the clusters by running summary on each cluster. Based on these results, it seems as though Cluster 1 is mainly Private/Not
Elite with medium levels of out of state tuition and smaller levels of enrollment. Cluster 2, on the other hand, is mainly Private/Elite with lower levels of acceptance rates, high levels of out of
state tuition, and high graduation rates. Finally, cluster 3 is mainly Public/Not Elite with the lowest levels of tuition, largest levels of enrollment, and lowest graduation rate.
pam_fit <- pam(gower_dist, diss = TRUE, k = 3)
pam_results <- college_clean %>%
dplyr::select(-name) %>%
mutate(cluster = pam_fit$clustering) %>%
group_by(cluster) %>%
do(the_summary = summary(.))
pam_results$the_summary
## [[1]]
## accept_rate Outstate Enroll
## Min. :0.3283 Min. : 2340 Min. : 35.0
## 1st Qu.:0.7225 1st Qu.: 8842 1st Qu.: 194.8
## Median :0.8004 Median :10905 Median : 308.0
## Mean :0.7820 Mean :11200 Mean : 418.6
## 3rd Qu.:0.8581 3rd Qu.:13240 3rd Qu.: 484.8
## Max. :1.0000 Max. :21700 Max. :4615.0
## Grad.Rate Private isElite cluster
## Min. : 15.00 No : 0 Not Elite:500 Min. :1
## 1st Qu.: 56.00 Yes:500 Elite : 0 1st Qu.:1
## Median : 67.50 Median :1
## Mean : 66.97 Mean :1
## 3rd Qu.: 78.25 3rd Qu.:1
## Max. :118.00 Max. :1
## [[2]]
## accept_rate Outstate Enroll
## Min. :0.1545 Min. : 5224 Min. : 137.0
## 1st Qu.:0.4135 1st Qu.:13850 1st Qu.: 391.0
## Median :0.5329 Median :17238 Median : 601.0
## Mean :0.5392 Mean :16225 Mean : 882.5
## 3rd Qu.:0.6988 3rd Qu.:18590 3rd Qu.:1191.0
## Max. :0.9605 Max. :20100 Max. :4893.0
## Grad.Rate Private isElite cluster
## Min. : 54.00 No : 4 Not Elite: 0 Min. :2
## 1st Qu.: 77.00 Yes:65 Elite :69 1st Qu.:2
## Median : 89.00 Median :2
## Mean : 84.78 Mean :2
## 3rd Qu.: 94.00 3rd Qu.:2
## Max. :100.00 Max. :2
## [[3]]
## accept_rate Outstate Enroll
## Min. :0.3746 Min. : 2580 Min. : 153
## 1st Qu.:0.6423 1st Qu.: 5295 1st Qu.: 694
## Median :0.7458 Median : 6598 Median :1302
## Mean :0.7315 Mean : 6698 Mean :1615
## 3rd Qu.:0.8368 3rd Qu.: 7748 3rd Qu.:2184
## Max. :1.0000 Max. :15516 Max. :6392
## Grad.Rate Private isElite cluster
## Min. : 10.00 No :208 Not Elite:199 Min. :3
## 1st Qu.: 46.00 Yes: 0 Elite : 9 1st Qu.:3
## Median : 54.50 Median :3
## Mean : 55.42 Mean :3
## 3rd Qu.: 65.00 3rd Qu.:3
## Max. :100.00 Max. :3
Another benefit of the PAM algorithm with respect to interpretation is that the medoids serve as exemplars of each cluster. From this, we see that Saint Francis University is the medoid of the
Private/Not Elite cluster, Barnard College is the medoid for the Private/Elite cluster, and Grand Valley State University is the medoid for the Public/Not Elite cluster.
college_clean[pam_fit$medoids, ]
## name accept_rate Outstate
## 492 Saint Francis College 0.7877629 10880
## 38 Barnard College 0.5616987 17926
## 234 Grand Valley State University 0.7525653 6108
## Enroll Grad.Rate Private isElite
## 492 284 69 Yes Not Elite
## 38 531 91 Yes Elite
## 234 1561 57 No Not Elite
Via Visualization
One way to visualize many variables in a lower dimensional space is with t-distributed stochastic neighborhood embedding, or t-SNE. This method is a dimension reduction technique that tries to
preserve local structure so as to make clusters visible in a 2D or 3D visualization. While it typically utilizes Euclidean distance, it has the ability to handle a custom distance metric like the one
we created above. In this case, the plot shows the three well-separated clusters that PAM was able to detect. One curious thing to note is that there is a small group that is split between the
Private/Elite cluster and the Public/Not Elite cluster.
tsne_obj <- Rtsne(gower_dist, is_distance = TRUE)
tsne_data <- tsne_obj$Y %>%
data.frame() %>%
setNames(c("X", "Y")) %>%
mutate(cluster = factor(pam_fit$clustering),
name = college_clean$name)
ggplot(aes(x = X, y = Y), data = tsne_data) +
geom_point(aes(color = cluster))
By investigating further, it looks like this group is made up of the larger, more competitive public schools, like the University of Virginia or the University of California at Berkeley. While not
large enough to warrant an additional cluster according to silhouette width, these 13 schools certainly have characteristics distinct from the other three clusters.
tsne_data %>%
filter(X > 15 & X < 25,
Y > -15 & Y < -10) %>%
left_join(college_clean, by = "name") %>%
collect %>%
.[["name"]]
## [1] "College of William and Mary"
## [2] "Georgia Institute of Technology"
## [3] "SUNY at Binghamton"
## [4] "SUNY College at Geneseo"
## [5] "Trenton State College"
## [6] "University of California at Berkeley"
## [7] "University of California at Irvine"
## [8] "University of Florida"
## [9] "University of Illinois - Urbana"
## [10] "University of Michigan at Ann Arbor"
## [11] "University of Minnesota at Morris"
## [12] "University of North Carolina at Chapel Hill"
## [13] "University of Virginia"
A Final Note: Dealing with Larger Samples and One-Hot Encoding
Because using a custom distance metric requires keeping an NxN matrix in memory, memory usage starts to become a limiting factor for larger sample sizes (> 10,000 or so on my machine). For clustering larger samples,
I have found two options:
1. Two-step clustering in SPSS: This model-based clustering approach can handle categorical and continuous variables and utilizes silhouette width (using rule-of-thumb cutoffs) to find the optimal
number of clusters.
2. Using Euclidean distance on data that has been one-hot encoded: While much quicker computationally, note that this is not optimal as you run into the curse of dimensionality fairly fast since all
categoricals are recoded to become sparse matrices. Note that this approach is actually fairly similar to the Dice coefficient used in the calculation of Gower distance, except it incorrectly labels
0-0 as a match. More on this can be seen in this discussion. | {"url":"https://www.r-bloggers.com/2016/06/clustering-mixed-data-types-in-r-2/","timestamp":"2024-11-08T15:22:21Z","content_type":"text/html","content_length":"116536","record_id":"<urn:uuid:32e972eb-b108-49fc-8acc-4547e915566e>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00658.warc.gz"} |
Prep Baseball Update – Wednesday, May 18th
Dave Overlund
GRANITE CITY 1390 GRANITE CITY HIGH SCHOOL BASEBALL REPORT
I will bring you game summaries of the following teams, weekly and possibly bi-weekly as well: Rocori Spartans, St. Cloud Tech Crush, Sauk Rapids-Rice Storm and Sartell-St. Stephen Sabres of the
Central Lakes Conference. St. Cloud Cathedral Crusaders, Albany Huskies, Foley Lumberjacks and Pierz Pioneers of the Granite Ridge Conference, the Becker Bulldogs of the Mississippi 8 Conference.
Eden Valley-Watkins Eagles, Royalton Royals, Kimball Area Cubs, Paynesville Bulldogs, Holdingford Huskers, Atwater-Cosmo-Grove City Falcons, Belgrade-Brooten-Elrosa Jaguars and Maple Lake Irish of
the Central Mn. Conference.
(MONDAY/TUESDAY MAY 16th/17th)
SAUK RAPIDS-RICE STORM 9 DETROIT LAKES LAKERS 4
The Storm defeated their rivals from the North the Lakers, backed by twelve hits, including a home run and a double. The Storm put up three runs in the second, fifth and the seventh inning, this gave
their pitchers good support. Noah Jenson started on the mound for the Storm, he threw four innings to earn the win. He gave up six hits, three runs, issued one walk and he recorded six strikeouts.
Owen Arndt threw three innings in relief, he gave up four hits, one run and he recorded one strikeout.
The Storm offense was led by Andrew Harren, he went 2-for-4 with a home run for four big RBIs. Luke Pakkala went 3-for-4 with a double for two RBIs and he scored a pair of runs. Keegan Patterson went
2-for-4 with two doubles for a RBI and Cullen Posch went 1-for-4 for a RBI and he scored a run. Andrew Bemboom went 1-for-3 for a RBI, he earned a walk and he scored a run. Jeff Solorz went 1-for-3,
he earned a walk and he scored a pair of runs. Noah Jensen went 1-for-3 and he earned a walk, Terrance Moody earned a walk and he had a stolen base and Dominic Mathies went 1-for-4.
The Lakers starting pitcher was Noah Rieber, he threw five innings, he gave up eight hits, six runs, issued three walks and he recorded three strikeouts. Brock Okeson threw two innings in relief, he
gave up four hits, three runs and he issued a walk.
The Lakers offense was led by Jordan Tucker, he went 1-for-4 with a home run for two RBIs and he scored a pair of runs. Carson Rogstad went 1-for-3 with a double for a RBI. Brady Swiers went 3-for-4
with two doubles and he scored a run. Grady Kirchner went 2-for-4 and he scored a run and Hunter Korte went 1-for-4. Hunter Korte and Noah Rieber both went 1-for-3, Mason Omberg had a sacrifice bunt
and Brock Okeson earned a walk.
ALBANY HUSKIES 3 EVW EAGLES 0
The Huskies of the Granite Ridge Conference defeated their rival the Eagles from the Central Minnesota Conference, backed by four big hits, including a double and good defense. Brady Goebel started
on the mound for the Huskies, he threw 6 1/3 innings to earn the win, he gave up six hits, two walks and he recorded five strikeouts. Brandon Holm closed it out with 2/3 of an inning in relief, he
recorded a strikeout.
The Huskies offense was led by Payton Krumrei, he went 1-for-3 for a RBI and Carter Birr went 1-for-2 with a double, he was hit by a pitch and he scored a run. Tanner Reis went 1-for-3 with a double
and he scored a run and Brandon Holm went 1-for-4. Caden Sand earned a walk and he was hit by a pitch, Carter Schiffler was hit by a pitch and Carson Holthaus scored a run.
The Eagles starting pitcher was Jackson Geislinger, he threw a complete game, he gave up four hits, three runs, one walk and he recorded six strikeouts. The Eagles offense was led by Myles Dziengel
and Xander Willner, who both went 2-for-3. Jackson Geislinger went 1-for-3 with a double, Landon Neiman went 1-for-2, he earned a walk and he had a stolen base and Nolan Geislinger earned a walk and he
had a stolen base.
The Thunder defeated their Granite Ridge Conference rivals the Falcons, backed by six hits, including a home run and a double. This did break the Falcons 43 game regular season winning streak.
Starting pitcher for the Thunder was Eli Nelson, he threw six innings to earn the win, he gave up five hits, three runs, issued four walks and he recorded four strikeouts. Matt Freeberg closed it out
with one inning of relief.
The Thunder offense was led by Matt Freeberg, he went 1-for-2 with a home run and Cal Bushinger went 1-for-3 with a double for two RBIs and he scored a run. Max Gostonziak went 2-for-3, he had a
stolen base and he scored a run. Trevor Jones went 1-for-3 and he scored a run and Eli Nelson went 1-for-3.
The Falcons starting pitcher Derek Dahmen threw a complete game, he gave up six hits, four runs, three walks and he recorded five strikeouts. The Falcons offense was led by Bryce Gapinski, he went
1-for-4 with a home run, a pair of walks and a stolen base. Josiah Peterson went 1-for-3 for a RBI and a walk and Charles Hackett went 1-for-3, with a walk and he scored a run. Logan Winkelman went
1-for-4 and he scored a run, Daniel Dahmen went 1-for-3 and Brett Leabch was hit by a pitch.
CATHEDRAL CRUSADERS 13 MORA MUSTANGS 2 (5 Innings)
The Crusaders defeated their Granite Ridge Conference rivals the Mustangs, backed by thirteen hits. They had eight players collect hits, which gave their starting pitcher great support. Talen
Braegelman, threw five innings to earn the win, he gave up two hits, two runs, two walks and he recorded nine strikeouts.
The Crusaders offense was led by Jack Theisen, he went 2-for-3 for two RBIs, he was hit by a pitch and he scored a pair of runs. Cooper Kosiba went 2-for-4 for four RBIs and he scored a pair of runs.
Austin Lenzmeier went 1-for-4 for two RBIs and he scored a run. Trevor Fleege went 1-for-3 for a RBI he earned a walk and he scored a run. Steve Ellingson went 1-for-2 for a RBI, he had a sacrifice
bunt, he was hit by a pitch and he scored a run. John Hawkins earned a walk, had a stolen base, scored two runs and he was credited for a RBI. Ben Brown went 1-for-2 for a RBI, he was hit by a pitch
and he scored a run and Evan Wahlin scored a run. Tommy Gohman went 3-for-4 with a stolen base and he scored a pair of runs. Grant Wensmann went 2-for-3 and he earned a walk.
The Mustangs starting pitcher was Nathan Nelson, he threw four innings, he gave up eleven hits, eight runs, two walks and he recorded a strikeout. Seth Hatch gave up a pair of runs and Cole Gmahl
threw 1/3 of an inning, he gave up two hits, three runs and a walk. Owen Spokane threw 2/3 of an inning to close it out.
The Mustangs offense was led by Seth Hatch, he went 1-for-2 with a double for a RBI and Nathan Nelson went 1-for-2. Kenny Randt was credited for a RBI and Brock Folkema and Michael Mann both earned a
walk. Levi Dunsmore had a stolen base and he scored a run and Daniel Stillday was hit by a pitch.
KIMBALL AREA CUBS 11 HOLDINGFORD HUSKERS 5
The Cubs defeated their Central Minnesota Conference rivals the Huskers, backed by sixteen hits, including four doubles. They had seven players collect hits and they played solid defense. Their
starting hurler was Skylor Gruba, he threw five innings to earn the win. He gave up seven hits, three runs, issued two walks and he recorded eight strikeouts. Matt Young threw two innings in relief,
he gave up three hits, two runs and he recorded a strikeout.
The Cubs offense was led by Devin Waldorf, he went 2-for-3 with a double for three RBIs, he earned a walk and he scored a pair of runs. Clay Faber went 2-for-3 with a double for two RBIs, he earned a
walk and he scored a run. Skylor Gruba went 3-for-4 for three RBIs and Ace Meyer went 2-for-5 and he scored a pair of runs. Lefty Gavin Winter went 3-for-5 with a double for a RBI, he had a stolen
base and he scored a pair of runs. Cody Leither went 3-for-5, with a stolen base and he scored a pair of runs. Ashton “Shuggs” Hanan went 1-for-3 with a double for a RBI, he earned a pair of walks,
had a stolen base and he scored a pair of runs.
The starting pitcher for the Huskers was CJ Clear, he threw six innings, he gave up fifteen hits, eleven runs, three walks and he recorded seven strikeouts. Nick Hansen threw the final inning in
relief to close it out, he gave up one hit, one walk and he recorded a strikeout.
The Huskers offense was led by Drew Lange, he went 2-for-5 for a RBI and he scored a pair of runs. Dirks Opatz went 2-for-5 for a RBI and Luke Binek went 1-for-4 for a RBI and he had a stolen base.
Nick Hanson went 2-for-3, with a walk, a stolen base and he scored a pair of runs. Rob Voller went 1-for-4 for a RBI and he scored a run. Sam Harren and Cole Clear both went 1-for-3.
MAPLE LAKE IRISH 14 ROYALTON ROYALS 13
The Irish defeated their Central Minnesota Conference rivals the Royals, backed by fifteen hits, including four doubles and four players with multi-hit games. The Irish starting pitcher was Nathan
Zander, he threw 3 1/3 innings, he gave up five hits, seven runs, six walks and he recorded four strikeouts. Noah Gindele threw 2/3 of an inning in relief, he gave up two hits, three runs and he
issued one walk. Carter Scanlon threw three innings in relief, he gave up three hits, three runs, one walk and he recorded four strikeouts.
The Irish was led on offense by Nathan Zander, he went 4-for-5 with three doubles for two RBIs and he scored a pair of runs. Eddy Neu went 3-for-3 for four RBIs, he earned a walk, was hit by a pitch,
had a stolen base and he scored a trio of runs. Danny Reilley went 3-for-4 for three RBIs and he scored a run and Nick Jost went 2-for-4 with a double, he was hit by a pitch and he scored a run.
Marcus Weimer went 1-for-3 for two RBIs, he was hit by a pitch twice and he scored a pair of runs. Joey Gendreau went 1-for-4 for a RBI, he earned a walk, was hit by a pitch and he scored a run. G.
Goelz went 1-for-4, he was hit by a pitch and he scored a pair of runs. Sam Marquette had a sacrifice fly for a RBI and a stolen base. Jarrett Faue was hit by a pitch and
Logan Salmela scored a run.
The starting pitcher for the Royals was Nick Leibold, he threw 3 1/3 innings, he gave up seven hits, seven runs, two walks and he recorded three strikeouts. Jonah Schneider threw three innings in
relief, he gave up eight hits, seven runs, one walk and he recorded a strikeout.
The Royals offense was led by Jameson Klug, he went 1-for-2 with a double and a sacrifice fly for two RBIs, he earned a pair of walks, had a stolen base and he scored a run. Drew Yourczek went
3-for-4 with a triple for a RBI, a walk and he scored three runs. Gabe Gorecki went 2-for-4 with a sacrifice fly for three RBIs, a pair of stolen bases and he scored a pair of runs. Nick Leibold went
1-for-4 with a triple for two RBIs, he earned a walk and he scored a run. Tyler Swenson went 1-for-4 for two RBIs and he scored a run. Jonah Schneider went 1-for-3 for a RBI, he earned a walk and he
scored a run. Drew Sowada went 1-for-3 with a sacrifice bunt, Cal Ollman earned three walks, two stolen bases and he scored a run. Will Gorecki had a sacrifice bunt, he was hit by a pitch, he had a
stolen base and he scored a run.
ACGC FALCONS 7 EVW EAGLES 2
The Falcons defeated their Central Minnesota Conference rivals the Eagles, backed by seven hits, including three home runs and a double. The Falcons put up a pair of runs in the 2nd, 6th and the 7th
innings and they played solid defense. Jack Peterson started on the mound for the Falcons, he threw 6 2/3 innings to earn the win. He gave up two hits, two runs, three walks and he recorded twelve
strikeouts. Connor Baker closed it out, by recording a strikeout.
The Falcons offense was led by Jack Peterson, he went 1-for-3 with a home run for two RBIs, he earned a walk and he scored a run. Connor Baker went 1-for-4 with a home run and Terrell Renne went
2-for-4 and he scored a pair of runs. Jaxon Behm went 1-for-4 with a home run and Masson Hiltner earned a walk, had a stolen base and he scored a run. Keegan Kessler-Gross went 1-for-2 with a double
for a RBI, he earned a walk and a stolen base. Logan Straumann went 1-for-2 for a RBI, he earned a walk, he was hit by a pitch and he scored a run and Zach Bagley earned a walk.
The Eagles starting pitcher was Nolan Geislinger, he threw 5 1/3 innings, he gave up five hits, five runs, three walks and he recorded four strikeouts. The Eagles offense was led by Nolan Geislinger,
he went 1-for-3 with a double for a RBI and he scored a run. Jackson Geislinger went 1-for-2 for a RBI. Myles Dziengel and Gavin Mathies both earned a walk and Landon Neiman earned a walk and he
scored a run.
MOORHEAD SPUDS 11 ST. CLOUD CRUSH 4
The Spuds defeated the Crush, backed by eight hits, including a triple and a double and they were aided by seven walks. The Spuds starting pitcher was Brett Letness, he threw 4 2/3 innings to earn
the win. He gave up five hits, three runs, six walks and he recorded three strikeouts. Aaron Reierson threw 1 1/3 inning, he gave up one hit, one run, one walk and he recorded a strikeout.
The Spuds offense was led by Jacob Vannett, he went 2-for-2 for two RBIs, he earned a walk, he was hit by a pitch and he scored a run. Logan Hilber went 1-for-2 with a double for two RBIs and he
scored two runs. Wyatt Tweet went 1-for-3 with a triple for two RBIs, he was hit by a pitch and he scored a run. Jack Teiken went 1-for-4 for a RBI and Gavin Gast earned a walk and he was credited
for a RBI. Jackson Young went 1-for-3 with a walk and he scored a run and Ignacio Delgado went 1-for-2 with a stolen base and he scored a run. Zach Taft went 1-for-3, with a walk, a stolen base and
he scored two runs. Justin Stalboerger earned two walks, scored a run and he was credited for a RBI, Carson Zimmerman had a walk, he was hit by a pitch, a stolen base and he scored two runs and Arron
Reierson had a sacrifice bunt.
The Crush starting pitcher was Elian Mezquita, he threw four innings, he gave up three hits, five runs, five walks and he recorded two strikeouts. James Nyberg threw two innings in relief, he gave up
five hits, six runs, two walks and he recorded a strikeout.
The Crush offense was led by Joe Hess, he went 2-for-3 with a double for two RBIs and he earned a walk. Henry Bulson went 1-for-4 with a double and Tim Gohman went 1-for-3 for a RBI and he earned a
walk. Elian Mezquita went 1-for-3 with a walk and he scored a pair of runs and Jaxon Kenning went 1-for-4. Parker Schultz went 1-for-1, Jacob Mendel earned two walks and he scored a run, Blake O’Hara
earned a walk and he was hit by a pitch, Will Allenspach earned a walk and Grant Roob had a stolen base and he scored a run.
(Tuesday May 17th)
SARTELL-ST. STEPHEN SABRES 5 BECKER BULLDOGS 3
The Sabres of the Central Lakes Conference defeated their Mississippi 8 Conference foe the Bulldogs. The Sabres collected seven hits, including a pair of home runs, and made some good defensive plays.
The Sabres broke up a tied game with two big runs in the bottom of the sixth inning. This gave righty Wesley Johnson enough support, he was the Sabres starting pitcher, he threw six innings to earn
the win. He gave up seven hits, three runs, no walks and he recorded five strikeouts. Lefty Jalen Vorpahl threw the final inning in relief, he gave up one hit, issued one walk and he recorded a strikeout.
The Sabres offense was led by Blake Haus, he went 1-for-2 with a home run for two RBIs and he earned a walk. Steven Brinkerhoff went 1-for-2 with a home run, and he was hit by a pitch. Austin
Henrichs went 2-for-4 with a pair of stolen bases and he scored a run. Kade Lewis went 1-for-2 and he was hit by a pitch and Calen O’Connell went 1-for-2, he was hit by a pitch and he scored a run.
Jacob Merrill went 1-for-4 and Jackson Vos earned a pair of walks and he scored a run.
The Bulldogs starting pitcher was Will Thorn, he threw 5 2/3 innings, he gave up six hits, five runs, three walks and he recorded six strikeouts. Jacob Bergsten threw 1/3 of an inning to close it
out, he gave up a hit.
The Bulldogs offense was led by Owen Kolbinger, he went 3-for-4 and Brady Taylor went 1-for-3 for two RBIs. Gavin Swanson went 1-for-3 for a RBI and he scored a run and Nick Berglund went 1-for-4.
Will Thorn went 1-for-4 and Ben Dumonceaux went 1-for-3. Nolan Murphy earned a walk and he was hit by a pitch, Jase Tobako was hit by a pitch and he scored a run and Hayden Harmoning scored a run.
SAUK RAPIDS-RICE STORM 9 BRAINERD WARRIORS 3
The Storm defeated their Conference and Section rivals the Warriors, backed by eight hits, aided by nine walks and solid “D”. They had a pair of good pitching performances, their starter was Cullen
Posch, he threw five innings to earn the win. He gave up two hits, two runs, one walk and he recorded five strikeouts. Terrence Moody threw two innings in relief, he gave up two hits, one run and he
recorded four strikeouts.
The Storm offense was led by Luke Pakkala, he went 3-for-4 with a double for a RBI, he had a stolen base and he scored a run. Jeff Solorz went 2-for-4 with a sacrifice fly for two RBIs and he scored
a run. Dominic Mathies went 1-for-3 for a RBI, he earned a walk, he was hit by a pitch, had a stolen base and he scored a run. Terrence Moody earned two walks, he was hit by a pitch, was credited
for a RBI and he scored a run. Andrew Bemboom earned a walk, was credited for two RBIs and he scored a run. Keegan Paterson was credited for two RBIs and he scored a run. Ethan Swanson went
1-for-3, he earned a walk and he scored a run. Noah Jensen went 1-for-2, he earned two walks and he scored a run and Andrew Harren earned two walks, had a stolen base and he scored a run.
The Warriors starting pitcher was Mitchell Brau, he threw 1/3 of an inning, he gave up three hits, four walks and eight runs. Sawyer Hennessy threw 5 2/3 innings, he gave up four hits, four walks and
one run. Kooper Seidl threw the final inning in relief, he gave up a hit and a walk.
The Warriors offense was led by Cayden Kleffman, he went 2-for-3 with a double for a RBI, he earned a walk and he scored a run. Adam Jensen went 1-for-3 and Brody Lund had a sacrifice fly for a RBI.
Alex Helmin and Isaac Hanson both scored a run.
The Cardinals defeated their Conference rivals the Spartans, backed by nine hits, including six doubles. They put up three big runs in the first inning and never looked back. Jaxon Schoenrock started
on the mound for the Cardinals, he threw six innings to earn the win. He gave up two hits, one walk and he recorded seven strikeouts. Nick Levasseur closed it out with one inning of relief, he
recorded a strikeout.
The Cardinals offense was led by JD Hennen, he went 3-for-3 with two doubles for three RBIs. Jaxon Schoenrock went 1-for-2 with a double for two RBIs and Caleb Runge went 1-for-3 with a double for a
RBI, he earned a walk and he scored a pair of runs. Brock Lerfald went 1-for-3 with a double, he earned a walk, had a stolen base and he scored two runs. Nate Hammerback went 1-for-3 with a double and
Reed Reisdorf went 1-for-3. Will Suchy went 1-for-2 and Grady Anderson had a sacrifice bunt. Lake Hagen was hit by a pitch, Nate Knoll and Spencer Schmidt each scored a run.
The Spartans starting pitcher was Cole Fuchs, he threw five innings, he gave up eight hits, six runs, and a walk. Evan Acheson threw one inning of relief, he gave up one hit, one walk and he recorded
two strikeouts. The Spartans offense included Joel Sowada, he went 2-for-3 and he was hit by a pitch, Brady Schafer was hit by a pitch and Thad Lieser earned a walk.
WILLMAR CARDINALS 10 ST. CLOUD CRUSH 4
The Cardinals defeated their conference rivals the Crush, backed by eight hits, aided by five walks and good defense. Their starting pitcher Cayden Hansen threw six innings to earn the win. He gave
up five hits, four runs, four walks and he recorded five strikeouts. Ian Koosman threw the final inning in relief, he recorded two strikeouts.
The Cardinals offense was led by Alex Schramm, he went 1-for-3 with a double for two RBIs, he earned a walk and he had a pair of stolen bases. Cayden Hansen went 1-for-4 for two RBIs and he scored a
run and Sam Etterman went 1-for-4 with a double and he scored a run. Brandt Sunder went 1-for-4 for a RBI, he earned a walk and he scored a run. Charter Schow went 1-for-2 for a RBI, he had a
sacrifice bunt and he scored two runs. Mason Madsen went 2-for-3, he earned a walk and he scored two runs. Jason Malmgren earned a walk, he was hit by a pitch, had a stolen base and he scored a run,
Connor Owens scored a run and Gregory went 1-for-3 with a sacrifice bunt. The Crush starting pitcher was Henry Bulson, he threw 5 1/3 innings, he gave up five hits, seven runs, three walks and he
recorded one strikeout. Luke Boettcher threw 1 2/3 innings in relief, he gave up three hits, three runs, two walks and he recorded three strikeouts.
The Crush offense was led by Jaxon Kenning, he went 2-for-4 for a RBI, he had a stolen base and he scored a run. Joe Hess went 1-for-2 for two RBIs, he earned a walk and had a stolen base. Parker
Schultz went 1-for-3 with a double and he scored a run and Ethan Lindholm earned a walk and he was credited for a RBI. Elian Mezquita went 1-for-4, Jaden Mendel earned a walk and scored a run, Ben
Schmitt scored a run and Blake O’Hara earned a walk.
PIERZ PIONEERS 5 FOLEY FALCONS 4
The Pioneers defeated their Granite Ridge Conference rivals the Falcons, backed by seven hits, solid “D” and a pair of good pitching performances. Andy Winscher started on the mound for the Pioneers,
he threw five innings to earn the win. He gave up three hits, three runs, four walks and he recorded six strikeouts. Reese Young threw two innings in relief, he gave up two hits, one run, one walk
and he recorded four strikeouts.
The Pioneers offense was led by Chase Becker, he went 1-for-3 for two RBIs and Max Barclay went 1-for-3 for a RBI and he was hit by a pitch. Andy Winscher went 1-for-3 for a RBI and he earned a walk.
Jeremy Bingesser went 1-for-3 with a double and he earned a walk and Mason Herold went 1-for-3, he earned a walk and he scored a run. Reese Young earned a walk, he was hit by a pitch, had a stolen
base and he scored a run. Trevor Radunz earned a walk, was hit by a pitch, had a stolen base and he scored a run. Ben Virnig earned a walk and scored a run, Kirby Fischer earned a walk and Hunter
Hoheisel scored a run.
The Falcons starting pitcher was Charles Hackett, he threw six innings, he gave up four hits, five runs, five walks and he recorded nine strikeouts. Trey Emmerich threw one inning in relief, he gave
up one hit, two walks and he recorded two strikeouts.
The Falcons offense was led by Derek Dahmen, he went 2-for-2 for a RBI, he earned a walk, had a pair of stolen bases and he was hit by a pitch. Josiah Peterson had a sacrifice fly for a RBI, he earned
a walk and he scored a run. Brett Leabch went 1-for-4 for a RBI and he scored a run and Daniel Dahmen went 1-for-3 and he scored a run. Charles Hackett went 1-for-3, he earned a walk and he had a
stolen base. Logan Winkelman earned a walk, he was hit by a pitch and he scored a run and Trey Emmerich earned a walk and he was hit by a pitch.
The Crusaders defeated their Granite Ridge Conference rival the Flyers, backed by four timely hits, great “D” and very good pitching performances. The Crusaders starting pitcher was Tommy Gohman, he
threw six innings to earn the win. He gave up five hits, one run, two walks and he recorded three strikeouts. Jackson Henderson closed it out with one inning in relief, he retired three batters.
The Crusaders offense was led by Steven Ellingson, he went 2-for-2 with a sacrifice fly for a RBI, he had a stolen base and he scored a run. Cooper Kosiba went 1-for-3 for a RBI and he scored a run
and Grant Wensmann was credited for a RBI. Tommy Gohman went 1-for-3 and he scored a run and Jack Theisen earned a walk and he was hit by a pitch.
The Flyers starting pitcher was Matt Filippi, he threw a complete game, he gave up four hits, three runs, one walk and he recorded four strikeouts. The Flyers offense was led by Matt Filippi, he went
2-for-4 with a double for a RBI and Riley Czech went 1-for-3 with a stolen base. George Moore and Dothoudt both went
1-for-3, Owen Bode earned a walk, he had a stolen base and he scored a run and Collin Kray was hit by a pitch.
PAYNESVILLE AREA BULLDOGS 13 BBE JAGUARS 3
The Bulldogs defeated their conference rivals the Jaguars, backed by sixteen hits, including a home run and three doubles, good “D” and very good pitching performances. Bennett Evans started on the
mound, he threw 4 2/3 innings to earn the win. He gave up three hits, two runs, three walks and he recorded two strikeouts. Eli Nelson threw 1 1/3 innings, he issued one walk and he recorded a
strikeout. Izaak Shultz threw one inning in relief, he gave up one hit, one run, one walk and he recorded two strikeouts.
The Bulldogs offense was led by Eli Nelson, he went 5-for-5 with a home run and two doubles for six RBIs, he was hit by a pitch, he had a stolen base and he scored a pair of runs. Trent Wendlandt
went 2-for-4 with a double for a RBI and he scored a run. Austin Pauls went 2-for-4 with a sacrifice bunt for a RBI and he scored a run. Max Ahtmann went 1-for-3 for two RBIs, he earned a walk and he
scored two runs. Grayson Fuchs went 2-for-4 for a RBI, he earned a walk and he scored three runs. Chase Bayer went 2-for-2, he was hit twice by a pitch, he had a pair of stolen bases and he scored a
pair of runs. Bryce Vanderbeek went 1-for-3 for a RBI and Jevan Terres went 1-for-1 for a RBI. Spencer Eisenbraun earned a walk and he was hit by pitch and Spencer Lieser scored a run.
The Jaguars starting pitcher was Talen Kampsen, he threw two innings, he gave up four hits, six runs, one walk and he recorded two strikeouts. Easton Hagen threw 3 1/3 innings in relief, he gave up
seven hits, four runs, two walks and he recorded two strikeouts. Casey Lenarz threw 1 2/3 innings in relief, he gave up five hits and three runs.
The Jaguar offense was led by Will Vanbeck, he went 1-for-2 with a double for a RBI and he earned a walk. Blaine Fischer went 1-for-3 for a RBI and Peyton Winter went 1-for-1 with a double. Easton
Hagen went 1-for-2 and he scored a run and Ashton Dingman earned a walk and he scored a run. Chase Wright scored a run, Ethan Mueller, Luke Dingman and Tate Derek all earned a walk.
The Royals defeated their conference rivals the Huskers, backed by eleven hits, including a triple and a double and good defense. They put up five big runs in the third inning, to give their pitcher
good support. Blake Albright started on the mound for the Royals, he threw four innings to earn the win. He gave up four hits, one run, two walks and he recorded two strikeouts. Cal Ollman threw
three innings in relief, he gave up one run, four walks and he recorded four strikeouts.
The Royals offense was led by Jacob Albright, he went 2-for-3 for three RBIs, he earned a walk and he scored a run. Jacob Leibold, he went 2-for-4 with a double for a RBI, he had a stolen base and he
scored a run. Jameson Klug went 1-for-4 with a triple for a RBI, he earned a walk, he had a stolen base and he scored a pair of runs. Tyler Swenson had a sacrifice fly for a RBI and Nick Leibold went
1-for-1. Will Gorecki went 3-for-4, he earned a walk, he had three stolen bases and he scored a pair of runs. Gabe Gorecki went 1-for-3, he earned a walk and he had a stolen base. Drew Yourczek
earned three walks and he scored a run and Drew Sowada went 1-for-4.
The Huskers starting pitcher was Drew Lange, he threw 3 1/3 innings, he gave up eight hits, seven runs, four walks and he recorded six strikeouts. G. Johnson threw 2 2/3 innings in relief, he gave up
one hit and he issued three walks. C. Breth threw the final inning in relief, he gave up two hits.
The Huskers offense was led by Sam Harren, he went 1-for-2 for a RBI and he had a stolen base. Drew Lange went 2-for-2, he earned two walks, had a stolen base and he scored a run. Jayden Barkowicz
went 1-for-3, CJ Clear earned two walks, Dirks Opatz earned a walk and Cohl Clear earned a walk and he scored a run.
More From 1390 Granite City Sports | {"url":"https://1390granitecitysports.com/prep-baseball-update-wednesday-may-18th/","timestamp":"2024-11-04T14:51:54Z","content_type":"text/html","content_length":"256715","record_id":"<urn:uuid:111bbefe-78f8-4acd-887d-43541cc226db>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00443.warc.gz"} |
(6d+5)−(2−3d) Answer
Simplifying the Expression: (6d+5)−(2−3d)
This article will guide you through simplifying the expression (6d+5)−(2−3d). We'll break down each step to ensure clarity and understanding.
Understanding the Problem
The expression (6d+5)−(2−3d) involves combining like terms within parentheses. To simplify this expression, we need to:
1. Distribute the negative sign in front of the second parenthesis.
2. Combine the terms with 'd' and the constant terms separately.
Step-by-Step Solution
1. Distribute the negative sign: (6d + 5) + (-1) * (2 - 3d) = 6d + 5 - 2 + 3d
2. Combine like terms: (6d + 3d) + (5 - 2) = 9d + 3
Therefore, the simplified form of the expression (6d+5)−(2−3d) is 9d + 3.
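As a quick sanity check, substituting any value for d into the original expression and into the simplified form gives the same number. For example (a tiny R snippet; the choice d = 2 is arbitrary):
d <- 2
(6 * d + 5) - (2 - 3 * d)   # 21
9 * d + 3                   # also 21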
This process demonstrates how to combine terms in an expression using the distributive property and identifying like terms. By following these steps, you can confidently simplify algebraic expressions. | {"url":"https://jasonbradley.me/page/(6d%252B5)%25E2%2588%2592(2%25E2%2588%25923d)-answer","timestamp":"2024-11-04T01:42:19Z","content_type":"text/html","content_length":"56935","record_id":"<urn:uuid:95ab3754-0d05-449a-ba21-a5004f76699a>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00472.warc.gz"}
Understanding Mathematical Functions: How To Find Maximum And Minimum
Introduction: Understanding the Importance of Finding Maximum and Minimum Values in Mathematics
Mathematics plays a crucial role in numerous fields, from engineering to economics, data analysis to optimization. One key aspect of mathematical functions is understanding how to find the maximum
and minimum values of a function. These values are essential in solving real-world problems, making informed decisions, and optimizing outcomes.
Explanation of what mathematical functions are and their role in various fields
Mathematical functions are essentially mathematical relationships that assign each input value to a unique output value. They are used to model various phenomena in fields such as physics, biology,
and finance. Functions can be expressed in the form of equations or graphs, allowing us to analyze and understand the behavior of different systems.
Overview of why maximum and minimum values are key to solving real-world problems
Finding maximum and minimum values of a function is crucial in solving optimization problems. In real-world scenarios, we often aim to maximize profits, minimize costs, or optimize resources. By
determining the highest and lowest points of a function, we can make decisions that lead to the best possible outcomes.
The significance of these values in optimization, engineering, economics, and data analysis
The maximum and minimum values of a function are critical in a wide range of fields. In engineering, these values help in designing efficient systems and structures. In economics, they aid in making
informed decisions about production, pricing, and resource allocation. In data analysis, they are used to identify trends, outliers, and anomalies in datasets.
Key Takeaways
• Identify critical points
• Use derivative to find extrema
• Check endpoints for global extrema
• Understand concavity for inflection points
• Apply knowledge to real-world problems
The Basics of Mathematical Functions and Their Extrema
A mathematical function is a rule that assigns each input value from a set (called the domain) to exactly one output value from another set (called the range). Functions are essential in mathematics
as they help us understand relationships between variables and make predictions based on those relationships.
A Definition of a mathematical function and the concept of domain and range
Definition of a mathematical function: A function f is a rule that assigns to each element x in a set A exactly one element y in a set B. This is denoted as y = f(x).
Domain and range: The domain of a function is the set of all possible input values for which the function is defined. The range of a function is the set of all possible output values that the
function can produce.
Explanation of what maximum and minimum values represent in a function
Maximum and minimum values: In a mathematical function, the maximum value represents the highest output value that the function can attain, while the minimum value represents the lowest output value
that the function can attain. These values are crucial in understanding the behavior of a function and can provide valuable insights into its properties.
Introduction to terms: local (relative) maxima/minima and global (absolute) maxima/minima
Local (relative) maxima/minima: A local maximum (or minimum) occurs at a point where the function reaches a peak (or valley) in a specific region of its domain. It is not necessarily the highest (or
lowest) point of the entire function but only within a small neighborhood.
Global (absolute) maxima/minima: A global maximum (or minimum) occurs at the highest (or lowest) point of the entire function over its entire domain. It represents the overall maximum (or minimum)
value that the function can achieve.
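Stated more formally, for a function f with domain D: f has a global maximum at a point c if f(c) ≥ f(x) for every x in D, and a local maximum at c if f(c) ≥ f(x) for every x in D lying in some open interval around c; global and local minima are defined the same way with ≥ replaced by ≤. For example, f(x) = x^2 (over all real numbers) has a global minimum at x = 0, since f(0) = 0 ≤ x^2 for every x, but it has no maximum because x^2 grows without bound.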
Methods to Find Maximum and Minimum Values
When dealing with mathematical functions, finding the maximum and minimum values is essential for various applications. There are several methods to determine these extrema, including the derivative
test, the closed interval method, and optimization problems.
A Derivative Test
The derivative test involves analyzing the first and second derivatives of a function to identify maximum and minimum points. Here's how it works:
• First Derivative Test: To find critical points, set the first derivative of the function equal to zero and solve for x. These critical points can be potential maximum or minimum points.
• Second Derivative Test: Once you have identified the critical points, evaluate the second derivative at these points. If the second derivative is positive, the point is a local minimum. If it is
negative, the point is a local maximum.
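In symbols: if c is a critical point with f'(c) = 0, then f''(c) > 0 means f has a local minimum at c, f''(c) < 0 means a local maximum, and f''(c) = 0 leaves the test inconclusive. The first derivative test instead examines the sign of f' on either side of c: a change from positive to negative indicates a local maximum, and a change from negative to positive indicates a local minimum. For instance, f(x) = x^2 has f'(0) = 0 and f''(0) = 2 > 0, so x = 0 is a local (in fact global) minimum.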
The Closed Interval Method
The closed interval method is used for continuous functions on a closed interval [a, b]. Here's how you can apply this method:
• Step 1: Find the critical points of the function within the interval [a, b] by setting the first derivative equal to zero.
• Step 2: Evaluate the function at the critical points and at the endpoints a and b.
• Step 3: The maximum and minimum values of the function on the interval [a, b] are the largest and smallest values obtained in Step 2.
Optimization Problems
Optimization problems involve maximizing or minimizing a function to solve real-world scenarios. These methods can be applied to various situations, such as maximizing profit or minimizing cost.
Here's how you can approach optimization problems:
• Step 1: Identify the objective function that needs to be optimized.
• Step 2: Determine the constraints that limit the possible solutions.
• Step 3: Use the derivative test or closed interval method to find the maximum or minimum values of the function within the given constraints.
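For instance, suppose the goal is to choose a price x that maximizes profit, and the profit model is P(x) = (x - 4)(200 - 10x) with the constraint 4 ≤ x ≤ 20 (these numbers are invented purely for illustration). Step 3 can then be carried out numerically with R's built-in optimize():
profit <- function(x) (x - 4) * (200 - 10 * x)   # hypothetical profit model
optimize(profit, interval = c(4, 20), maximum = TRUE)
# $maximum is the best price (x = 12); $objective is the profit at that price (640)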
Utilizing Calculus: A Closer Look at the Derivative Tests
When it comes to finding the maximum and minimum values of a mathematical function, calculus provides us with powerful tools known as the derivative tests. These tests, namely the first derivative
test and the second derivative test, help us identify critical points where extrema may occur.
Explanation of how the first derivative test is used to identify potential extrema
The first derivative test is a method used to determine whether a critical point is a local maximum or minimum. To apply this test, we first find the critical points of the function by setting the
derivative equal to zero and solving for x. These critical points represent potential extrema.
Next, we analyze the sign of the derivative around each critical point. If the derivative changes from positive to negative at a critical point, then that point is a local maximum. Conversely, if the
derivative changes from negative to positive, the point is a local minimum.
How the second derivative test can confirm whether the point is a maxima, minima, or a point of inflection
The second derivative test is a more definitive method for determining whether a critical point is a maximum, minimum, or a point of inflection. After finding the critical points using the first
derivative test, we evaluate the second derivative at these points.
If the second derivative is positive at a critical point, then the point is a local minimum. If the second derivative is negative, the point is a local maximum. However, if the second derivative is
zero, the test is inconclusive, and further analysis is needed.
Practical examples demonstrating the application of these tests
Let's consider a practical example to illustrate the application of the first and second derivative tests. Suppose we have the function f(x) = x^3 - 3x^2 + 2x.
First, we find the critical points by setting the derivative f'(x) = 3x^2 - 6x + 2 equal to zero. Applying the quadratic formula gives x = 1 ± √3/3, that is, approximately x ≈ 0.42 and x ≈ 1.58. These are our potential extrema.
Next, we use the first derivative test to analyze the sign of f'(x) around these critical points. Plugging in values on either side, f'(x) changes from positive to negative at x ≈ 0.42 and from negative to positive at x ≈ 1.58, so x ≈ 0.42 is a local maximum and x ≈ 1.58 is a local minimum.
Finally, we confirm our results using the second derivative test. Evaluating the second derivative f''(x) = 6x - 6 at the critical points, we find that f''(0.42) < 0 (local maximum) and f''(1.58) > 0 (local minimum), confirming our
previous conclusions.
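A quick numeric check of this example (a small R sketch; the helper names f, fp, and fpp are chosen only for illustration):
f   <- function(x) x^3 - 3 * x^2 + 2 * x
fp  <- function(x) 3 * x^2 - 6 * x + 2    # first derivative
fpp <- function(x) 6 * x - 6              # second derivative
crit <- c(1 - sqrt(3) / 3, 1 + sqrt(3) / 3)   # roots of fp(x) = 0
fp(crit)    # both values are (numerically) zero
fpp(crit)   # about -3.46 at the first root (local max) and +3.46 at the second (local min)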
The Closed Interval Method Explained
When it comes to finding the maximum and minimum values of a function, the closed interval method is a powerful tool that can be used to determine these critical points. By examining the function
within a specific interval, we can identify where the function reaches its highest and lowest points.
A Step-by-step guide on using the closed interval method
• Step 1: Identify the interval over which you want to find the maximum and minimum values.
• Step 2: Calculate the critical points of the function by finding where the derivative is equal to zero.
• Step 3: Evaluate the function at the critical points and at the endpoints of the interval.
• Step 4: Compare the values obtained in step 3 to determine the maximum and minimum values.
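As a sketch of how these steps could be carried out in R (illustrative only; the function and helper names are invented for the example, and critical points are located numerically by looking for sign changes of the derivative on a fine grid, so tangential critical points could be missed):
closed_interval_extrema <- function(f, fprime, a, b) {
  # Step 1: the interval is [a, b]
  # Step 2: approximate critical points as roots of fprime inside (a, b)
  xs <- seq(a, b, length.out = 10001)
  change <- which(diff(sign(fprime(xs))) != 0)
  crit <- vapply(change, function(i) uniroot(fprime, lower = xs[i], upper = xs[i + 1])$root,
                 numeric(1))
  # Step 3: evaluate f at the critical points and at the endpoints
  candidates <- unique(c(a, crit, b))
  values <- f(candidates)
  # Step 4: the largest and smallest of those values are the extrema on [a, b]
  list(maximum = max(values), argmax = candidates[which.max(values)],
       minimum = min(values), argmin = candidates[which.min(values)])
}
# Using f(x) = x^2 on [0, 2] (the example discussed below):
closed_interval_extrema(function(x) x^2, function(x) 2 * x, 0, 2)
# maximum 4 at x = 2; minimum 0 at x = 0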
Importance of examining endpoints in closed intervals
Examining the endpoints of a closed interval is crucial in the closed interval method as it ensures that we do not miss any potential maximum or minimum values. Endpoints can sometimes be the highest
or lowest points of a function within a given interval, and neglecting them can lead to inaccurate results.
Examples highlighting the method's effectiveness in solving problems
Let's consider a simple example to illustrate the closed interval method in action. Suppose we have the function f(x) = x^2 on the interval [0, 2].
By following the steps outlined above, we find that the only critical point is x = 0 (from f'(x) = 2x = 0), which coincides with the left endpoint. Evaluating the function at the critical point and at both endpoints, we find that f(0) = 0
and f(2) = 4. Therefore, the maximum value of the function on the interval [0, 2] is 4, and the minimum value is 0.
This example demonstrates how the closed interval method can be effectively used to find the maximum and minimum values of a function within a specified interval, providing valuable insights into the
behavior of the function.
Troubleshooting Common Issues in Finding Extrema
When dealing with mathematical functions, finding the maximum and minimum values can sometimes be challenging. Here are some common issues that may arise and how to troubleshoot them:
A Misinterpretation of derivative test results
One common issue that arises when finding extrema is misinterpreting the results of the derivative test. The derivative test helps determine whether a critical point is a maximum, minimum, or
neither. It is essential to understand that a critical point where the derivative is zero does not always guarantee a maximum or minimum value. Sometimes, it may be an inflection point rather than a true extremum.
To troubleshoot this issue, it is crucial to analyze the behavior of the function around the critical point. Consider the concavity of the function and whether it changes sign at the critical point.
This can help determine if the critical point is a maximum, minimum, or neither.
Understanding when a function does not have a global maximum or minimum
Another common issue is encountering functions that do not have a global maximum or minimum. In some cases, a function may have local extrema but no global extrema. This can happen when the function
is unbounded or oscillates infinitely.
To troubleshoot this issue, it is important to analyze the behavior of the function over its entire domain. Look for patterns such as periodicity or unbounded growth that may indicate the absence of
a global maximum or minimum.
Strategies to overcome challenges in applying these methods to complex functions
Dealing with complex functions can pose additional challenges when finding extrema. Complex functions may involve multiple variables, trigonometric functions, or exponential functions that complicate
the analysis. In such cases, it is essential to employ strategies to overcome these challenges.
• Break down the function: Decompose the complex function into simpler components that are easier to analyze. This can involve factoring, simplifying, or using trigonometric identities to reduce
the complexity of the function.
• Use numerical methods: If analytical methods prove to be too complex, consider using numerical methods such as graphing calculators or computer software to approximate the extrema of the function (a small sketch follows this list).
• Seek help: Don't hesitate to seek help from peers, instructors, or online resources when dealing with complex functions. Sometimes, a fresh perspective or guidance can help clarify the steps
needed to find extrema.
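To illustrate the numerical-methods option above, here is a rough grid-based approximation in R (a sketch only; the function g is arbitrary and chosen just for the example, and a fine grid only approximates the true extrema):
g <- function(x) sin(3 * x) + 0.2 * x^2
xs <- seq(-5, 5, by = 0.001)
vals <- g(xs)
xs[which.min(vals)]   # approximate location of the minimum of g on [-5, 5]
min(vals)             # approximate minimum value
xs[which.max(vals)]   # approximate location of the maximum of g on [-5, 5]
max(vals)             # approximate maximum value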
Conclusion and Best Practices in Identifying Maximum and Minimum Values
After delving into the intricacies of mathematical functions and exploring how to find the maximum and minimum values of a function, it is important to recap the key points discussed, highlight best
practices, and encourage further exploration of learning resources.
A Recap of key points and techniques discussed
• Understanding the concept of maximum and minimum values: We learned that the maximum value of a function represents the highest point on the graph, while the minimum value represents the lowest point.
• Techniques for finding maximum and minimum values: We discussed various methods such as setting the derivative of the function to zero, analyzing critical points, and using the second derivative test.
• Importance of visual aids: Visualizing functions through graphs can provide valuable insights into the behavior of a function and help in identifying maximum and minimum values.
Best practices: Regularly practicing problem-solving, using visual aids like graphs, and seeking real-world applications
Regular practice: Consistent practice is key to mastering the concepts of finding maximum and minimum values. By solving a variety of problems, you can enhance your problem-solving skills and gain a
deeper understanding of mathematical functions.
Utilizing visual aids: Graphs are powerful tools that can aid in visualizing functions and identifying critical points. By plotting functions and analyzing their behavior graphically, you can better
grasp the concept of maximum and minimum values.
Seeking real-world applications: Applying mathematical functions to real-world scenarios can provide context and relevance to the concepts of maximum and minimum values. By exploring practical
examples, you can see how these concepts are utilized in various fields.
Encouragement to explore further learning resources and mathematical software for deeper understanding
Exploring further learning resources: To deepen your understanding of mathematical functions and the identification of maximum and minimum values, consider exploring additional learning resources
such as textbooks, online tutorials, and academic journals. Engaging with a variety of materials can provide different perspectives and insights.
Utilizing mathematical software: Mathematical software such as MATLAB, Mathematica, or Desmos can be valuable tools for analyzing functions, plotting graphs, and solving complex mathematical
problems. By leveraging these software tools, you can enhance your problem-solving capabilities and explore advanced mathematical concepts. | {"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-find-maximum-minimum-value","timestamp":"2024-11-11T16:35:48Z","content_type":"text/html","content_length":"224994","record_id":"<urn:uuid:f9168282-fd97-4d04-9a1b-9c0ebd928b47>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00036.warc.gz"} |
Lesson 10
Interpreting Inequalities
10.1: True or False: Fractions and Decimals (5 minutes)
The purpose of this warm-up is to encourage students to reason about properties of operations in equivalent expressions. While students may evaluate each side of the equation to determine if it is
true or false, encourage students to think about the properties of arithmetic operations in their reasoning (MP7).
Display one problem at a time. Tell students to give a signal when they have decided if the equation is true or false. Give students 1 minute of quiet think time and follow with a whole-class
Student Facing
Is each equation true or false? Be prepared to explain your reasoning.
1. \(3(12+5) = (3\boldcdot 12)\boldcdot (3\boldcdot 5)\)
2. \(\frac13\boldcdot \frac34 = \frac34\boldcdot \frac26\)
3. \(2\boldcdot (1.5)\boldcdot 12 = 4\boldcdot (0.75)\boldcdot 6\)
Activity Synthesis
Ask students to share their strategies for each problem. Record and display their explanations for all to see. After each true equation, ask students if they could rely on that same reasoning to
think about or solve other problems that are similar in type. After each false equation, ask students how we could make the equation true.
To involve more students in the conversation, consider asking:
• “Do you agree or disagree? Why?”
• “Who can restate ___’s reasoning in a different way?”
• “Does anyone want to add on to _____’s reasoning?”
10.2: Basketball Game (15 minutes)
Students interpret inequalities that represent constraints or conditions in a real-world problem. They find solutions to an inequality and reason about the context’s limitations on solutions (MP2).
Allow students 10 minutes quiet work time to complete all questions followed by whole-class discussion.
Representation: Internalize Comprehension. Activate or supply background knowledge. Provide students with access to blank number lines. Encourage students to attempt more than one strategy for at
least one of the problems.
Supports accessibility for: Visual-spatial processing; Organization
Student Facing
Noah scored \(n\) points in a basketball game.
1. What does \(15 < n\) mean in the context of the basketball game?
2. What does \(n < 25\) mean in the context of the basketball game?
3. Draw two number lines to represent the solutions to the two inequalities.
4. Name a possible value for \(n\) that is a solution to both inequalities.
5. Name a possible value for \(n\) that is a solution to \(15 < n\), but not a solution to \(n < 25\).
6. Can -8 be a solution to \(n\) in this context? Explain your reasoning.
Anticipated Misconceptions
Students might have trouble interpreting \(15 < n\) because of the placement of the variable on the right side of the inequality. Encourage students to reason about the possible values of \(n\) that
would make this inequality true.
Activity Synthesis
Invite selected students to justify their answers. Extend the discussion of the basketball game to consider how scoring works and whether any number could represent the points scored by a player. For
example, could a player have scored 1 point? \(2\frac12\) points? 0 points? -3 points? Is it reasonable for a player to score 200 points in a game?
Reading, Speaking, Representing: MLR3 Clarify, Critique, Correct. To support students in their ability to read inequalities and their number line representations as well as to critique the reasoning
of others, present an incorrect response to the prompt “Draw two number lines that represent the two inequalities.” For example, use the first number line representing \(15 < n\) to contain an error
by shading the line and arrow to the left instead of the right. Then correctly display the second number line. Tell students the response is not drawn correctly even though the numbers and open
circles are correct. Ask students to identify the error, explain how they know that the response is incorrect, and revise the incorrect number line. This helps prompt student reflection with an
incorrect written mathematical statement, and for students to improve upon the written work by correcting errors and clarifying meaning.
Design Principle(s): Optimize output (for explanation)
10.3: Unbalanced Hangers (15 minutes)
In this activity, students describe unbalanced hanger diagrams with inequalities. Students construct viable arguments and critique the reasoning of others during partner and whole-class discussions
about how unknown values relate to each other (MP3).
Arrange students in groups of 2. Give students 7 minutes quiet work time, followed by 3–5 minutes for partner discussion. Tell students to check in with their partners and, if there are
disagreements, work to come to an agreement. Follow with whole-class discussion.
Action and Expression: Internalize Executive Functions. Chunk this task into more manageable parts. Check in with students after the first 2–3 minutes of work time. Invite students to share the
strategies they used for the first unbalanced hanger.
Supports accessibility for: Organization; Attention
Student Facing
1. Here is a diagram of an unbalanced hanger.
1. Jada says that the weight of one circle is greater than the weight of one pentagon. Write an inequality to represent her statement. Let \(p\) be the weight of one pentagon and \(c\) be the
weight of one circle.
2. A circle weighs 12 ounces. Use this information to write another inequality to represent the relationship of the weights. Then, describe what this inequality means in this context.
2. Here is another diagram of an unbalanced hanger.
1. Write an inequality to represent the relationship of the weights. Let \(p\) be the weight of one pentagon and \(s\) be the weight of one square.
2. One pentagon weighs 8 ounces. Use this information to write another inequality to represent the relationship of the weights. Then, describe what this inequality means in this context.
3. Graph the solutions to this inequality on a number line.
3. Based on your work so far, can you tell the relationship between the weight of a square and the weight of a circle? If so, write an inequality to represent that relationship. If not, explain your reasoning.
4. This is another diagram of an unbalanced hanger.
Andre writes the following inequality: \(c + p < s\). Do you agree with his inequality? Explain your reasoning.
5. Jada looks at another diagram of an unbalanced hanger and writes: \(s + c > 2t\), where \(t\) represents the weight of one triangle. Draw a sketch of the diagram.
Student Facing
Are you ready for more?
Here is a picture of a balanced hanger. It shows that the total weight of the three triangles is the same as the total weight of the four squares.
1. What does this tell you about the weight of one square when compared to one triangle? Explain how you know.
2. Write an equation or an inequality to describe the relationship between the weight of a square and that of a triangle. Let \(s\) be the weight of a square and \(t\) be the weight of a triangle.
Activity Synthesis
The purpose of the discussion is to let students explain how they used inequalities to compare the weights of different shapes on the hanger diagrams. Invite groups to describe any disagreements or
difficulties they had and how they resolved them. Select students to share how they reasoned about the quantities when there were two or more unknowns. Ask students if they can think of other
situations comparing two or more unknown quantities (people’s heights, weights of backpacks). Invite them to represent the quantities with variables and write inequality statements to compare them.
If time allows, display a circle opposite a pentagon and square for all to see. Ask students which side they think would be heavier. In this case, which side is heavier depends on how much the square
weighs. Since the circle is 12 ounces and the pentagon is 8 ounces, the square would have to be less than 4 ounces for the circle to be heavier and greater than 4 ounces for the pentagon and square
to be heavier.
Representing, Speaking: MLR8 Discussion Supports. To help students be more precise in their use of language related representations of inequalities, use the sentence frames to support their
discussion. Some examples include “I know the circle is heavier because _____,” “The inequality _____ represents the hanger because _____,” or “If the circle weighs 12 ounces, I know _____.”
Design Principle(s): Support sense-making; Maximize meta-awareness
Lesson Synthesis
Ask students to think about situations where limits or ranges of values can be important to public health or safety (e.g., weight limitations on an elevator, safe dosage for medication, tire
pressure, speed limit, temperature for growing carrots, etc.). Ask them to define variables and write inequalities to represent these situations. Select 2 or 3 students to share their responses.
Record and display those responses for all to see using the appropriate symbols. Here are some questions to consider during discussion:
• “Do solutions that are not whole numbers make sense in this situation?”
• “Do solutions that are negative numbers make sense in this situation?”
• “Do the numbers on the boundary count as solutions? For example, if an elevator has a maximum capacity of 2,500 pounds, can it handle exactly 2,500 pounds?”
10.4: Cool-down - Lin and Andre’s Heights (5 minutes)
Student Facing
When we find the solutions to an inequality, we should think about its context carefully. A number may be a solution to an inequality outside of a context, but may not make sense when considered in context.
• Suppose a basketball player scored more than 11 points in a game, and we represent the number of points she scored, \(s\), with the inequality \(s >11\). By looking only at \(s >11\), we can say
that numbers such as 12, \(14\frac12\), and 130.25 are all solutions to the inequality because they each make the inequality true.
\(\displaystyle 12 > 11\)
\(\displaystyle 14\frac12 >11\)
\(\displaystyle 130.25 > 11\)
In a basketball game, however, it is only possible to score a whole number of points, so fractional and decimal scores are not possible. It is also highly unlikely that one person would score
more than 130 points in a single game.
In other words, the context of an inequality may limit its solutions.
Here is another example:
• The solutions to \(r<30\) can include numbers such as \(27\frac34\), 18.5, 0, and -7. But if \(r\) represents the number of minutes of rain yesterday (and it did rain), then our solutions are
limited to positive numbers. Zero or negative number of minutes would not make sense in this context.
To show the upper and lower boundaries, we can write two inequalities: \(0 < r\) and \(r < 30\).
Inequalities can also represent comparison of two unknown numbers.
• Let’s say we knew that a puppy weighs more than a kitten, but we did not know the weight of either animal. We can represent the weight of the puppy, in pounds, with \(p\) and the weight of the
kitten, in pounds, with \(k\), and write this inequality: \(\displaystyle p >k\) | {"url":"https://curriculum.illustrativemathematics.org/MS/teachers/1/7/10/index.html","timestamp":"2024-11-04T10:49:58Z","content_type":"text/html","content_length":"107054","record_id":"<urn:uuid:bf5a5ecd-f0cf-469f-93a6-ef2c838c6a4c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00758.warc.gz"} |
The Equation of a Line - Master High-Frequency Concepts and Skills for Algebra Proficiency—FAST - Easy Algebra Step-by-Step
Easy Algebra Step-by-Step: Master High-Frequency Concepts and Skills for Algebra Proficiency—FAST! (2012)
Chapter 18. The Equation of a Line
In this chapter, you determine the equation of a line. The basic graph of all of mathematics is the straight line. It is the simplest to draw, and it has the unique property that it is completely
determined by just two distinct points. Because of this unique property, it is a simple matter to write the equation of a line given just two items of critical information.
There are three common methods for determining the equation of a line.
Determining the Equation of a Line Given the Slope and y-Intercept
This is the simplest of the methods for determining the equation of a line. You merely use the slope-y-Intercept form of the equation of a line: y = mx + b.
Problem Given the slope m = 3 and the y-Intercept y = 5, write the equation of the line.
Step 1. Recalling that the slope-y-Intercept form of the equation of a line is y = mx + b, write the equation.
The equation of the line is
Problem Given the slope y-Intercept y = –2, write the equation of the line.
Step 1. Recalling that the slope-y-intercept form of the equation of a line is y = mx + b, write the equation.
The equation of the line is
Determining the Equation of a Line Given the Slope and One Point on the Line
For this method, you use the point-slope equation y – y1 = m(x – x1), where (x1, y1) and (x2, y2) are points on the line and m = (y2 – y1)/(x2 – x1).
Watch your signs when you use the point-slope equation.
Problem Given the slope m = 2 and a point (3, 2) on the line, write the equation of the line.
Step 1. Let (x, y) be a point on the line different from (3, 2), then substitute the given information into the point-slope formula:
Step 2. Solve the equation for y to get the slope-y-Intercept form of the equation.
y = 2x – 4 is the equation of the line.
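As a quick check, the point (3, 2) satisfies y = 2x – 4, since 2(3) – 4 = 2.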
Problem Given the slope
Step 1. Let (x, y) be a point on the line different from (–1, 3), then substitute the given information into the point-slope formula:
Step 2. Solve the equation for y to get the slope-y-Intercept form of the equation.
Problem Given the slope m = –2 and a point (0, 0) on the line, write the equation of the line.
Step 1. Let (x, y) be a point on the line different from (0, 0), then substitute the given information into the point-slope formula:
Step 2. Solve the equation for y to get the slope-y-Intercept form of the equation.
y = –2x is the equation of the line.
Determining the Equation of a Line Given Two Distinct Points on the Line
You also use the point-slope equation with this method.
Problem Given the points (3, 4) and (1, 2) on the line, write the equation of the line.
Step 1. Use the two points to determine the slope using the point-slope equation.
Step 2. Now use the point-slope formula and one of the given points to finish writing the equation. Let (x, y) be a point on the line different from, say, (3, 4).
Step 3. Solve the equation for y to get the slope-y-Intercept form of the equation.
Problem Given the points (–1, 4) and (3, –7) on the line, write the equation of the line.
Step 1. Use the two points to determine the slope using the point-slope equation.
Step 2. Now use the point-slope formula and one of the given points to finish writing the equation. Let (x, y) be a point on the line different from, say, (3, –7).
Step 3. Solve the equation for y to get the slope-y-Intercept form of the equation.
When two points are known, it does not make any difference which one is chosen to finish writing the equation.
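As a supplementary sketch (the function names below are illustrative, not from the book), the three methods can be captured in a few lines of Python, each returning the slope m and intercept b of y = mx + b:

def from_slope_intercept(m, b):
    # Method 1: the slope and y-intercept are given directly
    return m, b

def from_slope_and_point(m, x1, y1):
    # Method 2: solving y - y1 = m(x - x1) for y gives y = mx + (y1 - m*x1)
    return m, y1 - m * x1

def from_two_points(x1, y1, x2, y2):
    # Method 3: compute the slope from the two points, then reuse Method 2
    m = (y2 - y1) / (x2 - x1)
    return from_slope_and_point(m, x1, y1)

print(from_slope_and_point(2, 3, 2))   # (2, -4): y = 2x - 4, matching the worked example above
print(from_two_points(3, 4, 1, 2))     # (1.0, 1.0): y = x + 1
print(from_two_points(-1, 4, 3, -7))   # (-2.75, 1.25): y = -2.75x + 1.25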
Exercise 18
1. Given the slope m = 4 and the y-Intercept y = 3, write the equation of the line.
2. Given the slope m = –3 and the y-Intercept y = –3, write the equation of the line.
3. Given the slope y-Intercept y = 0, write the equation of the line.
4. Given the slope m = 2 and a point (1, 1) on the line, write the equation of the line.
5. Given the slope m = –1 and a point (2, 3) on the line, write the equation of the line.
6. Given the slope
7. Given the points (2, 4) and (1, 2) on the line, write the equation of the line.
8. Given the points (–1, 2) and (1, 2) on the line, write the equation of the line.
9. Given the points (2, –1) and (1, 0) on the line, write the equation of the line.
10. Given the points (4, 4) and (6, 6) on the line, write the equation of the line. | {"url":"https://schoolbag.info/mathematics/easy_1/19.html","timestamp":"2024-11-04T20:17:33Z","content_type":"text/html","content_length":"18984","record_id":"<urn:uuid:f49af7e4-570e-4f6c-8082-547c4f55e467>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00742.warc.gz"} |
Mortgage Loan Interest Calculator - Certified Calculator
Mortgage Loan Interest Calculator
Introduction: The Mortgage Loan Interest Calculator is a powerful tool for homeowners and prospective buyers to understand the total interest paid over the life of their mortgage. By inputting
essential details such as loan amount, loan term, and interest rate, users can quickly estimate the overall interest cost and make informed decisions about their home financing.
Formula: The calculator utilizes the amortization formula to determine the monthly mortgage payment and subsequently calculates the total interest paid over the specified loan term. It considers the
loan amount, loan term in years, and the annual interest rate to provide accurate results.
How to Use:
1. Enter the loan amount in the “Loan Amount” field.
2. Specify the loan term in years in the “Loan Term” field.
3. Enter the interest rate in the “Interest Rate (%)” field.
4. Click the “Calculate” button to obtain the Total Interest Paid.
Example: Suppose you have a loan amount of $300,000, a loan term of 25 years, and an interest rate of 4%. The calculated Total Interest Paid over the life of the mortgage would be approximately $175,000.
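This figure can be reproduced with the standard fixed-rate amortization formula; the sketch below is for reference only and is not the site's own code (the function name is illustrative, and the site's rounding may differ slightly):

def total_interest_paid(principal, years, annual_rate_percent):
    # Standard fixed-rate amortization: monthly payment times the number of payments, minus principal
    r = annual_rate_percent / 100.0 / 12.0     # monthly interest rate
    n = years * 12                             # number of monthly payments
    monthly_payment = principal * r * (1 + r) ** n / ((1 + r) ** n - 1)
    return monthly_payment * n - principal

print(round(total_interest_paid(300000, 25, 4.0)))   # roughly 175,000 dollars for the example above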
1. What is the amortization formula?
□ The amortization formula calculates the monthly mortgage payment and the distribution of principal and interest over the loan term.
2. How does the interest rate affect total interest paid?
□ A higher interest rate results in a higher total interest paid over the life of the mortgage.
3. Can I reduce the total interest paid?
□ Making extra payments, refinancing, or choosing a shorter loan term can help reduce the total interest paid.
4. Is the interest paid the same every month?
□ No, the interest portion of the monthly payment decreases over time as more of the principal is repaid.
5. What factors influence the choice of a loan term?
□ Personal financial goals, monthly budget constraints, and long-term plans influence the choice of a loan term.
Conclusion: The Mortgage Loan Interest Calculator provides valuable insights into the financial aspects of home ownership. Understanding the total interest paid allows individuals to plan for their
mortgage more effectively and consider strategies to minimize interest costs. Use this calculator to assess different scenarios and make informed decisions about your mortgage financing.
Leave a Comment | {"url":"https://certifiedcalculator.com/mortgage-loan-interest-calculator/","timestamp":"2024-11-10T00:03:03Z","content_type":"text/html","content_length":"54893","record_id":"<urn:uuid:eb8e9e36-6643-4ab4-a037-45df8a1bfe2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00633.warc.gz"} |
Consider the following reaction at \(25^{\circ} \mathrm{C}\) : $$ \mathrm{Cl}_{2}(g) \rightleftharpoons 2 \mathrm{Cl}(g) \quad K=1.0 \times 10^{-37} $$ (a) Calculate \(\Delta G^{\circ}\) for the
reaction at \(25^{\circ} \mathrm{C}\). (b) Calculate \(\Delta G_{\mathrm{f}}^{\circ}\) for \(\mathrm{Cl}(\mathrm{g})\) at \(25^{\circ} \mathrm{C}\).
Short Answer
Expert verified
Question: Calculate the standard Gibbs free energy change (ΔG°) for the reaction at 25°C and the standard Gibbs free energy of formation (ΔGf°) for Cl(g) at 25°C for a reaction with an equilibrium
constant (K) of 1.0 x 10^-37. Answer: The ΔG° for the reaction at 25°C is approximately 2.11 x 10^5 J/mol (about 211 kJ/mol), and the ΔGf° for Cl(g) at 25°C is approximately 1.06 x 10^5 J/mol (about 106 kJ/mol).
Step by step solution
Part (a) Calculate ΔG° for the reaction at 25°C
To calculate ΔG° for the reaction at 25°C, use the relationship between the standard Gibbs free energy change (ΔG°) and the equilibrium constant (K): $$ \Delta G^{\circ}=-RT\ln K $$ where R is the
gas constant (\(8.314 \mathrm{J} \cdot \mathrm{mol}^{-1} \cdot \mathrm{K}^{-1}\)) and T is the temperature in Kelvin. Remember, to convert the temperature from Celsius to Kelvin, simply add 273.15 to
the temperature in Celsius: $$ T(K)=25^{\circ} \mathrm{C} + 273.15=298.15 \mathrm{K} $$ Now, plug the temperature and equilibrium constant into the equation and solve for ΔG°: $$ \Delta G^{\circ}=
-8.314\ \mathrm{J} \cdot \mathrm{mol}^{-1} \cdot \mathrm{K}^{-1} \times 298.15\ \mathrm{K} \times \ln \left( 1.0 \times 10^{-37}\right) $$ Since \(\ln(1.0 \times 10^{-37}) \approx -85.2\), the two negative signs cancel and the result is large and positive: $$ \Delta G^{\circ} \approx 2.11 \times 10^{5}\ \mathrm{J} \cdot \mathrm{mol}^{-1} $$ Thus, the ΔG° for the reaction at 25°C is approximately \(2.11 \times 10^{5}\ \mathrm{J} \cdot \mathrm{mol}^{-1}\) (about \(211\ \mathrm{kJ} \cdot \mathrm{mol}^{-1}\)).
Part (b) Calculate ΔGf° for Cl(g) at 25°C
In order to calculate ΔGf° for Cl(g) at 25°C, we first find the reaction for the formation of 1 mole of Cl(g) from its standard state: $$ \frac{1}{2}\mathrm{Cl}_2(g) \rightarrow \mathrm{Cl}(g) $$
Notice that this reaction is exactly half of the given reaction, so to find the ΔGf° for Cl(g), we divide the ΔG° calculated in part (a) by 2: $$ \Delta G_{\mathrm{f}}^{\circ} =\frac{\Delta G^{\
circ}}{2} \approx \frac{2.11 \times 10^{5}\ \mathrm{J} \cdot \mathrm{mol}^{-1}}{2} \approx 1.06 \times 10^{5}\ \mathrm{J} \cdot \mathrm{mol}^{-1} $$ Therefore, the ΔGf° for Cl(g) at 25°C is approximately \(1.06 \times 10^{5}\ \mathrm{J} \cdot \mathrm{mol}^{-1}\) (about \(106\ \mathrm{kJ} \cdot \mathrm{mol}^{-1}\), consistent with tabulated values near \(105\ \mathrm{kJ} \cdot \mathrm{mol}^{-1}\)).
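A quick numerical check of both parts (an added sketch, independent of the textbook solution):

import math

R = 8.314        # gas constant, J/(mol*K)
T = 298.15       # 25 degrees C in kelvin
K = 1.0e-37      # given equilibrium constant

delta_G = -R * T * math.log(K)    # part (a): roughly 2.11e5 J/mol, i.e. about 211 kJ/mol
delta_Gf_Cl = delta_G / 2.0       # part (b): roughly 1.06e5 J/mol, i.e. about 106 kJ/mol
print(delta_G, delta_Gf_Cl)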
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Standard Gibbs Free Energy Change
The standard Gibbs free energy change, denoted as \( \Delta G^{\circ} \), is a fundamental concept in thermodynamics. It helps us understand the spontaneity of a chemical reaction under standard
conditions. A negative \( \Delta G^{\circ} \) means the reaction can occur spontaneously, while a positive value means it is non-spontaneous under those conditions. The formula \( \Delta G^{\circ} =
-RT \ln K \) connects the standard Gibbs free energy change to the equilibrium constant \( K \). Here, \( R \) is the universal gas constant, and \( T \) is the temperature in Kelvin. It’s important
to know that this relationship allows us to predict which direction a reaction will favor based on the value of \( K \). When \( K \) is much less than 1, \( \Delta G^{\circ} \) becomes positive,
indicating a non-spontaneous reaction as seen in the given exercise scenario.
Equilibrium Constant
The equilibrium constant \( K \) provides crucial insights into the extent of a reaction's progress towards equilibrium. It is derived from the ratio of concentrations of products to reactants, each
raised to the power of their respective coefficients in the balanced chemical equation. A large \( K \) value implies that, at equilibrium, the reaction favors product formation. Conversely, a tiny \
( K \), like \( 1.0 \times 10^{-37} \), indicates that products are not favored and a large amount of reactants remain. This constant is intimately connected to the standard Gibbs free energy change,
the relationship being \( \Delta G^{\circ} = -RT \ln K \). This connection helps chemists predict if equilibrium conditions are more likely to yield predominantly reactants or products, aiding in
understanding reaction energetics.
Thermodynamics is the broad field of physics that deals with energy changes, particularly regarding heat and work as understood via chemical reactions. In the context of Gibbs free energy, it's
closely tied to two key ideas: enthalpy and entropy. Gibbs free energy combines these thermodynamic properties to establish whether a process is spontaneous. The equation \( \Delta G = \Delta H - T\
Delta S \) sums this up, where \( \Delta H \) is the change in enthalpy, \( \Delta S \) is the change in entropy, and \( T \) is the temperature in Kelvin. This formulation highlights how both energy
contributions and disorder (or randomness) determine the direction in which a process proceeds. Understanding these elements aids in predicting and controlling chemical reactions in various
industrial and laboratory settings.
Chemical Reactions
Chemical reactions involve the transformation of reactants into products, and understanding their nature relies heavily on thermodynamic principles. Each reaction has a unique pathway and energetic
profile. The given reaction \( \mathrm{Cl}_2(g) \rightleftharpoons 2 \mathrm{Cl}(g) \) demonstrates a reversible reaction where chlorine molecules disassociate into chlorine atoms. In evaluating such
reactions, it's crucial to calculate the Gibbs free energy to ascertain the feasibility and spontaneity under standard conditions. The equilibrium state is attained when the forward and reverse
reactions occur at the same rate, which is quantified by the equilibrium constant \( K \). Recognizing the key thermodynamic indicators gives chemists the ability to predict reaction behavior and,
importantly, to manipulate conditions to achieve desired chemical outcomes. | {"url":"https://www.vaia.com/en-us/textbooks/chemistry/chemistry-principles-and-reactions-6-edition/chapter-17/problem-63-consider-the-following-reaction-at-25circ-mathrmc/","timestamp":"2024-11-15T04:38:49Z","content_type":"text/html","content_length":"263191","record_id":"<urn:uuid:f31ed040-c766-4e27-91c3-84a698f7b41d>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00715.warc.gz"} |
The Third Workshop on Kernelization
Aims & Scope
Research on theory and applications of kernelization is a vibrant and rapidly developing area in algorithm design and complexity. After successful workshops in Bergen 2009 and Leiden 2010, this third
workshop aims at consolidating the results achieved in recent years and discussing future research directions.
A special aspect this year is to take a closer look at related work from different research areas, in particular from Practical Preprocessing, Property Testing, and Knowledge Compilation. Therefore
we have invited leading researchers from these three areas to provide keynote talks.
The workshop will feature invited keynote talks as well as several invited and contributed talks with surveys and new technical results. The workshop will also provide opportunities for all
participants to engage in joint research and discussions on open problems and future directions.
We expect the workshop program to start in the morning of Friday, 2 September 2011, and to end after lunch on Sunday, 4 September 2011.
Keynote Speakers
The workshop is organized by Serge Gaspers, Sebastian Ordyniak, and Stefan Szeider (chair).
The organizers acknowledge the support from the advisory board consisting of Sourav Chakraborty, Fedor V. Fomin, and Daniel Lokshtanov.
The organizers acknowledge the funding from the following organizations:
• Vienna Center for Logic and Algorithms (VCLA)
• Wolfgang Pauli Institute (WPI)
• European Research Council (ERC)
The registration is now closed.
For people invited by the organizers it was possible to register until 18 July 2011. Registration is free of charge and includes attendance to all workshop events and lectures, coffee breaks, and
Note that this workshop does not produce any proceedings and presentations here should not cause any problem for submitting the same material to a conference or journal.
The invited talks will be open to the public. Local people who are interested in some of the talks but do not want to participate in the entire workshop (and the social program) do not need to
Travel and Local Information
Traveling to Vienna by plane
Vienna's Airport (VIE) is located 15 km (south)east of the city and is served by all major airlines. If you travel from Saarbrücken, you may consider Air Berlin which offers a connection via Berlin;
the trip takes about 3.5 hours.
From the Airport to the City Center
The simplest, but most expensive variant for getting to the city center (resp. your hotel) is taking a taxi. This will probably cost around EUR 30 or more. Some companies like C&K offer taxis, limos
or shuttles for a flat rate starting from EUR 32.
CAT is a new train service linking the airport to "Wien Mitte", where you can change to the metro (U3/U4), as well as to tram and buses (see below for information on the public transport in Vienna).
The unique feature of CAT is that you can check in at "Wien Mitte" for certain flights when departing from Vienna. The cost is EUR 9 for a single and EUR 16 for a return trip. Note that tickets for
underground, trams, or local buses are not included in this price.
Vienna Airport Lines offer buses to "Schwedenplatz" (connection to U1 and U4), "Westbahnhof" (westbound train station, connection to U3), and "Wien Meidling" (southbound train station). One ticket is
EUR 6 (single) or EUR 11 (return). Tickets for underground, trams, or local buses are not included in this price.
The cheapest variant from the airport to the city center is going by Schnellbahn S7 (train) to "Wien Mitte" - it costs EUR 3.60 one-way (including underground and bus in Vienna). Be sure to buy "2
zones" from the vending machine. If you buy a separate ticket for Vienna, you need only "1 zone" (EUR 1.80). Have a look at the timetable.
Public Transport in Vienna
Public transport is very efficient in Vienna, and has a searchable timetable (you can search for a connection between stops, addresses, and even landmarks). There are several kinds of tickets: A
ticket for a single trip costs EUR 1.80 and can be used for any single trip within Vienna. You may change lines, but you may not interrupt your journey. The most convenient option for you may be the
"72h-Ticket" (EUR 13.60, valid for all means of public transport in Vienna for 72 hours from the time punched).
More information: timetable, tickets, Metro and train map (PDF), and Public transportation map (PDF).
The Workshop Venue
Lectures will take place at a department building of Vienna University of Technology. The street address is: Gußhausstrasse 25-29. The lecture room is called "EI 9 Hlawka Hörsaal" and is located on
the ground floor. After entering the building through the glass doors turn right.
The venue can be reached by a 5 minutes walk from Karlsplatz, a hub of Vienna's public transport system. For instance the underground lines U1, U2, and U4 have a stop at Karlsplatz.
Tourist Information
An extensive account of travel information for Vienna can be found on the official webpage of the Vienna Tourist Board. The webpage of the city of Vienna offers all sorts of information on Vienna
(including tourist information).
Some of the main sights in Vienna include: Stephansdom, St. Stephen's Cathedral, Schloss Schönbrunn, the National Library, the Austrian Parliament, the Wiener Prater with the famous Giant Ferris
Wheel, Museumsquartier, several museums are located here or nearby, for instance, the Kunsthistorisches Museum Vienna. Sights near the workshop venue: Belvedere, Karlskirche, Secession, Musikverein,
Opera, Albertina, Naschmarkt.
• Faisal Abu-Khzam, Lebanese American University, Lebanon
• Rémy Belmonte, University of Bergen, Norway
• Armin Biere, Johannes Kepler University, Linz, Austria
• Sourav Chakraborty, Chennai Mathematical Institute, India
• Robert Crowston, Royal Holloway, University of London, UK
• Wolfgang Dvorak, Vienna University of Technology, Austria
• Uwe Egly, Vienna University of Technology, Austria
• Michael R. Fellows, Charles Darwin University, Australia
• Henning Fernau, Universität Trier, Germany
• Fedor Fomin, University of Bergen, Norway
• Serge Gaspers, Vienna University of Technology, Austria
• Archontia Giannopoulou, National and Kapodistrian University of Athens, Greece
• Jiong Guo, University of Saarland, Germany
• Gregory Gutin, Royal Holloway, University of London, UK
• Sepp Hartung, TU Berlin, Germany
• Pinar Heggernes, University of Bergen, Norway
• Marijn Heule, Delft University of Technology, The Netherlands
• Pim van 't Hof, University of Bergen, Norway
• Falk Hüffner, TU Berlin, Germany
• Bart Jansen, Utrecht University, the Netherlands
• Matti Järvisalo, University of Helsinki, Finland
• Mark Jones, Royal Holloway, University of London, United Kingdom
• Eunjung Kim, CNRS, France
• Stefan Kratsch, Utrecht University, The Netherlands
• Martin Lackner, Vienna University of Technology, Austria
• Daniel Lokshtanov, University of California, San Diego, USA
• Dániel Marx, Humboldt-Universität zu Berlin, Germany
• Ramanujan Maadapuzhi Sridharan, The Institute of Mathematical Sciences, India
• Pierre Marquis, Université d'Artois & CRIL-CNRS, France
• Jesper Nederlof, University of Bergen, Norway
• Rolf Niedermeier, TU Berlin, Germany
• Sebastian Ordyniak, Vienna University of Technology, Austria
• Christophe Paul, CNRS - LIRMM (Montpellier), France
• Andreas Pfandler, Vienna University of Technology, Austria
• Reinhard Pichler, Vienna University of Technology, Austria
• Marcin Pilipczuk, University of Warsaw, Poland
• Michał Pilipczuk, University of Warsaw, Poland
• Arash Rafiey, IDSIA, Switzerland
• Venkatesh Raman, Institute of Mathematical Sciences, Chennai, India
• Frances A. Rosamond, Charles Darwin University, Australia
• Stefan Rümmele, Vienna University of Technology, Austria
• Saket Saurabh, The Institute of Mathematical Sciences, India
• Martina Seidl, Johannes Kepler Universität Linz, Austria
• Hadas Shachnai, Technion, Israel
• Narges Simjour, University of Waterloo, Canada
• Friedrich Slivovsky, Vienna University of Technology, Austria
• Karolina Soltys, Max Planck Institute, Germany
• Ondra Suchy, Saarland University, Saarbrucken, Germany
• Stefan Szeider, Vienna University of Technology, Austria
• Jan Arne Telle, University of Bergen, Norway
• Dimitrios Thilikos, National and Kapodistrian University of Athens, Greece
• Erik Jan van Leeuwen, University of Bergen, Norway
• Angelina Vidali, University of Vienna, Austria
• Yngve Villanger, University of Bergen, Norway
• Magnus Wahlström, Max Planck Institute for Informatics, Germany
• Mathias Weller, TU Berlin, Germany
• Stefan Woltran, Vienna University of Technology, Austria
• Anders Yeo, Royal Holloway, University of London, UK
Keynote Talks
• Armin Biere: Preprocessing and Inprocessing Techniques in SAT slides [Show/Hide Abstract]
Abstract. SAT solvers are used in many applications in and outside of Computer Science. The success of SAT is based on the use of good decision heuristics, learning, restarts, and compact data
structures with fast algorithms. But also efficient and effective encoding, preprocessing and inprocessing techniques are important in practice. In this talk we give an overview of old and more
recent inprocessing and preprocessing techniques starting with ancient pure literal reasoning and failed literal probing. Hyper-binary resolution and variable elimination are more recent
techniques of this century. We discuss blocked-clause elimination, which gives a nice connection to optimizing encodings and conclude with our recent results on unhiding redundancy fast.
• Sourav Chakraborty: Property Testing: Sublinear Algorithms for Promise Problems slides [Show/Hide Abstract]
Abstract. Deciding whether a graph is $k$-colorable is an NP-complete problem and hence solving this problem is expected to be hard. But if we are given a promise that the graph is either $k$-colorable or "far from being $k$-colorable", can we make some intelligent deductions "quickly"? Property testing deals with these kinds of questions, where the goal is to solve some promise problems. The efficiency of an algorithm is measured by the number of input bits that are read. In many cases there are algorithms that can correctly answer with high probability by looking at a tiny fraction (sometimes even constant) of the input bits. In the past two decades this area has been at the forefront of research in theoretical computer science - we will take a look at it.
• Michael R. Fellows: Kernelization and the Larger Picture of Practical Algorithmics, in Contemporary Context slides [Show/Hide Abstract]
Abstract. The natural relationship between Parameterized Complexity and heuristics has been a subject of papers and talks since the beginnings of parameterized complexity, and has been especially
recognized within the WorKer kernelization community. In the Journal of Computer and System Sciences (January 2011) celebrating Richard Karp's 2008 Kyoto Prize, and elsewhere, Karp proposes a
general program, closely related to the standard FPT technique of iterative compression, as a structured approach to heuristic algorithm design, for problems in computational molecular biology
and genetics. This talk will discuss Karp's general program in light of the parameterized complexity framework, and survey the contemporary context of programmatic thinking about the deployment
of mathematics to serve practical computing, in which pre-processing (kernelization) has, of course, both a central and a leveraged role.
Fellows, M. R. Parameterized complexity: Main ideas, connections to heuristics and research frontiers. In Proceedings of ISAAC (2001), vol. 2223 of Lecture Notes in Computer Science,
Springer-Verlag, pp. 291-307.
Fellows, M. R. Parameterized complexity: New developments and research frontiers. In Aspects of Complexity (2001), De Gruyter, pp. 51-72.
Fellows, M. R. Parameterized complexity: The main ideas and connections to practical computing. In Experimental Algorithmics (2002), R. Fleischer, B. M. E. Moret, and E. M. Schmidt, Eds., vol.
2547 of Lecture Notes in Computer Science, Springer-Verlag, pp. 51-77.
Fellows, M. R. A survey of FPT algorithm design techniques with an emphasis on recent advances and connections to practical computing. In Proceedings of 12th Annual European Symposium ESA,
Bergen, Norway (2004), S. Albers and T. Radzik, Eds., vol. 3221 of Lecture Notes in Computer Science, Springer-Verlag, pp. 1-2.
• Fedor V. Fomin: Protrusions in graphs and their applications slides [Show/Hide Abstract]
Abstract. A protrusion in a graph is a subgraph of constant treewidth that can be separated from the graph by removing a constant number of vertices. We discuss combinatorial properties of graphs
implying existence of large protrusions and give a number of algorithmic applications of protrusions.
• Bart Jansen: Kernelization for a Hierarchy of Structural Parameters slides [Show/Hide Abstract]
Abstract. There are various reasons to study the kernelization complexity of non-standard parameterizations. Problems such as Chromatic Number are NP-complete for a constant value of the natural
parameter, hence we should not hope to obtain kernels for this parameter. For other problems such as Long Path, the natural parameterization is fixed-parameter tractable but is known not to admit
a polynomial kernel unless the polynomial hierarchy collapses. We may therefore guide the search for meaningful preprocessing rules for these problems by studying the existence of polynomial
kernels for different parameterizations.
Another motivation is formed by the Vertex Cover problem. Its natural parameterization admits a small kernel, but there exist refined parameters (such as the feedback vertex number) which are
structurally smaller than the natural parameter, for which polynomial kernels still exist; hence we may obtain better preprocessing by studying the properties of such refined parameters.
In this survey talk we discuss recent results on the kernelization complexity of structural parameterizations of these important graph problems. We consider a hierarchy of structural graph
parameters, and try to pinpoint the best parameters for which polynomial kernels still exist.
• Daniel Lokshtanov: Generalization and Specialization of Kernelization slides [Show/Hide Abstract]
Abstract. tba
• Pierre Marquis: A Few Words about Knowledge Compilation slides [Show/Hide Abstract]
Abstract. My talk will be about knowledge compilation, a research topic studied in AI for more than twenty years, and which is concerned with pre-processing some pieces of information in order to
improve some tasks of interest, computationally speaking. In this talk, after an introduction to knowledge compilation, I will focus on two important points: the definition of compilable problems
(roughly, those for which computational improvements via pre-processing can be "guaranteed") and the design of a knowledge compilation map (a multi-criteria evaluation of representation languages
which can be used as target languages for knowledge compilation).
• Anders Yeo: Simultaneously Satisfying Linear Equations Over F_2: Parameterized Above Average slides [Show/Hide Abstract]
Abstract. In this talk we will mainly be considering the parameterized problem MaxLin2-AA. In MaxLin2-AA, we are given a system of variables x_1,... ,x_n and equations of the form x_{i_1}*x_{i_2}
*... *x_{i_r} = b, where {x_{i_1},x_{i_2},...,x_{i_r}} is a subset of {1,2,...,n} and all x_i and b belong to {-1, 1}. Furthermore each equation has a positive integral weight, and we want to
decide whether it is possible to simultaneously satisfy equations of total weight at least W/2+k, where W is the total weight of all equations and k is the parameter (if k=0, the possibility is
In this talk we begin by (briefly) explaining what it means to parameterize a problem above average and why this seems a natural parameterization. We will motivate why MaxLin2-AA is of interest
and outline how to obtain a kernel with at most O(k^2 log k) variables, which solves an open problem of Mahajan et al. (2006). Finally we will mention a number of open problems and conjectures.
Coauthors: Robert Crowston, Michael Fellows, Gregory Gutin, Mark Jones, Frances Rosamond, Stephan Thomasse
Contributed Talks
• Robert Crowston: Max-r-Lin Above Average and its Applications slides
• Henning Fernau: A linear kernel for the differential of a graph slides
• Sepp Hartung: Linear-Time Computation of a Linear Problem Kernel for Dominating Set on Planar Graphs slides
• Pim van 't Hof: Parameterized Complexity of Vertex Deletion into Perfect Graph Classes slides
• Falk Hüffner: Graph Transformation and Kernelization: Confluent Data Reduction for Edge Clique Cover slides
• Stefan Kratsch: Co-nondeterminism in compositions: A kernelization lower bound for a Ramsey-type problem slides
• Erik Jan van Leeuwen: Kernels for domination when the stars are out slides
• Dániel Marx: Kernelization of Packing Problems
• Hadas Shachnai: From Approximative Kernelization to High Fidelity Reductions slides
• Karolina Soltys: Hierarchies of kernelization hardness slides
• Magnus Wahlström: Polynomial kernels for some graph cut problems slides
• Gregory Gutin Kernels for below-upper-bound parameterizations of the hitting set and directed dominating set problem slides
Friday, September 2
08.50 - 09.00 Opening
09.00 - 10.00 Keynote talk. Fedor V. Fomin: Protrusions in graphs and their applications slides
10.00 - 10.30 Sepp Hartung: Linear-Time Computation of a Linear Problem Kernel for Dominating Set on Planar Graphs slides
10.30 - 11.00 Coffee break
11.00 - 12.00 Keynote talk. Pierre Marquis: A Few Words about Knowledge Compilation slides
12.00 - 12.30 Falk Hüffner: Graph Transformation and Kernelization: Confluent Data Reduction for Edge Clique Cover slides
12.30 - 14.00 Lunch: Mensa
14.00 - 15.00 Keynote talk. Anders Yeo: Simultaneously Satisfying Linear Equations Over F_2: Parameterized Above Average slides
15.00 - 15.30 Robert Crowston: Max-r-Lin Above Average and its Applications slides
15.30 - 16.00 Erik Jan van Leeuwen: Kernels for domination when the stars are out slides
16.00 - 16.30 Coffee break
16.30 - 17.30 Keynote talk. Armin Biere: Preprocessing and Inprocessing Techniques in SAT slides
17.30 - 18.00 Henning Fernau: A linear kernel for the differential of a graph slides
Saturday, September 3
09.00 - 10.00 Open problem session.
10.00 - 10.30 Karolina Soltys: Hierarchies of kernelization hardness slides
10.30 - 11.00 Coffee break
11.00 - 12.00 Keynote talk. Sourav Chakraborty: Property Testing: Sublinear Algorithms for Promise Problems slides
12.00 - 12.30 Hadas Shachnai: From Approximative Kernelization to High Fidelity Reductions slides
12.30 - 13.30 Lunch: Buffet
13.30 - 14.00 Group photo
14.00 - 15.00 Keynote talk. Daniel Lokshtanov: Generalization and Specialization of Kernelization slides
15.00 - 15.30 Stefan Kratsch: Co-nondeterminism in compositions: A kernelization lower bound for a Ramsey-type problem slides
15.30 - 16.00 Coffee break
16.00 - 16.30 Gregory Gutin: Kernels for below-upper-bound parameterizations of the hitting set and directed dominating set problems slides
16.30 - 17.30 Keynote talk. Bart Jansen: Kernelization for a Hierarchy of Structural Parameters slides
18.00 Group 1 leaves via public transport to Kahlenberg (view) and then to Heurigen
18.30 Group 2 leaves via private bus directly to Heurigen
19.00 - 23.00 Workshop dinner: at Heurigen Sirbu
22.00 Group 2 leaves via bus to Karlsplatz
Sunday, September 4
09.30 - 10.30 Keynote talk. Michael R. Fellows: Kernelization and the Larger Picture of Practical Algorithmics, in Contemporary Context slides
10.30 - 11.00 Coffee break
11.00 - 11.30 Pim van 't Hof: Parameterized Complexity of Vertex Deletion into Perfect Graph Classes slides
11.30 - 12.00 Magnus Wahlström: Polynomial kernels for some graph cut problems slides
12.00 - 12.30 Dániel Marx: Kernelization of Packing Problems
13.00 - 14.00 Lunch: Restaurant Ischia
14.00 Closing | {"url":"http://www.kr.tuwien.ac.at/drm/worker2011","timestamp":"2024-11-06T15:46:59Z","content_type":"text/html","content_length":"45940","record_id":"<urn:uuid:ba9f93d6-45bd-42b7-8c5d-4ca7c4a79946>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00043.warc.gz"} |
Fuel in a 2000 Gallon Skid Tank
The Fuel in a 2000 Gallon Farm Tank calculator computes the fluid content (e.g. fuel) volume in a 2000 gallon above-ground circular storage tank based on the tank's dimensions and the measured depth of fluid (F).
INSTRUCTIONS: Choose units and enter the following:
• (F) Depth of Fluid in Tank (wet dipstick)
Volume (V): The calculator returns the volume of fuel in the tank in gallons. It also returns the volume of fuel needed to fill the tank in gallons. However, these can be automatically converted to
other volume units (e.g. liters) via the pull-down menu.
The Math / Science
This calculator answers the question, "How much is in my 2000 gallon fuel tank?"
2000 gallon fuel tanks have standard dimensions of 144" length and 64" diameter. Based on these dimensions, one can calculate the total volume. To compute the volume of liquid in the tank, insert a
dipstick and measure the depth of the fluid (F).
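The page does not show the formula the calculator uses internally; for a horizontal cylinder, the standard circular-segment formula reproduces the rated capacity from the listed 144 in length and 64 in diameter. A minimal sketch (the function name and defaults are illustrative, not vCalc's implementation):

import math

def gallons_from_dipstick(depth_in, diameter_in=64.0, length_in=144.0):
    # Partial-fill volume of a horizontal cylinder from a dipstick depth reading (inches)
    r = diameter_in / 2.0
    h = min(max(depth_in, 0.0), diameter_in)            # clamp to a physically possible depth
    segment_area = r**2 * math.acos((r - h) / r) - (r - h) * math.sqrt(2.0 * r * h - h * h)
    return segment_area * length_in / 231.0             # 231 cubic inches per US gallon

print(round(gallons_from_dipstick(64.0)))   # about 2005 gallons: the full tank
print(round(gallons_from_dipstick(32.0)))   # about 1003 gallons: fluid up to the centerline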
ASTs are used for home heating oil, kerosene and diesel fuel. The typical AST has a cap that is used to refuel the tank. This is the easiest place to insert a dipstick (measuring stick) to measure
the depth (F) of the liquid content of the AST.
The picture shown is a hand crank that pumps 10 gallons for every 100 turns of the pump crank(See Hand Pump Volume formula). During cold periods, the fuel supplier will treat the diesel with an
anti-freeze mixture, but this is only a concern if the temp is well below zero degrees F (e.g.-5 F or colder).
Storage Tank Calculators:
Other Fuel Calculators | {"url":"https://www.vcalc.com/wiki/fuel-in-2000-gallon-skid-tank","timestamp":"2024-11-06T15:57:29Z","content_type":"text/html","content_length":"59551","record_id":"<urn:uuid:5f5e9f4f-0147-42fd-9749-500a1cba73f6>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00255.warc.gz"} |
Multiobjective Optimization#
This section provides code implementations for concepts related to multiobjective optimization. Multiobjective optimization provides methods to solve optimization problems with multiple competing
objectives. The application of these methods to simple analytical examples is also provided in this section. The multiobjective methods used in this section of the Jupyter book are based on
differential evolution (DE) covered earlier. This section has the following two subsections:
1. Multiobjective optimization using differential evolution
2. Multiobjective optimization using Kriging models
The code blocks below introduce the multiobjective optimization examples used in this section. The next code block imports the required packages for this section.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from smt.sampling_methods import LHS
from pymoo.util.nds.non_dominated_sorting import NonDominatedSorting
import pymoo.gradient.toolbox as anp
from pymoo.core.problem import Problem
Branin-Currin optimization problem#
The first example used in this section is an unconstrained multiobjective problem with two design variables and two objective functions. The two functions are the rescaled Branin function and the
Currin function. The optimization problem statement is given as
\[\begin{split} \begin{gathered} \min f(\textbf{x}) = \begin{cases} f_1(\textbf{x}) = \frac{1}{51.95}(( \bar{x_2} - \frac{5.1}{4\pi^2} \bar{x_1}^2 + \frac{5}{\pi}\bar{x_1} - 6 )^2 + 10 ( 1-\frac{1}{8
\pi} )\cos \bar{x_1} - 44.81) \\ \\ f_2(\textbf{x}) = [1-\exp(\frac{-1}{2x_2})]\frac{2300x_1^3+1900x_1^2+2092x_1+60}{100x_1^3+500x_1^2+4x_1+20} \end{cases} \\ 0.0 \leq x_1, x_2 \leq 1.0 \\ \text
{where} \quad \bar{x_1} = 15x_1 - 5, \bar{x_2} = 15x_2 \end{gathered} \end{split}\]
The block of code below defines the two functions.
# Defining the objective functions
def branin(x):
dim = x.ndim
if dim == 1:
x = x.reshape(1,-1)
x1 = 15*x[:,0] - 5
x2 = 15*x[:,1]
b = 5.1 / (4*np.pi**2)
c = 5 / np.pi
t = 1 / (8*np.pi)
y = (1/51.95)*((x2 - b*x1**2 + c*x1 - 6)**2 + 10*(1-t)*np.cos(x1) + 10 - 44.81)
if dim == 1:
y = y.reshape(-1)
return y
def currin(x):
dim = x.ndim
if dim == 1:
x = x.reshape(1,-1)
x1 = x[:,0]
x2 = x[:,1]
factor = 1 - np.exp(-1/(2*x2))
num = 2300*x1**3 + 1900*x1**2 + 2092*x1 + 60
den = 100*x1**3 + 500*x1**2 + 4*x1 + 20
y = factor*num/den
if dim == 1:
y = y.reshape(-1)
return y
The next few code blocks will locate the Pareto front for the problem and plot the Pareto front in the objective space. This is done by generating a mesh of points within the bounds of the problem
and sorting the points to locate the non-dominated points. The code block uses the non-dominated sorting algorithm of pymoo to find the non-dominated points in the objective space. The non-dominated
points represent the Pareto front of the problem. The non-dominated sorting algorithm may not always perfectly provide all of the non-dominated points of the problem and it is a good idea to make
sure that all the points provided are actually non-dominated solutions. It is also sometimes necessary to adjust the parameter n_stop_if_ranked which indicates approximately how many points should
survive in the initial population of points at the end of the sorting algorithm. Raising or lowering this value can improve the representation of the Pareto front.
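To make the idea of non-domination concrete: in a minimization problem, one point dominates another when it is at least as good in every objective and strictly better in at least one. A small helper written for this explanation (not part of pymoo) makes the comparison explicit:

def dominates(a, b):
    # For minimization: a dominates b if a is no worse in every objective
    # and strictly better in at least one of them
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

print(dominates([1.0, 2.0], [2.0, 3.0]))   # True: better in both objectives
print(dominates([1.0, 3.0], [2.0, 2.0]))   # False: each point is better in one objective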
# Generating a grid of points
num_points = 100
# Defining x and y values
x = np.linspace(1e-6,1,num_points)
y = np.linspace(1e-6,1,num_points)
# Creating a mesh
X, Y = np.meshgrid(x, y)
# Finding the front through non-dominated sorting
z1 = branin(np.hstack((X.reshape(-1,1),Y.reshape(-1,1))))
z2 = currin(np.hstack((X.reshape(-1,1),Y.reshape(-1,1))))
nds = NonDominatedSorting()
F = np.column_stack((z1,z2))
pareto = nds.do(F, n_stop_if_ranked=50)
# Plotting the non-dominated solutions
Z_pareto = F[pareto[0]]
fig, ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(Z_pareto[1:,0], Z_pareto[1:,1], color="red", label="Pareto Front")
ax.set_xlabel("$f_1$", fontsize=14)
ax.set_ylabel("$f_2$", fontsize=14)
ax.legend(loc="upper right", fontsize = 14)
<matplotlib.legend.Legend at 0x13c52c250>
The above plot shows the non-dominated solutions of the problem in the objective space. The curve passing through these points is the Pareto front of the problem. The Pareto front is convex in shape
for this problem. This plot gives good insight into the Pareto front of the problem, however, simply sorting the points from the mesh will not reveal the true front and directly solving the
optimization problem will be a better method for a complex problem such as this.
Constrained multiobjective optimization problem#
The second example used in this section is a constrained multiobjective problem with two design variables, two objective functions and two constraints. The optimization problem statement is given as
\[\begin{split} \begin{gathered} \min f(\textbf{x}) = \begin{cases} f_1(\textbf{x}) = 4x_1^2+4x_2^2 \\ f_2(\textbf{x}) = (x_1-5)^2 + (x_2-5)^2 \end{cases} \\ \textrm{subject to} \quad g_1(\textbf{x})
= (x_1-5)^2 + x_2^2 -25 \leq 0\\ g_2(\textbf{x}) = 7.7 - ((x_1-8)^2 + (x_2+3)^2) \leq 0\\ -20 \leq x_1,x_2 \leq 20\\ \end{gathered} \end{split}\]
The block of code below defines the objective and constraint functions for the problem.
# Defining the objective functions
def f1(x):
dim = x.ndim
if dim == 1:
x = x.reshape(1,-1)
y = 4*x[:,0]**2 + 4*x[:,1]**2
return y
def f2(x):
dim = x.ndim
if dim == 1:
x = x.reshape(1,-1)
y = (x[:,0]-5)**2 + (x[:,1]-5)**2
return y
def g1(x):
dim = x.ndim
if dim == 1:
x = x.reshape(1,-1)
g = (x[:,0]-5)**2 + x[:,1]**2 - 25
return g
def g2(x):
dim = x.ndim
if dim == 1:
x = x.reshape(1,-1)
g = 7.7 - ((x[:,0]-8)**2 + (x[:,1]+3)**2)
return g
The code block below uses the non-dominated sorting algorithm of pymoo to find the non-dominated points in the objective space. The non-dominated points represent the Pareto front of the problem. The
obtained Pareto front is then plotted in the objective space of the problem.
# Generating a grid of points
num_points = 100
# Defining x and y values
x = np.linspace(-20,20,num_points)
y = np.linspace(-20,20,num_points)
# Creating a mesh
X, Y = np.meshgrid(x, y)
# Finding the front through non-dominated sorting
z1 = f1(np.hstack((X.reshape(-1,1),Y.reshape(-1,1))))
z2 = f2(np.hstack((X.reshape(-1,1),Y.reshape(-1,1))))
const1 = g1(np.hstack((X.reshape(-1,1),Y.reshape(-1,1))))
const2 = g2(np.hstack((X.reshape(-1,1),Y.reshape(-1,1))))
z1 = z1[(const1<0) & (const2<0)]
z2 = z2[(const1<0) & (const2<0)]
nds = NonDominatedSorting()
F = np.column_stack((z1,z2))
pareto = nds.do(F, n_stop_if_ranked=50)
Z_pareto = F[pareto[0]]
# Plotting the contours
fig, ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(Z_pareto[:,0], Z_pareto[:,1], color="red", label="Pareto Front")
ax.set_xlabel("$f_1$", fontsize=14)
ax.set_ylabel("$f_2$", fontsize=14)
ax.legend(loc="upper right", fontsize = 14)
<matplotlib.legend.Legend at 0x13c9bf970>
The Pareto front of the problem is shown in the plot above. Since the functions involved in the problem are simple, it is likely that the sorted non-dominated points represent a fairly accurate
Pareto front for the problem. The Pareto front is continuous and convex for this problem. | {"url":"https://computationaldesignlab.github.io/surrogate-methods/multi_objective/intro_moo.html","timestamp":"2024-11-14T01:13:38Z","content_type":"text/html","content_length":"56658","record_id":"<urn:uuid:f6d0ba8a-ab68-4d59-b4ca-6d92bdf0a8d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00876.warc.gz"} |
Einstein equation
nLab Einstein equation
What are called Einstein’s equations are the equations of motion of gravity: the Euler-Lagrange equations induced by the Einstein-Hilbert action.
They say that the Einstein tensor $G$ of the metric/the field of gravity equals the energy-momentum tensor $T$ of the remaining force- and matter-fields:
$G = T \,.$
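Here the equation is stated in units in which the gravitational coupling constant has been absorbed into $T$. Written out in index notation with the constants restored (an added remark; $G_N$ denotes Newton's constant, to avoid a clash with the symbol for the Einstein tensor), it takes the conventional form

$G_{\mu \nu} = R_{\mu \nu} - \frac{1}{2} R \, g_{\mu \nu} = \frac{8 \pi G_N}{c^4} T_{\mu \nu} \,.$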
Existence and uniqueness
Given a choice of Cauchy surface $\Sigma$, the initial value problem for Einstein’s differential equations of motion is determined by a choice of Riemannian metric on $\Sigma$ and a second
fundamental form along $\Sigma$.
With this data a solution to the equation exists and is unique. (Klainerman-Nicolo 03).
A general discussion is for instance in section 11 of
A discussion of the vacuum Einstein equations (only gravity, no other fields) in terms of synthetic differential geometry is in
PDE theory
Genuine PDE theory for Einstein’s equations goes back to local existence results by Yvonne Choquet-Bruhat in the 1950s. Global existence in the presence of a Cauchy surface was then shown in
For further developments see
Last revised on September 14, 2016 at 15:23:43. See the history of this page for a list of all contributions to it. | {"url":"https://ncatlab.org/nlab/show/Einstein+equation","timestamp":"2024-11-15T01:36:04Z","content_type":"application/xhtml+xml","content_length":"48808","record_id":"<urn:uuid:78c4170a-a958-4394-9b2d-1fa29236f77e>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00190.warc.gz"} |
Eye Tracking with ARKit (iOS)- Part II
Utilizing LookAtPoint for Pinpointing User’s Gaze onto the Mobile Screen Plane
In this article, we’ll explore the practical application of eye gaze tracking on a mobile screen. Leveraging the power of ARKit’s ARFaceAnchor and the dynamic lookAtPoint property, this project
allows us to precisely determine where a user is looking on their device.
The code and resources you need are conveniently hosted in the GitHub repository. Basics about ARKit, ARFaceAnchor, BlendShapeLocations, and LookAtPoint covered in 1st part.
@0:45 You Navigate With Your Eyes
In a recent Apple event, the frontier of eye tracking technology was pushed further with the Vision Pro mixed-reality headset, showcasing a future where users can interact with applications using
just their eyes
Local Coordinate Space Vs World Coordinate System
Local Coordinate Space:
Imagine the local coordinate space of the ARFaceAnchor as a 3D grid centered on the face. The origin of this grid is at the center of the face, and positions and rotations are defined relative to
this origin. The lookAtPoint is a coordinate in this local space and indicates the direction the user is looking concerning their own face.
World Coordinate System:
Now, let’s expand this view to the world coordinate system. In the broader world space (the entire environment captured by the camera), every object, including the face anchor, has its own position
and orientation. The world coordinate system provides a global reference frame for all these objects.
The lookAtPoint is a point expressed in the local coordinate space of the face anchor. To convert it into a meaningful global position, you need to consider the anchor's own position and orientation
in the world coordinate system.
Simplified Explanation: Local Coordinate Space Vs World Coordinate System and their relation with lookAtPoint & ARFaceAnchor :
Think of the ARFaceAnchor as the center of a little world on your face. This is the Local Coordinate Space. In this space, everything is measured from the center of your face, like how you might
give directions using your own body as a reference point (raise your right hand).
Now, imagine you’re in a larger world, like a room. This is the World Coordinate System. In this big world, there are not only things on your face (like the ARFaceAnchor or where you’re looking),
but also everything else around you, including the device’s camera capturing your face.
So, when we talk about where you’re looking (lookAtPoint), we first figure it out in the little world on your face (Local Coordinate Space with the ARFaceAnchor as the center). Then, we take that
information and translate it to make sense in the bigger world around you (World Coordinate System), where there’s more than just your face — there’s the whole environment captured by the camera.
It’s like saying, “I’m pointing to my nose in my personal space, and now let’s find where that is in the entire room.”
lookAtPoint w.r.t. the world coordinate system
To convert the lookAtPoint from the local coordinate space of the ARFaceAnchor to the world coordinate system, we can use matrix multiplication. The process involves applying the transformation
matrix of the ARFaceAnchor to the lookAtPoint. This will bring the point from the local coordinate space of the face anchor to the world coordinate system.
let lookAtPointInWorld = faceAnchor.transform * simd_float4(lookAtPoint, 1)
In this line, faceAnchor.transform represents the transformation matrix of the ARFaceAnchor, and lookAtPoint is extended to a simd_float4 to facilitate matrix multiplication. The multiplication
operation combines the transformation matrix and the point, resulting in a new point that is now in the world coordinate system.
After this transformation, lookAtPointInWorld contains the coordinates of the lookAtPoint with respect to the world coordinate system.
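As a quick illustration of where this line lives in practice, here is a minimal sketch (not verbatim from the linked repository; the ViewController class and delegate wiring are assumptions) that performs the conversion each time ARKit updates the face anchor:

import ARKit

extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        // Only face anchors carry eye-tracking data.
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }

        // lookAtPoint is expressed in the face anchor's local coordinate space.
        let lookAtPoint = faceAnchor.lookAtPoint

        // Multiplying by the anchor's transform lifts the point into world coordinates.
        let lookAtPointInWorld = faceAnchor.transform * simd_float4(lookAtPoint, 1)

        // lookAtPointInWorld can now be related to other world-space objects, such as the camera.
        print(lookAtPointInWorld)
    }
}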
Camera Transformations
The camera is a crucial component in AR, responsible for capturing the real-world environment and providing the necessary data for rendering virtual objects in the correct perspective.
In ARKit, the camera.transform represents the transformation matrix of the device's camera. The transformation matrix is a mathematical construct that encapsulates translation, rotation, and scaling
operations in 3D space. Essentially, it contains information about how the camera is positioned and oriented relative to the world coordinate system.
cameraTransform = session.currentFrame?.camera.transform
Breakdown of what the camera.transform includes:
• Translation (Position): Specifies the location of the camera in 3D space (X, Y, Z coordinates).
• Rotation: Describes the orientation of the camera. This can include rotations around the X, Y, and Z axes.
• Scaling: Defines any scaling factors applied to the camera.
camera.transform is useful to transform the lookAtPoint (gaze direction) from the local coordinate space of the ARFaceAnchor to the world coordinate system.
In summary, the camera.transform provides a comprehensive representation of how the device's camera is positioned and oriented in 3D space, enabling accurate transformations between the real world
and the virtual world in AR applications.
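A minimal sketch of how the transform can be captured on every frame (assuming the class stores it in a cameraTransform property and is registered as the session's delegate; the names here are illustrative, not necessarily the repository's):

extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // frame.camera.transform encodes the camera's position and orientation
        // in the world coordinate system for this particular frame.
        cameraTransform = frame.camera.transform
    }
}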
lookAtPoint onto the mobile screen
To find the reflection of a point in the XY plane (phone screen plane) of a given camera transform, you can follow these steps:
1. Transform the LookAtPoint to Camera Coordinates:
• We have the lookAtPointInWorld in global coordinates (world coordinate system).
• Use the inverse of the camera transform to convert it to camera coordinates. This new point is referred to as lookAtPointInCamera.
let lookAtPointInCamera = simd_mul(simd_inverse(cameraTransform), lookAtPointInWorld)
2. Reflection in the XY Plane (Phone screen plane):
• To reflect a point in the XY plane, we can neglect its z-coordinate since lookAtPointInCamera is relative to camera coordinates.
• Coordinates — (transformedLookAtPoint.x, transformedLookAtPoint.y)
In conclusion, by transforming the lookAtPoint from global coordinates to camera coordinates and reflecting it in the XY plane, we successfully obtained the user’s gaze point coordinates in the
camera coordinates system.
FocusPoint on Mobile Screen
After obtaining the user’s gaze point coordinates in the camera coordinates system, the next step is to project this point onto the mobile screen. The process involves several key calculations:
let screenX = transformedLookAtPoint.y / (Float(Device.screenSize.width) / 2) * Float(Device.frameSize.width)
let screenY = transformedLookAtPoint.x / (Float(Device.screenSize.height) / 2) * Float(Device.frameSize.height)
let focusPoint = CGPoint(
x: CGFloat(screenX).clamped(to: Ranges.widthRange),
y: CGFloat(screenY).clamped(to: Ranges.heightRange)
)
Normalizing the Coordinates:
• transformedLookAtPoint.y / (Float(Device.screenSize.width) / 2): This step normalizes the y-coordinate of the gaze point relative to half of the screen width, resulting in a value between -1 and 1.
• transformedLookAtPoint.x / (Float(Device.screenSize.height) / 2): Similarly, the x-coordinate is normalized relative to half of the screen height.
Scaling to Screen Size:
• * Float(Device.frameSize.width): The normalized x-coordinate is then scaled to the full width of the screen.
• * Float(Device.frameSize.height): Likewise, the normalized y-coordinate is scaled to the full height of the screen.
Creating a CGPoint:
• let focusPoint = CGPoint(...): The scaled coordinates are then used to create a CGPoint, representing the user's focus point on the mobile screen.
Clamping to Screen Boundaries:
• clamped(to: Ranges.widthRange) and clamped(to: Ranges.heightRange): To ensure the focus point stays within the screen boundaries, these clamping operations restrict the coordinates to valid on-screen ranges (see the sketch below, which puts all of these steps together).
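Putting the pieces together, a single helper can map the gaze onto a screen point. This is a sketch under the article's assumptions: Device.screenSize (physical size), Device.frameSize (size in points), Ranges, and the clamped(to:) extension are helpers from the accompanying project, and the function name and wiring are illustrative rather than the repository's exact code:

func focusPoint(for faceAnchor: ARFaceAnchor, cameraTransform: simd_float4x4) -> CGPoint {
    // 1. Face-local space -> world space.
    let lookAtPointInWorld = faceAnchor.transform * simd_float4(faceAnchor.lookAtPoint, 1)

    // 2. World space -> camera space (undo the camera's own transform).
    let transformedLookAtPoint = simd_mul(simd_inverse(cameraTransform), lookAtPointInWorld)

    // 3. Drop z, then normalize and scale onto the screen; x and y are swapped
    //    because the camera frame is landscape while the screen is used in portrait.
    let screenX = transformedLookAtPoint.y / (Float(Device.screenSize.width) / 2) * Float(Device.frameSize.width)
    let screenY = transformedLookAtPoint.x / (Float(Device.screenSize.height) / 2) * Float(Device.frameSize.height)

    // 4. Clamp so the focus point never leaves the visible screen area.
    return CGPoint(
        x: CGFloat(screenX).clamped(to: Ranges.widthRange),
        y: CGFloat(screenY).clamped(to: Ranges.heightRange)
    )
}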
Eye Tracking with ARKit
Limitations of eye tracking with ARKit
• Head movement: Head movement can affect the accuracy of eye tracking, especially if the user is moving their head quickly.
• Blinking: Blinking can also affect the accuracy of eye tracking, as the camera may not be able to track the user’s eyes during a blink.
• Ambient lighting: Ambient lighting can also affect the accuracy of eye tracking, as the camera may not be able to see the user’s eyes properly if the lighting is too bright or too dark.
• Other Factors: Glasses and accessories, Eyelid occlusion, Eye conditions, Distance and angle, Limited field of view:, Environmental factors, Hardware limitations | {"url":"https://shiru99.medium.com/eye-tracking-with-arkit-ios-part-ii-2723f9bfe04e?source=user_profile_page---------2-------------a19e9a6bab29---------------","timestamp":"2024-11-09T12:52:41Z","content_type":"text/html","content_length":"146416","record_id":"<urn:uuid:5297ec1b-6c73-400d-b258-ee81e8397cd0>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00482.warc.gz"} |
RE: st: RE: RE: comparing different means using ttest
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
RE: st: RE: RE: comparing different means using ttest
From DE SOUZA Eric <[email protected]>
To "[email protected]" <[email protected]>
Subject RE: st: RE: RE: comparing different means using ttest
Date Fri, 17 Dec 2010 15:27:26 +0100
I agree with you completely. And I drum it into my students. Which is why I organise my introductory econometrics course according to data structure: cross-section data, time series data and panel data, emphasising the assumptions required, the consequences of their non-realisation and possible remedies.
Eric de Souza
College of Europe
BE-8000 Brugge
-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Nick Cox
Sent: vrijdag 17 december 2010 14:33
To: '[email protected]'
Subject: RE: st: RE: RE: comparing different means using ttest
I'd leave economists to discuss that one. My larger point remains that applying tests that ignore time series structure to data that are time series is a dubious and dangerous thing to do.
[email protected]
DE SOUZA Eric
" The regression still assumes independent error terms."
True. But GDP does often behave as a random walk (with structural breaks, maybe). Hence the error terms are very likely to be uncorrelated.
One could also robustify against serial correlation in the error terms.
Nick Cox
The regression still assumes independent error terms. There is more scope for doing something about that in a regression framework than within -ttest-, but in terms of what Eric suggested it is still a matter of six on one side and half-a-dozen on the other.
DE SOUZA Eric
It does, because it simply avoids the starting point of David Lempert which in my opinion is a false start: regressing GDP levels on a time trend will get you nowhere. If David is interested in testing the equality of GDP growth rates across two time periods, you pool the data, calculate the GDP growth rate and regress this variable on two dummy (binary) variables for each time period. In order to avoid perfect collinearity you drop one of the two dummies and test whether the coefficient on the other is equal to zero.
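For readers following the thread, Eric's pooled-dummy suggestion can be written out explicitly (this is just a restatement of his description, not additional output). With $g_t$ the GDP growth rate and $D_t$ a dummy equal to 1 in the second sub-sample and 0 in the first, the regression is

$$ g_t = \beta_0 + \beta_1 D_t + \varepsilon_t , $$

and the hypothesis of equal mean growth rates is $H_0\colon \beta_1 = 0$. The OLS estimate of $\beta_1$ is exactly the difference between the two sub-sample mean growth rates, so this reproduces the two-sample comparison while allowing heteroskedasticity-robust (or HAC) standard errors, as suggested later in the thread.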
Steven Samuels
But. Eric, I don't think that pooling will solve the dependence issues that Nick mentioned.
On Dec 16, 2010, at 1:26 PM, DE SOUZA Eric wrote:
Why not just pool your data and regress %GDP-growth on a dummy
(binary) variable (and a constant, of course) which takes the value of one for one of the two sub-samples and zero for the other; and test whether the coefficient on the dummy is significantly different from zero (or examine its confidence interval) ?
You can robustify for heteroscedasticity.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"https://www.stata.com/statalist/archive/2010-12/msg00675.html","timestamp":"2024-11-13T14:38:44Z","content_type":"text/html","content_length":"16111","record_id":"<urn:uuid:09e3b603-3bc6-4e3b-901d-c10435b6bf9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00741.warc.gz"} |
On projective representations on finite abelian group
Saeed [11] has considered Schur multipliers of some of the finite abelian groups. The study of the Schur multipliers of abelian groups is the first step in the study of the projective representations of such groups. Our objective here is to determine the inequivalent irreducible projective representations of these groups which correspond to certain classes of factor sets. Let C_m^(n) denote the direct product of n cyclic groups C_m of order m. Then in [9] and [10] the α-regular classes have been determined, these being the classes at which non-trivial projective representations with factor set α take on non-zero character values. Here we review these results, and determine the inequivalent irreducible characters corresponding to these α-regular classes. In particular, a complete set of inequivalent irreducible projective characters is obtained for these classes. The following is a brief description of how the work in the sequel has been organised. Chapter one gives the basic facts about factor sets and projective representations of finite groups together with some of their properties. The concepts of Schur multipliers and twisted group algebras are also considered. The central and stem extensions of finite groups are discussed in chapter two, while chapter three is concerned with projective character theory. Here the interest is in reviewing those properties of projective characters which are analogous to those of ordinary characters. Finally, the work in the previous chapters is applied in chapter four to obtain the irreducible projective characters of certain finite abelian groups; the results follow the works of Morris and Saeed (cf. [8], [9], [10] and [11]).
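For context, one standard fact that sits behind this classification (it is not stated in the abstract itself): the Schur multiplier of the group considered here is

$$ M\bigl(C_m^{(n)}\bigr) \cong C_m^{\,n(n-1)/2}, $$

so the factor sets of C_m^(n) are classified, up to equivalence, by this group.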
Finite Groups , Linear Algebraic Groups | {"url":"https://dspace.unza.zm/items/848aad78-b99a-4215-9250-0d894a607756","timestamp":"2024-11-09T19:53:55Z","content_type":"text/html","content_length":"375902","record_id":"<urn:uuid:64bd439f-87f2-4fe6-beb9-dc6b3bd9671a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00538.warc.gz"} |
AUTOCAD TUTORIAL: Chapter 2 Introduction Of 2D Drawing Tool > Circle Tool - Mech4study
AUTOCAD TUTORIAL: Chapter 2 Introduction Of 2D Drawing Tool > Circle Tool
AutoCAD has a wide library of commands. There are many commands which used to drawing any type of drawing. Today i am going to tell you about various method to drawing circle in AutoCAD.
Circle tool:
Short key: C enter
To draw a circle, first select the Circle tool by clicking the circle icon on the Draw toolbar, or simply press C and then Enter.
There are six methods to draw a circle, which are explained below.
1. Center-radius method:
This method is used when we know the center and the radius of the circle.
• Select the Circle tool from the quick access toolbar, or press C and Enter.
• Now specify the center point of the circle by typing its x,y coordinates and pressing Enter. If you don’t have a specific location, click anywhere in the drawing area.
• Now enter the radius of the circle and press Enter.
To draw a circle whose center lies at the origin and whose radius is 5:
C enter (select the tool)
0,0 enter (specify the center point)
5 enter (specify the radius)
2. Center diameter method:
This method is used when we know the center and the diameter of the circle.
• Select the Circle tool from the quick access toolbar, or press C and Enter.
• Now specify the center point of the circle by typing its x,y coordinates and pressing Enter. If you don’t have a specific location, click anywhere in the drawing area.
• Now press D and Enter.
• Now enter the diameter of the circle and press Enter.
To draw a circle whose center is (0,2) and whose diameter is 8:
C enter (select the tool)
0,2 enter (specify the center point)
D enter (select the diameter input option)
8 enter (specify the diameter)
3. Two point method:
This method is used when we know two points through which the circle passes.
• Select the Circle tool from the quick access toolbar, or press C and Enter.
• Now press 2P and Enter.
• Now specify the two points by simply left-clicking with the mouse in the drawing area. You can also use absolute coordinates by entering the first point as x,y, and the second point in the same way.
To draw a circle which passes through the two points (4,0) and (8,0):
C enter (select the tool)
2P enter (select the two-point option)
4,0 enter (specify the first point)
8,0 enter (specify the second point)
4. Three point method:
This method is used when we know three points through which the circle passes.
• Select the Circle tool from the quick access toolbar, or press C and Enter.
• Now press 3P and Enter.
• Now specify the three points through which the circle passes by simply left-clicking with the mouse in the drawing area. You can also use absolute coordinates by entering the first point as x,y, and the second and third points in the same way.
To draw a circle which passes through the three points (4,2), (8,4) and (2,2):
C enter (select the tool)
3P enter (select the three-point option)
4,2 enter (specify the first point)
8,4 enter (specify the second point)
2,2 enter (specify the third point)
5. Tangent tangent radius method:
This method is used when we know two tangents of the circle and the radius of the circle.
• Select the Circle tool from the quick access toolbar, or press C and Enter.
• Now press TTR or T and Enter.
• Now specify the first and second tangents by clicking on each tangent with the mouse.
• Now enter the radius of the circle and press Enter.
To draw a circle which has two tangents, line AB and curve PQR, and whose radius is 8:
C enter (select the tool)
T enter (select the tangent-tangent-radius option)
Click the line AB.
Click the curve PQR.
8 enter (enter the radius)
6. Tangent-Tangent-Tangent method:
This method is used when we know three tangents of the circle.
• Select the tangent-tangent-tangent option from the drop-down menu of the Circle tool on the quick access toolbar.
• Now specify the first, second and third tangents by clicking on each tangent with the mouse.
To draw a circle which has three tangents, line AB, line CD and curve PQR:
Select the circle TTT tool from the quick access toolbar > Draw toolbar > Circle > drop-down menu > TTT.
Click line AB.
Click line CD.
Click curve PQR.
Problem 1 : draw a circle whose radius is 8 and center lies at point (4,4).
Problem 2 : draw the following diagram-
Problem 3 : draw a circle which has the following three tangents:
1. line AB, which passes through the points (3,4) and (6,8).
2. line CD, which passes through the origin and makes a 45 degree angle with the x axis.
3. line EF, which passes through the points (0,4) and (0,8).
Leave a Comment | {"url":"https://mech4study.com/uncategorized/autocad-tutorial-chapter-2-introduction-of-2D-drawing-tool-circle-tool.html/","timestamp":"2024-11-14T18:33:43Z","content_type":"text/html","content_length":"201298","record_id":"<urn:uuid:bc692965-36b5-4f86-b838-d78d9d2ebc4d>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00164.warc.gz"} |
Finding gravitational potential inside solid sphere
• Thread starter demonelite123
• Start date
In summary, the conversation discusses finding the gravitational potential and force inside and outside a solid sphere (the Earth) of radius R for a mass m. The approach involves treating the Earth
as a collection of spherical shells and using integration to derive the potential and force equations. However, there is a discrepancy in the calculation of the constant CM for the potential inside
the sphere, which is later resolved by correcting the calculation for the potential due to the shells enclosing the mass m.
So I am given that the gravitational potential of a mass m a distance r away from the center of a spherical shell with mass m' is -Cm'/r for m outside the shell and constant for m inside the shell.
I am to find the potentials inside and outside a solid sphere (the earth) of radius R as well as the gravitational force inside and outside on a mass m.
I thought of the Earth as a lot of spherical shells of mass dm so if the mass of the solid sphere is M, i integrated for example -Cdm/r from m = 0 to m = M to get -CM/r outside the sphere. Then
taking the negative gradient, i find F = (-CM/r^2) e[r]. Then since the gravitational force on the surface of the Earth is -mg, i see that -CM/R^2 = -mg or CM = mgR^2.
now for inside the sphere, i have the potential to be D - CM'/r where D is a constant (due to the shells that enclose the mass m) and M' is the total mass of the shells that do not enclose the mass
m. since the sphere has uniform density, we have M'/M = r^3/R^3 so the potential is D - CMr^2/R^3. Taking the negative gradient once again, i get 2CMr/R^3 and since the force at the surface is -mg, i
get 2CM/R^2 = -mg or CM = (-1/2)mgR^2.
but earlier, i got that the constant CM = mgR^2.
why does my constant CM have 2 different values? have i done something wrong?
i think i figured it out. i didn't calculate the potential correctly for the case that m was inside the earth. while my -CM'/r was correct, the term D was incorrect. after correctly calculating the
potential due to all the shells that enclose the mass m, i do indeed get the same value of CM as i did in the other case.
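For anyone who wants the corrected computation written out, here is one way to do it, using the thread's convention that a shell of mass m' contributes a potential of -Cm'/r outside the shell and a constant (its surface value) inside. Treating the Earth as nested shells of mass dm = (3Ms^2/R^3) ds, the shells below radius r contribute -C(Mr^3/R^3)/r, and each shell above r contributes its surface value -C dm/s, so

$$ \Phi(r) = -\frac{CMr^2}{R^3} - \int_r^R \frac{C}{s}\,\frac{3Ms^2}{R^3}\,ds = -\frac{CMr^2}{R^3} - \frac{3CM}{2R^3}\bigl(R^2 - r^2\bigr) = -\frac{CM\bigl(3R^2 - r^2\bigr)}{2R^3}. $$

The force on the mass is then $F(r) = -\,d\Phi/dr = -CMr/R^3$ (directed toward the center), and requiring $|F(R)| = mg$ at the surface gives $CM = mgR^2$, the same constant obtained from the exterior solution.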
FAQ: Finding gravitational potential inside solid sphere
1. How is gravitational potential defined?
Gravitational potential is defined as the amount of work required to move an object from an infinite distance to a specific point in a gravitational field. It is a scalar quantity and is measured in
joules per kilogram (J/kg).
2. What is the formula for gravitational potential inside a solid sphere?
The formula for the gravitational potential inside a uniform solid sphere is V(r) = -GM(3R^2 - r^2)/(2R^3), where G is the gravitational constant, M is the mass of the sphere, r is the distance from the center of the sphere, and R is the radius of the sphere. In terms of the density ρ this is V(r) = -2πGρ(R^2 - r^2/3).
3. How does the gravitational potential inside a solid sphere vary with distance from the center?
The gravitational potential inside a solid sphere is most negative at the center and increases (becomes less negative) as the distance from the center grows, reaching -GM/R at the surface. The gravitational force, on the other hand, grows linearly with the distance r inside the sphere, because the enclosed mass grows as r^3 while the 1/r^2 factor only falls off as r^2.
4. What is the difference between gravitational potential inside a solid sphere and outside a solid sphere?
The main difference is the functional form: inside the sphere the potential varies quadratically with the distance, V(r) = -GM(3R^2 - r^2)/(2R^3), while outside it takes the point-mass form V(r) = -GM/r. Correspondingly, the force grows linearly with r inside the sphere and falls off as 1/r^2 outside; gravity is attractive in both regions.
5. Can gravitational potential inside a solid sphere be negative?
Yes. With the usual convention that the potential is zero at infinite distance, the gravitational potential is negative everywhere, both inside and outside the sphere; it is most negative at the center, where V(0) = -3GM/(2R). Only if the potential is instead measured relative to its value at the center does it take positive values away from the center; in particular, beyond the
radius, the gravitational potential becomes positive. | {"url":"https://www.physicsforums.com/threads/finding-gravitational-potential-inside-solid-sphere.661922/","timestamp":"2024-11-06T05:59:48Z","content_type":"text/html","content_length":"77446","record_id":"<urn:uuid:a7fc7682-a584-4985-ad02-ac381c6a9312>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00146.warc.gz"} |
Module compare
Expand description
Provides various ways to compare crate::encoding::Encodings.
Some of the comparisons require an crate::smt::SmtSolver. You can use the liblisa-z3 crate to import bindings to the Z3 SMT solver.
• A condensed summary of the architecture comparison, suitable for export to a file.
• A matrix of pairwise comparisons between all architectures.
• For each architecture, a group of overlapping encodings.
• The result of encodings_semantically_equal. Semantics are guaranteed to be equivalent if equal && !timeout holds.
• A mapping between part indices of two encodings.
• A row in the architecture comparison table.
• The key for a row in an architecture comparison table. All encodings that have the same RowKey should be grouped into the same row.
• A collection of rows.
• The result of a pairwise comparison between two encodings.
• Represents the equivalence between two encodings.
• Splits the encoding group into multiple groups, such that each group contains encodings that cover the exact same space. This function is one-to-many. That is, it will generate many smaller
groups that each cover a small subset.
• Compares the
of the two encodings, and returns whether they are equal.
• Determines if the dataflows of the encodings provided are equal, ignoring the actual computations. Only compares the overlapping parts of the covered instruction space.
• Determines whether two encodings are semantically equivalent.
• Only compares the overlapping parts of the covered instruction space. | {"url":"https://docs.liblisa.nl/liblisa/compare/","timestamp":"2024-11-06T07:17:01Z","content_type":"text/html","content_length":"10396","record_id":"<urn:uuid:64993dca-ef18-49e7-8370-5a91fa8d1b4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00493.warc.gz"} |
Simulation Tutorial - Models
Simulation Models
A simulation model is a mathematical model that calculates the impact of uncertain inputs and decisions we make on outcomes that we care about, such as profit and loss, investment returns,
environmental consequences, and the like. Such a model can be created by writing code in a programming language, statements in a simulation modeling language, or formulas in a Microsoft Excel
spreadsheet. Regardless of how it is expressed, a simulation model will include:
• Model inputs that are uncertain numbers -- we'll call these uncertain variables
• Intermediate calculations as required
• Model outputs that depend on the inputs -- we'll call these uncertain functions
It's essential to realize that model outputs that depend on uncertain inputs are uncertain themselves -- hence we talk about uncertain variables and uncertain functions. When we perform a simulation
with this model, we will test many different numeric values for the uncertain variables, and we'll obtain many different numeric values for the uncertain functions. We'll use statistics to analyze
and summarize all the values for the uncertain functions (and, if we wish, the uncertain variables).
Creating Models in Excel or Custom Programs
An Excel spreadsheet can be a simple, yet powerful tool for creating your model -- especially when paired with Monte Carlo simulation software such as Risk Solver. If your model is written in a
programming language, Monte Carlo simulation toolkits like the one in Frontline's Solver Platform SDK provide powerful aids.
An example model in Excel might look like this, where cell B6 contains a formula =PsiTriangular(E9,G9,F9) to sample values for the uncertain variable Unit Cost, and cell B10 contains a formula =
PsiMean(B9) to obtain the mean value of Net Profit across all trials of the simulation.
A portion of an example model in the C# programming language might look like this, where the array Var[] receives sample values for the two uncertain variables X and Y, and the uncertain function
values are computed and assigned to the Problem's FcnUncertain object Value property:
Choosing Samples for Uncertain Variables
We must also choose what random sample values to use for the uncertain variables. During a simulation, a new sample value will be drawn for every uncertain variable on each trial. Risk Solver
provides state-of-the-art random number generators and sampling methods for your simulation needs.
In the simplest case, we might generate random numbers between 0 and 1, and use these as sample values. But in most cases, the range of values, and chance that different values in the range will be
drawn on each trial, must be tailored to the uncertain variable. To do this, we normally choose a probability distribution and appropriate parameters for the uncertain variable. As discussed next,
selecting appropriate probability distributions is a key step in building a simulation model. | {"url":"https://www.solver.com/simulation-models","timestamp":"2024-11-07T09:14:09Z","content_type":"text/html","content_length":"60201","record_id":"<urn:uuid:196da969-6a8a-44b3-b227-8afa744a9abb>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00230.warc.gz"} |
deduction unit for grinding mill
WEBThe Metso stirred mills are suitable for a large range of product sizes. The standardized range includes chamber units of up to 50,000 liters and the world's largest industry units with up to
6,500 kW of installed power. Floor space use is optimized, which reduces investment costs, and installation is easy. All our stirred mills are part of ...
WEBAug 14, 2019 · Above 500,000 the general Federal tax rate is 15%. For Ontario, the small business limit is 500,000. Up to 500,000 the Ontario tax rate is % and above 500,000 it is %. From tax
years commencing from Jan 1, 2019, the business limit will be reduced by 5 for every 1 of investment income above 50,000 (the term used is "Adjusted ...
WEBJul 29, 2016 · The pregrind process. Prior to finish the ingredient grinding step, a process called pregrinding is often carried out. It usually involves a Full Circle Hammer Mill to
individually grind whole grains such as wheat, rice, corn and others. This is typically done using screen sizes of 8/64" – 9/64" ( mm– mm).
WEBNCP International offers new Grinding Mills to clients throughout the world. We specialize in the supply of new grinding mills. Our designs are completed using modern FEA design techniques and
3D simulation packages. This ensures that our clients receive optimized, fitforpurpose mill designs at the best project pricing every time.
WEBOct 4, 2019 · In the present work, vibration, acoustic and thermal signals were correlated to the semiautogenous grinding mill working parameters such as total power and inlet water flow rate,
and then these parameters were monitored using vibration, acoustic and thermal analyses. Next, the influential controlling parameters were obtained to monitor the mill .
WEBHighly energy intensive unit operation of size reduction in cement industry is intended to provide a homogeneous and super fine (Blaine) cement. Grinding operation is monitored for following
parameters to ensure objectivity and economy of operation. ... About 27 to 35 % volume of mill is filled with grinding media. Equilibrium charge ...
WEBJan 3, 2019 · Clinker grinding technology is the most energyintensive process in cement manufacturing. Traditionally, it was treated as "low on technology" and "high on energy" as grinding
circuits use more than 60 per cent of total energy consumed and account for most of the manufacturing cost. Since the increasing energy cost started burning the benefits .
WEBA grain mill (or Flour grinder) is a grinder that can be used to grind wheat, oats, barley, corn, and other cereals into a fine powder or flour to use in baking and cooking. According to
Wikipedia "A gristmill (also: grist mill, corn mill, flour mill, feed mill, or feedmill) grinds cereal grain into flour and middlings.
WEBJul 3, 2021 · Evan Doran. Associate Editor, Modern Machine Shop. On its surface, grinding seems simple: a machine takes a rotating tool (usually a wheel) with abrasive grains and applies it to
a workpiece's surface to remove material. Each grain is its own miniature cutting tool, and as grains dull, they tear from the tool and make new, sharp grains .
WEBDec 7, 2023 · Regardless of your project or material, whether it's metal or plastic, our CNC machines empower you to create whatever you envision. To fully harness the precision and control of
these benchtop mills, all you need is a computer, some time, and a willingness to learn. Desktop CNC Mills CNC Baron Milling Machine Max CNC Mill CNC JR. Table ...
WEBAug 10, 2022 · Therefore, producing cement with less energy is becoming a key element of profitability: as the grinding process consumes about 60 per cent of the total plant electrical energy
demand and about 20 per cent of cement production variable cost. So efficient grinding unit selection impacts profitability of cement manufacturing.
WEBNETZSCH Discus Grinding System The universal Grinding System Discus is the byword for high-performance agitator bead mills with disk agitator. The high length/diameter ratio, the different grinding disk geometries, the various material options, as well as the highly efficient grinding bead separation system facilitate an application-specific design of this .
STEP 3 : Estimate the power consumption of a grinding mill for a particular application. The Bond 3rd theory of comminution estimates the power required to mill a particular ore with the following formula [Chopey]: W = 10*Wi*(1/√P80 − 1/√F80). With: W = power consumption of the mill (kWh/t).
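As a purely illustrative check of the reconstructed formula (the numbers below are made up for the example; Wi is the Bond work index in kWh/t, and F80 and P80 are the 80% passing sizes of the feed and product in microns): for Wi = 12, F80 = 10,000 µm and P80 = 100 µm,

$$ W = 10 \times 12 \times \left(\frac{1}{\sqrt{100}} - \frac{1}{\sqrt{10{,}000}}\right) = 120 \times (0.10 - 0.01) = 10.8 \text{ kWh/t}, $$

so a circuit treating 100 t/h of this ore would need roughly 1.1 MW of net grinding power.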
WEBThe Contender™ Series is our newest line of premium spare parts for nonMetso grinding mills. The line includes grinding mill heads, shells, gears and pinions, main bearings, trunnions,
trunnion liners, and more for select machines. For certain designated parts, additional enhancements have been made for increased safety, reliability and ...
WEBMay 1, 2014 · The ball size distribution (BSD) in a mill is usually not known, as the measurement of the charge size distribution requires dumping the load and laboriously grading the balls
into size classes. Fortunately we had one set of data as discussed below. The general nonavailability of BSD necessitates the use of ball wear theory to estimate .
WEBThe Rotor Beater Mill SR 300 is ideal for coarse and fine size reduction, in batches or continuously. ... optional grinding inserts 180° for grinding of hardbrittle materials by additional
impact; ... optionally available cyclone unit with 1 2 5 or 30 liters collecting vessel;
WEBDec 1, 2016 · Table 4: Slopes and coefficients of determination of the three grinding equations (Kick, Rittinger, and Bond) for grinding pine. Form of equations: Kick's equation, n = 1; Rittinger ...
WEBMachine Sizes . The Zeta ® grinding system is available in sizes ranging from the Mini/MicroSeries laboratory mills with grinding chamber volumes of l to production machines with grinding
chamber volumes of 400 l. Full scaleup of the results achieved on the laboratory scale is possible. Cleaning made easy. The optimized cleaning concept .
WEBThe highspeed mill system Zeta ® with improved peg grinding system optimizes your production capacity, energy demand and quality. Designed for circulation operation and multipass operation,
you achieve high throughput rates and high quality with a narrow particle size distribution for higher viscous products.. A minimal control expenditure is .
WEBThe SAG mill was designed to treat 2,065 t h −1 of ore at a ball charge of 8% volume, total filling of 25% volume, and an operating mill speed of 74% of critical. The mill is fitted with 80 mm
grates with total grate open area of m 2 ( Hart et al., 2001 ). A m diameter by m long trommel screens the discharge product at a cut size ...
WEBFeb 26, 2021 · The paper highlights the features of constructing a model of a wet semiautogenous grinding mill based on the discrete element method and computational fluid dynamics. The model
was built using Rocky DEM (v., ESSS, Brazil) and Ansys Fluent (v. 2020 R2, Ansys, Inc., United States) software. A list of assumptions and boundary .
WEBAug 2, 2013 · Based on his work, this formula can be derived for ball diameter sizing and selection: Dm <= 6 (log dk) * d^ where D m = the diameter of the singlesized balls in = the diameter
of the largest chunks of ore in the mill feed in mm. dk = the P90 or fineness of the finished product in microns (um)with this the finished product is ...
WEBCharacteristics: Small batch sizes: Batch size with 1 g API. Flexibility: Different sizes of grinding chambers available. Suitable for grinding media up to 2 mm. Screening: Test up to 40
samples at one time. Scaleup: Similar results on all sizes of the DeltaVita® series. Coolable production room.
WEBThe laboratory mills of the MiniSeries are available in three different material designs: The MiniCer® and MiniPur enable metal free fine grinding of your high quality products. All grinding
chamber parts of the MiniCer® are of wear resistant NETZSCHCeram Z or NETZSCH Ceram_C/ NETZSCHCeram N or in the case of the MiniPur of NElast.
WEBPPS Air Classifier Mills are high energy grinding mills with an integral classifier to produce ultrafine powders. Contact Us; Put Us To The Test ... from small lab pilot units through to 400
hp production units. PPS Air Classifier Mill Range. Model: Main Drive Power: Classifier Drive Power: Approx. Airflow: Lab CMT: hp: 1 hp: 175 cfm: 1 ...
WEBApr 14, 2016 · A 100 lb. representative sample of the ball mill feed is sufficient for the unit cell flotation tests. Flotation in a Grinding Circuit. The simplest flotation circuit is a
comparatively recent innovation. It consists of the introduction of a flotation cell into the grinding circuit between ball mill and classifier as shown below.
WEBOur extensive experience and wide range of mills are available for the fine cutting, fine grinding and ultrafine grinding of dry products of all desired finenesses. Highperformance classifiers
for the finest of products, round off our program. From individual mills to complete turnkey milling systems; nonPSR or dust explosion protected ...
WEBJun 20, 2015 · The critical speed of a rotating mill is the RPM at which a grinding medium will begin to "centrifuge", namely will start rotating with the mill and therefore cease to carry out
useful work. Ball mills have been successfully run at speeds between 60 and 90 percent of critical speed, but most mills operate at speeds between 65 and 79 percent ...
WEBJun 2, 2017 · Crushers, grinding mills and pulverizers are types of grinding equipment used to transform or reduce a coarse material such as stone, coal, or slag into a smaller, finer
material. Grinding equipment can be classified into to two basic types, crushers and grinders. Industrial crushers are the first level of size reducer; further granularization ...
WEBOur largest and strongest products include dry and wet milling machines that grind hard, abrasive materials for applications like mineral processing, cement plants, and power generation. They are characterized by their excellent wear life, high availability and easy maintenance. At the other end of the scale we have ultra fine mills that ...
WEBJan 23, 2022 · Preparing the Run-of-Mine (ROM) bauxite for the grinding mill circuit must be completed in order to efficiently size the grinding mill circuit. The focus of the Sect. discussion is about choosing the correct crusher for the bauxite application, including the considerations for using a drum scrubber or not. Bauxite mines are in many places.
WEBJan 1, 2016 · Therefore, it demands higherlevel advanced control (as shown in the right part of Fig. 1) of the whole grinding plant to achieve integrated control and optimization of the
indices of control, operation and mineralogical economics. This is also an important way to boost benefit and increase the market competitiveness of mineral processing companies.
WEBYou need a twostage solution, first stage opencircuit mill and then second stage closedcircuit mill. First stage, will be broken into two parts as well, you use a Bond rod mill work index for
the coarse component of the ore (+ mm) and the Bond ball mill work index for the fine component ( mm). It would look like this:
WEBThe ultrafine grinding of pigments places high demands on the machine technology to be employed. The finest, absolutely gritfree granulation, lowresidue processing with minimal contamination,
as well as fast, thorough cleaning when switching products are the minimum requirements. Technical Article from NETZSCH published on processworldwide ...
WEBMay 10, 2012 · One of the most efficient processes for the hard finishing of gears in batch production of external gears and gear shafts is the continuous generating gear grinding. The
generating gear grinding is used for the hard finishing of gears with a module of mn = mm to mn = 10 mm [2], [3]. By the application of new machine tools the process can .
WhatsApp: +86 18838072829 | {"url":"https://bernardgueringuide.fr/Sep/03_deduction-unit-for-grinding-mill.html","timestamp":"2024-11-07T06:44:29Z","content_type":"application/xhtml+xml","content_length":"31272","record_id":"<urn:uuid:d42fca92-2f23-478c-bcde-1af10265210e>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00893.warc.gz"} |
How do you calculate percentage change in pivot table?
How do you calculate percentage change in pivot table?
Right-click on a value in the second column, point to “Show Values,” and then click the “% Difference from” option. Select “(Previous)” as the Base Item. This means that the current month value is
always compared to the previous months (Order Date field) value.
How do you calculate percentages in a pivot table Excel 2007?
When the Value Field Settings window appears, click on the “show values as” tab. Then select “% of total” from the drop down list. Click on the OK button. Now when you view your pivot table, you
should only see the Totals displayed as a percentage of the Grand Total.
How do you calculate a change in a pivot table?
Edit a calculated field formula
1. Click the PivotTable.
2. On the Options tab, in the Tools group, click Formulas, and then click Calculated Field.
3. In the Name box, select the calculated field for which you want to change the formula.
4. In the Formula box, edit the formula.
5. Click Modify.
How do you calculate percentage change in Excel?
The formula =(new_value-old_value)/old_value can help you quickly calculate the percentage change between two numbers. Please do as follows. 1. Select a blank cell for locating the calculated
percentage change, then enter formula =(A3-A2)/A2 into the Formula Bar, and then press the Enter key.
How do I calculate percentage in a PivotTable?
Right-click anywhere in the % of wins column in the pivot table. Select Value Field Settings > Show Values As > Number Format > Percentage. Click OK twice.
How do you find Percent change and decrease?
How to Calculate Percentage Decrease
1. Subtract starting value minus final value.
2. Divide that amount by the absolute value of the starting value.
3. Multiply by 100 to get percent decrease.
4. If the percentage is negative, it means there was an increase and not a decrease.
How do you calculate percentage in Excel?
Basic Excel percentage formula
1. Enter the formula =C2/B2 in cell D2, and copy it down to as many rows as you need.
2. Click the Percent Style button (Home tab > Number group) to display the resulting decimal fractions as percentages.
How do you calculate the percentage change in Excel?
If you want to calculate percentage change in Excel, this can be done using a simple Excel formula. Generally, if you have two numbers, a and b, the percentage change from a to b is given by the formula:
percentage change = ( b – a ) / a.
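With illustrative numbers (not taken from the article): if a value goes from 40 in January to 46 in February, then

$$ \text{percentage change} = \frac{46 - 40}{40} = 0.15 = 15\%, $$

which is exactly what =(new_value-old_value)/old_value returns once the cell is formatted as a percentage.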
How do you add percentages to a pivot table?
Select any cell in the new data field, and from the PivotTable toolbar, select Field Settings (in Excel 97, select PivotTable Field). In the Name box, type the new heading text: % Quantity. From Show
Data as, choose % of Total and click OK. To move the new field, select the column in the PivotTable report and drag to a new position.
How do you count unique values in a pivot table?
Instead of a unique count, the pivot table is counting each record that has a store number. So, the result is really a count of the orders, not a count of the unique stores. As a workaround, you can
add a column to the pivot table source data, and use a formula to calculate one or zero in each row.
How do you add a custom column to a pivot table?
Click Calculated Field on the drop-down menu. It will open a new window where you can add a new, custom column to your Pivot Table. Enter a name for your column in the “Name” field . Click the Name
field, and type in the name you want to use for your new column. | {"url":"https://morethingsjapanese.com/how-do-you-calculate-percentage-change-in-pivot-table/","timestamp":"2024-11-09T04:35:29Z","content_type":"text/html","content_length":"131126","record_id":"<urn:uuid:a77b4972-ca82-4154-8b04-9e855e100ee4>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00669.warc.gz"} |
How to Find the Area of a Rectangular Prism
A rectangular prism's two identical ends are rectangles, and as a result, the four sides between the ends are also two pairs of identical rectangles. Because a rectangular prism has six rectangular
faces or sides, its surface area is just the sum of the six faces, and because each face has an identical opposite, you can calculate the surface area with the formula 2 * length * width + 2 * width
* height + 2 * height * length, where length, width and height are the prism's three dimensions.
Find the length, width and height measurements of the prism. For this example, let the length be 12, the width be 10 and the height be 20.
Multiply the length and width, then double that product. In this example, 12 multiplied by 10 equals 120, and 120 multiplied by 2 equals 240.
Multiply the width and height, then double that product. In this example, 10 multiplied by 20 equals 200, and 200 multiplied by 2 equals 400.
Multiply the height by the length, then double that product. In this example, 20 multiplied by 12 equals 240, and 240 multiplied by 2 equals 480.
Sum the three doubled products to obtain the rectangular prism's surface area. Concluding this example, adding together 240, 400 and 480 results in 1,120. The example's rectangular prism has a
surface area of 1,120 square units.
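In symbols, the walk-through above is the standard surface-area formula, evaluated with the example's dimensions:

$$ SA = 2(lw + wh + hl) = 2(12\cdot10 + 10\cdot20 + 20\cdot12) = 2(120 + 200 + 240) = 1120 \text{ square units}. $$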
About the Author
Chance E. Gartneer began writing professionally in 2008 working in conjunction with FEMA. He has the unofficial record for the most undergraduate hours at the University of Texas at Austin. When not
working on his children's book masterpiece, he writes educational pieces focusing on early mathematics and ESL topics.
Photo Credits
Hemera Technologies/Photos.com/Getty Images | {"url":"https://sciencing.com/area-rectangular-prism-8256517.html","timestamp":"2024-11-03T08:55:35Z","content_type":"text/html","content_length":"403421","record_id":"<urn:uuid:31e5eba2-9b26-449c-b7f1-3b2f171af9bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00565.warc.gz"} |
Linear Algebra: Foundations to Frontiers
by M.E. Myers, P.M. van de Geijn, R.A. van de Geijn
Publisher: ulaff.net 2014
Number of pages: 905
This document is a resource that integrates a text, a large number of videos (more than 270 by last count), and hands-on activities. It connects hand calculations, mathematical abstractions, and
computer programming. It encourages you to develop the mathematical theory of linear algebra by posing questions rather than outright stating theorems and their proofs. It introduces you to the
frontier of linear algebra software development.
Download or read it online for free here:
Download link
(33MB, PDF)
Similar books
Linear Algebra with Applications
W. Keith Nicholson
Lyryx
The aim of the text is to achieve a balance among computational skills, theory, and applications of linear algebra. It is a relatively advanced introduction to the ideas and techniques of linear
algebra targeted for science and engineering students.
Calculus and Linear Algebra. Vol. 2
Wilfred Kaplan, Donald J. Lewis
University of Michigan Library
In the second volume of Calculus and Linear Algebra, the concept of linear algebra is further developed and applied to geometry, many-variable calculus, and differential
equations. This volume introduces many novel ideas and proofs.
Course of Linear Algebra and Multidimensional Geometry
Ruslan Sharipov
Samizdat Press
This is a textbook of multidimensional geometry and linear algebra for the first year students. It covers linear vector spaces and linear mappings, linear operators, dual space,
bilinear and quadratic forms, Euclidean spaces, Affine spaces.
Linear Algebra
David Cherney, Tom Denton, Andrew Waldron
UC Davis
This textbook is suitable for a sophomore level linear algebra course taught in about twenty-five lectures. It is designed both for engineering and science majors, but has enough abstraction
to be useful for potential math majors. | {"url":"http://www.e-booksdirectory.com/details.php?ebook=10026","timestamp":"2024-11-08T19:18:30Z","content_type":"text/html","content_length":"11346","record_id":"<urn:uuid:9eefaa8b-acc5-452a-811e-5f5298370892>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00586.warc.gz"} |
– Re
Hi everyone,
The grades for Exam #3 are posted under Dashboard / OpenLab Gradebook – the exams will be returned on Tuesday. Let me know if you have any questions.
Prof. Reitz
Written work, due Tuesday, December 3rd, in class:
Chapter 10 p167: 1, 2, 5, 10, 15
WeBWorK – none
OpenLab – none
Project – Final Draft of paper due in class on Thursday, 12/5.
Group Presentations on Thursday, 12/5.
Hi everyone,
The review sheet for Exam #3, taking place on Tuesday 11/26, is posted under Classroom Resources / Exam Reviews. As always, if you have any questions or notice any errors please let me know (by
email, in person, or here on the OpenLab).
Prof. Reitz
The last significant group assignment for your semester project is a group presentation (there will be one more individual assignment, a reflection on the process). I’ll put the details here,
followed by an outline of the grading criteria (the presentation is worth 20 points total).
Semester Project – Group Presentation
This is your chance to share your group’s work with the rest of the class. Each group will give a 5-8 minute presentation, including the following items:
• State your conjecture (this should be written down, either on a slide or on the board). Give an explanation, and an example to demonstrate your conjecture.
• If you were able to prove your conjecture, give a proof. If not, describe briefly some of the ideas you had and strategies you tried while trying to prove it.
• Give the class at least one puzzle to work on on their own – a challenge!
• Give the audience a chance to ask questions (either during the presentation, or after).
Keep in mind the following:
• You must include some kind of slides (you may also put work on the board): PowerPoint, Google Slides, Prezi.com, LaTeX Beamer, or other.
• You may decide as a group how to divide up the work, but each group member must present something to class.
• Be aware that you will be asked at a later time to describe your own specific contributions as well as those of each group member.
• Presentations will be given at the beginning of class on Thursday, 12/5. Your group must sign up for a presentation time before leaving class on 11/21.
Grading Criteria (20 points total)
_____ points (4 possible). Basics. Stay within time limits (5-8 minutes). All group members participate.
_____ points (6 possible). Conjecture. Conjecture is written down. Explanation and example are provided.
_____ points (7 possible). Proof of conjecture or proof process description.
_____ points (3 possible). Challenge the class. At least one puzzle is given for the class to work on on their own.
____ points TOTAL (20 possible)
Hi everyone,
The group process paper will be worth 35 points towards your Project grade. I will be filling out the sheet below for each paper submitted. Please let me know if you have any questions.
Prof. Reitz
Semester Project – Group Process Paper
Grading Criteria
_____ points (3 possible). Basics/formatting. Length (1500 words required). Group members names. Semester/Date/Course.
_____ points (2 possible). Puzzle description. Description given in own words, demonstrates understanding of puzzle mechanics.
_____ points (16 possible). Proof process narrative.
_____ points (4 possible). Shows progress across various stages of the project.
_____ points (4 possible). Includes all participating members of the group.
_____ points (4 possible). Includes objective facts (“what we did”) as well as experience (“how it felt, what it was like”).
_____ points (4 possible). Tells a story.
_____ points (5 possible). Conjecture.
_____ points (3 possible). State your group’s conjecture.
_____ points (2 possible). Proof or disproof of conjecture. If no proof or disproof was obtained, these points can be earned by clear explanation of proof process in the preceding account.
_____ points (9 possible). Images (3 points each). Original or clearly attributed. Includes caption. Connection to puzzle/process is evident.
____ points TOTAL (35 possible)
Written work, due Tuesday, November 19th, in class:
Chapter 8: 3, 4, 7, 18, 19, 20
Chapter 9: 3, 4, 5
WeBWorK – none
OpenLab – none
Project – Initial Draft of paper due in class next Thursday, 11/21 (feedback will be sent by email to group members).
Final Draft of paper due in class on Thursday 12/5.
Group Presentations on Thursday, 12/5.
In his essay A Mathematician’s Lament, Paul Lockhart says “A good problem is something you don’t know how to solve.” This is quite different from most of the “problems” that appear in our mathematics
education. In the past weeks, you’ve all spent some time individually and in groups working on such problems, in the context of graph theory (“Bridges and Walking Tours”).
As a group, write an account of your experiences working on your puzzle/problem. You should include the following elements:
• Description of the Bridges and Walking Tours problem, in your own words.
• An account of working on your problem as a group, from playing with the problem to formulating and perhaps proving a conjecture. What did your group do/think/feel? You can include examples of
puzzles and solutions if you wish, as well as work by individual group members completed outside the group (both optional). Your goal is not to go over every detail, but to tell a story that
your readers will enjoy – “what was it like”?.
• A statement of your group’s chosen conjecture, and a proof (or disproof) of the conjecture.
• At least three images (more if you wish). They can include images of puzzles you’ve created or solutions, but you can also be creative with images or photos related to your puzzle, your group or
your story in some way. Each image should have a caption describing it. NOTE: You may freely use your own drawings, images or photos. If you wish to use photos from another source, they must be
from a legal source (for example, Creative Commons licensed, with proper attribution – the library or your professor can help with this).
• Basic details: the names of all group members, the date, course and section numbers, and your professor’s name.
I will be meeting with each group next Tuesday, November 14th, in class. Please be in touch with your other group members before then! Be prepared to discuss your progress so far – at the very
least, you should be able to describe how you are dividing up the work of the paper among your group.
The first draft of this assignment is due in class on Thursday, November 21. Each group should submit one paper, of no less than 1500 words. You may decide as a group how to divide up the work. Be
aware that you will be asked at a later time to describe your own specific contributions as well as those of each group member.
The final draft of this assignment is due in class on Thursday, December 5.
REGARDING SEMESTER PROJECT: As you may recall from the Course Description, the semester project is worth 10% of your overall grade. The project consists of a number of interrelated activities (many
of which have already been completed) – complete details can be found on the Project Overview & Deliverables page. The group paper assigned here forms a significant portion of the project.
Group 1: Song Yu, Randy, Aurkaw
A diagram is solvable when the number of vertices with an even number of adjacent lines is greater than or equal to the number of vertices with an odd number of adjacent lines.
And a line graph is solvable by choosing either of the endpoints of a line.
Group 2: Youshmanie, Dylan:
A puzzle is solvable with a bit string where the length is the total number of points and the elements are the number of bridges connected to each point in descending order; the pattern is then solvable for any other puzzle with the same bit string.
Assignment. Your goal for today is to refine the conjecture you decided on during your last class meeting. Some things to consider:
• Specificity: The conjecture should be stated clearly. It should include all information necessary to be understood by someone who is familiar with graph theory terms (vertex, edges, paths) and
familiar with the assignment (walking tours). A reader should be able to tell from the statement whether a conjecture applies to a given drawing or not.
• Generality: Your conjecture should apply to more than just a single specific graph (it can apply to a collection of similar graphs, for example, as long as you describe exactly what types of
graphs you are considering).
• Drawing: You can create a drawing to accompany your conjecture, but your conjecture should be understandable without the picture.
• You can revise your conjecture as a group if you wish – but try to come up with something similar.
• You can add additional clarification to your conjecture.
• You can extend your conjecture to include more types of graphs.
Written work – Due Tuesday, November 12, in class:
Chapter 7: 5, 6, 7, 9, 12
WeBWorK – none
OpenLab – none
Project – First draft of your group paper is due in class on Thursday, 11/21.
EXAM #3 will take place on Tuesday, 11/26 (right before Thanksgiving break). | {"url":"https://openlab.citytech.cuny.edu/2019-fall-mat-2071-reitz/?m=201911","timestamp":"2024-11-06T02:13:37Z","content_type":"text/html","content_length":"138715","record_id":"<urn:uuid:2a863e2a-2013-4bde-a3c8-ab67ea2ac8ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00293.warc.gz"} |
Tasty Math Homework - Math Motivator
Tasty Math Homework
I just finished making brownies (from a box) with my granddaughter Charlotte who is in Junior Kindergarten. I was reminded about all of the ways both math and literacy can be injected into a fun
time. I baked with my own children when they were young but I know I was not as explicit about opportunities to build math content knowledge. When we got the measuring cups out I was reminded about
the challenges many students in the later grades have with fractions so I took out my four measuring cups – 1 cup, 1/3, 1/4 and 1/2. I turned the water on and asked her how many of the one-fourth
cups it would take to fill the 1 cup. She said two so I said, let’s find out. After discovering that it took exactly four we tried the same thing with the half cup and she predicted three. After
we found out that it took two, we lined the cups up from smallest to largest. I pointed out the one-third cup and asked her how many she thought it could take to fill the one cup. She predicted
three and she discovered that it was indeed three. We also had the opportunity to talk about the shape of the square pan and the round bowl (circle). She showed me that she was able to recognize
these shapes and I made sure that I said that a square is a special rectangle. After we were finished, Charlotte asked me a very important question, “Can I lick the spoon?” Of course, I said yes,
but I also added that a very important part about baking is cleaning up! Another time we will talk about cutting the brownies up into equal parts. So many ways to inject many important ideas into a
fun experience. Next time you want to do some math with your children or grandchildren have fun baking and talk together about what you are doing! What if homework was always that tasty! | {"url":"http://mathmotivator.com/tasty-math-homework/","timestamp":"2024-11-10T05:04:56Z","content_type":"text/html","content_length":"59504","record_id":"<urn:uuid:702b7cdc-7b30-4220-aebd-eb24bb5ef2c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00336.warc.gz"} |