## Computing Chow groups.(English)Zbl 0663.14001
Algebraic geometry, Proc. Conf., Sundance/Utah 1986, Lect. Notes Math. 1311, 220-234 (1988).
[For the entire collection see Zbl 0635.00006.]
Let X be an algebraic k-scheme of finite type with a cellular decomposition $$\{Z_{ij}\}$$ as defined by W. Fulton [“Intersection theory” (Berlin 1984; Zbl 0541.14005)], and let k be an algebraically closed field. The authors give a description of the Chow groups of X: they show that these groups are free, with basis the closures of the cells of the decomposition, in any characteristic. In the case of a fibration $$f:\quad X'\to X$$ such that $$f^{-1}(Z_{ij})=Z_{ij}\times F$$, where F is a fixed scheme, they give a method to compute the Chow groups of X' in terms of the Chow groups of X and F.
Reviewer: A.Papantonopoulou
### MSC:
14C05 Parametrization (Chow and Hilbert schemes)
14C17 Intersection theory, characteristic classes, intersection multiplicities in algebraic geometry
14D99 Families, fibrations in algebraic geometry
### Keywords:
Intersection; Chow groups; fibration
### Citations:
Zbl 0635.00006; Zbl 0541.14005
# Why is the kernel of a Convolutional layer a 4D-tensor and not a 3D one?
I am doing my final degree project on Convolutional Networks and trying to understand the explanation shown in Deep Learning book by Ian Goodfellow et al.
When defining convolution for 2D images, the expression is:
$$S(i,j) = (K*I)(i,j) = \sum_{m}\sum_{n}I(i+m,j+n)K(m,n)$$ where $$I$$ is the input image (two dimensions, for width and height) and $$K$$ is the kernel.
Later it states that the input is usually a 3D tensor. This is because usually the input image has multiple channels (say, red, green and blue channels). Furthermore, it says that it can also be a 4D-tensor when the input is seen as a batch of images, where the last dimension represents a different example, but that they will omit this last dimension for the sake of simplicity. I understand this.
Let now $$I_{i,j,k}$$ be the input element with row $$j$$, column $$k$$ (height and width) and channel $$i$$. The output of the convolution is a similarly-structured 3D-tensor $$S_{i,j,k}$$.
Then, the generalized convolution expression for this 3D-tensor input is $$S_{i,j,k} = \sum_{m,n,l} I_{l,j+m-1,k+n-1}K_{i,l,m,n}$$ where "the 4-D kernel tensor $$K$$ with element $$K_{i,j,k,l}$$ giving the connection strength between a unit in channel $$i$$ of the output and a unit in channel $$j$$ of the input, with an offset of $$k$$ rows and $$l$$ columns between the output unit and the input unit".
I am completely lost in this definition. Why is the kernel 4-D and not 3-D (which would be the logical generalization of the first formula)? What is the analogue, in the initial 2-D kernel, of the sentence "giving the connection strength between a unit in channel $$i$$ of the output and a unit in channel $$j$$ of the input"? I think that is the main thing that has to be understood in order to understand the 4-D kernel.
The first formula you quote is for an image with one input channel and one output channel; it just focuses on height and width. In this case, if we consider a 5x5 convolution, the kernel will just have size 5x5, with $$m$$ and $$n$$ going from -2 to +2.
Now if our input has 3 channels (RGB, but they could be feature maps), we need to use each channel as an input, and the weights will be different for each input map. So K becomes 3-D: 3x5x5. This new dimension corresponds to the $$l$$ in your formula.
If we want to have 10 outputs, we need 10 different ways of convolving the input feature maps, so our kernel gets one more dimension, leading to a 10x3x5x5 size for the kernel. This new dimension corresponds to the $$i$$ of your formula.
To recap, the 4 dimensions of the Kernel $$i, l, m, n$$ stand for:
• $$i$$ : Output dimension
• $$l$$ : Input dimension
• $$m, n$$ : height and width
About this sentence "giving the connection strength between a unit in channel $$i$$ of the output and a unit in channel $$j$$ of the input".
I feel it is wrong as I would rather interpret it as : "giving the connection strength between a unit in channel $$i$$ of the output and a unit in channel $$l$$ of the input"
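To make the indexing concrete, here is a minimal NumPy sketch of a "valid" convolution with a 4-D kernel. The sizes (3 input channels, 10 output channels, 5x5 kernel) are the illustrative ones from above; the triple loop mirrors the book's sum rather than being an efficient implementation:

```python
import numpy as np

# Illustrative sizes: 3 input channels (RGB), 10 output channels,
# a 5x5 spatial kernel, and a small 8x8 "image".
in_ch, out_ch, k = 3, 10, 5
H, W = 8, 8

I = np.random.rand(in_ch, H, W)           # input:  l x height x width
K = np.random.rand(out_ch, in_ch, k, k)   # kernel: i x l x m x n  (4-D)

# "Valid" convolution: every output channel i sums over ALL input channels l.
out_H, out_W = H - k + 1, W - k + 1
S = np.zeros((out_ch, out_H, out_W))
for i in range(out_ch):
    for r in range(out_H):
        for c in range(out_W):
            # elementwise product over (l, m, n), then a full sum
            S[i, r, c] = np.sum(I[:, r:r + k, c:c + k] * K[i])

print(S.shape)  # (10, 4, 4): one feature map per output channel
```

Note that dropping the output-channel dimension of `K` (keeping only `K[0]`) recovers the single-feature-map case, which is exactly the 3-D generalization you expected.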
• I didn't understand that output part, but now I think I'm on the right path. Will ask just in case: could it be that the 'output' dimension of the kernel is the one that allows using different kernels, hence leading to different feature maps? For instance, if I want to apply a horizontal-line detector kernel to an RGB image, and a vertical-line detector kernel as well, I would have to set the 'output dimension' of the kernel to 2, right? May 4 '21 at 8:34
• That is correct; what I call the output dimension is the number of feature maps after applying the kernel. Just remember that in the case you described (horizontal/vertical lines), the kernel is the same for all input channels (R, G and B), but it could be different. One output feature map could be the sum of vertical lines in R and horizontal lines in B, and the other output the sum of vertical lines in G and diagonal lines in B. May 4 '21 at 9:40
## Reading and writing files: the cctbx DataManager
Learn how to use the cctbx DataManager. It lets you read and write files that describe experimental data, models, restraints and so on. This tutorial shows how to set up the DataManager, and how to read and write information for models, maps and reflection information.
### The cctbx DataManager
When you write a script that performs some actions on data or a model, you have to start with reading the file. Once the action is done, you probably want to write the files. To do this conveniently, you can use the cctbx DataManager. The DataManager lets you read and write files describing atomic models, restraints, reflection data files, symmetry files, sequence files, and MRC/CCP4 format map files. In this section we will see how to read in and write information from some of these file types.
### Preparation: Get a library model
Let’s work with a library model that we can obtain with the map_model_manager (revisit this page if you need a refresher on high level objects):
from iotbx.map_model_manager import map_model_manager # load in the map_model_manager
mmm=map_model_manager() # get an initialized instance of the map_model_manager
mmm.generate_map() # get a model from a small library model and calculate a map for it
mmm.write_map("map.mrc") # write out a map in ccp4/mrc format
mmm.write_model("model.pdb") # write out a model in PDB format
### Setting up the DataManager
Let’s set up and initialize a DataManager so that it can read and write files for us:
from iotbx.data_manager import DataManager # Load in the DataManager
dm = DataManager() # Initialize the DataManager and call it dm
dm.set_overwrite(True) # tell the DataManager to overwrite files with the same name
The DataManager is always aware of all supported file types, such as models, maps, reflection data, map coefficients, sequences, restraints and more. But it is possible to set up a DataManager object that is only aware of the subset of these data types that you want to work with:
dm_many_functions = DataManager(datatypes = ["model", "real_map",
"phil", "restraint"]) # DataManager data types
This is useful for writing programs that only need certain data types to work. The DataManager will then return a message if it encounters a data type that is not going to be used in any way by the program. This prevents users from unknowingly adding unnecessary data. For simple scripts, the full DataManager is most likely sufficient.
The datatypes are:
• model: a file containing coordinates and other features of a model
• real_map: a density map on a grid
• phil: a file containing parameters
• ncs_spec: a file with map or model symmetry information
• miller_array: a file with any number of arrays of Fourier data
• map_coefficients: a file with one array of Fourier coefficients representing a map
• sequence: a file with sequence information
• restraint: a file with geometric restraints to be applied to a model
### Reading and writing model information
The DataManager dm knows how to read and write model files. A model file contains 3-D coordinates of atoms in a model as well as other information such as the fractional “occupancy” and “atomic displacement parameters” or B-factors for each atom. It also contains information on the space group and unit cell dimensions (the box that the model is placed in).
The format for model files is typically either “PDB” (extension .pdb) or “mmCIF” (extension .cif). Both can be read and written by the DataManager. The DataManager first sets things up with the method process_model_file and then it creates useful objects like a model object with methods like get_model. Let’s read in a model from a file called “model.pdb”:
model_filename="model.pdb" # Name of model file
dm.process_model_file(model_filename) # Read in data from model file
model = dm.get_model(model_filename) # Deliver model object with model info
Here we defined the name of the file we want to read from (model_filename), read in the information with the DataManager dm, and created a new model object called model containing information about this model. If we had restraint information about this model that we wanted to load in, we would load that restraint information from a restraints file with the process_restraint_file() method before creating the model object with get_model().
We will see in detail later how to use models, change them, and make new models. For now, let’s see how to write our model out as a new file. We can use the DataManager to do this:
dm.write_model_file(model,filename="output_model.pdb") # write model to a file
Note that the DataManager can write either in PDB or CIF format. By default it writes out whichever format was used when the model was read in, or PDB format if the model was not read in from a file. You can specify which format you want with a keyword: extension="pdb".
### Reading and writing map information
Now let’s read in a map file with the DataManager dm. The format for these is called CCP4 or MRC (they are pretty much the same). The DataManager first sets things up with commands like process_real_map_file() and then it creates useful objects like a map object with the command get_real_map():
map_filename="map.mrc" # Name of map file
dm.process_real_map_file(map_filename) # Read in data from map file
mm = dm.get_real_map(map_filename) # Deliver map_manager object with map info
Now we have a map_manager object called mm that has information about the map that was read from map.mrc. We can write out a new map file using the DataManager just as we did for the model:
dm.write_real_map_file(mm,filename="output_map") # write map
### Reading and writing reflection information
Crystallographic reflection data and other Fourier-space data are often stored in binary "MTZ"-formatted data files. These can be read and written by the DataManager. We can create some Fourier data from our map and write it out. First let's convert our map to Fourier coefficients (we'll see more about this later):
map_coeffs = mm.map_as_fourier_coefficients(d_min = 3) # map represented by Fourier coefficients
Now map_coeffs are Fourier coefficients corresponding to the map in map_manager mm, up to a resolution of 3 A. We can write these map coefficients out as an “MTZ” file with our DataManager. It takes a couple steps because we need to specify all the things that are going to go into this MTZ file. First we create an mtz_dataset from our map_coeffs Fourier coefficients:
mtz_dataset = map_coeffs.as_mtz_dataset(column_root_label='FC') # create an mtz dataset
You have to indicate a column_root_label. Although you can choose any name you like for the label, there are many conventional choices, such as FC for calculated structure factors, I for intensities, and so on (see http://www.ccp4.ac.uk/html/mtzformat.html for other conventional choices). Also there is a restriction of 30 characters for the length of the label.
At this stage, if we have additional data that we want to write out, we can append it to the mtz_dataset like this (we won't do it this time, but you can see how it works):
# mtz_dataset.add_miller_array(
# miller_array = some_other_array_like_map_coeffs,
# column_root_label = column_root_label_other_array)
Then we create an object that knows how to write the MTZ format:
mtz_object=mtz_dataset.mtz_object() # extract an object that knows mtz format
and finally we write out those Fourier coefficients with our DataManager (note that in this case the filename is the full file name with the .mtz suffix):
dm.write_miller_array_file(mtz_object, filename="map_coeffs.mtz") # write map coeffs as MTZ
We can also read in Fourier coefficients with our DataManager. An MTZ file may contain one or more arrays of data. These arrays are indexed by a 3-D reciprocal-space lattice (h,k,l), and each index may carry one or more data values belonging to a single "array". One array can be map coefficients, with a complex number as data for each index (as in the map_coeffs object we have worked with). Alternatively, an array can be experimental amplitudes and their sigmas. So one array can consist of several "columns" in the MTZ file. In cctbx these arrays are called miller_arrays; they are the basic unit of Fourier data that is worked with. Each miller_array has an associated label string that is used to identify it (and also a data type). Label strings for a complex array typically look like "FC,PHIFC", which stands for amplitude and phase.
Let’s read the Fourier coefficients from map_coeffs.mtz. First, let’s figure out what the label string is for the one array in this file:
array_labels = dm.get_miller_array_labels("map_coeffs.mtz") # List of labels in map_coeffs.mtz
labels=array_labels[0] # select the first (only) label string
labels # print out the first label string
This label string (labels) is “FC,PHIFC”. Now let’s read the file in with DataManager, selecting just the array that we are interested in. We do this by calling the get_reflection_file_server method with a list of file names (in this case just one entry) and a list of label strings (just one this time). The DataManager then reads in and stores just the matching arrays:
dm.get_reflection_file_server(filenames=["map_coeffs.mtz"], # read reflection data.
labels=[labels]) # file names and labels are matching lists
Now we can get all the arrays that we selected based on their label strings:
miller_arrays=dm.get_miller_arrays() # extract selected arrays
and we can take the first (only) one and call it map_coeffs. It is a miller_array object that is like the map_coeffs object that we created from the map object in the beginning of this section:
map_coeffs=miller_arrays[0] # select the first array in the list called miller_arrays
# What is torque SI unit?
Torque is measured in newton metres in SI units.
## What is the formula to calculate torque?
Measure the distance, r, between the pivot point and the point where the force is applied. Determine the angle θ between the direction of the applied force and the vector from the pivot point to the point where the force is applied. Multiply r by F and sin θ and you will get the torque.
## What is the formula of torque class 11th?
τ = F × r × sin θ. In this formula, sin θ has no units, r has units of metres (m), and F has units of newtons (N). Combining these together, one can see that the unit of torque is the newton-metre (N·m).
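As a quick numeric check of this formula (the values are made up for illustration):

```python
import math

def torque(force_N, lever_arm_m, angle_deg):
    """Magnitude of torque, tau = F * r * sin(theta), in newton-metres."""
    return force_N * lever_arm_m * math.sin(math.radians(angle_deg))

# A 10 N force applied 0.5 m from the pivot:
print(torque(10, 0.5, 90))           # 5.0 N·m (force perpendicular to the arm)
print(round(torque(10, 0.5, 30), 6)) # 2.5 N·m (sin 30° = 0.5 halves the torque)
```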
## What is torque class 11 phy?
Torque is the measure of the force that can cause an object to rotate about an axis. Force is what causes an object to accelerate in linear kinematics.
## How do you calculate torque from kinetic energy?
According to the work-kinetic energy theorem for rotation, the amount of work done by all the torques acting on a rigid body rotating about a fixed axis (pure rotation) equals the change in its rotational kinetic energy: $$W_\text{torque} = \Delta KE_\text{rotation}.$$
## How do you calculate torque by hand?
Torque Calculation: a practical way to calculate the magnitude of the torque is to first determine the lever arm and then multiply it by the applied force. The lever arm is the perpendicular distance from the axis of rotation to the line of action of the force, so the magnitude of the torque is τ = F × d⊥, expressed in newton metres (N·m).
## What are the 2 types of torque physics?
Torque can be either static or dynamic. A static torque is one which does not produce an angular acceleration. Someone pushing on a closed door is applying a static torque to the door because the door is not rotating about its hinges, despite the force applied.
## What is P in torque formula?
Derivation of torque for a dipole: since qd is the magnitude of the dipole moment p, and the dipole moment points from the negative to the positive charge, the torque is the cross product of the dipole moment and the electric field: τ = p × E.
## What is torque symbol called?
The symbol for torque is typically τ, the lowercase Greek letter tau. When being referred to as moment of force, it is commonly denoted by M.
## How do you write torque symbol?
Torque is commonly denoted with a capital “T,” but the correct symbol is the Greek letter tau, “τ.” When torque is referred to as a moment of force, the symbol “M” is used. Torque is a vector quantity, meaning it has both magnitude and direction.
## How is torque related to energy?
or τ = dE/dθ. From this equation, one can interpret torque as the amount of rotational energy gained per radian of rotation; in other words, joules per radian in SI units.
## How do you convert torque to force?
The equation, Force = Torque ÷ [Length × sin (Angle)], converts torque into force. In the equation, Angle is the angle at which the force acts on the lever arm, where 90 degrees signifies direct application.
## What is r in torque formula?
A particle is located at position r relative to its axis of rotation. When a force F is applied to the particle, only the perpendicular component F⊥ produces a torque. This torque τ = r × F has magnitude τ = |r| |F⊥| = |r| |F| sin θ and is directed outward from the page.
## How do you calculate mass torque and distance?
Multiply the force times the distance to find the torque. If you know the magnitude of the force (in Newtons) and the distance (in meters), you can solve for the torque, expressed in newton-meters (N∙m).
## What is the right hand rule for torque?
Right Hand Rule for Torque: to use the right hand rule in torque problems, take your right hand and point it in the direction of the position vector (r or d), then curl your fingers in the direction of the force, and your thumb will point in the direction of the torque.
## How do you calculate moment of inertia and torque?
1. $$a = r\frac{d}{dt}\left(\frac{d\theta}{dt}\right)$$ Thus,
2. $$a = r\alpha \quad (2)$$ where α is the angular acceleration.
3. $$\tau = Fr \Rightarrow F = \frac{\tau}{r} \quad (3)$$
4. $$\Rightarrow \tau = m r^2 \alpha$$ We know that the moment of inertia
5. $$I = m r^2.$$ Thus, substituting it in the above equation we get
6. $$\Rightarrow \tau = I\alpha$$
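The derivation above can be checked numerically for a point mass (the values are illustrative, not from the text):

```python
m, r = 2.0, 0.5      # point mass (kg) at radius r (m) from the axis
alpha = 4.0          # angular acceleration (rad/s^2)

I = m * r**2         # moment of inertia of a point mass
tau = I * alpha      # tau = I * alpha

# Cross-check via the earlier steps: a = r * alpha, F = m * a, tau = F * r
a = r * alpha
F = m * a
print(tau, F * r)    # 2.0 2.0  (both routes agree)
```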
## What are the 3 units of torque?
Torque Units: SI, Metric and American. There are three common torque units: SI (International Standard), based on newton metres; metric, based on kilogram-force centimetres; and American/English, based on inch-pounds.
## Why is torque r cross F?
Because torque is a kind of moment. When you apply a force at the outermost layer of a shaft, or twist a shaft with one end fixed, the torque is the force you apply multiplied by the perpendicular distance from the outermost layer to the centre of the shaft. So torque is equal to force × perpendicular distance.
## Is torque a scalar or vector?
Torque is a vector quantity and its direction is always perpendicular to the plane containing the force and displacement vector.
## Is torque a unit of pressure?
It is the rotational equivalent of linear force. Formally, torque (or the moment of force) is the product of the magnitude of the force and the perpendicular distance of the line of action of force from the axis of rotation. The SI unit for torque is the newton metre (N•m).
## What is torque example?
Torque is defined as a twisting force that tends to cause rotation. We call the point where the object rotates the axis of rotation. You use torque every day without realizing it. You apply torque three times when you simply open a locked door.
## Who invented torque?
The concept of torque, also called moment of a force, as you’ve pointed out correctly, originated with the studies of Archimedes on levers. The term torque was apparently introduced into English scientific literature by James Thomson, the brother of Lord Kelvin, in 1884, as suggested here.
# Tic Tac Toe implementation in Ruby
Here's my first shot at implementing Tic Tac Toe in Ruby. After watching Gary Bernhardt's Functional Core, Imperative Shell, I thought this would be a great exercise in trying out the ideology presented in that screencast. Tear my code apart.
Link to GitHub repo (includes video of usage)
tic_tac_toe.rb -- the procedural shell
require_relative "board"
require_relative "cell"
require "pry"
def initialize_board
@board = Board.new
end
def choose_characters
puts "X or O?"
@player_character = gets.chomp
@computer_character = case @player_character
when "X" then "O"
when "O" then "X"
end
end
def start_game
initialize_board
choose_characters
play
end
def play
loop do
puts "Make a move: "
move = gets.chomp
board_index = Board::POSITIONS[move]
winner = @player_character
@board.board[board_index].value = @player_character
if !@board.winner?
@board.available_cells.sample.value = @computer_character
winner = @computer_character if @board.winner?
end
if @board.winner?
p "#{winner.upcase} WINS!!!"
@board.display
puts "Do you want to play again?"
start_game if gets.chomp == "Yes"
else
@board.display
play
end
break if @board.winner?
end
end
start_game
board.rb
class Board
attr_accessor :board
WINNING_COMBINATIONS = [
[0, 1, 2],
[3, 4, 5],
[6, 7, 8],
[0, 3, 6],
[1, 4, 7],
[2, 5, 8],
[0, 4, 8],
[2, 4, 6]
]
POSITIONS = {
"top left" => 0,
"top middle" => 1,
"top right" => 2,
"middle left" => 3,
"center" => 4,
"middle right" => 5,
"bottom left" => 6,
"bottom middle" => 7,
"bottom right" => 8
}
def initialize
@board = Array.new(9) { Cell.new }
end
def winner?
winner = false
WINNING_COMBINATIONS.each do |combination|
first_cell = board[combination[0]].value
second_cell = board[combination[1]].value
third_cell = board[combination[2]].value
consideration = [first_cell, second_cell, third_cell]
if consideration.uniq.length == 1 && first_cell.is_a?(String)
winner = true
end
end
winner
end
def available_cells
board.select { |cell| cell.value == :blank }
end
def display
rows = board.each_slice(3).to_a
rows.each do |row|
p row.map(&:value)
end
end
end
cell.rb
class Cell
attr_accessor :value
def initialize(options = {})
@value = options.fetch(:value, :blank)
end
end
• Does this code live in a repository somewhere that is easy to grab and work with? – vgoff Apr 30 '15 at 4:43
• Yes! It's here: github.com/dylanerichards/tic-tac-toe. There's even a video of me demonstrating use of the application. I'll update the original post. – Dylan Richards Apr 30 '15 at 5:14
• Now that is some very useful information to have right there. :) You even have specs... Was having some difficulties running it without referring to the code to know what the expected inputs were. – vgoff Apr 30 '15 at 5:20
• The fact that you had to explain the moves in your video by stating verbally what the list of valid locations are, means that there is not enough of a prompt, there should be something indicating move options, just like (and probably less reason to be) your prompt showing the very well know and expected "X" and "O" options for player. – vgoff Apr 30 '15 at 6:15
• "9 openings (8 indexes)" 0 is an index as well. You have 9 indices, (or the other valid English plural, 'indexes') to be sure. Otherwise you are missing a valid placement on a tic-tac-toe board. – vgoff Apr 30 '15 at 6:54
You require pry, and you don't seem to use it. Perhaps it was being used for troubleshooting, and the binding.pry or other code has been removed. In that case it may be more appropriate to simply use the -rpry option, as in ruby -rpry file.rb. Then you don't have to remember to remove that line, just the troubleshooting line. (Or even better, set up a system that lets you leave things in but turn them off and on from an option in the system call: perhaps the -w or -v flags, perhaps an environment variable. A line with this in mind may look like binding.pry if $VERBOSE, together with require 'pry' if $VERBOSE.)
Looking at your loop do tells me that you have an infinite loop. But you have a well-defined exit strategy, so there is no need for an infinite loop. Using something like until @board.winner? will run the block at least once and will stop when your method returns a truthy value. You also call play from inside the loop inside of play, but execution is already going to continue from the beginning of the loop.
If you had folders bin/, lib/, I would expect some of the code in your tic-tac-toe.rb file to be in maybe bin/tic-tac-toe. The start equivalent for sure. This will allow you to use the lib/tic-tac-toe.rb file in a way that someone that requires that library would expect. Right now, you are unable to require that library file without starting the game.
The interface of using words like top left seems clumsy and error prone, even if it is nice. Creating a numbered or lettered menu or presenting the grid with numbers or letters could make play easier, less typing to do, less error prone. Perhaps a lettered 'x, y' grid. Presenting that grid would also document the game play, and make initial play more straight forward.
Your choose_characters method does two things: it prompts and it processes the choice. The names @player_character and @computer_character are repetitive, and confusing in a way; simply having @player and @computer is probably clear enough. @player.character might be better, though I think we would call that letter a 'marker', as the player's mark may be 'X' and the AI's mark may be 'O'.
Looking at cell.rb, you could be using the keyword arguments introduced in Ruby 2.0.
In the Board file you can likely use the all? Enumerable method: if all? the cells of a winning combination match the current player's marker, then that player is the winner.
Commentary | Volume 5, Issue 8, P1905-1908, August 18, 2021
# A framework for a hydrogen economy
Open Archive. Published: August 11, 2021
Arun Majumdar is the Jay Precourt Professor at Stanford University, a faculty in the Department of Mechanical Engineering, and a Senior Fellow and former Director of the Precourt Institute for Energy. He served in the Obama administration as the Founding Director of the US Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) (2009–2012), as the Acting Undersecretary for Energy (2011–2012), and as the Vice Chair of the Secretary of Energy Advisory Board (2014–2017). Dr. Majumdar is a member of the US National Academy of Sciences, US National Academy of Engineering, and the American Academy of Arts and Sciences.
John Deutch is an emeritus Institute Professor at MIT. He has served as Chairman of the Department of Chemistry, Dean of Science, and Provost. In the Carter administration, he served as Director of Energy Research (1977–1979), Acting Assistant Secretary for Energy Technology (1979), and Undersecretary (1979–1980) in the US Department of Energy. He has been a member of the President’s Nuclear Safety Oversight Committee (1980–1981), the White House Science Council (1985–1989), the President’s Committee of Advisors on Science and Technology (1997–2001), and the Secretary of the Energy Advisory Board (2008–2017). He has published widely on the technical and policy aspects of energy and the environment.
Ravi Prasher is an Adjunct Professor at the University of California, Berkeley. He has more than 20 years of experience in working in R&D in large industry, startup, government, and academia. He was one of the first program directors at ARPA-E in the US Department of Energy. Prasher has published more than 100 papers on thermal energy science and technology and holds more than 30 patents. At UC Berkeley, he teaches undergraduate and graduate courses on thermal science and supervises multiple PhD students on various research topics, including thermochemical production of carbon-free hydrogen.
Tom Griffin specializes in identifying high-impact technology investment opportunities in the manufacturing sector for Breakthrough Energy Ventures. He has over 25 years of industrial experience in applied technology development and deployment, including contributions over a wide range of energy and environmental sectors. He served as CTO at both Edeniq and Pennsylvania Sustainable Technologies (where he was also co-founder), pursuing capabilities and applications in biofuels and catalytic fuel upgrading. Griffin was an officer in the US Nuclear Navy, directing operations of shipboard propulsion facilities. He holds SB, SM, and PhD degrees in Chemical Engineering from MIT and has authored or co-authored several granted patents.
## Main Text
### Introduction
If the world is to avoid exceeding the Paris Agreement goal of 2°C for the global average temperature rise, the deployment of current solar, wind, and battery technologies for the grid and mobility is necessary, but far from sufficient. A zero-carbon electricity grid with nuclear power and carbon capture for fossil fuel use is also insufficient. A chemical form of energy storage that can enable zero-carbon transportation and provide high-temperature industrial heat is also needed.
Hydrogen (H2) that is free of greenhouse gas (GHG) emissions is emerging as a prime candidate for these purposes. Hydrogen and its derivatives such as ammonia [Valera-Medina et al., "Ammonia for power"] are receiving significant attention and resources in Europe [European Commission, "A hydrogen strategy for a climate-neutral Europe"] and Japan [Dvorak, "How Japan's Big Bet on Hydrogen Could Revolutionize the Energy Market"]; the possibility of a "hydrogen economy" is being revisited in the United States [Fuel Cell and Hydrogen Energy Association, "Road map to a US hydrogen economy"].
Designations such as gray, green, blue, and turquoise hydrogen are entering the lexicon, often evoking strong sentiments and opinions. But rarely do the public discussions offer a systems view of a hydrogen infrastructure that includes production, transport, storage, and use—and, most importantly, its economic viability.
### Production of GHG-free H2
Currently, the world produces about 70 million metric tons (MMT) of H2 per year, with the US producing ∼10 MMT/year. Ninety-five percent of today's H2 is produced by steam methane reforming (SMR) and is called gray hydrogen. Today, gray H2 has the lowest cost, ∼$1/kg-H2 in the US, but produces roughly 10 kg of carbon dioxide (CO2) per kg of H2. It should be noted that costs can vary slightly from the nominal value of $1/kg-H2 because of other factors such as purity and pressure of H2. Capturing the CO2 and reducing the emissions turns gray into blue H2, but the CO2 capture is imperfect (∼70%–80%) at needed SMR throughputs. Blue H2 costs ∼50% more than gray H2 [Roussanaly et al., "Low-carbon footprint hydrogen production from natural gas: a techno-economic analysis of carbon capture and storage from steam-methane reforming"; International Energy Agency GHG Programme, "Techno-economic evaluation of SMR based standalone (merchant) plant with CCS"], which does not include the cost of creating a CO2 infrastructure of pipelines and sequestration systems. To make blue H2 a viable option, research and development is needed to reduce CO2 capture costs and further improve capture completeness. Most importantly, policies involving direct or indirect charges on CO2 emissions, as well as government regulation of the CO2 infrastructure, are critical. We note that there is today no federal agency with the authority to permit interstate CO2 pipelines or tariffs, leaving the burden of regulation in this area to the states.
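These production and emission figures can be sanity-checked with back-of-envelope arithmetic (a sketch using only the numbers quoted above):

```python
# Figures quoted above for gray H2 (steam methane reforming).
world_h2_mmt_per_yr = 70     # world H2 production, million metric tons/year
kg_co2_per_kg_h2 = 10        # roughly 10 kg CO2 emitted per kg of gray H2

world_co2_mmt_per_yr = world_h2_mmt_per_yr * kg_co2_per_kg_h2
print(world_co2_mmt_per_yr)  # 700 MMT CO2/year from gray hydrogen alone

# Blue H2 costs ~50% more than gray H2 at ~$1/kg (before CO2 infrastructure).
gray_cost_per_kg = 1.00
blue_cost_per_kg = gray_cost_per_kg * 1.5
print(blue_cost_per_kg)      # 1.5, i.e. ~$1.50/kg
```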
There is significant focus on using electricity and electrolyzers to split water, producing what is termed green H2. The production of green hydrogen obviously requires a zero-carbon grid, which is the top priority for any climate policy. Today, green H2 costs $4–$6/kg (US Department of Energy), ∼3–4 times higher than SMR with CO2 capture. It is conceivable that this cost could go below $2/kg, but that requires carbon-free electricity costs on the grid to be less than $30/MWh, and the installed capital costs of electrolyzers must fall by 50%–70% (or more). These goals highlight the need for R&D efforts with a systems view of electrolyzers, as well as technology demonstrations that reduce the risk for economy-wide scaling and supply chain development.
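The cost arithmetic in this paragraph can be sketched in a few lines. The electrolyzer energy requirement (∼52.5 kWh per kg of H2) and the non-energy cost term are illustrative assumptions, not figures from the article:

```python
# Minimal levelized-cost sketch for electrolytic ("green") H2.
# KWH_PER_KG_H2 and the capex/opex term are assumed, illustrative values.
KWH_PER_KG_H2 = 52.5   # assumed electrolysis energy requirement

def green_h2_cost(elec_price_per_mwh, capex_opex_per_kg):
    """Electricity cost per kg of H2 plus an amortized non-energy term."""
    energy_cost = elec_price_per_mwh / 1000.0 * KWH_PER_KG_H2
    return energy_cost + capex_opex_per_kg
```

With $30/MWh power and an assumed $0.40/kg of non-energy costs this lands just under the $2/kg threshold mentioned above; at $60/MWh and $0.85/kg of non-energy costs it reproduces the low end of today's $4–$6/kg range.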
Recently, US Secretary of Energy Jennifer Granholm announced the launch of the Hydrogen Earthshot program (US Department of Energy) with a technology-agnostic target of achieving carbon-free H2 at $1/kg within one decade. The target of $1/kg for GHG-free H2 will be a game changer for the transportation sector and the ammonia and petrochemical industries; however, widespread use of H2 in applications such as industrial heating requires production costs ≤ $0.50/kg. No technology today can produce GHG-free H2 at even $1/kg. An alternative approach worth considering is methane pyrolysis—sometimes referred to as turquoise H2. This process cracks methane to generate GHG-free H2 and solid carbon as a co-product, which is easier to manage than CO2, and would mitigate the need and additional costs for the post-capture CO2 management infrastructure associated with blue H2. Furthermore, the solid carbon, largely in amorphous form, would initially be a useful co-product; it can be sold today at ∼$1/kg as carbon black, which partially offsets the cost of GHG-free H2 production. The potential scale of methane pyrolysis implementation, however, implies a quantity of co-product that will exceed current carbon black markets. For this reason, R&D efforts to develop new markets for this co-product—likely either in novel building materials or soil additives—comprise an essential part of the technology and market development roadmaps. An additional (and significant) advantage of methane pyrolysis is that it leverages the current natural gas pipeline infrastructure to move the feedstock for hydrogen, in the form of methane, over long distances. Just as electrolyzers are needed to produce green H2, methane pyrolysis needs pyrolyzers. Techno-economic analyses suggest that efficient pyrolyzers that co-synthesize GHG-free H2 and high-value carbon (e.g., carbon fibers for use in structural and construction materials) can potentially reduce the cost of GHG-free H2 to $1/kg or lower.
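The co-product economics follow directly from the stoichiometry of the pyrolysis reaction. The sketch below is my own illustration; the gross cost, carbon price, and sellable fraction are assumptions for the example, not figures from the article:

```python
# Stoichiometry of methane pyrolysis, CH4 -> C(s) + 2 H2:
# 16 kg of CH4 yields 4 kg of H2 and 12 kg of solid carbon,
# i.e. 3 kg of carbon per kg of H2.
KG_CARBON_PER_KG_H2 = 12.0 / 4.0

def net_h2_cost(gross_cost_per_kg, carbon_price_per_kg, sold_fraction=1.0):
    """Net H2 cost after crediting sales of the solid-carbon co-product.
    All prices and the sellable fraction are illustrative assumptions."""
    credit = KG_CARBON_PER_KG_H2 * carbon_price_per_kg * sold_fraction
    return gross_cost_per_kg - credit
```

At the article's ∼$1/kg carbon-black price, each kilogram of H2 carries up to a $3 credit if all the carbon is sold; even selling a third of it offsets $1/kg, which is why developing co-product markets matters so much.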
The combination of research, development, and demonstration (RD&D) to reduce H2 production costs and de-risk pyrolyzer technologies, together with policies to grow the solid carbon market, is critical for methane pyrolysis to scale. Because of the high global warming potential of methane, if natural gas is used for GHG-free H2 production via pyrolysis, it is also important to continue monitoring and controlling methane leakage.
### Uses and cost targets for GHG-free H2
Most of today’s 70 MMT/year of H2 produced is used to make petrochemicals from crude oil and to synthesize ammonia, which is a precursor for making fertilizers. In a hydrogen economy, applications of GHG-free H2 or its derivatives (e.g., ammonia) will expand to include transportation (long-haul trucking, maritime shipping, and possibly aviation), chemical reduction of captured CO2, long-duration energy storage in a highly renewable energy-dependent grid, chemical reductants for steel and metallurgy, and even as high-temperature industrial heat for the production of commodities such as glass and cement.
For use as a reducing agent, a target cost of $1/kg is commercially viable for ammonia and petrochemicals. This target cost is also sufficient for H2 use as a transportation fuel in internal combustion engines or fuel cells, given that it is equivalent to roughly $1 per gallon of gasoline. Note, however, that $1/kg-H2 corresponds to a heating value of $7.5 per million British thermal units (BTU). Thus, if GHG-free H2 is to compete in the US with natural gas at $3 per million BTU as a heating fuel for cement and steel production, the cost of hydrogen would need to fall to ∼$0.40/kg. However, if a carbon capture cost of $80–$90/ton CO2 for natural gas combustion is considered, a breakeven GHG-free H2 cost of ∼$1/kg allows H2 to compete with natural gas use without CO2 emissions. Note that in Europe the price of natural gas is ∼$7–$8/MMBTU and, therefore, $1/kg-H2 as a heating fuel is cost competitive.
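The unit conversions in this paragraph can be checked directly from hydrogen's higher heating value (≈142 MJ/kg, a standard approximate value assumed here):

```python
# Convert H2 prices between $/kg and $/MMBTU using hydrogen's
# higher heating value (~142 MJ/kg, an assumed standard figure).
MJ_PER_KG_H2 = 142.0
BTU_PER_MJ = 947.8
MMBTU_PER_KG = MJ_PER_KG_H2 * BTU_PER_MJ / 1e6   # ~0.135 MMBTU per kg of H2

def dollars_per_mmbtu(price_per_kg):
    """Express an H2 price in $/kg as $/MMBTU of heat content."""
    return price_per_kg / MMBTU_PER_KG

def breakeven_price_per_kg(gas_price_per_mmbtu):
    """H2 price that matches a given natural-gas price on heat content."""
    return gas_price_per_mmbtu * MMBTU_PER_KG
```

Under these assumptions, `dollars_per_mmbtu(1.0)` gives about $7.4/MMBTU, matching the article's ∼$7.5, and `breakeven_price_per_kg(3.0)` returns roughly the $0.40/kg threshold quoted for competing with $3/MMBTU gas.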
### Transport and storage of GHG-free H2
If the use of H2 expands beyond today’s petrochemical use to those described above, demand in the United States will rapidly scale from 10 MMT/year today to ∼100 MMT/year—implying that hydrogen production and transport capacities will increase by an order of magnitude. This raises the issue of bulk transport and storage of H2. How will H2 pipelines be developed and deployed? How will H2 be stored cost effectively at large scale?
Given that the volumetric energy density of H2 is ∼30 percent that of natural gas, either the H2 flow velocity or its pressure will need to be increased by a factor of at least 3 to match the energy delivery rate of natural gas. This will increase the cost of compressors beyond those used in natural gas pipelines. Steel pipelines for high-pressure bulk H2 transport are a risky proposition because of uncertainties regarding the fundamentals (mechanisms and condition-dependent kinetics) of hydrogen embrittlement in steel pipelines (Chen et al., 2017). There are proposals to mix H2 up to 20 percent into existing natural gas pipelines, but the risks of embrittlement are not quantitatively understood. Currently, the H2 pipeline system in the United States is very small compared with the natural gas pipeline system, and it is overseen by the Department of Transportation because of safety concerns. The Departments of Energy and Transportation should commission a science and engineering study to define the safety envelope of steel for hydrogen and methane compositions and its effect on pipeline lifetime.
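The delivery-rate arithmetic behind these figures is simple: delivered energy scales with volumetric energy density times volumetric flow. A back-of-the-envelope sketch (my own illustration, using the ∼30 percent figure from the text):

```python
# Back-of-the-envelope checks on pipeline energy delivery (illustrative only).
H2_ENERGY_DENSITY_FRACTION = 0.30   # H2 vs. natural gas, per the text

def required_flow_multiplier(fraction=H2_ENERGY_DENSITY_FRACTION):
    """Factor by which flow (or pressure) must rise to match delivered energy."""
    return 1.0 / fraction

def blend_energy_fraction(h2_vol_fraction, fraction=H2_ENERGY_DENSITY_FRACTION):
    """Relative volumetric energy content of an H2/natural-gas blend."""
    return (1 - h2_vol_fraction) + h2_vol_fraction * fraction
```

A pure-H2 line needs roughly 3.3 times the flow, and a 20 percent blend carries about 14 percent less energy per unit volume, which is one reason blending alone cannot substitute for dedicated H2 transport.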
Developing and siting new pipeline infrastructure is generally expensive and involves challenges of social acceptance. Therefore, it is important to explore alternative approaches to a hydrogen economy that do not require a new H2 infrastructure. One alternative is using two existing infrastructures—the electricity grid and natural gas pipelines—to transport the feedstock to produce GHG-free H2 via (1) electrolysis (green H2), (2) SMR with CO2 capture and sequestration (blue H2), or (3) methane pyrolysis (turquoise H2). Electrolysis and methane pyrolysis do not need any additional transport infrastructure, making them highly attractive as long as electrolyzers and pyrolyzers can be installed at appropriate scales where H2 is needed. But RD&D is needed to reduce the fully installed and levelized costs (both capital expenditure [CAPEX] and operating expenditure [OPEX]) of these systems. On the other hand, SMR with CO2 capture and sequestration (CCS) is the cheapest method to produce GHG-free H2 today, and its widespread use calls for a CO2 pipeline and sequestration infrastructure to enable a hydrogen economy. Such an infrastructure is needed broadly to decarbonize the economy—for electricity generation from natural gas, steel and cement manufacturing, and direct capture of atmospheric CO2. SMR with CCS could leverage such a CO2 infrastructure to create a hydrogen economy. This underscores the need for Congress to authorize the Federal Energy Regulatory Commission (FERC) to regulate the location and tariffs of H2 and CO2 pipelines.
Storage of H2 can involve physical and/or chemical transformations. Geological storage in salt caverns is the most economical option today, but it is highly geographically limited. Although there has been some systematic study of geological storage (Lord et al., 2014), the United States Geological Survey should be charged with undertaking a national survey to identify the many locations where underground storage of hydrogen is possible while also considering the infrastructure costs needed to use these caverns.
The low volumetric energy density of H2 means that cost-effective physical storage must be at ultrahigh pressures, in liquid form at cryogenic temperatures, or a combination of high pressure and low temperature. Although these approaches are geographically independent, they are several orders of magnitude more expensive than geologic storage, and the energy and materials costs can be significant; RD&D is needed to reduce them.
There have been proposals to store hydrogen in several chemical forms with a variety of attractive attributes (e.g., density, cycle energy expenditures, stability, ease of handling, etc.). Some of these include liquid-organic hydrogen carriers (LOHCs), ammonia, light alcohols, organic acids, and metal hydride complexes. Although LOHCs are being demonstrated in Japan, much research is needed to reduce their costs and round-trip losses. Ammonia, on the other hand, already has a global production and transport infrastructure at the scale of millions of metric tons, which has been used to make fertilizers for the last 100 years. Ammonia can be used directly as a fuel in internal combustion engines or in fuel cells. Furthermore, high volumetric and gravimetric energy densities make ammonia based on GHG-free H2 attractive as a versatile storage medium. Methanol and ethanol have their own substantial (and growing) use cases, providing some degree of existing infrastructure. Metal hydrides have perhaps the most interesting properties in terms of net energy transport potential, but also raise challenges in terms of their reactivity. There is, therefore, significant “white space” for continued R&D to reduce the costs of these carriers and increase their industrial viability.
### Closing Remarks
Given the importance of hydrogen to create a net-zero emissions economy, the President of the United States should commission a comprehensive hydrogen strategy and develop a decadal plan for the country.
First, we applaud the US Secretary of Energy, Jennifer Granholm, for launching the ambitious Hydrogen Earthshot program (US Department of Energy) with a technology-agnostic stretch goal of GHG-free H2 production at $1/kg before the end of this decade. Similar R&D programs with techno-economic stretch goals are needed for H2 storage, use, and transport as well. These actions are similar to the successful DOE Sunshot Initiative that US Secretary of Energy Steven Chu launched in 2010.
US Department of Energy. The Sunshot Initiative. https://www.energy.gov/eere/solar/sunshot-initiative.
The Hydrogen Earthshot is necessary to create a hydrogen economy, but it is not sufficient.
Second, the R&D should be integrated with a public-private partnership for technology demonstration programs to address economic, regulatory, supply chain, and policy considerations and thereby establish a credible de-risking approach to attract private investors.
Third, federal and/or state authorities must adopt policies to support a hydrogen market, either by a charge on GHG emissions or via clean energy standards that include GHG-free H2 as an option, or a combination of the two. These policies should also include enabling, market-creating policies for the solid carbon produced via methane pyrolysis. Furthermore, governments should use their purchasing power to create demand for GHG-free H2 and, most importantly, consider using a reverse auction to foster a globally competitive supply chain in the private sector.
Finally, despite the strong interest in green hydrogen from electrolysis, the economic reality suggests that there could be a significant fraction of GHG-free hydrogen originating from natural gas. Therefore, a holistic hydrogen strategy should also be aligned with a national carbon management plan, which should include an infrastructure for carbon capture, transport, and sequestration derived from processes yielding either gaseous (SMR) or solid (pyrolysis) carbon co-production.
### Declaration of interests
A.M., J.M.D., and R.S.P. serve on the scientific advisory board of Breakthrough Energy Ventures (BEV). T.P.G. is an employee of BEV. BEV has interests in investing in the hydrogen industry, especially in early-stage commercial efforts. A.M. serves on the advisory board of Joule.
## References
1. Valera-Medina A., Xiao H., Owen-Jones M., David W.I.F., Bowen P.J. Ammonia for power. Prog. Energy Combust. Sci. 2018; 69: 63-102.
2. European Commission. A hydrogen strategy for a climate-neutral Europe.
3. Dvorak P. How Japan’s Big Bet on Hydrogen Could Revolutionize the Energy Market. Wall Street Journal. 2021.
4. Fuel Cell and Hydrogen Energy Association. Road map to a US hydrogen economy.
5. Roussanaly S., Anantharaman R., Fu C. Low-carbon footprint hydrogen production from natural gas: a techno-economic analysis of carbon capture and storage from steam-methane reforming. Chem. Eng. Trans. 2020; 81: 1015-1020.
6. International Energy Agency GHG Programme. Techno-economic evaluation of SMR based standalone (merchant) plant with CCS.
7. US Department of Energy. Cost of electrolytic hydrogen production with existing technology. DOE hydrogen and fuel cells program record.
8. US Department of Energy. Secretary Granholm launches hydrogen Energy Earthshot to accelerate breakthroughs toward a net-zero economy.
9. Chen Y.-S., Haley D., Gerstl S.S.A., London A.J., Sweeney F., Wepf R.A., Rainforth W.M., Bagot P.A.J., Moody M.P. Direct observation of individual hydrogen atoms at trapping sites in a ferritic steel. Science. 2017; 355: 1196-1199.
10. Lord A.S., Kobos P.H., Borns D.J. Geologic storage of hydrogen: scaling up to meet city transportation demands. Int. J. Hydrogen Energy. 2014; 39: 15570-15582.
## Abstract
A recently proposed theory on frontal lobe functions claims that the prefrontal cortex, particularly its dorso-lateral aspect, is crucial in defining a set of responses suitable for a particular task, and biasing these for selection. This activity is carried out for virtually any kind of non-routine tasks, without distinction of content. The aim of this study is to test the prediction of Frith's ‘sculpting the response space’ hypothesis by means of an ‘insight’ problem-solving task, namely the matchstick arithmetic task. Starting from Knoblich et al.'s interpretation for the failure of healthy controls to solve the matchstick problem, and Frith's theory on the role of dorsolateral frontal cortex, we derived the counterintuitive prediction that patients with focal damage to the lateral frontal cortex should perform better than a group of healthy participants on this rather difficult task. We administered the matchstick task to 35 patients (aged 26–65 years) with a single focal brain lesion as determined by a CT or an MRI scan, and to 23 healthy participants (aged 34–62 years). The findings seemed in line with theoretical predictions. While only 43% of healthy participants could solve the most difficult matchstick problems (‘type C’), 82% of lateral frontal patients did so (Fisher's exact test, P < 0.05). In conclusion, the combination of Frith's and Knoblich et al.'s theories was corroborated.
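The abstract's group comparison can be reproduced with a hand-rolled Fisher's exact test. The 2×2 counts below are reconstructed from the reported percentages and group sizes (82% of 17 lateral frontal patients ≈ 14 solvers; 43% of 23 controls ≈ 10 solvers) and are therefore an assumption, not figures stated by the authors:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of x "successes" in the first row.
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# Reconstructed counts: [solvers, non-solvers] for lateral frontal vs. controls.
p = fisher_exact_two_sided(14, 3, 10, 13)
```

Under these reconstructed counts the two-sided p-value falls below 0.05, consistent with the significance level reported in the abstract.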
## Introduction
The prefrontal cortex is known for being involved in the successful execution of a wide variety of tasks. For example, it has been claimed, both in the neuroimaging and in the neuropsychological literature, that frontal lobes are involved in episodic memory (Stuss et al., 1994; Tulving et al., 1994), semantic memory (Henry and Crawford, 2004; Thompson-Schill et al., 1997), planning (Shallice, 1982), attentional switching (Nagahama et al., 1996; Stuss et al., 2000), and reasoning (Goel and Dolan, 2004; Reverberi et al., 2005).
One parsimonious approach to cope with such a variety of functions is that of hypothesizing a function for the prefrontal cortex that is sufficiently abstract to contribute to all the tasks listed above (Duncan, 2001). A recently proposed theory adopts this strategy: it claims that the prefrontal cortex, particularly its dorso-lateral aspect, is crucial for defining a set of responses suitable for a particular task, and biasing these for selection (‘sculpting the response space’; Frith, 2000). This activity could be carried out for virtually any kind of non-routine tasks, regardless of content. For example, if a lay person is asked to plan her/his dream house, she/he will likely not consider the possibility of building the roof with caramel, cork and ice. In Frith's framework this reduction in the number of alternative hypotheses considered, and the building of the ad hoc category of ‘suitable material for roof making’ is ascribed to frontal lobes. To date, the corroborating evidence mainly consists of neuroimaging studies that either directly refer to Frith's theory (Fletcher et al., 2000; Nathaniel-James and Frith, 2002) or belong to different but compatible theoretical views (e.g. Duncan et al., 2000; Kerns et al., 2004; Koechlin et al., 2003; Thompson-Schill et al., 1997). Just a few neuropsychological studies have considered the issue so far, and only within the domains of semantic memory and language processing (Metzler, 2001; Thompson-Schill et al., 1998).
The aim of the present study is to test the prediction of Frith's ‘sculpting the response space’ hypothesis by means of a problem-solving task of the ‘insight’ type (Sternberg and Davidson, 1995), namely the matchstick arithmetic task (Knoblich et al., 1999).
In this task, a false arithmetic statement, written using roman numerals (e.g. ‘I’, ‘II’ and ‘IV’), operations (‘+’ and ‘−’) and an equal sign (‘=’) all composed of matchsticks, can be transformed into a true statement by moving only one stick from one position to another within the pattern. It has been shown that some types of matchstick arithmetic problems are quite difficult for healthy participants, particularly those in which the only solution involves changing the operators (+ and −), producing a tautological statement (e.g. IV = IV = IV), or both (Knoblich et al., 1999). Knoblich and collaborators interpreted this pattern of results as suggesting that the initial representation of those kinds of problem was biased by two strong constraints, the ‘operator constraint’ and the ‘tautology constraint’, that prevented participants from considering and evaluating the correct solutions. In Frith's terminology, the response space that was sculpted by the dorsolateral frontal lobes of the healthy participants excluded the correct responses. Thus the search for them was unsuccessful, at least until the response space was revised. Knoblich et al.'s (1999) hypothesis was corroborated in another study (Knoblich et al., 2001) in which eye movements of participants solving the same problems were recorded. They were able to show that participants, during the initial phase, attended more frequently to the result and the operands of the equations than to the operator and the equal sign (e.g. for ‘V = III − II’ healthy participants attended more to ‘V’, ‘III’ and ‘II’ than to ‘=’ and ‘−’). They also found that solution finding corresponded to an increased number of eye movements towards the previously neglected but crucial element of the equation. Knoblich et al. argued that in this second phase participants revised the initial misleading representation of the problem, thus eliminating (‘relaxing’) the inappropriate constraints, and could access the solution.
By combining both Knoblich et al.'s (1999) interpretation for the failure of healthy controls on the matchstick problem, and Frith's theory on the role of dorsolateral frontal cortex, we derived the counterintuitive prediction that a group of patients with focal damage to the lateral frontal cortex should perform better than a group of healthy participants on this rather difficult task. Indeed, if both theories held true (A, Knoblich et al.'s; B, Frith's), the cognitive factors causing the inadequate performance of healthy participants (the two detrimental constraints, ‘operators’ and ‘tautology’, following theory A) would no longer be present in a group of lateral frontal patients (following theory B). Hence the prediction of lateral frontal patients performing better than a control group on the matchstick task.
In this study we administered a revised version of the matchstick arithmetic task (Knoblich et al., 1999) to a group of patients with focal brain damage to the frontal lobes. Since Frith's theoretical claim only regards the lateral surface of the frontal lobes, we divided our series into lateral and medial damaged patients. The crucial test for the theory will be that on lateral patients; data from the medial subgroup will be useful, at most, for estimating the anatomical specificity of the phenomenon.
The prediction of patients outperforming healthy participants provides a methodological advantage. Since only in very rare instances does a lesion to a cognitive structure produce an improvement in performance (e.g. Warrington and Davidoff, 2000), it can be reasonably assumed that all processes and representations occurring before the crucial stage in the information flow (in the present case, the ‘response space sculpting’ function) are relatively spared, without being forced to test this hypothesis with additional manipulation or diagnostic tools. This advantage applies equally well to all studies driven by similar predictions.
## Materials and methods
### Participants
Thirty-five patients with a single focal brain lesion as determined by a CT or an MRI scan were recruited from the neurological and neurosurgical wards of Ospedale Civile in Udine (Italy). All patients gave their consent to participate in the study; the study was approved by the ethical committee of SISSA-ISAS (International School for Advanced Studies, Trieste). The aetiology varied across patients: stroke, neoplasm and arachnoid cyst (Table 1). Exclusion criteria were (i) a clinical history of psychiatric disorders, substance abuse or previous neurological disease, (ii) neuroradiological evidence of diffuse brain damage, (iii) age <18 or >70 and (iv) educational level <8 years. Time after lesion onset (Table 2) ranged between 7 and 1507 days (the onset date considered in the case of neoplasm was that of surgery). Twenty-three normal control volunteers also participated in the study. Controls were matched to patients for age and educational level. There were no significant differences between frontal patients and controls both for age [t(56) = 0.388, P > 0.1] and education [t(56) = 0.403, P > 0.1].
Table 1
Aetiology for each lesion group (arachnoid cyst, meningioma, stroke): absolute frequencies of patients included in the study, by group (LAT, MED, and patients overall). LAT, lateral frontal; MED, medial frontal.
Table 2
Demographic and clinical variables for each lesion group and for control participants

| | LAT | MED | Patients overall | CTL |
| --- | --- | --- | --- | --- |
| N | 17 | 18 | 35 | 23 |
| Age [mean (SD)] | 47 (13) | 47 (11) | 47 (11) | 46 (9) |
| Education [mean (SD)] | 10.59 (2.83) | 10.44 (2.64) | 10.51 (2.69) | 10.22 (2.83) |
| Sex [female proportion (%)] | 50 | 71 | 60 | 56 |
| Lesion volume (cc) [mean (SD)] | 50 (47) | 44 (32) | 47 (40) | |
| Days since lesion [median (range)] | 653 (7–1314) | 360 (7–1507) | 619 (7–1507) | |
LAT, lateral frontal; MED, medial frontal; CTL, control group.
For all patients we obtained a CT or an MRI scan. Patients were assigned to two anatomically defined subgroups depending on their lesion site: medial (MED), in which the lesion involved the orbital and/or the medial surface on one or both frontal lobes, and lateral (LAT), showing unilateral damage to the frontal lobe convexity (Fig. 1). This classification was carried out by a senior neuroradiologist (S.D.A.) blind to the behavioural results. All lesions were also mapped using the free MRIcro (www.mricro.com) software distribution (Rorden and Brett, 2000) and were drawn manually on slices of a T1-weighted template MRI scan from the Montreal Neurological Institute (www.bic.mni.mcgill.ca/cgi/icbm_view). This template is oriented to approximately match the Talairach space (Talairach and Tournoux, 1988) and is distributed with MRIcro. As a final result, 17 patients were classified as lateral and 18 as medial. Lesion volume was also obtained by means of the automatic routines of MRIcro.
Fig. 1
Overlay lesion plots for the two lesion subgroups. The number of overlapping lesions in each voxel is illustrated on a grey scale: the lighter a voxel, the higher the number of patients with damage to that voxel. The grey scale is devised so that voxels that were damaged with maximal frequency within a patient subgroup are shown in white. Thus, white areas were damaged in 6 out of 17 lateral frontal patients, and in 10 out of 18 medial frontal patients. Talairach z-coordinates (Talairach and Tournoux, 1988) of each transverse slice are 45, 55, 65, 75, 85, 95, 105, 115, 125, 135, 145.
### Materials and procedure
A matchstick arithmetic problem (Knoblich et al., 1999) consists of a false arithmetic statement written with roman numerals (I, II, III, etc.), arithmetic operations (+, −), and equal signs constructed out of matchsticks. For example in
$\mathrm{IV} = \mathrm{III} + \mathrm{III} \quad (1)$
the participant is required to move a single stick in such a way that the initial false statement is transformed into a true arithmetic statement. A move consists of grasping a single stick and moving it, rotating it or sliding it. The rules are that (i) only one stick can be moved, (ii) a stick cannot be discarded and (iii) the result must be a correct arithmetic statement. Two additional rules are that (iv) an isolated slanted stick cannot be interpreted as I (one) and that (v) a V symbol must always be composed of two slanted sticks.
As an example, consider the false equation (1) above: it can be transformed into the true equation by moving the left-most stick of number ‘IV’ to the immediate right of the ‘V’:
$\mathrm{VI}\ =\ \mathrm{III}\ +\ \mathrm{III}$
All matchstick arithmetic problems are composed of three roman numerals separated by two arithmetic signs and have a unique solution, consisting of a single move. Hence, differences in difficulty are solely a function of how hard it is to think of the correct move.
Three different classes of problems can be identified on the basis of the kind of move necessary to achieve the solution. These classes are:
#### Type A
This type of problem is solved by moving a matchstick that is part of one numeral to another numeral. For example, the problem ‘II = III + I’ is solved by moving one of the matchsticks of the ‘III’ to the ‘II’ at the head of the equation, yielding ‘III = II + I’.
#### Type B
In this case it is necessary to move a matchstick from the equal sign to the minus sign, turning the former into a minus sign and the latter into an equal sign. Thus, e.g., the false equation ‘IV = III − I’ should be transformed into the true ‘IV − III = I’.
#### Type C
In this last problem type, a plus sign has to be changed, by rotating its vertical matchstick through 90°, into an equal sign. Crucially, this action transforms the starting equation into a tautology; e.g. ‘VI = VI + VI’ becomes ‘VI = VI = VI’.
Four blocks of three problems were administered (Table 3). In each of the four blocks, an equation of each type was presented in pseudo-random order. The equations were built by the experimenter on the table in front of the participants by using real matchsticks. Participants were allowed to touch and move the matchsticks while they were looking for the solution. Three minutes were given to solve each problem.
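The rules above lend themselves to a simple mechanical check. The sketch below is my own illustration (the names and representation are not from the paper): it verifies that a proposed solution is a true statement and that it conserves the total number of matchsticks, a necessary (though not sufficient) condition for a legal single-stick move:

```python
import re

# Matchsticks needed to draw each symbol (roman numerals use I, V, X).
STICKS = {"I": 1, "V": 2, "X": 2, "+": 2, "-": 1, "=": 2}
ROMAN = {"I": 1, "V": 5, "X": 10}

def roman_value(numeral):
    """Evaluate a roman numeral, honoring subtractive forms like IV."""
    total = 0
    for ch, nxt in zip(numeral, numeral[1:] + " "):
        v = ROMAN[ch]
        total += -v if nxt in ROMAN and ROMAN[nxt] > v else v
    return total

def stick_count(statement):
    return sum(STICKS[ch] for ch in statement if ch != " ")

def side_value(side):
    """Evaluate one '='-free side made of numerals joined by + and -."""
    value, sign = 0, 1
    for tok in re.findall(r"[IVX]+|[+-]", side):
        if tok in "+-":
            sign = 1 if tok == "+" else -1
        else:
            value += sign * roman_value(tok)
            sign = 1
    return value

def is_true_statement(statement):
    """True iff all '='-separated sides evaluate to the same number
    (tautologies such as 'VI = VI = VI' count as true)."""
    values = [side_value(s) for s in statement.replace(" ", "").split("=")]
    return all(v == values[0] for v in values)

def looks_like_single_move(problem, solution):
    # Conservation of sticks is necessary for any legal move; a full checker
    # would additionally verify that exactly one stick changed position.
    return stick_count(problem) == stick_count(solution) and is_true_statement(solution)
```

For instance, `looks_like_single_move("IV = III + III", "VI = III + III")` accepts the type A example from the text, and the type C move ‘VI = VI + VI’ → ‘VI = VI = VI’ also passes, since rotating the plus sign's vertical stick conserves the stick count.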
Table 3
Matchstick problems in the experimental presentation order

| Block | Type | Problem | Solution |
| --- | --- | --- | --- |
| 1 | B | IV = III − I | IV − III = I |
| 1 | A | VI = VII + I | VII = VI + I |
| 1 | C | III = III + III | III = III = III |
| 2 | A | IV = III + III | VI = III + III |
| 2 | B | V = III − II | V − III = II |
| 2 | C | VI = VI + VI | VI = VI = VI |
| 3 | B | VIII = VI − II | VIII − VI = II |
| 3 | C | IV = IV + IV | IV = IV = IV |
| 3 | A | II = III + I | III = II + I |
| 4 | C | VII = VII + VII | VII = VII = VII |
| 4 | A | VII = II + III | VI = III + III |
| 4 | B | VI = IV − II | VI − IV = II |

For problems in blocks 2–4, Cue 1 and Cue 2 repeated the problem with, respectively, fewer and more of its elements marked as not to be changed, Cue 2 being the more informative (see text).
If a participant failed to solve a problem on the first block, the solution was not shown, and the next problem was presented. By contrast, if the participant failed in the succeeding blocks, a cueing procedure followed each unsuccessful trial, with ‘first-level’ and ‘second-level’ cues (Table 3). Both levels consisted of suggesting to the participant to avoid moving or changing some of the components of the equation. The second-level cue was more informative than the first-level one, and included the information given in it. If the participant failed after the first-level cue, she/he was given the second-level cue. For example, for the problem ‘V = III − II’, unsuccessful participants were informed that they should leave unchanged:
• (first-level cue) ‘II’, i.e. participants were informed that they would not find the solution by changing the roman numeral ‘II’;
• (second-level cue) ‘V’, ‘III’ and ‘II’, i.e. participants were informed that they would not find the solution by changing the roman numerals ‘V’, ‘III’ or ‘II’.
After each cue, a further minute was granted to look for the solution. To make the cues clear and readily available throughout the whole thinking period, the elements that were not to be changed were composed of black sticks of the same size as the original ones.
Accuracy and solution time were collected for each problem.
### Variables
We analysed the following variables.
• Index of success (‘success score’ henceforth). This index estimated the participants' ability to solve the problems without assistance, i.e. before the delivery of cues. Therefore, the index was derived from the performance on the first two attempts to solve a problem type (first two blocks). The index was dichotomous: if the participant could work out the correct answer in at least one of those two attempts, she/he was given a ‘pass’ score (1); a ‘fail’ score (0) was given otherwise. Since the index was specific of the problem type, each participant received three dichotomous scores: ‘success A’, ‘success B’, ‘success C’.
• Index of accuracy after relaxation of constraint (‘relax score’). When a subject solves a problem for the first time, either autonomously or helped by the cueing procedures, the solution-preventing constraint is removed (‘relaxed’). Therefore, the performance on that problem type after the first success expresses the participant's ability to solve it in the absence of constraints. This ability was estimated by means of the ‘relax’ score, i.e. the proportion of times a participant was able to find the solution (without cues) after having solved that specific problem type (with or without cues) for the first time. For instance, if a participant obtained the following scores for type A problems [1st block: 0; 2nd block: 0 (after cue, 1); 3rd block: 1; 4th block: 0 (after cue, 1)], the participant was given a type A relax index of 0.5. Indeed, after the first success (the cued success in the 2nd block), she/he obtained one pass out of two uncued attempts (blocks 3 and 4). Three relax indices were thus obtained, one for each problem type (‘relax A’, ‘relax B’, ‘relax C’).
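The scoring rule described above can be expressed compactly in code. The following is an illustrative sketch (the function name and data layout are hypothetical, not taken from the study): each block is recorded as a pair of booleans (solved without cues, solved after a cue), and the relax index is the proportion of uncued successes among the blocks that follow the first success of any kind.

```python
def relax_index(block_results):
    """Hypothetical helper: 'relax' index from per-block results.

    block_results: list of (solved_uncued, solved_after_cue) booleans,
    in presentation order. Returns the proportion of uncued successes
    among the blocks after the first success (cued or not), or None if
    there is no success or no later attempt.
    """
    first_success = None
    for i, (uncued, cued) in enumerate(block_results):
        if uncued or cued:
            first_success = i
            break
    if first_success is None or first_success == len(block_results) - 1:
        return None
    later = block_results[first_success + 1:]
    return sum(1 for uncued, _ in later if uncued) / len(later)

# Worked example from the text: blocks scored 0, 0 (cued success),
# 1, 0 (cued success)  ->  relax = 1/2
blocks = [(False, False), (False, True), (True, False), (False, True)]
print(relax_index(blocks))  # 0.5
```

This reproduces the worked example in the text: one uncued pass out of the two attempts that follow the first (cued) success.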
### Appropriateness of the task for testing theoretical predictions
#### Replication of Knoblich et al.'s (1999) results
Since our control and patient samples tapped a different and wider range of age and education than that of the original study (Knoblich et al., 1999), a first step was to see whether or not we were able to replicate its main findings. Therefore, normal participants should show a significantly better performance on low-constraint problems (type A) than on high-constraint problems (types B and C).
#### Specific form of the main prediction
The main prediction of this study is that patients should perform better than controls on high-constraint problems (types B and C). That is, they should outperform controls on Success B and Success C scores. This should happen if (i) the Frith–Knoblich theory is true and (ii) patients do not have concomitant deficits, i.e. deficits other than the inability to sculpt the response space, affecting their matchstick performance (e.g. processing difficulties; Kershaw and Ohlsson, 2004). Indeed, if such supplementary deficits were present, they would mask the advantage of patients by reducing it, nullifying it, or even reversing it into a disadvantage. Therefore, in order to provide a valid test of the Frith–Knoblich prediction, problem types that were found to contain, for the patients, difficulties other than the constraints had to be excluded.
Such extra difficulties would be apparent from looking at Relax scores (that are independent of Success scores, on which the crucial prediction was made), for the following reasons. Suppose that the constraints were the only difficulties. In this case, after their relaxation, performance should reach a high level both in patients and in controls, without group differences. Suppose instead that patients had supplementary difficulties. If this were the case, they would still show a disadvantage with respect to controls, even after constraint relaxation. Therefore, problem types that produced a significant disadvantage for patients versus controls after relaxation (i.e. on relax scores) had to be excluded.
By considering these remarks, the Frith–Knoblich prediction takes a more specific form: frontal lateral patients should obtain success scores higher than those of normal controls, on problem types that were solved with high probability and equally well by both groups after constraint relaxation (relax scores).
### Planned statistical analyses
#### Success scores
Success scores were dichotomous (pass–fail); thus, nominal-scale statistics were appropriate in this case. Fisher's Exact test was used for between-subjects comparisons. The McNemar test and Cochran's Q were used for, respectively, two- and three-level within-subjects comparisons.
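For illustration, a one-tailed Fisher's exact test on a 2 × 2 table can be computed from first principles with the standard library. This is a sketch of the statistic only, not the software actually used in the study:

```python
from math import comb

def fisher_exact_one_tailed(a, b, c, d):
    """One-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Returns P(top-left cell >= a) under the hypergeometric null with
    fixed row and column margins.
    """
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        if col1 - k <= n - row1:  # skip infeasible tables
            p += comb(row1, k) * comb(n - row1, col1 - k) / denom
    return p

# Classic 'lady tasting tea' table [[3, 1], [1, 3]]: p = 17/70
print(fisher_exact_one_tailed(3, 1, 1, 3))  # 0.2428571...
```

The loop simply sums hypergeometric probabilities over all tables at least as extreme as the observed one, which is exactly the "exact distribution" logic the analyses rely on.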
The performance of control and patient groups during the first two blocks was compared for each type of problem. Moreover, the performance on each high-constraint problem type (B and C) was compared with that on the low-constraint problem type (A). Only the types of problem meeting the criterion discussed above (section Specific form of the main prediction) were considered. Since we had a priori expectations on the direction of the effects, one-tailed P-values were used. P-values were all estimated from Exact distributions.
#### Relax scores
Relax scores ranged between 0 and 1, with few intermediate values (0.33, 0.66). Therefore, ordinal-level non-parametric analyses were prudently applied. The Mann–Whitney test was carried out for between-subjects comparisons, while Wilcoxon and Friedman tests were applied for, respectively, two- and three-level within-subjects comparisons. Since we expected patients to show, if anything, a worse Relax index than controls, one-tailed P-values were considered. P-values were all estimated from Exact distributions.
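The rank-based tests can likewise be sketched from first principles. Below is an illustrative computation (hypothetical helpers, not the authors' analysis code) of the Mann–Whitney U statistic, together with an exact one-tailed p-value obtained by enumerating all group assignments, mirroring the paper's use of exact distributions:

```python
from itertools import combinations

def mann_whitney_u(x, y):
    """U statistic for sample x versus y (0.5 credit for ties)."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

def exact_p_greater(x, y):
    """Exact one-tailed P(U >= observed U) over all group assignments."""
    obs = mann_whitney_u(x, y)
    pooled = list(x) + list(y)
    n, k = len(pooled), len(x)
    count = total = 0
    for idx in combinations(range(n), k):
        chosen = set(idx)
        xs = [pooled[i] for i in idx]
        ys = [pooled[i] for i in range(n) if i not in chosen]
        total += 1
        if mann_whitney_u(xs, ys) >= obs:
            count += 1
    return count / total

print(mann_whitney_u([4, 5, 6], [1, 2, 3]))   # 9.0 (maximal separation)
print(exact_p_greater([4, 5, 6], [1, 2, 3]))  # 0.05 (1 of 20 assignments)
```

Full enumeration is only feasible for small samples, which is the regime where exact (rather than asymptotic) p-values matter most.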
## Results
### Effects of clinical variables
A logistic regression analysis was carried out on both the success and the relax scores of the patient group. Predictors were lesion volume (cc) and the logarithm of post-onset time (days). No significant effect of either variable on either score was found.
#### Healthy participants
Success indices were significantly different according to problem type (A, B and C) in the control group (Cochran's Q = 10.778, P < 0.01; see Fig. 2). As expected, control participants scored significantly better on problem type A than on B (McNemar test, P < 0.05) and C (McNemar test, P < 0.01). Relax indices (Fig. 3) did not differ significantly for different problem types (Friedman test, χ2 = 3.800, P > 0.1). This pattern of results replicated successfully the main findings of Knoblich et al. (1999) in an older and less educated population.
Fig. 2
Success scores of patient and control groups on the matchstick arithmetic task for each problem type. LAT, lateral frontal; MED, medial frontal; CTL, control group. *P < 0.05, **P < 0.01 for the control group versus patient subgroups comparisons.
Fig. 3
Relax score of patient and control groups on the matchstick arithmetic task for each problem type. Conventions as in Fig. 2.
#### Patient groups
Lateral frontal patients showed a significant effect of problem type (A, B, C) on relax indices (Friedman test, χ2 = 17.429, P < 0.001; see Fig. 3). This finding is attributable to type B problems, which had a significantly lower relax score than both type A (Wilcoxon test, P < 0.01) and type C (Wilcoxon test, P < 0.01) problems. By contrast, type A and type C problems had similar relax scores (Wilcoxon test, P > 0.1). Moreover, relax B index of lateral frontal patients was significantly lower than that of the control group (Mann–Whitney, z = 3.150, P < 0.01), but relax A (Mann–Whitney, z = 1.405, P > 0.05) and relax C (Mann–Whitney, z = 0.878, P > 0.1) were not.
In medial frontal patients relax indices for A, B and C types (Fig. 3) showed a statistical trend in being different from each other (Friedman test, χ2 = 5.448, P < 0.1). This trend is attributable to an advantage of type C over both type B (Wilcoxon test, P < 0.01) and type A (Wilcoxon test, P < 0.01). Relax B scores of medial frontal patients were significantly lower than those of the control group (Mann–Whitney, z = 2.343, P < 0.05). This difference was not found for relax A (Mann–Whitney, z = 1.346, P > 0.05) or relax C (Mann–Whitney, z = 0.758, P > 0.1).
The relax score of patients (from both anatomical subgroups) on type B problems was significantly lower with respect to both the controls' score on that same type, and the scores of the patients themselves on problem types A and C. Thus, according to the above-mentioned criteria (see the section Specific form of the main prediction) type B was not considered in the following analyses.
### Comparison between success scores of control and patient subgroups
The type A success score of the lateral group was not significantly different from that of the control group (Fisher's exact test, P > 0.1). In contrast, lateral patients performed significantly better than controls on problem type C (Fisher's exact test, P < 0.05). Within the lateral group, A and C success scores were not significantly different from each other (McNemar test, P > 0.1).
The success score of the medial group was not significantly different from that of controls on either type A or type C problems (Fisher's exact test, P > 0.1). Within the medial group, A and C success scores were not significantly different from each other (McNemar test, P > 0.1).
### Is the patients' profile specific of the lateral subgroup?
The success profile of medial patients was not significantly different from either controls' or lateral patients' profiles (Fig. 2). Given this lack of significance, it is not possible to rule out either of the extreme possibilities: that the real medial patients' profile matches closely the controls' or the lateral patients' pattern. One possible suggestion is that medial patients' profile fell in between the other two groups' profiles (see especially type C problems, Fig. 2), because some of the medial patients had a minor involvement of the lateral cortex surface. If this were the case, by excluding patients with such minor involvement from the medial group, the pattern of the remaining patients should match that of controls. We selected, in a further analysis, the subset of the medial group that had pure medial or orbito-frontal damage, that is patients with no involvement whatsoever of the frontal lobe convexity (n = 9; Fig. 1, Supplementary material). Interestingly, their mean success score on type A was only 56%, significantly lower than both lateral patients' (Fisher's exact test, P < 0.05) and controls' (Fisher's exact test, P < 0.05) scores. It is thus clear that pure medial patients cannot be assumed to have an entirely normal profile. The crucial type C success score was also 56%, not significantly different from either controls' or lateral patients' scores. Overall, the success profile of the pure medial group was essentially flat (the effect of problem type was not significant; Cochran's Q = 1.143, P > 0.05) and quite far from optimal (type A: 56% = 5/9; B: 33% = 3/9; C: 56% = 5/9; see Fig. 2 and Table 1, Supplementary material).
## Discussion
It is widely acknowledged that frontal cortex is crucial in order to cope with problems that are novel and ‘difficult’. In this study, we tested a prediction that was deduced from the combination of Frith's theory (Frith, 2000) on lateral frontal cortex functions with Knoblich and collaborators' observational theory on the matchstick arithmetic task (Knoblich et al., 1999). The counterintuitive prediction was that lateral frontal patients should perform better than healthy controls on the most difficult trials of a novel task.
Two requisites were necessary for our experiment to be an adequate test of the proposed prediction. First, our revised version of Knoblich et al.'s task should have produced, from healthy participants, the same results pattern as that of the original study, in spite of age and education differences. Second, problem types should have been excluded that encompassed, for lateral patients, difficulties other than the triggering of inappropriate constraints, i.e. problems that were still performed below normal level after ‘relaxation’ of those constraints. In this way, the possibility that concomitant deficits at stages other than the one of interest (the constraint implementation stage) masked the predicted supra-normal performance by lateral patients would have been ruled out.
Both of these prerequisites were met. Although older and less educated, our control subjects showed a progressively declining performance from type A to B to C (91, 57, 43%, respectively, see Fig. 2), as did Knoblich et al.'s participants. Furthermore, problem type B was excluded because relaxation of constraints, for lateral patients, did not lead to normal performance. Therefore, by analysing only problem types A and C we could test the prediction generated by the Frith–Knoblich model.
The pattern of results closely matched the prediction. Lateral frontal patients were as successful as healthy controls in solving type A problems, which have weak constraints, but significantly more successful than controls on type C problems, which have strong constraints. Besides statistical significance, the difference was far from negligible (82% of lateral patients solved those problems, while only 43% of controls did so). The hypothesis that lateral frontal patients constrain the response space less than controls do, is thus corroborated. According to this view, it is as if lateral frontal patients faced a problem with a trial-and-error approach without a prior assessment of the likely fruitfulness or appropriateness of a strategy. They would simply explore the whole of the response space (see appendix I). In the artificial situation of the matchstick arithmetic task, this procedure can be an advantage, but in real life situations, in which a preliminary downgrade of the possibilities to be considered is a necessary step in order to make problems tractable, this produces less efficient and more disorganized behaviour (Shallice and Burgess, 1991).
The hypothesis that lateral patients apply a trial-and-error approach could also account for their impairment on type B problems. By evaluating problems, without any constraint whatsoever apart from respecting the composition rules for roman numerals and the rules of equation writing it can easily be observed that type B problems have about twice as many possible matchstick moves (on average, 10.2) as types A and C problems (respectively 5.7 and 5.5). Thus, it is possible to speculate that lateral patients' advantage in having no constraints is not enough to compensate for the higher computational load induced by problems with a larger response space (Kershaw and Ohlsson, 2004) (see appendix II).
### Processing contextual hints
One might wonder whether the inability to implement constraints is the only possible explanation for the present findings from the matchstick arithmetic task. Another explanation might be that in this task the ‘impasse’ arises in healthy participants because they inappropriately represent the problems as if they came from the superficially similar field of algebra (Knoblich et al., 1999). For instance:

• Healthy participants might initially think that the solution should respect the general prototypical form of an equation, with at least an equal sign and an operator (+ or −). Thus tautologies (e.g. III = III = III) are not considered appropriate solutions (type C problems);

• Healthy participants might think that the only variable parts of an equation are numbers (type B and C problems).

In this view, the behaviour of lateral frontal patients could be explained by proposing that they do not take into consideration contextual hints, as proposed by a number of theories of frontal lobe functions (e.g. Braver and Cohen, 2000; Kerns et al., 2004; Metzler, 2001). In the present case, the superficial similarities of the matchstick problems to simple equations could constitute the relevant contextual hint. Frontal patients may not be able to take advantage—or disadvantage as in the present case—of tricks that apply to the more familiar field of simple mathematical problems (see the two examples above). More generally, lateral frontal patients would not ‘enrich’ their representation of the problematic situation (more specifically, of the ‘goal state’ in cognitivistic terms) with past experience, as healthy controls do. This explanation would link the present findings to evidence suggesting a crucial role of lateral frontal cortex in the efficient encoding of novel stimuli (particularly in the memory domain), e.g. through chunking (Bor et al., 2003).
According to this explanatory hypothesis, lateral frontal patients would still be able to implement constraints on the response space but they would not do so because they fail to envisage or generate possible candidates to the role of constraint. The present study does not provide enough evidence to decide whether our findings are related to the inability to ‘sculpt the response space’ by implementing constraints, or to the inability to generate candidates to the role of constraint before implementation.
### Medial frontal patients
The general pattern of success scores of medial patients seemed quite similar to that of lateral patients, although less marked (see Fig. 2). Nevertheless, from the statistical viewpoint, this medial profile was also not distinguishable from that of the controls. Given the present data, two alternative accounts may be proposed.
• Frith's claim about the anatomical structure subserving constraints implementation (lateral frontal) is correct. In the present medial group, some patients had also the lateral cortical surface involved, which would explain the partial resemblance of the medial group's to the lateral group's success profile.
• The lateral frontal cortex is not the only region involved in constraints implementation, and perhaps some medial structure is also crucial in these processes (such as the anterior cingulate, as Duncan and collaborators would argue, see Duncan and Owen, 2000).
The prediction of the first hypothesis (i) is that in the pure medial subgroup, patients with a lesion of the medial or orbitofrontal cortex but without any involvement whatsoever of the frontal lobe convexity, should show, as controls do, a clear effect of constraint level. Type C success score should be lower than type A score (type B is to be excluded as discussed in the section Specific form of the main prediction). By contrast, if hypothesis (ii) held true, pure medial patients should show, as lateral patients do, no effect of constraint level. They should show the same performance on types A and C and a clear advantage on problem type C with respect to the control group. The pattern of results (see Fig. 2 and Table 1, Supplementary material) is more compatible with hypothesis (ii). Pure medial patients obtained the same score (56%) on both type A and type C problems. However, unlike lateral patients' scores, their performance was clearly suboptimal. For instance, pure medial patients obtained a significantly lower type A score (56%) with respect to both lateral (94%) and control (91%) groups. An additional cognitive deficit (e.g. of initiation) is to be assumed in order to explain why pure medial patients did not take full advantage, as lateral patients did, from the absence of constraints.
In summary, pure medial patients' performance may be interpreted as the effect of a combination of an aspecific cognitive deficit (e.g. a general slowing down), that accounts for the suboptimal level of performance, and a lack of constraints on response space, that accounts for the similarity of the success index across problem types. In any case, the pure medial sample size (n = 9) makes conclusions premature in this respect.
## Conclusions
This study provides strong evidence in favour of a role of lateral—and perhaps medial—frontal structures in biasing response space. Further evidence, allowing one to carry out a single-case estimation of biasing ability, would be useful in order to build more fine-grained anatomo-functional maps.
One final remark regards the advantage of testing theoretically driven predictions of a better performance in a neurological population with respect to a control population. Brain lesions produce deficits, i.e. performances that are lower than in healthy controls, much more frequently than improvements. Therefore, models of the cognitive architecture have been proposed in cognitive neuropsychology that, when ‘damaged’, predict deficits. As a consequence, if one wants to draw inferences about stage X, she/he has to guarantee that there are no deficits either downstream or upstream from X in the information flow. In a trivial example, both blindness and plegia have to be excluded if one wants to study visual depth perception in a task in which the patient has to respond by pressing a key. If the prediction of a cognitive theory is that of a paradoxical improvement of performance after a lesion of stage X, there is no need to test all the other processing stages for integrity. If the latter were damaged, this would, if anything, predict a deficit in the final performance, not an improvement. The advantage is, therefore, in terms of much fewer tests to be administered to each participant. Furthermore, theories that predict counterintuitive phenomena, like improvement after brain damage, tend to have relatively fewer rival theories. Thus, whenever a counterintuitive prediction is confirmed, it represents a strong corroboration in favour of the originating theory (Lakatos, 1978).
## Appendices
### Appendix I
In this context an unpublished finding from a former study by our lab (Reverberi et al., 2005) is worth mentioning. In this work, the authors administered the Brixton Spatial Rule Attainment task to a sample of frontal patients. On this task, participants were presented with a card containing a 2 × 5 display of circles, only one of which was coloured blue. The blue circle moved from one card to the next following five rules. Participants had to predict which circle should be blue on the next card. One of the rules was ‘stay the same’, that is the blue circle remained in the same position across six cards. This rule was quite difficult for the control group (the rule was attained by only 32% of the healthy participants, see Fig. 3, Supplementary material), but not for the right lateral damaged patients (70%, significantly >32%). If we interpret the difficulty of the control group as arising from an a priori constraint—not specified in the instructions—such as ‘the blue circle should move in some direction’, the advantage of right lateral patients could be explained by appealing to Frith's theory. In the present study, the advantage observed in lateral frontal patients tended to be higher in right than in left unilateral patients (right: N = 9; success type C = 0.89, significantly higher than the control group's; left: N = 8; success type C = 0.75, not significantly higher than the control group's).
### Appendix II
Some examples of computation of the number of possible moves follow:
• Problem: IV = IV + IV. Four possible moves: VI = IV + IV; IV = VI + IV; IV = IV + VI; IV = IV = IV.
• Problem: V = III − II. Ten possible moves: VI = II − II; IV = II − II; VI = III − I; IV = III − I; V = II − III; V = III + I; V = II + II; V = II = II; V = III = I; V − III = II.
The stimuli used in the present work are a subset of Knoblich et al.'s (1999) problems. At an early stage of their work (G. Knoblich, personal communication) the authors found no effect of number of possible moves in a set of healthy participants. Hence, this variable was not taken into account either in their published study or in the present work. Further investigation, in which problem type and number of possible moves are independently manipulated, could help clarify the relevance of the latter factor in a patient population.
The study was carried out in the Santa Maria della Misericordia Hospital (Udine, Italy), in SISSA (Trieste, Italy) and in Università Milano-Bicocca (Milano, Italy). We are grateful to Günther Knoblich, Tim Shallice and Claudio Luzzatti for their thoughtful comments on earlier drafts of the paper. We are also indebted to Ms Sarah J. Healy for her kind help in revising the English.
## References
Bor D, Duncan J, Wiseman RJ, Owen AM. Encoding strategies dissociate prefrontal activity from working memory demand. Neuron 2003;37:361–7.

Braver TS, Cohen JD. On the control of control. The role of dopamine in regulating prefrontal function and working memory. In: Monsell S, Driver J, editors. Attention and performance XVIII: control of cognitive performance. Cambridge, MA: MIT Press; 2000. p. 713–37.

Duncan J. An adaptive coding model of neural function in prefrontal cortex. Nat Rev Neurosci 2001;2:820–9.

Duncan J, Owen AM. Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends Neurosci 2000;23:475–83.

Duncan J, Seitz RJ, Kolodny J, Bor D, Herzog H, Ahmed A, et al. A neural basis for general intelligence. Science 2000;289:457–60.

Fletcher PC, Shallice T, Dolan RJ. “Sculpting the response space”—an account of left prefrontal activation at encoding. Neuroimage 2000;12:404–17.

Frith CD. The role of the dorsolateral prefrontal cortex in the selection of action as revealed by functional imaging. In: Monsell S, Driver J, editors. Control of cognitive processes: attention and performance XVIII. Cambridge, MA: MIT Press; 2000. p. 549–65.

Goel V, Dolan RJ. Differential involvement of left prefrontal cortex in inductive and deductive reasoning. Cognition 2004;93:B109–21.

Henry JD, Crawford JR. A meta-analytic review of verbal fluency performance following focal cortical lesions. Neuropsychology 2004;18:284–95.

Kerns JG, Cohen JD, Stenger VA, Carter CS. Prefrontal cortex guides context-appropriate responding during language production. Neuron 2004;43:283–91.

Kershaw TC, Ohlsson S. Multiple causes of difficulty in insight: the case of the nine-dot problem. J Exp Psychol Learn Mem Cogn 2004;30:3–13.

Knoblich G, Ohlsson S, Haider H, Rhenius D. Constraint relaxation and chunk decomposition in insight problem solving. J Exp Psychol Learn Mem Cogn 1999;25:1534–55.

Knoblich G, Ohlsson S, Raney GE. An eye movement study of insight problem solving. Mem Cognit 2001;29:1000–9.

Koechlin E, Ody C, Kouneiher F. The architecture of cognitive control in the human prefrontal cortex. Science 2003;302:1181–5.

Lakatos I. The methodology of scientific research programmes. Cambridge (UK): Cambridge University Press; 1978.

Metzler C. Effects of left frontal lesions on the selection of context-appropriate meanings. Neuropsychology 2001;15:315–28.

Nagahama Y, Fukuyama H, Yamauchi H, Matsuzaki S, Konishi J, Shibasaki H, et al. Cerebral activation during performance of a card sorting test. Brain 1996;119:1667–75.

Nathaniel-James DA, Frith CD. The role of the dorsolateral prefrontal cortex: evidence from the effects of contextual constraint in a sentence completion task. Neuroimage 2002;16:1094–102.

Reverberi C, Lavaroni A, Gigli GL, Skrap M, Shallice T. Specific impairments of rule induction in different frontal lobe subgroups. Neuropsychologia 2005;43:460–72.

Rorden C, Brett M. Stereotaxic display of brain lesions. Behav Neurol 2000;12:191–200.

Shallice T. Specific impairments of planning. Philos Trans R Soc Lond B Biol Sci 1982;298:199–209.

Shallice T, Burgess PW. Deficits in strategy application following frontal lobe damage in man. Brain 1991;114:727–41.

Sternberg RJ, Davidson JE. The nature of insight. Cambridge, MA: MIT Press; 1995.

Stuss DT, Alexander MP, Palumbo CL, Buckle L, et al. Organizational strategies with unilateral or bilateral frontal lobe injury in word learning tasks. Neuropsychology 1994;8:355–73.

Stuss DT, Levine B, Alexander MP, Hong J, Palumbo C, Hamer L, et al. Wisconsin Card Sorting Test performance in patients with focal frontal and posterior brain damage: effects of lesion location and test structure on separable cognitive processes. Neuropsychologia 2000;38:388–402.

Talairach J, Tournoux P. Co-planar stereotaxic atlas of the human brain. New York: Thieme; 1988.

Thompson-Schill SL, D'Esposito M, Aguirre GK, Farah MJ. Role of left inferior prefrontal cortex in retrieval of semantic knowledge: a reevaluation. 1997;94:14792–7.

Thompson-Schill SL, Swick D, Farah MJ, D'Esposito M, Kan IP, Knight RT. Verb generation in patients with focal frontal lesions: a neuropsychological test of neuroimaging findings. 1998;95:15855–60.

Tulving E, Kapur S, Craik FI, Moscovitch M, Houle S. Hemispheric encoding/retrieval asymmetry in episodic memory: positron emission tomography findings. 1994;91:2016–20.

Warrington EK, Davidoff J. Failure at object identification improves mirror image matching. Neuropsychologia 2000;38:1229–34.
## Author notes
1International School for Advanced Studies (SISSA–ISAS), Trieste, Italy, 2Department of Psychology, Università Milano–Bicocca, Milano, Italy, 3Department of Psychology, Università degli Studi di Pavia, Pavia, Italy and 4Azienda Ospedaliera S. Maria della Misericordia, Udine, Italy
# Homework Help: Proof about primes
1. Jul 17, 2013
### cragar
1. The problem statement, all variables and given/known data
Prove that every prime of the form 4k+1 is the hypotenuse of a right triangle with integer sides.
3. The attempt at a solution
I tried messing around with the Pythagorean theorem but not really sure where to go.
$x^2+y^2=(4k+1)^2$
it seems like there would exist x and y that make that true, but I haven't really said anything about the primes
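A quick empirical check (not a proof) can be run in Python: every prime p = 4k + 1 should admit integer legs x, y with x² + y² = p². The underlying reason is Fermat's two-square theorem: such primes can be written p = a² + b², and then p² = (a² − b²)² + (2ab)².

```python
def hypotenuse_legs(p):
    """Brute-force search for integer legs x <= y with x**2 + y**2 == p**2."""
    for x in range(1, p):
        y2 = p * p - x * x
        y = int(y2 ** 0.5)
        while y * y < y2:  # guard against floating-point rounding
            y += 1
        if y * y == y2 and y >= x:
            return (x, y)
    return None

# Every prime p = 4k + 1 below 100 has a solution,
# e.g. 5 -> (3, 4), 13 -> (5, 12), 17 -> (8, 15)
for p in [5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97]:
    print(p, hypotenuse_legs(p))
```

This only confirms the statement for small cases; the actual proof reduces to showing a prime ≡ 1 (mod 4) is a sum of two squares.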
# April Fools' Day Hoax: Fermat's Last Theorem
I read this answer to this question on MathOverflow, and I enjoyed reading the proof given in the linked paper, but... where is the mistake? I know nothing of the Mason-Stothers Theorem except its statement and truth, and the reasoning in the paper seems to be logically sound to me.
Intuitively, I feel that the result obtained (that there cannot exist complex polynomials with the given conditions) does not actually imply the statement of Fermat's Last Theorem (which deals with positive integers), though I cannot justify this. Could anyone explain the flaw in the proof?
It's a dumb trick; the author's just misstating the Mason-Stothers theorem, which includes the condition that the three polynomials are relatively prime. Here the polynomials are all multiples of $t$ so they aren't relatively prime.
# Force systems
Part of the Statics course offered by the Division of Applied Mechanics, School of Engineering and the Engineering and Technology Portal
## Lecture
Force is a central concept of dynamics, the branch of Newtonian mechanics (within the larger field of physics) that describes the causes of motion. Force systems are also the starting point of engineering analysis.
### Force Vectors
A force vector is a force defined in two or more dimensions, with one component vector per dimension; the component vectors sum to the force vector. Each component vector, in turn, equals its scalar magnitude multiplied by the unit vector of that dimension.
${\displaystyle {\vec {F}}={\vec {F}}_{x}+{\vec {F}}_{y}+{\vec {F}}_{z}=F_{x}{\hat {i}}+F_{y}{\hat {j}}+F_{z}{\hat {k}}}$
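For instance, a 2D force given by magnitude and angle can be decomposed into components and recomposed numerically (a sketch with made-up values):

```python
import math

# Hypothetical example: 50 N force at 30 degrees above the x-axis.
F, alpha = 50.0, math.radians(30)
Fx = F * math.cos(alpha)   # x component
Fy = F * math.sin(alpha)   # y component

# Recomposing the components recovers the original magnitude.
magnitude = math.hypot(Fx, Fy)
print(Fx, Fy, magnitude)  # ~43.301, ~25.0, 50.0
```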
### Moment
For a system wherein a rigid body experiences a force F at an orthogonal distance L from a fixed point, the moment M is the quantity (oddly enough of the same units as energy) defined by the magnitude of the force multiplied by the distance between the fixed point and the point where the force is applied. The direction of the moment is perpendicular to both the force vector and the lever arm, following the right hand rule.

${\displaystyle M\ =F*L}$
In the event that a force ${\displaystyle {\vec {F}}=F\angle \alpha ={\vec {F}}_{x}+{\vec {F}}_{y}}$ impacts the rigid body at an angle other than a right angle, the moment is determined by the component of the force vector ${\displaystyle {\vec {F}}}$ that is orthogonal to the length L.
The general case in three dimensions can be calculated with the cross product. Do note that the order of the distance vector ${\displaystyle {\vec {r}}}$ and the force ${\displaystyle {\vec {F}}}$ matters in cross products: reversing the order changes the sign.
${\displaystyle {\vec {M}}\ ={\vec {r}}\times {\vec {F}}}$
The components of a moment vector are the moments about the respective axes, following the right-hand rule.
Example:
M = Force * Length = 100 Newtons * 10 Meters = 1,000 Newton-meters (N-m)
Example: Force F is incident on the end of a rigid body of length L at an angle A degrees from the central axis of the body x (Hint: draw a free body diagram).
Then ${\displaystyle F_{y}=F\sin(A)}$ and ${\displaystyle M=F_{y}L}$
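The cross-product rule for moments can be checked numerically. Below is a minimal sketch in plain Python (not part of the original lecture), with the cross product written out by hand; it reuses the 100 N, 10 m worked example and the oblique-force case above:

```python
import math

def cross(r, f):
    """Cross product r x F for 3-component vectors (tuples)."""
    return (r[1] * f[2] - r[2] * f[1],
            r[2] * f[0] - r[0] * f[2],
            r[0] * f[1] - r[1] * f[0])

# Worked example from above: 100 N applied at right angles
# at the end of a 10 m arm lying along the x axis.
r = (10.0, 0.0, 0.0)    # meters
F = (0.0, 100.0, 0.0)   # newtons
M = cross(r, F)
print(M)  # (0.0, 0.0, 1000.0): 1000 N-m about the z axis

# Oblique case: only the orthogonal component F*sin(A) contributes.
A = math.radians(30)
F_oblique = (100 * math.cos(A), 100 * math.sin(A), 0.0)
Mz = cross(r, F_oblique)[2]  # approximately 10 * 100 * sin(30 deg) = 500 N-m
```

Note that `cross(F, r)` would flip every sign, which is exactly the order-dependence mentioned above.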
### Couple
A couple is a pair of equal and opposite force vectors, some distance apart, that act upon the same body and thereby cause a rotation. Imagine equal and opposite forces of magnitude ${\displaystyle F}$ incident at two points along a rigid body, at distances ${\displaystyle a}$ and ${\displaystyle a+b}$ from one end, where ${\displaystyle a+b=L}$, the total length. (Hint: draw a free body diagram.)
Then the moment about that end is ${\displaystyle M=F(a+b)-Fa=Fb}$, which depends only on the separation ${\displaystyle b}$ between the forces.
In 3D the same rule applies, using that ${\displaystyle {\vec {M}}={\vec {r}}_{1}\times {\vec {F}}_{1}+{\vec {r}}_{2}\times {\vec {F}}_{2}={\vec {r}}_{1}\times {\vec {F}}_{1}-{\vec {r}}_{2}\times {\vec {F}}_{1}=({\vec {r}}_{1}-{\vec {r}}_{2})\times {\vec {F}}_{1}}$ which means that the moment will be the same around any point in the system.
### Resultant
Any system of forces may be reduced to a system of components and a resulting moment.
That is to say, ${\displaystyle {\vec {R}}=\sum {\vec {F}}}$ and ${\displaystyle {\vec {M}}_{o}\ =\sum M}$ about the point ${\displaystyle O\ }$
${\displaystyle R_{x}=\sum F_{x}}$ and ${\displaystyle R_{y}=\sum F_{y}}$ and ${\displaystyle R_{z}=\sum F_{z}}$, and then ${\displaystyle R={\sqrt {R_{x}^{2}+R_{y}^{2}+R_{z}^{2}}}}$ with, in two dimensions, ${\displaystyle \theta =\arctan {\frac {R_{y}}{R_{x}}}}$
...then ${\displaystyle {\vec {R}}}$ is magnitude ${\displaystyle R\ }$ in the direction of ${\displaystyle \theta }$
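As a sketch (the force values here are made up, not from the course), the resultant recipe above translates directly into code; `atan2` is used rather than a bare arctangent so the direction lands in the correct quadrant:

```python
import math

# A 2D system of concurrent forces, as (Fx, Fy) pairs in newtons.
forces = [(30.0, 40.0), (-10.0, 20.0), (5.0, -15.0)]

# Sum components, then recover magnitude and direction.
Rx = sum(f[0] for f in forces)
Ry = sum(f[1] for f in forces)
R = math.hypot(Rx, Ry)                    # magnitude sqrt(Rx^2 + Ry^2)
theta = math.degrees(math.atan2(Ry, Rx))  # direction measured from the +x axis

print(Rx, Ry)  # 25.0 45.0
```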
# What is the constant term of the expansion of (x-(2/x^2))^9?
${\left(x-{\frac {2}{x^{2}}}\right)}^{9}=x^{9}-18x^{6}+144x^{3}-672+{\frac {2016}{x^{3}}}-{\frac {4032}{x^{6}}}+{\frac {5376}{x^{9}}}-{\frac {4608}{x^{12}}}+{\frac {2304}{x^{15}}}-{\frac {512}{x^{18}}}$
Hence the constant term is $- 672$
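The constant term can also be found without writing out the whole expansion: the general term of $(x-2/x^2)^9$ is $\binom{9}{k}x^{9-k}(-2/x^2)^k=\binom{9}{k}(-2)^k x^{9-3k}$, and the exponent $9-3k$ vanishes exactly at $k=3$. A quick check in Python:

```python
from math import comb

# General term of (x - 2/x^2)^9: C(9, k) * (-2)^k * x^(9 - 3k).
# The power of x is zero exactly when k = 3.
k = 3
constant_term = comb(9, k) * (-2) ** k
print(constant_term)  # -672
```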
#### Sentence Examples
• Obviously equimolecular surfaces are given by $(Mv)^{2/3}$, where M is the molecular weight of the substance, for equimolecular volumes are Mv, and corresponding surfaces the two-thirds power of this.
• Hence S may be replaced by $(Mv)^{2/3}$.
• Ramsay and Shields found from investigations of the temperature coefficient of the surface energy that T in the equation $\gamma (Mv)^{2/3}=KT$ must be counted downwards from the critical temperature T less about 6°.
• Their surface energy equation therefore assumes the form $\gamma (Mv)^{2/3}=K(T-6°)$.
• If n is the mean number of molecules which associate to form one molecule, then by the normal equation we have $\gamma (Mnv)^{2/3}=2.121(T-6°)$; if the calculated constant be $K_{1}$, then we have also $\gamma (Mv)^{2/3}=K_{1}(T-6°)$.
# Finding the unit vectors parallel to a tangent line
This is the solution to the problem. However, I don't understand how they got to the part that says "and the parallel vector is i+4j..". What does this mean, and how did they derive it?
## 1 Answer
For any real number $m$, the vector $(1,m)$ determines a line of slope $m$ through the origin: simply note that the line through $(0,0)$ and $(1,m)$ has rise $m$ and run $1$.
In this case, your $m=4$, so the vector $(1,4) = 1i + 4j = i+4j$ is a vector of the appropriate slope, hence parallel to the tangent.
So only if the x component is 1 and you have (1,m), the slope is m? What's so special about 1? – maq Feb 7 '11 at 1:55
@mohabitar: That it makes things easy! Or, that it is pretty much the easiest number to divide by. The vector $(x,y)$, with $x\neq 0$, determines a line of slope $\frac{y}{x}$. So you want to find any $x$ and $y$ with $\frac{y}{x}=m$. You can certainly find others, but isn't the simplest thing to take $x=1$? – Arturo Magidin Feb 7 '11 at 1:58
Ohh ok, didn't realize that. And so is there any such/similar rule for 3D vectors? – maq Feb 7 '11 at 2:00
It depends on how you "know" the line. Lines through the origin in 3D need more than a single number to be determined (while in 2D the slope does it). If you know two points on the line, then the vector determined by their difference gives you a direction. – Arturo Magidin Feb 7 '11 at 2:21
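To tie the discussion together, here is a small sketch (the function name is mine) that normalizes the direction vector $(1, m)$; for $m = 4$ this yields the unit vectors $\pm(1/\sqrt{17},\, 4/\sqrt{17})$ parallel to the tangent:

```python
import math

def unit_vectors_for_slope(m):
    """Both unit vectors parallel to a line of slope m,
    obtained by normalizing the direction vector (1, m)."""
    length = math.hypot(1.0, m)  # sqrt(1 + m^2)
    u = (1.0 / length, m / length)
    return u, (-u[0], -u[1])     # opposite orientations, both parallel

u, v = unit_vectors_for_slope(4)
# u has length 1 and rise/run = 4, i.e. slope 4.
assert abs(math.hypot(*u) - 1.0) < 1e-12
assert abs(u[1] / u[0] - 4.0) < 1e-12
```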
# Derivation of the Equation for Relativistic Mass
1. Jan 27, 2007
### NanakiXIII
Throughout pages on the internet I've seen the following relationship between rest mass and relativistic mass:
$$m = \frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}$$
However, I have been utterly unable to find any sort of derivation or explanation of this formula, other than such explanations as "nearing the speed of light, an object's mass nears infinity and thus.. etc.". Where did this equation come from? Does it somehow follow from the Lorentz transformations? Any helpful insights would be much appreciated.
2. Jan 27, 2007
### Roman
hello,
this equation comes from lorentz transformations. just look in any book about special relativity
3. Jan 27, 2007
### cesiumfrog
One possible starting point is $E=mc^2$, $E^2=(m_0c^2)^2+(pc)^2$, so I suppose you could get it from the symmetries of Minkowski space (and the equivalence of relativistic mass and energy). Alternatively, you might want to consider from first principles a constantly accelerated box of moving particles or photons, and show that the accelerating force on the box (hence the inertial mass of the system) depends on the internal momentum (relativistic mass) of the contents.
4. Jan 27, 2007
### cristo
Staff Emeritus
One way to see how the relativistic mass arises, is to consider the particle's four momentum P=m0U, where $$\bold{U}=\left(\frac{dx}{d\tau},\frac{dy}{d\tau},\frac{dz}{d\tau},\frac{dct}{d\tau}\right)$$, the particle's four velocity. So, $$\bold{P}=m_0\bold{U}=m_0\gamma(u)(\bold{u},c)=(m\bold{u},mc)=:(\bold{p},mc)$$ where u and p are the particle's three velocity and "relativistic momentum", respectively, and m is the relativistic mass defined as $m=m_0\gamma(u)$.
Last edited: Jan 27, 2007
5. Jan 27, 2007
### bernhard.rothenstein
mass in special relativity
IMHO you can consider the equation which relates proper mass and relativistic mass (horribile dictu) as an experimental fact obtained by Kaufmann, Bucherer and probably many others. American Journal of Physics offers its derivation via thought experiments, whereas others derive it considering an interaction between a photon and a tardion, taking into account conservation of momentum and energy. I am fond of such derivations, which are simple and involve subtle physical thinking. If you are interested in the papers I mention, please let me know.
The best things a physicist can offer to another one are information and constructive criticism in the spirit of IMHO.
6. Jan 28, 2007
### NanakiXIII
Roman: The only book about Special Relativity I have at my disposal at the moment is Relativity by Albert Einstein. It doesn't mention anything about the topic. If you know of any sources on the internet that contain the same information, I'd like to hear of them.
cesiumfrog: I'm actually interested in understanding this equation because it is at the basis of a derivation for E=mc^2 I found.
In general, my preference lies with a derivation starting from the Lorentz transformations, not in the least because my knowledge and understanding of physics and mathematics is limited. cristo's post, for example - and you may think me lazy for not doing my homework - is incomprehensible to me.
bernhard.rothenstein: If those papers are understandable for someone with little knowledge of the matter, in this case a mere high school student, then yes, I might be interested. If you could provide me with some more information on these papers, I'd much appreciate it.
So thank you all for your replies. If anyone knows, however, of a derivation of this equation using the Lorentz transformations and preferably no other assumptions that are specifically within the theory of Special Relativity, if such a derivation exists at all, that would be the greatest help.
7. Jan 28, 2007
### Gib Z
It would be hard to derive that with no assumptions from Special Relativity, since one of the axioms is that C is constant ...
8. Jan 28, 2007
### NanakiXIII
I didn't say no assumptions. I said preferably as few as possible besides the Lorentz transformations, which account for the constancy of the speed of light.
9. Jan 28, 2007
### Staff: Mentor
Some introductory textbooks analyze glancing collisions between objects, using time dilation and length contraction, to derive that momentum should equal $m_0 \gamma \vec v$ in order to preserve conservation of momentum. That is, they show that $m_0 \vec v$ is not conserved, but $m_0 \gamma \vec v$ is (where $\gamma = 1 / \sqrt {1 - v^2 / c^2}$) and use this as justification for redefining momentum accordingly.
From that, one can take an additional step and say that we can preserve the classical formula $p = mv$ by defining $m = \gamma m_0$.
10. Jan 28, 2007
### bernhard.rothenstein
relativistic dynamics
Please quote places where using glancing collisions, time dilation and length contraction leads to a derivation of the momentum. Thanks
11. Jan 28, 2007
### Staff: Mentor
Wouldn't such an analysis apply to any collision, not just glancing?
12. Jan 28, 2007
### bernhard.rothenstein
mass momentum
have a look at
American Journal of Physics -- September 1986 -- Volume 54, Issue 9, pp. 804-808
An alternate derivation of relativistic momentum
P. C. Peters
Department of Physics FM-15, University of Washington, Seattle, Washington 98195
(Received 5 August 1985; accepted 13 September 1985)
An alternate derivation of the expression for relativistic momentum is given which does not rely on the symmetric glancing collision first introduced by Lewis and Tolman in 1909 and used by most authors today. The collision in the alternate derivation involves a non-head-on elastic collision of one body with an identical one initially at rest, in which the two bodies after the collision move symmetrically with respect to the initial axis of the collision. Newtonian momentum is found not to be conserved in this collision and the expression for relativistic momentum emerges when momentum conservation is imposed. In addition, kinetic energy conservation can be verified in the collision. Alternatively, the collision can be used to derive the expression for relativistic kinetic energy without resorting to a work-energy calculation. Some consequences of a totally inelastic collision between these two bodies are also explored. ©1986 American Association of Physics Teachers
13. Jan 28, 2007
### robphy
If I recall correctly, the glancing collision is a simple case which facilitates a motivation of the relativistic 3-momentum. However, I am generally unhappy with that approach. For me, I prefer an approach that uses the velocity-composition formula to analyze a general collision.
14. Jan 28, 2007
### nakurusil
This is a preoccupation with a useless notion (there are many threads that explain why "relativistic mass" is a waste of time, you can check them out in this forum).
Having said that, the type of "elementary" derivation that you are looking for can be found in the many articles written by R.C.Tolman on the subject. For example, he uses collision thought experiments, see pages 43,44 in his book "Relativity, Thermodynamics and Cosmology". In this experiment he looks at a collision between two masses:
$$m_1u_1+m_2u_2=(m_1+m_2)V$$ (1)
from the point of view of two different frames S and S' in relative motion with speed V.
In frame S' the two masses are moving at speed +u' and -u' respectively.
Using the speed composition law, Tolman shows that :
$$u_1=(u'+V)/(1+u'V/c^2)$$ (2)
$$u_2=(-u'+V)/(1-u'V/c^2)$$ (3)
Substituting (2)(3) into (1) he gets:
$$m_1/m_2=(1+u'V/c^2)/(1-u'V/c^2)$$ (4)
Since he showed earlier that:
$$1+u'V/c^2=\sqrt{1-u'^2/c^2}\sqrt{1-V^2/c^2}/\sqrt{1-u_1^2/c^2}$$
(with the analogous identity for $$1-u'V/c^2$$ and $$u_2$$), substituting in (4) he gets:
$$m_1/m_2=\sqrt{1-u_2^2/c^2}/\sqrt{1-u_1^2/c^2}$$ (5)
If one takes $$u_2=0$$ then $$m_2=m_0$$ (the "rest mass"), $$u_1=u$$ and (5) becomes:
$$m_1=m(u)=m_0/\sqrt{1-u^2/c^2}$$ (6)
Very ugly and useless.
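Whatever one thinks of relativistic mass, Tolman's algebra is easy to verify numerically: transform $\pm u'$ into frame S with the composition law, attach mass $m_0\gamma(u)$ to each body, and conservation (1) holds exactly. A sketch with arbitrary test values (units with $c=1$):

```python
import math

c = 1.0        # units with c = 1

def gamma(u):
    return 1.0 / math.sqrt(1.0 - u * u / (c * c))

m0 = 1.0       # common rest mass
u_prime = 0.6  # speed of each body in S' (arbitrary)
V = 0.5        # velocity of S' relative to S (arbitrary)

# Velocity composition, eqs. (2) and (3):
u1 = (u_prime + V) / (1 + u_prime * V / c**2)
u2 = (-u_prime + V) / (1 - u_prime * V / c**2)

# Relativistic masses, eq. (6):
m1 = gamma(u1) * m0
m2 = gamma(u2) * m0

# Momentum conservation, eq. (1): the combined body is at rest in S',
# so it moves at V in S.
assert abs((m1 * u1 + m2 * u2) - (m1 + m2) * V) < 1e-9

# Mass ratio, eq. (5):
assert abs(m1 / m2 - math.sqrt(1 - u2**2) / math.sqrt(1 - u1**2)) < 1e-9
```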
15. Jan 28, 2007
### quantum123
He used conservation of momentum for deriving eqn 5?
16. Jan 28, 2007
### nakurusil
Yes, look at (1). Or check out the book if you prefer. Tolman wrote a series of papers on the subject.
17. Jan 28, 2007
### Staff: Mentor
Exactly. That's what I was thinking of.
18. Jan 28, 2007
### arcnets
I think it's more simple, but I can't do the nice formulae, sorry...
First, from the invariance of c for all observers, you get the equation
(ct)^2 - x^2 = invariant for all observers.
Next, you multiply with m^2 and divide by t^2:
(mc)^2 - (mx/t)^2 = invariant.
Now if one observer (0) is in rest frame, then x0/t0 = v0 = 0:
(m0 c)^2 = (mc)^2 - (mx/t)^2
Solve that for m and there you are.
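arcnets' invariant can at least be checked for algebraic consistency (which, as the replies note, is not the same thing as a derivation): with $m = m_0/\sqrt{1-v^2/c^2}$, the quantity $(mc)^2 - (mv)^2$ really does equal $(m_0c)^2$. A sketch with made-up numbers:

```python
import math

c = 1.0
m0 = 2.0  # arbitrary rest mass
v = 0.8   # arbitrary speed, |v| < c

# With m = m0 / sqrt(1 - v^2/c^2), the invariant
# (m*c)^2 - (m*v)^2 = (m0*c)^2 holds identically,
# since m^2 * (c^2 - v^2) = m0^2 * c^2.
m = m0 / math.sqrt(1 - v**2 / c**2)
assert abs((m * c)**2 - (m * v)**2 - (m0 * c)**2) < 1e-9

# Solving the invariant for m recovers the formula in the opening post:
m_solved = m0 * c / math.sqrt(c**2 - v**2)
assert abs(m_solved - m) < 1e-9
```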
19. Jan 28, 2007
### cesiumfrog
arcnets, i don't like the RHS of your second equation.. can you show why m^2 x invariant / t^2 should itself be invariant?
20. Jan 28, 2007
### nakurusil
This is not an acceptable derivation, it is a "sleight of hand". Can you detect the "trick"?
## Thinking about Francis Su’s “Mathematics for Human Flourishing”
Francis Su:
What I hope to convince you of today is that the practice of mathematics cultivates virtues that help people flourish. These virtues serve you well no matter what profession you choose. And the movement towards virtue happens through basic human desires.
I want to talk about five desires we all have. The first of these is play.
Yeah, so the first of these fundamental human desires is play, and the whole list looks like this:
1. Play
2. Beauty
3. Truth
4. Justice
5. Love
The way Su sees things, if you use math to chase these desires then you end up cultivating virtues. Here are some of those virtues:
• Hopefulness
• Perseverance
• Joy
• Transcendence
• Rigorous Thinking
• Humility
• Circumspection
(Justice and Love are desires that don’t end in virtues for Su, it seems, as he doesn’t name any virtues in those sections of his talk. Instead, pursuing love is supposed to enable your other pursuits because it results in seeking meaningful human connections: “Love is the greatest human desire. And to love and be loved is a supreme mark of human flourishing. For it serves the other desires—play, truth, beauty, and justice—and it is served by them.”)
(So, then what’s the role of Justice? Maybe for Su it’s sort of like a necessary consequence of valuing all these other things. To truly desire all these other things is to desire them for everyone, and that necessarily is the pursuit of justice? I’m just making stuff up here, I’m not sure what Su thinks.)
Anyway, basically none of Su’s talk resonates with me. I don’t mean that I think he’s wrong.* I mean it doesn’t resonate — it doesn’t hit my frequency, make me hum.
* Well, I guess I don’t have any confidence that the study of math itself can impart any of these virtues. I don’t know if he’s claiming that they will, though I think he is.
OK, fine, but then what do I think?
One of my favorite games is “long response to short thing OR short response to long thing.” Let’s play “short response to long thing”:
Michael Pershan’s Version of Math for Human Flourishing:
Five Fundamental Desires:
1. Understanding
2. Belonging
3. Growing
4. Teaching
5. I’m not sure but I came up with four
Virtues that one MIGHT develop in math, but I make no commitments about how frequently or reliably even an idealized version of math education could promote these in our students, since virtues are BIG things and math is just ONE thing:
• Humility
• uhhh this is also hard
• I’m totally stuck
Maybe not “virtues,” but ways in which I think math makes my life richer:
• Math helps me know that, to really understand something or someone, I need to give it my full attention.
• It gives me an arena in which to grow.
• It gives me questions to share with others.
• Is there a name for that thing where you’re walking down the street and you can’t not see parallel lines or tessellations or similar triangles? The math curse? I love that.
• I love theory. I love that thing where you take a perfectly conventional idea and flesh it out, completely. I recently read that topology came, in part, in response to Cantor’s proof that the line and the plane were equinumerous. But clearly there is a difference between the two — how can we capture that? “How can we capture that?” is one of my favorite questions to ask.
• I love that math gives me things to help other people think about.
## Responding to Criticism from @blaw0013
I wasn’t sure whether to respond to this or not. I want to be the sort of person that gives people stuff to think about, and (just like in the classroom) there’s a point where you have to step back and give people a chance to speak.
But: “deny joy of and access to maths for many”? It’s an interesting criticism, one that I have a lot of thoughts about.
I don’t see micro-skills as denying joy and access to students. And I think it’s partly about seeing joy in maths as something that happens in the abstract versus something that happens in the context of school.
If you think “abstractly” about what joy in math involves, your mind would probably start thinking about the sort of math that is joyous and exciting, the very coolest stuff that math has to offer. You would think of noticing surprising patterns, of unusual theorems, the endorphin release of cracking a puzzle.
Francis Su is the current leading expositor of this side of math, the beautiful, joyous, elegant side:
Pursuing mathematics in this way cultivates the virtues of transcendence and joy. By joy, I refer to the wonder or awe or delight in the beauty of the created order. By transcendence, I mean the ability to embrace mystery of it all. There’s a transcendent joy in experiencing the beauty of mathematics.
If you think abstractly, and ignore the context that students of math actually encounter math in, then you’d look at something like “micro-skills” as just the opposite of all this. And yet I think if you look at the reality of students’ lives (instead of a radical proposal for what students’ lives should be) then I think you can see where joy comes into the picture.
Yesterday I gave students a no-grades quiz in algebra. A student who, I had been told at the start of the year, frequently struggles in math, has been having a lot of success lately. She knew exactly how to handle both of the systems of equations that were on this short quiz, but she got stumped at one of the resulting equations:
$-1.7x = 4.3x + 3.6$
I didn’t know what to say when she got stuck, exactly, but I was fairly confident that this was an example of a micro-skill that she was missing.
She and I agreed that she’d like me to write a little example on the side of her page, so I wrote this:
$-2x = 5x + 7$
[I drew some arrows going down from each side labeled “+2x.”]
$0 = 7x + 7$
My student read the example and then exclaimed (in a way I can only describe as “joyous”), Oh wait, you can make 0 there?!
You can! It’s very cool, and to the mind of a child learning algebra it’s surprising, elegant, beautiful, joyous. This is what I’m talking about — not treating the moments when kids get stuck as “forgettings” or “bugs” in some universal algorithm, and instead thinking of them as opportunities for students to prove mini-theorems, try mini-strategies, learn mini-skills.
And to treat these as moments lacking joy is also to ignore the major impediment to joy in a classroom: feelings of incompetence, worries about status, anxieties about math.
I’m no psychic and my students’ story isn’t mine to tell, but she showed all appearances of being happy and relieved when she understood how to go about solving this problem. How could she experience this, given that she was dealing with the drudgery of a micro-skill? Well, part of it is that (it’s easy to forget) things that are drudgery to teachers are often rich, problematic (in a good way) terrain for students.
But part of it is that these are children in school, surrounded by other children in school. Joy can’t be separated from that social context. Students can’t experience joy if they don’t feel competent, and conversely there is joy in competence. I see this every day.
If, like me, you care both about helping kids experience joy in math and joy from competence in math (hard to separate) then you need to find opportunities in your teaching to do both. The above is how I’m currently thinking, and I’d be interested to read Brian’s take on all this — maybe he and I can find a way to write up a case that illustrates the different choices we’d make in a situation like this. I love the idea of collaborations to resolve differences.
## Where did working memory come from?
This is a quick note, before I forget the last couple things that I read.
Where did working memory come from? Here’s my picture so far:
1. There’s a limit on how much random stuff you can remember at once. You don’t need science to know this; you just need random stuff that you have to remember. I assume this has been known forever.
2. People had different names for this. William James called it primary vs. secondary memory. Others were calling it long-term vs. short-term store. What was controversial was whether these constituted just two facets of the same memory system, or whether they were two totally independent memory systems.
3. Evidence for independence comes largely from patients with brain damage. These patients either are amnesic, but do perfectly fine with short-term memories, or else they have greatly impaired short-term memories but otherwise function and learn OK. This suggests independence.
4. Question: does short-term memory constitute a working memory for functioning and long-term memory? In other words, is short-term memory necessary for learning, reasoning and comprehension?
5. Baddeley and Hitch do a battery of experiments to show that impairing short-term memory with verbal info does (modestly) impair learning, reasoning and comprehension. This is evidence that short-term memory does constitute a working memory system.
6. But the thing is that performance was only modestly impaired by their experiments, so there must be more to the cognitive system than what their experiments uncovered. (Their experiments almost entirely used span tasks that ask people to remember a bunch of stuff, random numbers, letters, etc.)
7. They then go beyond their experiments to make a guess about what the structure of working memory might be. They propose that there is a capacity to just passively take in verbal info, up to a point. Beyond that point, a “central executive” has to take active steps to hold on to info. Thus working memory limitations come both because the passive store has a limited capacity and also because the central executive can only do so much at once. The span experiments fill up the passive store, and force the active executive to do something. This allows task performance to go OK, but at a cost in performance. They also guess that there is a visuo-spatial store that is entirely parallel to the phonological store.
8. This takes us up to, like, 1974 or something.
## Here’s what I think I do differently
I just read a really interesting post called ‘Applying Variation Theory.’ It’s by a teacher from the UK who I don’t yet know, Naveen Rizvi. The core mathematics is familiar territory to teachers of algebra. To factor an expression like $x^2 + 10x + 16$, you can ask yourself “what pair of numbers sum to 10 and multiply to 16?”
Therefore: $x^2 + 10x + 16 = (x + 8)(x + 2)$
In her post, Naveen talks about intentionally designing problems of this sort to draw out this underlying structure — that the factors of “c” have to sum to “b.” Factoring then becomes a quest to search the factors of “c” for things that sum to “b” (or vice versa).
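That search is mechanical enough to sketch in a few lines of Python (the function name is mine, not Naveen's); it just walks the integer factor pairs of c looking for one that sums to b:

```python
def factor_pair(b, c):
    """Find integers (p, q) with p * q == c and p + q == b,
    so that x^2 + b*x + c factors as (x + p)(x + q)."""
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                return (p, q)
    return None  # no integer factorization exists

# x^2 + 10x + 16 = (x + 2)(x + 8)
print(factor_pair(10, 16))   # (2, 8)
# x^2 - 14x + 13 = (x - 1)(x - 13)
print(factor_pair(-14, 13))  # (-13, -1)
```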
Her main point is a good one, which is that if you keep everything else the same, and then vary just one thing, that thing will draw a lot of attention. Here is how she uses these ideas to design a practice set of quadratics, one that isn't so unlike Dan's:
My experience of teaching this topic is that, even knowing the relationship between c and b, these problems can be very difficult for students.
Why? What makes these factoring problems hard? Part of the reason has to do with fact automaticity, to be sure. While there are many topics in algebra that a student can handle without automaticity, this is definitely one of the times when things get much hairier for kids if they don’t know e.g. quick ways to find all the ways of multiplying to 60.
But take a closer look at some of these problems, and you can see that there is more going on than just knowing the relationship between b and c and knowing your facts. Consider one of these problems from Naveen’s page, which I just chose randomly:
$a^2 + 29a + 28$
This is a problem that I can imagine my students having some trouble with, at first. Not because the facts are difficult or because they don’t know the relationship…it’s just that the solution (a + 1)(a + 28) might not occur to kids. One thing I’ve noticed is that a lot of kids don’t think of 1 x N for a while when they’re searching for ways to make N. This makes sense — it’s so computationally straightforward, they don’t spend a lot of time thinking about multiplying by 1. It can slip under the radar.
Now, might a kid working on Naveen's problem set become familiar with this tiny nugget of structure, that if the “b” term is one off from the “c” term, you should try sticking “1” into one of your binomials?
A student might make this generalization from the examples in the practice set, but this would essentially amount to learning by discovery. Which absolutely happens sometimes, but especially since this activity isn’t structured around giving kids instances that would cue-up that generalization…it probably won’t happen for most kids.
Now, the question is whether this sort of “micro-strategy” is a good use of classroom time. Maybe it’s too narrow a class of problems to be worth making it an instructional focus, I don’t know. Maybe you just go for the main strategy, and hope that kids are able to apply what they know to this little side-case.
But then again, maybe you give a quiz and kids end up mostly baffled by this problem. Teaching is full of surprises — this could happen.
That’s when I say, OK, let’s design a quick activity that would focus entirely on this micro-skill. Maybe a mini-worked example, or maybe a string of mental math factoring problems ala Naveen or Dan’s that puts that entire “variation theory” focus on this one, specific corner of the mathematical landscape — just the one that the students in your class need.
$a^2 + 8a + 7$
$x^2 + 14x + 13$
$j^2 + 176j + 175$
$q^2 - 14q + 13$
$s^2 - 11s - 12$
And then I call this “feedback” and don’t spend a lot of time writing up stuff in the margins of their quizzes.
There’s something nice about these little micro-skills. For one, it’s an alternate way of thinking about what’s leftover after you’ve taught the “main” skill. (Meaning, it’s not just that kids are forgetting what you’ve taught and need to be reminded — it’s that there are little corners of the mathematical world that haven’t yet been uncovered for kids.)
One thing I’ve been struggling with has been trying to figure out what exactly my pedagogy involves that’s distinctive. It’s not about the activities I tend to choose or design — since I’m pretty boring in that regard. I’m not an amazing motivator of people. Kids like me, I think, but not in the “oh my god he was my best teacher” way. More like “he’s nice.” (“Being nice” is an important part of my pedagogy.)
Though I still don’t have a snappy way to put it, I think that this is part of my story:
• I’m really curious about how kids think
• So I try to use that to come up with a more systematic understanding of how they think about different types of problems, especially when it’s something that people typically think of as constituting a single “type” of problems (e.g. factoring quadratics, solving equations, adding fractions)
• While teaching a topic I try to figure out which types of problems the kids understand how to handle, and which they don’t yet
• Then I focus in on a micro-skill for handling one of those little types of problem and I teach it with a short little activity followed-up by practice, in place of feedback
I don’t think any of this is exciting or inspiring, and I don’t really think I can make it so. There really is something here, though.
I think the exciting action comes in the second of those bullet points, in describing the mathematical landscape in a way that’s pedagogically useful. One of my favorite things in math education is Carpenter et al’s Cognitively Guided Instruction. I’ve moved away from some of the pedagogy it has grown into, but the problem breakdown is my paragon of pedagogically useful knowledge. It’s what I always come back to.
I sometimes wish there was more to what I do, and I also wish I didn’t have such a hard time figuring out how to describe what it is that I’m into. I’ll be doing a thing with teachers this spring that will give me another shot at refining my little message. I sometimes get jealous when I see all the cool things that people make and share online. I’ve never been cool, and none of what I’ve written here is cool either, but it’s what I’ve got.
(Last year’s version of this post: here.)
## I’m realizing I don’t really understand what working memory is
I’m trying to find some solid ground. Here are the questions I’m trying to get straight on:
• Suppose someone didn’t believe in the existence of a separate short-term memory system, just as (apparently) people in the ’50s and ’60s were skeptical. How would you convince a skeptic?
• What is the working memory system, anyway?
• Say that you were a behaviorist, someone uncomfortable with talk of the cognitive. How would you make sense of the observational findings?
• More concretely, what were the problems that Baddeley and Hitch were trying to solve when they introduced their working memory model?
I’m looking for foundations. Prompted by this, I went back to this, which brought me here, to Warrington and Shallice’s 1969 case reporting on a patient with severely impaired short-term memory.
The case is fascinating:
“K. F., a man aged 28, had a left parieto-occipital (“head” – MP) fracture in a motor-bicycle accident eleven years before, when a left parietal subdural (“brain” – MP) haematoma was evacuated. He was unconscious for ten weeks.”
He had lasting brain damage, especially when it came to language:
“His relatively poor language functions were reflected in his verbal I.Q. His ability to express himself was halting, and some word-finding difficulty and circumlocutions were noted.”
His short-term verbal memory in particular was damaged:
“The most striking feature of his performance was his almost total inability to repeat verbal stimuli. His digit span was two, and on repeated attempts at repeating two digits his performance would deteriorate, so that on some trials his digit span was one, or even none. His repetition difficulty was not restricted to digits; he had a similar difficulty in repeating letters, disconnected words and sentences. Single verbal items would be repeated correctly with the exception of polysyllabic words which were on occasion mispronounced.”
The thing that made him especially interesting was that, for a guy with significant short-term memory damage, there were a lot of things that he could do:
“Memory for day-to-day happenings was good and he had an adequate knowledge of recent and past events. Immediate memory for the Binet figures was accurate.”
Here is the real surprise, for people in the 1960s: his long-term learning was, actually, not bad. Consider the ten-word learning task, at which he performed admirably:
“A list of 10 high-frequency words was presented auditorily at the standard rate. Subjects were required to recall as many words as possible from the list immediately after presentation. This procedure was repeated until all the 10 words were recalled (not necessarily in the correct order). K. F. needed 7 trials. Twenty normal (“didn’t fall off a motorcycle” – MP) controls took an average of 9 trials, 4 of the subjects failing on the task after 20 trials. After an interval of two months he was able to recall 7 of these 10 words without relearning.”
On two other long-term memory tests, K. F. seemed to be performing normally as well.
And, what’s the significance of all this?
This was written when the existence of a short-term memory system was not universally accepted. (Is it universally accepted now? It feels like it but I don’t actually know.) And it’s useful to me for identifying what the core, foundational findings are that we have to grapple with in memory. There really aren’t very many, it seems.
At the core of things is the “digit span” task. This is the finding that there is some sort of limit on how many random things we can remember. This itself was the core finding that was supposed to support short-term memory. (“All subjects have a limited capacity to recall a series of digits or letters, and this limitation is regarded as a characteristic of the ‘short-term’ memory store.”)
The strongest evidence that this digit-span task was measuring a totally different system of memory was the evidence of amnesiacs, whose long-term memory is severely impaired:
“The question as to whether the organization of memory is a unitary process or a two-stage process has received much attention in recent years. The strongest evidence that there are separate short- and long-term memory systems is provided by the specific and isolated impairment of long-term memory in amnesic subjects.”
If you can have short-term memory but no long-term memories, and you can measure this with all sorts of repetition and digit span tasks, then there needs to be some distinction between two memory systems. Right?
Here, though, they found the opposite. A patient could have pretty normal long-term memory performance even though their short-term memory system was severely impaired.
In a different paper (1970) they lay out the implications of this for then-current controversies about short- and long-term memory:
“Most important, the results present difficulties for those theories in which STM and LTM are thought to use the same physical structures in different ways. (Because, I suppose, they’ve shown STM and LTM to be doubly independent of each other.-MP) They also indicate that the frequently used flow-diagrams in which information must enter STM before reaching LTM may be inappropriate. On this model, if the STM system were greatly impaired, one would expect impairment on LTM tasks, since the input to the LTM store would be reduced.”
What do they suggest?
“In light of these findings, it is suggested that a model in which the inputs to STM and LTM are in parallel rather than in series should be considered.”
One way of thinking of this could be that their patient, K. F., had damaged his ability to encode information about verbal sounds, but not his ability to encode the meanings behind those words, and that long-term memory is a system for storing meanings while short-term memory is just a system for storing sounds.
This is all fifty years ago, of course. But I think it’s helpful for me to understand what it took to get from a world where this seemed as plausible as the alternatives, to a world where scientific communities seem to universally accept that alternative.
That there’s a distinction between STM and LTM is beyond question. This is something that has been confirmed a million times over. Amnesiacs and people like K. F. speak to the distinctness of short- and long-term memory systems. We also experience this a million times daily.
What is up for debate in the early 1970s is the relationship between these two systems. Is it one big system (“unitary”), with STM feeding directly into LTM? This study challenged that, suggesting that they were two fully independent memory systems.
Current models of memory suggest that they are connected, though in a more complex way than was understood before Baddeley and Hitch came along. (At least, I think that’s what’s going on…)
## Baddeley, A. (1983). Working Memory. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 302(1110), 311-324.
I.
This paper seems to be one of, like, a dozen mostly equivalent review pieces on working memory that Baddeley has written over the years. This one strikes a nice balance between concision and context, so it’s the one I chose to read with care.
Baddeley’s contribution is the idea that working memory consists of multiple independent components. When he was writing this paper, there were only three parts that he’d identified in short-term, working memory: a place for remembering sounds, a place for remembering space and visuals, and a “central executive” who is in charge of the whole system. This is in contrast to earlier models that didn’t feel the need to distinguish between different components of working memory.
There are two lenses through which I’m reading this. First, how did we get here? Second, what metaphors do we use to describe memory?
Here is a handy table from a psych textbook, by the way:
II.
Baddeley positions his work as a reaction to the “modal model” of memory, popular in the 1960s, one that is represented by (I’ve learned) Atkinson and Shiffrin.
Atkinson and Shiffrin were people; this is all I know about them.
The modal model, according to Baddeley, looked like this:
• Short-term memory is one unit — no components. This is the working memory, the memory whose main function is to facilitate learning, reasoning, decision-making, etc.
• Long-term memory is where you keep memories for the long-term
• Here’s how long-term memories are formed: stuff automatically goes from short-term memory to long-term memory, but the process takes some time. (That’s why you don’t learn everything.) If you want to learn something, you have to make sure it’s rattling around short-term memory for long enough.
And what did things look like in the 1960s, when the modal model was prominent? I don’t have a clear picture of this. Wikipedia sends me to a piece from 1963 that talks of an explosion of results relating to short-term memory in the preceding years:
In 1958, and increasingly thereafter, the principal journals of human learning and performance have been flooded with experimental investigations of human short-term memory. This work has been characterized by strong theoretical interest, and sometimes strong statements, about the nature of memory, the characteristics of the memory trace, and the relations between short-term memory and the memory that results from multiple repetitions. The contrast with the preceding thirty years is striking. During those years most research on short-term memory was concerned with the memory span as a capacity variable, and no more…I venture to say that Broadbent’s Perception and Communication (1958), with its emphasis on short-term memory as a major factor in human information-processing performance, played a key role in this development.
So the picture you get is that there was controversy about the distinction between short-term and long-term memory, springing from a great deal of results in the early ’60s.
One of the major impetuses? The famous psychological subject H.M. (Read about the controversies surrounding him in a fascinating New York Times magazine piece from a few years ago.) H.M. makes an appearance in the Baddeley paper — his apparent ability to form immediate memories (despite his profound inability to form long-term ones) helped make the case for a distinction between short- and long-term memory systems.
And what was the state of things before the early ’60s? Baddeley credits Francis Galton with an early version (1883) of the notion that there are two separate memory systems, but I can’t figure out where exactly he says this or how he puts it. Wikipedia points us to William James, who distinguishes between primary and secondary memory in the “Memory” chapter of The Principles of Psychology. I’ve only skimmed it, but I think he thinks of primary memory as a lot like the after-image of some visual perception. It just lingers for a second — real memory is secondary memory.
And what about before James and Galton? I’d have to figure that something as basic as the distinction between short- and long-term memory is not an insight unique to psychologists. I know a few other philosophers who make distinctions that seem relevant, but I’m not sure how to trace the lineage of short- and long-term memory before the late 1800s.
As it is, it seems that the early impetus is just the recognition that some stuff we can remember for a little bit, and other stuff we remember for a long time. Maybe this is as far as we get without more careful measurements.
III.
Back to Baddeley, who makes the case that the modal model of the early 1960s is wrong. There are multiple components to short-term, working memory.
What was wrong with the modal model?
• Even when short-term memory is impaired, long-term memories can be formed just fine. So how could it be that the path to long-term memory passes through a unified short-term store? There has to be a pathway besides the damaged one that memories could pass through on their way to the land of long-term.
• This is hilarious, but the BBC tried to use the modal model in their advertising to let people know about newer wavelength bands they’d be switching broadcasts to. They figured the more frequently a phrase is in short-term memory, the more likely it is for a long-term memory to be formed, so they just slipped the phrase into radio broadcasts here and there and…nobody remembered. So much for “saturation advertising.” So how are long-term memories formed, if not by sheer time spent in short-term memory?
• The third thing — that people remember recent stuff better, even in long-term memory — is confusing to me. I don’t get how it’s relevant to this yet.
So what does he suggest, to fix things?
First, that there is a central executive, separate from the memory stores, that is in charge of making decisions during activities. I think this is supposed to explain how people who have short-term memory impairment can still function or form long-term memories. The assumption is that as long as the central executive (who decides what the brain should do) decides to actively reinforce the stuff that makes it into short-term memory, long-term memory can happen. And the central executive can also be in charge of doing stuff accurately, even if the stores of memory traces are depleted.
The central executive is a bizarre notion. Metaphorically, it’s a little dude in your head that decides where to put attention, or when to actively reinforce the memory traces in the other stores of working memory — thus he’s also responsible for learning and reasoning. He is, to put it bluntly, your soul, an unanalyzable source of free will. It’s weird.
Second, Baddeley proposes two different passive stores of memories — the phonological and the visual/spatial. Each comes with an active element, something that can reinforce the memory traces.
The metaphors are fascinating here. The phonological loop is, metaphorically, a piece of audio tape that loops around your brain, over and over. When a sound goes into your mind it lands on the memory trace, and then an active recording element has to rewrite the sound on your mental tape for it to be sustained over time. Otherwise, it gets overwritten (or it fades?) as time goes on.
Baddeley’s model for visual/spatial stuff could have used an audio tape metaphor, as far as I can tell, but chose something that felt more appropriate for visual information — a scratchpad. So there’s an entirely parallel system that is posited for visual info, but with a totally different set of metaphors that is appropriate for visual stuff.
So there’s a scratchpad, and when visual or spatial stuff comes into your head it is inscribed onto the pad, very lightly. It’s only sustained if the active element reinscribes it on the pad.
IV.
Let’s end here, because I’m still massively confused as to how any of the results that Baddeley says are issues with the modal model are satisfied by creating two independent stores of working memory. I’ll need to read something else to make any progress on this, I think.
Update: I’m going to read this next.
## “I was sitting there and my skin was burning and I said, this might make a great TV show one day.”
I watched the new Chris Rock special on Netflix. Watching all these classic comics with new specials feels a lot like that Modern Seinfeld feed — Chris Rock on Trump! — but it had some very funny moments. The personal stuff (about his divorce and failings) was interesting too, but I sort of felt it would have been more sincere to just do jokes about it.
Anyway, one thing led to another after I finished the special, and I present to you the clip above.
I also present to you the very, very funny and awkward clip below:
## “That structure was just guess work, but we seem to have guessed well.”
This interview with Alan Baddeley, one of the researchers responsible for the contemporary notion of “working memory,” is fantastic. I’m not up to speed on all the psych concepts yet, but there are a lot of juicy bits.
First, you can’t get into research like this any longer:
When I graduated I went to the States for a year hoping, when I returned, to do research on partial reinforcement in rats. But when I came back the whole behaviourist enterprise was largely in ruins. The big controversy between Hull and Tolman had apparently been abandoned as a draw and everybody moved on to do something else. On return, I didn’t have a PhD place, and the only job I could get was as a hospital porter and later as a secondary modern school teacher – with no training whatsoever! Then a job cropped up at the Medical Research Council Applied Psychology Unit in Cambridge. They had a project funded by the Post Office on the design of postal codes and so I started doing research on memory.
On his “guess work”:
I think what we did was to move away from the idea of a limited short-term memory that was largely verbal to something that was much broader, and that was essentially concerned with helping us cope with cognitive problems. So we moved from a simple verbal store to a three component store that was run by an attentional executive and that was assisted by a visual spatial storage system and a verbal storage system. That structure was just guess work, but we seem to have guessed well because the three components are still in there 30 odd years later – although now with a fourth component, the ‘episodic buffer’.
What if he had guessed something different, and if that different guess had held up decently? Psychology is at a local maximum, but it seems to me that there’s little reason to think that our current conception of the mind is anywhere near a global maximum, the most natural and useful way of conceiving of things.
On that:
The basic model is not too hard to understand, but potentially it’s expandable. I think that’s why it’s survived.
Here is the context for their description of working memory, as distinctive from short-term memory:
I suppose the model came reasonably quickly. Graham and I got a three-year grant to look at the relationship between long- and short-term memory just at a time when people were abandoning the study of short-term memory because the concept was running into problems. One of the problems was that patients who seemed to have a very impaired short-term memory, with a digit span of only one or two, nevertheless could have preserved long-term memory. The problem is that short-term memory was assumed to be a crucial stage in feeding long-term memory, so such patients ought to have been amnesic as well. They were not. Similarly, if short-term memory acted as a working memory, the patients ought to be virtually demented because of problems with complex cognition. In fact they were fine. One of them worked as a secretary, another a taxi driver and one of them ran a shop. They had very specific deficits that were inconsistent with the old idea that short-term memory simply feeds long-term memory. So what we decided to do was to split short-term memory into various components, proposing a verbal component, a visual spatial one, and clearly it needed some sort of attentional controller. We reckoned these three were the minimum needed.
In other words, short-term memory was assumed to be unitary. Baddeley and Hitch figured that it could have three independent components. Since short-term memory was running into trouble, it sounds like they kind of rebranded it as working memory.
Are there other differences between short-term memory and working memory besides its multi-component nature? I think so, but I’m currently fuzzy on this. Another thing that I’ve read suggests that short-term memory was seen as just on the pathway to long-term memory, so it was essentially part of a memory-forming pathway. Working memory is supposed to play a broader range of roles…I’m confused on this, honestly.
Finally, here’s a solid interaction:
Your model with Graham Hitch has a central executive controlling ‘slave’ systems. People sometimes have a problem with the term ‘slave’?
This is presumably because people don’t like the idea of slavery.
TITLE
# Nuclear caloric curve. A systematic study
AUTHOR(S)
Immé, G.; Raciti, G.; Riccobene, G.; Romano, F. P.; Sajia, A.; Sfienti, C.; Verde, G.; Giudice, N.
PUB. DATE
April 2000
SOURCE
AIP Conference Proceedings;2000, Vol. 513 Issue 1, p290
DOC. TYPE
Article
ABSTRACT
Investigations performed by the ALADiN Collaboration about signals for a liquid-gas phase transition in nuclear matter are reported. Temperature-excitation energy correlation measurements on several systems at different incident energies are compared. Moreover the comparison between isotope and excited states temperatures, extracted from double ratios of isotope yields and population ratios of fragment unbound states, respectively, shows a discrepancy between the two methods that cannot be accounted for by the sequential feeding corrections. Instead, they seem to be related to the space-time evolution of the fragmentation process. © 2000 American Institute of Physics.
ACCESSION #
6028428
## Related Articles
• Excitation functions of 6,7Li+7Li reactions at low energies. Prepolec, L.; Soic, N.; Blagus, S.; Miljanic, Đ.; Siketic, Z.; Skukan, N.; Uroic, M.; Milin, M. // AIP Conference Proceedings;8/26/2009, Vol. 1165 Issue 1, p349
Differential cross sections of 6,7Li+7Li nuclear reactions have been measured at forward angles (10°and 20°), using particle identification detector telescopes, over the energy range 2.75–10.00 MeV. Excitation functions have been obtained for low-lying residual-nucleus states. The...
• Symmetric/asymmetric p- and n-induced fission of Th, Pa, U and Np. Maslov, Vladimir M. // AIP Conference Proceedings;10/15/2009, Vol. 1175 Issue 1, p87
The excitation energy and nucleon composition dependence of the transition from asymmetric to symmetric scission of fission observables of Th(Pa) and U(Np) nuclei is interpreted for nucleon-induced fission cross sections of 232Th(p,F)(232Th(n,F)) and 238U(p,F) (238U(n,F)) reactions at En(p) =...
• Excitation functions of proton induced nuclear reactions on natural tellurium up to 18 MeV for validation of isotopic cross sections. Király, B.; Tárkányi, F.; Takács, S.; Kovács, Z. // Journal of Radioanalytical & Nuclear Chemistry;Nov2006, Vol. 270 Issue 2, p369
Excitation functions of proton induced nuclear reactions on natural Te were investigated up to 18 MeV. Cross sections for production of 121,123,124,126,128,130gI and 121gTe were measured. The new experimental data were compared with the results of ALICE-IPPE model calculations and with data...
• Effective pairing interaction for the Argonne nucleon-nucleon potential v18. Pankratov, S. S.; Baldo, M.; Lombardo, U.; Saperstein, E. E.; Zverev, M. V. // Physics of Atomic Nuclei;Apr2007, Vol. 70 Issue 4, p658
The effective pairing interaction corresponding to the Argonne nucleon-nucleon potential v18 is found within the local potential approximation for several values of the boundary energy E 0 that specifies the model subspace S 0. A detailed comparison with the analogous effective interaction...
• The reaction Ca + Cm → 116 studied at the GSI-SHIP. Hofmann, S.; Heinz, S.; Mann, R.; Maurer, J.; Khuyagbaatar, J.; Ackermann, D.; Antalic, S.; Barth, W.; Block, M.; Burkhard, H.; Comas, V.; Dahl, L.; Eberhardt, K.; Gostic, J.; Henderson, R.; Heredia, J.; Heßberger, F.; Kenneally, J.; Kindler, B.; Kojouharov, I. // European Physical Journal A -- Hadrons & Nuclei;May2012, Vol. 48 Issue 5, p1
The synthesis of element 116 in fusion-evaporation reactions of a Ca beam with radioactive Cm targets was studied at the velocity filter SHIP of GSI in Darmstadt. At excitation energies of the compound nuclei of 40.9MeV, four decay chains were measured, which were assigned to the isotope 116,...
• High-resolution study of 0+ and 2+ excitations in 168Er with the ( p, t) reaction. Bucurescu, D.; Casten, R.; Graw, G.; Jolie, J.; Braun, N.; Brentano, P.; Faestermann, T.; Heinze, S.; Hertenberger, R.; Iudice, N.; Krücken, R.; Mahgoub, M.; Meyer, D.; Möller, O.; Mücher, D.; Scholl, C.; Shirikova, N.; Sun, Y.; Sushkov, A.; Wirth, H. // Physics of Atomic Nuclei;Aug2007, Vol. 70 Issue 8, p1336
Excited states in the deformed nucleus 168Er have been studied with high energy resolution in the ( p, t) reaction, with the Munich Q3D spectrograph. A large number of excited 0+ states (25) and 2+ states (64) have been assigned up to 4.0-MeV excitation energy. This allows detailed...
• The low-energy limit of validity of the intranuclear cascade model. Cugnon, J.; Henrotte, P. // European Physical Journal A -- Hadrons & Nuclei;Mar2003, Vol. 16 Issue 3, p393
The intranuclear cascade model is generally considered to be valid when the incident particle has a sufficiently small de Broglie wavelength to interact with individual nucleons. On this basis, a lower limit of 200 MeV is usually quoted for the incident energy in nucleon-induced reactions. Here...
• Properties of $\alpha$ -decay to ground and excited states of heavy nuclei. Wang, Y. Z.; Gu, J. Z.; Dong, J.; Peng, B. B. // European Physical Journal A -- Hadrons & Nuclei;May2010, Vol. 44 Issue 2, p287
Branching ratios and half-lives of $\alpha$ -decay to the ground-state rotational bands as well as the high-lying excited states of even-even nuclei have been calculated in the framework of the generalized liquid drop model (GLDM) and Royer’s formula that we improved very recently. The...
• Electron and nuclear dynamics of molecular clusters in ultraintense laser fields. IV. Coulomb explosion of molecular heteroclusters. Last, Isidore; Jortner, Joshua // Journal of Chemical Physics;11/1/2004, Vol. 121 Issue 17, p8329
In this paper we present a theoretical and computational study of the temporal dynamics and energetics of Coulomb explosion of (CD4)n and (CH4)n (n=55–4213) molecular heteroclusters in ultraintense (I=1016–1019 W cm-2) laser fields, addressing the manifestation of electron...
# Closed-form solution to $y = \frac{\ln(a+x)}{\ln(b+x)}$?
I'm trying to solve the following equation for $\delta$:
$$-0.01 = \frac{\ln(1+\delta)}{\ln{\delta}}$$
And I found that while Wolfram Alpha and Mathematica can give me numerical estimates ($\delta = 0.034300977...$), they cannot give me a closed form solution (even including the non-elementary functions that they'll sometimes use.)
Is there a closed-form solution to the problem above, or the more general version below? $$y = \frac{\ln(a+x)}{\ln(b+x)}$$
Or is this a case where there isn't one, and where determining a closed form would require a new function unrelated to those already established (such as the Lambert W function)?
I have not taken Complex Analysis, or any other math higher than Differential Equations; however, depending on the complexity I may be able to follow along.
Full context and motivation:
A problem from my undergrad Nuclear Detection course: Given exponentially decaying signals of the form $f(t) = K\mathrm{e}^{-\mu t}$, generated periodically with period $x$, find the smallest $x$ such that signal pileup is less than 1%. Summing the overlapping tails gives $1.01 = \sum_{n=0}^{\infty}{\mathrm{e}^{-\mu n x}}$, which yields $x = \frac{\ln{101}}{\mu}$ after some massaging.
However, I noticed that in this instance, neglecting all but the first two terms of the series (i.e., 1 and $\mathrm{e}^{-\mu x}$) gave the result $x^{\prime} = \frac{\ln{100}}{\mu}$, for an error percentage between $x^{\prime}$ and $x$ of only 0.216%. So I wanted to figure out what percentage of pileup was required before more than just these terms were required. I selected 1% as my threshold. So I had $$\frac{x}{x^{\prime}} = \frac{\ln(1+\delta) - \ln{\delta}}{-\ln{\delta}}$$ which gives the first formula of the question.
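Since a closed form seems unlikely, the specific root can at least be pinned down numerically. Here is a minimal bisection sketch (the bracket endpoints and helper names are my own choices):

```python
import math

def pileup_ratio(d):
    # The left-hand side ln(1 + d) / ln(d); we want this to equal -0.01.
    return math.log(1 + d) / math.log(d)

def bisect(f, target, lo, hi, tol=1e-12):
    # Plain bisection on g(d) = f(d) - target; assumes one sign change on [lo, hi].
    g = lambda d: f(d) - target
    assert g(lo) * g(hi) < 0, "no sign change on the bracket"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

delta = bisect(pileup_ratio, -0.01, 1e-3, 0.5)
print(delta)  # roughly 0.034300977
```

As a sanity check, exponentiating both sides of the equation turns it into $\delta(1+\delta)^{100}=1$, and the computed root satisfies that to within floating-point error.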
• I think a numerical solution is as good as you can expect to do in this sort of situation. – Sean Lake Sep 4 '16 at 2:26
• A numerical solution certainly is useful, and it's all I really need. I was merely curious if there was some sort of closed form solution to this more interesting than let $OMG(x; a, b) = \frac{\ln{a+x}}{\ln{b+x}}$. – OmnipotentEntity Sep 4 '16 at 2:32
• Not really an answer, but I suspect that (depending on your choices for $a, b, x$) you'd most likely get a transcendental equation which would only be solvable numerically. – jamesh625 Sep 4 '16 at 2:48
• The specific case described can, by exponentiating both sides appropriately, be brought to the form $\delta(1+\delta)^{100}=1$. So this is in fact a polynomial of high degree, and one is looking for its real roots (of which there is but one in this case). More generally, if $y$ is rational in the second equation, then the problem amounts to solving some polynomial equation. My guess is that this isn't going to have a closed-form solution in any reasonable sense. – Semiclassical Sep 6 '16 at 4:48
We have $$y=\frac{\ln(a+x)}{\ln(b+x)}\implies (b+x)^y=a+x\implies x=(b+x)^y-a$$ $$\implies x=(b+(b+(b+\cdots+(b+x)^y-a)^y\cdots-a)^y-a)^y-a$$ The actual solution will look different for different values of $a$, $b$ and $y$. Now we come to the specific problem: $$\frac{-1}{100}=\frac{\ln(1+\delta)}{\ln \delta}$$ Let $f(\delta)=\ln \delta$; then $$\frac{-1}{100}f(\delta)=f(\delta+1)$$ Unrolling this recurrence gives $$f(\delta)=(-1)^{n}\frac{f(\delta-n)}{100^{n}}$$ Now substitute $f(\delta)=\ln\delta$ and consider the limit $\delta-n\to 0$.
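The rearrangement $x=(b+x)^y-a$ can also be tried directly as a fixed-point iteration. For the specific case $a=1$, $b=0$, $y=-1/100$ it reads $x_{n+1}=x_n^{-0.01}-1$, and it happens to converge from a starting guess in $(0,1)$; a quick sketch (convergence is not guaranteed for other values of $a$, $b$, $y$):

```python
# Fixed-point iteration x_{n+1} = x_n^(-0.01) - 1 for the specific case
# a = 1, b = 0, y = -1/100.  The starting guess is arbitrary in (0, 1).
x = 0.5
for _ in range(200):
    x = x ** -0.01 - 1
print(x)  # roughly 0.034300977, matching the numerical root
```

Near the root the iteration map has derivative about $-0.3$, which is why it contracts here; for other parameter values the derivative can exceed 1 in magnitude and the iteration diverges.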
# D3D10 - How to set constant buffers for effects?
## Recommended Posts
Hi, I'm having a bit of trouble getting constant buffers to work. I have grouped the variables together in my shader in a constant buffer, like so:
cbuffer Camera
{
matrix view, projection, world;
};
And I am using them like this:
VS_OUTPUT VS(VS_INPUT input)
{
VS_OUTPUT output = (VS_OUTPUT) 0;
output.pos = mul(input.pos, world);
output.pos = mul(output.pos, view);
output.pos = mul(output.pos, projection);
return output;
}
This code works fine if I set variables by hand. By this, I mean retrieving each variable with ID3D10Effect::GetVariableByName(...)->AsMatrix(). Here's an example, to clarify:
world->SetMatrix(static_cast<float*>(geometry->GetTransformation().GetWorldMatrix()));
view->SetMatrix(static_cast<float*>(scene->GetCamera()->GetView()));
projection->SetMatrix(static_cast<float*>(scene->GetCamera()->GetProjection()));
However, when I try and use a constant buffer to set these variables, I get no output - so presumably the matrices are not being passed.
ID3D10EffectConstantBuffer *cameraCBuffer, *worldCBuffer;
cameraCBuffer = effect->GetConstantBufferByName("Camera");
bool valid = cameraCBuffer->IsValid(); // Is showing as true.
D3D10_BUFFER_DESC cameraBufferDesc;
cameraBufferDesc.ByteWidth = sizeof(CameraConstants);
cameraBufferDesc.Usage = D3D10_USAGE_DYNAMIC;
cameraBufferDesc.BindFlags = D3D10_BIND_CONSTANT_BUFFER;
cameraBufferDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
cameraBufferDesc.MiscFlags = 0;
CameraConstants cameraConstants;
cameraConstants.projection = scene->GetCamera()->GetProjection();
cameraConstants.view = scene->GetCamera()->GetView();
cameraConstants.world = geometry->GetTransformation().GetWorldMatrix();
// CreateBuffer takes its initial data through a D3D10_SUBRESOURCE_DATA,
// not through the struct directly.
D3D10_SUBRESOURCE_DATA cameraData;
cameraData.pSysMem = &cameraConstants;
cameraData.SysMemPitch = 0;
cameraData.SysMemSlicePitch = 0;
ID3D10Buffer *cameraBuffer;
HRESULT hr = display->GetDevice()->CreateBuffer(&cameraBufferDesc, &cameraData, &cameraBuffer); // Returns S_OK
hr = cameraCBuffer->SetConstantBuffer(cameraBuffer); // Returns S_OK
I've checked my hr's where they're returned and they are all S_OK. Also, there is nothing in the debug output. Any ideas?
##### Share on other sites
While I haven't found a solution to make this work, I don't think it's actually necessary. I made a shader with 2 constant buffers and updated variables in both. Profiling in PIX shows 2 VSSetConstantBuffers(...) calls being made automatically, so presumably setting variables individually through ID3D10Effect handles the constant buffer plumbing for you.
Brilliant! :)
# Transistor as an Amplifier
A transistor can be used for amplifying a weak signal.
When a transistor is to be operated as an amplifier, three different basic circuit connections are possible.
These are (i) common base, (ii) common emitter and (iii) common collector circuits.
Whichever configuration is used, the emitter-base junction is always forward biased while the collector-base junction is always reverse biased.
Common base amplifier using a p-n-p transistor
In common base amplifier, the input signal is applied across the emitter and the base, while the amplified output signal is taken across the collector and the base. This circuit provides a very low input resistance, a very high output resistance and a current gain of just less than 1. Still it provides a good voltage and power amplification. There is no phase difference between input and output signals.
The common base amplifier circuit using a p-n-p transistor is shown in the figure. The emitter-base input circuit is forward biased by a low voltage battery VEB. The collector-base output circuit is reverse biased by means of a high voltage battery VCC. Since the input circuit is forward biased, the resistance of the input circuit is small. Similarly, since the output circuit is reverse biased, the resistance of the output circuit is high.
The weak input AC voltage signal is superimposed on VEB and the amplified output signal is obtained across collector-base circuit. In the figure we can see that,
VCB = VCC – ic RL
The input AC voltage signal changes net value of VEB. Due to fluctuations in VEB, the emitter current ie also fluctuates which in turn fluctuates ic. In accordance with the above equation there are fluctuations in VCB, when the input signals is applied and an amplified output is obtained.
Input characteristics graph of common base transistor
Output characteristics graph of common base transistor
Current gain, Voltage gain and Power gain
Current gain : Also called AC current gain (αac), is defined as the ratio of the change in the collector current to the change in the emitter current at constant collector-base voltage.
Thus, αac or simply $\large \alpha = \frac{\Delta i_c}{\Delta i_e}$ (VCB = constant)
As stated earlier also, α is slightly less than 1.
Voltage gain : It is defined as the ratio of change in the output voltage to the change in the input voltage. It is denoted by AV. Thus,
$\large A_V = \frac{\Delta i_c \times R_o}{\Delta i_e \times R_i}$
but $\large \alpha = \frac{\Delta i_c}{\Delta i_e}$ = the current gain.
$\large A_V = \alpha \frac{ R_o}{ R_i}$
Since Ro >> Ri, AV is quite high, even though α is slightly less than 1.
Power gain: it is defined as the ratio of the change in output power to the change in input power. Since P = Vi,
Therefore, power gain = current gain × voltage gain
or, $\large Power \; gain = \alpha^2 . \frac{ R_o}{R_i}$
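As a quick numerical check of the two formulas above, here is a short sketch; the values of α, Ri and Ro below are assumed, illustrative figures for a common base stage, not values taken from the text.

```python
def common_base_gains(alpha, r_in, r_out):
    """Common base stage: A_V = alpha * (R_o / R_i), power gain = alpha^2 * (R_o / R_i)."""
    a_v = alpha * (r_out / r_in)
    power_gain = alpha ** 2 * (r_out / r_in)
    return a_v, power_gain

# alpha slightly below 1, low input resistance, high output resistance (assumed values)
a_v, p_gain = common_base_gains(alpha=0.98, r_in=50.0, r_out=500e3)
print(a_v, p_gain)
```

Even though α < 1, the large Ro/Ri ratio (10^4 here) yields a voltage gain near 10^4, which is why the common base stage still amplifies voltage and power.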
Important Points in Common Base Amplifier
1. The output voltage signal is in phase with the input voltage signal.
2. The common base amplifier is used to amplify high (radio) – frequency signals and to match a very low source impedance (~20 Ω) to a high load impedance (~100 k Ω).
Common emitter amplifier using a p-n-p transistor
Figure shows a p-n-p transistor as an amplifier in common emitter mode. The emitter is common to both input and output circuits. The input (base-emitter) circuit is forward biased by a low voltage battery VBE. The output (collector-emitter) circuit is reverse biased by means of a high voltage battery VCC.
Since the base-emitter circuit is forward biased, the input resistance is low. Similarly, the collector-emitter circuit is reverse biased, so the output resistance is high. The weak AC input signal is superimposed on VBE and the amplified output signal is obtained across the collector-emitter circuit.
In the figure we can see that,
$\large V_{CE} = V_{CC} - i_c R_L$
When the input AC voltage signal is applied across the base-emitter circuit, it makes VBE fluctuate and hence the base current ib. This in turn changes the collector current ic, and consequently VCE varies in accordance with the above equation. This variation in VCE appears as an amplified output.
Input characteristics graph of common emitter transistor
Output characteristics graph of common emitter transistor
Current gain, Voltage gain and Power gain
Current gain: also called the AC current gain (βac), it is defined as the ratio of the change in collector current to the change in base current at constant collector-emitter voltage.
Thus, βac or simply $\large \beta = \frac{\Delta i_c}{\Delta i_b}$ (VCE = constant)
Voltage gain : It is defined as the ratio of change in the output voltage to the change in the input voltage. It is denoted by AV. Thus,
$\large A_V = \frac{\Delta i_c \times R_o}{\Delta i_b \times R_i}$
but $\large \beta = \frac{\Delta i_c}{\Delta i_b}$ = the current gain.
$\large A_V = \beta \frac{ R_o}{ R_i}$
Power gain: it is defined as the ratio of the change in output power to the change in input power. Since P = Vi,
Therefore, power gain = current gain × voltage gain
or, $\large Power \; gain = \beta^2 . \frac{ R_o}{R_i}$
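Because ie = ib + ic, the two current gains are related by β = α/(1 − α), which is why β (typically 15 to 50) is so much larger than α even though α is only slightly below 1. A small sketch, with α values assumed purely for illustration:

```python
def beta_from_alpha(alpha):
    """beta = alpha / (1 - alpha); follows from i_e = i_b + i_c with
    alpha = i_c/i_e and beta = i_c/i_b."""
    return alpha / (1.0 - alpha)

for alpha in (0.94, 0.97, 0.98):
    print(alpha, beta_from_alpha(alpha))  # beta grows rapidly as alpha approaches 1
```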
Important Points in Common Emitter Amplifier
(i) The value of the current gain β ranges from 15 to 50, which is much greater than α.
(ii) The voltage gain in common-emitter amplifier is larger compared to that in common base amplifier.
(iii) The power gain in common-emitter amplifier is extremely large compared to that in common base amplifier.
(iv) The output voltage signal is 180° out of phase with the input voltage signal in the common emitter amplifier.
Transconductance (gm):
There is one more term called transconductance (gm) in common emitter mode. It is defined as the ratio of the change in the collector current to the change in the base to emitter voltage at constant collector to emitter voltage. Thus,
$\large g_m = \frac{\Delta i_c}{\Delta V_{BE}}$ ; (VCE = constant)
The unit of gm is Ω-1, i.e. the siemens (S).
By simple calculation we can prove that,
$\large g_m = \frac{\beta}{R_{in}}$
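The "simple calculation" is a chain of ratios. Writing the input resistance in CE mode as $\large R_{in} = \frac{\Delta V_{BE}}{\Delta i_b}$ (an identification consistent with the definitions above),

$\large g_m = \frac{\Delta i_c}{\Delta V_{BE}} = \frac{\Delta i_c}{\Delta i_b} \times \frac{\Delta i_b}{\Delta V_{BE}} = \beta \times \frac{1}{R_{in}} = \frac{\beta}{R_{in}}$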
Advantages of a transistor over a triode valve
A transistor is similar to a triode valve in the sense that both have three elements. While the elements of a triode are the cathode, plate and grid, the three elements of a transistor are the emitter, collector and base. The emitter of a transistor can be compared with the cathode of the triode, the collector with the plate, and the base with the grid.
A transistor has the following advantages over a triode valve:
(i) A transistor is small and cheap compared to a triode valve, and it can withstand mechanical shocks.
(ii) A transistor has much longer life as compared to a triode valve.
(iii) Loss of power in a transistor is less as it operates at a much lower voltage.
(iv) In a transistor no heating current is required, so unlike a triode valve a transistor starts functioning immediately when the circuit is switched on. Valves come into operation only some time after switch-on (once the cathode has heated up).
Drawbacks of a transistor over a triode valve:
Transistors have the following drawbacks compared to valves:
(i) Since transistors are made of semiconductors, they are temperature sensitive and cannot be operated at high temperatures.
(ii) The noise level in transistors is high. Taking all factors into consideration, however, transistors have replaced valves in most modern electronic devices.
In transistors, the base region is kept narrow and lightly doped; otherwise the electrons or holes coming from the input side (the emitter, in the CE configuration) would not be able to reach the collector.
# Tag Info
5
The article makes no sense. Einstein realized that matter was composed of atoms, so the number of collisions of a Brownian particle with the surrounding molecules is finite in a finite period of time. However, for times $t$ much longer than the typical time scale between collisions, the particle moves by a distance scaling like $\sqrt{t}$. It follows that ...
4
There are tons of papers on the connection between quantum processes and probability theory (though I don't understand why you single out coherent states - they don't play a special role in this connection). The theory of stochastic processes and the theory of quantum processes are the commutative and noncommutative side of the same coin, with many ...
4
The differential is used to specify that the number is for a "differential range", which is a way to remind you that the notions involved are somewhat fuzzy. Let me give a purely mathematical example. Suppose I tell you that I am going to pick an arbitrary real number between 0 and 10, with the likelihood of a number being picked being proportional to the ...
4
For Brownian motion, Langevin equations, Fokker-Planck equations, stochastic processes... from the viewpoint of physicists, the following are standard references: Brownian Motion: Fluctuations, Dynamics, and Applications; The Fokker-Planck Equation: Methods of Solutions and Applications; Handbook of Stochastic Methods: for Physics, Chemistry and the Natural ...
3
Many different types of connection can be made between stochastic states over commutative algebras of observables and quantum states over noncommutative algebras of observables. As Arnold says, there is a substantial literature. One approach is to construct both classical and quantum models in a formalism that accommodates both; within the structured ...
1
The equation resembles the heat equation very closely, so much so that it is the heat equation. The literature on elementary things like this is uniformly terrible, so I cannot give a reference. To see why it is the heat equation, note that the law for the distribution function is the same as for a probability distribution, so it is a linear equation, by the ...
1
This is a bit strange. The Langevin equation $$\frac{dv}{dt} + \beta v = \frac{F}{m}$$ for the motion of a free particle under a stochastic force $F$ evaluates the velocity as an average or in an interval. The stochastic force has a Gaussian probability distribution $\langle F(t)F(t')\rangle = 2\beta kT\,\delta(t-t')$, which is also a Markov ...
Only top voted, non community-wiki answers of a minimum length are eligible
# How has mathematics been done historically? [Book Reference Request]
I know the original question title "what is the foundation of mathematics really?" seems pretty bad, or extremely bad, since it's really a huge question to answer. And I totally accept the fact that someone is going to disapprove this question.
But I have to ask this question anyway; the curiosity, or more accurately the pain, is killing me, since it has been bothering me for a very long time.
What is the foundation of mathematics really? Or the other way to state this question is like how those people lived in the past studied mathematics really?
I mean come on, I know nowadays people start to learn math from their elementary school doing simple things like addition or subtraction that sort of Arithmetics stuff, and as they move to middle school or high school, they start to learn geometry, or pre-calculus.
But in my perspective, this content, or this path of learning math, is purely a highly condensed abstraction designed by people in the education field. I am confident that this can't be the way people in the past learned math. By people in the past, I really mean people in the past: two or three hundred years ago, and back to the very beginning of humankind.
I have done some research on my own, and it seems like the foundation of modern mathematics is Euclid's Elements; this is pretty much the only reference I can find that looks to me like the very beginning of math. I am wondering if someone can recommend some references like Euclid's Elements, this sort of "very-beginning-of-math" text.
Sincerely appreciated.
• Half-assed answer: check out some type theory. There are actually many competing models for the "foundations of math", be that the Zermelo-Frankel formulation of set theory (with or without Choice), many different formulations of type theory (Homotopy Type Theory, anyone?) or set theory, and other categorical hijinx. Steve Awodey describes how three of the most popular formulations can be interchanged here: researchgate.net/publication/…. A nice interactive introduction to type theory: leanprover.github.io/tutorial – Jack Crawford Aug 2 at 2:10
• Actually, I posted this comment before completely processing your question. It seems that rather than "foundations" of math (in which many developments are actually quite recent and cutting-edge), you are asking more about the way that math has historically developed and been taught and understood. The earliest math really was just arithmetic for accounting purposes - although they mostly worked in the semigroup of $\mathbb{N}_+$ since they didn't have negatives or the number $0$. A little bit of (Euclidean) geometry used for largely agricultural purposes, too. What time period do you want? – Jack Crawford Aug 2 at 2:17
• If it's possible to have stuff around the 19th century that would be nice. Sincerely appreciated. – DigitalSoul Aug 2 at 2:33
• Galois made some huge strides towards the start of the 19th century and his research was controversial when he released it but very quickly basically became foundational to all studies in algebra. We owe a surprising amount of our understanding of field theory to him. – Jack Crawford Aug 2 at 2:52
• There are the books by F. Cajori that contain a view on the history of mathematics from around 1900 covering the period from about Fibonacci to Gauss. "A History of mathematics", and also instructive if a bit long-winded, "A history of mathematical notation". – Dr. Lutz Lehmann Aug 7 at 7:45
Victor Katz's A History of Mathematics does a pretty good job at tracing how some of our modern notions were developed, and sometimes how these things were conceptualized at the time and place. He gets a bit technical at times, being in an awkward spot between writing a history book and writing mathematics. But overall, I think it provides some of the context you're looking for, starting with Mesopotamian mathematics.
Of course, it's difficult to say definitively that such-and-such is definitively how they were thinking in comparison to now. So often enough the best we can do is instead describe how mathematics was used and taught. Lots of mathematics in the ancient world isn't based on proof but on algorithm or learned by example, where it seems expected that you follow and can generalize. Things are often stated in physical terms and appeal to intuition, being related to constructions, farm yields, and other things that ancient civilizations understandably prioritized.
Katz also goes into later mathematics in the later chapters, at first focusing on the development of calculus, but also touching on later developments and understandings of algebra, complex analysis, geometry, and a bit of logic and arithmetic. Part four of the book, for example, has chapters called "Analysis in the Eighteenth Century", "Algebra and Number theory in the Nineteenth Century", and "Aspects of the Twentieth Century and Beyond". The book more or less tries to follow a chronological telling, but obviously this isn't completely possible with things happening concurrently.
Your question is not fully clear to me. It's not clear whether you are asking about math education or the foundations of mathematics, which are different things. As far as I know, in the past, learning math was basically studying Euclid's Elements (at least in the West).
Now with regard to the foundations of mathematics. Euclid's Elements is not the foundation of mathematics; what Euclid did was organize all the mathematical knowledge of his time (especially geometry) in an axiomatic way, and he was the first person to do this (again, as far as I know). His work is not perfect and contains logical flaws, but it is impressive, and it is totally worth studying. As for the current foundation of mathematics, it's mainly ZFC set theory (again, as far as I know; I'm not an expert). There are a lot of books about the foundations of math, some more philosophical than mathematical, so it depends on your interests; questions like "what is a number?" are more philosophical than mathematical. A great book that I'm currently studying is Introduction to the Foundations of Mathematics by Raymond L. Wilder; I totally recommend it, and besides, it's free: you can find it on archive.org. The other book that comes to mind is Introduction to Mathematical Philosophy by Russell.
If you want a brief overview of the history and development of mathematics through the ages around the world, there are some sites on the web like this one: https://www.storyofmathematics.com (probably intended for a general audience rather than academics only).

For early mathematics in the ancient Middle East you might check the book "A Remarkable Collection of Babylonian Mathematical Texts" (Springer, 2007) by Jöran Friberg.

From ancient Greek mathematics some original textbooks have been preserved and edited, like Euclid's Elements, the standard textbook of ancient Greek mathematics and the best known one, some works by Diophantus, and the treatise on conic sections by Apollonius; most notable (I think) are the versions edited and annotated by Thomas Heath.

Then Descartes' Geometry is also worth taking a look at.

Then there is a classic Chinese math textbook called the Nine Chapters on the Mathematical Art.

About Indian mathematics you can find information at the site I posted at the beginning; I don't know whether modern translations of classical Indian math texts exist, though.
There's a difference between the questions:
1. How did people used to learn math?
2. What is the foundation of math?
For (1), the answer is roughly: before computers, people learned math from books, school, university courses, and from whatever experts they had access to: their parents, their teachers, friends of the family, etc. Most people's approach was ad hoc, learning things as necessary and updating their point of view where needed. They would seek to create, not the ship straightaway, but an ongoing project to construct and reconstruct the ship so as to sail further and further into deeper waters. And honestly, not much has changed. Our approach to math is still rooted in this incremental, ad-hoc approach. Math education theorists basically say: "Expose the student and give them some tasks, then expose them some more and give them more tasks, then expose them even more with some further tasks. Eventually they will get there."
For what it's worth, I largely disagree with this approach. In my opinion, we should be creating software that allows both novices and experts to get started doing formalized mathematics right from the early days. I'm not saying we should go crazy, and ask Year 1 students to do programming and formalized mathematics before they can even multiply numbers properly. I am saying that we should ask Year 3 students to do programming and formalized mathematics. Good software doesn't restrict people, it aids them. This is the path that I think we need to go down.
For (2), note that Euclid's Elements is definitely not a foundation of math in any meaningful sense of the word, because it only deals with plane geometry.
The currently "standard" foundation of math is called ZFC. You can learn about it from Goldrei's excellent Classic Set Theory.
An alternative framework that arguably fits better with the modern categorical viewpoint is called constructive type theory. You may also be interested in Vladimir Voevodsky's Univalent Foundations program. Under Voevodsky's proposal, the very fundamental objects called $$\infty$$-groupoids, which are very hard to reason about in classical foundations, become much easier to use, which makes his program very appealing.
Be sure to check out Coq and Lean if you're interested in computer-formalized mathematics.
Also be sure to read up about the Brouwer-Hilbert controversy, which was a debate about the question: "If I want to prove $$\varphi$$, is it enough to assume $$\neg \varphi$$ and derive a contradiction?"
Other approaches you might be interested in include SEAR and NFU.
However, in my opinion, all of the above approaches are fundamentally very unpleasant. In particular, my opinion is that all the above approaches ignore the fundamental insight of category theory, namely that composition is more fundamental than evaluation. I'm also dissatisfied with existing ideas about how math and programming are connected. Surely the real connections are deeper and cleaner than anything we currently understand. Long story short, I think we're still waiting for a final framework. Eventually a maximally beautiful framework may come to light. For now, wandering around in half-lit caverns will have to do.
• Interesting. Two very correct answers of mine on this website have been down-voted, one after another. This site seems to be dying some kind of a "Dunning-Kruger" death. The ignorant multiply and grow bolder, dominating discussions with their boring superstitions, while those with more knowledge gradually leave for greener pastures. – goblin Aug 2 at 5:34
• I didn't downvote, but I think the downvotes may come from the fact that you answer seems to be directed only at the title of the question, rather than the body of the question itself. – Jackozee Hakkiuz Aug 2 at 6:29
My Math Forum: Mathematical Model of Cow herd size after n years
Applied Math Forum
November 11th, 2014, 06:38 PM #1
Newbie
Joined: Nov 2014
From: Baroda
Posts: 1
Thanks: 0

Mathematical Model of Cow herd size after n years

I am trying to figure out a formula (mathematical model) for cow herd size after n years, starting from a single cow 4 years old.

Provided:
A cow starts giving birth after she becomes 4 years old.
Every year she gives one calf.
Take 50% of newborn calves as male and 50% as female.
Take the lifespan of male as well as female cows as 20 years.

Assumptions: no premature deaths.

I want to construct a mathematical model by which I can know what the total number of animals in this herd will be after n years, provided that no animals are killed or sold.

If someone can help it will be nice.

Thank you,
Damodara Das
damodara.bvks@gmail.com
November 11th, 2014, 07:45 PM #2
Senior Member
Joined: Jan 2012
From: Erewhon
Posts: 245
Thanks: 112
Quote:
Originally Posted by Damodara Das I am trying to figure out a formula (mathematical model) for cow herd size after n years starting from a single cow 4 years old. Provided: A cow starts giving child after she becomes 4 years old. Every years she gives one child. Take 50% male and 50% female new born cows. Take lifespan of male as well as female cow as 20 years Assumptions: No premature deaths. I want to construct a mathematical model by which I can know what will be the total number of animals in this herd after 'n' number of years, provided that no animals are killed or sold. If someone can help it will be nice. Thankyou, Damodara Das damodara.bvks@gmail.com
What follows is untested and may contain errors, omissions and/or over-simplifications etc., but it should give you some idea of how to go about setting up such a model.
Let $C(t)$ be the number of cows in year $t$. Then the births in year $t$ are $b(t)= [C(t)-b(t-1)-b(t-2)-b(t-3)]/2 = C(t-4)/2$, and the deaths are $d(t)=b(t-20)$. So the population in year $t+1$ is:
$$C(t+1)=C(t)+b(t)-d(t)$$
With initial conditions $b(-3)=1$, $b(t)=0$ for all other $t \le 0$, and $b(1)=1$.
Notes: more work may be needed on the start-up conditions.
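For readers who want to experiment with the assumptions numerically, here is an age-structured, expected-value sketch in Python. It is an illustration only: it tracks cohorts by age rather than the b/d bookkeeping above, and it treats the 50/50 sex ratio as an expectation, so counts can be fractional.

```python
def herd_size(years):
    """Expected herd size after `years` years, starting from one 4-year-old cow.

    Assumptions from the thread: cows calve once per year from age 4,
    each calf is expected-half-female / expected-half-male, and every
    animal dies on reaching age 20 (so living ages are 0..19)."""
    LIFESPAN, FERTILE_AGE = 20, 4
    females = [0.0] * LIFESPAN        # females[a] = expected females of age a
    males = [0.0] * LIFESPAN
    females[FERTILE_AGE] = 1.0        # the founding cow

    for _ in range(years):
        births = 0.5 * sum(females[FERTILE_AGE:])  # expected female (= male) calves
        females = [births] + females[:-1]          # everyone ages one year;
        males = [births] + males[:-1]              # age-19 animals drop off (die at 20)

    return sum(females) + sum(males)

for n in (0, 1, 5, 10, 20):
    print(n, herd_size(n))
```

In this sketch the total grows by one animal per year until the first female calf reaches breeding age, after which growth compounds.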
CB
Last edited by CaptainBlack; November 11th, 2014 at 08:13 PM.
November 12th, 2014, 02:45 AM #3
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,674
Thanks: 2654
Math Focus: Mainly analysis and algebra

Your births equation assumes that no female cows died in the last 4 years.

Thanks from CaptainBlack
November 12th, 2014, 05:30 AM #4
Senior Member
Joined: Jan 2012
From: Erewhon
Posts: 245
Thanks: 112
Quote:
Originally Posted by v8archie Your births equation assumes that no female cows died in the last 4 years.
Indeed. See the disclaimer...
I think the model really needs to represent the herd state as $[M(t),F(t)]$, a vector of male and female numbers at each epoch. There is also a problem in that the births are equally likely to be male or female, which needs better treatment.
An interesting alternative might be to model the populations with delay differential equations.
On third thought, we need only model the population of females, since the population of males can be reconstructed from it. So now we are (maybe?) interested in the model:
$F(t+1)=F(t) + F(t-4)/2 - F(t-24)/2 - F(t-23)/2 - F(t-22)/2 - F(t-21)/2 - F(t-20)$
CB
Last edited by CaptainBlack; November 12th, 2014 at 06:28 AM.
November 12th, 2014, 07:14 AM #5
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,674
Thanks: 2654
Math Focus: Mainly analysis and algebra

I think the herd size is 1 for 16 years and zero thereafter. Because if there is only one cow, there is nothing to mate with it. Unless we aren't counting the bull at all.
November 12th, 2014, 09:45 AM #6
Senior Member
Joined: Jan 2012
From: Erewhon
Posts: 245
Thanks: 112
Quote:
Originally Posted by v8archie I think the herd size is 1 for 16 years and zero thereafter. Because if there is only one cow, there is nothing to mate with it. Unless we aren't counting the bull at all.
I think it was mated before we bought the farm.
CB
November 12th, 2014, 10:38 AM #7
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,674
Thanks: 2654
Math Focus: Mainly analysis and algebra

At what age would a bull become able to perform? Because you're still without a bull until one happens to be born and get old enough.

Thanks from CaptainBlack
November 12th, 2014, 07:04 PM #8
Senior Member
Joined: Jan 2012
From: Erewhon
Posts: 245
Thanks: 112
Quote:
Originally Posted by v8archie At what age would a bull become able to perform? Because you're still without a bull until one happens to be born and get old enough.
If we are being serious: even if the first cow gives birth in the first year, there is still a 50% chance that the calf will be a female, and so there will be no further births. Unless we assume there is some means of getting the cows served which does not depend on the bulls of the herd, or we start with a pair rather than a single cow. But in practical terms, unless the services of an external bull are employed, we run into serious inbreeding problems.
CB
November 12th, 2014, 07:16 PM #9
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,674
Thanks: 2654
Math Focus: Mainly analysis and algebra

Indeed. My (half-serious) point is that the situation needs some clarification at least.
November 12th, 2014, 08:09 PM #10
Senior Member
Joined: Jan 2012
From: Erewhon
Posts: 245
Thanks: 112
Quote:
Originally Posted by v8archie Indeed. My (half-serious) point is that the situation needs some clarification at least.
We agree about that. I particularly don't like having to combine a random element (probability 0.5 for a male birth, 0.5 for a female) with what is essentially a Fibonacci-like generation process.
CB
Last edited by CaptainBlack; November 12th, 2014 at 08:11 PM.
## what a type system always proves vs. what it can be made to prove
It seems to me that PL theory focuses on type systems with respect to what they prove about all well-typed programs. It seems to me that PL theory does not focus on type systems with respect to what they can be made to prove about some well-typed programs.
Is my perception of this focus correct, and, if so, is this a good state of affairs?
I am open to the idea that the answer is "yes, that is PL theory's focus, and that is a good state of affairs since the rest is pragmatics, i.e. software engineering, not PL theory." But I guess I wouldn't have asked the question if I didn't have suspicions to the contrary.
Another way of asking it is, does PL theory treat type systems primarily as a way language designers help programmers, and only secondarily as a way language designers help programmers help themselves? Or does it treat neither role as primary? Or are these roles inseparable?
### I am really unsure I
I am really unsure I understand what you are trying to say. Perhaps if you are more concrete it would be easier to know.
But if I understand you correctly, what you are asking about is the expressiveness of type systems. I think this is often discussed, even if this goal is hidden when you read theoretical papers that focus on proofs of type system properties.
You should search for "typeful programming", and check out languages with expressive type systems, from Haskell in the FP world to Ada in the imperative.
### thanks
Thanks for your response. I will do some searches for "type system expressiveness", "typeful programming", etc.
The discussions of expressiveness I have run across before seemed to focus on "false negatives", i.e. to what extent does a type system reject programs that, given an untyped interpretation, would not actually go wrong at runtime in the way the type checker "feared" they would. But probably I am misremembering or there are other broader discussions of expressiveness that I need to find.
Rereading my original post, I see now that it is a bit vague. Perhaps it would be more concrete to say that what I'm interested in is to what extent does a type system help me prove my program will not "go wrong" in application-specific ways (e.g. units checking in a physics program) as opposed to application-generic ways (e.g. null dereference). The point being that you can write a physics program using only "double" type or you can impose your own types on those doubles to make the type system prove stronger things about your program for you.
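To make the units point concrete, here is a hypothetical sketch (all names invented for illustration) in which a unit travels with each value, so adding metres to seconds fails immediately. The check here happens at run time; the appeal of imposing your own types on those doubles, as described above, is that a static type system enforces the same discipline before the program ever runs.

```python
class Quantity:
    """A number tagged with a unit: a runtime stand-in for a units-aware type."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def __mul__(self, other):
        # multiplication combines units, e.g. "m" * "s" -> "m*s"
        return Quantity(self.value * other.value, f"{self.unit}*{other.unit}")

total = Quantity(3.0, "m") + Quantity(4.0, "m")   # fine: same unit
try:
    Quantity(3.0, "m") + Quantity(1.0, "s")       # a units bug, caught at once
except TypeError as err:
    print(err)
```

In ML or Haskell the same idea is usually expressed with phantom type parameters, making the mismatched addition a compile-time type error rather than a runtime exception.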
Suppose we have an untyped programming language, and then impose a type discipline on it. So, now we will reject certain programs as being ill-typed, for whatever reasons the type system designer had in mind. That's clear, but now ask, what else happens?
Well, a surprising fact is that the imposition of a type discipline changes the language's natural notion of program equivalence. Intuitively, two program modules are "contextually equivalent" when all client programs that could use them ("contexts" in PL jargon) have equivalent behavior when you switch between the module implementations.
Why? This is because client contexts are themselves programs, and so there are fewer contexts in the typed world than the untyped. This is the underlying mechanism that makes type abstraction work: we use the type discipline to ensure that there are no client programs that can "peek under the covers".
As an example, consider the following ML program:
module type EVEN =
  sig
    type even
    val inject : int -> even option
    val outject : even -> int
  end

module Even : EVEN =
  struct
    type even = int
    let inject n = if n mod 2 = 0 then Some n else None
    let outject n = n
  end
Because the type Even.even is held abstract, we know with certainty that whenever v is a value of type Even.even, then calls of the form (Even.outject v) mod 2 will always return 0.
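For contrast, here is a dynamically-checked sketch of the same idea (names invented for illustration). Nothing stops a determined client from reaching inside the representation, so the invariant holds only by convention; that gap is exactly what the ML signature ascription closes statically.

```python
def make_even_module():
    """Runtime analogue of the sealed Even module: values of the hidden
    'even' type can only be created by inject, which checks the invariant."""
    class _Even:                       # representation hidden by convention only
        __slots__ = ("_n",)
        def __init__(self, n):
            self._n = n

    def inject(n):
        return _Even(n) if n % 2 == 0 else None

    def outject(e):
        return e._n

    return inject, outject

inject, outject = make_even_module()
v = inject(10)
print(outject(v) % 2)   # always 0 for any value produced by inject
print(inject(7))        # odd numbers are rejected
```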
### Exactly
First, your recollection is correct: static typing is necessarily conservative over execution because the entirety of the dynamic semantics of the program is not available at compile time except in a handful of extreme, trivial cases. This is the chief critique of static typing from dynamic typing aficionados that has any "bite," in my opinion, although in practice I find the observation trivial. If you have a Turing-complete dependent type system, the ε between what you can express at compile time and what you can't becomes infinitesimally small, and even short of that you can get vastly more helpful static guarantees out of, e.g. GADTs in Haskell or Scala, modules and phantom types in ML, etc. than most people are aware of.
It's precisely in this context—the context of relatively rich type systems like those of OCaml, Haskell, Scala, etc.—that we find the beginning of the answer to the latter part of your question. It's one thing to say "I have static typing with type inference, so I don't have to spell everything out but the compiler keeps me from making boneheaded mistakes," and here I think dynamic typing aficionados are essentially correct in saying "big deal; I get that much out of my unit tests," assuming good test coverage. An important line gets crossed, though, in making the transition to "typeful programming," where you, the programmer, express important program invariants using the type system, so that your program will not compile if you should happen to violate one or more of those invariants. Using types to implement proper units in numerical code is indeed an excellent example of this. This thread goes more deeply into the idea of sealing off important invariants in a module and having the type system then guarantee that these invariants hold throughout the program.
### Domain specific type systems
I may be misunderstanding the questions, but it almost sounds like benckla is asking about research into what I'll call "domain specific type systems." I.e. most type systems that are researched give you some fairly general concepts for preventing a fairly broad range of problems and leave it up to a programmer to map their domain into the type system. But just as languages can be targeted to domains, are there DSLs with type systems optimized for fairly specific areas? An example might be a physics based DSL with a static type system that natively understands units so it can statically prevent adding centimeters to ergs without forcing the developer to figure out how to make it all hang together.
To answer the question broadly, I think most PLT research into type systems is naturally going to be more general in nature, but such research may often be motivated by such specific problems. So expect that while there are going to be DSLs with type systems optimized for their domain, it's unlikely that you'll find much core research into the subject of designing such systems. That does seem to fall more on the SE side of the world rather than the PLT side.
### progress report
I just read Cardelli's "Typeful Programming." Undoubtedly the rich type system of the language it presents (Quest) is capable of expressing the kind of application-specific constraints I am interested in. And, when the introduction identifies typeful programming with "the widespread use of type information," I'm guessing/hoping that by "widespread" he means not just that which is imposed by the language but also that which is imposed by the programmer and only enabled by the language. Also the interpretation of types as partial program specifications is encouraging.
But, to me, the paper doesn't really pursue this idea. It presents the language, which is very cool and thought provoking, but it only briefly touches on the typeful programming style that the language enables. When it does touch on this topic it is mainly with respect to the module, interface, and system constructs.
So, unless I'm really missing something, I have to say I disagree with its conclusion that it has "illustrated a style of programming based on the use of rich type systems." It has presented a language that enables that style, and perhaps given some hints as to what that style might be like, but it has not illustrated that style, unless there is some major reading between the lines expected about how the style flows naturally/inevitably from the language.
I realize that Cardelli is a huge player in the field and who am I to criticize him, etc. But I don't mean it that way. No doubt this is a great paper. It's just not about what I wanted it to be about, and not what I thought it was going to be about from reading the introduction and conclusion. I must say I find this to be the case disturbingly often. Either authors habitually claim more than they actually deliver, or I read abstracts and introductions through the rose-colored glasses of what I hope the paper will be about. Or some combination of the two.
### Over-precise types boomerang?
Benjamin Pierce gave a talk (discussed here) that sounds like it relates closely to your distinction between imposed versus enabled by a language.
### I am a bit baffled and I am
I am a bit baffled, and perhaps I am simply again not understanding what it is you are after. Do you have any experience programming in Haskell or, alternatively, Ada? If so, have you also read examples of good code in these languages, or style manuals or textbooks? It seems to me that one of the main things emphasized is how to use the expressive type systems (imperfect as they are) to encode domain- and program-specific invariants, in order to assist design, programming, debugging, reuse, etc. At least that's what I tried to show students when I taught software engineering with Ada.
Or am I missing the point entirely?
P.S
Here is an ancient riddle I posed students that is a short illustration of what I have in mind.
### I think you get it
Thanks Ehud, I think you do "get" what I am talking about except for one little twist: my original question was not "where can I find discussions of typeful programming," it was more like "where can I find discussions of typeful programming *in PL theory*, or are such discussions only to be found in software engineering?"
Here's yet another way of putting my question: "It seems to me that PL theory focuses on type systems with respect to where they draw the lines in the classic quadrants of any test: false positives, etc. What about what a type system enables within the 'true positive' quadrant? Is that just software engineering? Or am I just ignorant of its discussions in PL theory?"
### I'm not Ehud...
...but my \$.02 here is that it's difficult to talk about this from a perspective that is simultaneously PLT-based (that is, giving a detailed description of language and type system design properties that lend themselves to typeful programming) and pragmatic (that is, making the connection to solving specific problems, e.g. statically disallowing write operations to read-only files, or statically disallowing any access at all to a file that isn't open [in the first place|anymore]). I continue to claim that the most accessible such writing is that of Oleg Kiselyov and Chung-chieh Shan on Lightweight static capabilities, which are all about selecting domain-specific (pragmatic, non-PLT-founded) invariants, hiding them behind an inviolable abstraction barrier (PLT/type theory/module system), and extending their guarantees to the rest of the code. It's partly to make a point about what can be done even with the type and module systems we have today, but also to demonstrate why you might want an even more powerful type system, e.g. with explicit support for dependent types. Well worth gaining mastery of, IMHO.
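A very loose Python rendering of the lightweight-static-capabilities idea (all names invented here; in the paper the abstraction barrier is an ML module signature and the guarantee is fully static, whereas mypy only approximates it): an index that has been bounds-checked once carries a type witnessing that fact, so later accesses need no re-check.

```python
from typing import List, NewType

# The "capability": an index already proven to be in bounds.
# Clients are meant to obtain a CheckedIndex only via check() below;
# a static checker rejects passing a raw int where one is expected.
CheckedIndex = NewType("CheckedIndex", int)

def check(xs: List[int], i: int) -> CheckedIndex:
    """The single place where the bounds invariant is established."""
    if not 0 <= i < len(xs):
        raise IndexError(i)
    return CheckedIndex(i)

def unsafe_get(xs: List[int], i: CheckedIndex) -> int:
    """No bounds test here: the type records that one already happened."""
    return xs[i]

xs = [10, 20, 30]
print(unsafe_get(xs, check(xs, 2)))   # 30
```

The point is the shape of the design: one small trusted kernel establishes the invariant, and the type extends its guarantee to all the untrusted code.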
### OK then ;-) There are all
OK then ;-)
There are all kinds of projects that explicitly attempt to make type systems more expressive (Paul is our resident expert on those). I'd mention a more general issue: the goal you are after is part of the context and background of the research. It is why a type system (or a type system feature) is being designed, proposed or analyzed. But most papers are not about the intuitive design process but rather on the provable properties of the resulting system. So when you read the papers you need to look for the motivation, come to seminars and conferences, or ask people in the community about why the type systems are of interest for expressing application-level invariants. These elements are usually not discussed at length in papers (be on the lookout for the "related work" and "further research" sections which often provide important clues...)
# The following is a summary of all relevant transactions of Vicario Corporation since it was...
The following is a summary of all relevant transactions of Vicario Corporation since it was organized in 2010.
In 2010, 15,000 shares were authorized and 7,000 shares of common stock ($50 par value) were issued at a price of $57. In 2011, 1,000 shares were issued as a stock dividend when the stock was selling for $60. Three hundred shares of common stock were bought in 2012 at a cost of $64 per share. These 300 shares are still in the company treasury.
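As a rough sketch of the amounts these transactions place in the equity accounts (assuming the usual treatment: par value to common stock, the excess to paid-in capital, and the cost method for treasury stock):

```python
PAR = 50

# 2010: issue 7,000 shares at $57
common_stock = 7_000 * PAR               # 350,000
paid_in_excess = 7_000 * (57 - PAR)      # 49,000

# 2011: 1,000-share stock dividend recorded at the $60 market price
common_stock += 1_000 * PAR              # +50,000
paid_in_excess += 1_000 * (60 - PAR)     # +10,000

# 2012: buy back 300 shares at $64 (cost method)
treasury_stock = 300 * 64                # 19,200

print(common_stock, paid_in_excess, treasury_stock)
```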
# show that
Question:
If $a=\frac{\sqrt{5}+\sqrt{2}}{\sqrt{5}-\sqrt{2}}$ and $b=\frac{\sqrt{5}-\sqrt{2}}{\sqrt{5}+\sqrt{2}}$, show that $3 a^{2}+4 a b-3 b^{2}=4+\frac{56}{3} \sqrt{10}$.
Solution:
According to question,
$a=\frac{\sqrt{5}+\sqrt{2}}{\sqrt{5}-\sqrt{2}}$ and $b=\frac{\sqrt{5}-\sqrt{2}}{\sqrt{5}+\sqrt{2}}$
$a=\frac{\sqrt{5}+\sqrt{2}}{\sqrt{5}-\sqrt{2}}$
$=\frac{\sqrt{5}+\sqrt{2}}{\sqrt{5}-\sqrt{2}} \times \frac{\sqrt{5}+\sqrt{2}}{\sqrt{5}+\sqrt{2}}$
$=\frac{(\sqrt{5}+\sqrt{2})^{2}}{(\sqrt{5})^{2}-(\sqrt{2})^{2}}$
$=\frac{(\sqrt{5})^{2}+(\sqrt{2})^{2}+2 \sqrt{5} \sqrt{2}}{5-2}$
$=\frac{5+2+2 \sqrt{10}}{3}$
$=\frac{7+2 \sqrt{10}}{3} \quad \ldots(1)$
$b=\frac{\sqrt{5}-\sqrt{2}}{\sqrt{5}+\sqrt{2}}$
$=\frac{\sqrt{5}-\sqrt{2}}{\sqrt{5}+\sqrt{2}} \times \frac{\sqrt{5}-\sqrt{2}}{\sqrt{5}-\sqrt{2}}$
$=\frac{(\sqrt{5}-\sqrt{2})^{2}}{(\sqrt{5})^{2}-(\sqrt{2})^{2}}$
$=\frac{(\sqrt{5})^{2}+(\sqrt{2})^{2}-2 \sqrt{5} \sqrt{2}}{5-2}$
$=\frac{5+2-2 \sqrt{10}}{3}$
$=\frac{7-2 \sqrt{10}}{3} \ldots(2)$
Now,
$3 a^{2}+4 a b-3 b^{2}$
$=3\left(a^{2}-b^{2}\right)+4 a b$
$=3(a+b)(a-b)+4 a b$
$=3\left(\frac{7+2 \sqrt{10}}{3}+\frac{7-2 \sqrt{10}}{3}\right)\left(\frac{7+2 \sqrt{10}}{3}-\frac{7-2 \sqrt{10}}{3}\right)+4\left(\frac{7+2 \sqrt{10}}{3} \times \frac{7-2 \sqrt{10}}{3}\right)$
$=3\left(\frac{14}{3}\right)\left(\frac{4 \sqrt{10}}{3}\right)+4\left(\frac{(7)^{2}-(2 \sqrt{10})^{2}}{9}\right)$
$=\frac{56}{3} \sqrt{10}+4\left(\frac{49-40}{9}\right)$
$=\frac{56}{3} \sqrt{10}+4$
Hence, $3 a^{2}+4 a b-3 b^{2}=4+\frac{56 \sqrt{10}}{3}$
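As a sanity check, the identity can be verified numerically (a Python sketch):

```python
from math import sqrt, isclose

a = (sqrt(5) + sqrt(2)) / (sqrt(5) - sqrt(2))
b = (sqrt(5) - sqrt(2)) / (sqrt(5) + sqrt(2))

lhs = 3*a**2 + 4*a*b - 3*b**2
rhs = 4 + 56*sqrt(10)/3

print(lhs, rhs)   # both approximately 63.03
```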
Original Article Journal of Korean Academy of Fundamentals of Nursing 1999;6(3): 477
Analysis of the correlation between fatigue and health-promoting lifestyle among college students in one region
Hee-Jung Jang, Department of Nursing Science, Hallym University
Abstract
Disease patterns among Koreans have shifted from acute and infectious diseases to chronic diseases. In line with this trend, people have become concerned with health promotion and health behaviors. Pender's (1996) revised health promotion model (HPM) consists of three categories: individual characteristics and experiences; behavior-specific cognitions and affect; and behavioral outcome. The first category, individual characteristics and experiences, comprises biological, psychological and socio-cultural personal factors, notably individual fatigue. Furthermore, these variables constitute a critical core for nursing intervention, as they are subject to modification through nursing actions. However, there has been little research on the relationship between fatigue and health promotion. Therefore, the purpose of this study was to investigate the correlation between fatigue and health-promoting lifestyle among college students in one rural region. Additionally, this descriptive correlational study identified the relations of demographic factors to fatigue and health-promoting lifestyle. From June 20 to 26, 1998, a convenience sample of 270 college students completed the fatigue questionnaire and the health-promoting lifestyle profile, developed by Yoshitake (1978) and Walker et al. (1987), respectively. Descriptive statistics, means, t-tests, ANOVA and Pearson correlation coefficients were used to analyze the data with the SAS PC+ program. The results were as follows:
1. The average fatigue score of the subjects was $64.93\pm12.89$. Fatigue scores by subcategory were physical symptoms ($23.5\pm4.87$), psychological symptoms ($22.11\pm4.66$) and neuro-sensory symptoms ($19.32\pm5.14$). With respect to the demographic characteristics of the subjects, there were statistically significant differences in fatigue by sex (t=3.69, p<0.01), major (t=-2.89, p<0.01) and the experience of family illness (t=2.76, p<0.01).
2. The average health-promoting lifestyle item score of the subjects was $2.33\pm0.33$. Among the subcategories, the highest degree of performance was self-actualization (2.94), followed by interpersonal support (2.81), stress management (2.33), exercise (2.20) and nutrition (2.10); the lowest was health responsibility (1.73). There were significant differences in health-promoting lifestyle by learning of health education (t=2.00, p<0.01), religion (F=3.01, p<0.05), circle activity (t=2.07, p<0.05) and nutrition control (t=5.25, p<0.01).
3. The correlation between fatigue and health-promoting lifestyle was not statistically significant (r=-0.09731, p>0.05), but there was a significant negative relationship between health-promoting lifestyle and the psychological-symptom subcategory of fatigue (r=-0.15721, p<0.05). Self-actualization showed significant negative correlations with all fatigue subcategories. Health responsibility showed a significant relationship with total fatigue (r=0.13050, p<0.05).
For further research, it is suggested to replicate this correlational study, along with a causal study, of fatigue and health-promoting lifestyle using another fatigue scale able to measure both subjective and objective degrees of fatigue. Nursing intervention programs also need to be developed for maintaining and promoting health behavior as well as for decreasing college students' fatigue.
Key words: college students | fatigue | health promoting life style
Any reader of this book will know that the solutions to the quadratic equation
$ax^2 + bx + c = 0 \label{1.3.1}$
are
$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \label{1.3.2}$
and will have no difficulty in finding that the solutions to
$2.9x^2 - 4.7x + 1.7 = 0 \nonumber$
are
$x = 1.0758 \text{ or } 0.5449. \nonumber$
We are now going to look, largely for fun, at two alternative iterative numerical methods of solving a quadratic equation. One of them will turn out not to be very good, but the second will turn out to be sufficiently good to merit our serious attention.
In the first method, we re-write the quadratic equation in the form
$x = \frac{-\left(ax^2 + c \right)}{b} \nonumber$
We guess a value for one of the solutions, put the guess in the right hand side, and hence calculate a new value for $$x$$. We continue iterating like this until the solution converges.
For example, let us guess that a solution to the equation $$2.9x^2 − 4.7x + 1.7 = 0$$ is $$x = 0.55$$. Successive iterations produce the values
\begin{array}{c c}
0.54835 & 0.54501 \\
0.54723 & 0.54498 \\
0.54648 & 0.54496 \\
0.54597 & 0.54495 \\
0.54562 & 0.54494 \\
0.54539 & 0.54493 \\
0.54524 & 0.54493 \\
0.54513 & 0.54494 \\
0.54506 & 0.54492 \\
\nonumber
\end{array}
We did eventually arrive at the correct answer, but it was very slow indeed even though our first guess was so close to the correct answer that we would not have been likely to make such a good first guess accidentally.
Let us try to obtain the second solution, and we shall try a first guess of 1.10, which again is such a good first guess that we would not be likely to arrive at it accidentally. Successive iterations result in
\begin{array}{c}
1.10830 \\
1.11960 \\
1.13515 \\
\nonumber
\end{array}
and we are getting further and further from the correct answer!
Let us try a better first guess of 1.05. This time, successive iterations result in
\begin{array}{c}
1.04197 \\
1.03160 \\
1.01834 \\
\nonumber
\end{array}
Again, we are getting further and further from the solution.
No more need be said to convince the reader that this is not a good method, so let us try something a little different.
$ax^2 + bx = -c \label{1.3.3}$
Add $$ax^2$$ to each side:
$2ax^2 + bx = ax^2 - c \label{1.3.4}$
or $(2ax + b)x = ax^2 - c \label{1.3.5}$
Solve for $$x$$: $x = \frac{ax^2-c}{2ax+b} \label{1.3.6}$
This is just the original equation written in a slightly rearranged form. Now let us make a guess for $$x$$, and iterate as before. This time, however, instead of making a guess so good that we are unlikely to have stumbled upon it, let us make a very stupid first guess, for example $$x = 0$$. Successive iterations then proceed as follows.
\begin{array}{c}
0.00000 \\
0.36170 \\
0.51751 \\
0.54261 \\
0.54491 \\
0.54492 \\
\nonumber
\end{array}
and the solution converged rapidly in spite of the exceptional stupidity of our first guess. The reader should now try another very stupid first guess to try to arrive at the second solution. I tried $$x = 100$$, which is very stupid indeed, but I found convergence to the solution $$1.0758$$ after just a few iterations.
Even though we already know how to solve a quadratic equation, there is something intriguing about this. What was the motivation for adding $$ax^2$$ to each side of the equation, and why did the resulting minor rearrangement lead to rapid convergence from a stupid first guess, whereas the simple direct iteration either converged extremely slowly from an impossibly good first guess or did not converge at all?
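Both rearrangements are easy to reproduce numerically; here is a short Python sketch of the two iterations for the example equation (the iteration counts are arbitrary, just comfortably large):

```python
a, b, c = 2.9, -4.7, 1.7

def slow_step(x):
    # first rearrangement: x = -(a*x**2 + c) / b
    return -(a*x*x + c) / b

def fast_step(x):
    # second rearrangement: x = (a*x**2 - c) / (2*a*x + b)
    return (a*x*x - c) / (2*a*x + b)

x = 0.55                      # excellent first guess, slow method
for _ in range(9):
    x = slow_step(x)
print(round(x, 5))            # 0.54506: still creeping toward 0.54492

y = 0.0                       # stupid first guess, fast method
for _ in range(20):
    y = fast_step(y)
print(round(y, 4))            # 0.5449

z = 100.0                     # very stupid first guess, fast method
for _ in range(60):
    z = fast_step(z)
print(round(z, 4))            # 1.0758
```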
# Is this even or odd?
Note: There has not yet been a vanilla parity-test challenge (there is a C/C++ one, but it disallows languages other than C/C++, and the other non-vanilla ones are mostly closed too), so I am posting one.
Given a positive integer, output its parity (i.e. if the number is odd or even) in truthy/falsy values. You may choose whether truthy results correspond to odd or even inputs.
# Examples
Assuming True/False as even and odd (this is not required; you may use other truthy/falsy values for each), respectively:
(Input):(Output)
1:False
2:True
16384:True
99999999:False
• This isn't the first time I've confused mathematical with computational parity... this is a code site after all! – Neil Mar 21 '17 at 10:38
• Since this is pretty much one of these(1,2,3) questions, it should probably have a snippet to see all the answers. – fəˈnɛtɪk Mar 21 '17 at 13:57
• @MikeBufardeci Because "catalogue" is spelled differently based on which country you're from. For those of us in the U.S., it's "catalog". "Leaderboard" is culture-invariant. – mbomb007 Mar 21 '17 at 16:37
• @tuskiomi The challenge only asks about positive integers. (0 is considered even but not positive) – Calvin's Hobbies Mar 22 '17 at 2:46
• @LucioCrusca Welcome to PPCG! The basic idea of Code Golf is to make a program in the shortest form you can. This challenge is to read an integer (positive,non-zero), and output if it is even or odd. If you are confused with something, please visit The Nineteenth Byte and ask freely. Or if you are confused with the site's policy or rules, go to the Meta. Finally, Thanks for subscribing to our community! – Matthew Roh Mar 23 '17 at 15:09
# Microscript, 4 bytes
2si%
# Microscript II, 4 bytes
2sN%
Prints 1 for odd and 0 for even.
# Pyramid Scheme, ~~185~~ 149 bytes
^
/=\
^---^
^- /#\
^- ^---
/#\ /+\
---^---^
/"\ ^-
^---/"\
-^ ^---
//\-
^---^
-^ /2\
/#\---
---^
/ \
/arg\
^-----
-^
/1\
---
Try it online!
Uses a different method from Khuldraeseth's answer, and outputs 1 for odd numbers, 0 for even. This takes input as a command line argument. This basically equates to:
a = int(input())/2
print(a == int(str(a) + str(0)))
This works because a string like "0.5"+"0" is converted to the float 0.5, whereas "1"+"0" will be 10. This means numbers already with a decimal point will ignore the added 0.
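Python's `float()` happens to show the same behaviour, so the trick can be checked directly (a sketch, not Pyramid Scheme itself):

```python
# Appending "0" to a number that already has a decimal point
# does not change its value...
assert float("0.5" + "0") == 0.5     # "0.50" -> 0.5

# ...but appending "0" to a whole number multiplies it by ten.
assert float("1" + "0") == 10.0      # "10" -> 10.0
```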
(0)(1)()^S
Input is hard-coded in unary, using ~ in the 3rd bracket. 0 for odd, 1 for even.
Try it Online!
# AsciiDots, 16 bytes
.-#?{&}$#
 .-#1/
Outputs 0 if even, and 1 if odd. Basically just a Boolean AND with 1.
Try it online!
# MineFriff, 7 bytes
Ii2,%o;
Explanation:
Ii {Change the input mode to (I)nteger and take input}
2, {Push 2 onto the stack}
%o {x % 2 and output}
;  {end prog}
## vJASS, 259 bytes
//! zinc
library a{function onInit(){trigger t=CreateTrigger();TriggerRegisterPlayerChatEvent(t,Player(0),"",false);TriggerAddAction(t,function(){real r=S2R(GetEventPlayerChatString());string s="0";if(ModuloReal(r,2)==0){s="1";}BJDebugMsg(s);});}}
//! endzinc
Prints 1 for even inputs and 0 for odd inputs.
Explanation
//! zinc
library a{
    function onInit()
    {
        trigger t = CreateTrigger();
        // Create a CHAT EVENT.
        TriggerRegisterPlayerChatEvent(t,Player(0),"",false);
        TriggerAddAction(t, function(){
            // Convert STRING to Decimal value.
            real r = S2R(GetEventPlayerChatString());
            string s = "0";
            if( ModuloReal(r,2) == 0 ){
                s = "1";
            }
            // Print the result.
            BJDebugMsg(s);
        });
    }
}
//! endzinc
Using Zinc makes it possible to minimize the code. If you want to get the user's input, you need to add TriggerRegisterPlayerChatEvent.
# asm2bf, 7 bytes
modr1,2
Yes, it compiles. Takes input in r1, writes output to r1. 1 for odd, 0 for even.
## SystemVerilog, 29 bytes
task t(n);$write(n%2);endtask
Prints 1 for odd numbers and 0 for even numbers.
Testbench:
module m;
initial begin
t(1);
t(2);
t(16384);
t(99999999);
end
task t(n);$write(n%2);endtask
endmodule
Output (VCS on EDA Playground):
1 0 0 1
# FEU, 21 bytes
u/x m/^(xx)+x$/1/x+/0
Try it online!
0 for even, 1 for odd
# Python 3, 15 bytes
lambda n:n%2==0
Try it online!
Returns True for even inputs and False for odd inputs.
• ==0 can be <1 – Jo King Aug 11 at 7:26
• There are two earlier answers that use exactly the same approach but without the (in)equality (printing 1/0 instead of True/False). Does your answer really contribute anything new? – Dingus Aug 11 at 9:08
# Javascript <= ES5, 28 bytes
function i(n){return n%2==1}
If returning 1 or 0 is allowed, then (27 bytes):
function i(n){return n%2&1}
• The second one is invalid JS, but the first is ok. – Rɪᴋᴇʀ Apr 17 '17 at 21:40
• @Riker fixed... – user68281 Apr 17 '17 at 21:43
• 1 or 0 is okay, yes. Also, you might be able to make this an anonymous function or lambda to save some bytes. – Rɪᴋᴇʀ Apr 17 '17 at 21:52
• You don't need the &1 at all, since -1 and 1 are both "truthy" and returning -1, 1, and 0 is allowed since it passes the "truthiness" test, i.e., they're valid values to place in an if() statement and be evaluated as you would expect. – Patrick Roberts May 17 '17 at 4:43
# Scala, 8 bytes
a=>a%2<1
Similar to the Java 8 solution; to compile, it probably also needs an explicit type, like this:
val m:Int=>Boolean=a=>a%2<1
• Welcome to the site. If you include an explanation of your code (even if short) It will be long enough to fix the title formatting. – Ad Hoc Garf Hunter Jul 30 '19 at 13:06
• Thank you.. I will do that – user88272 Jul 30 '19 at 14:38
# IAEA inspection at the Institute for Safety Problems of NPP
The functioning of the ISP NPP of the NAS of Ukraine is under the constant supervision of the Department of Nuclear Safety and Security of the International Atomic Energy Agency (IAEA).
From 6 to 7 September 2017, inspectors of the IAEA's Vienna International Center, Diego Manzipe Jimenez and Syn Chong Sin, conducted surveys of the Institute's territories, laboratory areas and office premises in Kyiv and Chornobyl in order to verify the absence of undeclared nuclear material and of any activity inconsistent with the Institute's statutory tasks. On the ISP NPP side, the inspectors' work was supported by Director Anatoly Nosovskiy, Deputy Director Volodymyr Scherbin, Assistant Director Fedir Khrustaliov, Head of Department Valery Khan, Head of Sector Yuri Vertkov and Director of the Special Engineering and Design Department Vitaly Tovstonogov.
During the inspection, the IAEA inspectors became familiar with the Institute activities, the scientific and practical capabilities of the research laboratories, and carried out the necessary examinations and measurements by means of special equipment.
# Superposition of discrete level and continuum: Electron bound and free [duplicate]
Superposition between discrete states of a system is widely considered in the literature, but a system, e.g., an $H$ atom, can also have a continuum in its energy spectrum.
Can the state of a system be a superposition of an energy level in the discrete part of the spectrum with one in the continuum part (or with an interval thereof)?
In other words: can an electron in an $H$ atom be in a superposition of being bound to and free from the proton?
# How do you find the Vertical, Horizontal, and Oblique Asymptote given y = (x + 1)/(x - 1)?
Feb 7, 2017
The vertical asymptote is $x = 1$
The horizontal asymptote is $y = 1$
No oblique asymptote
#### Explanation:
As you cannot divide by $0$, $x \ne 1$
The vertical asymptote is $x = 1$
As the degree of the numerator $=$ the degree of the denominator, there is no oblique asymptote
${\lim}_{x \to + \infty} y = {\lim}_{x \to + \infty} \frac{x}{x} = 1$
The horizontal asymptote is $y = 1$
graph{(y-(x+1)/(x-1))(y-1)(y-100x+100)=0 [-8.1, 9.674, -4.8, 4.085]}
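A quick numerical sanity check of both asymptotes (a Python sketch):

```python
def f(x):
    return (x + 1) / (x - 1)

# Horizontal asymptote: f(x) -> 1 as x -> +/- infinity
print(f(1e6))        # ~1.000002
print(f(-1e6))       # ~0.999998

# Vertical asymptote: |f(x)| blows up near x = 1
print(f(1 + 1e-9))   # on the order of 2e9
```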
# I have an embedding $\iota$ between two Hilbert spaces and want to know if $\iota\iota^\ast$ is something simple like an orthogonal projection
I'm reading A Concise Course on Stochastic Partial Differential Equations. In Proposition 2.5.2 the authors define the notion of a cylindrical $Q$-Wiener process $W$. It turns out that $W$ is just a $Q_1$-Wiener process on a "larger" Hilbert space. I've got the feeling that $Q_1$ is actually a really "simple" operator (like the identity or some kind of projection), especially in the popular case where $Q$ is the identity, but I got problems to figure that out.
Let me introduce the necessary objects: Let$^1$
• $U$ and $V$ be real Hilbert spaces
• $Q\in\mathfrak L(U)$ be nonnegative and symmetric
• $U_0:=Q^{1/2}U$ be equipped with $$\langle u,v\rangle_{U_0}:=\langle Q^{-1/2}u,Q^{-1/2}v\rangle_U\;\;\;\text{for }u,v\in U_0$$
• $\iota\in\operatorname{HS}(U_0,V)$ be an embedding
• $C:=\iota\iota^\ast$
• $V_0:=C^{1/2}V$ be equipped with $$\langle u,v\rangle_{V_0}:=\langle C^{-1/2}u,C^{-1/2}v\rangle_V\;\;\;\text{for }u,v\in V_0$$
In Proposition 2.5.2 the authors show that $V_0=\iota U_0$ and that $\iota$ is an isometry between $U_0$ and $V_0$.
If we treat $\iota$ as being an element of $\mathfrak L(U_0,V_0)$, it's easy to see that $\iota$ is unitary and hence $$C=\text{id}_{V_0}\;.$$ However, in the context of the book, I need to treat $\iota$ as an element of $\mathfrak L(U_0,V)$ (in Proposition 2.5.2 the authors show that $\iota$ is nonnegative and symmetric with finite trace). I think that even in that case $C$ is something really "simple", but I don't know what I need to do to reveal that.
$^1$ Let $\mathfrak L(A,B)$ and $\operatorname{HS}(A,B)$ denote the space of bounded, linear operators and Hilbert-Schmidt operators from $A$ to $B$, respectively, and $\mathfrak L(A):=\mathfrak L(A,A)$.
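In finite dimensions the isometry in Proposition 2.5.2 is easy to check numerically. The following NumPy sketch is my own illustration, taking $Q=\operatorname{id}$ so that $U_0=U$ with its standard inner product, and a square invertible matrix $A$ standing in for $\iota$, so that $C=AA^\ast$. Since $\langle x,y\rangle_{V_0}=x^\top C^{-1}y$, the inner product $\langle\iota u,\iota v\rangle_{V_0}$ is $u^\top(A^\top C^{-1}A)v$, and $A^\top C^{-1}A$ collapses to the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))        # stand-in for the embedding iota
C = A @ A.T                        # C = iota iota*

# <x, y>_{V0} = x^T C^{-1} y, so <Au, Av>_{V0} = u^T (A^T C^{-1} A) v.
M = A.T @ np.linalg.inv(C) @ A
print(np.allclose(M, np.eye(4)))   # True: iota is an isometry onto V0
```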
• I think you mean "the authors show that $\iota\iota^*$ [i.e., $C$] is nonnegative and symmetric with finite trace". If all you know about $\iota$ is that it is Hilbert-Schmidt and has no kernel, there is nothing else you can say about $\iota\iota^*$. – Nik Weaver Jun 24 '16 at 0:21
## Differential and Integral Equations
### On positive solutions of quasilinear elliptic equations
#### Abstract
In 1981, Peter Hess established a multiplicity result for solutions of boundary-value problems for nonlinear perturbations of the Laplace operator. The sufficient conditions given were later shown to be also necessary by Dancer and the second author. In this paper, we show that similar (and slightly more general) results hold when the Laplace operator is replaced by the $p-$Laplacian. Some applications to singular problems are given, as well.
#### Article information
Source
Differential Integral Equations Volume 22, Number 9/10 (2009), 829-842.
Dates
First available in Project Euclid: 20 December 2012
https://projecteuclid.org/euclid.die/1356019510
Mathematical Reviews number (MathSciNet)
MR2553058
Zentralblatt MATH identifier
1240.35210
Subjects
Primary: 35J92: Quasilinear elliptic equations with p-Laplacian
Secondary: 35B45: A priori estimates
#### Citation
Loc, Nguyen Hoang; Schmitt, Klaus. On positive solutions of quasilinear elliptic equations. Differential Integral Equations 22 (2009), no. 9/10, 829--842. https://projecteuclid.org/euclid.die/1356019510.
|
{}
|
If you are looking online for Homework Help on Big Ideas Math Grade 6 Advanced Ch 2, you have come to the right place. We have compiled the best preparation resources for the chapter here in detail. Make the most of these study materials and stand out from the rest of the crowd. Download the BIM Book Grade 6 Advanced Ch 2 Fractions and Decimals Solutions PDF for free and prepare anywhere and anytime you want.
Big Ideas Math Book 6th Grade Advanced Chapter 2 Fractions and Decimals Answers are sequenced as per the latest syllabus guidelines and are prepared by people with deep subject expertise. The Fractions and Decimals Big Ideas Math Grade 6 Advanced Answer Key covers questions from exercises, assignment tests, practice tests, etc. Prepare whichever topic you wish from the Chapter 2 Big Ideas Math Grade 6 Advanced concepts by simply tapping on the quick links available.
### Fractions and Decimals STEAM Video/ Performance Task
STEAM Video
Space is Big
An astronomical unit (AU) is the average distance between Earth and the Sun, about 93 million miles. Why do astronomers use astronomical units to measure distances in space? In what different ways can you compare the distances between objects and the locations of objects using the four mathematical operations?
Watch the STEAM Video 'Space is Big.' Then answer the following questions.
Question 1.
You know the distances between the Sun and each planet. How can you find the minimum and maximum distances between two planets as they rotate around the Sun?
Question 2.
The table shows the distances of three celestial bodies from Earth. It takes about three days to travel from Earth to the Moon. How can you estimate the amount of time it would take to travel from Earth to the Sun or to Venus?
Space Explorers
After completing this Chapter, you will be able to use the concepts you learned to answer the questions in the STEAM Video Performance Task.
You will use a table that shows the average distances between the Sun and each planet in our solar system to find several distances in space. Then you will use the speed of the Orion spacecraft to answer questions about time and distance.
Is it realistic for a manned spacecraft to travel to each planet in our solar system? Explain why or why not.
### Getting Ready for Chapter Fractions and Decimals
Chapter Exploration
Work with a partner. The area model represents the multiplication of two fractions. Copy and complete the statement.
Question 1.
Question 2.
Question 3.
Question 4.
Work with a partner. Use an area model to find the product.
Question 5.
$$\frac{1}{2} \times \frac{1}{3}$$
Question 6.
$$\frac{4}{5} \times \frac{1}{4}$$
Question 7.
$$\frac{1}{6} \times \frac{3}{4}$$
Question 8.
$$\frac{3}{5} \times \frac{1}{4}$$
Question 9.
MODELING REAL LIFE
You have a recipe that serves 6 people. The recipe uses three-fourths of a cup of milk.
a. How can you use the recipe to serve more people? How much milk would you need? Give 2 examples.
b. How can you use the recipe to serve fewer people? How much milk would you need? Give 2 examples.
Vocabulary
The following vocabulary terms are defined in this chapter. Think about what each term might mean and record your thoughts.
reciprocals, multiplicative inverses
### Section 2.1 Multiplying Fractions
Exploration 1
Using Models to Solve a Problem
Work with a partner. A bottle of water is $$\frac { 1 }{ 2 }$$ full. You drink $$\frac { 2 }{ 3 }$$ of the water. Use one of the models to find the portion of the bottle of water that you drink. Explain your steps.
number line
area model
tape diagram
Exploration 2
Solving a Problem Involving Fractions
Work with a partner. A park has a playground that is $$\frac { 3 }{ 4 }$$ of its width and $$\frac { 4 }{ 5 }$$ of its length.
a. Use a model to find the portion of the park that is covered by the playground. Explain your steps.
b. How can you find the solution of part (a) without using a model?
Math Practice
Find General Methods
How can you use your answer to find a method for multiplying fractions?
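As a quick aside for readers who want to check products like these by computer (this sketch is not part of the textbook), Python's exact-arithmetic `fractions` module applies the same rule the exploration is building toward: multiply the numerators, multiply the denominators, and simplify.

```python
from fractions import Fraction

# Exploration 2: the playground spans 3/4 of the park's width and 4/5 of
# its length, so the portion of the park it covers is the product.
width_share = Fraction(3, 4)
length_share = Fraction(4, 5)

# Multiplying fractions: numerators multiply, denominators multiply, and
# Fraction() reduces the result to simplest form automatically.
area_share = width_share * length_share
print(area_share)  # 3/5
```

The same three lines check any product in the exercises below, since `Fraction` always reduces to simplest form.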
### Lesson 2.1 Multiplying Fractions
Try It Multiply.
Question 1.
$$\frac{1}{3} \times \frac{1}{5}$$
Question 2.
$$\frac{2}{3} \times \frac{3}{4}$$
Question 3.
$$\frac{1}{2} \cdot \frac{5}{6}$$
Try It Multiply. Write the answer in simplest form.
Question 4.
$$\frac{3}{7} \times \frac{2}{3}$$
Question 5.
$$\frac{4}{9} \cdot \frac{3}{10}$$
Question 6.
$$\frac{6}{5} \cdot \frac{5}{8}$$
Question 7.
WHAT IF?
You use $$\frac{1}{4}$$ of the flour to make the dough. How much of the entire bag do you use to make the dough?
Try It Multiply. Write the answer in simplest form.
Question 8.
$$\frac{1}{3} \times 1 \frac{1}{6}$$
Question 9.
$$3 \frac{1}{2} \times \frac{4}{9}$$
Question 10.
$$4 \frac{2}{3} \cdot \frac{3}{4}$$
Try It Multiply. Write the answer in simplest form.
Question 11.
$$1 \frac{7}{8} \cdot 2 \frac{2}{5}$$
Question 12.
$$5 \frac{5}{7} \times 2 \frac{1}{10}$$
Question 13.
$$2 \frac{1}{3} \cdot 7 \frac{2}{3}$$
MULTIPLYING FRACTIONS AND MIXED NUMBERS
Multiply. Write the answer in simplest form.
Question 14.
$$\frac{1}{8} \times \frac{1}{6}$$
Question 15.
$$\frac{3}{8} \cdot \frac{2}{3}$$
Question 16.
$$2 \frac{1}{6} \cdot 4 \frac{2}{5}$$
Question 17.
MP REASONING
What is the missing denominator?
Question 18.
USING TOOLS
Write a multiplication problem involving fractions that is represented by the model. Explain your reasoning.
Question 19.
USING TOOLS
Use the number line to find $$\frac{3}{4} \times \frac{1}{2}$$ Explain your reasoning.
Self-Assessment for Problem Solving
Solve each exercise. Then rate your understanding of the success criteria in your journal.
Question 20.
You spend $$\frac{5}{12}$$ of a day at an amusement park. You spend $$\frac{2}{5}$$ of that time riding waterslides. How many hours do you spend riding waterslides? Draw a model to show why your answer makes sense.
Question 21.
A venue is preparing for a concert on the floor shown. The width of the red carpet is $$\frac{1}{6}$$ of the width of the floor. What is the area of the red carpet?
Question 22.
You travel 9$$\frac{3}{8}$$ miles from your house to a shopping mall. You travel $$\frac{2}{3}$$ of that distance on an interstate. The only road construction you encounter is on the first $$\frac{2}{5}$$ of the interstate. On how many miles of your trip do you encounter construction?
### Multiplying Fractions Practice 2.1
Review & Refresh
Find the LCM of the numbers.
Question 1.
8, 10
Question 2.
5, 7
Question 3.
2, 5, 7
Question 4.
6, 7, 10
Question 5.
6 ÷ $$\frac{1}{2}$$
Question 6.
$$\frac{1}{4}$$ ÷ 8
Question 7.
4 ÷ $$\frac{1}{3}$$
Question 8.
$$\frac{1}{5}$$ ÷ 4
Write the product as a power.
Question 9.
10 × 10 × 10
Question 10.
5 × 5 × 5 × 5
Question 11.
How many inches are in 5$$\frac{1}{2}$$ yards?
A. 15$$\frac{1}{2}$$
B. 16$$\frac{1}{2}$$
C. 66
D. 198
Concepts, Skills, & Problem Solving
MP CHOOSE TOOLS
A bottle of water is $$\frac{2}{3}$$ full. You drink the given portion of the water. Use a model to find the portion of the bottle of water that you drink.
Question 12.
$$\frac{1}{2}$$
Question 13.
$$\frac{1}{4}$$
Question 14.
$$\frac{3}{4}$$
MULTIPLYING FRACTIONS
Multiply. Write the answer in simplest form.
Question 15.
$$\frac{1}{7} \times \frac{2}{3}$$
Question 16.
$$\frac{5}{8} \cdot \frac{1}{2}$$
Question 17.
$$\frac{1}{4} \times \frac{2}{5}$$
Question 18.
$$\frac{3}{7} \times \frac{1}{4}$$
Question 19.
$$\frac{2}{3} \times \frac{4}{7}$$
Question 20.
$$\frac{5}{7} \times \frac{7}{8}$$
Question 21.
$$\frac{3}{8} \cdot \frac{1}{9}$$
Question 22.
$$\frac{5}{6} \cdot \frac{2}{5}$$
Question 23.
$$\frac{5}{12}$$ × 10
Question 24.
6 • $$\frac{7}{8}$$
Question 25.
$$\frac{3}{4} \times \frac{8}{15}$$
Question 26.
$$\frac{4}{9} \times \frac{4}{5}$$
Question 27.
$$\frac{3}{7} \cdot \frac{3}{7}$$
Question 28.
$$\frac{5}{6} \times \frac{2}{9}$$
Question 29.
$$\frac{13}{18} \times \frac{6}{7}$$
Question 30.
$$\frac{7}{9} \cdot \frac{21}{10}$$
Question 31.
MODELING REAL LIFE
In an aquarium, $$\frac{2}{5}$$ of the fish are surgeonfish. Of these, $$\frac{3}{4}$$ are yellow tangs. What portion of all fish in the aquarium are yellow tangs?
Question 32.
MODELING REAL LIFE
You exercise for $$\frac{3}{4}$$ of an hour. You jump rope for $$\frac{1}{3}$$ of that time. What portion of the hour do you spend jumping rope?
MP REASONING
Without finding the products, copy and complete the statement using <, >, or =. Explain your reasoning.
Question 33.
Question 34.
Question 35.
MULTIPLYING FRACTIONS AND MIXED NUMBERS
Multiply. Write the answer in simplest form.
Question 36.
$$1 \frac{1}{3} \cdot \frac{2}{3}$$
Question 37.
$$6 \frac{2}{3} \times \frac{3}{10}$$
Question 38.
$$2 \frac{1}{2} \cdot \frac{4}{5}$$
Question 39.
$$\frac{3}{5} \cdot 3 \frac{1}{3}$$
Question 40.
$$7 \frac{1}{2} \times \frac{2}{3}$$
Question 41.
$$\frac{5}{9} \times 3 \frac{3}{5}$$
Question 42.
$$\frac{3}{4} \cdot 1 \frac{1}{3}$$
Question 43.
$$3 \frac{3}{4} \times \frac{2}{5}$$
Question 44.
$$4 \frac{3}{8} \cdot \frac{4}{5}$$
Question 45.
$$\frac{3}{7} \times 2 \frac{5}{6}$$
Question 46.
$$1 \frac{3}{10} \times 18$$
Question 47.
$$15 \cdot 2 \frac{4}{9}$$
Question 48.
$$1 \frac{1}{6} \times 6 \frac{3}{4}$$
Question 49.
$$2 \frac{5}{12} \cdot 2 \frac{2}{3}$$
Question 50.
$$5 \frac{5}{7} \cdot 3 \frac{1}{8}$$
Question 51.
$$2 \frac{4}{5} \times 4 \frac{1}{16}$$
YOU BE THE TEACHER
Question 52.
Question 53.
Question 54.
MODELING REAL LIFE
A vitamin C tablet contains $$\frac{1}{4}$$ of a gram of vitamin C. You take 1$$\frac{1}{2}$$ every day. How many grams of vitamin C do you take every day?
Question 55.
MP PROBLEM SOLVING
You make a banner for a football rally.
a. What is the area of the banner?
b. You add a $$\frac{1}{4}$$-foot border on each side. What is the area of the new banner?
MULTIPLYING FRACTIONS AND MIXED NUMBERS
Multiply. Write the answer in simplest form.
Question 56.
$$\frac{1}{2} \times \frac{3}{5} \times \frac{4}{9}$$
Question 57.
$$\frac{4}{7} \cdot 4 \frac{3}{8} \cdot \frac{5}{6}$$
Question 58.
$$1 \frac{1}{15} \times 5 \frac{2}{5} \times 4 \frac{7}{12}$$
Question 59.
$$\left(\frac{3}{5}\right)^{3}$$
Question 60.
$$\left(\frac{4}{5}\right)^{2} \times\left(\frac{3}{4}\right)^{2}$$
Question 61.
$$\left(\frac{5}{6}\right)^{2} \cdot\left(1 \frac{1}{10}\right)^{2}$$
Question 62.
OPEN-ENDED
Find a fraction that, when multiplied by $$\frac{1}{2}$$, is less than $$\frac{1}{4}$$.
Question 63.
MP LOGIC
You are in a bike race. When you get to the first checkpoint, you are $$\frac{2}{5}$$ of the distance to the second checkpoint. When you get to the second checkpoint, you are $$\frac{1}{4}$$ of the distance to the finish. What is the distance from the start to the first checkpoint?
Question 64.
MP NUMBER SENSE
Is the product of two positive mixed numbers ever less than 1? Explain.
Question 65.
MP REASONING
a. Draw a diagram of the fountain in the garden. Label the dimensions.
b. Describe two methods for finding the area of the garden that surrounds the fountain.
c. Find the area. Which method did you use, and why?
Question 66.
MP PROBLEM SOLVING
The cooking time for a ham is $$\frac{2}{5}$$ of an hour for each pound. What time should you start cooking a ham that weighs 12$$\frac{3}{4}$$ pounds so that it is done at 4:45 P.M.?
Question 67.
MP PRECISION
Complete the Four Square for $$\frac{7}{8} \times \frac{1}{3}$$
Question 68.
DIG DEEPER!
You ask 150 people about their pets. The results show that $$\frac{9}{25}$$ of the people own a dog. Of the people who own a dog, $$\frac{1}{6}$$ of them also own a cat.
a. What portion of the people own a dog and a cat?
b. How many people own a dog but not a cat? Explain.
Question 69.
MP NUMBER SENSE
Use each of the numbers from 1 to 9 exactly once to create three mixed numbers with the greatest possible product. Then use each of the numbers exactly once to create three mixed numbers with the least possible product. Find each product. Explain your reasoning. The fraction portion of each mixed number should be proper.
### Section 2.2 Dividing Fractions
Exploration 1
Dividing by Fractions
Work with a partner. Answer each question using a model.
a. How many two-thirds are in four?
b. How many three-fourths are in three?
c. How many two-fifths are in four-fifths?
d. How many two-thirds are in three?
e. How many one-thirds are in five-sixths?
Exploration 2
Finding a Pattern
Work with a partner. The table shows the division expressions from Exploration 1. Complete each multiplication expression so that it has the same value as the division expression above it. What can you conclude about dividing by fractions?
Math Practice
Look for Structure
Can the pattern you found be applied to division by a whole number? Why or why not?
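The pattern the table is meant to reveal, that dividing by a fraction gives the same result as multiplying by its reciprocal, can be spot-checked on the Exploration 1 questions with a short Python sketch (not part of the textbook):

```python
from fractions import Fraction

# Each pair is (dividend, divisor) from Exploration 1, e.g.
# "How many two-thirds are in four?" is 4 ÷ 2/3.
pairs = [
    (Fraction(4), Fraction(2, 3)),
    (Fraction(3), Fraction(3, 4)),
    (Fraction(4, 5), Fraction(2, 5)),
    (Fraction(3), Fraction(2, 3)),
    (Fraction(5, 6), Fraction(1, 3)),
]
for a, b in pairs:
    reciprocal = Fraction(b.denominator, b.numerator)
    # Dividing by b gives the same result as multiplying by 1/b.
    assert a / b == a * reciprocal
    print(f"{a} ÷ {b} = {a / b}")
```

The assertion passing for every pair is exactly the conclusion the exploration asks you to draw about dividing by fractions.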
### Lesson 2.2 Dividing Fractions
Try It Write the reciprocal of the number.
Question 1.
$$\frac{3}{4}$$
Question 2.
5
Question 3.
$$\frac{7}{2}$$
Question 4.
$$\frac{4}{9}$$
Question 5.
$$\frac{1}{2} \div \frac{1}{8}$$
Question 6.
$$\frac{2}{5} \div \frac{3}{10}$$
Question 7.
$$\frac{3}{8} \div \frac{3}{4}$$
Question 8.
$$\frac{2}{7} \div \frac{9}{14}$$
Try It Divide. Write the answer in simplest form.
Question 9.
$$\frac{1}{3}$$ ÷ 3
Question 10.
$$\frac{2}{3}$$ ÷ 10
Question 11.
$$\frac{5}{8}$$ ÷ 4
Question 12.
$$\frac{6}{7}$$ ÷ 4
Self-Assessment for Concepts & Skills
Solve each exercise. Then rate your understanding of the success criteria in your journal.
DIVIDING FRACTIONS
Question 13.
$$\frac{2}{3} \div \frac{5}{6}$$
Question 14.
$$\frac{6}{7}$$ ÷ 3
Question 15.
WHICH ONE DOESN’T BELONG?
Which of the following does not belong with the other three? Explain your reasoning.
$$\frac{2}{3} \div \frac{4}{5}$$ $$\frac{3}{2} \cdot \frac{4}{5}$$ $$\frac{5}{4} \times \frac{2}{3}$$ $$\frac{5}{4} \div \frac{3}{2}$$
MATCHING
Match the expression with its value.
16. $$\frac{2}{5} \div \frac{8}{15}$$ (A) $$\frac{1}{12}$$
17. $$\frac{8}{15} \div \frac{2}{5}$$ (B) $$\frac{3}{4}$$
18. $$\frac{2}{15} \div \frac{8}{5}$$ (C) 12
19. $$\frac{8}{5} \div \frac{2}{15}$$ (D) 1$$\frac{1}{3}$$
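For readers checking their matching answers by computer, here is a short Python sketch (not part of the textbook) that evaluates each quotient; on my reading, the intended matches are 16 to B, 17 to D, 18 to A, and 19 to C.

```python
from fractions import Fraction

quotients = {
    "16": Fraction(2, 5) / Fraction(8, 15),   # 3/4, choice (B)
    "17": Fraction(8, 15) / Fraction(2, 5),   # 4/3, i.e. 1 1/3, choice (D)
    "18": Fraction(2, 15) / Fraction(8, 5),   # 1/12, choice (A)
    "19": Fraction(8, 5) / Fraction(2, 15),   # 12, choice (C)
}
for label, value in quotients.items():
    print(label, "=", value)
```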
Self-Assessment for Problem Solving
Solve each exercise. Then rate your understanding, of the success criteria in your journal.
Question 20.
You have 5 cups of rice to make bibimbap, a popular Korean meal. The recipe calls for $$\frac{4}{5}$$ cup of rice per serving. How many full servings of bibimbap can you make? How much rice is left over?
Question 21.
A band earns $$\frac{2}{3}$$ of their profit from selling concert tickets and $$\frac{1}{5}$$ of their profit from selling merchandise. The band earns a profit of \$1500 from selling concert tickets. How much profit does the band earn from selling merchandise?
### Dividing Fractions Practice 2.2
Review & Refresh
Multiply. Write the answer in simplest form.
Question 1.
$$\frac{7}{10} \cdot \frac{3}{4}$$
Question 2.
$$\frac{5}{6} \times 2 \frac{1}{3}$$
Question 3.
$$\frac{4}{9} \times \frac{3}{8}$$
Question 4.
$$2 \frac{2}{5} \cdot 6 \frac{2}{3}$$
Match the expression with its value.
5. 3 + 2 × 4² A. 22
6. (3 + 2) × 4² B. 35
7. 2 + 3 × 4² C. 50
8. 4² + 2 × 3 D. 80
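A note on these expressions: the exponent was flattened during extraction, so each "42" should be read as 4², which is what makes the answer choices work out. A short Python check (not part of the textbook):

```python
# Order of operations: exponents before multiplication, multiplication
# before addition, parentheses first.
expressions = {
    "3 + 2 × 4²": 3 + 2 * 4**2,      # 35, choice B
    "(3 + 2) × 4²": (3 + 2) * 4**2,  # 80, choice D
    "2 + 3 × 4²": 2 + 3 * 4**2,      # 50, choice C
    "4² + 2 × 3": 4**2 + 2 * 3,      # 22, choice A
}
for expr, value in expressions.items():
    print(expr, "=", value)
```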
Find the area of the rectangle.
Question 9.
Question 10.
Question 11.
Concepts, skills, & Problem Solving
CHOOSE TOOLS
Answer the question using a model. (See Exploration 1, Page.No 53.)
Question 12.
How many three-fifths are in three?
Question 13.
How many two-ninths are in eight-ninths?
Question 14.
How many three-fourths are in seven-eighths?
WRITING RECIPROCALS
Write the reciprocal of the number.
Question 15.
8
Question 16.
$$\frac{6}{7}$$
Question 17.
$$\frac{2}{5}$$
Question 18.
$$\frac{11}{8}$$
DIVIDING FRACTIONS
Divide. Write the answer in simplest form.
Question 19.
$$\frac{1}{3} \div \frac{1}{2}$$
Question 20.
$$\frac{1}{8} \div \frac{1}{4}$$
Question 21.
$$\frac{2}{7} \div 2$$
Question 22.
$$\frac{6}{5} \div 3$$
Question 23.
$$\frac{2}{3} \div \frac{4}{9}$$
Question 24.
$$\frac{5}{6} \div \frac{2}{7}$$
Question 25.
$$12 \div \frac{3}{4}$$
Question 26.
$$8 \div \frac{2}{5}$$
Question 27.
$$\frac{3}{7} \div 6$$
Question 28.
$$\frac{12}{25} \div 4$$
Question 29.
$$\frac{2}{9} \div \frac{2}{3}$$
Question 30.
$$\frac{8}{15} \div \frac{4}{5}$$
Question 31.
$$\frac{1}{3} \div \frac{1}{9}$$
Question 32.
$$\frac{7}{10} \div \frac{3}{8}$$
Question 33.
$$\frac{14}{27} \div 7$$
Question 34.
$$\frac{5}{8} \div 15$$
Question 35.
$$\frac{27}{32} \div \frac{7}{8}$$
Question 36.
$$\frac{4}{15} \div \frac{10}{13}$$
Question 37.
$$9 \div \frac{4}{9}$$
Question 38.
$$10 \div \frac{5}{12}$$
YOU BE THE TEACHER
Question 39.
Question 40.
Question 41.
MP REASONING
You have $$\frac{3}{5}$$ of an apple pie. You divide the remaining pie into 5 equal slices. What portion of the original pie is each slice?
Question 42.
MP PROBLEM SOLVING
How many times longer is the baby alligator than the baby gecko?
OPEN-ENDED
Write a real-life problem for the expression. Then solve the problem.
Question 47.
Question 48.
Question 49.
MP REASONING
Without finding the quotient, copy and complete the statement using <, >, or =. Explain your reasoning.
Question 50.
Question 51.
Question 52.
Question 53.
ORDER OF OPERATIONS
Evaluate the expression. Write the answer in simplest form.
Question 54.
$$\frac{1}{6}$$ ÷ 6 ÷ 6
Question 55.
$$\frac{7}{12}$$ ÷ 14 ÷ 6
Question 56.
$$\frac{3}{5} \div \frac{4}{7} \div \frac{9}{10}$$
Question 57.
$$4 \div \frac{8}{9}-\frac{1}{2}$$
Question 58.
$$\frac{3}{4}+\frac{5}{6} \div \frac{2}{3}$$
Question 59.
$$\frac{7}{8}-\frac{3}{8} \div 9$$
Question 60.
$$\frac{9}{16} \div \frac{3}{4} \cdot \frac{2}{13}$$
Question 61.
$$\frac{3}{14} \cdot \frac{2}{5} \div \frac{6}{7}$$
Question 62.
$$\frac{10}{27} \cdot\left(\frac{3}{8} \div \frac{5}{24}\right)$$
Question 63.
MP NUMBER SENSE
When is the reciprocal of a fraction a whole number? Explain.
Question 64.
MODELING REAL LIFE
You use $$\frac{1}{8}$$ of your battery for every $$\frac{2}{5}$$ of an hour that you video chat. You use $$\frac{3}{4}$$ of your battery video chatting. How long did you video chat?
Question 65.
MP PROBLEM SOLVING
The table shows the portions of a family budget that are spent on several expenses.
Expense: Portion of Budget
Housing: $$\frac{2}{5}$$
Food: $$\frac{4}{9}$$
Automobiles: $$\frac{1}{15}$$
Recreation: $$\frac{1}{40}$$
a. How many times more is the expense for housing than for automobiles?
b. How many times more is the expense for food than for recreation?
c. The expense for automobile fuel is $$\frac{1}{60}$$ of the total expenses. What portion of the automobile expense is spent on fuel?
Question 66.
CRITICAL THINKING
A bottle of juice is $$\frac{2}{3}$$ full. The bottle contains $$\frac{4}{5}$$ of a cup of juice.
a. Write a division expression that represents the capacity of the bottle.
b. Write a related multiplication expression that represents the capacity of the bottle.
c. Explain how you can use the diagram to verify the expression in part (b).
d. Find the capacity of the bottle.
Question 67.
DIG DEEPER!
You have 6 pints of glaze. It takes $$\frac{7}{8}$$ of a pint to glaze a bowl and $$\frac{9}{16}$$ of a pint to glaze a plate.
a. How many bowls can you completely glaze? How many plates can you completely glaze?
b. You want to glaze 5 bowls, and then use the rest for plates. How many plates can you completely glaze? How much glaze will be left over?
c. How many of each object can you completely glaze so that there is no glaze left over? Explain how you found your answer.
Question 68.
MP REASONING
A water tank is $$\frac{1}{8}$$ full. The tank is $$\frac{3}{4}$$ full when 42 gallons of water are added to the tank.
a. How much water can the tank hold?
b. How much water was originally in the tank?
c. How much water is in the tank when it is $$\frac{1}{2}$$ full?
### Section 2.3 Dividing Mixed Numbers
Exploration 1
Dividing Mixed Numbers
Work with a partner. Write a real-life problem that represents each division expression described. Then solve each problem using a model. Check your answers.
a. How many three-fourths are in four and one-half?
b. How many three-eighths are in two and one-fourth?
c. How many one and one-halves are in six?
d. How many seven-sixths are in three and one-third?
e. How many one and one-fifths are in five?
f. How many three and one-halves are in two and one-half?
g. How many four and one-halves are in one and one-half?
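Each of these can be checked algebraically by rewriting the mixed numbers as improper fractions and then multiplying by the reciprocal of the divisor. A short Python sketch (not part of the textbook) for three of the parts:

```python
from fractions import Fraction

def mixed(whole, num, den):
    """Build an improper fraction from a mixed number."""
    return whole + Fraction(num, den)

# a. How many three-fourths are in four and one-half?
print(mixed(4, 1, 2) / Fraction(3, 4))   # 6
# c. How many one and one-halves are in six?
print(6 / mixed(1, 1, 2))                # 4
# g. How many four and one-halves are in one and one-half?
print(mixed(1, 1, 2) / mixed(4, 1, 2))   # 1/3
```

Part (g) illustrates why a quotient can be less than 1: the divisor is larger than the dividend.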
### Lesson 2.3 Dividing Mixed Numbers
Try It Divide. Write the answer in simplest form.
Question 1.
$$3 \frac{2}{3} \div \frac{1}{3}$$
Question 2.
$$1 \frac{3}{7} \div \frac{2}{3}$$
Question 3.
$$2 \frac{1}{6} \div \frac{3}{4}$$
Question 4.
$$6 \frac{1}{2} \div 2$$
Question 5.
$$10 \frac{2}{3} \div 2 \frac{2}{3}$$
Question 6.
$$8 \frac{1}{4} \div 1 \frac{1}{2}$$
Question 7.
$$3 \div 1 \frac{3}{4}$$
Question 8.
$$\frac{3}{4} \div 2 \frac{1}{2}$$
Try It Evaluate the expression. Write the answer in simplest form.
Question 9.
$$1 \frac{1}{2} \div \frac{1}{6}-\frac{7}{8}$$
Question 10.
$$3 \frac{1}{3} \div \frac{5}{6}+\frac{8}{9}$$
Question 11.
$$\frac{2}{5}+2 \frac{4}{5} \div 2$$
Question 12.
$$\frac{2}{5}+2 \frac{4}{5} \div 2$$
Self-Assessment for Concepts & Skills
Solve each exercise. Then rate your understanding of the success criteria in your journal.
EVALUATING EXPRESSIONS
Evaluate the expression. Write the answer in simplest form.
Question 13.
$$4 \frac{4}{7} \div \frac{4}{7}$$
Question 14.
$$\frac{1}{2} \div 5 \frac{1}{4}$$
Question 15.
$$\frac{3}{4}+6 \frac{2}{5} \div 1 \frac{3}{5}$$
Question 16.
MP NUMBER SENSE
Is $$2 \frac{1}{2} \div 1 \frac{1}{4}$$ the same as $$1 \frac{1}{4} \div 2 \frac{1}{2}$$? Use models to justify your answer.
Question 17.
DIFFERENT WORDS, SAME QUESTION
Which is different? Find “both” answers.
What is 5$$\frac{1}{2}$$ divided by $$\frac{1}{8}$$?
What is the quotient of 5$$\frac{1}{2}$$ and $$\frac{1}{8}$$?
What is 5$$\frac{1}{2}$$ times 8?
What is $$\frac{1}{8}$$ of 5$$\frac{1}{2}$$?
Self-Assessment for Problem Solving
Solve each exercise. Then rate your understanding of the success criteria in your journal.
Question 18.
A water cooler contains 160 cups of water. During practice, each person on a team fills a water bottle with 3$$\frac{1}{3}$$ cups of water from the cooler. Is there enough water for all 45 people on the team to fill their water bottles? Explain.
Question 19.
A cyclist is 7$$\frac{3}{4}$$ kilometers from the finish line of a race. The cyclist rides at a rate of 25$$\frac{5}{6}$$ kilometers per hour. How many minutes will it take the cyclist to finish the race?
### Dividing Mixed Numbers Practice 2.3
Review & Refresh
Divide. Write the answer in simplest form.
Question 1.
$$\frac{1}{8} \div \frac{1}{7}$$
Question 2.
$$\frac{7}{9} \div \frac{2}{3}$$
Question 3.
$$\frac{5}{6}$$ ÷ 10
Question 4.
12 ÷ $$\frac{3}{8}$$
Find the LCM of the numbers.
Question 5.
8, 14
Question 6.
9, 11, 12
Question 7.
12, 27, 30
Find the volume of the rectangular prism.
Question 8.
Question 9.
Question 10.
Question 11.
Which number is not a prime factor of 286?
A. 2
B. 7
C. 11
D. 13
Concepts, Skills & Problem Solving
MP CHOOSE TOOLS
Write a real-life problem that represents the division expression described. Then solve the problem using a model. Check your answer algebraically. (See Exploration 1, Page.No 61.)
Question 12.
How many two-thirds are in three and one-third?
Question 13.
How many one and one-sixths are in five and five-sixths?
Question 14.
How many two and one-halves are in eight and three-fourths?
DIVIDING WITH MIXED NUMBERS
Divide. Write the answer in simplest form.
Question 15.
$$2 \frac{1}{4} \div \frac{3}{4}$$
Question 16.
$$3 \frac{4}{5} \div \frac{2}{5}$$
Question 17.
$$8 \frac{1}{8} \div \frac{5}{6}$$
Question 18.
$$7 \frac{5}{9} \div \frac{4}{7}$$
Question 19.
$$7 \frac{1}{2} \div 1 \frac{9}{10}$$
Question 20.
$$3 \frac{3}{4} \div 2 \frac{1}{12}$$
Question 21.
$$7 \frac{1}{5} \div 8$$
Question 22.
$$8 \frac{4}{7} \div 15$$
Question 23.
$$8 \frac{1}{3} \div \frac{2}{3}$$
Question 24.
$$9 \frac{1}{6} \div \frac{5}{6}$$
Question 25.
$$13 \div 10 \frac{5}{6}$$
Question 26.
$$12 \div 5 \frac{9}{11}$$
Question 27.
$$\frac{7}{8} \div 3 \frac{1}{16}$$
Question 28.
$$\frac{4}{9} \div 1 \frac{7}{15}$$
Question 29.
$$4 \frac{5}{16} \div 3 \frac{3}{8}$$
Question 30.
$$6 \frac{2}{9} \div 5 \frac{5}{6}$$
Question 31.
YOU BE THE TEACHER
Your friend finds the quotient of 3$$\frac{1}{2}$$ and 1$$\frac{2}{3}$$. Is your friend correct? Explain your reasoning.
Question 32.
MP PROBLEM SOLVING
A platinum nugget weighs 3$$\frac{1}{2}$$ ounces. How many $$\frac{1}{4}$$-ounce pieces can be cut from the nugget?
ORDER OF OPERATIONS
Evaluate the expression. Write the answer in simplest form.
Question 33.
$$3 \div 1 \frac{1}{5}+\frac{1}{2}$$
Question 34.
$$4 \frac{2}{3}-1 \frac{1}{3} \div 2$$
Question 35.
$$\frac{2}{5}+2 \frac{1}{6} \div \frac{5}{6}$$
Question 36.
$$5 \frac{5}{6} \div 3 \frac{3}{4}-\frac{2}{9}$$
Question 37.
$$6 \frac{1}{2}-\frac{7}{8} \div 5 \frac{11}{16}$$
Question 38.
$$9 \frac{1}{6} \div 5+3 \frac{1}{3}$$
Question 39.
$$3 \frac{3}{5}+4 \frac{4}{15} \div \frac{4}{9}$$
Question 40.
$$\frac{3}{5} \times \frac{7}{12} \div 2 \frac{7}{10}$$
Question 41.
$$4 \frac{3}{8} \div \frac{3}{4} \cdot \frac{4}{7}$$
Question 42.
$$1 \frac{9}{11} \times 4 \frac{7}{12} \div \frac{2}{3}$$
Question 43.
$$3 \frac{4}{15} \div\left(8 \cdot 6 \frac{3}{10}\right)$$
Question 44.
$$2 \frac{5}{14} \div\left(2 \frac{5}{8} \times 1 \frac{3}{7}\right)$$
Question 45.
MP LOGIC
Your friend uses the model shown to state that $$2 \frac{1}{2} \div 1 \frac{1}{6}=2 \frac{1}{6}$$. Is your friend correct? Justify your answer using the model.
Question 46.
MODELING REAL LIFE
A bag contains 42 cups of dog food. Your dog eats 2$$\frac{1}{3}$$ cups of dog food each day. Is there enough food to last 3 weeks? Explain.
You have 12 cups of granola and 8$$\frac{1}{2}$$ cups of peanuts to make trail mix. What is the greatest number of full batches of trail mix you can make? Explain how you found your answer.
|
{}
|
# Hydrogen as a diesel engine additive
Discussion in 'Architecture & Engineering' started by kingcarrot, Feb 25, 2012.
1. ### Aqueous Id, flat Earth skeptic, Valued Senior Member
Messages:
6,152
Wow, no kidding. This is why I think the mfrs are not pushing the envelope, presumably due to safety and reliability.
As for high load, that's what I was just beginning to understand, after seeing a chart that shows no appreciable performance change until right at peak, where an additional 15% hydrogen adds a boost, for about an 8% improvement in efficiency. I've always thought it was a scam, but now I'm seeing a small window for some kind of claim. And yes, large-volume gas production costs way too much electricity, so you would want to build it up and store it. That would seem to contradict the mileage claims, though, because you'd be robbing Peter to pay Paul.
3. ### Grumpy, Curmudgeon of Lucidity, Valued Senior Member
Messages:
1,876
kingcarrot
As to the HHO systems, I would pass; there are better, more reliable, and safer ways of doing this. Your stored HHO is a time bomb that could be set off by static electricity, it takes a large amount of unrecoverable energy to produce, and it seems it would have limited utility. Propane gives you the same types of benefits, and you can carry a quite large supply. You'll need a common-rail electronic injection motor and a Bully Dog (or similar) reprogrammer to take full advantage, plus you may want to up the springs in the turbo waste gate for more boost. With a stock turbo you'll probably be limited to about 300-350 hp, but torque will be close to 600 ft/lbs, great for towing. Go even further and be prepared to pull out your wallet and empty it on the parts counter, as you will need a bigger turbo and a stronger drivetrain (the Allison can be built to handle more torque, but it's not cheap), but at those levels you WILL twist your driveshafts into pretzels if you don't blow the chunk out of the rear axle first. Even if they live, the axle shafts are on borrowed time. There are heavier-duty parts available, but you would think they were made of some sort of gold alloy for what they cost. But you would end up with a snorting monster of a truck that would still get decent mileage as long as you control your right foot.
Grumpy
Messages:
92
numbers yet?
Messages:
92
Numbers?
8. ### adoucette, Caca Occurs, Valued Senior Member
Messages:
7,829
Sure.
But have you considered how much H2 that would be?
As has been pointed out, a Semi gets about 6 miles per gallon of Diesel.
So, while tooling along at 60 mph, it's burning 10 gallons per hour.
Do you really think you can produce the equiv of 1/2 gallon of liquid H2 per hour with the energy put out by an alternator?
9. ### Trippy, ALEA IACTA EST, Staff Member
Messages:
10,890
^ New best friend.
10. ### wlminex, Banned
Messages:
1,587
How soluble is hydrogen in hydrocarbon fuels? The point being: one might inject pressurized hydrogen into the hydrocarbon fuel just prior to intake and ignition.
11. ### Grumpy, Curmudgeon of Lucidity, Valued Senior Member
Messages:
1,876
Trippy
Are you a Ford man, too? It's a disease that has no cure, wherein you KNOW that the Ford is always going to win, despite any so-called evidence to the contrary, as the other cars must be cheating!
Grumpy
12. ### spidergoat, Valued Senior Member
Messages:
51,927
grumpy, I hope you relocated the gas tank. I wanna turbo my 2001 Focus!
13. ### Trippy, ALEA IACTA EST, Staff Member
Messages:
10,890
Yeah, most days. I don't talk too loudly about it because I'm surrounded by Holden fanatics.
My first car was supposed to be a Ford Anglia, but I pissed away most of the money that I was supposed to use to restore it to a useable condition.
Having said that, the only way I could fit into it was to take the front seat off its rails and have it sitting hard against the back seat.
That and I just really like the idea of what you're doing. I have a number of ideas that I would love to try out, but lack the capital to invest in them, and if I'm being honest with myself, although I thoroughly understand the theoretical aspects, I was never given the opportunity to learn how to apply them.
14. ### Grumpy, Curmudgeon of Lucidity, Valued Senior Member
Messages:
1,876
spidergoat
Fuel cell for sure, one with a bladder and foam. About $180 from Summit, but good insurance. They actually changed the tanks in 1974, and statistics from the time indicate that Pintos had fiery accidents at rates no different from Impalas and Chevy trucks, and only slightly elevated from other vehicles. The stink was caused by callous interoffice memos that said it would have only required about $12 each to fix the problems (filler pipe going further into the tank and a shield between the tank and the rear axle), but Ford management nixed it due to the desire for profits. A shame, since the Pinto was a decade-long success that sold in numbers beaten only by the Model T, Citroën 2CV, and the Volkswagen. I had a '71 (1600, drum brakes) and a 1980 wagon, one of the first and one of the last. This one is my 6th. It is a roomy small car (for two, anyway) that handles very well, and cute and rare in this day (everybody's got Mustangs; no one has a Pinto at all the car shows I go to). Plus it will hold almost every Ford engine made...
That is a 526 ci Boss 429
And that is a 427 SOHC with a blower on top. Wish they were mine!
This is what mine will look more like(that's a MGB with an SHO)
Grumpy
15. ### spidergoat, Valued Senior Member
Messages:
51,927
Wow, sweet! I hear you can get a V8 engine kit for the focus too, with rear wheel drive. But I'm sure it's easier in a Pinto.
16. ### Grumpy, Curmudgeon of Lucidity, Valued Senior Member
Messages:
1,876
spidergoat
What year Focus? Zetec or Duratec?
Either can be turboed to close to 300 hp, but you really don't want more than that going through the front wheels without a Torsen or Wavetrac limited slip. Wish we had the European Focus with four-wheel drive in the States.
Grumpy
17. ### spidergoat, Valued Senior Member
Messages:
51,927
Zetec, and I would be happy with 180-200! But I don't know enough about it...
18. ### Aqueous Id, flat Earth skeptic, Valued Senior Member
Messages:
6,152
Sorry, I wandered off.
Here I will pick up where I left off. The question now is: how to generate enough H[sub]2[/sub] to provide a mix with diesel, the intent being to bump the efficiency, not to gain something from nothing, as I first understood the claim.
Adoucette hit the nail on the head: how to generate 1/2 gal of H[sub]2[/sub]. That may not even be sufficient. One source I found claims you need a 15% mixture to achieve 8% mileage improvement, so I will assume that is the design requirement. This bumps us up to 1.5 gal/hr hydrogen generating capacity. That's not gas, that's how much water must be electrolyzed! It's hefty, as we will see.
6 mpg × 1609 m/mi ÷ 4.54609 L/gal ≈ 2123.6 m/L
60 mph × 1609 m/mi ÷ 3600 sec/hr ≈ 26.8167 m/s
diesel burn rate: 26.8167 m/s ÷ 2123.6 m/L ≈ 0.012628 L/s
rate of electrolysis: 15% of 0.012628 L/s ≈ 0.001894 L/s (liquid water)
hourly water consumption: 0.001894 L/s × 3600 s/hr ÷ 4.54609 L/gal = 1.5 gal
Now for the electrolysis:
0.001894 L/s × 55.6 mol/L = 0.105318 mol/s
Two electrons are needed per molecule of H[sub]2[/sub]O to liberate H[sub]2[/sub]:
0.105318 mol∙H[sub]2[/sub]O/s × 2e[sup]-[/sup]/H[sub]2[/sub]O = 0.210636 mol∙e[sup]-[/sup]/s
0.210636 mol∙e[sup]-[/sup]/s × 96,485 C/mol∙e[sup]-[/sup] = 20323 C/s
20323 C/s = 20323 A.
Now the question is: where do we get 20kA of electricity?
20323 A × 1.23 V = 25 kW
The maximum available power from the alternator is
140 A × 13 V = 1820 W.
Obviously this is impossible. I have ignored the problem of sending 20 kA down any conductor without huge ohmic loss, I've assumed perfect efficiency in the wet cell, which is impossible, and I've ignored size of the plates and deterioration by plating out the electrode material.
My next move is to try to rethink what the claim is really saying. Perhaps they were talking about a 15% mixture by volume of gas, not liquid. If so, we can divide the result by 22, since a volume of gas has 1/22 the moles of the equivalent liquid volume:
25 kW ÷ 22 = 1136 W
This places us within reasonable operating capacity of the alternator, but efficiency of the wet cell will steal that gain. For example, if we assume 50% efficiency:
1136 W × 2 = 2273 W
but we can only produce at most
1820 W ÷ 2273 W = 80%
Therefore we need to reduce our assumption of 15% mixture down to
15% × 80% = 12%
I don't know what happens when we do that. Now let's see how we're doing with production of current:
20323 A × 2 ÷ 22 = 1848 A
Still huge. But this was assuming a single cell. Now suppose we break them into a series. Ten cells in series gives:
1.23 V × 10 = 12.3 V
1848 A ÷ 10 = 185 A
But we can only produce 140 A, so we must derate our mixture further:
140 A ÷ 185 A = 75.7 %
12% × 75.7% = 9%
so this would seem to be the limit on mixture.
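Not from the original thread, but the chain of unit conversions above can be checked in a few lines of Python (constants as used in the post: imperial gallons, 55.6 mol/L for water, and the 1.23 V ideal cell potential):

```python
FARADAY = 96485.0      # C per mole of electrons
L_PER_GAL = 4.54609    # litres per imperial gallon, as used in the post
M_PER_MILE = 1609.0

fuel_economy = 6 * M_PER_MILE / L_PER_GAL   # metres per litre of diesel at 6 mpg
speed = 60 * M_PER_MILE / 3600              # 60 mph in m/s
diesel_rate = speed / fuel_economy          # litres of diesel burned per second
water_rate = 0.15 * diesel_rate             # 15% mixture: litres of water per second
moles_water = water_rate * 55.6             # mol H2O per second
current = moles_water * 2 * FARADAY         # 2 mol e- per mol H2O -> amperes
power = current * 1.23                      # watts at the ideal cell voltage

assert abs(current - 20323) < 50            # matches the 20323 A in the post
assert abs(power - 25000) < 200             # matches the ~25 kW in the post
```

Against the 140 A × 13 V = 1820 W alternator ceiling, the shortfall is then obvious.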
Last but not least is to look at plate design. Consider what happens when you pass current through an electrode:
One of the facts missing in the fine print is that you are consuming the electrodes, not just water. That can get expensive, unless you have a cheap supply. I'm going to go off and look at plate design and come back to this later.
I still don't know if I am anywhere close to what the manufacturers are claiming. I would need to see their numbers.
Last edited: Feb 28, 2012
19. ### Emil, Valued Senior Member
Oops ... I did not notice this thread !
How about a new engine design? (not a four-stroke engine)
# Tips on coding style
Published on February 28, 2012
Good programmers know that writing code is more than just… writing code. It’s more than writing efficient code… It’s also about writing good code with respect to the people who are going to read and/or use that code. This is especially true in open source communities, where potentially hundreds of people could be looking at your code. You have to write code that can be easily read and used by others. And to do that, you need some sort of standard for writing code. This is where the idea of coding styles comes in.
Every software project has its (hopefully properly defined) coding style. It can depend a lot on the programming language that the project uses. The style can specify the indentation, the variable naming, the use of spaces or the use of curly braces.
For example, the Linux Kernel has its coding style well defined in the Documentation pages. It is based on the Kernighan & Ritchie (K&R) style, the Linux Kernel being written in C. This is a very popular coding style with several projects using it, sometimes considered the de facto coding style for C.
If you want to check whether your code follows the coding style of Linux, you can use checkpatch.pl. This script can be found in the source code of the Linux Kernel in the scripts directory. It is mainly used for checking patches submitted for Linux, but it can be used on normal C source files using the -f parameter. You need to clone the Linux tree to get the script, and you need to run it from the root of the tree.
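If you only want a rough whitespace check without cloning the whole kernel tree, a crude stand-in (my own sketch, not checkpatch) can be put together with grep:

```shell
# Write a small file with two style problems, then flag them.
printf 'int main(void) \n{\n    int i;\n\treturn 0;\n}\n' > ws.c
grep -n ' $' ws.c        # trailing whitespace at end of line
grep -n '^    ' ws.c     # spaces at the start of a line (kernel style wants tabs)
```

Real checkpatch checks far more than whitespace, so treat this only as a quick filter.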
Here is an example of badly written code:
1
2 int main(void)···
3 {
4    int i,a;···
5 » »
6    for(i=0;i<10;i++)
7       a=i;
8    //this code is useless
9    if(a=i){
10    return 1;
11    }
12
13    return 0;
14 }·····
Note that the · character would represent a space and » would represent a tab. Spaces would represent… spaces.
And this is what checkpatch would report:
alexj@ixmint ~/linux $ scripts/checkpatch.pl -f bad.c
ERROR: trailing whitespace
#2: FILE: bad.c:2:
+int main(void)   $

ERROR: trailing whitespace
#4: FILE: bad.c:4:
+   int i,a;   $

WARNING: please, no spaces at the start of a line
#4: FILE: bad.c:4:
+   int i,a;$

ERROR: space required after that ',' (ctx:VxV)
#4: FILE: bad.c:4:
+   int i,a;
         ^

ERROR: trailing whitespace
#5: FILE: bad.c:5:
+^I^I$

WARNING: please, no spaces at the start of a line
#6: FILE: bad.c:6:
+   for(i=0;i<10;i++)$

WARNING: suspect code indent for conditional statements (3, 6)
#6: FILE: bad.c:6:
+   for(i=0;i<10;i++)
+      a=i;

ERROR: spaces required around that '=' (ctx:VxV)
#6: FILE: bad.c:6:
+   for(i=0;i<10;i++)
        ^

ERROR: space required after that ';' (ctx:VxV)
#6: FILE: bad.c:6:
+   for(i=0;i<10;i++)
           ^

ERROR: spaces required around that '<' (ctx:VxV)
#6: FILE: bad.c:6:
+   for(i=0;i<10;i++)
             ^

ERROR: space required after that ';' (ctx:VxV)
#6: FILE: bad.c:6:
+   for(i=0;i<10;i++)
                ^

ERROR: space required before the open parenthesis '('
#6: FILE: bad.c:6:
+   for(i=0;i<10;i++)

WARNING: please, no spaces at the start of a line
#7: FILE: bad.c:7:
+      a=i;$

ERROR: spaces required around that '=' (ctx:VxV)
#7: FILE: bad.c:7:
+      a=i;
        ^

WARNING: please, no spaces at the start of a line
#8: FILE: bad.c:8:
+   //this code is useless$

ERROR: do not use C99 // comments
#8: FILE: bad.c:8:
+   //this code is useless

WARNING: please, no spaces at the start of a line
#9: FILE: bad.c:9:
+   if(a=i){$

WARNING: suspect code indent for conditional statements (3, 3)
#9: FILE: bad.c:9:
+   if(a=i){
+   return 1;

ERROR: spaces required around that '=' (ctx:VxV)
#9: FILE: bad.c:9:
+   if(a=i){
       ^

ERROR: space required before the open brace '{'
#9: FILE: bad.c:9:
+   if(a=i){

ERROR: space required before the open parenthesis '('
#9: FILE: bad.c:9:
+   if(a=i){

ERROR: do not use assignment in if condition
#9: FILE: bad.c:9:
+   if(a=i){

WARNING: braces {} are not necessary for single statement blocks
#9: FILE: bad.c:9:
+   if(a=i){
+   return 1;
+   }

WARNING: please, no spaces at the start of a line
#10: FILE: bad.c:10:
+   return 1;$

WARNING: please, no spaces at the start of a line
#11: FILE: bad.c:11:
+   }$

WARNING: please, no spaces at the start of a line
#13: FILE: bad.c:13:
+   return 0;$

ERROR: trailing whitespace
#14: FILE: bad.c:14:
+}     $

total: 16 errors, 11 warnings, 14 lines checked

NOTE: whitespace errors detected, you may wish to use scripts/cleanpatch or
scripts/cleanfile

bad.c has style problems, please review.
Most of the errors are regarding whitespaces, space or tab characters that shouldn’t be there. It’s hard to spot spaces or tabs because they are invisible. But a good tip is to make them visible in your editor. Visually replacing characters will not modify the source (spaces will still be spaces) but they will pop up in your editor so you know to delete them. For example, in vi you can use this (credits to Vlad Dogaru for it):
set list listchars=tab:»\ ,trail:·,extends:»,precedes:«
Other warnings come from the fact that indentation was made with 3 spaces and not 8. Tabs and spaces should be used consistently. For example, you can set in vi the ‘width’ of a tab with:
:set tabstop=8
There are places where you don’t want spaces, but there are situations where you do want them. You should leave a space after keywords like if or for and around operators like =. Doing this makes the code a lot more readable.
Curly braces should be used, but only when needed. If an if has only one instruction to be executed on the branch, it is pointless to have braces enclosing it. Indentation is enough to mark the instruction.
Comment types are a delicate subject. The classic C specification only allows /* */ block comments. C99 allows // as one line comments. Some coding styles (like the Linux coding style) don’t allow C99 comments.
And this is how the code should look with proper coding style:
1 int main(void)
2 {
3 » int i, a;
4
5 » for (i = 0; i < 10; i++)
6 » » a = i;
7 » /* This code is useless */
8 » if (a == i)
9 » » return 1;
10
11 » return 0;
12 }
Other programming languages have similar coding guidelines. For Python, there is PEP 8, as dictated by the creator of Python himself.
But we should always keep in mind that there is no One True Coding Style. Like all great debates, everybody can argue that one is better than another. What is important, and what (almost) everybody agrees on, is consistency within a project in the code the community writes.
# Pauli Matrices and the Bloch Sphere¶
In [1]:
from qiskit import *
from qiskit.visualization import plot_bloch_vector
In this section we'll further develop the topics introduced in the last section, and introduce a useful visualization of single-qubit states.
### Pauli matrices¶
Wherever there are vectors, matrices are not far behind. The three important matrices for qubits are known as the Pauli matrices.
$$X= \begin{pmatrix} 0&1 \\\\ 1&0 \end{pmatrix}\\\\ Y= \begin{pmatrix} 0&-i \\\\ i&0 \end{pmatrix}\\\\ Z= \begin{pmatrix} 1&0 \\\\ 0&-1 \end{pmatrix}$$
These have many useful properties, as well as a deep connection to the x, y and z measurements. Specifically, we can use them to calculate the three quantities used in the last section: $$\langle a | X | a\rangle = p^x_0 (|a\rangle)-p^x_1(|a\rangle),\\\\ \langle a | Y | a\rangle = p^y_0 (|a\rangle)-p^y_1(|a\rangle),\\\\ \langle a | Z | a\rangle = p^z_0 (|a\rangle)-p^z_1(|a\rangle).$$
These quantities are known as the expectation values of the three matrices. In calculating them, we make use of standard matrix multiplication.
Typically, we prefer to use a more compact notation for the quantities above. Since we usually know what state we are talking about in any given situation, we don't explicitly write it in. This allows us to write $\langle X \rangle = \langle a|X|a \rangle$, etc. Our statement from the last section, regarding the conservation of certainty for an isolated qubit, can then be written
$$\langle X \rangle^2 + \langle Y \rangle^2 + \langle Z \rangle^2 = 1.$$
To calculate these values in Qiskit, we first need a single qubit circuit to analyze.
In [2]:
qc = QuantumCircuit(1)
Then we need to define the x, y and z measurements.
In [3]:
# z measurement of qubit 0
measure_z = QuantumCircuit(1,1)
measure_z.measure(0,0);
# x measurement of qubit 0
measure_x = QuantumCircuit(1,1)
measure_x.h(0)
measure_x.measure(0,0)
# y measurement of qubit 0
measure_y = QuantumCircuit(1,1)
measure_y.sdg(0)
measure_y.h(0)
measure_y.measure(0,0);
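As an aside (not in the original notebook), the reason these circuits work can be checked with plain NumPy: conjugating Z by the pre-measurement unitaries reproduces X and Y.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
S = np.array([[1, 0], [0, 1j]])                # phase gate; sdg is its adjoint
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# h before a z measurement turns it into an x measurement: H Z H = X
assert np.allclose(H @ Z @ H, X)

# sdg then h before a z measurement turns it into a y measurement:
# with U = H S†, measuring Z on U|a> is measuring U† Z U = Y on |a>
U = H @ S.conj().T
assert np.allclose(U.conj().T @ Z @ U, Y)
```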
Finally we can run the circuit with each kind of measurement, calculate the probabilities and use them to determine $\langle X \rangle$, $\langle Y \rangle$ and $\langle Z \rangle$. This requires a process largely similar to the one used in the last section to calculate total certainty.
Here we place the results in a list called bloch_vector, for which bloch_vector[0] is $\langle X \rangle$, bloch_vector[1] is $\langle Y \rangle$ and bloch_vector[2] is $\langle Z \rangle$.
In [4]:
shots = 2**14 # number of samples used for statistics
bloch_vector = []
for measure_circuit in [measure_x, measure_y, measure_z]:
# run the circuit with the selected measurement and get the number of samples that output each bit value
counts = execute(qc+measure_circuit,Aer.get_backend('qasm_simulator'),shots=shots).result().get_counts()
# calculate the probabilities for each bit value
probs = {}
for output in ['0','1']:
if output in counts:
probs[output] = counts[output]/shots
else:
probs[output] = 0
bloch_vector.append( probs['0'] - probs['1'] )
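As a sanity check (not part of the original notebook), the same three expectation values can be computed without sampling, directly from a statevector with NumPy:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expectation(state, op):
    # <a|op|a> for a normalized single-qubit statevector
    return float(np.real(np.conj(state) @ op @ state))

state = np.array([1, 0], dtype=complex)   # |0>, matching the empty circuit qc
bloch = [expectation(state, P) for P in (X, Y, Z)]

# |0> sits at the north pole: <X> = <Y> = 0, <Z> = 1
assert np.allclose(bloch, [0, 0, 1])
assert abs(sum(v**2 for v in bloch) - 1) < 1e-12
```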
### The Bloch sphere¶
Let's take a moment to think a little about the numbers $\langle X \rangle$, $\langle Y \rangle$ and $\langle Z \rangle$. Though their values depend on what state our qubit is in, they are always constrained to be no larger than 1, and no smaller than -1. They also collectively obey the condition $\langle X \rangle^2 + \langle Y \rangle^2 + \langle Z \rangle^2 = 1$.
The same properties are also shared by another set of three numbers that we know from a completely different context. To see what they are, first consider a sphere. For this, we can describe every point on the surface in terms of its x, y and z coordinates. We'll place the origin of our coordinate system at the center of the sphere. The coordinates are then constrained by the radius in both directions: they can be no greater than $r$ , and no less than $-r$ . For simplicity, let's set the radius to be $r=1$.
For any point, the distance from the center of the sphere can be determined by the 3D version of Pythagoras' theorem: it is $\sqrt{x^2 + y^2 + z^2}$. For points on the surface, this distance is always 1.
So now we have three numbers that can each be no greater than 1, no less than -1, and for which the sum of the squares is always 1. All exactly the same as $\langle X \rangle$, $\langle Y \rangle$ and $\langle Z \rangle$. They even have pretty much the same names as these values.
Because of this correspondence, we can apply all our existing knowledge and intuition about balls to our understanding of qubits. Specifically, we can visualize any single-qubit state as a point on the surface of a sphere. We call this the Bloch sphere.
In [5]:
plot_bloch_vector( bloch_vector )
Out[5]:
We usually associate $|0\rangle$ with the north pole, $|1\rangle$ with the south, and the states for the x and y measurements around the equator. Any pair of orthogonal states correspond to diametrically opposite points on this sphere.
As we'll see in future sections, the Bloch sphere makes it easier to understand single-qubit operations. Each moves points around on the surface of the sphere, and so can be interpreted as a simple rotation.
# On the accuracy and running time of GSAT
Randomized algorithms for deciding satisfiability were shown to be effective in solving problems with thousands of variables. However, these algorithms are not complete. That is, they provide no guarantee that a satisfying assignment, if one exists, will be found. Thus, when studying randomized algorithms, there are two important characteristics that need to be considered: the running time and, even more importantly, the accuracy --- a measure of likelihood that a satisfying assignment will be found, provided one exists. In fact, we argue that without a reference to the accuracy, the notion of the running time for randomized algorithms is not well-defined. In this paper, we introduce a formal notion of accuracy. We use it to define a concept of the running time. We use both notions to study the random walk strategy GSAT algorithm. We investigate the dependence of accuracy on properties of input formulas such as clause-to-variable ratio and the number of satisfying assignments. We demonstrate that the running time of GSAT grows exponentially in the number of variables of the input formula for randomly generated 3-CNF formulas and for the formulas encoding 3- and 4-colorability of graphs.
## 1 Introduction
The problem of deciding satisfiability of a boolean formula is extensively studied in computer science. It appears prominently, as a prototypical NP-complete problem, in the investigations of computational complexity classes. It is studied by the automated theorem proving community. It is also of substantial interest to the AI community due to its applications in several areas including knowledge representation, diagnosis and planning.
Deciding satisfiability of a boolean formula is an NP-complete problem. Thus, it is unlikely that sound and complete algorithms running in polynomial time exist. However, recent years brought several significant advances. First, fast (although, clearly, still exponential in the worst case) implementations of the celebrated Davis-Putnam procedure [DP60] were found. These implementations are able to determine in a matter of seconds the satisfiability of critically constrained CNF formulas with 300 variables and thousands of clauses [DABC96]. Second, several fast randomized algorithms were proposed and thoroughly studied [SLM92, SKC96, SK93, MSG97, Spe96]. These algorithms randomly generate valuations and then apply some local improvement method in an attempt to reach a satisfying assignment. They are often very fast but they provide no guarantee that, given a satisfiable formula, a satisfying assignment will be found. That is, randomized algorithms, while often fast, are not complete. Still, they were shown to be quite effective and solved several practical large-scale satisfiability problems [KS92].
One of the most extensively studied randomized algorithms recently is GSAT [SLM92]. GSAT was shown to outperform the Davis-Putnam procedure on randomly generated 3-CNF formulas from the crossover region [SLM92]. However, GSAT’s performance on structured formulas (encoding coloring and planning problems) was poorer [SKC96, SK93, SKC94]. The basic GSAT algorithm would often become trapped within local minima and never reach a solution. To remedy this, several strategies for escaping from local minima were added to GSAT yielding its variants: GSAT with averaging, GSAT with clause weighting, GSAT with random walk strategy (RWS-GSAT), among others [SK93, SKC94]. GSAT with random walk strategy was shown to perform especially well. These studies, while conducted on a wide range of classes of formulas, rarely address a critical issue of the likelihood that GSAT will find a satisfying assignment, if one exists, and the running time is studied without a reference to this likelihood. Notable exceptions are [Spe96], where RWS-GSAT is compared with a simulated annealing algorithm SASAT, and [MSG97], where RWS-GSAT is compared to a tabu search method.
In this paper, we propose a systematic approach for studying the quality of randomized algorithms. To this end, we introduce the concepts of the accuracy and of the running time relative to the accuracy. The accuracy measures how likely it is that a randomized algorithm finds a satisfying assignment, assuming that the input formula is satisfiable. It is clear that the accuracy of GSAT (and any other similar randomized algorithm) grows as a function of time — the longer we let the algorithm run, the better the chance that it will find a satisfying valuation (if one exists). In this paper, we present experimental results that allow us to quantify this intuition and get insights into the rate of growth of the accuracy.
The notion of the running time of a randomized algorithm has not been rigorously studied. First, in most cases, a randomized algorithm has its running time determined by the choice of parameters that specify the number of random guesses, the number of random steps in a local improvement process, etc. Second, in practical applications, randomized algorithms are often used in an interactive way. The algorithm is allowed to run until it finds a solution or the user decides not to wait any more, stops the execution, modifies the parameters of the algorithm or modifies the problem, and tries again. Finally, since randomized algorithms are not complete, they may make errors by not finding satisfying assignments when such assignments exist. Algorithms that are faster may be less accurate and the trade-off must be taken into consideration [Spe96].
It all points to the problems that arise when attempting to systematically study the running times of randomized algorithms and extrapolate their asymptotic behavior. In this paper, we define the concept of a running time relative to the accuracy. The relative running time is, intuitively, the time needed by a randomized algorithm to guarantee a postulated accuracy. We show in the paper that the relative running time is a useful performance measure for randomized satisfiability testing algorithms. In particular, we show that the running time of GSAT relative to a prescribed accuracy grows exponentially with the size of the problem.
Related work, where the emphasis has been on fine-tuning parameter settings [PW96, GW95], has shown somewhat different results in regard to the increase in time as the size of the problems grows. The growth shown in [PW96] is the retrospective variation of maxflips rather than the total number of flips. The numbers of variables for the randomized 3-CNF instances reported in [GW95] are smaller; although our results are also limited by the ability of complete algorithms to determine satisfiable instances, we have results for 400-variable instances in the crossover region. The focus in our work is on maintaining accuracy as the size of the problems increases.
Second, we study the dependence of the accuracy and the relative running time on the number of satisfying assignments that the input formula admits. Intuitively, the more satisfying assignments the input formula has, the better the chance that a randomized algorithm finds one of them, and the shorter the time needed to do so. Again, our results quantify these intuitions. We show that the performance of GSAT increases exponentially with the growth in the number of satisfying assignments.
These results have interesting implications for the problem of constructing sets of test cases for experimenting with satisfiability algorithms. It is now commonly accepted that random $k$-CNF formulas from the cross-over region are “difficult” from the point of view of deciding their satisfiability. Consequently, they are good candidates for testing satisfiability algorithms. These claims are based on the studies of the performance of the Davis-Putnam procedure. Indeed, on average, it takes the most time to decide satisfiability of CNF formulas randomly generated from the cross-over region. However, the suitability of formulas generated randomly from the cross-over region for the studies of the performance of randomized algorithms is less clear. Our results indicate that the performance of randomized algorithms critically depends on the number of satisfying assignments and much less on the density of the problem. Both under-constrained and over-constrained problems with a small number of satisfying assignments turn out to be hard for randomized algorithms. At the same time, the Davis-Putnam procedure, while sensitive to the density, is quite robust with respect to the number of satisfying truth assignments.
On the other hand, there are classes of problems that are “easy” for the Davis-Putnam procedure. For instance, the Davis-Putnam procedure is very effective in finding 3-colorings of graphs from special classes such as 2-trees (see Section 4 for definitions). Thus, they are not appropriate benchmarks for Davis-Putnam type algorithms. However, a common intuition is that structured problems are “hard” for randomized algorithms [SKC96, SK93, SKC94]. In this paper we study this claim for the formulas that encode the 3- and 4-coloring problems for 2-trees. We show that GSAT’s running time relative to a given accuracy grows exponentially with the size of a graph. This provides formal evidence for the “hardness” claim for this class of problems and implies that, while not useful in the studies of complete algorithms such as the Davis-Putnam method, they are excellent benchmarks for studying the performance of randomized algorithms.
The main contribution of our paper is not as much a discovery of an unexpected behavior of randomized algorithms for testing satisfiability as it is a proposed methodology for studying them. Our concepts of the accuracy and the relative running time allow us to quantify claims that are often accepted on the basis of intuitive arguments but have not been formally pinpointed.
In the paper, we apply our approach to the algorithm RWS-GSAT from [SK93, SKC94]. This algorithm is commonly regarded as one of the best randomized algorithms for satisfiability testing to date. For our experiments we used walksat version 35 downloaded from ftp.research.att.com/dist/ai and run on a SPARC Station 20.
## 2 Accuracy and running time
In this section, we will formally introduce the notion of the accuracy of a randomized algorithm. We will then define the concept of the running time relative to accuracy.
Let $\mathcal{F}$ be a finite set of satisfiable CNF formulas and let $P$ be a probability distribution defined on $\mathcal{F}$. Let $A$ be a sound algorithm (randomized or not) to test satisfiability. By the accuracy of $A$ (relative to the probability space $(\mathcal{F}, P)$), we mean the probability that $A$ finds a satisfying assignment for a formula generated from $\mathcal{F}$ according to the distribution $P$. Clearly, the accuracy of complete algorithms (for all possible spaces of satisfiable formulas) is 1 and, intuitively, the higher the accuracy, the more “complete” the algorithm is for the space $(\mathcal{F}, P)$.
When studying and comparing randomized algorithms that are not complete, accuracy seems to be an important characteristic. It needs to be taken into account in addition to the running time. Clearly, very fast algorithms that often return no satisfying assignments, even if they exist, are not satisfactory. In fact, most of the work on developing better randomized algorithms can be viewed as aimed at increasing the accuracy of these algorithms. Despite this, the accuracy is rarely explicitly mentioned and studied (see [Spe96, MSG97]).
We will now propose an approach through which the running times of randomized satisfiability testing algorithms can be compared. We will restrict our considerations to the class of randomized algorithms designed according to the following general pattern. These algorithms consist of a series of tries. In each try, a truth assignment is randomly generated. This truth assignment is then subject to a series of local improvement steps aimed at, eventually, reaching a satisfying assignment. The maximum number of tries the algorithm will attempt and the length of each try are the parameters of the algorithm. They are usually specified by the user. We will denote by $MT$ the maximum number of tries and by $MF$ the maximum number of local improvement steps per try. Algorithms designed according to this pattern differ, besides possible differences in the values $MT$ and $MF$, in the specific definition of the local improvement process. A class of algorithms of this structure is quite wide and contains, in particular, the GSAT family of algorithms, as well as algorithms based on the simulated annealing approach.
Let $A$ be a randomized algorithm falling into the class described above. Clearly, its average running time on instances from the space $(\mathcal{F}, P)$ of satisfiable formulas depends, to a large degree, on the particular choices for $MT$ and $MF$. To get an objective measure of the running time, independent of $MT$ and $MF$, when defining time we require that a postulated accuracy be met. Formally, let $c$, $0 < c \leq 1$, be a real number (a postulated accuracy). Define the running time of $A$ relative to accuracy $c$, $T_c(A)$, to be the minimum time $t$ such that for some positive integers $MT$ and $MF$, the algorithm $A$ with the maximum of $MT$ tries and with the maximum of $MF$ local improvement steps per try satisfies:
1. the average running time on instances from $\mathcal{F}$ is at most $t$, and
2. the accuracy of $A$ on $\mathcal{F}$ is at least $c$.
Intuitively, $T_c(A)$ is the minimum expected time that guarantees accuracy $c$. In Section 3, we describe an experimental approach that can be used to estimate the relative running time.
The concepts of accuracy and of running time relative to accuracy open a number of important (and, undoubtedly, very difficult) theoretical problems. However, in this paper we will focus on an experimental study of accuracy and relative running time for a GSAT-type algorithm. These algorithms share the following general pattern for the local improvement process. Given a truth assignment, GSAT selects a variable such that after its truth value is flipped (changed to the opposite one), the number of unsatisfied clauses is minimum. Whether the flip is actually made then depends on the result of some additional (often again random) procedure.
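The try/flip pattern and the random-walk flip selection described above can be sketched in a few lines of Python (a toy sketch under stated assumptions, not the walksat implementation used in the experiments; the parameter names and the 50% walk probability are illustrative):

```python
import random

def rws_gsat(clauses, n_vars, max_tries, max_flips, p_walk=0.5, seed=0):
    """Toy GSAT with random walk. Clauses are lists of non-zero ints;
    literal v > 0 is satisfied when assignment[v] is True, v < 0 when False."""
    rng = random.Random(seed)

    def unsat(a):
        return [c for c in clauses
                if not any((lit > 0) == a[abs(lit)] for lit in c)]

    for _ in range(max_tries):
        # each try starts from a fresh random truth assignment
        a = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            bad = unsat(a)
            if not bad:
                return a                      # satisfying assignment found
            if rng.random() < p_walk:
                # random-walk move: flip a variable from an unsatisfied clause
                v = abs(rng.choice(rng.choice(bad)))
            else:
                # greedy GSAT move: flip the variable that leaves the
                # fewest unsatisfied clauses
                def flips_to(v):
                    a[v] = not a[v]
                    n = len(unsat(a))
                    a[v] = not a[v]
                    return n
                v = min(range(1, n_vars + 1), key=flips_to)
            a[v] = not a[v]
    return None    # incomplete: None does not prove unsatisfiability

clauses = [[1, 2], [-1, 3], [-2, -3], [1, 3]]          # toy satisfiable instance
model = rws_gsat(clauses, n_vars=3, max_tries=10, max_flips=100)
```

Returning None after all tries is exactly the incompleteness discussed above: failure to find a model says nothing about unsatisfiability.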
In our experiments, we used two types of data sets. Data sets of the first type consist of randomly generated 3-CNF formulas [MSL92]. Data sets of the second type consist of CNF formulas encoding the 3- and 4-colorability problems for randomly generated 2-trees. These two classes of data sets, as well as the results of the experiments, are described in detail in the next two sections.
## 3 Random 3-CNF formulas
Consider a randomly generated 3-CNF formula $\varphi$ with $n$ variables and the ratio of clauses to variables equal to $r$. Intuitively, as $r$ increases, the probability that $\varphi$ is satisfiable should decrease. It is indeed so [MSL92]. What is more surprising, this probability switches from being close to one to being close to zero very abruptly, within a very small range of values of $r$. The set of 3-CNF formulas from the cross-over region will be denoted by $C_n$. Implementations of the Davis-Putnam procedure take, on average, the most time on 3-CNF formulas generated (according to a uniform probability distribution) from the cross-over regions. Thus, these formulas are commonly regarded as good test cases for experimental studies of the performance of satisfiability algorithms [CA93, Fre96].
We used seven sets of satisfiable 3-CNF formulas generated from the cross-over regions $C_n$, for seven values of $n$ up to 400. Each data set was obtained by randomly generating 3-CNF formulas with $n$ variables and the corresponding cross-over number of clauses. For each formula, the Davis-Putnam algorithm was then used to decide its satisfiability. The first one thousand satisfiable formulas found in this way were chosen to form the data set.
Randomized algorithms are often used with much larger values of $n$ than we report in this paper. The importance of accuracy in this study required that we have only satisfiable formulas (otherwise, the accuracy cannot be reliably estimated). This limited the size of the randomly generated 3-CNF formulas used in our study, since we had to use a complete satisfiability testing procedure to discard those randomly generated formulas that were not satisfiable. In Section 5, we discuss ways in which hard test cases for randomized algorithms can be generated that are not subject to this size limitation.
For each data set , we determined values for , say and for use with RWS-GSAT, big enough to result in the accuracy at least 0.98. For instance, for , ranged from to , with the increment of 100, and ranged from 5 to 50, with the increment of 5. Next, for each combination of and , we ran RWS-GSAT on all formulas in and tabulated both the running time and the percentage of problems for which the satisfying assignment was found (this quantity was used as an estimate of the accuracy). These estimates and average running times for the data set are shown in the tables in Figure 1.
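For reference, GSAT with random walk can be sketched as below. This is a minimal illustration assuming the usual formulation of [SKC94] — greedy flips mixed with random-walk flips of a variable from an unsatisfied clause — with hypothetical parameter names; it is not the authors' implementation.

```python
import random

def n_satisfied(formula, assign):
    """Number of clauses satisfied by a truth assignment."""
    return sum(any((lit > 0) == assign[abs(lit)] for lit in cl)
               for cl in formula)

def rws_gsat(formula, n_vars, max_tries, max_flips, walk_prob=0.5, rng=random):
    """Minimal sketch of GSAT with random walk (RWS-GSAT).
    Clauses are tuples of DIMACS-style signed integers."""
    variables = range(1, n_vars + 1)
    for _ in range(max_tries):
        assign = {v: rng.random() < 0.5 for v in variables}
        for _ in range(max_flips):
            unsat = [cl for cl in formula
                     if not any((lit > 0) == assign[abs(lit)] for lit in cl)]
            if not unsat:
                return assign                     # satisfying assignment found
            if rng.random() < walk_prob:          # random-walk step
                v = abs(rng.choice(rng.choice(unsat)))
            else:                                 # greedy step
                v = max(variables, key=lambda u: n_satisfied(
                    formula, {**assign, u: not assign[u]}))
            assign[v] = not assign[v]
    return None                                   # no success within max_tries
```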
Fixing a required accuracy, say at a level of , we then looked for the best time which resulted in this (or higher) accuracy. We used this time as an experimental estimate for . For instance, there are 12 entries in the accuracy table with accuracy or more. The lowest value from the corresponding entries in the running time table is 0.03 sec. and it is used as an estimate for .
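The selection of the experimental estimate is a simple minimum over the tabulated entries. The table entries below are made up for illustration (the tuple layout and names are ours), chosen so that the answer matches the 0.03 sec. example above.

```python
def best_time(entries, required_accuracy):
    """Experimental estimate of the running time at a prescribed accuracy:
    the smallest average running time among all parameter combinations whose
    estimated accuracy meets the requirement. 'entries' holds one
    (max_tries, max_flips, accuracy, avg_time) tuple per combination."""
    times = [t for (_, _, acc, t) in entries if acc >= required_accuracy]
    return min(times) if times else None

entries = [(500, 10, 0.97, 0.02), (600, 20, 0.98, 0.03), (800, 40, 0.99, 0.05)]
print(best_time(entries, 0.98))   # 0.03
```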
The relative running times for RWS-GSAT run on the data sets , , and for and , are shown in Figure 2. Both graphs demonstrate exponential growth, with the running time increasing by the factor of 1.5 - 2 for every 50 additional variables in the input problems. Thus, while GSAT outperforms the Davis-Putnam procedure for instances generated from the critical regions, if we prescribe the accuracy, it is still exponential and, thus, will quickly reach the limits of its applicability. We did not extend our results beyond formulas with up to 400 variables due to the limitations of the Davis-Putnam procedure (or of any other complete method to test satisfiability). For problems of this size, GSAT is still extremely effective (takes only about 2.5 seconds). Data sets used in Section 5 do not have this limitation (we know all formulas in these sets are satisfiable and there is no need to refer to complete satisfiability testing programs). The results presented there also illustrate the exponential growth of the relative running time and are consistent with those discussed here.
## 4 Number of satisfying assignments
It seems intuitive that accuracy and running time would be dependent on the number of possible satisfying assignments. Studies using randomly generated 3-CNF formulas [CFG96] and 3-CNF formulas generated randomly with parameters allowing the user to control the number of satisfiable solutions for each instance [CI95] show this correlation.
In the same way as for the data sets , we constructed data sets , where , and , . Each data set consists of 100 satisfiable 3-CNF formulas generated from the cross-over region and having more than and no more than satisfying assignments. Each data set was formed by randomly generating 3-CNF formulas from the cross-over region and by selecting the first 100 formulas with the number of satisfying assignments falling in the prescribed range (again, we used the Davis-Putnam procedure, here).
For each data set we ran the RWS-GSAT algorithm with and thus, allowing the same upper limits for the number of random steps for all data sets (these values resulted in the accuracy of .99 in our experiments with the data set discussed earlier). Figure 3 summarizes our findings. It shows that there is a strong relationship between accuracy and the number of possible satisfying assignments. Generally, instances with small number of solutions are much harder for RWS-GSAT than those with large numbers of solutions. Moreover, this observation is not affected by how constrained the input formulas are. We observed the same general behavior when we repeated the experiment for data sets of 3-CNF formulas generated from the under-constrained region (100 variables, 410 clauses) and over-constrained region (100 variables, 450 clauses), with under-constrained instances with few solutions being the hardest.
These results indicate that, when generating data sets for experimental studies of randomized algorithms, it is more important to ensure that they have few solutions rather than that they come from the critically constrained region.
## 5 CNF formulas encoding k-colorability
To expand the scope of applicability of our results and argue their robustness, we also used in our study data sets consisting of CNF formulas encoding the -colorability problem for graphs. While easy for Davis-Putnam procedure (which resolves their satisfiability in polynomial time), formulas of this type are believed to be “hard” for randomized algorithms and were used in the past in the experimental studies of their performance. In particular, it was reported in [SK93] that RWS-GSAT does not perform well on such inputs (see also [JAMS91]).
Given a graph with the vertex set and the edge set , we construct the CNF formula as follows. First, we introduce new propositional variables , and . The variable expresses the fact that the vertex is colored with the color . Now, we define to consist of the following clauses:
1. , for every edge from ,
2. , for every vertex of ,
3. , for every vertex of and for every , .
It is easy to see that there is a one-to-one correspondence between -colorings of and satisfying assignments for . To generate formulas for experimenting with RWS-GSAT (and other satisfiability testing procedures) it is, then, enough to generate graphs and produce formulas .
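A sketch of the encoding just described; the helper name `coloring_cnf` and the numbering of the propositional variables are our own choices.

```python
from itertools import combinations

def coloring_cnf(vertices, edges, k):
    """Clauses encoding k-colorability, following the construction above.
    Propositional variable p(v, i) means "vertex v gets color i"; variables
    are numbered 1..|V|*k and clauses use DIMACS-style signed literals."""
    p = lambda v, i: vertices.index(v) * k + i + 1
    clauses = []
    for (u, w) in edges:                       # 1. endpoints of an edge
        for i in range(k):                     #    never share a color
            clauses.append([-p(u, i), -p(w, i)])
    for v in vertices:                         # 2. each vertex gets a color
        clauses.append([p(v, i) for i in range(k)])
    for v in vertices:                         # 3. ... and at most one color
        for i, j in combinations(range(k), 2):
            clauses.append([-p(v, i), -p(v, j)])
    return clauses

triangle = coloring_cnf([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 3)
```

For the triangle, the resulting formula is satisfiable (the triangle is 3-colorable), while the analogous encoding with k = 2 is not.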
In our experiments, we used formulas that encode 3-colorings for graphs known as 2-trees. The class of 2-trees is defined inductively as follows:
1. A complete graph on three vertices (a “triangle”) is a 2-tree.
2. If is a 2-tree, then a graph obtained by selecting an edge in , adding to a new vertex and joining to and is also a 2-tree.
A 2-tree with 6 vertices is shown in Fig. 4. The vertices of the original triangle are labeled 1, 2 and 3. The remaining vertices are labeled according to the order they were added.
The concept of 2-trees can be generalized to k-trees, for an arbitrary k. Graphs in these classes are important. They have bounded tree-width and, consequently, many NP-complete problems can be solved for them in polynomial time [AP89].
We can generate 2-trees randomly by simulating the definition given above and by selecting an edge for “expansion” randomly in the current 2-tree . We generated in this way families , for , each consisting of one hundred randomly generated 2-trees with vertices. Then, we created sets of CNF formulas , for . Each formula in a set has exactly 6 satisfying assignments (since each 2-tree has exactly 6 different 3-colorings). Thus, they are appropriate for testing the accuracy of RWS-GSAT.
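The random generation of 2-trees by simulating the inductive definition can be sketched as follows (function name ours):

```python
import random

def random_2tree(n_vertices, rng=random):
    """Random 2-tree built by simulating the inductive definition: start
    from a triangle, then repeatedly pick a random existing edge and join
    a fresh vertex to both of its endpoints."""
    edges = [(1, 2), (2, 3), (1, 3)]          # the initial triangle
    for v in range(4, n_vertices + 1):
        a, b = rng.choice(edges)              # edge selected for "expansion"
        edges += [(a, v), (b, v)]
    return edges

g = random_2tree(6)   # a 2-tree with 6 vertices, as in Fig. 4
```

A 2-tree with n vertices always has 2n - 3 edges, which gives a quick sanity check on the output.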
Using CNF formulas of this type has an important benefit. Data sets can be prepared without the need to use complete (but very inefficient for large inputs) satisfiability testing procedures. By appropriately choosing the underlying graphs, we can guarantee the satisfiability of the resulting formulas and, often, we also have some control over the number of solutions (for instance, in the case of 3-colorability of 2-trees there are exactly 6 solutions).
We used the same methodology as the one described in the previous section to tabulate the accuracy and the running time of RWS-GSAT for a large range of choices for the parameters and . Based on these tables, as before, we computed estimates for the times for , for each of the data sets. The results that present the running time as a function of the number of vertices in a graph (which is of the same order as the number of variables in the corresponding CNF formula) are gathered in Figure 5. They show that RWS-GSAT’s performance deteriorates exponentially (time grows by the factor of for every 50 additional vertices).
An important question is: how to approach constraint satisfaction problems if they seem to be beyond the scope of applicability of randomized algorithms? A common approach is to relax some constraints. It often works because the resulting constraint sets (theories) are “easier” to satisfy (admit more satisfying assignments). We have already discussed the issue of the number of solutions in the previous section. Now, we will illustrate the effect of increasing the number of solutions (relaxing the constraints) in the case of the colorability problem. To this end, we will consider formulas from the spaces , representing 4-colorability of 2-trees. These formulas have exponentially many satisfying truth assignments (a 2-tree with vertices has exactly 4-colorings). For these formulas we also tabulated the times , for , as a function of the number of vertices in the graph. The results are shown in Figure 6.
Thus, despite the fact that the size of a formula from is larger than the size of a formula from by the factor of , RWS-GSAT’s running times are much lower. In particular, within 0.5 seconds RWS-GSAT can find a 4-coloring of randomly generated 2-trees with 500 vertices. As demonstrated by Figure 5, RWS-GSAT would require thousands of seconds for 2-trees of this size to guarantee the same accuracy when finding 3-colorings. Thus, even a rather modest relaxation of constraints can increase the number of satisfying assignments substantially enough to lead to noticeable speed-ups. On the other hand, even though they are “easier”, the theories encoding the 4-colorability problem for 2-trees are still hard for GSAT, as the rate of growth of the relative running time is exponential (Fig. 6).
The results of this section further confirm and provide quantitative insights into our earlier claims about the exponential behavior of the relative running time for GSAT and about the dependence of the relative running time on the number of solutions. However, they also point out that selecting a class of graphs (we selected the class of 2-trees here, but there are, clearly, many other possibilities) and a graph problem (we focused on colorability, but there are many other problems, such as hamiltonicity, existence of vertex covers, cliques, etc.), and then encoding the problem for graphs from the selected class, yields a family of formulas that can be used in testing satisfiability algorithms. The main benefit of the approach is that by selecting a suitable class of graphs, we can guarantee satisfiability of the resulting formulas and can control the number of solutions, thus eliminating the need to resort to complete satisfiability procedures when preparing the test cases. We intend to further pursue this direction.
## 6 Conclusions
In the paper we formally stated the definitions of the accuracy of a randomized algorithm and of its running time relative to a prescribed accuracy. We showed that these notions enable objective studies and comparisons of the performance and quality of randomized algorithms. We applied our approach to study the RWS-GSAT algorithm. We showed that, given a prescribed accuracy, the running time of RWS-GSAT was exponential in the number of variables for several classes of randomly generated CNF formulas. We also showed that the accuracy (and, consequently, the running time relative to the accuracy) strongly depended on the number of satisfying assignments: the bigger this number, the easier was the problem for RWS-GSAT. This observation is independent of the “density” of the input formula. The results suggest that satisfiable CNF formulas with few satisfying assignments are hard for RWS-GSAT and should be used for comparisons and benchmarking. One such class of formulas, CNF encodings of the 3-colorability problem for 2-trees, was described in the paper and used in our study of RWS-GSAT.
Exponential behavior of RWS-GSAT points to the limitations of randomized algorithms. However, our results indicating that input formulas with more solutions are “easier” for RWS-GSAT to deal with, explain RWS-GSAT’s success in solving some large practical problems. They can be made “easy” for RWS-GSAT by relaxing some of the constraints.
## References
• [AP89] S. Arnborg and A. Proskurowski. Linear time algorithms for NP-hard problems restricted to partial k-trees. Discrete Appl. Math., 23:11–24, 1989.
• [CA93] James M. Crawford and Larry D. Auton. Experimental results on the crossover point in satisfiability problems. In AAAI-93, 1993.
• [CFG96] Dave Clark, Jeremy Frank, Ian Gent, Ewan MacIntyre, Neven Tomov, and Toby Walsh. Local search and the number of solutions. In Proceeding of CP-96, 1996.
• [CI95] Cyungki Cha and Kazuo Iwama. Performance test of local search algorithms using new types of random cnf formulas. In Proceedings of IJCAI, 1995.
• [DABC96] O. Dubois, P. Andre, Y. Boufkhad, and J. Carlier. Sat versus unsat. DIMACS Cliques, Coloring and Satisfiability, 26, 1996.
• [DP60] M. Davis and H. Putnam. A computing procedure for quantification theory. Journal of the Association for Computing Machinery, 7, 1960.
• [Fre96] Jon W. Freeman. Hard random 3-sat problems and the davis-putnam procedure. Artificial Intelligence, 81, 1996.
• [GW95] Ian P. Gent and Toby Walsh. Unsatisfied variables in local search. In Proceedings of AISB-95, 1995.
• [JAMS91] David Johnson, Cecilia Aragon, Lyle McGeoch, and Catherine Schevon. Optimization by simulated annealing: An experimental evaluation; part ii, graph coloring and number partitioning. Operations Research, 39(3), May-June 1991.
• [KS92] Henry A. Kautz and Bart Selman. Planning as satisfiability. In Proceedings of the 10th European Conference on Artificial Intelligence, Vienna, Austria, 1992.
• [MSG97] Bertrand Mazure, Lakhdar Saís, and Éric Grégoire. Tabu search for sat. In Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97). MIT Press, July 1997.
• [MSL92] David Mitchell, Bart Selman, and Hector Levesque. Hard and easy distributions of sat problems. In Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI-92), July 1992.
• [PW96] Andrew J. Parkes and Joachim P. Walser. Tuning local search for satisfiability testing. In Proceeding of the Thirteen National Conference on Artificial Intelligence(AAAI-96), pages 356–362, 1996.
• [SK93] Bart Selman and Henry A. Kautz. Domain-independent extensions to gsat: Solving large structured satisfiability problems. In Proceedings of IJCAI-93, 1993.
• [SKC94] Bart Selman, Henry A. Kautz, and Bram Cohen. Noise strategies for improving local search. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94), 1994.
• [SKC96] Bart Selman, Henry A. Kautz, and Bram Cohen. Local search strategies for satisfiability. DIMACS Cliques, Coloring and Satisfiability, 26, 1996.
• [SLM92] Bart Selman, Hector Levesque, and David Mitchell. A new method for solving hard satisfiability problems. In Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI-92), July 1992.
• [Spe96] William M. Spears. Simulated annealing for hard satisfiability problems. DIMACS Cliques, Coloring and Satisfiability, 26, 1996.
Single Variable Calculus, Chapter 3, Section 3.8, Problem 20
A boat is pulled into a dock by a rope attached to the bow of the boat and passing through a pulley on the dock that is 1 m higher than the bow. If the rope is pulled in at a rate of 1 m/s, at what rate is the boat approaching the dock when it is 8 m from the dock?
Required: $\displaystyle \frac{dx}{dt}$ when $x = 8m$
Solution:
By using Pythagorean Theorem we have,
$z^2 = 1^2 + x^2 \qquad$ Equation 1
taking the derivative with respect to time
\begin{aligned} \cancel{2} z \frac{dz}{dt} =& \cancel{2}x \frac{dx}{dt} \\ \\ \frac{dx}{dt} =& \frac{z}{x} \frac{dz}{dt} \qquad \text{Equation 2} \end{aligned}
We can get the value of $z$ by substituting $x = 8$ in equation 1
\begin{aligned} z^2 =& 1^2 +x^2 \\ \\ z^2 =& 1^2 + 8^2 \\ \\ z =& \sqrt{65} m \end{aligned}
Now, using equation 2 to solve for the unknown
\begin{aligned} \frac{dx}{dt} =& \frac{\sqrt{65}}{8} (1) \\ \\ \frac{dx}{dt} =& \frac{\sqrt{65}}{8} m/s \end{aligned}
This means that the boat is approaching the dock at a rate of $\displaystyle \frac{\sqrt{65}}{8} m/s$ when the boat is $8 m$ from the dock.
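As a quick numerical sanity check, we can differentiate Equation 1 implicitly with a finite difference (the step size is an arbitrary small value):

```python
import math

z0 = math.sqrt(65)                  # rope length when the boat is 8 m out
x = lambda z: math.sqrt(z**2 - 1)   # boat-to-dock distance, from Equation 1

h = 1e-6                            # rope shortens by h (pulled at 1 m/s)
rate = (x(z0) - x(z0 - h)) / h      # numerical dx/dt
print(rate)                         # close to sqrt(65)/8, about 1.008 m/s
```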
# Wrong vertical spacing under roots
I have no clue why the left root is way too high compared to the right one, whilst they are identical. Is there a way to set the vertical spacing under the root ? Tried \vphantom, but this didn't do anything.
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
$\displaystyle\mathcal{E}(x_{0},\frac{b}{a}\cdot\sqrt{a^2-x_{0}^2})$ en $\mathcal{C} (x_{0},\sqrt{a^2-x_{0}^2})$.
\end{document}
They are different because that's what \displaystyle does. It tells the compiler that it should act as if there were no text around the equation, and so it can take as much (vertical, in this case) space as it wants. The second equation is in the default style for inline formulas (achievable with the command \textstyle if needed), and as such it is typeset in a smaller way.
If you want both square roots to be the same size, you'll have to add \displaystyle at the beginning of the second $...$ block as well.
For setting the space below the root sign, provided you are using the display style (whether it is because you inserted \displaystyle or because you are in a display environment, such as equation), \vphantom has the desired effect.
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
In display style :
$\displaystyle\mathcal{E}(x_{0},\frac{b}{a}\cdot\sqrt{a^2-x_{0}^2})$ en $\displaystyle \mathcal{C}(x_{0},\sqrt{a^2-x_{0}^2})$.
In text style :
$\mathcal{E}(x_{0},\frac{b}{a}\cdot\sqrt{a^2-x_{0}^2})$ en $\mathcal{C}(x_{0},\sqrt{a^2-x_{0}^2})$.
Only the fraction is in display style here :
$\mathcal{E}(x_{0},{\displaystyle\frac{b}{a}}\cdot\sqrt{a^2-x_{0}^2})$ en $\mathcal{C}(x_{0},\sqrt{a^2-x_{0}^2})$.
And with \textasciibackslash vphantom :
$\displaystyle\sqrt{a^{2}-x_{{0}}^{2}}$
compared to
$\displaystyle\sqrt{\vphantom{\frac{a}{b}}a^{2}-x_{{0}}^{2}}$
\end{document}
And an illustration of the effect of \vphantom :
Or use {\textstyle\sqrt{...}} to make the first root smaller. – Ian Thompson Sep 26 '12 at 14:46
@IanThompson : Thanks, edited to include that. – T. Verron Sep 26 '12 at 14:53
The \textstyle is the simplest solution and does what I want. THX! – Petoetje59 Sep 26 '12 at 15:00
@Petoetje59 : For inline formulas (as in your example), \textstyle is the default style, there is nothing you need to add. Just remove \displaystyle from the first formula, or add it in braces enclosing the fraction if you want the fraction to be in displaystyle, this is the most logical way of coding it. – T. Verron Sep 26 '12 at 15:02
# 3.2.2 - Binomial Random Variables
A binary variable is a variable that has two possible outcomes. For example, sex (male/female) or having a tattoo (yes/no) are both examples of a binary categorical variable.
A random variable can be transformed into a binary variable by defining a “success” and a “failure”. For example, consider rolling a fair six-sided die and recording the value of the face. The random variable, value of the face, is not binary. If we are interested, however, in the event A={3 is rolled}, then the “success” is rolling a three. The failure would be any value not equal to three. Therefore, we can create a new variable with two outcomes, namely A = {3} and B = {not a three} or {1, 2, 4, 5, 6}. This new variable is now a binary variable.
Binary Categorical Variable
A binary categorical variable is a variable that has two possible outcomes.
### The Binomial Distribution
The binomial distribution is a special discrete distribution where there are two distinct complementary outcomes, a “success” and a “failure”.
We have a binomial experiment if ALL of the following four conditions are satisfied:
1. The experiment consists of n identical trials.
2. Each trial results in one of the two outcomes, called success and failure.
3. The probability of success, denoted p, remains the same from trial to trial.
4. The n trials are independent. That is, the outcome of any trial does not affect the outcome of the others.
If the four conditions are satisfied, then the random variable $$X$$=number of successes in $$n$$ trials, is a binomial random variable with
\begin{align}
&\mu=E(X)=np &&\text{(Mean)}\\
&\text{Var}(X)=np(1-p) &&\text{(Variance)}\\
&\text{SD}(X)=\sqrt{np(1-p)} \text{, where $$p$$ is the probability of the “success."} &&\text{(Standard Deviation)}\\
\end{align}
A Note on Notation! Some common notation for “success” that you may see is either $$p$$ or $$\pi$$ for the probability of “success,” and usually $$q=1-p$$ for the probability of “failure.” $$\pi$$ is what is used in this text. “Success” is defined as whatever the researcher decides, not just a positive outcome. The symbol $$\pi$$ in this case does NOT refer to the numerical value 3.14.
$$p \;(or\ \pi)$$ = probability of success
## Example 3-5: Prior Convictions
Let's use the example from the previous page investigating the number of prior convictions for prisoners at a state prison at which there were 500 prisoners. Define the “success” to be the event that a prisoner has no prior convictions. Find $$p$$ and $$1-p$$.
Let Success = no priors (0)
Let Failure = priors (1, 2, 3, or 4)
Looking back on our example, we can find that:
$$p=0.16$$
$$1-p=1-0.16=0.84$$
Verify by $$p+(1-p)=1$$
## Example 3-6: Crime Survey
An FBI survey shows that about 80% of all property crimes go unsolved. Suppose that in your town 3 such crimes are committed and they are each deemed independent of each other. What is the probability that 1 of 3 of these crimes will be solved?
First, we must determine if this situation satisfies ALL four conditions of a binomial experiment:
1. Does it satisfy a fixed number of trials? YES the number of trials is fixed at 3 (n = 3.)
2. Does it have only 2 outcomes? YES (Solved and unsolved)
3. Do all the trials have the same probability of success? YES (p = 0.2)
4. Are all crimes independent? YES (Stated in the description.)
To find the probability that only 1 of the 3 crimes will be solved we first find the probability that one of the crimes would be solved. With three such events (crimes) there are three sequences in which only one is solved:
• Solved First, Unsolved Second, Unsolved Third = (0.2)(0.8)( 0.8) = 0.128
• Unsolved First, Solved Second, Unsolved Third = (0.8)(0.2)(0.8) = 0.128
• Unsolved First, Unsolved Second, Solved Third = (0.8)(0.8)(0.2) = 0.128
We add these 3 probabilities up to get 0.384. Looking at this from a formula standpoint, we have three possible sequences, each involving one solved and two unsolved events. Putting this together gives us the following: $$3(0.2)(0.8)^2=0.384$$
The example above and its formula illustrates the motivation behind the binomial formula for finding exact probabilities.
The Binomial Formula
For a binomial random variable with probability of success, $$p$$, and $$n$$ trials...
$$f(x)=P(X = x)=\dfrac{n!}{x!(n−x)!}p^x(1–p)^{n-x}$$ for $$x=0, 1, 2, …, n$$
A Note on Notation! The exclamation point (!) is used in math to represent factorial operations. The factorial of a number means to take that number and multiply it by every number that comes before it, down to one (excluding 0). For example, 3! = 3 × 2 × 1 = 6. Remember that 1! = 1 and 0! = 1.
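As a quick check of the formula (using Python's `math.comb` for the binomial coefficient $$\frac{n!}{x!(n-x)!}$$), the crime-survey value from above is recovered:

```python
from math import comb

def binom_pmf(x, n, p):
    """P(X = x) for a binomial(n, p) random variable."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Crime-survey example: n = 3 trials, "success" = solved, p = 0.2
print(round(binom_pmf(1, 3, 0.2), 3))   # 0.384, matching the count above
```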
## Graphical Displays of Binomial Distributions
The formula defined above is the probability mass function, pmf, for the Binomial. We can graph the probabilities for any given $$n$$ and $$p$$. The following distributions show how the graphs change with a given n and varying probabilities.
## Example 3-7: Crime Survey Continued...
For the FBI Crime Survey example, what is the probability that at least one of the crimes will be solved?
Here we are looking to solve $$P(X \ge 1)$$.
There are two ways to solve this problem: the long way and the short way.
The long way to solve for $$P(X \ge 1)$$. This would be to solve $$P(x=1)+P(x=2)+P(x=3)$$ as follows:
$$P(x=1)=\dfrac{3!}{1!2!}0.2^1(0.8)^2=0.384$$
$$P(x=2)=\dfrac{3!}{2!1!}0.2^2(0.8)^1=0.096$$
$$P(x=3)=\dfrac{3!}{3!0!}0.2^3(0.8)^0=0.008$$
We add up all of the above probabilities and get 0.488...OR...we can do the short way by using the complement rule. Here the complement to $$P(X \ge 1)$$ is equal to $$1 - P(X < 1)$$ which is equal to $$1 - P(X = 0)$$. We have carried out this solution below.
\begin{align} 1–P(x<1)&=1–P(x=0)\\&=1–\dfrac{3!}{0!(3−0)!}0.2^0(1–0.2)^3\\ &=1−1(1)(0.8)^3\\ &=1–0.512\\ &=0.488 \end{align}
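Both routes can be checked numerically with the same pmf formula:

```python
from math import comb

pmf = lambda x, n, p: comb(n, x) * p**x * (1 - p)**(n - x)

long_way = sum(pmf(x, 3, 0.2) for x in (1, 2, 3))   # 0.384 + 0.096 + 0.008
short_way = 1 - pmf(0, 3, 0.2)                      # complement rule
print(round(long_way, 3), round(short_way, 3))      # 0.488 0.488
```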
In such a situation where three crimes happen, what is the expected value and standard deviation of crimes that remain unsolved? Here we apply the formulas for expected value and standard deviation of a binomial.
\begin{align} \mu &=E(X)\\ &=3(0.8)\\ &=2.4 \end{align} \begin{align} \text{Var}(X)&=3(0.8)(0.2)=0.48\\ \text{SD}(X)&=\sqrt{0.48}\approx 0.6928 \end{align}
Note: X can only take values 0, 1, 2, ..., n, but the expected value (mean) of X may be some value other than those that can be assumed by X.
## Example 3-8: Cross-Fertilizing
Cross-fertilizing a red and a white flower produces red flowers 25% of the time. Now we cross-fertilize five pairs of red and white flowers and produce five offspring. Find the probability that there will be no red-flowered plants in the five offspring.
Y = # of red flowered plants in the five offspring. Here, the number of red-flowered plants has a binomial distribution with $$n = 5, p = 0.25$$.
\begin{align} P(Y=0)&=\dfrac{5!}{0!(5−0)!}p^0(1−p)^5\\&=1(0.25)^0(0.75)^5\\&=0.237 \end{align}
## Try it!
Refer to example 3-8 to answer the following.
1. Find the probability that there will be four or more red-flowered plants.
\begin{align} P(\mbox{Y is 4 or more})&=P(Y=4)+P(Y=5)\\ &=\dfrac{5!}{4!(5-4)!} {p}^4 {(1-p)}^1+\dfrac{5!}{5!(5-5)!} {p}^5 {(1-p)}^0\\ &=5\cdot (0.25)^4 \cdot (0.75)^1+ (0.25)^5\\ &=0.015+0.001\\ &=0.016\\ \end{align}
2. Of the five cross-fertilized offspring, how many red-flowered plants do you expect?
\begin{align} \mu &=5⋅0.25\\&=1.25 \end{align}
3. What is the standard deviation of Y, the number of red-flowered plants in the five cross-fertilized offspring?
\begin{align} \sigma&=\sqrt{5\cdot0.25\cdot0.75}\\ &=0.97 \end{align}
## Calculus (3rd Edition)
$-u$
Using the right hand rule, curl your right hand from $v$ to $w$ to get the direction of $v\times w$. Your thumb will point in the direction of $-u$.
# Synopsis: Planet Search Finds No Dark Matter Black Holes
Using data from a planet-hunting mission, scientists place new limits on a supposed population of black holes that could act as dark matter.
Dark matter remains such a mystery that we’re still unsure whether it’s made of microscopic particles or macroscopic bodies. On the “macro” side, dark matter could consist of relatively small black holes that formed in the early Universe. We might detect one of these so-called primordial black holes as gravitational lenses of background stars. A new analysis of data from the Kepler mission’s search for Earth-sized planets finds no black hole lensing events. From this nondetection, the researchers, reporting in Physical Review Letters, rule out part of the mass range previously thought still available for dark matter black hole candidates.
Dark matter provides the missing mass needed to keep galaxies from flying apart, but the components of this dark matter are not well constrained as far as their individual size goes. Primordial black holes, which come out of certain models of the early Universe, have a wide range of possible masses. However, certain black hole masses have already been excluded as the dominant form of dark matter because they would have shown up in astronomical data. The currently viable range for primordial black holes is between ${10}^{-2}$ and ${10}^{-7}$ Earth masses.
Now Kim Griest of the University of California, San Diego, and his colleagues have reduced this range further using Kepler observations. For four years, the Kepler satellite monitored roughly 150,000 stars at a distance of about 3200 light years. If a primordial black hole passed in front of one of these stars, the star would become temporarily brighter because of gravitational lensing. Griest et al. sifted through the available Kepler data and turned up zero events that matched their lensing criteria, which implies that moon-sized black holes (around ${10}^{-3}$ Earth masses) can’t make up all of the dark matter in the Milky Way. – Michael Schirber
### Testing correctness: overview
There is considerable emphasis in the MCMC and SMC literatures on efficiency (in the computational sense), but much less on correctness. Here we define correctness as follows: an MCMC procedure producing samples $$X_1, X_2, \dots$$ is correct if ergodic averages based on any integrable function $$f$$ obey a law of large numbers, converging to the expectation of the function under the target posterior distribution $$\pi$$, i.e.
\begin{align} \frac{1}{N} \sum_{i=1}^N f(X_i) \to \int f(x) \pi(x) \text{d}x \text{ a.s., } N\to\infty. \end{align}
For SMC-based algorithms, we use a similar definition where instead the number of particles is increased to infinity.
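As a concrete toy illustration of this definition, the ergodic average of a short Metropolis chain on a three-point target can be compared to the exact expectation; everything below (the target, proposal, test function, and chain length) is invented for illustration and has nothing to do with Blang's internals.

```python
import random

# Toy unnormalized target pi(x) ∝ w(x) on {0, 1, 2};
# with f(x) = x, the exact expectation is (0*1 + 1*2 + 2*3) / 6 = 4/3.
w = {0: 1.0, 1: 2.0, 2: 3.0}

random.seed(0)
x, total, N = 0, 0.0, 200_000
for _ in range(N):
    y = random.choice([0, 1, 2])                   # symmetric proposal
    if random.random() < min(1.0, w[y] / w[x]):    # Metropolis acceptance
        x = y
    total += x                                     # accumulate f(x) = x

print(total / N)   # should approach E[f] = 4/3 ≈ 1.333 as N grows
```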
Correctness can fail for two main reasons: some mathematical derivations used to construct the MCMC or SMC algorithm are incorrect, or, the software implementation is buggy. The tests we describe here can detect both kinds of problems.
Correctness has nothing to do with what the literature vaguely calls "convergence diagnostic". Whether a LLN holds or not does not depend on "burn-in", having low autocorrelation, etc.
In full generality, testing if a deterministic sequence converges to a certain limit is not possible. On top of that, the sequence we have here is composed of random variables, and moreover these random variables are relatively expensive to simulate in problems of practical interest. Hence our definition of correctness appears hopelessly difficult to check.
Surprisingly, there exists a set of effective tools to check MCMC and SMC correctness. Some of these tools are in the literature but not emphasized. We compile and improve them in this document, and describe their implementation in Blang.
Analysis of MCMC and SMC correctness is a great example of an area where fairly theoretical concepts can have a large impact to a practical problem. Crucially, as we show here, MC theory will allow us to derive useful tests that are not based on asymptotics and that have exact, easy to compute false positive probabilities. Moreover, in some cases, we can even get non-probabilistic tests, i.e. with zero false positive and zero false negative probabilities. In other cases, tests do not have formal guarantees to find all bugs, but in practice they are useful to identify common ones.
### Testing strategies
We provide a non-standard replacement implementation of bayonet.distributions.Random which can be used to enumerate all the probability traces used by an arbitrary discrete random process. In particular, many inference engines manipulate models through interfaces that are agnostic to the model being continuous or discrete, so we can achieve code coverage of the inference engines using discrete models. See bayonet.distributions.ExhaustiveRandom.
We use this, for example, to test the unbiasedness of the normalization constant estimate provided by our SMC implementation.
```xtend
package blang.validation

import java.util.function.Supplier
import bayonet.distributions.ExhaustiveDebugRandom

class UnbiasnessTest {

  def static double expectedZEstimate(Supplier<Double> logZEstimator, ExhaustiveDebugRandom exhausiveRand) {
    var expectation = 0.0
    var nProgramTraces = 0
    while (exhausiveRand.hasNext) {
      val logZ = logZEstimator.get
      expectation += Math.exp(logZ) * exhausiveRand.lastProbability
      nProgramTraces++
    }
    println("nProgramTraces = " + nProgramTraces)
    return expectation
  }
}
```
This can be called with a small finite model, e.g. a short HMM here, made large enough to achieve code coverage (Blang and the BlangIDE are compatible with the EclEmma code coverage tool).
The output of the test has the form:
```
nProgramTraces = 23868
true normalization constant Z: 0.345
expected Z estimate over all traces: 0.34500000000000164
```
Showing that indeed our implementation of SMC is unbiased.
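For readers outside the Blang/Java ecosystem, the same exhaustive-enumeration argument can be sketched in a few lines. The toy below uses invented numbers (a two-state unnormalized target with a uniform proposal and a simple importance-sampling estimator of $$Z$$, not the SDK's SMC): it enumerates every program trace, weights each estimate by the trace's probability, and recovers the true normalization constant exactly, mirroring what expectedZEstimate computes with ExhaustiveDebugRandom.

```python
import itertools

# Hypothetical toy setup: unnormalized target gamma on {0, 1}, uniform proposal q.
gamma = {0: 0.3, 1: 0.45}   # true normalization constant Z = 0.75
q = {0: 0.5, 1: 0.5}
N = 3                       # particles per estimate

def z_hat(trace):
    # simple importance-sampling estimate of Z from one program trace
    return sum(gamma[x] / q[x] for x in trace) / N

# Enumerate every possible program trace and weight by its probability.
expectation = 0.0
for trace in itertools.product([0, 1], repeat=N):
    p_trace = 1.0
    for x in trace:
        p_trace *= q[x]
    expectation += p_trace * z_hat(trace)

print(expectation)  # equals the true Z = 0.75 up to float rounding
```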
This can also be used to test numerically that transition probabilities of small state space discrete kernels are indeed invariant with respect to the target (facilities to help automating this will be developed as part of a future release).
Finally, when a model is fully discrete and all generate{..} blocks have the property that each realization has at most one execution trace generating it, we can check that the logf and the randomness used in the generate block match by using the arguments:
```
--engine Exact --engine.checkLawsGenerateAgreement true
```
##### Background
The basis of this test is from Geweke (2004) (this is the Geweke paper on correctness of MCMC, not to be confused with Geweke's convergence diagnostic, which as mentioned above, is unrelated), but we modify it in an important way.
Given a Blang model, as in the Geweke paper, we assume it supports two methods for simulating the random variables in the model: one via forward simulation (and since no data is used in this correctness check, it is reasonable to assume all variables can be filled in this fashion; this will be true as long appropriate generate blocks are provided for the constituents); the other, via application of MCMC transition kernels.
Let $$X$$ denote all the variables in the model. Let $$X \sim \pi$$ denote the forward simulation process. Let $$X' | X \sim T(\cdot | X)$$ denote a combination of kernels including those we want to test for invariance with respect to $$\pi$$. Let $$T_i$$ denote the individual kernels which are combined into $$T$$ via either mixture or composition. Typically, $$T$$ is irreducible but not the individual $$T_i$$'s.
Both our test and Geweke's depend on one or several real-valued test functions $$f$$, and on comparing two sets of simulations. However these two sets will be defined differently.
In Geweke's method, the two sets are:
1. The marginal-conditional simulator:
• For $$m \in \{1, 2, \dots, M_1\}$$
• $$X_m \sim \pi$$
• $$F_m = f(X_m)$$
2. The successive-conditional simulator:
• $$X_1 \sim \pi$$
• $$G_1 = f(X_1)$$
• For $$m \in \{2, 3, \dots, M_2\}$$
• $$X_m | X_{m-1} \sim T(\cdot|X_{m-1})$$. Here our definition of $$T$$ hides some details in Geweke's method, in particular that one of the $$T_i$$'s re-generate the data given the current parameter values.
• $$G_m = f(X_m)$$
Then an approximate test comparing $$\{F_1, F_2, \dots, F_{M_1}\}$$ and $$\{G_1, G_2, \dots, G_{M_2}\}$$ is derived based on an asymptotic result.
The method has several limitations:
• The validity of the approximate test relies on $$T$$ being irreducible, which means that individual kernels $$T_i$$ cannot be tested in isolation. Therefore, when the test fails, it can be time-consuming to determine the root cause.
• Whether the $$p$$ value exceeds a set threshold cannot be known exactly. A leap of faith is needed to assume the asymptotics are sufficiently accurate, and verifying whether this is the case can be very difficult in practice. More seriously, the problem is compounded when several such tests need to be combined in a multiple-testing framework. As a consequence, to the best of our knowledge, the Geweke test is not used as an automated unit test, and practitioners typically recommend visual inspection of P-P plots rather than turning Geweke tests into unit tests.
• The validity of the approximate test also relies on a CLT for Markov chains to hold, which in turn typically involves establishing geometric ergodicity. Proving geometric ergodicity is model-dependent and quite involved compared to the weaker conditions required for a law of large numbers.
We call our alternative the Exact Invariance Test (EIT), and we use it heavily to establish Blang's SDK correctness. EIT has the following properties:
• The test does not rely on irreducibility. This means that individual kernels $$T_i$$ can be tested individually, which markedly narrows down the code to be reviewed in the event of bug detection.
• EIT does not rely on asymptotic results. It is an exact test.
• The test does not rely on geometric ergodicity.
EIT compares the following two sets of samples, $$\{F_1, F_2, \dots, F_{M_1}\}$$ and $$\{H_1, H_2, \dots, H_{M_3}\}$$:
1. Those coming from the marginal-conditional simulator, as described above, $$\{F_1, F_2, \dots, F_{M_1}\}$$.
2. The exact invariant simulator:
• For $$m \in \{1, 2, \dots, M_3\}$$
• $$X_{1,m} \sim \pi$$
• For $$k \in \{2, 3, \dots, K\}$$
• $$X_{k,m} | X_{k-1,m} \sim T_i(\cdot|X_{k-1,m})$$
• $$H_m = f(X_{K,m})$$
By construction, for any $$K \ge 1, j \in \{1, \dots, M_1\}, l \in \{1, \dots, M_3\}$$, the random variable $$F_j$$ is equal in distribution to the random variable $$H_l$$ if and only if the kernel $$T_i$$ is $$\pi$$ invariant. This means that an exact test can be trivially constructed (for example if $$f$$ takes on a finite number of values, Fisher's exact test can be used). Alternatively, tests with simple to analyze asymptotics such as the Kolmogorov–Smirnov can be used. Critically, since the $$H_m$$'s are independent, the asymptotics here do not depend on irreducibility or geometric ergodicity, and off-the-shelf iid tests can be used directly. The terminology "Exact" in EIT refers to the random variables being exactly equal in distribution under the null rather than the exactness of the frequentist procedure being used to assess if indeed the two sets are equal in distribution. Note that we have here a rare instance where a point null hypothesis is indeed a well grounded approach, even when using very large values for $$M_1, M_3$$.
Here $$K \ge 1$$ is a parameter controlling the power of the test. Exact tests are available for any finite value of $$K$$.
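The defining property probed by EIT, namely that $$X_K$$ has law $$\pi$$ for every $$K \ge 1$$ whenever $$X_1 \sim \pi$$ and the kernel is $$\pi$$-invariant, can be checked symbolically on a toy chain. The sketch below uses a hypothetical three-state target with a textbook Metropolis kernel (none of this is Blang API) and verifies $$\pi T^K = \pi$$ in exact rational arithmetic:

```python
from fractions import Fraction

# Hypothetical unnormalized target on {0, 1, 2}.
w = [Fraction(1), Fraction(2), Fraction(3)]
pi = [wi / sum(w) for wi in w]

def metropolis_row(i):
    # Uniform proposal over the three states; accept with prob min(1, w[j]/w[i]).
    row = [Fraction(1, 3) * min(Fraction(1), w[j] / w[i]) if j != i else Fraction(0)
           for j in range(3)]
    row[i] = 1 - sum(row)  # rejections and self-proposals stay put
    return row

T = [metropolis_row(i) for i in range(3)]

# Law of X_K when X_1 ~ pi is pi T^(K-1); invariance means it never moves.
dist = pi
for _ in range(5):
    dist = [sum(dist[i] * T[i][j] for i in range(3)) for j in range(3)]
assert dist == pi  # exact equality thanks to rational arithmetic
```

EIT replaces this symbolic check, which is only possible for tiny chains, with a sampling-based comparison that works for arbitrary models.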
For completeness, we also review below the Cook et al. test.
• For $$m \in \{1, 2, \dots, M\}$$
• $$\tilde X_{m} \sim \pi$$. A subset of the coordinates is held fixed for the rest and viewed as synthetic data (i.e., in contrast to Geweke's test, $$T$$ does not contain kernels modifying the "data" coordinates of $$X$$).
• Set $$X_{1,m}$$ to be equal to $$\tilde X_{m}$$ for observed coordinates, and to some user-specified initialization value otherwise.
• For $$k \in \{2, 3, \dots, K_2\}$$
• $$X_{k,m} | X_{k-1,m} \sim T(\cdot|X_{k-1,m})$$
• Compute $$Q_m = \frac{1}{K_2} \sum_{k=1}^{K_2} I[f(\tilde X_{m}) < f(X_{k,m})]$$
Then, if the code is correct, the distribution of $$Q_m$$ converges to the uniform distribution as $$K_2 \to \infty$$, provided the chain is irreducible. Hence this method suffers from the same limitations as Geweke's: it cannot narrow down an error to a single $$T_i$$, and it is approximate for any finite $$K_2$$.
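The limiting uniformity of the $$Q_m$$'s rests on a rank argument: with perfect mixing, $$\tilde X_m$$ and the $$K_2$$ chain samples are exchangeable, so the number of chain samples whose $$f$$ value exceeds $$f(\tilde X_m)$$ is uniform on $$\{0, \dots, K_2\}$$. A brute-force enumeration over orderings (illustrative only, unrelated to any Blang API) confirms this:

```python
from itertools import permutations
from collections import Counter

K2 = 4
counts = Counter()
# Among K2 + 1 exchangeable continuous draws, every ordering is equally
# likely; enumerate the orderings and record how many "chain" draws
# exceed the "synthetic data" draw placed in position 0.
for perm in permutations(range(K2 + 1)):
    tilde = perm[0]
    counts[sum(1 for v in perm[1:] if v > tilde)] += 1

total = sum(counts.values())
print({j: counts[j] / total for j in sorted(counts)})  # each equals 1/(K2+1)
```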
##### Implementation of EIT in Blang
Prepare the EIT tests by assembling objects of type Instance, for which the constructor's first argument is a Blang model, and the following arguments are one or more test functions, playing the role of $$f$$ in the previous section. See for example the list of instances used to test Blang's SDK.
Once the instances are available, you can create an automatic test suite by adding a file under the src/test directory based on the following example:
```xtend
package blang

import blang.validation.ExactInvarianceTest
import org.junit.Test
import blang.validation.DeterminismTest

class TestSDKDistributions {

  @Test
  def void exactInvarianceTest() {
    test(new ExactInvarianceTest)
  }

  def static void test(ExactInvarianceTest test) {
    setup(test)
    println("Corrected pValue = " + test.correctedPValue)
    test.check()
  }

  def static void setup(ExactInvarianceTest test) {
    test => [
      nPosteriorSamplesPerIndep = 500 // 1000 creates a travis time out
      for (instance : new Examples().all) {
        test.add(instance)
      }
    ]
  }

  @Test
  def void determinismTest() {
    new DeterminismTest => [
      for (instance : new Examples().all) {
        check(instance)
      }
    ]
  }
}
```
This takes multiple-testing correction into account. The example above also shows a related test, DeterminismTest, which ensures reproducibility, i.e. that using the same random seed leads to exactly the same result.
By virtue of being standard JUnit tests, it is easy to use continuous integration tools such as Travis so that the tests are run on a remote server each time a commit is made.
For individual univariate continuous distributions, this test checks, using numerical integration, that the normalization constant is one.
```java
package blang;

import org.junit.Assert;
import org.junit.Test;

import blang.core.IntDistribution;
import blang.distributions.Generators;
import blang.distributions.YuleSimon;
import blang.types.StaticUtils;
import blang.validation.NormalizationTest;
import humi.BetaNegativeBinomial;

public class TestSDKNormalizations extends NormalizationTest {

  private Examples examples = new Examples();

  @Test
  public void normal() {
    // check norm from -infty to +infty (by doubling domain of integration)
    checkNormalization(examples.normal.model);
  }

  @Test
  public void beta() {
    // check norm on a closed interval
    checkNormalization(examples.beta.model, Generators.ZERO_PLUS_EPS, Generators.ONE_MINUS_EPS);
  }

  @Test
  public void testExponential() {
    // approximate the (0, infty) interval
    checkNormalization(examples.exp.model, 0.0, 10.0);
  }

  @Test
  public void testGamma() {
    checkNormalization(examples.gamma.model, 0.0, 15.0);
  }

  @Test
  public void testYuleSimon() {
    IntDistribution distribution = YuleSimon.distribution(StaticUtils.fixedReal(3.5));
    double sum = 0.0;
    for (int i = 0; i < 100; i++)
      sum += Math.exp(distribution.logDensity(i));
    Assert.assertEquals(1.0, sum, 0.01);
  }

  @Test
  public void testBNB() {
    IntDistribution distribution = BetaNegativeBinomial.distribution(
        StaticUtils.fixedReal(3.5), StaticUtils.fixedReal(1.2), StaticUtils.fixedReal(3.0));
    double sum = 0.0;
    for (int i = 0; i < 1000; i++)
      sum += Math.exp(distribution.logDensity(i));
    Assert.assertEquals(1.0, sum, 0.01);
  }
}
```
Having correct normalization is important when we put priors on the parameters and sample the values of the parameters. In such cases the value of the normalization constant plays an often overlooked role in the correctness of transition kernels.
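Outside of JUnit, the idea of a normalization test is just numerical quadrature of the exponentiated log-density. The sketch below uses a hand-written Normal(0, 1) log-density and a trapezoid rule (both assumptions of this sketch; the SDK test instead integrates the Blang model's density via checkNormalization):

```python
import math

# Hand-written Normal(0, 1) log-density, an assumption for this sketch.
def log_density(x):
    return -0.5 * math.log(2.0 * math.pi) - 0.5 * x * x

def trapezoid(f, lo, hi, n=10000):
    # composite trapezoid rule on [lo, hi] with n panels
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

# The density decays fast, so [-10, 10] captures essentially all the mass.
total = trapezoid(lambda x: math.exp(log_density(x)), -10.0, 10.0)
print(total)  # ~1.0, confirming the normalization
```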
# OpenGL How to optimize scene rendering
## Recommended Posts
##### Share on other sites
Have you profiled to see where the bottleneck is? I love the debug info you have in your HUD, but that isn't really enough to go on. In particular, are you GPU bound or CPU bound (I would assume the former)?
First off, I am pretty sure that switching shaders ought to be more expensive than switching textures - most engines sort by material rather than texture.
Secondly, by my estimation you are only rendering about 30 triangles per batch, which is very low for modern cards. Can you increase this at all?
No, I don't have much experience profiling code, and I'm using Delphi, so I really don't have any idea how to do it.
But probably CPU bound, as the scene geometry is far from complex for my gfx card.
Already changed the sorting to include shaders and the fps increased a little.
"Secondly, by my estimation you are only rendering about 30 triangles per batch, which is very low for modern cards. Can you increase this at all?"
Hmmm, as you can see, most of the meshes in the scene are very low-poly trees.
How can I batch more than one mesh per call, since every mesh has its own transformation matrix?
Quote:
Original post by Relfos: How can I batch more than one mesh per call, since every mesh has its own transformation matrix?
Either by using the instancing extension (only available on the newest nVidia cards) or by pseudo-instancing. Interestingly, pseudo-instancing seems to be no slower than "real" instancing (at least not measurable).
For pseudo-instancing, you pass your transform matrix (or something else) as a color or texture coordinate using the glTexCoord or glColor functions (or glVertexAttrib). Note that you do not have to add anything extra to your vertex buffer data; you use these functions to set the state once before rendering each object. It is the same as you would do in immediate mode (although it is not immediate mode!). This is very little overhead and effectively lets you draw hundreds of trees in one batch.
In your vertex shader, you figure out what to make of that data. It is pretty much up to you what you want to do. For example, if you want to be able to scale your trees and place them at different locations, but you don't care about orientation, that will fit neatly into one attribute. Construct a matrix from it, and multiply.
Wow, I could never think of doing that, very clever indeed, I'll try it then, thanks :)
Quote:
Original post by Relfos: About the shadow volume itself, it's done on the CPU, and even if I disable it in this scene, I only gain about 5 or 6 more fps.
Only gain 5 or more fps? That's a lot of gain for this particular scene.
12 FPS = 83 ms.
12 + 5 = 17 FPS = 58 ms.
Speed gain = (83 - 58) / 83 = 30%
If the framerate with shadows was 30 FPS, it would probably be around 44 FPS with shadows turned off. Please stop using FPS to measure framerate and use milliseconds per frame instead. Speed gains are more obvious that way.
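The conversion is easy to reproduce. A tiny helper (using the thread's numbers) makes the milliseconds-per-frame comparison explicit:

```python
def frame_ms(fps):
    """Milliseconds spent on one frame at the given frame rate."""
    return 1000.0 / fps

with_shadows = frame_ms(12)      # ~83 ms
without_shadows = frame_ms(17)   # ~59 ms
gain = (with_shadows - without_shadows) / with_shadows
print(f"speed gain per frame: {gain:.1%}")
```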
Yeah, it's really hard to say where the slowdown is without a little profiling. I believe the newer high end versions of Visual C++ come with a profiler, although I moved to Linux before that was the case so I don't know too much about it. I don't know any free Windows profilers off the top of my head, but since you're doing OpenGL you could always install some flavor of Linux and use Valgrind, which is about as powerful as they come ;) Seriously though...
My first observation though, is that 14k polygons is a LOT for a scene of the complexity I'm seeing in that screenshot. Is that how many are being passed to the renderer, or are a lot of those being culled out? I wasn't too clear on how you described that.
One thing I would try is simply turning off ALL of the special effects in the scene, then turning them on again one by one to see where the biggest performance hits come in. That is, first start by rendering it as a completely static scene with no shadows, no fog, and only a simple passthrough shader that doesn't do anything fancy. Then activate all those features individually, and test the speed that way (i.e. test with ONLY shadows, ONLY fog, ONLY animation, ONLY advanced shaders). With those results you should be in a better position to try out specific optimizations or come back with more detailed information. Also, I don't know what video card you have, but my wild guess is the parallax mapping you're doing would be very slow on anything that isn't fairly recent.
The best suggestion I can think of off the top of my head would be to implement some kind of space partitioning structure (assuming you aren't using one) for the entire scene. A basic quadtree for example would be a simple one to implement, and would probably give a nice boost to both the culling and depth sorting.
Also, an instancing technique, as mentioned earlier, might help a little, although with that particular scene it doesn't look like its effect would be that great (just on the trees really, and there aren't all that many of them).
I don't know if you're doing any kind of dynamic level-of-detail work on your terrain, but that might be something to look into to reduce the polygon count being passed to the renderer. I would definitely save this for last though, because dynamic LoD on terrain is pretty complex just by itself.
And finally, is that compiled in debug or release mode?
# Hydrophilic-lipophilic balance
The Hydrophilic-lipophilic balance of a surfactant is a measure of the degree to which it is hydrophilic or lipophilic, determined by calculating values for the different regions of the molecule, as described by Griffin in 1949 and 1954. Other methods have been suggested, notably in 1957 by Davies.
## Griffin's method
Griffin's method for non-ionic surfactants as described in 1954 works as follows:
$HLB = 20 \times \frac{M_h}{M}$
where $M_h$ is the molecular mass of the hydrophilic portion of the molecule, and $M$ is the molecular mass of the whole molecule, giving a result on an arbitrary scale of 0 to 20. An HLB value of 0 corresponds to a completely hydrophobic molecule, and a value of 20 corresponds to a molecule made up completely of hydrophilic components.
The HLB value can be used to predict the surfactant properties of a molecule:
• A value from 0 to 3 indicates an antifoaming agent
• A value from 4 to 6 indicates a W/O emulsifier
• A value from 7 to 9 indicates a wetting agent
• A value from 8 to 18 indicates an O/W emulsifier
• A value from 13 to 15 is typical of detergents
• A value of 10 to 18 indicates a solubiliser or hydrotrope.
## Davies' method
In 1957, Davies suggested a method of calculating the HLB value from the chemical groups of the molecule. The advantage of this method is that it takes into account the effect of strongly and less strongly hydrophilic groups. The method works as follows:
$HLB = 7 + m \cdot H_h - n \cdot H_l$
with:
m - number of hydrophilic groups in the molecule
Hh - value of the hydrophilic groups
n - number of lipophilic groups in the molecule
Hl - value of the lipophilic groups
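Both formulas are simple enough to compute directly. In the sketch below the molecular masses and group counts are invented for illustration, and the Davies group values used (1.9 for a hydroxyl group, 0.475 per CH2 unit) are commonly quoted figures rather than values given in this article:

```python
def hlb_griffin(m_hydrophilic, m_total):
    """Griffin (1954): HLB = 20 * Mh / M, on a 0-20 scale."""
    return 20.0 * m_hydrophilic / m_total

def hlb_davies(hydrophilic_group_values, n_lipophilic, lipophilic_value=0.475):
    """Davies (1957): HLB = 7 + sum of hydrophilic values - n * lipophilic value."""
    return 7.0 + sum(hydrophilic_group_values) - n_lipophilic * lipophilic_value

# Invented molecule: hydrophilic portion of mass 190 out of a total mass of 380.
print(hlb_griffin(190.0, 380.0))   # 10.0, in the O/W emulsifier range

# Invented molecule: one -OH group (value 1.9) and twelve -CH2- units.
print(hlb_davies([1.9], 12))       # ~3.2
```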
# Comparing industrial robot arms [closed]
I'd like to study the capabilities of industrial robot arms. For example, to answer the question how does price vary with precision, speed, reach and strength?
Is there a database of industrial robot arms including information like the price, precision, speed, reach and strength of each model?
• I'm voting to close this because I don't feel that there's any "right" answer - everything could probably be found out with an online search. That is, I feel that the answers are all going to be "primarily opinion-based". Jul 23, 2015 at 12:37
• @Chuck: The right answer than I am looking for is a link to a database that provides this information.
– J D
Jul 23, 2015 at 21:42
For example, how does price vary with precision, speed, reach and strength?
Prices vary a lot, from a couple of hundred bucks to hundreds of thousands of dollars (Willow Garage's one-armed PR2 robot costs \$285,000 and the two-armed one costs \$400,000). The price goes up, as you can guess, whenever the robot arm is precise, fast, long, strong, collaborative (i.e. designed to interact with its environment), or has more DoFs (Degrees of Freedom). Also, take into account the flanges, tools, the palette of software they come with, certifications, etc. It really depends.
Is there a database of industrial robot arms including information like the price, precision, speed, reach and strength of each model?
I don't know of such a database, but you can find all this information (except prices; these companies are pretty secretive and usually reveal them only to potential customers. I work on KUKA's robots in my company, and I didn't know their price until our PR told me about the deal) in the robots' manuals available on the companies' websites. If you want to narrow down your choices, check what's available for the applications you want your robot to perform (if the robot will do repetitive work inside a confined space, there's no need to buy an expensive collaborative robot). I think the big players in the industry, like KUKA, ABB, or Fanuc, have a complete range of robots for different applications and at different sizes. Check this link out:
http://blog.robotiq.com/bid/63528/What-are-the-different-types-of-industrial-robots
• "except prices". Yeah, the price is the thing I'm really struggling with. It seems the only robot vendors who advertise prices are those competing on price such as the Baxter robot from Rethink Robotics. Interesting you say that collaborative robots are expensive when Baxter and the Universal Robotics UR series are relatively cheap. Do you mean collaborative robots from the likes of KUKA, FANUC and ABB?
– J D
Jul 23, 2015 at 21:47
• Most of the sophisticated robot arms are not destined for personal use, which is why they seem very expensive, but rather for industrial use. They perform routine, repetitive and tedious work 24/7 in production lines, so they replace many humans while eliminating human error, which is cost-saving if you are a car manufacturer like VW or Mercedes. Jul 26, 2015 at 14:34
• What I mean by "collaborative robots" are the ones that interact with humans without hurting them. The one I know about is KUKA's LBR iiwa kuka-robotics.com/germany/en/products/industrial_robots/… which has torque sensors on each axis, so it can detect any collision or contact with humans. Jul 26, 2015 at 14:39
• Yeah, the KUKA LBR iiwa is \$200k.
– J D
Jul 26, 2015 at 22:41
• I don't think it's \$200k; in Europe it's around €55k. Jul 28, 2015 at 8:43
# miscellaneous
After years of studying probability, I have grown more and more fond of the construction of the probability space: it looks deceptively simple, yet it is interesting enough to be applied to many areas beyond mathematics.
Let’s review the probability space in mathematics a little. Given a space $(\Omega, \mathbb{F}, \mathbb{P})$, assume there exists a random variable $X$ such that $\{\omega \mid X(\omega)>0\}\in\mathbb{F}$; then we can measure the event $\{X>0\}$ using $\mathbb{P}(X>0)$. What’s more, adding a time dimension makes it a stochastic process $X_t(\omega)$, which has properties like $\mathbb{F}_t\subset\mathbb{F}_{t^+}$ and so on.
Secondly, let’s consider another fact: different people make different decisions when given the same tasks. Why is that? It apparently depends on the knowledge we own. How do I relate this decision making to the idea behind the construction of a probability space? This has attracted me these days, and I’ll post what I think later.
To be continued……
I am using the "setspace" package to format my document with double spacing (requirement for an assignment). One of my section headers takes up two lines, though, and these section headers look better with single spacing. I went ahead and fixed it with the following.
Nunc venenatis nulla eu arcu pellentesque eu molestie nunc condimentum.
Donec sodales lacinia dictum.
Sed aliquam turpis quis enim bibendum pharetra.
This is the last paragraph in section i.
\singlespace
\section{The Next Section Which Has a Fairly Long Name that Stretches Over Two Lines}
\doublespace
This is the first paragraph in section i+1.
Cras ut tortor vel dui ultricies dapibus vitae sit amet nisi.
Aliquam rhoncus leo id eros volutpat faucibus.
Integer lectus elit, varius et semper eget, tristique vel odio.
This is the only case (so far) in my paper where the heading requires two lines, so it's not a big deal to fix it with this hack. However, I can imagine as the paper gets longer and has more sections, it could become more tedious to add this hack multiple times.
Is there any way I can indicate once that section headings should be single-spaced and paragraphs should be double-spaced, rather than adding this hack multiple times throughout my document?
• I think that \let\OldSection\section \renewcommand*{\section}[1]{\singlespace\OldSection{#1}\doublespace} will give you the results you are looking for. – Peter Grill Apr 2 '13 at 3:06
• @PeterGrill be careful! With your redefinition you lose the optional argument for \section. – Gonzalo Medina Apr 2 '13 at 3:13
• @GonzaloMedina: Yeah, always forget about those -- plus you beat me to it, so your answer is better. So probably better to use \usepackage{letltxmacro} \LetLtxMacro\OldSection\section \renewcommand*{\section}[2][]{\singlespace\OldSection[#1]{#2}\doublespace}. – Peter Grill Apr 2 '13 at 3:16
• @PeterGrill that will also produce problems with an eventual \tableofcontents. – Gonzalo Medina Apr 2 '13 at 3:25
• @GonzaloMedina: Hmmm.. that seems like a good follow up question as I can't see why it would effect the TOC. – Peter Grill Apr 2 '13 at 3:35
You can use the etoolbox package to insert \singlespacing just before the sectional units, and then to append \doublespacing:
\documentclass{article}
\usepackage{setspace}
\usepackage{etoolbox}
\makeatletter
\pretocmd{\@sect}{\singlespacing}{}{}
\pretocmd{\@ssect}{\singlespacing}{}{}
\apptocmd{\@sect}{\doublespacing}{}{}
\apptocmd{\@ssect}{\doublespacing}{}{}
\makeatother
\doublespacing
\begin{document}
Nunc venenatis nulla eu arcu pellentesque eu molestie nunc condimentum.
Donec sodales lacinia dictum.
Sed aliquam turpis quis enim bibendum pharetra.
This is the last paragraph in section i.
\section{The Next Section Which Has a Fairly Long Name that Stretches Over Two Lines}
This is the first paragraph in section i+1.
Cras ut tortor vel dui ultricies dapibus vitae sit amet nisi.
Aliquam rhoncus leo id eros volutpat faucibus.
Integer lectus elit, varius et semper eget, tristique vel odio.
\end{document}
This will apply to \section, \subsection, \subsubsection.
Another option is to use the titlesec package:
\documentclass{article}
\usepackage{setspace}
\usepackage{titlesec}
\titleformat{\section}
{\singlespacing\normalfont\Large\bfseries}{\thesection}{1em}{}
\titleformat{\subsection}
{\singlespacing\normalfont\large\bfseries}{\thesubsection}{1em}{}
\titleformat{\subsubsection}
{\singlespacing\normalfont\normalsize\bfseries}{\thesubsubsection}{1em}{}
\doublespacing
\begin{document}
Nunc venenatis nulla eu arcu pellentesque eu molestie nunc condimentum.
Donec sodales lacinia dictum.
Sed aliquam turpis quis enim bibendum pharetra.
This is the last paragraph in section i.
\section{The Next Section Which Has a Fairly Long Name that Stretches Over Two Lines}
This is the first paragraph in section i+1.
Cras ut tortor vel dui ultricies dapibus vitae sit amet nisi.
Aliquam rhoncus leo id eros volutpat faucibus.
Integer lectus elit, varius et semper eget, tristique vel odio.
\end{document}
Or, using the reduced syntax:
\usepackage{titlesec}
\titleformat*{\section}{\normalfont\Large\bfseries\singlespacing}
\titleformat*{\subsection}{\normalfont\large\bfseries\singlespacing}
\titleformat*{\subsubsection}{\normalfont\normalsize\bfseries\singlespacing}
By the way, the setspace package provides several commands and environments; the commands (switches) end in "ing": \singlespacing, \onehalfspacing, \doublespacing, whereas the environments are singlespace, onehalfspace, doublespace.
Using \doublespace as you are doing (as a switch) is not entirely correct; the following simple document:
\documentclass{article}
\usepackage{setspace}
\doublespace
\begin{document}
test
\end{document}
when processed will show in the output console a message
(\end occurred inside a group at level 1)
### semi simple group (level 1) entered at line 4 (\begingroup)
which indicates that a group started but was never ended (in this case, the group created by the \doublespace command associated to the environment doublespace). The correct form of using the switch is
\documentclass{article}
\usepackage{setspace}
\doublespacing
\begin{document}
test
\end{document}
and, for the corresponding environment:
\documentclass{article}
\usepackage{setspace}
\begin{document}
\begin{doublespace}
test...
\end{doublespace}
\end{document}
If one of the "standard" document classes -- article, report, and book -- or a document class that's based on one of the standard classes is in use, a straightforward solution consists in loading the sectsty package and issuing the instruction \allsectionsfont{\singlespacing} in the preamble.
An MWE (minimum working example):
\documentclass{article}
\usepackage{setspace,lipsum}
\doublespacing
\usepackage{sectsty}
\allsectionsfont{\singlespacing}
\begin{document}
\lipsum[1] % filler text
\section{The Next Section Which Has a Fairly Long Name that Stretches Over Two Lines}
\lipsum[2] % more filler text
\end{document}
# How to Prepare for PRM Exam – Part I
This post is for students who are preparing for or planning to write the PRM exam.
The Professional Risk Manager (PRM) Exam has become quite popular among risk professionals and students, and it competes head-on with the respected Financial Risk Manager (FRM) Exam from GARP. Even though both exams certify you as a risk manager, they are quite different.
PRM takes a holistic approach to the field of risk management and tests students on all aspects of it. For this purpose, the complete PRM exam is divided into four exams. Candidates are required to pass all four exams to achieve the PRM designation:
• Exam I: Finance Theory, Financial Instruments and Markets
• Exam II: Mathematical Foundations of Risk Measurement
• Exam III: Risk Management Practices
• Exam IV: Case Studies, PRMIA Standards of Best Practice, Conduct and Ethics, Bylaws
The exams are computer-based, conducted at Pearson Vue testing centers. So, you can sign up to write the exam at any Pearson Vue center nearest to your location.
## General Tips for Preparing
Let us first look at some general tips that are relevant for all the four exams. After that we will delve into specific subject matter related tips for each of the four exams.
### Focus of the exam
The focus of the exam is to check your real understanding of the subject rather than your memory. Most of the questions target your conceptual understanding. Sometimes a question may appear very simple, but it will really be checking your true knowledge. Of course, it is important to remember some key formulas, but it is more important to get your basics right.
### The study material
PRMIA publishes the PRM Handbook, which is the official reading material for the PRM exam. Apart from that, there are many training providers who offer PRM coaching.
The best way to prepare is to stick to one set of references to prepare for the exam, rather than diving into study materials from various providers.
For a diligent student, the PRM Handbook is the best resource for exam preparation. If you are preparing with the PRM Handbook, you can also refer to other books, such as Hull, Jorion, or Fabozzi, for any particular topic you want more clarification on. For the most part, however, the PRM Handbook is enough.
If you find the PRM Handbook difficult to follow (which it is at times), then you may buy study notes from a quality training provider.
In any case, try to avoid referring to multiple sets of study materials. Doing so will create more confusion than help, because it is difficult to reconcile the differences between them; different providers use slightly different notation and approaches.
### Exam questions and practice
One drawback of the PRM Handbook is that it does not contain any sample questions. The study guide does provide some sample questions for each exam, but that's not sufficient, so you will most likely have to purchase mock tests from an external training provider. You should also check out past FRM exam papers; even though they are not directly relevant, they will help because many topics are common to the two exams.
### Preparation time and schedule
The preparation time for each exam is different.
Exam I and Exam III are the lengthiest, as they cover the most topics. Each of these exams will require approximately 150 to 200 hours of study time.
For Exam II, how much time you spend depends entirely on your educational background. If you have a math background, it may take very little time (less than 100 hours). Without a math background, you will have to build your knowledge from scratch, and it can take longer.
Exam IV is the easiest and the study material is very concise. In my view, you won’t need more than 30 hours of study time for this.
The best way to study is to spend most of your time (about 70%) reading the material, and the remaining 30% solving practice tests and revising.
### Use of calculator
In the exam, you are allowed to use an online scientific calculator; no other tools are allowed. So, while preparing for the exam, make it a habit to use a scientific calculator for all your practice rather than a spreadsheet. Also, remember that if a problem cannot be solved using the calculator, it will not be asked in the exam!
### Managing time while writing the exam
The following table shows the number of questions in each exam and the exam durations:
Each exam provides plenty of time to answer the questions. Most likely, you will be able to complete the exam within 70% of the allotted time. So, you will have enough time to review the questions.
The best way to approach the paper is not to spend too much time on any question in the first pass. Look at a question and try to answer it. If you can answer it within a reasonable time, say 2 to 3 minutes, well and good. If you think you will take longer, skip it and move on to the next question. This way you will have gone through all the questions in half the time, and in the remaining time you can come back to the unsolved questions and attempt them.
Another important point is that there is no negative marking for wrong answers. So, don’t leave any question unanswered, even if you are not sure of the answer.
This concludes the general tips. In the next posts, I will take up each exam and provide specific preparation tips related to its subject matter.
You can discuss more about the PRM exam at the PRM Exam Forum.
## Standing Room Only
In 2012 the global fertility rate - births per woman - was 2.5, compared to 2.0 needed for a stable population, assuming no stillbirths, and that all babies born survived to have children themselves. This means that every couple produced 2.5 children and over the course of a generation the population increases by a factor of
$\frac{2.5}{2.0}=1.25$
.
In 2012 the world population was about 7 billion. After n generations the world population would be
$7 \times 10^9 \times 1.25^n$
.
The Earth is roughly a sphere of radius
$6370 \; km = 6.370 \times 10^6 \; m$
so it has a surface area equal to
$4 \pi r^2 = 4 \pi \times (6.370 \times 10^6)^2 = 5.1 \times 10^{14} \; m^2$
Over the whole surface of the Earth, each person will have only 1 square metre to call his own when the population equals the surface area.
$7 \times 10^9 \times 1.25^n = 5.1 \times 10^{14}$
This will be after
$n = \frac{\ln((5.1 \times 10^{14})/(7 \times 10^9))}{\ln 1.25} \approx 50$
generations.
If we take a generation as 45 years, there will be standing room only after
$50 \times 45 = 2250$
years.
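The arithmetic above can be checked with a short script (Python here; all constants are taken from the text):

```python
import math

# Earth's surface area, assuming a sphere of radius 6.370e6 m.
surface_area = 4 * math.pi * 6.370e6 ** 2   # ~5.1e14 m^2

population = 7e9        # world population in 2012
growth = 2.5 / 2.0      # per-generation growth factor

# Generations until the population equals the surface area in square metres.
n = math.log(surface_area / population) / math.log(growth)

print(round(n))         # 50 generations
print(round(n) * 45)    # 2250 years at 45 years per generation
```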
# Are ECDSA keys and RSA keys interchangeable?
Is it possible to use an ES512 key pair for RS512 signatures? Is it possible to use an RS512 key pair for ES512 signatures? I've also posted an issue in the NodeJS jwa package.
I created a key pair for EC512 with the following commands:
openssl ecparam -genkey -name secp521r1 -noout -out ecdsa-p521-private.pem
openssl ec -in ecdsa-p521-private.pem -pubout -out ecdsa-p521-public.pem
Then, instead of using ECDSA as I should, I used RSA:
jwa = require('jwa');
rsa = jwa('RS512');
signature = rsa.sign('hello', fs.readFileSync('ecdsa-p521-private.pem').toString());
result = rsa.verify('hello', signature, fs.readFileSync('ecdsa-p521-public.pem').toString());
And the result was true!
Is this an expected behaviour? Is it possible to use ECDSA keys for RSA and vice versa?
Thanks
--- EDIT ---
Here's a list of possible algorithms to choose from (for my json-web-token implementation), so we will have common terminology:
• HS256: HMAC using SHA-256 hash algorithm
• HS384: HMAC using SHA-384 hash algorithm
• HS512: HMAC using SHA-512 hash algorithm
• RS256: RSASSA using SHA-256 hash algorithm
• RS384: RSASSA using SHA-384 hash algorithm
• RS512: RSASSA using SHA-512 hash algorithm
• ES256: ECDSA using P-256 curve and SHA-256 hash algorithm
• ES384: ECDSA using P-384 curve and SHA-384 hash algorithm
• ES512: ECDSA using P-521 curve and SHA-512 hash algorithm
--- EDIT ---
Following Squeamish Ossifrage's suggestion to use ed25519, here is my Node.JS code for signing and verifying "nonstandard ed25519 json-web-tokens".
const jwt = require('jsonwebtoken');
const base64url = require('base64url');
const ed25519 = require('ed25519');
const ed25519Header1 = base64url.encode(Buffer.from('{"alg":"Ed25519","typ":"JWT"}'));
const ed25519Header2 = base64url.encode(Buffer.from('{"typ":"JWT","alg":"Ed25519"}'));
const noneHeader = base64url.encode(Buffer.from('{"alg":"none","typ":"JWT"}'));
function sign(obj, privateKey, options) {
const noneToken = jwt.sign(obj, '', { ...options, algorithm: "none" });
if (jwt.decode(noneToken, {complete: true}).header.typ !== 'JWT') {
throw new Error("Invalid object to sign");
}
const [, payload] = noneToken.split('.');
const signature = base64url.encode(ed25519.Sign(
  Buffer.from(`${ed25519Header1}.${payload}`),
  { privateKey } // passing an object makes sure we want to sign with private-key, not seed.
));
return `${ed25519Header1}.${payload}.${signature}`;
}

function verify(token, publicKey, options) {
  const [tokenHeader, payload, signature, ...leftover] = token.split('.');
  if ((tokenHeader !== ed25519Header1) && (tokenHeader !== ed25519Header2)) {
    throw new Error("ed25519 jwt malformed - unexpected header");
  }
  const signatureBuffer = base64url.toBuffer(signature || '');
  if (base64url.encode(signatureBuffer) !== signature) {
    throw new Error("ed25519 jwt malformed - invalid signature format");
  }
  // signature is a string, therefore payload is a string (not undefined)
  if (!ed25519.Verify(
    Buffer.from(`${tokenHeader}.${payload}`),
    base64url.toBuffer(signature),
    publicKey
  )) {
    throw new Error("Invalid signature");
  }
  return jwt.verify([noneHeader, payload, '', ...leftover].join('.'), '', {
    ...options,
    algorithms: ['none']
  });
}
A pair of publicKey and privateKey can be generated with the MakeKeyPair(...) function of the ed25519 package (privateKey should be 64 bytes long and publicKey should be 32 bytes long).
The code allows to enjoy both ed25519 signing, and the features of the jsonwebtoken package.
For example:
var token = sign({hello:"world"}, privateKey, {expiresIn:'1 minute'})
console.log(verify(token, publicKey)); // prints { hello: 'world', iat: 1526078738, exp: 1526078798 }
// After 1 minute:
verify(token, publicKey); // Throws "TokenExpiredError: jwt expired"
• This shouldn't work. Maybe the library internally does some remapping from RSA to ECDSA? Also RSA-512 is very badly broken, to the point where any private person can break any keypair given less than 100 USD, so don't use it in production!
– SEJPM
May 11 '18 at 18:19
• What should I use instead (for asymetric signature)? How to generate the keys? Added a list of options.
– Ozo
May 11 '18 at 18:52
• Please let me know what you think of my nonstandard ed25519-jwt code. Do you see any reason not to publish it to npm?
– Ozo
May 11 '18 at 23:46
• This isn't really a discussion forum—best to keep the question as it was originally, and if you want to ask another question, make a separate question. Best to focus on the general procedure—like what to do with a key designated for Ed25519, and whether to blindly apply whatever algorithm the metadata on the forgery says to apply (answer: NOOOO!)—than on particulars of nodejs code review. May 12 '18 at 13:39
From the source code:
function createKeySigner(bits) {
return function sign(thing, privateKey) {
if (!bufferOrString(privateKey) && !(typeof privateKey === 'object'))
throw typeError(MSG_INVALID_SIGNER_KEY);
thing = normalizeInput(thing);
// Even though we are specifying "RSA" here, this works with ECDSA
// keys as well.
var signer = crypto.createSign('RSA-SHA' + bits);
var sig = (signer.update(thing), signer.sign(privateKey, 'base64'));
return base64url.fromBase64(sig);
}
}
See in particular the comment. What happens here is that even though you specified "RSA with SHA-512" as signature algorithm (the 'RS512' parameter, in that library's weird terminology), the code perfectly knows that the private key is for ECDSA, not RSA (look at the contents of the private key file, it starts with "BEGIN EC PRIVATE KEY"), and so it "helpfully" uses ECDSA instead of RSA, transparently.
No mystery here then, just some Node.js code doing Node.js stuff (i.e. doing what it can instead of what you asked for, and hoping for the best -- a rather dubious methodology for all things related to security, in particular cryptography).
• Just posted a link to your answer on the issue comments. I think this should technically be the right answer to my question as it solves the mystery - even though @squeamish-ossifrage 's answer was also very helpful.
– Ozo
May 11 '18 at 19:30
• I just love that last section of the answer. It seems paramount to everything ECMAScript, and it completely leaves everybody guessing. And it is impossible to base any security proof or review on guesswork. May 12 '18 at 2:00
• The library's terminology follows the JSON Web standard; where that came from I don't know. May 13 '18 at 9:23
NOOOOOOOOOOOOOOOOOOOOOOOOOOOOO!
You are aiming a gun at your foot and your finger is on the trigger and you're getting ready to pull it! Don't do it, buddy! Your foot's got a lot to live for! Also the library you're using is probably broken.
1. ECDSA and RSA are completely unrelated families of cryptosystems. They are in no way compatible whatsoever.
2. JWT is a catastrophe waiting to happen. It's designed to let an adversary control what algorithms use to verify things, including using symmetric-key algorithms with public-key verification, which enables trivial forgery. Just say no to JWT!
3. If RS512 means RSA with a 512-bit modulus, it is completely broken. (If it doesn't mean that, then it's not very well-named.)
4. If EC512 means ECDSA with curve secp521r1 (which is a really bad choice of nomenclature that indicates the authors of the library you're using aren't interested in citing serious crypto standards or literature under the standard well-recognized names), it's a slow but reasonable choice if you are going to use ECDSA at all. secp256r1 would be a more conventional choice at a reasonable 128-bit security level, as would secp256k1, as Bitcoin uses.
5. ECDSA is an archaic slow signature scheme that is full of sharp edges: if your RNG is wedged any time you make a signature (not just when you generate the key), then you may leak the private key. It's possible your ECDSA implementation uses RFC 6979 to avoid this, but are you confident it does? I'm not!
6. Please just use Ed25519 and don't let the document you're verifying dictate what signature scheme you use like JWT naively does.
• 7. Avoid using one key for two different purposes. Especially asymmetric keys. May 11 '18 at 18:26
• a. Just for the record, the node-jwa has over 1M downloads per week on npm: npmjs.com/package/jwa . The very popular jsonwebtoken package uses it. It is not "my broken library", it is "everybody's broken library". b. Which of the listed algorithm should I use for asymetric json-web-token in 2018? What command should I run to generate the keys? I've added the list of algorithms to the question.
– Ozo
May 11 '18 at 18:44
• OK, just found this: auth0.com/blog/… (tldr; all of the listed asymetric algorithms are broken). I will start googling for a NodeJS package for signing and verifying json-web-tokens that supports Ed25519. FML :(
– Ozo
May 11 '18 at 18:59
• Ed25519 is not really in the json-web-token standard, right?
– Ozo
May 11 '18 at 19:09
• I took your advice and published jsonwebtoken-ed25519.
– Ozo
May 23 '18 at 13:19
## Trigonometry (11th Edition) Clone
Let $\theta$ be the angle in radians and let $A$ be the area of the sector. Then the ratio of the angle $\theta$ to $2\pi$ is equal to the ratio of the sector area to the area of the whole circle:
$\frac{\theta}{2\pi} = \frac{A}{\pi ~r^2}$, so $A = \frac{\theta ~r^2}{2}$.
Let the original angle be $\theta_1$ and let the original radius be $r_1$: $A_1 = \frac{\theta_1 ~r_1^2}{2}$.
We can find an expression for the new area:
$A_2 = \frac{\theta_2 ~r_2^2}{2} = \frac{(\theta_1/2) ~(2r_1)^2}{2} = 2\times \frac{\theta_1 ~r_1^2}{2} = 2~A_1$
The new area is twice the size of the original area. Since we solved the question algebraically, we can see that this result holds in general.
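A quick numeric check of the algebra (Python, with an arbitrary sample angle and radius):

```python
def sector_area(theta, r):
    """Area of a circular sector with angle theta (in radians) and radius r."""
    return theta * r ** 2 / 2

# Halve the angle and double the radius: the area doubles.
A1 = sector_area(1.2, 3.0)
A2 = sector_area(1.2 / 2, 2 * 3.0)
print(A2 / A1)  # 2.0
```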
# Math Help - A sequence problem.
1. ## A sequence problem.
Hi, I think I have an idea on how to solve a sequence problem but I could be going badly wrong. To verify if I used the right method I will ask somebody to help me with this question from Multivariable Calculus 6th Edition by James Stewart.
You have to find if {arctan 2n} converges, and if it converges to find its limit.
Do I have to find the limit of 0.218n + 0.889? The decimals are approximate so I don't think this could be it.
2. Originally Posted by Undefdisfigure
Hi, I think I have an idea on how to solve a sequence problem but I could be going badly wrong. To verify if I used the right method I will ask somebody to help me with this question from Multivariable Calculus 6th Edition by James Stewart.
You have to find if {arctan 2n} converges, and if it converges to find its limit.
Do I have to find the limit of 0.218n + 0.889? The decimals are approximate so I don't think this could be it.
Hint: arctan(2n) converges to the same thing arctan(n) does
3. Originally Posted by Jhevon
Hint: arctan(2n) converges to the same thing arctan(n) does
I'm having cognitive flashes on how to put this together, but I can't quite make it. I'll keep reading my text and maybe I'll get it; I'll put this one aside and return to it. I know you're probably reluctant to give me the answer since you gave me a hint.
4. Originally Posted by Undefdisfigure
I'm having cognitive flashes on how to put this together, but I can't quite make it. I'll keep reading my text and maybe I'll get it; I'll put this one aside and return to it. I know you're probably reluctant to give me the answer since you gave me a hint.
we are talking about sequences, right?
so we want $\lim \arctan 2n$
Let $u = 2n$*
then $\lim \arctan 2n = \lim_{u \to \infty} \arctan u = \frac {\pi}2$
*) the reason why i can substitute a dummy variable here is simply because $\lim_{n \to \infty} 2n = \lim_{n \to \infty} n$
5. Originally Posted by Jhevon
we are talking about sequences, right?
so we want $\lim \arctan 2n$
Let $u = 2n$*
then $\lim \arctan 2n = \lim_{u \to \infty} \arctan u = \frac {\pi}2$
*) the reason why i can substitute a dummy variable here is simply because $\lim_{n \to \infty} 2n = \lim_{n \to \infty} n$
You are very intelligent. Thank you. Did you pass Calculus 5? I failed Calculus 3 and I'm taking it over.
6. Originally Posted by Undefdisfigure
You are very intelligent.
...umm, not really
Thank you.
you're welcome
Did you pass Calculus 5?
yes. but call it adv calc 2. the 5 makes it seem hard and causes people to overestimate my abilities
I failed Calculus 3 and I'm taking it over.
sorry to hear that. better luck this time around. things should be easier this time. i guess i can expect to see you here regularly, huh?
7. Originally Posted by Undefdisfigure
Did you pass Calculus 5?
What topics would Calculus 5 cover??
-Dan
8. Originally Posted by topsquark
What topics would Calculus 5 cover??
My real multidimensional analysis class covered:
• topology of $\mathbb{R}^n$, i.e. open and closed sets, connectedness, compactness, ...
• differentiability in $\mathbb{R}^n$, Taylor's theorem in $\mathbb{R}^n$, the general chain rule, Hessian and critical points
• the implicit function theorem
• theory of integration on a line, theory of integration in $\mathbb{R}^n$, Fubini's theorems, change of variables - Jacobian
• line integrals, surface integrals, Green's and Divergence theorems
• Fourier series, convergence of Fourier series, theorems of which functions can be written in Fourier series
9. Originally Posted by Jhevon
we are talking about sequences, right?
so we want $\lim \arctan 2n$
Let $u = 2n$*
then $\lim \arctan 2n = \lim_{u \to \infty} \arctan u = \frac {\pi}2$
*) the reason why i can substitute a dummy variable here is simply because $\lim_{n \to \infty} 2n = \lim_{n \to \infty} n$
When u approaches infinity, how do you know that arctan u equals PI/2? Is it a hard and fast rule I can find in the text?
10. Originally Posted by Undefdisfigure
When u approaches infinity, how do you know that arctan u equals PI/2? Is it a hard and fast rule I can find in the text?
Do you know what the graph of y = arctan(x) looks like? Your original question implies a background that ought to make the answer 'yes'. Contemplate this graph .......
Alternatively, contemplate the graph of y = tan(x) .......
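To make the limit concrete, here is a quick numeric check (Python): for large $n$, $\arctan 2n$ is already indistinguishable from $\pi/2$ to many decimal places, with the gap shrinking like $1/(2n)$.

```python
import math

n = 10 ** 9
gap = math.pi / 2 - math.atan(2 * n)
print(gap)  # ~5e-10, i.e. roughly 1/(2n)
```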
# Approximate a convolution as a sum of separable convolutions
I want to compute the discrete convolution of two 3D arrays: $A(i, j, k) \ast B(i, j, k)$
Is there a general way to decompose the array $A$ into a sum of a small number of separable arrays? That is:
$A(i, j, k) = \sum_m X_m(i) \times Y_m(j) \times Z_m(k)$
If not, is there a standard way to approximate $A$ as a sum of a relatively small number of separable arrays? $A$ is typically smooth, with a central maximum that dies off towards the edges.
Motivation: I want to compute many iterations of $A \ast B$, with $A$ constant but $B$ changing. $A$ is quite large (16 x 16 x 16, for example), but $B$ is larger still (64 x 1024 x 1024, for example). Direct convolution is very slow to compute. FFT-based convolution is much faster, but still takes tens of seconds for one FFT of $B$ on my machine, and uses a lot of memory. If $A$ is separable, however, three direct 1D convolutions are much faster and use less memory:
import time, numpy
from scipy.fftpack import fftn, ifftn
from scipy.ndimage import gaussian_filter
a = numpy.zeros((16, 16, 16), dtype=numpy.float64)
b = numpy.zeros((64, 1024, 1024), dtype=numpy.float64)
fftn_a = fftn(a, shape=b.shape)
start = time.clock()
ifftn(fftn_a * fftn(b)).real
end = time.clock()
print "fft convolution:", end - start
start = time.clock()
gaussian_filter(b, sigma=3)
end = time.clock()
print "separable convolution:", end - start
fft convolution: 49 seconds
separable convolution: 9.2 seconds
Alternatively, if someone's got a better way to compute this type of convolution, I'm all ears.
A separable 2D kernel is a rank one kernel (in the sense of matrix rank). So one natural approach is to approximate your 2D kernel with a rank one approximation.
Suppose $F$ is your 2D kernel and $F^*$ an approximation; $\min \|F - F^*\|$ s.t. $\mathrm{rank}(F^*) = 1$ has an analytical solution (the norm we are minimizing here is the Frobenius norm). That solution is the SVD of $F$, truncated to its largest singular value. You can then easily extract your 1D kernels from that matrix.
However, the generalization of rank to higher dimensions is not as clear-cut. You might want to check the higher-order SVD algorithm to see if you can adapt it to your problem.
However, if your 3D kernel is symmetric along the three Cartesian axes, you could try the following heuristic: take the middle 2D slice, find the best rank-1 approximation with the SVD method, extract the 1D kernel from it, and build a 3D kernel from that; it might give you something acceptable.
Michael
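To make the SVD recipe concrete, here is a sketch in Python/NumPy (the kernel F below is a made-up example, not from the question): the best rank-1 approximation in the Frobenius norm keeps only the largest singular value, and by the Eckart-Young theorem the residual norm equals the root-sum-square of the discarded singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
# A slightly non-separable 2D kernel: separable base plus a small perturbation.
F = np.outer(np.hanning(9), np.hanning(9)) + 0.05 * rng.random((9, 9))

# Best rank-1 (separable) approximation via truncated SVD.
U, s, Vt = np.linalg.svd(F)
x = U[:, 0] * np.sqrt(s[0])   # 1D kernel along rows
y = Vt[0, :] * np.sqrt(s[0])  # 1D kernel along columns
F1 = np.outer(x, y)

# The approximation error is exactly the norm of the dropped singular values.
err = np.linalg.norm(F - F1)
print(np.isclose(err, np.sqrt(np.sum(s[1:] ** 2))))  # True
```

Convolving with x along one axis and y along the other then approximates convolution with F.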
Two ways to to find a separable approximation $x \otimes y \otimes z$ to a 3d filter A,
i.e. minimize $|A - x \otimes y \otimes z|$:
1) scipy.optimize.leastsq can easily handle this least-squares problem in its 48 unknowns (16 entries each for x, y and z), with residual
A.ravel() - np.outer( np.outer( x, y ), z ).ravel()
2) for fixed y and z, the optimal x is just a projection; so optimize x y z x y z ... in turn. I'd expect this to converge pretty fast, but haven't tried it.
With either method, you can of course add more terms, minimize $|A - x \otimes y \otimes z - x2 \otimes y2 \otimes z2 \dots|$ .
Have you timed scipy.ndimage.filters.convolve vs. three convolve1d calls?
You should check the following links for a discussion in 2D:
http://blogs.mathworks.com/steve/2006/10/04/separable-convolution/ http://blogs.mathworks.com/steve/2006/11/28/separable-convolution-part-2/
In short, not all kernels are separable. You can test for separability based on the rank of the kernel matrix, and you can approximate a kernel by a separable one in a least-squares sense (using the SVD).
For 3D you may presumably find something along the same lines (using tensors?).
Peter
This is a good answer to the question "How can I tell if my kernel is separable?". My question is, is there a good way to take a non-separable kernel and represent it as a sum of separable kernels? You say something about a LS sense using SVD. Can you elaborate on this point? – Andrew Oct 10 '12 at 21:56
# 22.2 Breadth-first search
## 22.2-1
Show the $d$ and $\pi$ values that result from running breadth-first search on the directed graph of Figure 22.2(a), using vertex $3$ as the source.
$$\begin{array}{c|cccccc} \text{vertex} & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline d & \infty & 3 & 0 & 2 & 1 & 1 \\ \pi & \text{NIL} & 4 & \text{NIL} & 5 & 3 & 3 \end{array}$$
## 22.2-2
Show the $d$ and $\pi$ values that result from running breadth-first search on the undirected graph of Figure 22.3, using vertex $u$ as the source.
$$\begin{array}{c|cccccc} \text{vertex} & r & s & t & u & v & w & x & y \\ \hline d & 4 & 3 & 1 & 0 & 5 & 2 & 1 & 1 \\ \pi & s & w & u & \text{NIL} & r & t & u & u \end{array}$$
## 22.2-3
Show that using a single bit to store each vertex color suffices by arguing that the $\text{BFS}$ procedure would produce the same result if lines 5 and 14 were removed.
(Removed)
## 22.2-4
What is the running time of $\text{BFS}$ if we represent its input graph by an adjacency matrix and modify the algorithm to handle this form of input?
The time to iterate over all edges becomes $O(V^2)$ instead of $O(E)$. Therefore, the running time is $O(V + V^2) = O(V^2)$.
## 22.2-5
Argue that in a breadth-first search, the value $u.d$ assigned to a vertex $u$ is independent of the order in which the vertices appear in each adjacency list. Using Figure 22.3 as an example, show that the breadth-first tree computed by $\text{BFS}$ can depend on the ordering within adjacency lists.
(Removed)
## 22.2-6
Give an example of a directed graph $G = (V, E)$, a source vertex $s \in V$, and a set of tree edges $E_\pi \subseteq E$ such that for each vertex $v \in V$, the unique simple path in the graph $(V, E_\pi)$ from $s$ to $v$ is a shortest path in $G$, yet the set of edges $E_\pi$ cannot be produced by running $\text{BFS}$ on $G$, no matter how the vertices are ordered in each adjacency list.
(Removed)
## 22.2-7
There are two types of professional wrestlers: "babyfaces" ("good guys") and "heels" ("bad guys"). Between any pair of professional wrestlers, there may or may not be a rivalry. Suppose we have $n$ professional wrestlers and we have a list of $r$ pairs of wrestlers for which there are rivalries. Give an $O(n + r)$-time algorithm that determines whether it is possible to designate some of the wrestlers as babyfaces and the remainder as heels such that each rivalry is between a babyface and a heel. If it is possible to perform such a designation, your algorithm should produce it.
(Removed)
## 22.2-8 $\star$
The diameter of a tree $T = (V, E)$ is defined as $\max_{u,v \in V} \delta(u, v)$, that is, the largest of all shortest-path distances in the tree. Give an efficient algorithm to compute the diameter of a tree, and analyze the running time of your algorithm.
Suppose that $a$ and $b$ are the endpoints of the path in the tree which achieves the diameter, and without loss of generality assume that $a$ and $b$ are the unique pair which do so. Let $s$ be any vertex in $T$. We claim that the result of a single $\text{BFS}$ will return either $a$ or $b$ (or both) as the vertex whose distance from $s$ is greatest.
To see this, suppose to the contrary that some other vertex $x$ is shown to be furthest from $s$. (Note that $x$ cannot be on the path from $a$ to $b$, otherwise we could extend). Then we have
$$d(s, a) < d(s, x)$$
and
$$d(s, b) < d(s, x).$$
Let $c$ denote the vertex on the path from $a$ to $b$ which minimizes $d(s, c)$. Since the graph is in fact a tree, we must have
$$d(s, a) = d(s, c) + d(c, a)$$
and
$$d(s, b) = d(s, c) + d(c, b).$$
(If there were another path, we could form a cycle). Using the triangle inequality and inequalities and equalities mentioned above we must have
\begin{aligned} d(a, b) + 2d(s, c) & = d(s, c) + d(c, b) + d(s, c) + d(c, a) \\ & < d(s, x) + d(s, c) + d(c, b). \end{aligned}
I claim that $d(x, b) = d(s, x) + d(s, b)$. If not, then by the triangle inequality we must have a strict less-than. In other words, there is some path from $x$ to $b$ which does not go through $c$. This gives the contradiction, because it implies there is a cycle formed by concatenating these paths. Then we have
$$d(a, b) < d(a, b) + 2d(s, c) < d(x, b).$$
Since it is assumed that $d(a, b)$ is maximal among all pairs, we have a contradiction. Therefore, since trees have $|V| - 1$ edges, we can run $\text{BFS}$ a single time in $O(V)$ to obtain one of the vertices which is the endpoint of the longest simple path contained in the graph. Running $\text{BFS}$ again will show us where the other one is, so we can solve the diameter problem for trees in $O(V)$.
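The two-BFS procedure described above can be sketched as follows (Python; the dict-of-lists adjacency format is an assumption for illustration, not from the text):

```python
from collections import deque

def bfs_farthest(adj, s):
    """Return (farthest vertex from s, its distance) via breadth-first search."""
    dist = {s: 0}
    queue = deque([s])
    farthest = s
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[farthest]:
                    farthest = v
                queue.append(v)
    return farthest, dist[farthest]

def tree_diameter(adj):
    a, _ = bfs_farthest(adj, next(iter(adj)))  # first BFS: one endpoint of a longest path
    _, d = bfs_farthest(adj, a)                # second BFS: the diameter itself
    return d

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a path on 4 vertices
print(tree_diameter(path))  # 3
```

Each BFS visits every vertex and edge once, so on a tree (with $|V| - 1$ edges) the whole procedure runs in $O(V)$.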
## 22.2-9
Let $G = (V, E)$ be a connected, undirected graph. Give an $O(V + E)$-time algorithm to compute a path in $G$ that traverses each edge in $E$ exactly once in each direction. Describe how you can find your way out of a maze if you are given a large supply of pennies.
First, the algorithm computes a minimum spanning tree of the graph. Note that this can be done using the procedures of Chapter 23. It can also be done by performing a breadth-first search and restricting to the edges between $v$ and $v.\pi$ for every $v$. To avoid double-counting edges, fix any ordering $\le$ on the vertices beforehand. Then, we construct the sequence of steps by calling $\text{MAKE-PATH}(s)$, where $s$ is the root used for the $\text{BFS}$.
```
MAKE-PATH(u)
    for each v ∈ Adj[u] but not in the tree such that u ≤ v
        go to v and back to u
    for each v ∈ Adj[u] but not equal to u.π
        go to v
        perform the path prescribed by MAKE-PATH(v)
    go to u.π
```
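One way the MAKE-PATH idea can be rendered in code (this is my own sketch with hypothetical names; I recurse only along tree edges, which I read as the intent of the second loop):

```python
def full_walk(adj, parent, root):
    """Traverse every edge once in each direction (adj: vertex -> neighbours,
    parent: spanning-tree parent pointers from a BFS rooted at root)."""
    walk = [root]
    def make_path(u):
        for v in adj[u]:
            tree_edge = parent.get(v) == u or parent.get(u) == v
            if not tree_edge and u <= v:   # non-tree edge, handled from one side
                walk.extend([v, u])        # go to v and straight back to u
        for v in adj[u]:
            if parent.get(v) == u:         # v is a tree child of u
                walk.append(v)             # go to v ...
                make_path(v)               # ... explore from there ...
                walk.append(u)             # ... and come back along the tree edge
    make_path(root)
    return walk

# Triangle 1-2-3 with a BFS tree rooted at 1 (tree edges 1-2 and 1-3):
walk = full_walk({1: [2, 3], 2: [1, 3], 3: [1, 2]}, {2: 1, 3: 1}, 1)
print(walk)  # → [1, 2, 3, 2, 1, 3, 1]
```

Each of the six directed edges of the triangle appears exactly once as a consecutive pair in the walk.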
## Introductory Statistics 9th Edition
a. ${}_{69}C_5 + 26 = 11238539$, so P(winning the jackpot) = $\frac{1}{11238539}$ = 0.00000008897

b. ${}_{69}C_5 = 11238513$, so P(winning the jackpot) = $\frac{1}{11238513}$ = 0.00000008897
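A quick check of the counts with Python's `math.comb` (the "+26" in part (a) follows the textbook's setup):

```python
from math import comb

n = comb(69, 5)          # number of ways to choose 5 of 69
print(n)                 # → 11238513
print(n + 26)            # → 11238539
print(1 / (n + 26))      # ≈ 8.898e-08
```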
# How to prove that $\mathrm{var}(X-E(X|Y)) \leq \mathrm{var}(X)$?
I tried to solve this exercise but got stuck: Assume we have the random variables $X$ and $Y$ where $E(X) = 0$. How can we prove the following inequality $\operatorname{Var} (X-E(X|Y)) \leq\operatorname{Var}(X)$?
I tried to write out the lhs: $\operatorname{Var}(X-E(X|Y)) = \operatorname{Var}(X) + \operatorname{Var}(E(X|Y)) - 2\operatorname{Cov}(X,E(X|Y))$ Since $E(X) = 0$, we have $\operatorname{Var}(E(X|Y)) = E(E(X|Y)^2)$ and $\operatorname{Cov}(X,E(X|Y)) = E(XE(X|Y))$.
Thus we have $\operatorname{Var}(X) + E(E(X|Y)^2) - 2E(XE(X|Y)) \leq \operatorname{Var}(X)$ giving $E(E(X|Y)^2) \leq 2E(XE(X|Y))$
But that did not get me anywhere as you can see, does anybody have any tips? OR do you think there might be a typo in the exercise?
• Hint: Start rather from $$X=(X-E(X\mid Y))+E(X\mid Y)$$ hence $$\mathrm{var}(X)=\ldots$$ – Did Aug 17 '17 at 15:10
• Tnx for the reply! However, it didnt get me anywhere. I tried expanding, but nothing cancelled, care to give another tip? =)) $Var((X-E(X|Y)) + E(X|Y)) = Var(X-E(X|Y)) + Var(E(X|Y)) -2Cov(X-E(X|Y),E(X|Y))$ – LearningMath Aug 18 '17 at 11:20
• Let us turn to the covariance then (with a factor $+2$ instead of $-2$), which involves three terms: $$E((X-E(X\mid Y))E(X\mid Y))\qquad E(X-E(X\mid Y))\qquad E(E(X\mid Y))$$ What can one say about each of these terms? – Did Aug 18 '17 at 13:17
• Opps +2 and not -2 , as you said, however, i still do not see how this track gets me anywhere: We have $E(X-E(X|Y)) = 0$ and $E(E(X|Y)) = 0$ and finally $Cov(X-E(X|Y),E(X|Y)) = E((X-E(X|Y))E(X|Y)) = E(XE(X|Y))-E(E(X|Y)^2)$ But this will only yield my original $E(E(X|Y)^2)<2E(XE(X|Y))$ :( – LearningMath Aug 18 '17 at 14:04
• What you are lacking is a characterization of $E(X\mid Y)$ (until now, everything you tried would work with another random variable instead, which is a bad sign, right?). So, what is specific about $Z=E(X\mid Y)$, which makes that $E(XW)=E(ZW)$ for every $W$ such that $____$? IOW, how do you define $Z$? – Did Aug 18 '17 at 14:41
@Did's hints are spot on and you don't need $E(X)=0$.
Let $Z=E(X|Y)$, consider $$\text{Var}(X)=\text{Var}(X-Z+Z)=\text{Var}(X-Z)+\text{Var}(Z)+2\text{Cov}(X-Z,Z).$$ Note that \begin{gather} E(X-Z|Y)=E(X-E(X|Y)|Y)=E(X|Y)-E(X|Y)=0\\ \implies E(X-Z)=0\implies \text{Cov}(X-Z,Z)=E[(X-Z)Z]. \end{gather} In going from the first line to the second line, we have used the Law of Iterated Expectations: $$E(X-Z)=E[E(X-Z|Y)]=E(0)=0.$$ For $E[(X-Z)Z]$, we can condition on $Y$ again: $$E[(X-Z)Z|Y]=ZE(X|Y)-Z^2=Z^2-Z^2=0\implies E[(X-Z)Z]=0.$$ Thus, we have $$\text{Var}(X)=\text{Var}(X-Z)+\text{Var}(Z)\geq\text{Var}(X-Z)=\text{Var}[X-E(X|Y)].$$
• I think the OP has problems understanding the implication: $E(X-Z|Y) =0 \Rightarrow E(X-Z)=0$. – Harto Saarinen Nov 17 '17 at 8:30
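As a side note, the key decomposition $\text{Var}(X) = \text{Var}(X - Z) + \text{Var}(Z)$ is easy to sanity-check numerically. A sketch (my own illustration, using jointly normal $X, Y$ for which $E(X \mid Y) = \rho Y$ holds exactly in standardized form):

```python
import random

random.seed(0)
rho = 0.8
n = 200_000

# X = rho*Y + sqrt(1 - rho^2)*W with Y, W independent standard normals,
# so E(X|Y) = rho*Y exactly and Var(X) = 1.
samples = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
xs = [rho * y + (1 - rho**2) ** 0.5 * w for y, w in samples]
zs = [rho * y for y, _ in samples]        # Z = E(X|Y)
resid = [x - z for x, z in zip(xs, zs)]   # X - E(X|Y)

def var(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

print(var(xs))                 # ≈ 1.0
print(var(resid))              # ≈ 1 - rho^2 = 0.36
print(var(resid) <= var(xs))   # → True
```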
# History of Electromagnetic Field Tensor
I'm curious to learn how people discovered that electric and magnetic fields could be nicely put into one simple tensor.
It's clear that the tensor provides many beautiful simplifications to the original theory, by applying the abstract theory of tensors to this particular problem. For example, the strange formulas for the transformation of electric and magnetic fields in different reference frames can be explained as the transformation laws of a 2-tensor. The interdependence of the two fields in this transformation, and the fact that electric and magnetic fields are in some ways the same thing in the classical theory, can be explained by this 2-tensor. The various ad-hoc formulas that make up Maxwell's equations, some of them with curls, some with divergence, can be explained in one beautiful formula by declaring the exterior derivative of the tensor to be 0. The cross product can also be explained as an operation on anti-symmetric tensors.
So, it's clear once someone shows you the tensor formulation that it beautifully weaves together all the parts of the "elementary" (i.e. non-tensorial) theory. My question is, how did people discover this formulation in the first place? What was the motivation, and what is the history?
Some thoughts: It's true that the elementary theory provides some hints to the tensor formulation (such as some of the things I list above), but these small hints are not quite enough to motivate all the intricacies of the tensor formula, especially if one has not seen tensors before. Was the theory of tensors already floating around in the time that the field tensor was discovered, and hence did tensor experts simply notice that electromagnetism smelled like a 2-tensor? If this is the case, how did people initially realize that tensors were important in physics? Otherwise, what did happen? Why were people motivated to do it? And, wasn't the original formulation good enough, albeit not quite as mathematically elegant?
Another related question, did people know the transformation laws for electric and magnetic fields before Einstein came along? Today, you can usually only find those in books on special relativity, or in the very last chapter of a book on electromagnetism, usually a chapter on special relativity. Therefore, if you were reading a book on electromagnetism, then before you got to the chapter on relativity, you would have thought that force vectors and hence electric fields are invariant under change of reference frame, just like forces in Newtonian mechanics.
-
For those who don't know about the electromagnetic field tensor, see en.wikipedia.org/wiki/Electromagnetic_tensor. – Davidac897 Mar 16 '11 at 20:53
The earliest reference I could find is a 1873 Maxwell's treatise. That's probably as early as it gets, given that he introduced the complete set of Maxwell's equations only few years earlier in 1865. I don't know much about historical background though, so I'll just leave this as a comment. – Marek Mar 17 '11 at 10:05
Hi Marek, do you have a page number where it appears first by any chance? – luksen Mar 17 '11 at 10:20
@luksen: I only have a $n$th hand information on that, so I am afraid I don't know. Also, just now I found another (incompatible) information that the tensor was introduced by Minkowski in 1908. Not sure who to believe and trying to read those old texts is rather tiresome... – Marek Mar 17 '11 at 11:24
Where in the book does Maxwell mention the tensor? – Davidac897 Mar 17 '11 at 23:25
The earliest instance I have found is Minkowski's "Die Grundgleichungen für die elektromagnetischen Vorgänge in bewegten Körpern" in "Nachrichten von der Georg-Augusts-Universität und der Königl. Gesellschaft der Wissenschaften zu Göttingen" from 1908.
A digitized version is found at
http://echo.mpiwg-berlin.mpg.de/ECHOdocuViewfull?start=1&viewMode=images&ws=1.5&mode=imagepath&url=/mpiwg/online/permanent/library/WBPZCG9Q/pageimg&pn=1
go to page 17/18 to read:
"Ich lasse nun an diesen Gleichungen wieder durch eine veränderte Schreibweise eine noch versteckte Symmetrie hervortreten"
roughly
"I will now, through another notation, reveal a yet hidden symmetry"
and he goes on to describe the field tensor.
-
Is there an english translation of this? – Physiks lover Jul 27 '12 at 21:25
there is a translation into english by Meghnad Saha here: archive.org/details/principleofrelat00eins you can download a PDF and will find the relevant section on page 21. it's also published on wikisource: en.wikisource.org/wiki/… the translation reads "By employing a modified form of writing, I shall now cause a latent symmetry in these equations to appear." – luksen Jul 28 '12 at 17:18
Thankyou so much! – Physiks lover Jul 28 '12 at 18:37
The purpose of the electromagnetic field tensor was to demonstrate the Lorentz covariance of Maxwell's equations. It is central to the discovery of special relativity, and it got everyone excited about relativity at the time. Einstein had nothing to do with it.
-
I think that it was Minkowski's taking up Einstein's ideas that got Einstein more noticed and hence a job. Of course Minkowski gave Einstein the credit (and already had a job himself). – joseph f. johnson Jan 15 '12 at 3:30
@josephf.johnson: It was more Planck pushing this. Planck got the work published and promoted Einstein's ideas, telling people to ignore the crazy photon paper that started everything off, as it was clearly wrong, but the rest is good. – Ron Maimon Jul 28 '12 at 7:42
@Ron you're right on this so I'm curious as to which book you got this from? – Physiks lover Jul 28 '12 at 18:50
@Physikslover: You pick up gossip here and there, it was 20 years ago, I don't remember. There's a famous quote by Planck: when they asked him "What is your greatest contribution to physics?" he said "Albert Einstein". Planck wrote letters to people to get them to take relativity seriously (he was editor of Annalen and wrote a 1906 followup which did Hamilton's principle in relativity). Repeat the Planck quote with Louis Witten and Ed Witten to get a modern variation on this story. – Ron Maimon Jul 28 '12 at 20:21
Regarding whether people figured out the Lorentz transformation of E&M fields before Einstein, the answer (of course) is "sort of." According to my physics professor (Columbia U), people realized that the fields transformed according to the Lorentz transformation (although of course it wasn't yet called that), shortly before Maxwell came up with his laws. My professor roughly said that people had an idea that there was this strange dependence (Lorentz) on relative velocity of reference frames, but they didn't know what its significance was, or its relation to, for example, the not-yet-discovered position/time transformation or other Lorentz transformations.
-
I am somewhat doubtful of the above account, unless by "people" you meant Lorentz himself. An interesting tidbit: in Lorentz's 1895 paper which started this whole business, he only showed that Maxwell's theory of electromagnetism obeys the transformation laws that now bears his name up to first order. That is, he threw away all $v^4/c^4$ terms as untreated higher order corrections. It was in 1899 that he realized the transformation is exact. – Willie Wong Apr 10 '11 at 0:09
I am not going to try to answer the whole question, just one small part: tensors were already known before Maxwell's theory, they were used to study elasticity. In fact, the name 'tensor' comes from 'tension', an obviously important quantity in elasticity and mechanics (of continuous media) in general.
Tensors were also already being used in the 19th century to study algebraic forms and quadratic differential forms, though tensor calculus took its modern form with the work of Levi-Civita and Ricci.
-
# Heat Capacity of a Calorimeter
In calorimetry it is often desirable to know the heat capacity of the calorimeter itself rather than the heat capacity of the entire calorimeter system (calorimeter and water). The heat (q) released by a reaction or process is absorbed by the calorimeter and any substances in the calorimeter. If the only other substance in the calorimeter is water, the following energy balance exists:
q = qcal + qw
where qcal is the heat flow for the calorimeter and qw is the heat flow for the water.
Both of these individual heat flows can be related to the heat capacity and temperature change for the substance.
qcal = Ccal ΔT

qw = Cw ΔT

where Ccal is the heat capacity of the calorimeter and Cw is the heat capacity of the water. Because the water and calorimeter are in thermal equilibrium, they both have the same temperature and thus ΔT is the same for both. The consequence is that the heat capacity of the entire system (C) is the sum of the heat capacities for the individual components.
C = Ccal + Cw
The heat capacity is an extensive property; that is, the heat capacity depends upon the amount of substance present. The calorimeter exists as a fixed unit, thus its heat capacity is a fixed value. The amount of water in the calorimeter, however, can vary, and thus the heat capacity of the water can vary. When dealing with variable amounts of material, one often prefers to use an intensive measure of the heat capacity. One common intensive version of the heat capacity is the specific heat capacity (s), which is the heat capacity of one gram of a substance.
sw = Cw / mw
Because the mass of water (mw) and the specific heat capacity of water are both known, one can readily calculate the heat capacity of the water. The joule (J) is defined based upon the specific heat capacity of water:
sw = 4.184 J °C⁻¹ g⁻¹
Overall one can write
C = Ccal + sw mw
| Quantity | Symbol | Unit | Meaning |
| --- | --- | --- | --- |
| heat | q | J | Energy transfer that produces or results from a difference in temperature |
| temperature | T | °C or K | Measure of the kinetic energy of molecular motion |
| temperature change | ΔT | °C or K | Difference between the final and initial temperatures for a process |
| mass | m | g | Amount of material present |
| heat capacity | C | J °C⁻¹ or J K⁻¹ | Heat required to change the temperature of a substance one degree |
| specific heat capacity | s | J °C⁻¹ g⁻¹ or J K⁻¹ g⁻¹ | Heat required to change the temperature of one gram of a substance one degree |
### Experiment
Objective:
• Determine the heat capacity of the calorimeter (Ccal).
Approach:
• Use the heating element to transfer a known amount of heat to the calorimeter system.
• Observe the temperature of the system before and after the heating process.
• Calculate the change in temperature for the system.
• Calculate the heat capacity of the entire calorimeter system.
• Use the mass of water and the specific heat capacity of the water to calculate the heat capacity of the water.
• Calculate the heat capacity of the calorimeter.
In this experiment, the heating element is set to operate for 5 seconds, during which time the heating element will transfer a total of 100 kJ of heat to the calorimeter.
Perform the experiment using one of the three options for the mass of water in the calorimeter. After choosing the mass of water, be sure to reset the calorimeter.
To begin the experiment, record the initial temperature, and select the "Start" button to begin the heating process. When the heating process is finished, record the final temperature and calculate the heat capacity of the system.
After running a simulation, it is necessary to reset the system before running another simulation.
Mass of water options: 500. g, 1000. g, or 1500. g
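The calculation the experiment asks for can be sketched as follows; the temperature readings here are made-up illustrative values, not output of the actual simulation:

```python
q = 100_000.0        # heat delivered by the element, J (100 kJ)
m_w = 500.0          # mass of water, g
s_w = 4.184          # specific heat capacity of water, J / (°C·g)

T_initial = 25.00    # hypothetical observed temperatures, °C
T_final = 66.55
dT = T_final - T_initial

C_system = q / dT            # heat capacity of calorimeter + water, C = q/ΔT
C_water = s_w * m_w          # heat capacity of the water alone, Cw = sw·mw
C_cal = C_system - C_water   # heat capacity of the calorimeter, Ccal = C - Cw

print(round(C_system, 1))    # ≈ 2406.7 J/°C
print(round(C_cal, 1))       # ≈ 314.7 J/°C
```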
# How do you solve the following system: -3y + x = -3, -5x − y = 14 ?
May 15, 2018
#### Answer:
$x = -2\frac{13}{16}, \quad y = \frac{1}{16}$
#### Explanation:
$x - 3 y = - 3$, Eqn (1)
$- 5 x - y = 14$, Eqn (2)
5 * Eqn(1) + Eqn (2) is
$5 x - 15 y - 5 x - y = - 15 + 14$
$- 16 y = - 1$
$y = \frac{1}{16}$
Substituting value of y in Eqn (1),
$x - \frac{3}{16} = - 3$
$x = - 3 + \frac{3}{16} = - 2 \left(\frac{13}{16}\right)$
May 15, 2018
#### Answer:
$x = - \frac{45}{16}$ , or $- 2.8125$
$y$ = $\frac{1}{16}$
#### Explanation:
Here's our system:
$- 3 y + x = - 3$
$- 5 x - y = 14$
Solving By Substitution
First, let's solve for a variable. I'll choose x, since it appears first. We'll solve for x by using the first equation:
$- 3 y + x = - 3$
Add 3y to both sides in order to negate -3y. You should now have:
$x = 3 y - 3$
Now, substitute this value in the second equation:
$- 5 \left(3 y - 3\right) - y = 14$
Distribute -5 to all terms in the parentheses. Remember negative and positive multiplication rules. (Two negatives make a positive!)
$- 15 y + 15 - y = 14$
Now, combine like terms.
$- 16 y + 15 = 14$
Now, subtract 15 from both sides in order to solve for y.
$- 16 y = - 1$
Now, divide by $- 16$ to isolate for $y$.
$\frac{-1}{-16}$ = $y$
Because two negatives make a positive, $y$ becomes $\frac{1}{16}$.
Now, plug y in the simplified equation used to solve for x earlier:
$x = 3 y - 3$
Substitute $y$'s value:
$x = 3 \left(\frac{1}{16}\right) - 3$
Multiply 3 by 1/16 to get 3/16.
$x = \left(\frac{3}{16}\right) - 3$
$x = - \frac{45}{16}$ , or $- 2.8125$
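Either method can be verified in a few lines of exact rational arithmetic:

```python
from fractions import Fraction

y = Fraction(-1, -16)     # from -16y = -1
x = 3 * y - 3             # back-substitute into x = 3y - 3

print(x, y)               # → -45/16 1/16
print(x - 3 * y == -3)    # → True  (checks -3y + x = -3)
print(-5 * x - y == 14)   # → True  (checks -5x - y = 14)
```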
### Author Guidelines
TEXT: Manuscripts should be written in Book Antiqua, 10-point font, and should not exceed 20 pages, including references and figures (review articles excepted).

TITLE: The title should be short and clear and written in title case. First-degree headings in the text should be written in italics with the first letter of each word capitalized; for second-level and lower headings, only the first word should be capitalized.

ABSTRACT: An abstract of 200-250 words should be provided in both Turkish and English. The title of the work should also be given in English.

KEY WORDS: At least 3 and at most 7 keywords should be given in both Turkish and English.

SUBTITLES: In general, the manuscript covers the introduction, method, findings, results, discussion, and recommendations.

REFERENCES: Must comply with APA 6 style.

TABLE AND FIGURES: All visuals other than tables (photos, drawings, diagrams, graphs, maps, etc.) should be named as figures. Each table and figure should be placed in the text, and all figures should be numbered consecutively. The figures should be aligned with the text from the right and left. A smaller font can be used when text does not fit on the page.

FORMULAS AND UNITS: All formulas in the text should be written in equation format and numbered sequentially as (1), (2), .... All units must be in the SI unit system.

SYMBOLS AND ABBREVIATIONS: A list of symbols and abbreviations can be given at the end of the text, before the references.

ACKNOWLEDGMENT: Acknowledgments thanking people or institutions should be given before the references.
# Quantum tunneling. Probability of transmission
A) Consider a particle incident left to right on a barrier potential, as in the figure. What is the particle's wavefunction if its energy is $E > V_o$? And if $E < V_o$? You do not need to complete the full calculation, but explain in detail how you would get to the result and draw the waveform, indicating clearly the difference between regions I, II and III.
B) “Quantum tunneling” or ”barrier penetration” is not an experience of everyday life. A sprinter of mass 70 kg running at 5 m/s does not have enough kinetic energy to leap a wall of height 5 meters, even if all of that kinetic energy could be directed into an upward leap. If the wall is 0.2 meters thick, estimate the probability of the sprinter being able to ”quantum-tunnel” through it, rather than leaping over.
a) For the case $E > V_0$ the wave function is oscillatory in all three regions. Its general expression is

In region I)

$\Psi_I(x) = A_1 e^{ik_1x} + A_2 e^{-ik_1x}$

(where the wave number $k_1$ is REAL)

The first term $A_1 e^{ik_1x}$ is the incident part; the second term $A_2 e^{-ik_1x}$ is the reflected part.

$k_1 = 2\pi/\lambda = p/\hbar = \sqrt{2mE}/\hbar$

In region II) we have

$\Psi_{II}(x) = B_1 e^{ik_2x} + B_2 e^{-ik_2x}$

$k_2 = 2\pi/\lambda = p/\hbar = \sqrt{2m(E - V_0)}/\hbar$

In region III)

$\Psi_{III}(x) = C_1 e^{ik_3x} + C_2 e^{-ik_3x}$

$k_3 = 2\pi/\lambda = p/\hbar = \sqrt{2mE}/\hbar = k_1$

The coefficients need to satisfy the continuity conditions for $\Psi(x)$ and $d\Psi/dx$ at $x = 0$ and $x = L$.

That is, continuity of $\Psi$ gives

$A_1 + A_2 = B_1 + B_2$

$B_1 e^{ik_2L} + B_2 e^{-ik_2L} = C_1 e^{ik_3L} + C_2 e^{-ik_3L}$

and continuity of $d\Psi/dx$ gives

$ik_1A_1 - ik_1A_2 = ik_2B_1 - ik_2B_2$

$ik_2B_1 e^{ik_2L} - ik_2B_2 e^{-ik_2L} = ik_3C_1 e^{ik_3L} - ik_3C_2 e^{-ik_3L}$

b) In this case $(E < V_0)$ only the expression in region II is modified from above. In regions I) and III) there will be the same expressions for $\Psi$ as above.

In region II) $(V_0 > E)$ we have a real exponential function

$\Psi_{II}(x) = B_1 e^{k_2x} + B_2 e^{-k_2x}$

where $k_2$ is now a REAL decay constant

$k_2 = \sqrt{2m(V_0 - E)}/\hbar$

The same conditions for $\Psi(x)$ and $d\Psi/dx$ at $x = 0$ and $x = L$ apply.
Question B) with sprinter
Suppose one keeps only the transmitted wave (a thick-barrier approximation):

In region I) we have a wave $\Psi_I(x) = A e^{ik_1x}$

In region II) an exponentially decreasing function $\Psi_{II}(x) = B e^{-k_2x}$

In region III) we have again the transmitted wave $\Psi_{III}(x) = C e^{ik_3x}$

Continuity of $\Psi(x)$ at $x = 0$ and $x = L$:

$A = B$

$B e^{-k_2L} = C e^{ik_3L}$

$C = A e^{-k_2L}/e^{ik_3L} = A e^{-k_2L} e^{-ik_3L}$

The ratio of the amplitudes of the wave function in regions III) and I) is:

$\Psi_{III}/\Psi_I = C e^{ik_3x}/(A e^{ik_1x})$

The probability of tunneling (coefficient of transmission) is just the square of the modulus of this ratio:

$T = |\Psi_{III}/\Psi_I|^2 = |C/A|^2 = e^{-2k_2L}$

$V_0 = mgh = 70 \times 9.81 \times 5 = 3433.5\ \text{J}$

$E = mv^2/2 = 70 \times 5^2/2 = 875\ \text{J}$

$L = 0.2\ \text{m}$

$k_2 = \sqrt{2m(V_0 - E)}/\hbar = \sqrt{2 \times 70 \times (3433.5 - 875)}/10^{-34} \approx 6 \times 10^{36}\ \text{m}^{-1}$

$T = e^{-2k_2L} = e^{-2 \times 6 \times 10^{36} \times 0.2} = e^{-2.4 \times 10^{36}}$

The Windows scientific calculator cannot compute such a small value.
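The exponent is far too large for a pocket calculator, but working in logarithms sidesteps the underflow; a sketch (constants are standard values):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
m, v, g, h, L = 70.0, 5.0, 9.81, 5.0, 0.2

V0 = m * g * h           # 3433.5 J
E = 0.5 * m * v**2       # 875 J
k2 = math.sqrt(2 * m * (V0 - E)) / hbar

# T = exp(-2*k2*L); report log10(T) instead of underflowing to 0.0
log10_T = -2 * k2 * L / math.log(10)
print(f"k2 ≈ {k2:.2e} 1/m")        # ≈ 5.68e+36 1/m
print(f"T ≈ 10^({log10_T:.2e})")   # ≈ 10^(-9.86e+35)
```

So the sprinter's tunneling probability is roughly one in $10^{10^{36}}$, which is zero for any practical purpose.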
Blackwell optimality in Markov decision processes with partial observation

Journal article, Annals of Statistics, 2002
Authors: Dinah Rosenberg, Nicolas Vieille, Eilon Solan
Abstract
A Blackwell $\epsilon$-optimal strategy in a Markov Decision Process is a strategy that is $\epsilon$-optimal for every discount factor sufficiently close to 1. We prove the existence of Blackwell $\epsilon$-optimal strategies in finite Markov Decision Processes with partial observation.
Dates and versions: hal-00464998, version 1 (18-03-2010)
Dinah Rosenberg, Nicolas Vieille, Eilon Solan. Blackwell optimality in Markov decision processes with partial observation. Annals of Statistics, 2002, Vol.30,n°4, pp.1178-1193. ⟨10.1214/aos/1031689022⟩. ⟨hal-00464998⟩
# Extension 2.1: Image Processor With Iteration
## Manipulating an Image Raster
In the imagetransforms package, modify the provided Transforms Java program to implement the methods as described below. Your methods will use iteration (either while loops or for loops) to operate on the pixels of a picture.
### Notes
• To run this extension, right (control) click on Main and choose Run as... then Java Application.
This extension uses the same base code as the filter extension.
• The Picture class is a Sedgewick class. The coordinates are expressed using the standard in computer science graphics: the coordinate (x,y) denotes the pixel that is
• x pixels to the right of the leftmost edge of the picture
• y pixels down from the top edge of the picture
Thus, (0,0) is the top left corner of the picture.
• Pixel addressing follows the common Java convention: if there are w horizontal pixels, then 0 is the first (leftmost) and w-1 is the last (rightmost).
• Almost all of the methods are written in terms of two Picture parameters:
• source: the picture to be used as input to your transformation
• target: the picture area to be used as output from your transformation
There is just one exception: the gradient method produces its output without needing a source. Its only parameter is target.
• Each Picture has a width and a height, and these can be obtained for a Picture p as follows:
int width = p.width();
int height = p.height();
• To find the Color of a pixel at location x, y, you use
Color c = source.get(x,y);
• To set the Color of a pixel at location x, y to the color c, you use
target.set(x,y,c);
## Instructions
Each of the methods described below is found in the Transforms class.
1. The provided method flipHoriz(Picture source, Picture target) flips the image horizontally.
Look at the code given to you for this example carefully. It is broken into simple steps and the comments help explain why the pixel indexing works.
2. Complete the method flipVert(Picture source, Picture target) that flips the image vertically.
3. Complete the method flipHorizLeftHalf(Picture source, Picture target) that flips the left half of the image horizontally.
The left half of the target image should be same as the source, but the right half of the target image should be the mirror of the left half of the source.
4. Complete the method flipVertBotHalf(Picture source, Picture target) that flips the bottom half of the image vertically.
5. Complete the method gradient(Picture target) that takes a single Picture as a parameter.
Your code should create a color gradient by computing the following for each pixel:
• The amount of red in each pixel increases gradually from 0 at the left edge of the image to 255 at the right edge of the image.
• The amount of green in each pixel increases gradually from 0 at the top edge of the image to 255 at the bottom edge of the image.
• The amount of blue in every pixel should be 128.
Thus, each pixel will have a different color depending on its position. For example, the pixel at the top left will have red=0, green=0, and blue=128. The pixel about 1/4 of the way down on the right edge will have red=255, green=64, and blue=128.
Develop an expression for the amount of red and green in each pixel, given the x and y position of that pixel and the width and height of the image:
int amountRed = .....
int amountGreen = ....
Then set the pixel at (x,y) to a color based on those computations:
target.set(x, y, new Color(amountRed, amountGreen, 128));
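The gradient arithmetic in step 5 can be sketched outside Java. Here is an illustrative Python rendering of the same per-pixel computation (the names are mine, and integer division stands in for Java `int` arithmetic; translating the loop body back into the `Picture`/`Color` calls above is direct):

```python
def gradient_colors(width, height):
    """Return a height x width grid of (r, g, b) tuples for the gradient."""
    pixels = []
    for y in range(height):
        row = []
        for x in range(width):
            amount_red = 255 * x // (width - 1)     # 0 at left edge, 255 at right
            amount_green = 255 * y // (height - 1)  # 0 at top edge, 255 at bottom
            row.append((amount_red, amount_green, 128))
        pixels.append(row)
    return pixels

pix = gradient_colors(101, 101)
print(pix[0][0])     # top-left → (0, 0, 128)
print(pix[25][100])  # about 1/4 down the right edge → (255, 63, 128)
```

The second print matches the "red=255, green≈64, blue=128" example in the instructions.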
Home > worlds > A quarter-million dollar world
## A quarter-million dollar world
February 2nd, 2011
The Kepler Candidates were just announced! My immediate sensation at seeing a copy of the associated paper is not unlike those cheesy contests where you’re allowed 60 seconds in a grocery store to grab whatever you can grab for free.
The most remarkable and unexpected development seems to be contained in Table 6 of the paper. Here, it looks as if candidates identified during the first four months of data collection have had their confidence levels increased through the use of additional transit measurements taken after September 16th, 2009. This allows for the identification of fifty candidate planets that might be considered prospects for potential “habitability”.
I ran the fifty planets in the table through my valuation formula (see here, and here.)
The total value of the planets in Kepler paper’s Table 6 is USD 295,897.65. As with most distributions of wealth, this one is highly inequitable — the most valuable planet candidate in the newly released crop is KOI 326.01, to which the formula assigns a value of USD 223,099.93. Assuming 5g/cc density, this planet has a mass of ~0.6 Earth masses, which is actually a little on the low side as far as the valuation formula is ensured. Nevertheless, USD 223,099.93 is a huge increase in value over Gl 581c, which charts at USD 158.32.
Back in 2009, I wrote that (in my opinion) the appropriate threshold for huge media excitement is USD 1M. With the planets in Table 6 of the paper, we are starting to get very close to that.
Here are the planets in the table with a formula valuation greater than one penny:
(These numbers are associated with a little bit of uncertainty. I’m using Kepler magnitudes rather than V magnitudes, and assuming 5 gm/cc. I’m also assuming that stellar mass goes as stellar radius. Running a cross correlation with the other tables in the paper will change the values slightly, but not substantially.)
Categories: worlds Tags:
1. February 2nd, 2011 at 12:30 | #1
On the other hand, the study of Selsis et al. (2007) of the expected conditions in the infamous system Gliese 581 pointed out that a necessary condition for habitability is a Teq below about 270 K. Some of those candidates would need some pretty reflective atmospheres to support habitable conditions…
2. February 6th, 2011 at 06:09 | #2
In fact having taken more of a look at this, I think your valuation formula needs adjustment: it is calibrated for planets too close to the star, at least using the Kepler team assumptions for computing Teq values. Doing the calculation in terms of incident flux, 326.01 comes out at 2.88 times the Earth and too close to the star to be habitable. Doing a comparison with the habitable zone formulae in Underwood, Jones and Sleep (2003), only 1026.01, 1503.01 and 854.01 in your list of candidates come out in the most conservative habitable zone. 1361.01 and 87.01 are between the “Early Venus” and runaway greenhouse limits and the rest are too hot.
3. February 6th, 2011 at 08:10 | #3
I really like this concept, and am super excited that there are already 2 candidates more valuable than Mars!
But I think your formula only works for the value to society of observing a planet (or moon) from Earth, Earth’s orbits, and it’s Lagrange points. For instance, it can be easily demonstrated that we value exploring Mars by more than $14,000. It will cost about$1.8 billion USD to launch the Mars Science Lab in November of this year. And that doesn’t include the three other missions currently operating there and the several others in development. But the difference is that those are missions to make observations THERE, either in orbit or from the surface. We probably don’t spend much at all on observing Mars from the Earth, and only take occasional shots from Hubble – because there isn’t much interesting left to learn from this distance.
So I think there would be utility in creating a second equation to calculate the value to society of traveling to a planet or moon with potential to harbor or sustain life. And distance (or delta V) from the Earth to that location would be weighed a lot more heavily in this sister equation.
# Can we perform a pause operation while replaying a .wav file in LTspice?
#### santlal
Joined Jan 14, 2020
19
I have recorded a node voltage value in a .wav file using the LTspice simulator. While replaying the recorded signal values, I want a pause of 5 milliseconds after every 10 milliseconds. How can I do this in LTspice?
Please suggest a way; I am stuck at this point.
Thank you so much.
Santlal Prajapati
Last edited:
# Emulating distance on Poincaré disk for different curvatures
The distance $$d_K^H(x,y)$$ between two points on the hyperboloid $$H_K$$ with curvature $$K<0$$ can be emulated on the distance $$d_{-1}(x,y)$$ of the hyperboloid $$H_{-1}$$ of curvature ($$K=-1$$) as follows: $$d_K^H(x,y)=R\cdot d_{-1}^H(x/R,y/R)$$ where $$R$$ is the radius and is related to the curvature as follows: $$R=\frac{1}{\sqrt{-K}}$$.
Do you know about a simple formula to do a similar emulation with the distance $$d_K^D(x,y)$$ on the Poincaré disk $$D_K$$ of curvature $$K$$?
For $$K=-1$$ the distance on the Poincaré disk $$D_{-1}$$ is: $$d_{-1}^P(x,y) = \operatorname{arccosh}\left( 1+\frac{2||x-y||_2^2}{(1-||x||^2_2)(1-||y||_2^2)} \right)$$ So I'm looking for an expression of the form: $$d_K^P(x,y)=\cdots d_{-1}^P(\cdots x\cdots, \cdots y\cdots).$$ where the $$(\cdots)$$-parts are just replaced with some function or expression in terms of $$K$$ (or $$R$$).
So far I've tried to project the points from the hyperboloid to the Poincaré disk. But it didn't turn out to be a nice expression.
The distance formula for such a model involves two independent ingredients: 1. the Gaussian curvature of the metric, and 2. the scale of the model (for example, the radius of the Poincaré disk, or a parameter of the hyperboloid, or the radius of a sphere represented in the $$(x,y,z)$$ coordinates).
If you have a distance formula $$d(\overrightarrow{x})$$ for a model of radius $$R$$, then you can define a distance formula on a model of radius $$RC$$ with $$d'(\overrightarrow{x}) = d(\overrightarrow{x}/C)$$. The new model is clearly isometric to the old model, so this does not change the Gaussian curvature.
On the other hand, if you define $$d'(\overrightarrow{x}) = Cd(\overrightarrow{x})$$, this multiplies the radius of curvature by $$C$$. Just like on a sphere: if you take the distance formulas for a sphere of radius 1 (in whatever coordinates), and multiply them by 10, you get the distance formulas for a manifold isometric to a sphere of radius 10, and thus with a smaller curvature, but still parametrized as if it was a sphere of radius 1.
So the $$/R$$ part in your hyperboloid formula controls the scale of the model, and the $$R\cdot$$ part controls the Gaussian curvature. If you want a metric on the Poincaré disk of radius 1 that makes the curvature different, just multiply your formula by a constant. If you also want to change the radius of the Poincaré disk for some reason, then you have to also adjust the coordinates.
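The prescription above can be checked numerically with a short script (a minimal sketch; the function names are mine, not from the thread). Multiplying the $$K=-1$$ Poincaré distance by $$R = 1/\sqrt{-K}$$ yields a metric of curvature $$K$$ on the same unit-disk coordinates:

```python
import math

def poincare_dist(x, y):
    """Distance on the unit Poincare disk with curvature K = -1."""
    num = 2 * sum((a - b) ** 2 for a, b in zip(x, y))
    den = (1 - sum(a * a for a in x)) * (1 - sum(b * b for b in y))
    return math.acosh(1 + num / den)

def poincare_dist_K(x, y, K):
    """Distance of curvature K < 0 on the same unit-disk coordinates.

    Multiplying the K = -1 distance by R = 1/sqrt(-K) multiplies the
    radius of curvature by R, i.e. divides the Gaussian curvature by
    R^2, giving curvature K. Only the metric is rescaled; the points
    x, y still live on the unit disk.
    """
    R = 1.0 / math.sqrt(-K)
    return R * poincare_dist(x, y)

# A point at Euclidean radius r from the origin sits at hyperbolic
# distance 2 artanh(r); for r = 0.5 that is ln(3).
print(poincare_dist((0.0, 0.0), (0.5, 0.0)))        # ~1.0986 = ln(3)
print(poincare_dist_K((0.0, 0.0), (0.5, 0.0), -4))  # half of that
```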
|
{}
|
# AS Mathematics - Core 1 - Indices Problems
1. Oct 30, 2011
### novamatt
I am struggling with my mathematics indices homework. I think I have some correct answers so far but I'm pretty stumped on some others. How would I go about solving the following equations?
1. The problem statement, all variables and given/known data
1) $25^{1/2} - \frac{5^{-4}}{5^{-5}}$
2. Relevant equations
3. The attempt at a solution
1) $25^{1/2} = \sqrt{25} = 5$, I know that.
I get lost on the next bit... I end up with
$\frac{-625}{-3215}$ and therefore
5 - $\frac{1}{5}$ = $4\frac{4}{5}$, but this is not in index form?? If I take it further...
$\left(\sqrt[5]{4}\right)^4$, and from there I have no idea how to go further; plus I am positive I've gone wrong at a much earlier stage.
2. Oct 30, 2011
### Staff: Mentor
Edited to make readable.
3. Oct 30, 2011
### Staff: Mentor
Use the fact that $a^{-n} = \frac{1}{a^n}$.
Also, at this stage in mathematics, we pretty much never write things like 4 $\frac{4}{5}$. It looks too much like 4 * 4/5. Instead of mixed numbers like this, improper fractions such as 24/5 are preferred.
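Applying that rule, the quotient in the original problem simplifies by the usual index law $\frac{a^m}{a^n} = a^{m-n}$:

```latex
\frac{5^{-4}}{5^{-5}} = 5^{-4-(-5)} = 5^{1} = 5,
\qquad\text{so}\qquad
25^{1/2} - \frac{5^{-4}}{5^{-5}} = 5 - 5 = 0.
```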
4. Oct 30, 2011
### novamatt
Thanks for the assistance. It's a bit late for me to crack on with the problem at the moment; I will return tomorrow and have another go.
Could you please tell me how you put the 1/2 index power or direct me to a link where I can learn how to do this... is it simply html code like the ones provided on your symbol toolbar?
5. Oct 30, 2011
### Staff: Mentor
You can write exponents, as we in the US call them, by clicking Go Advanced, which opens another menu. Use the X² button to make exponents.
|
{}
|
# 07.2 Set Up (4 Points): Set up, but do not compute, the integral $\iint \cos(x^2 + y^2)\, dA$ using polar coordinates
###### Question:
07.2 Set Up (4 Points): Set up, but do not compute, the integral $\iint \cos(x^2 + y^2)\, dA$ using polar coordinates.
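The region of integration appears to have been given in a figure that did not survive extraction. Assuming, purely for illustration, that it is the disk of radius $a$ centered at the origin, the polar setup (with $dA = r\,dr\,d\theta$) would look like:

```latex
\iint_R \cos\!\left(x^2 + y^2\right) dA
  = \int_{0}^{2\pi} \int_{0}^{a} \cos\!\left(r^2\right)\, r \, dr \, d\theta .
```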
|
{}
|
# Homework 1
Due: Wednesday 2020-01-29 at 11:59pm
# Getting started
• Go to the course organization on GitHub: https://github.com/sta-363-s20.
• Find the repo starting with hw-01 and that has your team name at the end (this should be the only hw-01 repo available to you).
• In the repo, click on the green Clone or download button, select Use HTTPS (this might already be selected by default, and if it is, you’ll see the text Clone with HTTPS as in the image below). Click on the clipboard icon to copy the repo URL.
• Go to RStudio Cloud and into the course workspace. Create a New Project from Git Repo. You will need to click on the down arrow next to the New Project button to see this option.
• Copy and paste the URL of your assignment repo into the dialog box:
• Hit OK, and you’re good to go!
# Packages
In this lab we will work with the tidyverse package, the broom package, and the ISLR package. So we need to load them:
library(tidyverse)
library(broom)
library(ISLR)
Note that these packages are also loaded in your R Markdown document.
# Housekeeping
## Git configuration
• Go to the Terminal pane
• Type the following two lines of code, replacing the information in the quotation marks with your info.
git config --global user.email "your email"
git config --global user.name "your name"
To confirm that the changes have been implemented, run the following:
git config --global user.email
git config --global user.name
## Project name:
Currently your project is called Untitled Project. Update the name of your project to be “HW 01”.
# Warm up
Before we begin, let’s warm up with some simple exercises.
## YAML:
Open the R Markdown (Rmd) file in your project, change the author name to your name, and knit the document.
## Committing and pushing changes:
• Go to the Git pane in your RStudio.
• View the Diff and confirm that you are happy with the changes.
• Add a commit message like “Update team name” in the Commit message box and hit Commit.
• Click on Push. This will prompt a dialogue box where you first need to enter your user name, and then your password.
1. I collect a set of data ($$n = 100$$ observations) containing a single predictor and a quantitative response. I then fit a linear regression model to the data, as well as a separate cubic regression, i.e. $$Y = \beta_0 + \beta_1X + \beta_2X^2 + \beta_3X^3 + \epsilon$$.
• (1.1) Suppose that the true relationship between X and Y is linear, i.e. $$Y = \beta_0 + \beta_1X + \epsilon$$. Consider the training residual sum of squares (RSS) for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.
• (1.3) Suppose that the true relationship between X and Y is not linear, but we don’t know how far it is from linear. Consider the training RSS for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.
2. Using the Auto data from the ISLR package, perform a simple linear regression with mpg as the response and horsepower as the predictor. Is there a relationship between the predictor and the response? How strong is the relationship between the predictor and the response? Is the relationship between the predictor and the response positive or negative? What is the 95% confidence interval? What is the predicted mpg associated with a horsepower of 98?
The code to fit the linear model is provided below.
model <- lm(mpg ~ horsepower, data = Auto)
tidy(model, conf.int = TRUE)
The lm() function fits the linear model. The tidy() function from the broom package takes the model output and puts it into a nice tidy data frame.
3. Using the Carseats data from the ISLR package, fit a multiple regression model to predict Sales using Price, Urban, and US. Provide an interpretation of each coefficient in the model. Be careful—some of the variables in the model are qualitative. For which of the predictors can you reject the null hypothesis $$H_0 : \beta_j = 0$$?
The code to fit the linear model is provided below.
model <- lm(Sales ~ Price + Urban + US, data = Carseats)
tidy(model, conf.int = TRUE)
|
{}
|
How do you solve -4=1/2x+3?
$x = - 14$
$- 4 = \frac{1}{2} x + 3$
Subtract 3 from both sides: $- 7 = \frac{1}{2} x$
Multiply both sides by 2: $- 14 = x$
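The same two steps can be checked mechanically with a quick script (an illustrative sketch):

```python
# Solve -4 = (1/2)x + 3: subtract 3 from both sides, then multiply by 2.
x = (-4 - 3) * 2
print(x)  # -14

# Substituting back into the original equation confirms the answer.
assert 0.5 * x + 3 == -4
```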
|
{}
|
IACR News
Here you can see all recent updates to the IACR webpage.
24 March 2020
London, United Kingdom, 21 May 2020
Event Calendar
Event date: 21 May 2020
22 March 2020
Estuardo Alpirez Bock, Alexander Treff
ePrint Report
In 2017, the first CHES Capture the Flag Challenge was organized in an effort to promote good design candidates for white-box cryptography. In particular, the challenge assessed the security of the designs with regard to key extraction attacks. A total of 94 candidate programs were submitted, and all of them were broken eventually. Even though most candidates were broken within a few hours, some candidates remained robust against key extraction attacks for several days, and even weeks. In this paper, we perform a qualitative analysis of all candidates submitted to the CHES 2017 Capture the Flag Challenge. We test the robustness of each challenge against different types of attacks, such as automated attacks, extensions thereof and reverse engineering attacks. We are able to classify each challenge depending on its robustness against these attacks, highlighting how challenges vulnerable to automated attacks can be broken in a very short amount of time, while more robust challenges demand significant reverse engineering effort and therefore more time from the adversaries. Besides classifying the robustness of each challenge, we also give data regarding their size and efficiency and explain how some of the more robust challenges could actually provide acceptable levels of security for some real-life applications.
Daniel J. Bernstein, Luca De Feo, Antonin Leroux, Benjamin Smith
ePrint Report
Let $\mathcal{E}/\mathbb{F}_q$ be an elliptic curve, and $P$ a point in $\mathcal{E}(\mathbb{F}_q)$ of prime order $\ell$. Vélu's formulae let us compute a quotient curve $\mathcal{E}' = \mathcal{E}/\langle{P}\rangle$ and rational maps defining a quotient isogeny $\phi: \mathcal{E} \to \mathcal{E}'$ in $\widetilde{O}(\ell)$ $\mathbb{F}_q$-operations, where the $\widetilde{O}$ is uniform in $q$. This article shows how to compute $\mathcal{E}'$, and $\phi(Q)$ for $Q$ in $\mathcal{E}(\mathbb{F}_q)$, using only $\widetilde{O}(\sqrt{\ell})$ $\mathbb{F}_q$-operations, where the $\widetilde{O}$ is again uniform in $q$. As an application, this article speeds up some computations used in the isogeny-based cryptosystems CSIDH and CSURF.
Onur Gunlu, Efe Bozkir, Wolfgang Fuhl, Rafael F. Schaefer, Enkelejda Kasneci
ePrint Report
Head mounted displays bring eye tracking into daily use and this raises privacy concerns for users. Privacy-preservation techniques such as differential privacy mechanisms are recently applied to the eye tracking data obtained from such displays; however, standard differential privacy mechanisms are vulnerable to temporal correlations in the eye movement features. In this work, a transform coding based differential privacy mechanism is proposed for the first time in the eye tracking literature to further adapt it to statistics of eye movement feature data by comparing various low-complexity methods. Fourier Perturbation Algorithm, which is a differential privacy mechanism, is extended and a scaling mistake in its proof is corrected. Significant reductions in correlations in addition to query sensitivities are illustrated, which provide the best utility-privacy trade-off in the literature for the eye tracking dataset used. The differentially private eye movement data are evaluated also for classification accuracies for gender and document-type predictions to show that higher privacy is obtained without a reduction in the classification accuracies by using proposed methods.
20 March 2020
George Teseleanu
ePrint Report
The Hill cipher is a classical polyalphabetic cipher based on matrices. Although known-plaintext attacks for the Hill cipher have been known for almost a century, feasible ciphertext-only attacks have been developed only about ten years ago, and only for small matrix dimensions. In this paper we extend the ciphertext-only attacks for the Hill cipher in two ways. First, we present two attacks for the affine version of the Hill cipher. Secondly, we show that the presented attacks can be extended to several modes of operation. We also provide the reader with several experimental results and show how the message's language can influence the presented attacks.
19 March 2020
UC Berkeley
Job Posting
The Security & Crypto Group in the EECS Department at UC Berkeley welcomes inquiries for postdoctoral fellowships in the area of secure multi-party computation. Please send a CV to raluca.popa@berkeley.edu, and list at least three letter writers in the CV.
Closing date for applications:
Contact: raluca.popa@berkeley.edu
Daniel Escudero, Satrajit Ghosh, Marcel Keller, Rahul Rachuri, Peter Scholl
ePrint Report
This work introduces novel techniques to improve the translation between arithmetic and binary data types in multi-party computation. To this end, we introduce a new approach to performing these conversions, using what we call extended doubly-authenticated bits (edaBits), which correspond to shared integers in the arithmetic domain whose bit decomposition is shared in the binary domain. These can be used to considerably increase the efficiency of non-linear operations such as truncation, secure comparison and bit-decomposition.
Our edaBits are similar to the daBits technique introduced by Rotaru et al. (Indocrypt 2019). However, our main observations are that (1) applications that benefit from daBits can also benefit from edaBits in the same way, and (2) we can generate edaBits directly in a much more efficient way than computing them from a set of daBits. Technically, the second contribution is much more challenging, and involves a novel cut and choose technique that may be of independent interest, and requires taking advantage of natural tamper-resilient properties of binary circuits that occur in our construction to obtain the best level of efficiency. Finally, we show how our edaBits can be applied to efficiently implement various non-linear protocols of interest, and we thoroughly analyze their correctness for both signed and unsigned integers.
The results of this work can be applied to any corruption threshold, although they seem best suited to dishonest-majority protocols such as SPDZ. We implement and benchmark our constructions, and experimentally verify that our techniques yield a substantial increase in efficiency. Our edaBits reduce communication by a factor that lies between 2 and 170 for secure comparisons with respect to a purely arithmetic approach, and between 2 and 60 with respect to using daBits. Improvements in throughput per second are more subdued but still as high as a factor of 47. We also apply our novel machinery to the tasks of biometric matching and convolutional neural networks, obtaining a noticeable improvement as well.
18 March 2020
Nicholas Genise, Daniele Micciancio, Chris Peikert, Michael Walter
ePrint Report
Discrete Gaussian distributions over lattices are central to lattice-based cryptography, and to the computational and mathematical aspects of lattices more broadly. The literature contains a wealth of useful theorems about the behavior of discrete Gaussians under convolutions and related operations. Yet despite their structural similarities, most of these theorems are formally incomparable, and their proofs tend to be monolithic and written nearly "from scratch," making them unnecessarily hard to verify, understand, and extend.
In this work we present a modular framework for analyzing linear operations on discrete Gaussian distributions. The framework abstracts away the particulars of Gaussians, and usually reduces proofs to the choice of appropriate linear transformations and elementary linear algebra. To showcase the approach, we establish several general properties of discrete Gaussians, and show how to obtain all prior convolution theorems (along with some new ones) as straightforward corollaries. As another application, we describe a self-reduction for Learning With Errors (LWE) that uses a fixed number of samples to generate an unlimited number of additional ones (having somewhat larger error). The distinguishing features of our reduction are its simple analysis in our framework, and its exclusive use of discrete Gaussians without any loss in parameters relative to a prior mixed discrete-and-continuous approach.
As a contribution of independent interest, for subgaussian random matrices we prove a singular value concentration bound with explicitly stated constants, and we give tighter heuristics for specific distributions that are commonly used for generating lattice trapdoors. These bounds yield improvements in the concrete bit-security estimates for trapdoor lattice cryptosystems.
Santosh Ghosh, Michael Kounavis, Sergej Deutsch
ePrint Report
We study the encryption latency of the Gimli cipher, which has recently been submitted to NIST's Lightweight Cryptography competition. We develop two optimized hardware engines for the 24-round Gimli permutation, characterized by a total latency of 3 and 4 cycles, respectively, in a range of frequencies up to 4.5 GHz. Specifically, we utilize Intel's 10 nm FinFET process to synthesize a critical path of 15 logic levels, supporting a depth-3 Gimli pipeline capable of computing the result of the Gimli permutation at frequencies up to 3.9 GHz. On the same process technology, a depth-4 pipeline employs a critical path of 12 logic levels and can compute the Gimli permutation at frequencies up to 4.5 GHz. Gimli demonstrates a total unrolled data path latency of 715.9 psec. Compared to our AES implementation, our fastest pipelined Gimli engine demonstrates 3.39 times smaller latency. When compared to the latency of the PRINCE lightweight block cipher, the pipelined Gimli latency is 1.7 times smaller. The paper suggests that the Gimli cipher and our proposed optimized implementations have the potential to provide breakthrough performance for latency-critical applications in domains such as data storage, networking, IoT and gaming.
Westfälischen Wilhelms-Universität Münster
Job Posting
The Institut for Geoinformatics (ifgi) at the Westfälischen Wilhelms-Universität Münster is seeking candidates for this post subject to the release of the project funds by the funding agency. The three-year position is part of a joint project on the “sovereign and intuitive management of personal location information (SIMPORT)”. The project aims to develop approaches, guidelines and software components that enable users to reclaim sovereignty over their personal location information.
Closing date for applications:
Contact: Prof. Dr. Christian Kray
Job Posting
Responsibilities include but are not limited to conducting research in the areas of homomorphic encryption, post-quantum cryptography, privacy-preserving mechanisms applied to machine learning, and security proofs, and building software prototypes to demonstrate the feasibility of technical solutions. The ideal candidate will initiate and organize the design, development, execution, implementation, documentation and feasibility studies of scientific research projects to fuel SHIELD's growth in secure computing and cloud product concepts and new business opportunities. They will also pioneer substantial new knowledge of state-of-the-art principles and theories, contribute to scientific literature and conferences, and participate in the development of intellectual property.

Required abilities: 1) Highly competent in interpersonal communication. 2) A self-starter with initiative and a strong drive to identify and resolve technical issues. 3) Able to clearly explain complex concepts and take the lead in team decision-making.

Qualifications: 1) PhD in cryptography. 2) One or more publications on cryptography in a top-tier, peer-reviewed conference/journal. 3) Deep expertise in the state of the art of partially-, somewhat-, and fully homomorphic encryption; experience implementing prototypes is a strong asset. 4) Thorough understanding of lattice-based cryptography, including the underlying security problems, parameter selection, implementation, and side-channel resistance. 5) Skill in developing software prototypes in any of the following programming languages: C++, C, Python, Go or Rust.

Preferred qualifications: 1) Familiarity with other post-quantum cryptography families (e.g. hash-based signatures, code-based, isogeny or multivariate quadratic cryptography). 2) Familiarity with the application of privacy-preserving cryptographic techniques to machine learning. 3) Familiarity with highly regulated industries, such as banking, government, and/or health care.
For immediate consideration, please submit your CV/resume and transcripts to careers(at)shieldcrypto.com and include “Cryptographer” in the subject line.
Closing date for applications:
Contact: Alhassan Khedr (CTO)
Ruhr University Bochum, Germany
Job Posting
The chair of Security Engineering at the Horst Görtz Institute for IT-Security (HGI) at Ruhr University Bochum (Germany) has an opening for a post-doc position. We are looking for an outstanding candidate with a strong background in Electrical/Computer Engineering and/or Cryptography. The available position is fully funded for two years, with possible extensions depending on the candidate's performance. In addition to the usual computer and electrical engineering background, the candidate is expected to be familiar with side-channel analysis attacks and be able to deal with FPGAs and hardware designs, e.g. VHDL/Verilog, which is essential for the project. The candidate should have a strong publication record, with at least one publication in venues like CHES, EUROCRYPT, ASIACRYPT, CRYPTO, DATE or DAC.
Please send your application via e-mail as a single pdf containing a CV, list of publications, and copies of transcripts and certificates.
Closing date for applications:
Contact: amir (dot) moradi (at) rub (dot) de
Australian Payments Network, Sydney, Australia
Job Posting
The PCI Security Standards Council (PCI SSC) was founded in 2006 by American Express, Discover, JCB International, MasterCard and Visa Inc., who share equally in governance and execution of the organisation’s work. Its stated aim is to bring payments industry stakeholders together to develop and drive adoption of data security standards and resources for safe payments worldwide. PCI SSC mandates in the PIN Security Requirements and Testing Procedures: V3 2018 that to achieve “Control Objective 5: Keys are used in a manner that prevents or detects their unauthorised usage”, that “Encrypted symmetric keys must be managed in structures called key blocks. The key usage must be cryptographically bound to the key using accepted methods.” This is PIN Security Requirement 18-3, which further details three acceptable methods of implementing this requirement but also states that these methods are not an exhaustive list. The Australian payments industry does not use key blocks to manage the symmetric keys used as PIN Encrypting Keys (PEK). The question of which other methods are acceptable has been raised, which has resulted in a PCI FAQ. The latest version of this is in the publicly available document PCI PTS PIN Security Requirements, Technical FAQs V3, February 2020, FAQ 27, and requires an independent expert to assess the equivalency of other methods. PCI has also produced several blogs on the case for key blocks and two Informational Supplements, PCI PTS PIN: Cryptographic Key Blocks June 2017 and PCI PIN Security Requirement: PIN Security Requirement 18-3 Key Blocks: June 2019. AusPayNet is seeking to engage an independent expert who meets the requirements set out by PCI in PIN Security FAQ 27. This expert must assess the Australian PEK key management methodologies and determine if they provide equivalent levels of protection that prevent or detect their unauthorised usage, as compared to key blocks. AusPayNet is seeking to have the work completed in Q2 2020.
For more information, or to provide a copy of your CV and some indicative costs, please use the contact details below.
Closing date for applications:
Contact: Arthur Van Der Merwe - avande22@myune.edu.au
Villanova University, Department of Electrical and Computer Engineering
Job Posting
1. Overall introduction. There are three Ph.D. position openings (full scholarship, tuition & very competitive stipend) at Dr. Jiafeng Harvest Xie's Security & Cryptography (SAC) Lab for the Fall of 2020, located at the Department of Electrical and Computer Engineering of Villanova University (PA, USA).
2. Research area. Post quantum cryptography hardware, fault detection/attack, and cryptanalysis.
3. Qualification. Preference will be given to candidates with research experience in the areas of cryptographic engineering, fault detection, cryptanalysis, and VLSI design. Students from electrical/computer engineering, computer science, and cryptography (applied mathematics) or other related majors are WARMLY welcome! Programming skills such as HDL, C++, and Python are a plus.
4. Application process. Interested students can directly send the CV/resume to Dr. Jiafeng Harvest Xie's email: jiafeng.xie@villanova.edu.
6. Additional information. Villanova University is a private research university located in Radnor Township, a suburb northwest of Philadelphia, Pennsylvania. U.S. News & World Report ranks Villanova as tied for the 46th best National University in the U.S. for 2020.
7. PI introduction. Dr. Jiafeng Harvest Xie is currently an Assistant Professor at the Department of Electrical and Computer Engineering of Villanova University. His research interests include cryptographic engineering, hardware security, and VLSI digital design. He is the Best Paper Awardee of IEEE HOST 2019. He is also the Associate Editor for Microelectronics Journal, IEEE Access, and IEEE Trans. Circuits and Systems II.
Closing date for applications:
Contact: Dr. Jiafeng Harvest Xie, email: jiafeng.xie@villanova.edu
Tampere University
Job Posting
The Network and Information Security Group is currently looking for up to 2 motivated and talented researchers (Postdoctoral Researchers) to contribute to research projects related to applied cryptography, security and privacy. The successful candidates will be working on the following topics (but not limited to):
• Searchable Encryption and data structures enabling efficient search operations on encrypted data;
• Restricting the type of access given when granting access to search over one's data;
• Processing of encrypted data in outsourced and untrusted environments;
• Applying encrypted search techniques to SGX environments;
• Revocable Attribute-Based Encryption schemes and their application to cloud services;
• Functional Encryption;
• Privacy-Preserving Analytics;
• IoT Security.
• Programming skills are a must.
The positions are strongly research-focused. Activities include conducting both theoretical and applied research, design of secure and/or privacy-preserving protocols, software development and validation, reading and writing scientific articles, presentation of the research results at seminars and conferences in Finland and abroad, acquiring (or assisting in acquiring) further funding.
Closing date for applications:
Contact: Antonis Michalas
Yibin Xu, Yangyu Huang
ePrint Report
Traditional Blockchain Sharding approaches can only tolerate up to n/3 of nodes being adversarial, because they rely on the hypergeometric distribution to make a failure (an adversary who does not control n/3 of nodes globally but can still manipulate the consensus of a single Shard) unlikely. The system must maintain a large Shard size (the number of nodes inside a Shard) to sustain the low failure probability, so only a small number of Shards can exist. In this paper, we present a new approach to Blockchain Sharding that can withstand up to n/2 of nodes being bad. We categorise the nodes into different classes, and every Shard has a fixed number of nodes from each class. We prove that this design is much more secure than the traditional models (which have only one class) and that the Shard size can be reduced significantly. In this way, many more Shards can exist, and the transaction throughput can be greatly increased. The improved Blockchain Sharding approach is promising to serve as the foundation for decentralised autonomous organisations and decentralised databases.
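The hypergeometric failure argument can be made concrete with a short calculation (an illustrative sketch; the function name, node counts and majority threshold are assumptions chosen for the example, not values from the paper):

```python
from math import comb

def shard_failure_prob(n, adversaries, shard_size, threshold):
    """Probability that a uniformly sampled Shard of shard_size nodes
    contains at least `threshold` adversaries, when `adversaries` of
    the n nodes are bad (a hypergeometric tail probability)."""
    total = comb(n, shard_size)
    return sum(comb(adversaries, k) * comb(n - adversaries, shard_size - k)
               for k in range(threshold, shard_size + 1)) / total

# With a third of 1000 nodes adversarial, a larger Shard makes a
# manipulated Shard (one with an adversarial majority) far less likely:
small = shard_failure_prob(1000, 333, 30, 16)
large = shard_failure_prob(1000, 333, 120, 61)
```

Growing the Shard drives the failure probability down, which is why single-class designs are forced into large Shards and therefore only a few of them.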
Christof Beierle, Gregor Leander
ePrint Report
We consider $n$-bit permutations with differential uniformity of 4 and null nonlinearity. We first show that the inverses of Gold functions have the interesting property that one component can be replaced by a linear function such that it still remains a permutation. This directly yields a construction of 4-uniform permutations with trivial nonlinearity in odd dimension. We further show their existence for all $n = 3$ and $n \geq 5$ based on a construction in [1]. In this context, we also show that 4-uniform 2-1 functions obtained from admissible sequences, as defined by Idrisova in [8], exist in every dimension $n = 3$ and $n \geq 5$. Such functions fulfill some necessary properties for being subfunctions of APN permutations. Finally, we use the 4-uniform permutations with null nonlinearity to construct some 4-uniform 2-1 functions from $\mathbb{F}_2^n$ to $\mathbb{F}_2^{n-1}$ which are not obtained from admissible sequences. This disproves a conjecture raised by Idrisova.
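To make the quantities concrete, here is a small self-contained check (an illustrative sketch, not code from the paper; the toy field GF(2^3) with modulus x^3 + x + 1 is an assumption chosen for brevity): the Gold function x -> x^3 is a 2-uniform (APN) permutation of GF(2^3), while a linear permutation such as the identity attains the maximal differential uniformity 2^n.

```python
def differential_uniformity(sbox):
    """Max count of x with sbox[x] ^ sbox[x ^ a] == b over all a != 0, b."""
    n = len(sbox)
    best = 0
    for a in range(1, n):
        counts = {}
        for x in range(n):
            b = sbox[x] ^ sbox[x ^ a]
            counts[b] = counts.get(b, 0) + 1
        best = max(best, max(counts.values()))
    return best

def gf8_mul(a, b):
    """Multiplication in GF(2^3) modulo x^3 + x + 1."""
    r = 0
    for i in range(3):          # schoolbook carry-less multiply
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):            # reduce the degree-4 and degree-3 terms
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

gold = [gf8_mul(gf8_mul(x, x), x) for x in range(8)]   # x -> x^3
identity = list(range(8))                              # a linear permutation
```

Running `differential_uniformity` on these tables reproduces the textbook values: 2 for the Gold cube map and 8 for the identity.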
Wulu Li, Yongcan Wang, Lei Chen, Xin Lai, Xiao Zhang, Jiajun Xin
ePrint Report
Linkable ring signature (LRS) plays a major role in Monero-type cryptocurrencies, as it provides anonymity for the initiator and prevents double spending in transactions. In this paper, we propose SLRS: a simpler and modular construction of a linkable ring signature scheme, which uses only a standard ring signature as a component, without any additional one-time signatures or zero-knowledge proofs. SLRS is more efficient than existing schemes in both generation and verification. Moreover, we use SLRS to construct an efficient and compact position-preserving linkable multi-ring signature to support application in Monero-type cryptocurrencies. We also give security proofs and an implementation, as well as performance comparisons between SLRS, Ring-CT and Ring-CT 3.0 in terms of size and efficiency.
Vidal Attias, Luigi Vigneri, Vassil Dimitrov
ePrint Report
Proof of Work is a prevalent mechanism to prove investment of time in blockchain projects. However, the use of massive parallelism and specialized hardware gives an unfair advantage to a small portion of nodes and raises environmental and economical concerns. In this paper we provide an implementation study of two Verifiable Delay Functions, a new cryptographic primitive achieving Proof of Work goals in an unparallelizable way. We provide simulation results and an optimization based on a multiexponentiation algorithm.
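The sequential-squaring idea behind such VDFs can be sketched in a few lines (a toy illustration with small, assumed parameters; real deployments use a large modulus of unknown factorisation, and fast verification is handled by a separate protocol not shown here):

```python
def vdf_eval(x, T, N):
    """Compute y = x^(2^T) mod N by T sequential modular squarings.
    When the factorisation of N is unknown, no known method is faster
    than performing the squarings one after another, so T controls
    the (unparallelizable) delay."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

# Toy modulus: a product of two small primes stands in for an RSA modulus.
N = 1000003 * 1000033
y = vdf_eval(5, 1000, N)
```

Since the evaluator cannot reduce the exponent modulo the group order without the factorisation, the T squarings must happen one after another.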
17 March 2020
Sergey Agievich
ePrint Report
In the modified CTR (CounTeR) mode known as CTR2, nonces are encrypted before constructing sequences of counters from them. This way we have only probabilistic guarantees for non-overlapping of the sequences. We show that these guarantees, and therefore the security guarantees of CTR2, are strong enough in two standard scenarios: random nonces and non-repeating nonces. We also show how to extend CTR2 to an authenticated encryption mode which we call CHE (Counter-Hash-Encrypt). To extend, we use one invocation of polynomial hashing and one additional block encryption.
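The difference from plain CTR can be sketched as follows (a toy illustration; the stand-in 64-bit "block cipher" below is an assumption for demonstration only and is not secure): the nonce is encrypted first, and the counter sequence starts from the encrypted nonce rather than from the nonce itself.

```python
MASK = (1 << 64) - 1

def toy_block_enc(key, block):
    """Stand-in for a 64-bit block cipher (NOT cryptographically secure)."""
    x = block & MASK
    for r in range(8):
        x = ((x * 0x9E3779B97F4A7C15) ^ key ^ r) & MASK
        x = ((x << 13) | (x >> 51)) & MASK
    return x

def ctr2_keystream_encrypt(key, nonce, blocks):
    """CTR2: encrypt the nonce, then run ordinary counter mode with
    counters derived from the encrypted nonce."""
    start = toy_block_enc(key, nonce)
    return [b ^ toy_block_enc(key, (start + i) & MASK)
            for i, b in enumerate(blocks)]

pt = [1, 2, 3, 0xDEADBEEF]
ct = ctr2_keystream_encrypt(0x1234, 42, pt)
```

Decryption is the same operation, since it XORs the identical keystream; the non-overlap of two counter sequences is only probabilistic, which is the property the paper analyses.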
# Pygame version of my 3D Tic Tac Toe/Connect 4
I posted a question a while back asking for some feedback on the code of a game I made (it was limited to typing the input and drawing the output in ASCII).
Now I've got it linked up with pygame. Does anything look out of place? Do you notice any bugs? Do the colours work? Is there anything particularly annoying?
Use CTRL+ALT+d while in options (hit ESC to bring them up if you've already started the game) to reveal the debug settings, and enable them to see the mouse coordinate conversion and AI stuff going on under the hood.
Instructions
The aim is to get as many complete rows as you can, and the grid will flip every 3 turns to throw you off, otherwise it gets a bit easy. The game ends when all spaces are taken (though this is a bit annoying when you are having to fill in the last few ones, so I'll just make it end when there are no points left).
At this time, I still need to make the instructions page and a 'player x won' page, though everything else is working without bugs as far as I can tell.
Normal game:
With debug enabled:
To see the entire thing, you'll need this link. If you don't have pygame (or python for that matter), here is a standalone version of the game from py2exe.
class MouseToBlockID(object):
"""Converts mouse coordinates into the games block ID.
The first part is to calculate which level has been clicked, which
then allows the code to treat the coordinates as level 0. From this
point, it finds the matching chunks from the new coordinates which
results in two possible blocks, then it calculates how they are
connected (highest one is to the left if even+odd, otherwise it's to
the right), and from that, it's possible to figure out which block
the cursor is over.
A chunk is a cell of a 2D grid overlaid over the isometric grid.
Each block is split into 4 chunks, and each chunk overlaps two
blocks.
"""
def __init__(self, x, y, grid_main):
self.x = x
self.y = y
self.y_original = y
self.grid_main = grid_main
self._to_chunk()
def _to_chunk(self):
"""Calculate which chunk the coordinate is on."""
y_offset = self.grid_main.size_y * 2 + self.grid_main.padding
self.y_coordinate = int((self.grid_main.centre - self.y) / y_offset)
self.y += y_offset * self.y_coordinate
chunk_size_x = self.grid_main.size_x / self.grid_main.segments
chunk_size_y = self.grid_main.size_y / self.grid_main.segments
self.height = int((self.grid_main.centre - self.y) / chunk_size_y)
self.width = int((self.x + self.grid_main.size_x + chunk_size_x) / chunk_size_x) -1
def find_x_slice(self):
"""Find block IDs that are on the x segment"""
        past_middle = self.width >= self.grid_main.segments
        values = []
        if past_middle:
            count = 0
            while True:
                n_multiple = self.grid_main.segments * count
                width_addition = self.width - self.grid_main.segments + count
                if width_addition < self.grid_main.segments - 1:
                    #Each chunk overlaps two blocks on adjacent diagonals
                    values.append(n_multiple + width_addition)
                    values.append(n_multiple + width_addition + 1)
                else:
                    break
                count += 1
        elif self.width >= 0:
            starting_point = self.grid_main.segments - self.width
            values.append((starting_point - 1) * self.grid_main.segments)
            for i in range(starting_point, self.grid_main.segments):
                n_multiple = self.grid_main.segments * i
                if 0 < i < self.grid_main.segments:
                    values.append(n_multiple + i - starting_point)
                    values.append(n_multiple + i - starting_point + 1)
                else:
                    break
return values
def find_y_slice(self):
"""Find block IDs that are on the y segment"""
height = self.height
past_middle = height >= self.grid_main.segments
if past_middle:
height = 2 * self.grid_main.segments - 1 - height
values = []
count = 0
        while True:
            n_multiple = count * self.grid_main.segments
            if count <= height:
                values.append(n_multiple + height - count)
            else:
                break
            count += 1
if past_middle:
values = [pow(self.grid_main.segments, 2) - i - 1 for i in values]
return values
def find_overlap(self):
"""Combine the block IDs to find the 1 or 2 matching ones."""
x_blocks = self.find_x_slice()
y_blocks = self.find_y_slice()
if self.y_coordinate >= self.grid_main.segments:
return []
return [i for i in x_blocks if i in y_blocks]
def find_block_coordinates(self):
"""Calculate the coordinates of the block IDs, or create a fake
block if one is off the edge.
Returns a list sorted by height.
If only one value is given for which blocks are in the chunk, that
means the player is on the edge of the board. By creating a fake
block off the side of the board, it allows the correct maths to be
done without any modification.
"""
matching_blocks = self.find_overlap()
if not matching_blocks:
return None
matching_coordinates = {i: self.grid_main.relative_coordinates[i]
for i in matching_blocks}
#Create new value to handle 'off edge' cases
if len(matching_coordinates.keys()) == 1:
single_coordinate = matching_coordinates[matching_blocks[0]]
new_location = (0, -self.grid_main.centre)
#Workaround to handle the cases in the upper half
if self.height < self.grid_main.segments:
top_row_right = range(1, self.grid_main.segments)
top_row_left = [i * self.grid_main.segments
for i in range(1, self.grid_main.segments)]
if self.width >= self.grid_main.segments:
top_row_right.append(0)
else:
top_row_left.append(0)
if matching_blocks[0] in top_row_left:
new_location = (single_coordinate[0] - self.grid_main.x_offset,
single_coordinate[1] + self.grid_main.y_offset)
elif matching_blocks[0] in top_row_right:
new_location = (single_coordinate[0] + self.grid_main.x_offset,
single_coordinate[1] + self.grid_main.y_offset)
matching_coordinates[-1] = new_location
return sorted(matching_coordinates.items(), key=lambda (k, v): v[1])
def calculate(self, debug=0):
"""Calculate which block ID the coordinates are on.
This calculates the coordinates of the line between the two
blocks, then depending on if a calculation results in a positive
or negative number, it's possible to detect which block it falls
on.
By returning the (x1, y1) and (x2, y2) values, they can be linked
with turtle to see it how it works under the hood.
"""
all_blocks = self.find_block_coordinates()
if all_blocks is None:
return None
highest_block = all_blocks[1][1]
line_direction = self.width % 2 == self.height % 2
if self.grid_main.segments % 2:
line_direction = not line_direction
#print self.width, self.height
x1, y1 = (highest_block[0],
highest_block[1] - self.grid_main.y_offset * 2)
negative = -1 if line_direction else 1
x2, y2 = (x1 + self.grid_main.x_offset * negative,
y1 + self.grid_main.y_offset)
sign = (x2 - x1) * (self.y - y1) - (y2 - y1) * (self.x - x1)
sign *= negative
#Return particular things when debugging
if debug == 1:
return (x1, y1), (x2, y2)
if debug == 2:
return sign
selected_block = all_blocks[sign > 0][0]
#If extra block was added, it was -1, so it is invalid
if selected_block < 0:
return None
return selected_block + self.y_coordinate * pow(self.grid_main.segments, 2)
class CoordinateConvert(object):
def __init__(self, width, height):
self.width = width
self.height = height
self.centre = (self.width / 2, self.height / 2)
def to_pygame(self, x, y):
x = x - self.centre[0]
y = self.centre[1] - y
return (x, y)
def to_canvas(self, x, y):
x = x + self.centre[0]
y = self.centre[1] - y
return (x, y)
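As a quick sanity check of the conversion logic (a standalone restatement of the class above, so the snippet runs on its own): mapping a point into the centred pygame system and back should be the identity, and the screen centre should map to the origin.

```python
class CoordinateConvert(object):
    """Map between canvas coordinates (origin top-left, y down) and a
    centred coordinate system (origin in the middle, y up)."""
    def __init__(self, width, height):
        self.centre = (width // 2, height // 2)

    def to_pygame(self, x, y):
        return (x - self.centre[0], self.centre[1] - y)

    def to_canvas(self, x, y):
        return (x + self.centre[0], self.centre[1] - y)

convert = CoordinateConvert(640, 860)
# The two mappings invert each other, and the centre maps to the origin:
assert convert.to_canvas(*convert.to_pygame(100, 200)) == (100, 200)
assert convert.to_pygame(320, 430) == (0, 0)
```

This round-trip property is what lets MouseToBlockID work entirely in the centred system.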
class GridDrawData(object):
"""Hold the relevant data for the grid, to allow it to be shown."""
    def __init__(self, length, segments, angle, padding=5):
        self.length = length
        self.segments = segments
        self.angle = angle
        self.padding = padding
        self._calculate()
def _calculate(self):
"""Perform the main calculations on the values in __init__.
This allows updating any of the values, such as the isometric
angle, without creating a new class."""
        #Grid dimensions derived from the length and isometric angle
        self.size_x = self.length * math.cos(math.radians(self.angle))
        self.size_y = self.length * math.sin(math.radians(self.angle))
        self.x_offset = self.size_x / self.segments
        self.y_offset = self.size_y / self.segments
self.chunk_height = self.size_y * 2 + self.padding
self.centre = (self.chunk_height / 2) * self.segments - self.padding / 2
self.size_x_sm = self.size_x / self.segments
self.size_y_sm = self.size_y / self.segments
#self.segments_sq = pow(self.segments, 2)
#self.grid_data_len = pow(self.segments, 3)
#self.grid_data_range = range(self.grid_data_len)
self.length_small = self.length / self.segments
self.relative_coordinates = []
position = (0, self.centre)
for j in range(self.segments):
checkpoint = position
for i in range(self.segments):
self.relative_coordinates.append(position)
position = (position[0] + self.x_offset,
position[1] - self.y_offset)
position = (checkpoint[0] - self.x_offset,
checkpoint[1] - self.y_offset)
#Absolute coordinates for pygame
chunk_coordinates = [(0, - i * self.chunk_height) for i in range(self.segments)]
self.line_coordinates = [((self.size_x, self.centre - self.size_y),
(self.size_x, self.size_y - self.centre)),
((-self.size_x, self.centre - self.size_y),
(-self.size_x, self.size_y - self.centre)),
((0, self.centre - self.size_y * 2),
(0, -self.centre))]
for i in range(self.segments):
chunk_height = -i * self.chunk_height
self.line_coordinates += [((self.size_x, self.centre + chunk_height - self.size_y),
(0, self.centre + chunk_height - self.size_y * 2)),
((-self.size_x, self.centre + chunk_height - self.size_y),
(0, self.centre + chunk_height - self.size_y * 2))]
for coordinate in self.relative_coordinates:
start = (coordinate[0], chunk_height + coordinate[1])
self.line_coordinates += [(start,
(start[0] + self.size_x_sm, start[1] - self.size_y_sm)),
(start,
(start[0] - self.size_x_sm, start[1] - self.size_y_sm))]
class RunPygame(object):
overlay_marker = '/'
player_colours = [GREEN, LIGHTBLUE]
empty_colour = YELLOW
fps_idle = 15
fps_main = 30
fps_smooth = 120
overlay_width = 500
def __init__(self, C3DObject, screen_width=640, screen_height=860, default_length=200, default_angle=24):
self.C3DObject = C3DObject
self.width = screen_width
self.height = screen_height
self.length = default_length
self.angle = default_angle
self.player = int(not self.C3DObject.current_player)
self.convert = CoordinateConvert(self.width, self.height)
self.to_pygame = self.convert.to_pygame
self.to_canvas = self.convert.to_canvas
def _next_player(self):
self.player = int(not self.player)
def _previous_player(self):
self._next_player()
def play(self, p1=False, p2=Connect3D.bot_difficulty_default, allow_shuffle=True, end_when_no_points_left=False):
#Setup pygame
pygame.init()
self.screen = pygame.display.set_mode((self.width, self.height))
self.clock = pygame.time.Clock()
pygame.display.set_caption('Connect 3D')
background_colour = BACKGROUND
self.backdrop = pygame.Surface((self.width, self.height))
self.backdrop.set_alpha(196)
self.backdrop.fill(WHITE)
#Import the font
self.font_file = 'Miss Monkey.ttf'
        #Fall back to the default font if the file is missing
        try:
            pygame.font.Font(self.font_file, 0)
        except IOError:
            self.font_file = None
        self.font_lg = pygame.font.Font(self.font_file, 36)
self.font_lg_size = self.font_lg.render('', 1, BLACK).get_rect()[3]
self.font_md = pygame.font.Font(self.font_file, 24)
self.font_md_size = self.font_md.render('', 1, BLACK).get_rect()[3]
self.font_sm = pygame.font.Font(self.font_file, 18)
self.font_sm_size = self.font_sm.render('', 1, BLACK).get_rect()[3]
        self.draw_data = GridDrawData(self.length,
                                      self.C3DObject.segments,
                                      self.angle,
                                      padding=5)
#NOTE: These will all be cleaned up later, the grouping isn't great currently
held_keys = {'angle': 0,
'size': 0}
#Store one off instructions to wipe later
game_flags = {'clicked': False,
'mouse_used': True,
'quit': False,
'recalculate': False,
'reset': False,
'hover': False,
'flipped': False,
'disable_background_clicks': False,
'winner': None}
#Store information that shouldn't be wiped
game_data = {'players': [p1, p2],
'overlay': 'options',
'move_number': 0,
'shuffle': [allow_shuffle, 3],
'debug': False}
#Store temporary things to update
store_data = {'waiting': False,
'waiting_start': 0,
'shuffle_count': 0,
'temp_fps': self.fps_main,
'player_hover': None,
'shuffle_hover': None,
'new_game': False,
'continue': False,
'exit': False,
'instructions': False,
'debug_hover': None}
block_data = {'id': None,
'object': None,
'taken': False}
tick_data = {'old': 0,
'new': 0,
'update': 4, #How many ticks between each held key command
'total': 0}
mouse_data = pygame.mouse.get_pos()
#How long to wait before accepting a move
moving_wait = 0.5
#For controlling how the angle and length of grid update
angle_increment = 0.25
angle_max = 35
length_exponential = 1.1
length_increment = 0.5
length_multiplier = 0.01
time_current = time.time()
time_update = 0.01
while True:
self.clock.tick(store_data['temp_fps'] or self.fps_idle)
tick_data['new'] = pygame.time.get_ticks()
if game_flags['quit']:
return self.C3DObject
#Check if no spaces are left
if '' not in self.C3DObject.grid_data:
game_flags['winner'] = self.C3DObject._get_winning_player()
#Need to come up with some menu for the winner
#Print so it reminds me each time this happens
print 'finish this'
#Reset loop
self.screen.fill(background_colour)
if tick_data['total']:
game_flags['recalculate'] = False
game_flags['mouse_used'] = False
game_flags['clicked'] = False
game_flags['flipped'] = False
game_flags['disable_background_clicks'] = False
store_data['temp_fps'] = None
tick_data['total'] += 1
#Reinitialise the grid
if game_flags['reset']:
game_flags['reset'] = False
game_data['move_number'] = 0
game_data['shuffle'][0] = allow_shuffle
game_data['players'] = (p1, p2)
self.C3DObject = Connect3D(self.C3DObject.segments)
game_flags['hover'] = None
game_flags['recalculate'] = True
store_data['waiting'] = False
game_flags['winner'] = None
if game_flags['hover'] is not None:
if self.C3DObject.grid_data[game_flags['hover']] == self.overlay_marker:
self.C3DObject.grid_data[game_flags['hover']] = ''
game_flags['hover'] = None
if game_data['overlay']:
game_flags['disable_background_clicks'] = True
#Delay each go
if store_data['waiting']:
game_flags['disable_background_clicks'] = True
if store_data['waiting_start'] < time.time():
game_flags['recalculate'] = True
attempted_move = self.C3DObject.make_move(store_data['waiting'][1], store_data['waiting'][0])
if attempted_move is not None:
game_data['move_number'] += 1
self.C3DObject.update_score()
store_data['shuffle_count'] += 1
if store_data['shuffle_count'] >= game_data['shuffle'][1] and game_data['shuffle'][0]:
store_data['shuffle_count'] = 0
self.C3DObject.shuffle()
game_flags['flipped'] = True
else:
game_flags['flipped'] = False
else:
self._next_player()
print "Invalid move: {}".format(store_data['waiting'][0])
store_data['waiting'] = False
else:
try:
self.C3DObject.grid_data[store_data['waiting'][0]] = 9 - store_data['waiting'][1]
except TypeError:
print store_data['waiting'], ai_turn
raise TypeError('trying to get to the bottom of this')
#Run the AI
ai_turn = None
if game_data['players'][self.player] is not False:
if not game_flags['disable_background_clicks'] and game_flags['winner'] is None:
ai_turn = SimpleC3DAI(self.C3DObject, self.player, difficulty=game_data['players'][self.player]).calculate_next_move()
#Event loop
for event in pygame.event.get():
if event.type == pygame.QUIT:
return
#Get single key presses
if event.type == pygame.KEYDOWN:
game_flags['recalculate'] = True
if event.key == pygame.K_ESCAPE:
if game_data['overlay'] is None:
game_data['overlay'] = 'options'
else:
game_data['overlay'] = None
if event.key == pygame.K_RIGHTBRACKET:
self.C3DObject.segments += 1
game_flags['reset'] = True
if event.key == pygame.K_LEFTBRACKET:
self.C3DObject.segments -= 1
self.C3DObject.segments = max(1, self.C3DObject.segments)
game_flags['reset'] = True
if event.key == pygame.K_UP:
held_keys['angle'] = 1
if event.key == pygame.K_DOWN:
held_keys['angle'] = -1
if event.key == pygame.K_RIGHT:
held_keys['size'] = 1
if event.key == pygame.K_LEFT:
held_keys['size'] = -1
#Get mouse clicks
if event.type == pygame.MOUSEBUTTONDOWN:
game_flags['clicked'] = event.button
game_flags['mouse_used'] = True
if event.type == pygame.MOUSEMOTION:
game_flags['mouse_used'] = True
#Get held down key presses, but only update if enough ticks have passed
key = pygame.key.get_pressed()
update_yet = False
if tick_data['new'] - tick_data['old'] > tick_data['update']:
update_yet = True
tick_data['old'] = pygame.time.get_ticks()
if held_keys['angle']:
if not (key[pygame.K_UP] or key[pygame.K_DOWN]):
held_keys['angle'] = 0
elif update_yet:
self.draw_data.angle += angle_increment * held_keys['angle']
game_flags['recalculate'] = True
store_data['temp_fps'] = self.fps_smooth
if held_keys['size']:
if not (key[pygame.K_LEFT] or key[pygame.K_RIGHT]):
held_keys['size'] = 0
elif update_yet:
length_exp = (max(length_increment,
(pow(self.draw_data.length, length_exponential)
- 1 / length_increment))
* length_multiplier)
self.draw_data.length += length_exp * held_keys['size']
game_flags['recalculate'] = True
store_data['temp_fps'] = self.fps_smooth
#Update mouse information
if game_flags['mouse_used'] or game_flags['recalculate']:
game_flags['recalculate'] = True
mouse_data = pygame.mouse.get_pos()
x, y = self.to_pygame(*mouse_data)
block_data['object'] = MouseToBlockID(x, y, self.draw_data)
block_data['id'] = block_data['object'].calculate()
block_data['taken'] = True
if block_data['id'] is not None and ai_turn is None:
block_data['taken'] = self.C3DObject.grid_data[block_data['id']] != ''
#If mouse was clicked
if not game_flags['disable_background_clicks']:
if game_flags['clicked'] == 1 and not block_data['taken'] or ai_turn is not None:
store_data['waiting'] = (ai_turn if ai_turn is not None else block_data['id'], self.player)
store_data['waiting_start'] = time.time() + moving_wait
self._next_player()
#Highlight square
if not block_data['taken'] and not store_data['waiting'] and not game_data['overlay']:
self.C3DObject.grid_data[block_data['id']] = self.overlay_marker
game_flags['hover'] = block_data['id']
#Recalculate the data to draw the grid
if game_flags['recalculate']:
if not store_data['temp_fps']:
store_data['temp_fps'] = self.fps_main
self.draw_data.segments = self.C3DObject.segments
self.draw_data.length = float(max((pow(1 / length_increment, 2) * self.draw_data.segments), self.draw_data.length, 2))
self.draw_data.angle = float(max(angle_increment, min(89, self.draw_data.angle, angle_max)))
self.draw_data._calculate()
if game_flags['reset']:
continue
#Draw coloured squares
for i in self.C3DObject.range_data:
if self.C3DObject.grid_data[i] != '':
chunk = i / self.C3DObject.segments_squared
coordinate = list(self.draw_data.relative_coordinates[i % self.C3DObject.segments_squared])
coordinate[1] -= chunk * self.draw_data.chunk_height
square = [coordinate,
(coordinate[0] + self.draw_data.size_x_sm,
coordinate[1] - self.draw_data.size_y_sm),
(coordinate[0],
coordinate[1] - self.draw_data.size_y_sm * 2),
(coordinate[0] - self.draw_data.size_x_sm,
coordinate[1] - self.draw_data.size_y_sm),
coordinate]
#Player has mouse over square
block_colour = None
if self.C3DObject.grid_data[i] == self.overlay_marker:
if game_data['players'][self.player] is False:
block_colour = mix_colour(WHITE, WHITE, self.player_colours[self.player])
#Square is taken by a player
else:
j = self.C3DObject.grid_data[i]
#Square is being moved into, mix with red and white
mix = False
if isinstance(j, int) and j > 1:
j = 9 - j
moving_block = square
mix = True
block_colour = self.player_colours[j]
if mix:
block_colour = mix_colour(block_colour, GREY)
if block_colour is not None:
pygame.draw.polygon(self.screen,
block_colour,
[self.to_canvas(*corner)
for corner in square],
0)
#Draw grid
for line in self.draw_data.line_coordinates:
pygame.draw.aaline(self.screen,
BLACK,
self.to_canvas(*line[0]),
self.to_canvas(*line[1]),
1)
self._draw_score(game_flags['winner'])
if game_data['debug']:
self._draw_debug(block_data)
if game_data['overlay']:
store_data['temp_fps'] = self.fps_main
self.blit_list = []
self.rect_list = []
self.screen.blit(self.backdrop, (0, 0))
screen_width_offset = (self.width - self.overlay_width) / 2
#Set page titles
            if game_data['overlay'] == 'instructions':
                title_message = 'Instructions'
                subtitle_message = ''
elif game_data['move_number'] + bool(store_data['waiting']) and game_data['overlay'] == 'options':
title_message = 'Options'
subtitle_message = ''
else:
title_message = 'Connect 3D'
subtitle_message = 'By Peter Hunt'
title_text = self.font_lg.render(title_message, 1, BLACK)
title_size = title_text.get_rect()[2:]
subtitle_text = self.font_md.render(subtitle_message, 1, BLACK)
subtitle_size = subtitle_text.get_rect()[2:]
            current_height = title_size[1]
            if subtitle_message:
                current_height += subtitle_size[1]
if game_data['overlay'] == 'options':
#Player options
players_unsaved = [p1, p2]
players_original = list(game_data['players'])
player_hover = store_data['player_hover']
store_data['player_hover'] = None
options = ['Human', 'Beginner', 'Easy', 'Medium', 'Hard', 'Extreme']
for player in range(len(game_data['players'])):
if players_unsaved[player] is False:
players_unsaved[player] = -1
else:
players_unsaved[player] = get_bot_difficulty(players_unsaved[player], _debug=True)
if players_original[player] is False:
players_original[player] = -1
else:
players_original[player] = get_bot_difficulty(players_original[player], _debug=True)
params = []
for i in range(len(options)):
params.append([i == players_unsaved[player] or players_unsaved[player] < 0 and not i,
i == players_original[player] or players_original[player] < 0 and not i,
[player, i] == player_hover])
option_data = self._draw_options('Player {}: '.format(player),
options,
params,
screen_width_offset,
current_height)
selected_option, options_size = option_data
current_height += options_size
            if not player:
                current_height += self.font_md_size
#Calculate mouse info
if selected_option is not None:
player_set = selected_option - 1
if player_set < 0:
player_set = False
store_data['player_hover'] = [player, selected_option]
if game_flags['clicked']:
if not player:
p1 = player_set
else:
p2 = player_set
if not game_data['move_number']:
game_data['players'] = (p1, p2)
#Ask whether to flip the grid
options = ['Yes', 'No']
params = []
for i in range(len(options)):
params.append([not i and allow_shuffle or i and not allow_shuffle,
not i and game_data['shuffle'][0] or i and not game_data['shuffle'][0],
not i and store_data['shuffle_hover'] or i and not store_data['shuffle_hover'] and store_data['shuffle_hover'] is not None])
option_data = self._draw_options('Flip grid every 3 goes? ',
['Yes', 'No'],
params,
screen_width_offset,
current_height)
selected_option, options_size = option_data
#Calculate mouse info
store_data['shuffle_hover'] = None
if selected_option is not None:
store_data['shuffle_hover'] = not selected_option
if game_flags['clicked']:
allow_shuffle = not selected_option
if not game_data['move_number']:
game_data['shuffle'][0] = allow_shuffle
#Toggle hidden debug option with ctrl+alt+d
if not (not key[pygame.K_d]
or not (key[pygame.K_RCTRL] or key[pygame.K_LCTRL])
or not (key[pygame.K_RALT] or key[pygame.K_LALT])):
store_data['debug_hover']
options = ['Yes', 'No']
params = []
for i in range(len(options)):
params.append([not i and game_data['debug'] or i and not game_data['debug'],
not i and game_data['debug'] or i and not game_data['debug'],
not i and store_data['debug_hover'] or i and not store_data['debug_hover'] and store_data['debug_hover'] is not None])
option_data = self._draw_options('Show debug info? ',
['Yes', 'No'],
params,
screen_width_offset,
current_height)
selected_option, options_size = option_data
store_data['debug_hover'] = None
if selected_option is not None:
store_data['debug_hover'] = not selected_option
if game_flags['clicked']:
game_data['debug'] = not selected_option
box_height = [current_height]
#Tell to restart game
if game_data['move_number']:
current_height += box_spacing
restart_message = 'Restart game to apply settings.'
restart_text = self.font_md.render(restart_message, 1, BLACK)
restart_size = restart_text.get_rect()[2:]
self.blit_list.append((restart_text, ((self.width - restart_size[0]) / 2, current_height)))
#Continue button
if self._pygame_button('Continue',
store_data['continue'],
current_height,
-1):
store_data['continue'] = True
if game_flags['clicked']:
game_data['overlay'] = None
else:
store_data['continue'] = False
box_height.append(current_height)
current_height += box_spacing
#Instructions button
if self._pygame_button('Instructions' if game_data['move_number'] else 'Help',
store_data['instructions'],
box_height[0],
0 if game_data['move_number'] else 1):
store_data['instructions'] = True
if game_flags['clicked']:
game_data['overlay'] = 'instructions'
else:
store_data['instructions'] = False
#New game button
if self._pygame_button('New Game' if game_data['move_number'] else 'Start',
store_data['new_game'],
box_height[bool(game_data['move_number'])],
bool(game_data['move_number']) if game_data['move_number'] else -1):
store_data['new_game'] = True
if game_flags['clicked']:
game_flags['reset'] = True
game_data['overlay'] = None
else:
store_data['new_game'] = False
#Quit button
if self._pygame_button('Quit to Desktop' if game_data['move_number'] else 'Quit',
store_data['exit'],
current_height):
store_data['exit'] = True
if game_flags['clicked']:
game_flags['quit'] = True
else:
store_data['exit'] = False
#Draw background
pygame.draw.rect(self.screen, WHITE, background_square, 0)
pygame.draw.rect(self.screen, BLACK, background_square, 1)
for i in self.rect_list:
rect_data = [self.screen] + i
pygame.draw.rect(*rect_data)
for i in self.blit_list:
self.screen.blit(*i)
pygame.display.flip()
def _pygame_button(self, message, hover, current_height, width_multiplier=0):
multiplier = 3
#Set up text
text_colour = BLACK if hover else GREY
text_object = self.font_lg.render(message, 1, text_colour)
text_size = text_object.get_rect()[2:]
centre_offset = self.width / 10 * width_multiplier
text_x = (self.width - text_size[0]) / 2
if width_multiplier > 0:
text_x += text_size[0] / 2
if width_multiplier < 0:
text_x -= text_size[0] / 2
text_x += centre_offset
text_square = (text_x - self.option_padding * (multiplier + 1),
current_height - self.option_padding * multiplier, #reconstructed y value - the indexing below expects a 4-item rect
text_size[0] + self.option_padding * (2 * multiplier + 2),
text_size[1] + self.option_padding * (2 * multiplier - 1))
self.blit_list.append((text_object, (text_x, current_height)))
#Detect if mouse is over it
x, y = pygame.mouse.get_pos()
in_x = text_square[0] < x < text_square[0] + text_square[2]
in_y = text_square[1] < y < text_square[1] + text_square[3]
if in_x and in_y:
return True
return False
def _draw_options(self, message, options, params, screen_width_offset, current_height):
"""Draw a list of options and check for inputs.
Parameters:
message (str): Text to display next to the options.
options (list): Names of the options.
params (list): Contains information on the options.
It needs to have the same amount of records as
options, with each of these being a list of 3 items.
These are used to colour the text in the correct
way.
param[option][0] = new selection
param[option][1] = currently active
param[option][2] = mouse hovering over
screen_width_offset (int): The X position to draw the
text.
current_height (int/float): The Y position to draw the
text.
"""
message_text = self.font_md.render(message, 1, BLACK)
message_size = message_text.get_rect()[2:]
option_text = [self.font_md.render(i, 1, BLACK) for i in options]
option_size = [i.get_rect()[2:] for i in option_text]
option_square_list = []
for i in range(len(options)):
width_offset = (sum(j[0] + 2 for j in option_size[:i])
+ self.padding[0] * (i + 1) #gap between the start
+ message_size[0] + screen_width_offset)
option_square_list.append(option_square)
#Set colours
option_colours = list(SELECTION['Default'])
param_order = ('Waiting', 'Selected', 'Hover')
for j in range(len(params[i])):
if params[i][j]:
rect_colour, text_colour = list(SELECTION[param_order[j]])
if rect_colour is not None:
option_colours[0] = rect_colour
if text_colour is not None:
option_colours[1] = text_colour
rect_colour, text_colour = option_colours
self.rect_list.append([rect_colour, option_square])
self.blit_list.append((self.font_md.render(options[i], 1, text_colour), (width_offset, current_height)))
x, y = pygame.mouse.get_pos()
selected_square = None
for square in range(len(option_square_list)):
option_square = option_square_list[square]
in_x = option_square[0] < x < option_square[0] + option_square[2]
in_y = option_square[1] < y < option_square[1] + option_square[3]
if in_x and in_y:
selected_square = square
return (selected_square, message_size[1])
def _format_output(self, text):
"""Format text to remove invalid characters."""
left_bracket = ('[', '{')
right_bracket = (']', '}')
for i in left_bracket:
text = text.replace(i, '(')
for i in right_bracket:
text = text.replace(i, ')')
return text
def _draw_score(self, winner):
"""Draw the title."""
#Format scores
point_marker = '/'
p0_points = self.C3DObject.current_points[0]
p1_points = self.C3DObject.current_points[1]
p0_font_top = self.font_md.render('Player 0', 1, BLACK, self.player_colours[0])
p1_font_top = self.font_md.render('Player 1', 1, BLACK, self.player_colours[1])
p0_font_bottom = self.font_lg.render(point_marker * p0_points, 1, BLACK)
p1_font_bottom = self.font_lg.render(point_marker * p1_points, 1, BLACK)
p_size_top = p1_font_top.get_rect()[2:]
p_size_bottom = p1_font_bottom.get_rect()[2:]
if winner is None:
go_message = "Player {}'s turn!".format(self.player)
else:
if len(winner) != 1:
go_message = 'The game was a draw!'
else:
go_message = 'Player {} won!'.format(winner[0])
go_font = self.font_lg.render(go_message, 1, BLACK)
go_size = go_font.get_rect()[2:]
self.screen.blit(go_font, ((self.width - go_size[0]) / 2, self.padding[1] * 3))
def _draw_debug(self, block_data):
"""Show the debug information."""
mouse_data = pygame.mouse.get_pos()
x, y = self.to_pygame(*mouse_data)
debug_coordinates = block_data['object'].calculate(debug=1)
if debug_coordinates is not None:
if all(i is not None for i in debug_coordinates):
pygame.draw.aaline(self.screen,
RED,
pygame.mouse.get_pos(),
self.to_canvas(*debug_coordinates[1]),
1)
pygame.draw.line(self.screen,
RED,
self.to_canvas(*debug_coordinates[0]),
self.to_canvas(*debug_coordinates[1]),
2)
possible_blocks = block_data['object'].find_overlap()
y_mult = str(block_data['object'].y_coordinate * self.C3DObject.segments_squared)
if y_mult[0] != '-':
y_mult = '+{}'.format(y_mult)
info = ['DEBUG INFO',
'FPS: {}'.format(int(round(self.clock.get_fps(), 0))),
'Segments: {}'.format(self.C3DObject.segments),
'Angle: {}'.format(self.draw_data.angle),
'Side length: {}'.format(self.draw_data.length),
'Coordinates: {}'.format(mouse_data),
'Chunk: {}'.format((block_data['object'].width,
block_data['object'].height,
block_data['object'].y_coordinate)),
'X Slice: {}'.format(block_data['object'].find_x_slice()),
'Y Slice: {}'.format(block_data['object'].find_y_slice()),
'Possible blocks: {} {}'.format(possible_blocks, y_mult),
'Block weight: {}'.format(block_data['object'].calculate(debug=2)),
'Block ID: {}'.format(block_data['object'].calculate())]
font_render = [self.font_sm.render(self._format_output(i), 1, BLACK) for i in info]
font_size = [i.get_rect()[2:] for i in font_render]
for i in range(len(info)):
message_height = self.height - sum(j[1] for j in font_size[i:])
self.screen.blit(font_render[i], (0, message_height))
#Format the AI text output
ai_message = []
for i in self.C3DObject.ai_message:
#Split into chunks of 50 if longer
message_len = len(i)
message = [self._format_output(i[n * 50:(n + 1) * 50]) for n in range(round_up(message_len / 50.0))]
ai_message += message
font_render = [self.font_sm.render(i, 1, BLACK) for i in ai_message]
font_size = [i.get_rect()[2:] for i in font_render]
for i in range(len(ai_message)):
message_height = self.height - sum(j[1] for j in font_size[i:])
self.screen.blit(font_render[i], (self.width - font_size[i][0], message_height))
### Background
The game is designed to be a 4x4x4 grid, but I didn't want to limit it, so everything had to be coded to work with any value (I'll refer to these as segments). When playing the game, use [ and ] to change the number of segments, though be warned the AI will take exponentially longer, as each new segment creates 9x more processing. Also, you can change the side length and angle with the arrow keys.
### Mouse coordinate to block ID
I initially drew the game with turtle, which was quite slow, but it didn't require coordinates so it was easy. However, converting the mouse coordinates into which block the mouse was over wasn't easy, since the grid was isometric rather than made of regular squares.
Turtle coordinates have (0, 0) in the middle, whereas pygame coordinates have (0, 0) in the top left, so as I wrote this function for turtle, there's an extra layer in place to convert the absolute coordinates from the mouse input into relative coordinates for this.
1. I got which level the mouse was on, and then converted it to the top level, so that I didn't have to worry about getting the code working on all levels.
2. I split the top level into 2D 'chunks' that were half the size of the blocks, so that there was one chunk for each connection between a block. I converted the mouse coordinates into which chunks they were in.
3. With a lot of effort, I figured out 3 formulas (1 for X, 2 for Y) which would get all block IDs on those rows, for any number of segments.
4. I'd compare the lists to find matches between the two, which, in the middle of the grid, would result in 2 blocks. At the edge, it'd result in 1, so to get the next part working correctly, I had to make it come up with a fake block, so that it'd be able to compare the two.
5. Using a formula I found for detecting whether a point is over or under a line (the sign of a line-side test), I find if the value is positive or negative, which, depending on whether the slope of the line is going up or down, gives the correct block ID. I noticed that if both the X and Y chunks are positive, or both are negative, the line between the two blocks slopes one way, and if one is positive and one is negative, the line slopes the other way (this is then reversed for an odd number of segments), so with that final tweak I got it working correctly.
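The "over or under a line" check mentioned in step 5 is, I believe, just the sign of a 2D cross product; a minimal sketch (the function name and argument order are mine, not the game's code):

```python
def side_of_line(px, py, ax, ay, bx, by):
    """Sign of the 2D cross product (b - a) x (p - a): positive when the
    point p lies on one side of the line through a and b, negative on the
    other, and zero when p is exactly on the line."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)
```

Flipping which sign maps to which block, depending on the slope direction, matches the final tweak described above.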
### Grid Draw Data
Since the only thing provided is number of segments, angle, line length and padding, each time any of this changes I need to recalculate virtually everything to keep the game working. This class takes care of that, and stores all the required information.
### AI
The AI difficulty just determines how likely it is to not notice something, and how likely it is to change its priorities, but it still does all the calculations beforehand. When the game is in the early stages, the chance of not noticing an n-1 row is greatly reduced, since it's obvious to the human eye as well, and otherwise the AI just looks stupid.
It will look ahead 1 move to see if there's any rows of n-1, and if not, for every space in the grid, it'll look ahead 1 move to see if there's any rows of n-2.
If it is n-1, the first priority is to block the enemy, then gain points. This reverses for n-2, otherwise the AI only blocks and never does anything itself. If there is nothing found from this way, it'll determine the maximum number of points that can be gained from each block, and pick the best option (e.g. if you switch to an odd number of segments, the AI will always pick the middle block first).
Something I added yesterday was a bit of prediction, which works alongside the first method I mentioned, as I noticed the extreme AI was easy to trick. If you try to trick the AI (as in you line up 2 points in 1 move, so if one is blocked you still get another point), it'll now notice that and block it. Likewise it can do the same to you. You can see this happening if you watch 2 extreme AIs battle it out with each other.
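The priority ordering described here boils down to a small selection rule; a rough sketch (the names and list-based interface are illustrative, not the game's actual API):

```python
def choose_move(scoring_moves, blocking_moves, row_is_n_minus_1):
    """Pick a move: with a row of n-1, blocking the enemy beats scoring;
    with rows of n-2 the priorities reverse."""
    if row_is_n_minus_1:
        ordered = blocking_moves + scoring_moves
    else:
        ordered = scoring_moves + blocking_moves
    return ordered[0] if ordered else None
```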
### Pygame
Since most of the time not much is going on, I made the frame rate variable, so it goes at 15 FPS if there is no movement (any less than that and you notice a delay when you try to move), 30 FPS if you are moving the mouse or have a menu open, and 120 FPS if you are resizing the grid.
The options overlay is drawn as part of the loop after everything else. I slightly modify the layout if the first move has been taken, and disable the instant updating of options (otherwise if you are losing you can temporarily activate the AI and win).
With the way I did the buttons, they look bad if two are next to each other and are a different size, so I tried my best to keep names a similar length (hence why 'help' changes to 'instructions').
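The variable frame-rate policy amounts to a small selection function; a sketch with the values quoted above (not the game's actual code):

```python
def target_fps(mouse_moving, menu_open, resizing_grid):
    """Frame rate for the current activity level: 120 FPS while resizing
    the grid, 30 FPS for mouse movement or open menus, 15 FPS when idle."""
    if resizing_grid:
        return 120
    if mouse_moving or menu_open:
        return 30
    return 15
```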
### 1. MouseToBlockID
1. Normally an instance of a class represents some thing, that is, a persistent object or data structure. But an instance of MouseToBlockID does not seem to represent any kind of thing. What you need here is a function that takes game coordinates and returns a block index.
See Jack Diederich's talk "Stop Writing Classes".
Since this function makes use of the attributes of the GridDrawData class, this would best be written as a method on that class:
def game_to_block_index(self, gx, gy):
    """Return index of block at the game coordinates gx, gy, or None if
    there is no block at those coordinates."""
2. The naming of variables needs work. When you have coordinates in three dimensions, it's conventional to call them "x", "y" and "z". But here you use the name y_coordinate for "z". That's bound to lead to confusion.
3. The code is extraordinarily long and complex for what should be a simple operation. There are more than 200 lines in this class, but converting game coordinates to a block index should be a simple operation that proceeds as follows:
Adjust gy so that it is relative to the origin of the bottom plane (the z=0 plane) rather than relative to the centre of the window:
gy += self.centre
Find z:
z = int(gy // self.chunk_height)
Adjust gy so that it is relative to the origin of its z-plane:
gy -= z * self.chunk_height
Reverse the isometric grid transform:
dx = gx / self.size_x_sm
dy = gy / self.size_y_sm
x = int((dy - dx) // 2)
y = int((dy + dx) // 2)
Check that the result is in bounds, and encode position as block index:
n = self.segments
if 0 <= x < n and 0 <= y < n and 0 <= z < n:
    return n ** 3 - 1 - (x + n * (y + n * z))
else:
    return None
And that's it. Just twelve lines.
4. It will be handy to encapsulate the transformation from block coordinates to block index in its own method:
def block_index(self, x, y, z):
    """Return the block index corresponding to the block at x, y, z, or
    None if there is no block at those coordinates.
    """
    n = self.segments
    if 0 <= x < n and 0 <= y < n and 0 <= z < n:
        return n ** 3 - 1 - (x + n * (y + n * z))
    else:
        return None
See below for how this can be used to simplify the drawing code.
5. The encoding of block indexes is backwards, with (0, 0, 0) corresponding to block index 63 and (3, 3, 3) to block index 0. You'll see that I had to write n ** 3 - 1 - (x + n * (y + n * z)) whereas x + n * (y + n * z) would be the more natural encoding.
### 2. GridDrawData
1. The computation of game coordinates for the endpoints of the lines is verbose, hard to read, and hard to check:
self.line_coordinates = [((self.size_x, self.centre - self.size_y),
                          (self.size_x, self.size_y - self.centre)),
                         ((-self.size_x, self.centre - self.size_y),
                          (-self.size_x, self.size_y - self.centre)),
                         ((0, self.centre - self.size_y * 2),
                          (0, -self.centre))]
What you need is a method that transforms block coordinates into game coordinates:
def block_to_game(self, x, y, z):
    """Return the game coordinates corresponding to block x, y, z."""
    gx = (x - y) * self.size_x_sm
    gy = (x + y) * self.size_y_sm + z * self.chunk_height - self.centre
    return gx, gy
Then you can compute all the lines using block coordinates, which is much easier to read and check:
n = self.segments
g = self.block_to_game
self.lines = [(g(n, 0, n - 1), g(n, 0, 0)),
              (g(0, n, n - 1), g(0, n, 0)),
              (g(0, 0, n - 1), g(0, 0, 0))]
for i, j, k in itertools.product(range(n + 1), range(n + 1), range(n)):
    self.lines.extend([(g(i, 0, k), g(i, n, k)),
                       (g(0, j, k), g(n, j, k))])
2. Using block_to_game you can avoid the need for relative_coordinates. Instead of:
for i in self.C3DObject.range_data:
    if self.C3DObject.grid_data[i] != '':
        chunk = i / self.C3DObject.segments_squared
        coordinate = list(self.draw_data.relative_coordinates[i % self.C3DObject.segments_squared])
        coordinate[1] -= chunk * self.draw_data.chunk_height
        square = [coordinate,
                  (coordinate[0] + self.draw_data.size_x_sm,
                   coordinate[1] - self.draw_data.size_y_sm),
                  (coordinate[0],
                   coordinate[1] - self.draw_data.size_y_sm * 2),
                  (coordinate[0] - self.draw_data.size_x_sm,
                   coordinate[1] - self.draw_data.size_y_sm),
                  coordinate]
write:
n = self.draw_data.segments
g = self.draw_data.block_to_game
for x, y, z in itertools.product(range(n), repeat=3):
    i = self.draw_data.block_index(x, y, z)
    if self.C3DObject.grid_data[i] != '':
        square = [g(x, y, z),
                  g(x + 1, y, z),
                  g(x + 1, y + 1, z),
                  g(x, y + 1, z)]
### 3. RunPygame
1. game_flags['recalculate'] gets set whenever game_flags['mouse_used'] is set. This means that the grid gets unnecessarily recalculated every time the mouse moves.
### 4. Isometric coordinates
Here's an explanation of how you can derive the isometric coordinate transformation and its inverse. Let's take the forward transformation first. You start with Cartesian coordinates $x, y$ and you want isometric coordinates $ix, iy$.
It's easiest to work this out if you introduce an intermediate set of coordinates: uniform isometric coordinates $ux, uy$ where the scale is the same in both dimensions (the diamonds are squares) and the height and width of each diamond is 1.
Now, the transformations are easy: to go from Cartesian coordinates to uniform isometric coordinates we use: $$\eqalign{ ux &= {y + x \over 2} \cr uy &= {y - x \over 2} }$$ and then from uniform to plain isometric coordinates we use scale factors $sx, sy$: $$\eqalign{ ix &= ux \cdot sx \cr iy &= uy \cdot sy }$$ Putting these together: $$\eqalign{ ix &= (y + x){sx \over 2} \cr iy &= (y - x){sy \over 2} }$$ To reverse the transformation, treat these as simultaneous equations and solve for $x$ and $y$: $$\eqalign{ x &= {ix \over sx} - {iy \over sy} \cr y &= {ix \over sx} + {iy \over sy} }$$
(These formulae aren't quite the same as the ones I used in the code above, but that's because your backwards block numbering scheme required me to swap $x$ and $y$, and because your size_x_sm is half of the scale factor $sx$.)
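As a quick sanity check, the forward and inverse transformations above round-trip; a small sketch using the scale factors $sx, sy$ directly (the function names are mine):

```python
def cartesian_to_iso(x, y, sx, sy):
    # Forward transform: via uniform isometric coordinates, then scale.
    return (y + x) * sx / 2.0, (y - x) * sy / 2.0

def iso_to_cartesian(ix, iy, sx, sy):
    # Inverse: solve the two simultaneous equations for x and y.
    return ix / sx - iy / sy, ix / sx + iy / sy
```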
• Hey thanks, I'm currently down south reading this off my phone so will just reply to a couple of points. As to #3, nice spot, I'd done that so when the grid is recalculated it updates the mouse data, forgot it'd work the other way round too. With #1.1, would you suggest for something so long just having them as separate functions? Putting them in a class for the purpose of grouping makes it feel a bit cleaner to me. I was using y for vertical because that's what Maya uses, and it avoids confusing me :) – Peter Nov 2 '15 at 15:56
• My code was super long and complex since it seemed to be the only way to do it. I obviously can't test yours now (got a small phone so can't even read the code properly), but if it does the exact same job in 12 lines I'll be really amazed haha – Peter Nov 2 '15 at 15:58
• 1. If you need to organize a bunch of functions, you can put them in a module. 2. If you're using y for the vertical coordinate, then the other two (horizontal) coordinates should be named x and z. – Gareth Rees Nov 2 '15 at 15:59
• Just tried #1, and as well as using 10% of the lines, it's also around 8x more efficient, thanks so much (I also can't believe I spent an entire day on something apparently so simple to do, though I suppose mine looked cooler with the debug info enabled haha). I'll aim to stick with z as the vertical axis from now on too. As to #2, I'll properly check it out shortly, but line_coordinates was a messy fix to convert my turtle code to pygame, since I didn't know how else to calculate the end points from block ID. It is probably my least favourite bit, so again, thanks :D – Peter Nov 3 '15 at 8:25
• I did some tests with your #2 and it really pains me to say (since you've coded it so well), both are quite a bit slower. The block_to_game function and vice versa is a really nice idea, but I think the amount of times the function is called causes the hit. #2.1 is around 2-3 times slower (with a slight bug that doubles up some lines though), and #2.2 is around 5x slower. Not a huge problem with the default settings, but if someone cranks up the number of segments, the performance drops quite fast. – Peter Nov 5 '15 at 8:48
|
|
# Can I express any odd number with a power of two minus a prime?
I have been running a computer program trying to see if I can represent any odd number in the form of
$$2^a - b$$
With $b$ a prime number. I have seen an earlier proof by Cohen and Selfridge regarding odd numbers that are neither a sum nor a difference of a power of two and a prime, and I was curious to see if anyone has found an odd number that couldn't be represented using the above formula.
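For what it's worth, the kind of search described here can be sketched in a few lines (the trial-division primality test and the exponent bound are my own choices):

```python
def is_prime(n):
    """Trial-division primality test; fine for small candidates."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def find_exponent(x, max_a=40):
    """Smallest a with 2**a - x prime, or None if none up to max_a."""
    for a in range(1, max_a + 1):
        b = (1 << a) - x
        if b > 1 and is_prime(b):
            return a
    return None
```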
Fred Cohen and J. L. Selfridge, Not Every Number is the Sum or Difference of Two Prime Powers contains an example of such a number:
Corollary: 47867742232066880047611079 is prime and neither the sum nor difference of a power of two and a prime.
Erdos's argument essentially gives that there is an infinite arithmetic progression of odd numbers containing no element of the form power of two minus a prime.
Define
\begin{array}{cccc} i & a_i & m_i & p_i \\ 1 & 1 & 2 & 3 \\ 2 & 2 & 4 & 5 \\ 3 & 4 & 8 & 17 \\ 4 & 8 & 16 & 257 \\ 5 & 16 & 32 & 65537 \\ 6 & 0 & 64 & 641 \\ 7 & 32 & 64 & 6700417 \\ \end{array}
Set $A_0=\{x:x\equiv 1 \pmod{8}\}$, $A_i=\{x:x\equiv 2^{a_i} \pmod{p_i}\}$ ($1\leq i\leq 7$), $B=A_0\cap A_1\cap\cdots\cap A_7$.
$B$ is an AP consisting of odd numbers. Assume that $x\in B$ and $x=2^n-q$ for some odd prime $q$. For some $1\leq i\leq 7$, we have $n\equiv a_i \pmod{m_i}$, therefore $x\equiv 2^{a_i}-q \pmod{p_i}$. As $x\in A_i$, we also have $x\equiv 2^{a_i} \pmod{p_i}$, consequently $q=p_i$. But this is impossible: $2^n\equiv 0 \pmod{8}$ for $n\geq 3$ and, as $x\in A_0$, $x\equiv 1 \pmod{8}$, so $q=2^n-x\equiv 7 \pmod{8}$, whereas every $p_i\equiv 1,3,5 \pmod{8}$.
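The two facts the argument relies on are easy to verify by machine: the residues $a_i \pmod{m_i}$ cover every exponent $n$, and $2^{m_i}\equiv 1 \pmod{p_i}$, so $n\equiv a_i \pmod{m_i}$ forces $2^n\equiv 2^{a_i} \pmod{p_i}$. A quick check (the range bound is mine):

```python
# (a_i, m_i, p_i) taken from the table above.
table = [(1, 2, 3), (2, 4, 5), (4, 8, 17), (8, 16, 257),
         (16, 32, 65537), (0, 64, 641), (32, 64, 6700417)]

# (i) the congruence classes a_i (mod m_i) cover all exponents n >= 1.
covers = all(any(n % m == a for a, m, _ in table) for n in range(1, 10**4))

# (ii) the order of 2 modulo p_i divides m_i.
orders = all(pow(2, m, p) == 1 for _, m, p in table)
```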
This argument comes from the paper in which Erdos introduced covering congruences to an unsuspecting world. – Gerry Myerson Jan 5 '14 at 16:19
This made me wonder: does anyone actually know the smallest odd positive number that cannot be written in the form $2^a-p$? Using covering congruences I can prove that 509203 cannot be written in the form $2^a-p$. But I suspect there might be smaller numbers that cannot be written in the form $2^a-p$. If a number is not of the form $2^a-p$, can one always use a covering congruences argument to prove this? – Maarten Derickx Jan 6 '14 at 10:19
|
|
# For the love of God somebody save me!
1. Feb 28, 2007
### shwanky
Ok, so it's my first semester of Calculus and I'm completely lost. My professor is this old Korean guy who doesn't speak English and I'm in desperate need of some understanding.
1. |x| = x if x >= 0, and -x if x < 0
lim x->a |x| does not exist.
My first question is: can the limit of an absolute value function ever exist? I understand the mechanics, just not the concept :(.
2. The Squeeze Theorem
If f(x) <= g(x) <= h(x) when x is near a (except possibly at a) and
lim x->a f(x) = lim x->a h(x) = L
then
lim x->a g(x) = L
Question: Prove that lim x->0+ sqrt(x)[1 + sin^2(2PI/x)] = 0.
Sorry for my semi-broken representation. It's a mixture of C++ syntax and algebra... I don't think it works so well, but I hope it can be understood. Anyway, I don't even know where to begin setting up the proof...
Anyone have any tips that might help? I'm going to search the net and try to find something that can help.
Shwank
2. Feb 28, 2007
### mattmns
First, you should probably post a question like this in the homework section, since it will get more responses there, and also that is where homework is supposed to be posted (since I am guessing it is homework, or at least feels like it).
For question 1 I am not sure what you are trying to do.
|x| is continuous and thus $\lim_{x\to a}|x| = |a|$. So I guess to answer your question, yes the limit of the absolute value function can exist, and it exists for all $a \in \mathbb{R}$.
For the second question you want to find a function that is below sqrt(x)[1 + sin^2(2PI/x)] and one that is above. Of course you want each of these functions such that the limit as x->0+ is 0. So try to think of some functions (ones that you can easily compute the lim as x->0+ for) and verify that the lower function is below sqrt(x)[1 + sin^2(2PI/x)] and that the upper function is above it.
Could you take the lower function as the zero function? That is, is 0 <= sqrt(x)[1 + sin^2(2PI/x)] for all x? And does the lim x->0+ of 0 = 0? Now try to find a function that is above, show that it is above for all x, and that the limit x->0+ of that function is 0.
Last edited: Feb 28, 2007
3. Feb 28, 2007
### JonF
Think of these few examples:
Does the limit of |x^2| as x approaches 0 exist?
Does the limit of |x| as x approaches 1 exist?
Let f(x) = sin(x) if x>0, -sin(x) if x<0, and 0 if sin(x) = 0
Does the limit of |f(x)| as x approaches 0 exist?
Can you think of a function g(x), where g(x) is greater than x^(1/2)[1 + sin^2(2PI/x)] for all x and as x approaches 0 from the right g(x) tends towards 0?
Can you think of a function h(x), where h(x) is less than x^(1/2)[1 + sin^2(2PI/x)] for all x and as x approaches 0 from the right h(x) tends towards 0?
**hint** what are the maximal and minimal values of sin^2(2PI/x)?
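Putting the hints together (spelling out the bound they point toward): since $0 \le \sin^2(2\pi/x) \le 1$, for $x > 0$ we get the squeeze
$$0 \;\le\; \sqrt{x}\,\bigl[1 + \sin^2(2\pi/x)\bigr] \;\le\; 2\sqrt{x},$$
and since $\lim_{x\to 0^+} 0 = \lim_{x\to 0^+} 2\sqrt{x} = 0$, the squeeze theorem gives $\lim_{x\to 0^+} \sqrt{x}\,\bigl[1 + \sin^2(2\pi/x)\bigr] = 0$.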
4. Mar 1, 2007
### shwanky
Thank you both for the help. :)... With your hints, and help from tutors at school I was able to understand.
|
|
# Fixed Point in Python
Suppose we have an array A of unique integers sorted in ascending order; we have to return the smallest index i that satisfies A[i] == i, or -1 if no such i exists. So if the array is [-10,-5,0,3,7], then the output will be 3, as A[3] = 3.
To solve this, we will follow these steps −
• For i in range 0 to length of A
• if i = A[i], then return i
• return -1
## Example(Python)
Let us see the following implementation to get a better understanding −
class Solution(object):
    def fixedPoint(self, A):
        for i in range(len(A)):
            if i == A[i]:
                return i
        return -1
ob1 = Solution()
print(ob1.fixedPoint([-10,-5,0,3,7]))
## Input
[-10,-5,0,3,7]
## Output
3
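Since the array is sorted and its integers are unique, A[i] - i never decreases as i grows, so the smallest fixed point can also be found by binary search in O(log n) rather than the linear scan above. A sketch (this variant is not part of the original solution):

```python
def fixed_point_bsearch(A):
    """Smallest i with A[i] == i, or -1. Assumes A is sorted with unique
    integers, so A[i] - i is non-decreasing in i."""
    lo, hi = 0, len(A) - 1
    ans = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] >= mid:
            if A[mid] == mid:
                ans = mid       # record, then keep searching left
            hi = mid - 1
        else:
            lo = mid + 1        # any fixed point must lie to the right
    return ans
```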
Published on 27-Feb-2020 06:54:47
|
|
## Can a 'free' Bitcoin 'marketing' ad be checked in any way
0
Total Bitcoin newbie here. I have no bitc, nor a wallet, nor am I sure I want one. I've been intrigued for a while, but security is a big passion of mine...
Today just now youtube pointed me to an interesting lecture by Bill Gates and others. [1] Apparently it was at the SF, CA exploratorium in the 'morning', but that would not fit with the current timezone if it is live. (That part just came up at the end of the Bill Gates section a few minutes ago.)
This video is supposedly by Microsoft Europe - but the author has only this video.
There was a big picture on the screen while Mr. Gates was speaking at the "Village Global" event. It advertised free bitcoin if you send in bitcoin, eg they will send you back 10x what you send in from 0.1 to 20 Bitcoins.
Online I looked that up [2] and found this:
Total Sent
0.00000000 BTC
I believe that indicates the ad is definitely a scam. Did I get it right???
Are there other ways to check for scammers that I may not know? Thank you so much!
[2] Blockchain Explorer - Search the Blockchain | BTC | ETH | BCH: https://www.blockchain.com/btc/address/1w1AQvpK4ixo9YQ3KHFYi4BgKHuDqg1hR
– AnneTheAgile – 2020-03-29T01:06:13.407
The Village Global event with Gates seems to have taken place some time ago, perhaps June 2019 or November 2018. It certainly couldn't be happening live in San Francisco right now, as that city is under a shelter-in-place order due to coronavirus. There is no way that YouTube account has anything to do with the Microsoft corporation. – Nate Eldredge – 2020-03-29T03:34:54.587
Their address currently has had 8 transactions sent to it. On the one hand, this may unfortunately mean that 8 people have fallen for the scam. On the other hand, it proves that their chat window, which has "people" posting every few seconds about how they got money from the scheme, is just fake bots. – Nate Eldredge – 2020-03-29T04:09:06.577
2
Yes, this is very obviously a scam.
For some reason, some people seem to lose common sense when Bitcoin (or cryptocurrency in general) is involved. So here is a really quick way to check if an offer sounds legit: consider the same situation, but using USD (or your local currency of choice). For example:
It advertised free dollars if you send in dollars, eg they will send you back 10x what you send in from $500 to $100,000!
Does this sound believable? What about:
It advertised free bitcoin if you send in bitcoin, eg they will send you back 10x what you send in from 0.1 to 20 Bitcoins.
(At the time of writing, 1 BTC is worth approximately $5K USD.)
Ask yourself: what business would do well to give away $100K USD worth of anything to random people on the internet?
If it sounds too good to be true it probably is.
|
|
# zbMATH — the first resource for mathematics
Bifurcations of periodic solutions of delay differential equations. (English) Zbl 1027.34081
The author extends the method of {\it J. L. Kaplan} and {\it J. A. Yorke} [J. Differ. Equations 23, 293-314 (1977; Zbl 0307.34070)] to prove the existence of periodic solutions with certain period in scalar delay differential equations of the type $\dot x(t)= F(x(t), x(t-r), x(t-2r))$, where $F$ satisfies the relation $F(x,y,-x)=-F(-x,-y,x)$. For $F$ depending on parameters, the paper gives conditions under which Hopf and saddle-node bifurcations of periodic solutions occur. Moreover, the author provides examples showing that Hopf and saddle-node bifurcations often occur infinitely many times.
##### MSC:
34K18 Bifurcation theory of functional differential equations 34K13 Periodic solutions of functional differential equations
Full Text:
##### References:
[1] Chen, Y.: The existence of periodic solutions of the equation $\dot{x}(t) = -f(x(t), x(t-\tau))$. J. Math. Anal. Appl. 163, 227-237 (1993) · Zbl 0755.34063
[2] P. Dormayer, A.F. Ivanov, Symmetric periodic solutions of a delay differential equation, in: Dynamical Systems and Differential Equations, Vol. 1 (1998) (added volume to Discrete Continuous Dyn. Systems), Southwest Missouri State University, Springfield, pp. 220-230. · Zbl 1304.34119
[3] Ge, W.: Existence of many and infinitely many periodic solutions for some types of differential delay equations. J. Beijing Inst. Technol. 1, 5-14 (1993) · Zbl 0807.34083
[4] Gopalsamy, K.; Li, J.; He, X.: On the construction of Kaplan-Yorke type for some differential delay equations. Appl. Anal. 59, 65-80 (1995) · Zbl 0845.34073
[5] Guckenheimer, J.; Holmes, P.: Nonlinear oscillations, dynamical systems and bifurcation of vector fields. (1983) · Zbl 0515.34001
[6] Hale, J.: Theory of functional differential equations. (1977) · Zbl 0352.34001
[7] M. Han, On the existence of symmetric periodic solutions of a differential difference equation, Differential Equations and Control Theory, Lecture Notes in Pure and Applied Math. Series, Vol. 176, Marcel Dekker, New York, 1995, pp. 73-77.
[8] Jones, G. S.: Periodic motions in Banach space and application to functional differential equations. Contrib. Differential Equations 3, 75-106 (1964)
[9] Kaplan, J.; Yorke, J.: Ordinary differential equations which yield periodic solutions of differential delay equations. J. Math. Anal. Appl. 48, 317-324 (1974) · Zbl 0293.34102
[10] Kaplan, J.; Yorke, J.: On the nonlinear differential delay equation $\dot{x}(t) = -f(x(t), x(t-1))$. J. Differential Equations 23, 293-314 (1977) · Zbl 0307.34070
[11] Saupe, D.: Global bifurcation of periodic solutions to some autonomous differential delay equations. Appl. Math. Comput. 13, 185-211 (1983) · Zbl 0522.34067
[12] Wang, K.: On the existence of nontrivial periodic solutions of differential difference equations. Chinese Ann. Math. 11B, 438-444 (1990) · Zbl 0729.34047
[13] Wen, L.: The existence of periodic solutions for a class of differential difference equations. Chinese Ann. Math. 10A, 289-294 (1989)
|
{}
|
# Pipeline Parameter
Written by Jerry Ratzlaff on . Posted in Dimensionless Numbers
The pipeline parameter, abbreviated as $$\rho$$, is a dimensionless number that compares the potential water hammer pressure rise to the static pressure in the line.
## Pipeline Parameter formula
$$\large{ \rho = \frac {v\;v_i} {2\;g\;h_s} }$$
| Symbol | Quantity | English | Metric |
| --- | --- | --- | --- |
| $$\large{ \rho }$$ (Greek symbol rho) | pipeline parameter | dimensionless | dimensionless |
| $$\large{ g }$$ | gravitational acceleration | $$\large{\frac{ft}{sec^2}}$$ | $$\large{\frac{m}{s^2}}$$ |
| $$\large{ v_i }$$ | initial velocity | $$\large{\frac{ft}{sec}}$$ | $$\large{\frac{m}{s}}$$ |
| $$\large{ v }$$ | wave velocity | $$\large{\frac{ft}{sec}}$$ | $$\large{\frac{m}{s}}$$ |
| $$\large{ h_s }$$ | static head | $$\large{ft}$$ | $$\large{m}$$ |
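A minimal sketch of the formula above in code (the function name and the sample values are invented for this illustration):

```python
def pipeline_parameter(wave_velocity, initial_velocity, static_head, g=9.81):
    """Dimensionless pipeline parameter: rho = (v * v_i) / (2 * g * h_s)."""
    return (wave_velocity * initial_velocity) / (2.0 * g * static_head)

# Illustrative values: 1200 m/s wave speed, 2 m/s initial flow, 100 m static head.
rho = pipeline_parameter(1200.0, 2.0, 100.0)
print(round(rho, 3))  # 1.223
```

Because the formula is a ratio of like quantities, the result is the same in any consistent unit system (English units with g = 32.2 ft/sec² work equally well).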
|
{}
|
## 9.15 Normal extensions
Let $P \in F[x]$ be a nonconstant polynomial over a field $F$. We say $P$ splits completely into linear factors over $F$ or splits completely over $F$ if there exist $c \in F^*$, $n \geq 1$, $\alpha _1, \ldots , \alpha _ n \in F$ such that
$P = c(x - \alpha _1) \ldots (x - \alpha _ n)$
in $F[x]$. Normal extensions are defined as follows.
Definition 9.15.1. Let $E/F$ be an algebraic field extension. We say $E$ is normal over $F$ if for all $\alpha \in E$ the minimal polynomial $P$ of $\alpha$ over $F$ splits completely into linear factors over $E$.
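A standard example, added here for illustration: the extension $\mathbb{Q}(\sqrt[3]{2})/\mathbb{Q}$ is not normal, because the minimal polynomial

$x^3 - 2$

of $\sqrt[3]{2}$ has only one root in $\mathbb{Q}(\sqrt[3]{2})$ (the other two roots are non-real, while $\mathbb{Q}(\sqrt[3]{2}) \subset \mathbb{R}$), so it does not split completely there. By contrast, the splitting field $\mathbb{Q}(\sqrt[3]{2}, \zeta_3)$, with $\zeta_3$ a primitive cube root of unity, is normal over $\mathbb{Q}$, as follows from Lemma 9.15.6.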
As in the case of separable extensions, it takes a bit of work to establish the basic properties of this notion.
Lemma 9.15.2. Let $K/E/F$ be a tower of algebraic field extensions. If $K$ is normal over $F$, then $K$ is normal over $E$.
Proof. Let $\alpha \in K$. Let $P$ be the minimal polynomial of $\alpha$ over $F$. Let $Q$ be the minimal polynomial of $\alpha$ over $E$. Then $Q$ divides $P$ in the polynomial ring $E[x]$, say $P = QR$. Hence, if $P$ splits completely over $K$, then so does $Q$. $\square$
Lemma 9.15.3. Let $F$ be a field. Let $M/F$ be an algebraic extension. Let $F \subset E_ i \subset M$, $i \in I$ be subextensions with $E_ i/F$ normal. Then $\bigcap E_ i$ is normal over $F$.
Proof. Direct from the definitions. $\square$
Lemma 9.15.4. Let $E/F$ be a normal algebraic field extension. Then the subextension $E/E_{sep}/F$ of Lemma 9.14.6 is normal.
Proof. If the characteristic is zero, then $E_{sep} = E$, and the result is clear. If the characteristic is $p > 0$, then $E_{sep}$ is the set of elements of $E$ which are separable over $F$. Then if $\alpha \in E_{sep}$ has minimal polynomial $P$ write $P = c(x - \alpha )(x - \alpha _2) \ldots (x - \alpha _ d)$ with $\alpha _2, \ldots , \alpha _ d \in E$. Since $P$ is a separable polynomial and since $\alpha _ i$ is a root of $P$, we conclude $\alpha _ i \in E_{sep}$ as desired. $\square$
Lemma 9.15.5. Let $E/F$ be an algebraic extension of fields. Let $\overline{F}$ be an algebraic closure of $F$. The following are equivalent
1. $E$ is normal over $F$, and
2. for every pair $\sigma , \sigma ' \in \mathop{Mor}\nolimits _ F(E, \overline{F})$ we have $\sigma (E) = \sigma '(E)$.
Proof. Let $\mathcal{P}$ be the set of all minimal polynomials over $F$ of all elements of $E$. Set
$T = \{ \beta \in \overline{F} \mid P(\beta ) = 0\text{ for some }P \in \mathcal{P}\}$
It is clear that if $E$ is normal over $F$, then $\sigma (E) = T$ for all $\sigma \in \mathop{Mor}\nolimits _ F(E, \overline{F})$. Thus we see that (1) implies (2).
Conversely, assume (2). Pick $\beta \in T$. We can find a corresponding $\alpha \in E$ whose minimal polynomial $P \in \mathcal{P}$ annihilates $\beta$. Because $F(\alpha ) = F[x]/(P)$ we can find an element $\sigma _0 \in \mathop{Mor}\nolimits _ F(F(\alpha ), \overline{F})$ mapping $\alpha$ to $\beta$. By Lemma 9.10.5 we can extend $\sigma _0$ to a $\sigma \in \mathop{Mor}\nolimits _ F(E, \overline{F})$. Whence we see that $\beta$ is in the common image of all embeddings $\sigma : E \to \overline{F}$. It follows that $\sigma (E) = T$ for any $\sigma$. Fix a $\sigma$. Now let $P \in \mathcal{P}$. Then we can write
$P = (x - \beta _1) \ldots (x - \beta _ n)$
for some $n$ and $\beta _ i \in \overline{F}$ by Lemma 9.10.2. Observe that $\beta _ i \in T$. Thus $\beta _ i = \sigma (\alpha _ i)$ for some $\alpha _ i \in E$. Thus $P = (x - \alpha _1) \ldots (x - \alpha _ n)$ splits completely over $E$. This finishes the proof. $\square$
Lemma 9.15.6. Let $E/F$ be an algebraic extension of fields. If $E$ is generated by $\alpha _ i \in E$, $i \in I$ over $F$ and if for each $i$ the minimal polynomial of $\alpha _ i$ over $F$ splits completely in $E$, then $E/F$ is normal.
Proof. Let $P_ i$ be the minimal polynomial of $\alpha _ i$ over $F$. Let $\alpha _ i = \alpha _{i, 1}, \alpha _{i, 2}, \ldots , \alpha _{i, d_ i}$ be the roots of $P_ i$ over $E$. Given two embeddings $\sigma , \sigma ' : E \to \overline{F}$ over $F$ we see that
$\{ \sigma (\alpha _{i, 1}), \ldots , \sigma (\alpha _{i, d_ i})\} = \{ \sigma '(\alpha _{i, 1}), \ldots , \sigma '(\alpha _{i, d_ i})\}$
because both sides are equal to the set of roots of $P_ i$ in $\overline{F}$. The elements $\alpha _{i, j}$ generate $E$ over $F$ and we find that $\sigma (E) = \sigma '(E)$. Hence $E/F$ is normal by Lemma 9.15.5. $\square$
Lemma 9.15.7. Let $L/M/K$ be a tower of algebraic extensions.
1. If $M/K$ is normal, then any automorphism $\tau$ of $L/K$ induces an automorphism $\tau |_ M : M \to M$.
2. If $L/K$ is normal, then any $K$-algebra map $\sigma : M \to L$ extends to an automorphism of $L$.
Proof. Choose an algebraic closure $\overline{L}$ of $L$ (Theorem 9.10.4).
Let $\tau$ be as in (1). Then $\tau (M) = M$ as subfields of $\overline{L}$ by Lemma 9.15.5 and hence $\tau |_ M : M \to M$ is an automorphism.
Let $\sigma : M \to L$ be as in (2). By Lemma 9.10.5 we can extend $\sigma$ to a map $\tau : L \to \overline{L}$, i.e., such that
$\xymatrix{ L \ar[r]_\tau & \overline{L} \\ M \ar[u] \ar[ru]_\sigma & K \ar[l] \ar[u] }$
is commutative. By Lemma 9.15.5 we see that $\tau (L) = L$. Hence $\tau : L \to L$ is an automorphism which extends $\sigma$. $\square$
Definition 9.15.8. Let $E/F$ be an extension of fields. Then $\text{Aut}(E/F)$ or $\text{Aut}_ F(E)$ denotes the automorphism group of $E$ as an object of the category of $F$-extensions. Elements of $\text{Aut}(E/F)$ are called automorphisms of $E$ over $F$ or automorphisms of $E/F$.
Here is a characterization of normal extensions in terms of automorphisms.
Lemma 9.15.9. Let $E/F$ be a finite extension. We have
$|\text{Aut}(E/F)| \leq [E : F]_ s$
with equality if and only if $E$ is normal over $F$.
Proof. Choose an algebraic closure $\overline{F}$ of $F$. Recall that $[E : F]_ s = |\mathop{Mor}\nolimits _ F(E, \overline{F})|$. Pick an element $\sigma _0 \in \mathop{Mor}\nolimits _ F(E, \overline{F})$. Then the map
$\text{Aut}(E/F) \longrightarrow \mathop{Mor}\nolimits _ F(E, \overline{F}),\quad \tau \longmapsto \sigma _0 \circ \tau$
is injective. Thus the inequality. If equality holds, then every $\sigma \in \mathop{Mor}\nolimits _ F(E, \overline{F})$ is gotten by precomposing $\sigma _0$ by an automorphism. Hence $\sigma (E) = \sigma _0(E)$. Thus $E$ is normal over $F$ by Lemma 9.15.5.
Conversely, assume that $E/F$ is normal. Then by Lemma 9.15.5 we have $\sigma (E) = \sigma _0(E)$ for all $\sigma \in \mathop{Mor}\nolimits _ F(E, \overline{F})$. Thus we get an automorphism of $E$ over $F$ by setting $\tau = \sigma _0^{-1} \circ \sigma$. Whence the map displayed above is surjective. $\square$
Lemma 9.15.10. Let $L/K$ be an algebraic normal extension of fields. Let $E/K$ be an extension of fields. Then either there is no $K$-embedding from $L$ to $E$ or there is one $\tau : L \to E$ and every other one is of the form $\tau \circ \sigma$ where $\sigma \in \text{Aut}(L/K)$.
Proof. Given $\tau$ replace $L$ by $\tau (L) \subset E$ and apply Lemma 9.15.7. $\square$
|
{}
|
What is the correct answer?
4
# Demand is elastic when the coefficient of elasticity is:
greater than zero
less than one
greater than one
less than one
|
{}
|
• Create Account
### #ActualKhatharr
Posted 31 January 2013 - 05:07 AM
If you have a zbuffer (aka depth buffer) then whenever you render something the following steps are taken on a per-pixel basis:
new_z = // z value for this pixel based on what you're rendering
depth_z = // z value for that pixel in the zbuffer
if(depth_z > new_z) { // this test is configurable - it's called the 'depth test' (see MSDN)
    depth_z = new_z
    pixel_color = new_pixel_color
}
else {
    // keep the existing depth and color
}
Unless you're doing some special effects with the buffer you typically want to clear it every frame before drawing. The common use is simply to make sure that if you draw something and then draw another thing that overlaps the same pixels the correct pixel colors will be 'on top'.
http://en.wikipedia.org/wiki/Z-buffering
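As a concrete toy version of the steps above (plain Python standing in for the per-pixel hardware; all names are invented for the example):

```python
H, W = 4, 4
INF = float("inf")
zbuffer = [[INF] * W for _ in range(H)]            # cleared each frame: everything "infinitely far"
framebuffer = [[(0, 0, 0)] * W for _ in range(H)]  # pixel colors

def draw_pixel(x, y, new_z, color):
    """Per-pixel depth test: the new fragment wins only if it is closer (smaller z)."""
    if new_z < zbuffer[y][x]:
        zbuffer[y][x] = new_z
        framebuffer[y][x] = color
    # else: keep the existing depth and color

draw_pixel(1, 1, 5.0, "red")    # far
draw_pixel(1, 1, 2.0, "blue")   # nearer, so it replaces red
draw_pixel(1, 1, 9.0, "green")  # farther, so the depth test rejects it
print(framebuffer[1][1])  # blue
```

Draw order stops mattering: whatever is closest at each pixel ends up "on top", which is exactly why the buffer is cleared before each frame.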
|
{}
|
# Applying Econometrics to Monetary Economics
I've been given a problem in which I am told that GDP is a function of the supply of money, prices and demand,
$$Y = (M/P)^\delta$$
Taking logs (with lower-case letters denoting logged variables, $$m = \log M$$ and $$p = \log P$$), $$\log(Y) = \alpha + \beta (m - p) + u$$
Differencing the series, with prices held constant so that $$\Delta p = 0$$, $$\Delta \log(Y) = \beta \, \Delta m + \Delta u$$ (the constant $$\alpha$$ drops out under differencing).
My question is, I don't know how to interpret the term $$\beta \, \Delta m$$. The difference operator applies to the series, not to the coefficient, i.e. $$\Delta m = m_t - m_{t-1}$$. The series is logged because we are interested in analysing the growth rate. Surely, then, $$\Delta m$$ is just the growth rate of the money supply, scaled by the coefficient $$\beta$$?
In our problem set we are given a number of variables in the data set:
M0: Growth rate of total notes and coins in circulation outside of the central bank
M4: Growth rate of monetary financial institutions' net lending to private sector.
Is the growth in money supply just the sum of these two variables?
How can there be a growth in notes and coins outside of the central bank, if it is the central bank controlling the supply?
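As a quick illustration of the differencing step (made-up data; the names are only for this sketch): the first difference of a logged series approximates its period-to-period growth rate.

```python
import math

M = [100.0, 102.0, 104.04, 106.12]                 # made-up money-supply levels, ~2% growth
log_m = [math.log(x) for x in M]
dlog = [b - a for a, b in zip(log_m, log_m[1:])]   # Δ log(M) ≈ growth rate per period
print([round(g, 4) for g in dlog])  # [0.0198, 0.0198, 0.0198]
```

So a series already reported as a "growth rate" (like M0 and M4 above) is, to first order, the differenced log of the underlying level.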
• If you take logs, don't you get $log(Y) = \delta[log(M) - log(P)]$? – user17900 Mar 31 at 9:32
|
{}
|
# In what ratio is the line segment joining the points (−2, −3) and (3, 7) divided by the y-axis?
1. (2 : 3)
2. (3 : 0)
3. (−2 : 3)
4. (6 : 0)
## Answer (Detailed Solution Below)
Option 1 : (2 : 3)
## Detailed Solution
Concept:
Let P and Q be the given two points (x1, y1) and (x2, y2) respectively, and M be the point dividing the line segment PQ internally in the ratio m : n, then from the section formula, the coordinate of the point M is given by:
$$M(x, y) = \left \{ \left ( \frac{mx_2+nx_1}{m+n} \right ), \left ( \frac{my_2+ny_1}{m+n} \right ) \right \}$$
Calculation:
Let P be the point on the y-axis that divides the line segment joining A(−2, −3) and B(3, 7) in the ratio k : 1.
Since P lies on the y-axis, its coordinates are of the form (0, y).
Now, using the section formula and equating the x-coordinates, we get
$$0 = \frac{3k - 2}{k+1}$$
⇒ 3k - 2 = 0
⇒ k = 2/3
∴ k : 1 = 2 : 3
Hence, the required ratio is 2 : 3.
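The calculation above can be double-checked numerically with the section formula, using exact rational arithmetic (the helper name is invented for this sketch):

```python
from fractions import Fraction

def section_point(p, q, m, n):
    """Point dividing segment PQ internally in the ratio m : n (section formula)."""
    (x1, y1), (x2, y2) = p, q
    return (Fraction(m * x2 + n * x1, m + n), Fraction(m * y2 + n * y1, m + n))

A, B = (-2, -3), (3, 7)
P = section_point(A, B, 2, 3)  # ratio 2 : 3
print(P)  # (Fraction(0, 1), Fraction(1, 1)), i.e. (0, 1) on the y-axis
```

The x-coordinate comes out to exactly 0, confirming that the 2 : 3 ratio puts the dividing point on the y-axis (at (0, 1)).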
|
{}
|
# Geometry and Topology Seminar
## Warwick Mathematics Institute, Term II, 2016-2017
Thursday January 19, 16:00, room B3.02 Dusa McDuff (Columbia) Symplectic topology today Colloquium: This talk will explain the basics of symplectic topology for a nonspecialist audience, outlining some of the classical results as well as some problems that are currently open.
Thursday January 26, 15:00, room MS.03 Marc Burger (ETH Zuerich) Abstract: Given a regular tree T of valency n and a permutation group F on n elements, one can associate the closed subgroup U(F) of those automorphisms of T which act locally like elements of F. These groups have been introduced in joint work with S.Mozes in the context of the study of lattices in the product of two trees. A major open problem is the characterization of F such that U(F) is the closure of the projection of a cocompact lattice in the product of two regular trees TxT'. A necessary condition is that open compact subgroups of U(F) be topologically finitely generated; in this talk we'll report on joint work with S. Mozes characterizing the F's for which U(F) has his property.
Thursday February 2, 15:00, room MS.03 Nicolas Tholozan (ENS Paris) $\mathrm{PSL}(2,\mathbb{R})$-Representations of the fundamental group of the punctured sphere Abstract: A famous question of Bowditch asks whether a (type-preserving) representation of the fundamental group of a (punctured) surface into $\mathrm{PSL}(2,\mathbb{R})$ which is not Fuchsian must send a simple closed curve to an elliptic or parabolic element. In this talk, I will show that some interesting representations of the fundamental group of the $n$-punctured sphere have an even stronger property : they map \emph{every} simple closed curve to an elliptic or parabolic element. We will show that these representations form compact components of relative character varieties which are symplectomorphic to $\mathbb{C} \mathbf{P}^{n-3}$ with a multiple of the Fubini--Study symplectic form. This is a joint work with Bertrand Deroin.
Thursday February 9, 15:00, room MS.03 Ingo Blechschmidt (Augsburg) A leisurely introduction to synthetic differential geometry Abstract: A tangent vector is an infinitesimal piece of a curve." Pictures likes this are routinely used when thinking informally about differential geometry, but they are not literally true in the usual setup. Synthetic differential geometry, iniated by Anders Kock, provides a different approach to differential geometry in which such statements are literally true. This is accomplished by switching to an alternate topos, a mathematical universe, in which the real numbers contain infinitesimal elements -- numbers $\varepsilon$ such that $\varepsilon \neq 0$ but $\varepsilon^2 = 0$. In this way it's possible to interpret, for instance, some of Sophus Lie's writings literally. Additionally synthetic differential geometry enables some new modes of thought. Differential forms, for example, are classically functionals on tangent vectors. In synthetic differential geometry, it's also possible to think about differential forms as quantities, that is functions defined on the manifold. Ideas related to synthetic differential geometry recently allowed Oliver Fabert to prove a version of the Arnold conjecture in infinite dimensions. The talk gives a leisurely introduction to synthetic differential geometry, presenting the synthetic definitions and establishing the link to the usual setup of differential geometry. No prior knowledge about toposes or formal logic is supposed.
Thursday February 23, 15:00, room MS.03 Ashot Minasyan (Southampton) Abstract: Let $\mathcal{C}$ be a class of groups (e.g., all finite groups, all $p$-groups, etc.). A group $G$ is said to be {\it residually-$\mathcal C$} if for any two distinct elements $x,y \in G$ there is a group $M \in \mathcal{C}$ and a homomorphism $\varphi: G \to M$ such that $\varphi(x) \neq \varphi(y)$ in $M$. Similarly, $G$ is $\mathcal C$-conjugacy separable if for any two non-conjugate elements $x,y \in G$ there is $M \in \mathcal{C}$ and a homomorphism $\varphi:G \to M$ such that $\varphi(x)$ is not conjugate to $\varphi(y)$ in $M$. When $\mathcal{C}$ is the class of all finite groups, the above \emph{residual properties} are closely related to the two main decision problems in groups: the word problem and the conjugacy problem. In the talk I will discuss $\mathcal C$-conjugacy separability of subdirect products $G \leq F_1 \times F_2$, where $F_i$, $i=1,2$, are either free or hyperbolic (in the sense of Gromov). Recall that a subgroup $G$ of a direct product of two groups $F_1\times F_2$ is said to be \emph{subdirect} if $G$ projects onto each of the coordinate groups $F_1,F_2$. In this case, it is easy to see that $N_1:=F_1 \cap G$ is a normal subgroup of $F_1$. Classically, in Combinatorial Group Theory, sudirect (fibre) products have been used to produce examples of groups with exotic properties. The standard underlying idea, originating from Mihajlova's trick, is that ''bad'' properties of the quotient $F_1/N_1$ transfer to ''less bad'' properties of the subdirect product $G$, and ''good'' properties of $F_1/N_1$ give rise to ''even better'' properties of $G$. Following this philosophy, we will prove that if $F_1/N_1$ is not residually-$\mathcal C$ then $G$ is not $\mathcal{C}$-conjugacy separable; on the other hand, if $F_i$, $i=1,2$, are free and all cyclic subgroups of $F_1/N_1$ are closed in the pro-$\mathcal{C}$-topology, then $G$ is $\mathcal{C}$-conjugacy separable. 
These criteria can be used to produce examples of subdirect products of free/hyperbolic groups which are conjugacy separable, but have non-conjugacy separable subgroups of finite index and vice-versa. Other applications will also be discussed.
Thursday March 2, 15:00, room MS.03 Henry Wilton (Cambridge) Surface subgroups of graphs of free groups Abstract: A well known question, usually attributed to Gromov, asks whether every hyperbolic group is either virtually free or contains a surface subgroup. I’ll discuss some recent progress on this problem for a the class of groups in the title.
Thursday March 9, 15:00, room MS.03 Saul Schleimer (Warwick) Circular orderings from veering triangulations Abstract: This is joint work with Henry Segerman. Suppose that (M,T) is a cusped hyperbolic three-manifold equipped with a veering triangulation. We show that there is a unique circular order on the cusps of the universal cover of M, which is compatible with T. After giving the necessary background and sketching the proof, I will speculate wildly about possible applications.
Thursday March 16, 15:00, room MS.03 Marco Golla (Uppsala) Abstract:
Information on past talks.
|
{}
|
# Non Recursively Enumerable Languages
Can someone give me an example of a non-recursively-enumerable language, i.e. a language which no Turing machine can accept? What makes a language non-recursively enumerable?
• I recommend looking at online lecture notes on computability theory. Jan 31 '16 at 9:18
• This is a general reference question. Wikipedia contains examples, and it's quite clear (assuming some mastery of the fundamentals) that the complements of all undecidable yet semi-decidable languages are examples as well. So I don't see how this question should be asked in this form. Community votes, please: is this unclear?
– Raphael
Jan 31 '16 at 11:14
• " What makes a language non recursively enumerable ? " -- what kind of answer are you looking for? They are, or they are not.
– Raphael
Jan 31 '16 at 11:15
• Also asked as a subquestion of: cs.stackexchange.com/questions/12747/… Dec 26 '20 at 11:19
## 2 Answers
Every undecidable yet semi-decidable language provides an example: its complement.
That is because if $L$ and $\overline{L}$ are both semi-decidable, they are also both decidable -- proving that is an easy exercise.
You should know at least one such language from class.
The Halting language, and probably numerous others from exercise problems.
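The "easy exercise" mentioned above can be sketched in executable form. The toy semi-deciders below take a step budget and report whether they have accepted within that many steps (this step-budget interface and all names are illustrative, not from the source); dovetailing the two yields a decider:

```python
import itertools

def decide(x, semi_L, semi_co_L):
    """Dovetail two semi-deciders, running each with ever larger step budgets.
    If semi_L semi-decides L and semi_co_L semi-decides the complement of L,
    exactly one of them eventually accepts, so this always terminates."""
    for steps in itertools.count(1):
        if semi_L(x, steps):
            return True   # x is in L
        if semi_co_L(x, steps):
            return False  # x is in the complement of L

# Toy example: L = even numbers. Each semi-decider "accepts" only once its
# step budget reaches x, and never accepts on non-members.
def semi_even(x, steps):
    return x % 2 == 0 and steps >= x

def semi_odd(x, steps):
    return x % 2 == 1 and steps >= x

print(decide(10, semi_even, semi_odd))  # True
print(decide(7, semi_even, semi_odd))   # False
```

The key point is that neither semi-decider alone is required to halt on rejection; it is running both interleaved that guarantees termination.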
An example of a language which is not recursively enumerable is the language $L$ of all descriptions of Turing machines which don't halt on the empty input. We know that $\overline{L}$ is recursively enumerable (exercise) while $L$ is not recursive (this is Turing's classical result), so it follows that $L$ is not recursively enumerable.
For a deeper look, we have to delve into the arithmetical hierarchy. We can write $L$ symbolically as follows: $$L = \{ \langle M \rangle : \forall t \text{ M doesn't halt within t steps}\}.$$ The reason for expressing $L$ in this particular way is that the predicate $\phi(\langle M \rangle,t)$ which holds when $M$ doesn't halt within $t$ steps is recursive (exercise). So $$L = \{ \langle M \rangle : \forall t \phi(\langle M \rangle, t) \},$$ where $\phi$ is some computable predicate. This puts $L$ in the complexity class $\Pi_1$ (here $1$ means that there is one quantifier block, and $\Pi$ means that the first quantifier is $\forall$; in contrast $\overline{L} \in \Sigma_1$, since for $\overline{L}$ the first quantifier is $\exists$). Note that $\Sigma_1$ consists of all recursively enumerable languages, while $\Pi_1$ consists of all co-r.e. languages.
It so happens that $L$ is $\Pi_1$-complete. This means that for every language $A \in \Pi_1$ there is a computable reduction $f$ such that $x \in A$ iff $f(x) \in L$. I outline a proof below. Turing's argument implies that $\Pi_1 \neq \Sigma_1$, and in particular no $\Pi_1$-complete language (or more generally, no $\Pi_1$-hard language) can belong to $\Sigma_1$.
It remains to show that $L$ is $\Pi_1$-complete. Consider any language $A \in \Pi_1$, so $A = \{ x : \forall y \psi(x,y) \}$, where $\psi$ is computable. The function $f$ will construct a Turing machine that runs $\psi(x,y)$ on all values of $y$, and terminates if it finds a value of $y$ for which $\psi(x,y)$ is false. Thus $x \in A$ iff $f(x) \in L$.
• Third line: did you mean: while $\bar{L}$ is not recursive
– Upc
Feb 6 '19 at 4:16
• A language is recursive iff its complement is recursive. Feb 6 '19 at 4:47
|
{}
|
# Upon reflection
Geometry Level pending
Let $$y=ax+b$$ be the reflection of the line $$\ x+2y-4=0\$$ in $$\ x-y-2=0$$. What is the value of $$a+b$$?
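One way to sanity-check a problem like this (the helper and the sample points below are invented for the sketch): reflect two points of the given line across the mirror line and read off the slope and intercept of the image line.

```python
def reflect(p, q):
    """Reflect the point (p, q) across the line x - y - 2 = 0, i.e. y = x - 2."""
    d = (p - q - 2) / 2.0           # signed offset from the mirror line
    return (p - 2 * d, q + 2 * d)   # simplifies to (q + 2, p - 2)

# Two points on x + 2y - 4 = 0:
pts = [(0, 2), (4, 0)]
(x1, y1), (x2, y2) = [reflect(*pt) for pt in pts]
a = (y2 - y1) / (x2 - x1)           # slope of the reflected line
b = y1 - a * x1                     # intercept
print(a, b, a + b)  # -2.0 6.0 4.0
```

The reflection across y = x - 2 sends (p, q) to (q + 2, p - 2), so the image of x + 2y - 4 = 0 is y = -2x + 6, giving a + b = 4.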
|
{}
|
# Lahar
## February 20, 2007
### Fun with LaTeX. No, not that kind of Latex.
Filed under: Math,Uncategorized — by lahar @ 6:26 am
WordPress News: Math for the Masses
Awesome! We can have something simple, like the QG PV equation:
$\nabla^2 \phi = q$
where $\phi$ is the QG geopotential height and $q$ is the QG potential vorticity.
We can also do something more complex, like the full Navier-Stokes equations in a rotating frame with a potential body force with potential $\Phi$:
$\frac{\partial \mathbf{u}}{\partial t} + \left ( \mathbf{u} \cdot \nabla \right ) \mathbf{u} + 2 \mathbf{\Omega} \times \mathbf{u} = -\frac{1}{\rho}\nabla p - \nabla \Phi + \nu \nabla^2 \mathbf{u}$
Fun for several!
Filed under: Uncategorized — by lahar @ 6:04 am
ORDER OF THE SCIENCE SCOUTS OF EXEMPLARY REPUTE AND ABOVE AVERAGE PHYSIQUE
Since everyone’s doing it, I might as well follow suit. Unfortunately, as a junior proto-scientist in a small and somewhat theoretical field who doesn’t do work that gets his hands dirty, I don’t have too many of these which count, but at least it’s a start 🙂
Talking Science
MacGyver: For the time back when I worked at a well-known clothing retailer and one of my co-workers took a cash till key with her to Chicago, leaving me to jimmy the lock of the thing with a paperclip. (I know it’s a bit of a stretch, since it’s not strictly science-y.)
Special Auxiliary Child Member of the Order of the Science Scouts: Or, at least I used to be. It’s not like you can have the badges you earned a long time ago taken away, right?
I bet I know more computer languages than you, and I’m not afraid to talk about it: I’d list them, but I don’t feel like bragging right now.
I will crush you with my math prowess: After all, I’m the one who skips out on work to attend courses over in the pure mathematics department.
I’ve done science with no conceivable practical application: Well, lee vortices look pretty…
I’m a scientist who is fundamentally opposed to administrative duties: Probably mandatory.
I’m into telescopes astro, Level I: Another one from way back when I had a little telescope of my own.
Statistical linear regression: Hey, I know what that means! Haven’t used it in a while, though.
Any other takers?
## February 14, 2007
### A Brief History of Valentines Day
Filed under: Satire — by lahar @ 6:41 am
February certainly is downhill all the way, isn’t it? It starts out OK, with the Super Bowl and cool holidays like Darwin Day, but after a while the whole month just really gets to be a bear. The Rex Grossman kind, too, not the good Brian Urlacher kind. This is about the time of year that everyone starts getting tired of the snow and the ice and the cold, there aren’t any sports on but exciting mid-season basketball, and everyone’s waiting for Spring thaw and the beginning of Spring Training.
So of course the time of fastest descent happens in the middle of the month, right around the 14th. So about 1400 years ago a medieval monk and early psychologist decided it would be a great time to dub that day after himself, a rather dismal fellow most of the time, in honor of the whole month’s dreariness and despair. For this and other great deeds (including the invention of the first anti-depressant, also known as the frosted layer cake, as well as the first recorded use of the clue-by-four), he was given sainthood, and the day was henceforth named St Valentine’s Day.
This went well for several centuries, with annual celebrations coinciding with the low point of the year, until fate interfered in 1507. That year, while the festivities were under way, a Flemish merchant sold a large quantity of fresh flowers and a few hot new exports from the New World to a young man who was planning to feed the former to his goats and try to synthesize the first plastic from the latter. Unfortunately for all of us, as he was leaving the merchant’s store he came upon a comely young woman our goat farmer had known for some time, and in his awe accidentally dropped a few roses and other items at her feet. She was so taken by the apparent romantic gesture that the couple soon eloped, leaving the goats to starve.
The merchant, noticing the economic potential of selling otherwise lowly-regarded items as love tokens (at a large markup), soon started selling his stock to others who hoped to attract the objects of their affection, making a bundle in the process. This became so successful that not only did the flowers he was selling but also the imported items, a pulverized substance called chocolate and a resinous tree sap now known as latex, became symbols of romance throughout much of the world. The idea was quickly copied by many other flower sellers and importers, and since many of these people either had a large quantity of business sense (or a sick sense of irony), the biggest day for purchasing said items was Valentines Day, which was pitched as a way to try to alleviate the misery of the middle of winter, or at least get laid.
Over the years, the commercial appeal of the repurposed Valentines Day (with the apostrophe and ‘St.’ dropped in order to circumvent an intellectual property lawsuit from the Valentine estate, which had somehow continued to pay its lawyers for nearly a millennium) grew to the point where it has become a major holiday in the United States and other countries, and is worth billions of dollars to the economy.
However, there are those of us who have kept the original tradition of St Valentine alive, and not just by the liberal application of clue sticks to lusers. Today, we honor our fellow brethren, who recognize the true unluckiness of this day in the face of saccharine platitudes and the like.
First, we have the classic, Ron’s Anti-Valentine’s Day Wake, who not only sums up much of our sentiment, but also expresses his feelings mathematically.
In a more expository vein, here is i-Mockery’s takedown of this day, including the truth about Cupid. Mischievous doesn’t begin to describe the little winged punk.
Although the modern celebration of Valentines Day is predominantly an excuse to sell flowers and candy, the original, true St Valentine’s Day also indulged in a little gift-giving. This article in Digital Journal lists some up-to-date gifts to get in to the true spirit of the day. For the true adherent, Despair Inc., the famous creator of the Demotivator series of posters and desk accessories, brings you BitterSweets, some allegedly-flavored candy hearts with some more appropriate sayings to celebrate this most depressing of days. Flavors include “Banana Chalk, Grape Dust, Nappy-Citric, You-Call-This-Lime?, Pink Sand and Fossilized Antacid.”
So anyway, take heart, everyone. Spring will be here soon, National Pancake Day is next Tuesday, and the Cubs will soon be sucking again. Although we have to survive three more weeks of February while we’re at it.
## February 7, 2007
### best of craigslist : 2 effective Methods on bathing a CAT.
Filed under: Uncategorized — by lahar @ 6:18 am
best of craigslist : 2 effective Methods on bathing a CAT.
If you aren’t reading the Best of Craigslist, you are doing yourself a disservice.
First Method:
1. Thoroughly clean the toilet…
## February 5, 2007
### Is the Pope Catholic?
Filed under: Da Bears — by lahar @ 4:46 am
## February 4, 2007
### Super Bowl Forecast: Bear Weather
Filed under: Da Bears,Weather — by lahar @ 5:23 am
Ooohhhh, yeah. From the Miami NWS office:
FLZ072-074-041130-
INCLUDING THE CITIES OF…FORT LAUDERDALE…MIAMI
903 PM EST SAT FEB 3 2007
[…]
SUNDAY…BREEZY…CLOUDY. SCATTERED SHOWERS IN THE MORNING
THEN NUMEROUS SHOWERS IN THE AFTERNOON. HIGHS IN THE MID 70S. NORTHEAST WINDS 15 TO 20 MPH WITH GUSTS TO AROUND 30 MPH. CHANCE OF RAIN 60 PERCENT.
SUNDAY NIGHT…WINDY, CLOUDY. SCATTERED SHOWERS IN THE EVENING…THEN ISOLATED SHOWERS AFTER MIDNIGHT. LOWS AROUND 60. NORTH WINDS 20 TO 25 MPH WITH GUSTS TO AROUND 35 MPH. CHANCE OF RAIN 50 PERCENT.
Still beats the highs in the teens Chicago will have tomorrow, though.
## February 3, 2007
### Weather Wars
Filed under: Cartoons,Weather — by lahar @ 8:49 pm
Tom the Dancing Bug: Nate the Neoconservative
This high pressure area will cause rain tomorrow. Torrential downpour, folks.
## Solution to Problem 142 Pressure Vessel
Problem 142
A pipe carrying steam at 3.5 MPa has an outside diameter of 450 mm and a wall thickness of 10 mm. A gasket is inserted between the flange at one end of the pipe and a flat plate used to cap the end. How many 40-mm-diameter bolts must be used to hold the cap on if the allowable stress in the bolts is 80 MPa, of which 55 MPa is the initial stress? What circumferential stress is developed in the pipe? Why is it necessary to tighten the bolt initially, and what will happen if the steam pressure should cause the stress in the bolts to be twice the value of the initial stress?
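A quick numeric sketch of the solution (assuming, as the usual textbook treatment does, that the steam pressure acts on the inside diameter and that only the bolt stress margin above the 55 MPa initial tightening stress is available to resist it):

```python
import math

p = 3.5                     # steam pressure, MPa (N/mm^2)
D_out, t = 450.0, 10.0      # outside diameter and wall thickness, mm
D_in = D_out - 2 * t        # inside diameter = 430 mm

F = p * math.pi / 4 * D_in ** 2       # force pushing the cap off, N
A_bolt = math.pi / 4 * 40.0 ** 2      # area of one 40-mm bolt, mm^2
sigma_margin = 80.0 - 55.0            # MPa left after initial tightening

n_bolts = math.ceil(F / (sigma_margin * A_bolt))
sigma_t = p * D_in / (2 * t)          # circumferential (hoop) stress

print(n_bolts, round(sigma_t, 2))     # 17 bolts, 75.25 MPa
```

On the qualitative parts: the initial tightening keeps the gasket compressed so the joint stays leak-proof; if the steam pressure drove the bolt stress to twice the initial value (110 MPa), it would exceed the 80 MPa allowable and the joint would fail.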
## Solution to Problem 141 Pressure Vessel
Problem 141
The tank shown in Fig. P-141 is fabricated from 1/8-in steel plate. Calculate the maximum longitudinal and circumferential stress caused by an internal pressure of 125 psi.
## Solution to Problem 138 Pressure Vessel
Problem 138
The strength of longitudinal joint in Fig. 1-17 is 33 kips/ft, whereas for the girth is 16 kips/ft. Calculate the maximum diameter of the cylinder tank if the internal pressure is 150 psi.
## Solution to Problem 136 Pressure Vessel
Problem 136
A cylindrical pressure vessel is fabricated from steel plating that has a thickness of 20 mm. The diameter of the pressure vessel is 450 mm and its length is 2.0 m. Determine the maximum internal pressure that can be applied if the longitudinal stress is limited to 140 MPa, and the circumferential stress is limited to 60 MPa.
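A sketch of the arithmetic (hypothetical variable names): each stress limit implies a pressure limit, and the smaller one governs. Here the circumferential limit governs, since hoop stress is twice the longitudinal stress for a given pressure:

```python
D, t = 450.0, 20.0             # diameter and wall thickness, mm
p_long = 140.0 * 4 * t / D     # pressure limited by longitudinal stress
p_circ = 60.0 * 2 * t / D      # pressure limited by circumferential stress
p_max = min(p_long, p_circ)    # the tighter limit governs

print(round(p_long, 2), round(p_circ, 2), round(p_max, 2))  # 24.89 5.33 5.33
```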
## Solution to Problem 133 Pressure Vessel
Problem 133
A cylindrical steel pressure vessel 400 mm in diameter with a wall thickness of 20 mm is subjected to an internal pressure of 4.5 MN/m². (a) Calculate the tangential and longitudinal stresses in the steel. (b) To what value may the internal pressure be increased if the stress in the steel is limited to 120 MN/m²? (c) If the internal pressure were increased until the vessel burst, sketch the type of fracture that would occur.
## Thin-walled Pressure Vessels
A tank or pipe carrying a fluid or gas under a pressure is subjected to tensile forces, which resist bursting, developed across longitudinal and transverse sections.
TANGENTIAL STRESS, σt (Circumferential Stress)
Consider the tank shown being subjected to an internal pressure p. The length of the tank is L and the wall thickness is t. Isolating the right half of the tank:
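The referenced figure is not reproduced here, but the standard free-body argument on the half tank runs as follows (a sketch of the usual textbook derivation): the bursting force acts on the projected area $DL$ and is resisted by the wall tension $T$ on the two cut edges of thickness $t$:

$F = pDL = 2T = 2(\sigma_t t L)$

$\sigma_t = \dfrac{pD}{2t}$

The analogous balance across a transverse section gives the longitudinal stress $\sigma_l = \dfrac{pD}{4t}$, half the circumferential value.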
## Total Hydrostatic Force on Plane Surfaces
For horizontal plane surface submerged in liquid, or plane surface inside a gas chamber, or any plane surface under the action of uniform hydrostatic pressure, the total hydrostatic force is given by
$F = pA$
where p is the uniform pressure and A is the area.
In general, the total hydrostatic pressure on any plane surface is equal to the product of the area of the surface and the unit pressure at its center of gravity.
$F = p_{cg}A$
where pcg is the pressure at the center of gravity. For homogeneous free liquid at rest, the equation can be expressed in terms of unit weight γ of the liquid.
$F = \gamma \bar{h} A$
where $\bar{h}$ is the depth of liquid above the centroid of the submerged area.
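For example (hypothetical numbers, with the unit weight of water taken as 9.81 kN/m³), the force on a 2 m × 3 m plate whose centroid lies 4 m below the free surface:

```python
gamma = 9.81          # kN/m^3, unit weight of water
A = 2.0 * 3.0         # m^2, plate area
h_bar = 4.0           # m, depth of the centroid below the surface

F = gamma * h_bar * A # total hydrostatic force, kN
print(round(F, 2))    # 235.44
```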
Math Help - Cont. Proof
1. Cont. Proof
prove if $f,g$ are cont. on $[a,b]$, if $f(a) < g(a), f(b) > g(b)$, then there is a $c \in [a,b]$ s.t. $f(c) = g(c)$.
Ok so I believe I need to prove this using the Intermediate Value Thm. Do I need to apply the IVT to f - g? If that's even a correct approach, how do I begin to do that?
2. Originally Posted by Math2010
prove if $f,g$ are cont. on $[a,b]$, if $f(a) < g(a), f(b) > g(b)$, then there is a $c \in [a,b]$ s.t. $f(c) = g(c)$.
Define $h(x)=f(x)-g(x)$ then $h(a)<0~\&~h(b)>0$. Since $h$ is continuous on $[a,b]$, the Intermediate Value Theorem gives a $c \in (a,b)$ with $h(c)=0$, i.e. $f(c)=g(c)$.
# LHC: why proton-proton?
1. Dec 29, 2009
### wpoely
Hi,
I was wondering: why is the LHC a proton-proton collider and not a proton-antiproton collider? Is it for theoretical reasons or just for practical ones?
Thanks,
Ward
2. Dec 29, 2009
### hamster143
At those energies, there's not much difference science-wise, and it's much easier to maintain high luminosity with proton-protons than with proton-antiprotons.
3. Dec 29, 2009
### Bob S
Recall that the CERN SPS (super proton synchrotron) was a relatively low energy p-bar p machine. The CERN anti-proton facility could not produce the high p-bar currents that Fermilab can produce. But in the end, there is not much physics difference between p-p and p-bar p physics, although the cost of building LHC was higher because the magnets all have two beam tubes.
Bob S
[added] It is also useful to note that the planned SSC (Superconducting Super Collider) in Texas was a p-p machine. One reason that the Fermilab Tevatron is a p-bar p machine is because when it was turned on in 1983, it was a rapid (for a superconducting machine) cycling fixed-target machine; 900-GeV pulses 20 seconds long every minute. Converting this to a p - p collider would be very expensive, relative to making it a p-bar p collider.
Last edited: Dec 29, 2009
4. Dec 29, 2009
Staff Emeritus
Hamster has it right. Most processes of interest at the LHC are gluon-gluon initiated, so a proton is as useful as an antiproton. A proton-antiproton collider would have slightly cheaper magnets, but would be starved for antiprotons and would have perhaps 1% of the luminosity of the LHC, so you would end up with a far less capable machine.
I don't believe the Tevatron ever ran at 900 GeV in fixed-target operation. If it did, it certainly didn't run like that way for long. Virtually all the Tevatron fixed-target data was at 800 GeV. Cycling the magnets 1000 times a day to 900 GeV would be extremely unreliable.
5. Dec 30, 2009
### wpoely
So, the low luminosity of proton-antiproton is just because it isn't so easy to produce antiprotons?
Ward
6. Jan 1, 2010
### vitruvianman
It is very hard to find or produce antimatter and much harder to control it, especially when it's very near to (normal) matter. As we all know it causes nuclear reactions depending on both matter's and antimatter's (in the area) relativistic masses. Then is it impossible? Of course not, but very hard and would cost too much. :)
7. Jan 1, 2010
### blechman
Antiprotons are produced at fixed-target like setup:
$$p+p\rightarrow p+p+p+\overline{p}$$
As you can see, it takes a lot to make one antiproton, and then you have to harvest and "cool" them so they can be turned into a beam. This is difficult, and it is VERY expensive! And, as has already been mentioned, you don't gain anything for it at the LHC energies, since a proton and an antiproton have just as many gluons.
8. Jan 1, 2010
### Bob S
As shown above, the total CM energy required to produce an anti-proton is $4M_0c^2$.
The transformation to the Lab frame is given in Eq. 38.3:
http://pdg.lbl.gov/2009/reviews/rpp2009-rev-kinematics.pdf
So the threshold laboratory total energy $E_1$ and kinetic energy $T_1$ needed to produce an anti-proton are $7M_0c^2 \approx 6.6$ GeV and $6M_0c^2 \approx 5.6$ GeV respectively. The Bevatron at Berkeley was built in ~1952 with a kinetic energy of 6.2 GeV specifically to discover anti-protons, which it did, for which Emilio Segre and Owen Chamberlain won the Nobel Prize.
Bob S
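The threshold numbers quoted above can be checked from the invariant mass: at threshold the CM energy must equal $4M_0c^2$, and for a fixed proton target $s = 2M_0^2 + 2M_0E_1$ (units with $c=1$). A quick check:

```python
m = 0.938272            # proton mass, GeV

s_min = (4 * m) ** 2    # four proton masses at rest in the CM frame
E1 = (s_min - 2 * m ** 2) / (2 * m)  # lab total energy = 7m
T1 = E1 - m                           # lab kinetic energy = 6m

print(round(E1, 2), round(T1, 2))     # 6.57 5.63
```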
# Find a specific Substring
I'm writing a little Downloader that will look through directories online and download the content. The first prototype of my program is a success, now I just want to refine it and learn some more C#. The task is this:
Take this string:
http://example.free.pl/plus%20violent/Dark%20The%20Suns/All%20Ends%20In%20Silence/
and create the substring between the last two /
Result: All%20Ends%20In%20Silence
I find my current code brutish and probably not a clean, C# way of solving the problem:
string GetDirectoryName(string directory)
{
int lastIndex = directory.LastIndexOf("/");
int count = directory.Count<char>(elem => elem == '/') - 1;
int startIndex = 0;
while (count > 0)
{
startIndex = directory.IndexOf("/", startIndex) + 1;
--count;
}
return directory.Substring(startIndex, lastIndex - startIndex).Replace("%20", " ");
}
Please advise me how I could make this code cleaner, clearer and following the C# methodology of programming.
The .NET framework will do this for you pretty cleanly with the right classes.
For example:
string url = "http://example.free.pl/plus%20violent/Dark%20The%20Suns/All%20Ends%20In%20Silence/";
DirectoryInfo di = new DirectoryInfo( new Uri(url).LocalPath );
The di.Name property contains "All Ends In Silence".
• That's pretty useful. I'll have to look at the library on MSDN, thanks! – IAE Sep 20 '11 at 15:27
You could replace your loop with a call to LastIndexOf:
const char separator = '/';
int lastSlash = directory.LastIndexOf(separator);
int slashBeforeLast = directory.LastIndexOf(separator, lastSlash - 1);
return directory.Substring(slashBeforeLast + 1, lastSlash - slashBeforeLast - 1);
• I am already getting the last '/' with LastIndexOf. How would a second call to LastIndexOf get me the other '/' I need? – IAE Sep 21 '11 at 15:45
• @SoulBeaver: By adding an index. I've specifically linked to the overload that takes an index. I'm updating my answer with an example. – Hosam Aly Sep 21 '11 at 21:00
• Aaaah now I understand what you mean, thanks! – IAE Sep 22 '11 at 10:49
## The future: much like the present
In cosmology, the steady-state universe describes a model for which the universe is neither expanding nor contracting, but, as the name suggests, remains the same size (in terms of its geometric boundary). However, this has long been ruled out by radio telescope data, which show the universe is expanding and possibly even accelerating. But in… Continue reading The future: much like the present
## Paper Wealth is Real Wealth
A common claim in various online discussions, usually in regard to Elon Musk’s wealth (or other billionaires, but it is usually Elon Musk given his fame and wealth), is that the reason he cannot ‘do more’ in terms of philanthropy is that his wealth is paper wealth, not real wealth, and thus it’s not like having money. Regardless of… Continue reading Paper Wealth is Real Wealth
## Normies, Status, and Substack
In regard to status, it’s not about how much money you earn, but rather the size of your megaphone. Obama is hardly among the wealthiest in the world, but commands a huge megaphone in terms of his influence. Normies, even wealthy normies, are stuck having little influence despite otherwise projecting the outward appearance of success,… Continue reading Normies, Status, and Substack
## Repost: Are Things Really That Bad For Millennials
The post The Vampire Society by Vox Day went viral. It seems everyone hates the boomers; even boomers hate themselves. Boomers are blamed, by conservatives and liberals alike, for everything that is wrong with society. They are accused of being narcissistic and materialistic and being poor parents who outsourced their parenting duties to daycare so… Continue reading Repost: Are Things Really That Bad For Millennials
## Convergence to the global hegemon
Another correct prediction: the S&P 500 closes above 4,000 for the first time ever, and as of writing this post it is up another 1.5%, at 4080-. The S&P 500 closed above 4000 for the first time to kick off the second quarter, buoyed by a continuing rebound in technology stocks. The broad stock gauge… Continue reading Convergence to the global hegemon
## Random Thoughts April 4th
A question I have pondered is, how does one come up with interesting insights/thoughts? I would suspect there is reading involved, and also a need to gauge the interests of the prospective audience, but what makes something ‘click’ or go viral? From what I have observed, content that is introspective and personal that tells the author’s… Continue reading Random Thoughts April 4th
## Pre-Employment IQ Testing is Very Common
It is a common misconception online that pre-employment IQ testing is proscribed due to Griggs vs. Duke. This is false, as I explain in more detail here. AFAIK, no employer administers an actual IQ test, such as the Stanford-Binet or Wechsler Adult Intelligence Scale tests, as those are expensive and time-consuming to administer and typically must… Continue reading Pre-Employment IQ Testing is Very Common
# Looking for a proof of an interesting identity
Working on a problem I have encountered an interesting identity:
$$\sum_{k=0}^\infty \left(\frac{x}{2}\right)^{n+2k}\binom{n+2k}{k} =\frac{1}{\sqrt{1-x^2}}\left(\frac{1-\sqrt{1-x^2}}{x}\right)^n,$$ where $n$ is a non-negative integer number and $x$ is a real number with absolute value less than 1 (probably a similar expression is valid for arbitrary complex numbers $|z|<1$).
Is there any simple proof of this identity?
• How/where did you meet this identity? Is the LHS just the Taylor series for the RHS? – Daniel Robert-Nicoud Dec 14 '17 at 23:38
• Anything to do with a random walk? Why did you call this interesting, btw? – Mathemagical Dec 14 '17 at 23:44
• Reminiscent of the explicit form for Chebyshev polynomials of the first kind, hence the idea of enforcing the substitution $x=\sin\theta$ and applying the residue theorem looks like a promising one ;) – Jack D'Aurizio Dec 14 '17 at 23:56
• Or you may prove that both sides are the terms of an Appell / Fibonacci sequence of polynomials. – Jack D'Aurizio Dec 14 '17 at 23:57
• @tired This is in regard to the hitting time of a random walk (more particularly, the probability generating function of the hitting time of the walk at the barrier x=1. Please see the equations 15,16, and 17 in the linked document. galton.uchicago.edu/~lalley/Courses/312/RW.pdf The details of the computation are not listed here, but they are shown in greater detail in Michael Steele's book, Stochastic Calculus and Financial applications (pages 8 and 9). – Mathemagical Dec 16 '17 at 15:25
Using
$$\binom{n}{k}=\frac{1}{2 \pi i}\oint_C\frac{(1+z)^{n}}{z^{k+1}}dz$$ we get (integration contour is the unit circle)
$$2\pi iS_n=\oint dz \sum_{k=0}^{\infty}\frac{(1+z)^{n+2k}x^{n+2k}}{z^{k+1}2^{n+2k}}=\oint dz \frac{(1+z)^n x^n}{z2^n}\sum_{k=0}^{\infty}\frac{(1+z)^{2k}x^{2k}}{2^{2k}z^k}=\\ 4\frac{x^n}{2^n}\oint dz \underbrace{\frac{(1+z)^n}{4z-(1+z)^2x^2}}_{f(z)}$$
for $|x|<1$ only we have just one pole of $f(z)$ inside the unit circle namely $z_0(x)=\frac2{x^2}-\frac{2\sqrt{1-x^2}}{x^2}-1$ , so
$$S_n=4\frac{x^n}{2^n}\text{res}(f(z),z=z_0(x))=4\frac{x^n}{2^n}\left[ \frac{1}{4 \sqrt{1-x^2}}\left(2\frac{1-\sqrt{1-x^2}}{ x^2}\right)^n\right]$$
or
$$S_n=\frac{1}{\sqrt{1-x^2}}\left(\frac{1-\sqrt{1-x^2}}{ x}\right)^n$$
• Thanks Jack :-) – tired Dec 15 '17 at 0:55
• @tired: Thank you very much for the nice proof. It contains however a misprint. $4$ shall be replaced with $4^{1/n}$ in the denominator of the residue value. But probably simpler and more correctly were to write it just as $$\text{res}(f(z),z=z_0)=\frac{1}{4\sqrt{1-x^2}} \left(2\frac{1-\sqrt{1-x^2}}{x^2}\right)^n.$$ – user Dec 15 '17 at 11:56
• @user355705 you are absolutly right – tired Dec 16 '17 at 15:07
Extracting coefficients on the RHS we get the integral (coefficient on $x^{n+2k}$)
$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+2k+1}} \frac{1}{\sqrt{1-z^2}} \left(\frac{1-\sqrt{1-z^2}}{z}\right)^n \; dz.$$
Now we put $(1-\sqrt{1-z^2})/z = w$ so that $z = 2w/(1+w^2).$ This has $w = \frac{1}{2} z + \cdots$ so the image in $w$ of the contour in $z$ can be deformed to a small circle enclosing the origin in the $w$-plane. (Moreover we see that the exponentiated term starts at $z^n$ which justifies the corresponding offset in the series.) We get $dz = \left(2/(1+w^2) - 4w^2/(1+w^2)^2\right) dw = 2(1-w^2)/(1+w^2)^2 \; dw.$ We also have $1-z^2 = 1 - 4w^2/(1+w^2)^2 = (1-w^2)^2/(1+w^2)^2.$ All of this yields
$$\frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w^2)^{n+2k+1}}{2^{n+2k+1} w^{n+2k+1}} \frac{1}{(1-w^2)/(1+w^2)} w^n \frac{2(1-w^2)}{(1+w^2)^2} \; dw \\ = \frac{1}{2^{n+2k}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w^2)^{n+2k}}{w^{2k+1}} \; dw.$$
This evaluates by inspection to
$$\frac{1}{2^{n+2k}} [w^{2k}] (1+w^2)^{n+2k} = \frac{1}{2^{n+2k}} [w^{k}] (1+w)^{n+2k} \\ = \frac{1}{2^{n+2k}} {n+2k\choose k}$$
which is the claim.
I plan to insert this approach into the next (2019) version of my notes.
The even powers of $\arcsin(x)$ all have a nice Maclaurin series (see eqs (19),(20),(21),(22)), and this can be seen as a side effect of the Lagrange–Bürmann inversion theorem. I am going to implement the same trick here, by computing the derivatives at the origin of the RHS (the opposite way has already been investigated by tired).
$$[x^m]\left[\frac{1}{\sqrt{1-x^2}}\left(\frac{1-\sqrt{1-x^2}}{x}\right)^n\right]=\frac{1}{2\pi i}\oint\frac{(1-\sqrt{1-x^2})^n}{x^{n+m+1}\sqrt{1-x^2}}\,dx$$ where the integral is performed along a small circle enclosing the origin. What about enforcing the substitution $x=\sin z$? The sine function is holomorphic and injective in a neighbourhood of the origin, hence the RHS is simply converted into $$\frac{1}{2\pi i}\oint\frac{(1-\cos z)^n}{\left(\sin z\right)^{n+m+1}}\,dz =\frac{1}{2\pi i}\oint\frac{2^n \sin^{2n}\frac{z}{2}}{2^{n+m+1}\sin^{n+m+1}\frac{z}{2}\cos^{n+m+1}\frac{z}{2}}\,dz$$ or $$\frac{1}{2\pi i}\oint\frac{\tan^{n-m-1}z}{2^{m}\cos^{2m}z}\,dz=\frac{1}{2^{m+1}\pi i}\oint u^{n-m-1}(1+u^2)^{m-1}\,du$$ via $z\to 2z$ and $z\to\arctan u$. Now the RHS is trivially given by a binomial coefficient multiplied by $\frac{1}{2^m}$ (provided by $m\geq n$) and the claim is proved.
# Tech Reports
## ULCS-13-004
### A resolution-based calculus for Coalition Logic (Extended Version)
Cláudia Nalon, Lan Zhang, Clare Dixon, and Ullrich Hustadt
### Abstract
We present a resolution-based calculus for the Coalition Logic CL, a non-normal modal logic used for reasoning about cooperative agency. We present a normal form and a set of resolution-based inference rules to solve the satisfiability problem in CL. We also show that the calculus presented here is sound, complete, and terminating.
[Full Paper]
# Determine whether the functions are linearly independent at the interval $I$. (In the context of differential equations)
So, here are two functions $$x_1(t)=\cos(2t)-1 \text{ and } x_2(t)=\sin^2(t), I = R$$. $$C_1x_1(t)+C_2x_2(t)=0$$
Here I am stuck, because I try to find the values of $$t$$ such that I can express $$C_1 \text{ or } C_2$$ and show that they must be equal to zero. But in this case, no matter the $$t$$ I choose I get all expression equal to 0, so I cannot conclude that $$C_1 \text{ and} C_2 \text{ must be equal to } 0 \text{ so that the equation } C_1x_1(t)+C_2x_2(t)=0 \text{ is valid}.$$ I think may be I can use Wronskian determinant as it tells whether two functions are linearly dependant. But the determinant was very complicated, so I think there must be some other solution.
• you can use this $\cos(2x)=2\cos^2(x)-1$ And also $\cos^2(x)+ \sin^2(x)=1$ You need to find the value of the constants $(C_1;C_2)$ not the value of $t$. – Aryadeva Nov 11 '19 at 20:53
• @Isham Yes, I understand, but we first try to guess the value of $t$ so that either $C_1x_1 \text{ or } C_2x_2 \text{ equals to } 0.$ I don't get how it changes something. Still, if I take $t=0$ Then $x_1=0$ and $x_2=0$ and so with any $t$, ( I think). I don't know how to express $C_2$, for example, because, if I try to eliminate $C_1x_1$ I also eliminate $C_2x_2$. – user Nov 11 '19 at 21:13
• It's normal because $x_1=-2\sin^2(t)=-2x_2$ – Aryadeva Nov 11 '19 at 22:41
• @Isham Thank you very much! – user Nov 11 '19 at 22:45
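The dependence pointed out in the comments ($x_1 = -2x_2$, from the identity $\cos 2t = 1 - 2\sin^2 t$) is easy to confirm numerically:

```python
import math

for t in (0.0, 0.5, 1.3, 2.7):
    x1 = math.cos(2 * t) - 1
    x2 = math.sin(t) ** 2
    assert abs(x1 + 2 * x2) < 1e-12   # x1 = -2*x2 for every t

print("x1 + 2*x2 == 0 on all of R: linearly dependent")
```

Since one function is a constant multiple of the other, the Wronskian vanishes identically and the pair is linearly dependent on $I = \mathbb{R}$.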
# Value Engineering MCQ Quiz - Objective Question with Answer for Value Engineering - Download Free PDF
Last updated on Nov 2, 2022
## Latest Value Engineering MCQ Objective Questions
#### Value Engineering Question 1:
In value engineering approach, the value of the product is
1. Inversely proportional to its functions and directly proportional to its cost
2. Directly proportional to its functions and inversely proportional to its cost
3. Inversely proportional to its functions as well as its cost
4. Directly proportional to its functions as well as its cost
5. Directly proportional to its functions and independent of its cost
Option 2 : Directly proportional to its functions and inversely proportional to its cost
#### Value Engineering Question 1 Detailed Solution
Explanation:
Value Engineering:
Value engineering is the review of new or existing products during the design phase to reduce costs and increase functionality to increase the value of the product. The value of an item is defined as the most cost-effective way of producing an item without taking away from its purpose.
• Value engineering is a systematic and organized approach to providing the necessary functions in a project at the lowest cost.
• Value engineering promotes the substitution of materials and methods with less expensive alternatives, without sacrificing functionality.
• It is focused solely on the functions of various components and materials, rather than their physical attributes.
Example:
A bottle of dishwashing liquid that becomes slippery after some of the soap has leaked to the sides may be improved by redesigning the shape of the bottle and the opening spout to improve grip and minimize leakage. This improvement could lead to increased sales without incurring additional advertising costs.
Important Points
• With value engineering, cost reduction should not affect the quality of the product being developed or analyzed.
• Brainstorming is a way to generate ideas within a group
• The father of value engineering is Lawrence Miles
#### Value Engineering Question 2:
In value engineering ‘worth’ is value of:
1. Product
2. System
3. Service
4. Function
Option 4 : Function
#### Value Engineering Question 2 Detailed Solution
Concept:
Value analysis/engineering
• Value analysis aims at a systematic identification and elimination of unnecessary costs.
• Value analysis examines the design, method of manufacturing, the material used, function and cost of each and every component in order to produce it economically without decreasing its utility, function or reliability.
Value
• Value is the price we pay for a product, process, material or service required to perform a specific function with the required quality and reliability.
• Value can be defined as the combination of the quality, efficiency, price and service which ensures the ultimate economy and satisfaction of the purchaser
Worth
• In the value methodology, worth is defined as 'the lowest overall cost to perform a function without regard to criteria or codes'.
• Comparing function cost to function worth is the primary technique to help study teams identify areas of the greatest potential value improvement
• The very act of establishing the worth of a function will often create obvious choices for improvement. Another step is to create the Value Index.
• This index reflects the basic value theory that value is the relationship between cost and worth.
• The formula is: Value = Worth / Cost
#### Value Engineering Question 3:
For air travel over a distance of 500 km, the ticket price is Rs. 4000. The comfort of the air travel can be monetized at Rs.3000, and the monetary value of time saved because of air travel is Rs.3000. The value of air travel is ____________.
#### Value Engineering Question 3 Detailed Solution
Concept:
Value Engineering:
It is a process using multi-disciplined teams to review projects and standards to identify high-cost functions with improvement potential. The teams follow the systematic, creative VE job plan to establish an optimum value for selected functions. Alternatives, which will provide the necessary functions at the most economical initial capital costs and/or life cycle cost, are developed consistent with requirements for safety, quality, operation, maintenance, and aesthetics.
$$Value=\frac {Function}{Cost}$$
Calculation:
Given:
Monetized comfort = Rs 3000; Monetary time saved = Rs 3000; Cost of ticket = Rs 4000;
Monetary function = Monetized comfort + Monetary time saved
= 3000 + 3000 = Rs 6000
$$Value=\frac {Function}{Cost}=\frac {6000}{4000}$$
Value = 1.5
#### Value Engineering Question 5:
The percentage of area under ±σ in Gaussian frequency distribution is
1. 68.3 percent
2. 99.7 percent
3. 95.5 percent
4. 65.5 percent
Option 1 : 68.3 percent
#### Value Engineering Question 5 Detailed Solution
Explanation:
Gaussian distribution:
The normal distribution is a very common continuous probability distribution that is used in statistics and Six Sigma methodology.
The Gaussian distribution (named after the German mathematician Carl Gauss, who first described it) is often called the Bell Curve because the graph of its probability density looks like a bell. It is also known as the Normal Distribution.
• The percentage of area under ± σ in a Gaussian frequency distribution is 68.3%, i.e. 68.3% of the data falls within one standard deviation of the mean. There is therefore a 68.3% probability of randomly selecting a score between -1 and +1 standard deviations from the mean.
• The percentage of area under ± 2σ is 95.4%, i.e. 95.4% of the data falls within two standard deviations of the mean, so there is a 95.4% probability of randomly selecting a score between -2 and +2 standard deviations from the mean.
• The percentage of area under ± 3σ is 99.7%, i.e. 99.7% of the data falls within three standard deviations of the mean, so there is a 99.7% probability of randomly selecting a score between -3 and +3 standard deviations from the mean.
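These percentages follow from the error function: the probability mass within ±kσ of the mean of a normal distribution is erf(k/√2). A quick check (Python sketch):

```python
from math import erf, sqrt

def within(k):
    """Fraction of a normal distribution within +/- k standard deviations."""
    return erf(k / sqrt(2))

print(round(100 * within(1), 1))   # 68.3
print(round(100 * within(2), 1))   # 95.4
print(round(100 * within(3), 1))   # 99.7
```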
## Top Value Engineering MCQ Objective Questions
#### Value Engineering Question 6
The aim of the value engineering is to
1. minimize the overall cost of production without affecting the quality of product
2. determine the value of overall production
3. relate values of a job
4. All of the above
Option 1 : minimize the overall cost of production without affecting the quality of product
#### Value Engineering Question 6 Detailed Solution
Concept:
Value Engineering explicitly considers cost by collecting cost data and using cost models to make estimates for all functions over the life cycle. For required functions that cost more than they are worth, Value Engineering uses structured brainstorming to determine alternative ways of performing them.
The aim of value engineering is to minimize the overall cost of production without affecting the quality of the product, i.e. to increase the value of the product. The value of the product can be increased by improving its function or by decreasing its cost without compromising the quality of the product.
The father of value engineering is Lawrence Miles.
Brainstorming is a way to generate ideas within a group
#### Value Engineering Question 7
The probability of a device performing its function for the period intended, under the prescribed operating condition is known as
1. Durability
2. Quality
3. Usability
4. Reliability
Option 4 : Reliability
#### Value Engineering Question 7 Detailed Solution
Reliability is the probability of a device performing its intended function under given operating conditions and environments for a specified length of time.
• Reliability can be depicted as the probability that an item will perform appropriately for a specified time period under a given service condition.
• For example, the reliability of 0.997 for a typical part implies that there is a probability of failure (an inverse of reliability) of 3 parts in every 1000 parts.
• If R(t) and F(t) are respectively the reliability and the probability of failure at time t, and the two outcomes are mutually exclusive, then R(t) + F(t) = 1.
#### Value Engineering Question 8
In value engineering, the term value refers to
1. Manufacturing cost of the product
2. Selling price of the product
3. Total cost of the product
4. Utility of the product
Option 4 : Utility of the product
#### Value Engineering Question 8 Detailed Solution
Explanation:
Value analysis/engineering
• Value analysis aims at a systematic identification and elimination of unnecessary costs.
• Value analysis examines the design, method of manufacturing, the material used, function and cost of each and every component in order to produce it economically without decreasing its utility, function or reliability.
Value
• Value is the price we pay for a product, process, material or service required to perform a specific function with the required quality and reliability.
• Value can be defined as the combination of the quality, efficiency, price and service which ensures the ultimate economy and satisfaction of the purchaser.
• We can express value in a mathematical way, as
$$Value = \frac{{utility\;or\;function}}{{cost}} = \frac{{worth\;to\;you}}{{price\;you\;pay}}$$
Thus the value of a product can be increased either by increasing its utility at the same cost or by decreasing its cost for the same utility.
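The ratio above can be applied to compare design alternatives. The two designs and the 0-100 utility scores below are hypothetical, purely to illustrate the Value = function / cost calculation:

```python
# Hypothetical design alternatives; "function" is a utility score on an
# assumed 0-100 scale, "cost" is in arbitrary currency units.
designs = {
    "design_A": {"function": 80, "cost": 40},
    "design_B": {"function": 90, "cost": 60},
}

for name, d in designs.items():
    value = d["function"] / d["cost"]   # Value = function / cost
    print(name, round(value, 2))        # design_A -> 2.0, design_B -> 1.5
```

Design A delivers slightly less utility but at a much lower cost, so its value ratio is higher.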
#### Value Engineering Question 9
_____ organisation is also called as military organization.
1. Functional
2. Line
3. Line & staff
4. None of the above
Option 2 : Line
#### Value Engineering Question 9 Detailed Solution
Explanation:
Line organization:
• Line organization is the basic framework for the whole organization.
• It represents a direct vertical relationship through which authority flows.
• In line organization authority flows from the top to the lower levels and responsibility flows upwards.
• Line organization is also known as military organization.
Line and staff organization:
• Line and staff organization is more suitable to a large business organization as compared to the line organization due to the requirement of expert and updated information for taking important decisions.
• Line managers utilize the advice of staff specialists in the case of line and staff organization. Therefore, line and staff organization work out to be better than a line organization.
Functional Organisation:
• In a functional organization, all the activities in the business enterprise are grouped together according to certain functions like production, marketing, finance, and personnel.
• Each function is put under the charge of an entire specialist.
• Each functional head performs a specialized function for the entire organization.
• This structure was designed by F.W.Taylor.
#### Value Engineering Question 10
The main object of scientific layout is
1. to produce better quality of product
2. to utilize maximum floor area
3. to minimize production delays
4. all of these
Option 4 : all of these
#### Value Engineering Question 10 Detailed Solution
Concept:
Plant/Scientific layout is the most effective physical arrangement, either existing or in plans of industrial facilities i.e. arrangement of machines, processing equipment and service departments to achieve greatest co-ordination and efficiency of 4 M’s (Men, Materials, Machines and Methods) in a plant.
The objective of scientific layout is:
• Proper and efficient utilization of available floor space
• To produce better quality of products
• Transportation of work from one point to another point without any delay
• Proper utilization of production capacity
• Reduce material handling costs
• Reduce accidents
• Provide for volume and product flexibility
• Provide ease of supervision and control
• Provide for employ safety and health
• Allow easy maintenance of machine and plant
• Improve productivity
NOTE:
Types of layout:
Product/Line layout: In this type of layout, the machines and equipment are arranged in a single line according to the sequence of operations, e.g. automobile assembly.
Process/Functional layout: In this type of layout, machines of a similar type are grouped together in one place. It is used in mechanical workshops.
Fixed layout: In this type of layout, the product is fixed at one place and the machines and equipment are moved to the product for machining, e.g. airplanes, spaceships, etc.
Combined/group layout: A combination of product and process layout is known as a combined layout.
#### Value Engineering Question 11
In value engineering approach, the value of the product is
1. Inversely proportional to its functions and directly proportional to its cost
2. Directly proportional to its functions and inversely proportional to its cost
3. Inversely proportional to its functions as well as its cost
4. Directly proportional to its functions as well as its cost
Option 2 : Directly proportional to its functions and inversely proportional to its cost
#### Value Engineering Question 11 Detailed Solution
Explanation:
Value Engineering:
Value engineering is the review of new or existing products during the design phase to reduce costs and increase functionality to increase the value of the product. The value of an item is defined as the most cost-effective way of producing an item without taking away from its purpose.
• Value engineering is a systematic and organized approach to providing the necessary functions in a project at the lowest cost.
• Value engineering promotes the substitution of materials and methods with less expensive alternatives, without sacrificing functionality.
• It is focused solely on the functions of various components and materials, rather than their physical attributes.
Example:
A bottle of dishwashing liquid that becomes slippery after some of the soap has leaked to the sides may be improved by redesigning the shape of the bottle and the opening spout to improve grip and minimize leakage. This improvement could lead to increased sales without incurring additional advertising costs.
Important Points
• With value engineering, cost reduction should not affect the quality of the product being developed or analyzed.
• Brainstorming is a way to generate ideas within a group
• The father of value engineering is Lawrence Miles
#### Value Engineering Question 12
In the value analysis, the value is denoted by
1. ratio of function to cost
2. ratio of cost to function
3. square root of the ratio of cost to function
4. square root of the ratio of function to cost
Option 1 : ratio of function to cost
#### Value Engineering Question 12 Detailed Solution
Explanation
• The term value in value engineering is defined as the ratio of function (utility) to cost.
$$value = \frac{{Function\left( {or~utility} \right)}}{{cost}}$$
• The value of a product can be increased either by increasing its utility at the same cost or by decreasing its cost for the same function.
• A function specifies the purpose of the product: what the product does and what utility it provides.
#### Value Engineering Question 13
1. Reduction of scrap
2. Better manufacturing
3. Better quality of performance
4. Increase productivity
Option 4 : Increase productivity
#### Value Engineering Question 13 Detailed Solution
Explanation:
Value Analysis:
• Value analysis is a cost-prevention technique which eliminates unnecessary cost built into the product.
• The object of value analysis is to reduce cost and increase profit, not to improve quality: improving quality would raise the cost, whereas value analysis reduces cost while maintaining the current quality.
• It uses cheaper and better material
• It uses an efficient and economical process
• It reduces the cost of the product
• It improves product design
• It increases the utility of the product.
• It increases productivity and thus increases profit.
Value analysis procedure.
(i) Blast :
• Identify the product
• Collect the relevant information
• Define the different function
(ii) Create :
• Create different alternatives
• Critically evaluate the alternative
(iii) Refine :
• Develop the best alternatives
• Implement the alternative
#### Value Engineering Question 14
The percentage of area under ±σ in Gaussian frequency distribution is
1. 68.3 percent
2. 99.7 percent
3. 95.5 percent
4. 65.5 percent
Option 1 : 68.3 percent
#### Value Engineering Question 14 Detailed Solution
Explanation:
Gaussian distribution:
The normal distribution is a very common continuous probability distribution that is used in statistics and Six Sigma methodology.
The Gaussian distribution (named after the German mathematician Carl Gauss, who first described it) is often called the bell curve because the graph of its probability density looks like a bell. It is also known as the normal distribution.
• The percentage of area under ±σ in a Gaussian frequency distribution is approximately 68.3%, i.e. 68.3% of the data falls within one standard deviation of the mean. This means there is a 68.3% probability of randomly selecting a score between -1 and +1 standard deviations from the mean.
• The percentage of area under ±2σ is approximately 95.5% (more precisely 95.45%), i.e. 95.45% of the data falls within two standard deviations of the mean. This means there is about a 95.5% probability of randomly selecting a score between -2 and +2 standard deviations from the mean.
• The percentage of area under ±3σ is approximately 99.7%, i.e. 99.7% of the data falls within three standard deviations of the mean. This means there is a 99.7% probability of randomly selecting a score between -3 and +3 standard deviations from the mean.
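These coverage fractions can be checked numerically: for a normal distribution, P(|X − μ| ≤ kσ) = erf(k/√2). A short sketch using only Python's standard library:

```python
import math

def within_k_sigma(k: float) -> float:
    """P(|X - mu| <= k*sigma) for a normal distribution, via the error function."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"+/-{k} sigma: {100 * within_k_sigma(k):.2f}%")
# prints 68.27%, 95.45%, 99.73% for k = 1, 2, 3
```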
#### Value Engineering Question 15
Accident investigation should be aimed at
1. Fact-finding
2. Fault finding
3. Interrogation
4. Cause of the injury
Option 1 : Fact-finding
#### Value Engineering Question 15 Detailed Solution
Explanation:
Accident Investigation:
An accident investigation is a process for reporting, tracking, and investigating an accident. Accident investigations are an opportunity to uncover safety problems and correct them before they reoccur. When incidents are investigated, the emphasis should be concentrated on finding the root cause of the incident so you can prevent the event from happening again. The purpose is to find facts that can lead to corrective actions, not to find fault.
There are 5 basic steps to investigate an accident:
Gather information:
• Get a brief overview of the situation from witnesses and employees directly involved in the incident.
• Enough information should be collected to understand the basics of what happened.
Search for and establish facts:
• Examine the accident scene, looking for things that will help you understand how the accident happened.
• This includes looking for dents, cracks, scrapes, or splits in equipment; tire tracks or footprints; spills or leaks; scattered or broken parts etc.
• A record should be maintained related to the accident scene.
Establish essential contributing factors:
• Factors that influence the chances of an accident are evaluated in this section.
• Contributing factors include environmental factors, design factors, systems and procedures, and human behaviour.
• Design factors include workplace layout, design of tools and equipment, and maintenance.
• Systems and procedures factors include lack of systems and procedures, inappropriate systems and procedures, inadequate training procedures, and housekeeping.
• Human behavioral factors are common in accidents and include carelessness, rushing, and fatigue, among others.
Find the root causes:
• There are various contributing factors in an accident but here in this section, we try to evaluate the main cause of the accident.
• The main cause should be very much clear and detailed analysis should be done.
Determine corrective actions:
• Once we know what happened and why it happened, we should focus to determine how to fix the problem so that you avoid repeat accidents.
• We should include all the possible ideas to avoid such type of accident in future.
# Schnorr signature
In cryptography, a Schnorr signature is a digital signature produced by the Schnorr signature algorithm. Its security is based on the intractability of certain discrete logarithm problems. The Schnorr signature is considered the simplest[1] digital signature scheme to be provably secure in a random oracle model.[2] It is efficient and generates short signatures. It was covered by U.S. Patent 4,995,082 which expired in February 2008.
## Algorithm
### Choosing parameters
• All users of the signature scheme agree on a group, ${\displaystyle G}$ , of prime order, ${\displaystyle q}$ , with generator, ${\displaystyle g}$ , in which the discrete log problem is assumed to be hard. Typically a Schnorr group is used.
• All users agree on a cryptographic hash function ${\displaystyle H:\{0,1\}^{*}\rightarrow \mathbb {Z} _{q}}$ .
### Notation
In the following,
• Exponentiation stands for repeated application of the group operation
• Juxtaposition stands for multiplication on the set of congruence classes or application of the group operation (as applicable)
• Subtraction stands for subtraction on the set of congruence classes
• ${\displaystyle M\in \{0,1\}^{*}}$ , the set of finite bit strings
• ${\displaystyle s,e,e_{v}\in \mathbb {Z} _{q}}$ , the set of congruence classes modulo ${\displaystyle q}$
• ${\displaystyle x,k\in \mathbb {Z} _{q}^{\times }}$ , the multiplicative group of integers modulo ${\displaystyle q}$ (for prime ${\displaystyle q}$ , ${\displaystyle \mathbb {Z} _{q}^{\times }=\mathbb {Z} _{q}\setminus {\overline {0}}_{q}}$ )
• ${\displaystyle y,r,r_{v}\in G}$ .
### Key generation
• Choose a private signing key, ${\displaystyle x}$ , from the allowed set.
• The public verification key is ${\displaystyle y=g^{x}}$ .
### Signing
To sign a message, ${\displaystyle M}$ :
• Choose a random ${\displaystyle k}$ from the allowed set.
• Let ${\displaystyle r=g^{k}}$ .
• Let ${\displaystyle e=H(r\parallel M)}$ , where ${\displaystyle \parallel }$ denotes concatenation and ${\displaystyle r}$ is represented as a bit string.
• Let ${\displaystyle s=k-xe}$ .
The signature is the pair, ${\displaystyle (s,e)}$ .
Note that ${\displaystyle s,e\in \mathbb {Z} _{q}}$ ; if ${\displaystyle q<2^{160}}$ , then the signature representation can fit into 40 bytes.
### Verifying
• Let ${\displaystyle r_{v}=g^{s}y^{e}}$
• Let ${\displaystyle e_{v}=H(r_{v}\parallel M)}$
If ${\displaystyle e_{v}=e}$ then the signature is verified.
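The key generation, signing, and verification steps above can be sketched as a toy Python implementation. The tiny 11-bit group and the SHA-256 instantiation of H are assumptions for demonstration only; real deployments need a group of cryptographic size (e.g. a ~2048-bit modulus or an elliptic-curve group):

```python
import hashlib
import secrets

# Toy Schnorr group: p is prime and q = (p - 1) / 2 is prime, so g = 4
# (a quadratic residue) generates the order-q subgroup of Z_p^*.
# These parameters are FAR too small for real security.
p, q, g = 2039, 1019, 4

def H(r: int, M: bytes) -> int:
    """e = H(r || M), instantiated with SHA-256 and reduced into Z_q (assumption)."""
    digest = hashlib.sha256(str(r).encode() + M).digest()
    return int.from_bytes(digest, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private signing key x
    y = pow(g, x, p)                   # public verification key y = g^x
    return x, y

def sign(x: int, M: bytes):
    k = secrets.randbelow(q - 1) + 1   # fresh random nonce per signature
    r = pow(g, k, p)                   # r = g^k
    e = H(r, M)                        # e = H(r || M)
    s = (k - x * e) % q                # s = k - x*e (mod q)
    return s, e

def verify(y: int, M: bytes, s: int, e: int) -> bool:
    r_v = (pow(g, s, p) * pow(y, e, p)) % p   # r_v = g^s * y^e = g^(k-xe) * g^(xe) = g^k
    return H(r_v, M) == e                     # e_v == e ?

x, y = keygen()
s, e = sign(x, b"hello")
print(verify(y, b"hello", s, e))   # True
```

The verification line mirrors the proof of correctness below: g^s · y^e collapses back to g^k = r, so hashing r_v with the same message reproduces e.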
### Proof of correctness
It is relatively easy to see that ${\displaystyle e_{v}=e}$ if the signed message equals the verified message:
${\displaystyle r_{v}=g^{s}y^{e}=g^{k-xe}g^{xe}=g^{k}=r}$ , and hence ${\displaystyle e_{v}=H(r_{v}\parallel M)=H(r\parallel M)=e}$ .
Public elements: ${\displaystyle G}$ , ${\displaystyle g}$ , ${\displaystyle q}$ , ${\displaystyle y}$ , ${\displaystyle s}$ , ${\displaystyle e}$ , ${\displaystyle r}$ . Private elements: ${\displaystyle k}$ , ${\displaystyle x}$ .
This shows only that a correctly signed message will verify correctly; many other properties are required for a secure signature algorithm.
### Security argument
The signature scheme was constructed by applying the Fiat–Shamir transformation[3] to Schnorr's identification protocol.[4] Therefore, (as per Fiat and Shamir's arguments), it is secure if ${\displaystyle H}$ is modeled as a random oracle.
Its security can also be argued in the generic group model, under the assumption that ${\displaystyle H}$ is "random-prefix preimage resistant" and "random-prefix second-preimage resistant".[5] In particular, ${\displaystyle H}$ does not need to be collision resistant.
In 2012, Seurin[2] provided an exact proof of the Schnorr signature scheme. In particular, Seurin shows that the security proof using the Forking lemma is the best possible result for any signature scheme based on one-way group homomorphisms, including Schnorr-type signatures and the Guillou-Quisquater signature scheme. Namely, under the ROMDL assumption, any algebraic reduction must lose a factor ${\displaystyle f({\epsilon }_{F})q_{h}}$ in its time-to-success ratio, where ${\displaystyle f\leq 1}$ is a function that remains close to 1 as long as "${\displaystyle {\epsilon }_{F}}$ is noticeably smaller than 1", and where ${\displaystyle {\epsilon }_{F}}$ is the probability of forgery for an adversary making at most ${\displaystyle q_{h}}$ queries to the random oracle.
## Notes
1. ^ Savu, Laura (2012). "SIGNCRYPTION SCHEME BASED ON SCHNORR DIGITAL SIGNATURE". arXiv:.
2. ^ a b Seurin, Yannick (2012-01-12). "On the Exact Security of Schnorr-Type Signatures in the Random Oracle Model" (PDF). Cryptology ePrint Archive. International Association for Cryptologic Research. Retrieved 2014-08-11.
3. ^ Fiat; Shamir (1986). "How To Prove Yourself: Practical Solutions to Identification and Signature Problems" (PDF). Proceedings of CRYPTO '86.
4. ^ Schnorr (1989). "Efficient Identification and Signatures for Smart Cards" (PDF). Proceedings of CRYPTO '89.
5. ^ Neven, Smart, Warinschi. "Hash Function Requirements for Schnorr Signatures". IBM Research. Retrieved 19 July 2012.
## References
• Menezes, Alfred J. et al. (1996), Handbook of Applied Cryptography, CRC Press.
• C.P. Schnorr (1990), "Efficient identification and signatures for smart cards", in G. Brassard, ed. Advances in Cryptology—Crypto '89, 239-252, Springer-Verlag. Lecture Notes in Computer Science, nr 435
• Claus-Peter Schnorr (1991), "Efficient Signature Generation by Smart Cards", Journal of Cryptology 4(3), 161–174 (PS).
# Mass is the amount of substance, what's Electromagnetic Mass?
We usually learn that mass is the amount of substance contained in a body. But who might have guessed that this was just one side of the story. That was just "mechanical mass"! There is one more type, namely electromagnetic mass. It is all because computation tells us that the momentum (due to EM fields) comes out to be proportional to velocity. And what must be the coefficient of proportionality? You guessed it right. It's our deary Mass. It sometimes becomes $$\frac{2{e}^{2}}{3a{c}^{2}}$$ where $${e}^{2} = \frac{{q}^{2}}{4\pi\epsilon}$$, $$a$$ is the radius, and $$c$$ is of course the speed of light in vacuum.
But it comes out to be wrong since here we have not considered the "Poincare stresses". And we have to add the two contributions.
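As a rough numeric illustration of the formula above (not part of the note itself): if one assumes $$a$$ to be the classical electron radius, the electromagnetic mass $$\frac{2e^2}{3ac^2}$$ comes out to two-thirds of the electron's measured mass, which is precisely the discrepancy the Poincaré stresses were invoked to repair:

```python
import math

# CODATA-style constants (SI units); taking a = classical electron radius
# is an assumption for illustration.
q = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 2.99792458e8          # speed of light, m/s
a = 2.8179403262e-15      # classical electron radius, m
m_electron = 9.1093837015e-31  # measured electron mass, kg

e2 = q**2 / (4 * math.pi * eps0)   # e^2 = q^2 / (4 pi eps0)
m_em = 2 * e2 / (3 * a * c**2)     # electromagnetic mass 2 e^2 / (3 a c^2)

print(m_em)                        # ~6.07e-31 kg
print(round(m_em / m_electron, 3)) # ~0.667, i.e. 2/3 of the electron mass
```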
My question however, is how you would explain this "Electromagnetic mass" physically. That will be your own opinion since I think it has not been explained. This is just a mathematical consequence. What's the physics?
Note by Kartik Sharma
2 years, 1 month ago
Sort by:
Kartik, this is a huge subject. If you want a short answer, the idea of an "electromagnetic mass" has been superseded by special relativity, it's become largely irrelevant. But if you want the long answer, wait for it. It's a very interesting and instructive subject, which should serve to show that numerous physicists in the late 19th century were well aware of the issues that ultimately led to Einstein's theory of relativity. · 2 years, 1 month ago
Oh, thanks! I will surely want the long answer and wait. :) · 2 years, 1 month ago