# Nengo backend API

Nengo is designed so that models created with the Nengo frontend API work on a variety of different simulators, or “backends.” For example, backends have been created to take advantage of GPUs and neuromorphic hardware.

## Reference backend

nengo.Simulator Reference simulator for Nengo models. nengo.simulator.SimulationData Data structure used to access simulation data from the model.

class nengo.Simulator(network, dt=0.001, seed=None, model=None, progress_bar=True, optimize=True)[source]

Reference simulator for Nengo models. The simulator takes a Network and builds internal data structures to run the model defined by that network. Run the simulator with the run method, and access probed data through the data attribute.

Building and running the simulation may allocate resources like files and sockets. To properly free these resources, call the Simulator.close method. Alternatively, Simulator.close will automatically be called if you use the with syntax:

    with nengo.Network() as net:
        ensemble = nengo.Ensemble(10, 1)

    with nengo.Simulator(net, progress_bar=False) as sim:
        sim.run(0.1)

Note that the data attribute is still accessible even when a simulator has been closed. Running the simulator, however, will raise an error.

When debugging or comparing models, it can be helpful to see the full ordered list of operators that the simulator will execute on each timestep:

    with nengo.Simulator(nengo.Network(), progress_bar=False) as sim:
        print('\n'.join("* %s" % op for op in sim.step_order))

    * TimeUpdate{}

The diff of two simulators’ sorted ops tells us how two built models differ. We can use difflib in the Python standard library to see the differences.
    # Original model
    with nengo.Network() as net:
        ensemble = nengo.Ensemble(10, 1, label="Ensemble")
    sim1 = nengo.Simulator(net, progress_bar=False)

    with net:
        node = nengo.Node(output=0, label="Node")
        nengo.Connection(node, ensemble)
    sim2 = nengo.Simulator(net, progress_bar=False)

    import difflib
    print("".join(difflib.unified_diff(
        sorted("%s: %s\n" % (type(op).__name__, op.tag)
               for op in sim1.step_order),
        sorted("%s: %s\n" % (type(op).__name__, op.tag)
               for op in sim2.step_order),
        fromfile="sim1",
        tofile="sim2",
        n=0,
    )).strip())
    sim1.close()
    sim2.close()

    --- sim1
    +++ sim2
    @@ -0,0 +1 @@
    +Copy: <Connection from <Node "Node"> to <Ensemble "Ensemble">>
    @@ -4,0 +6 @@
    +SimProcess: Lowpass(tau=0.005)

Parameters networkNetwork or None A network object to be built and then simulated. If None, then a built model must be provided instead. dtfloat, optional The length of a simulator timestep, in seconds. seedint, optional A seed for all stochastic operators used in this simulator. Will be set to network.seed + 1 if not given. modelModel, optional A Model that contains build artifacts to be simulated. Usually the simulator will build this model for you; however, if you want to build the network manually, or you want to inject build artifacts into the model before building the network, then you can pass in a Model instance. progress_barbool or ProgressBar, optional Progress bar for displaying build and simulation progress. If True, the default progress bar will be used. If False, the progress bar will be disabled. For more control over the progress bar, pass in a ProgressBar instance. optimizebool, optional If True, the builder will run an additional optimization step that can speed up simulations significantly at the cost of slower builds. If running models for very small amounts of time, pass False to disable the optimizer. Attributes closedbool Whether the simulator has been closed. Once closed, it cannot be reopened. 
dataSimulationData The SimulationData mapping from Nengo objects to the data associated with those objects. In particular, each Probe maps to the data probed while running the simulation. dgdict A dependency graph mapping from each Operator to the operators that depend on that operator. modelModel The Model containing the signals and operators necessary to simulate the network. signalsSignalDict The SignalDict mapping from Signal instances to NumPy arrays. property dt (float) The time step of the simulator. property n_steps (int) The current time step of the simulator. property step_order (list) The ordered list of step functions run by this simulator. property time (float) The current time of the simulator. clear_probes()[source] Clear all probe histories. New in version 3.0.0. close()[source] Closes the simulator. Any call to Simulator.run, Simulator.run_steps, Simulator.step, and Simulator.reset on a closed simulator raises a SimulatorClosed exception. reset(seed=None)[source] Reset the simulator state. Parameters seedint, optional A seed for all stochastic operators used in the simulator. This will change the random sequences generated for noise or inputs (e.g. from processes), but not the built objects (e.g. ensembles, connections). run(time_in_seconds, progress_bar=None)[source] Simulate for the given length of time. If the given length of time is not a multiple of dt, it will be rounded to the nearest dt. For example, if dt is 0.001 and run is called with time_in_seconds=0.0006, the simulator will advance one timestep, resulting in the actual simulator time being 0.001. The given length of time must be positive. The simulator cannot be run backwards. Parameters time_in_secondsfloat Amount of time to run the simulation for. Must be positive. progress_barbool or ProgressBar, optional Progress bar for displaying the progress of the simulation run. If True, the default progress bar will be used. If False, the progress bar will be disabled. 
For more control over the progress bar, pass in a ProgressBar instance. run_steps(steps, progress_bar=None)[source] Simulate for the given number of dt steps. Parameters stepsint Number of steps to run the simulation for. progress_barbool or ProgressBar, optional Progress bar for displaying the progress of the simulation run. If True, the default progress bar will be used. If False, the progress bar will be disabled. For more control over the progress bar, pass in a ProgressBar instance. step()[source] Advance the simulator by 1 step (dt seconds). trange(dt=None, sample_every=None)[source] Create a vector of times matching probed data. Note that the range does not start at 0 as one might expect, but at the first timestep (i.e., dt). Parameters sample_everyfloat, optional The sampling period of the probe to create a range for. If None, a time value for every dt will be produced. Changed in version 3.0.0: Renamed from dt to sample_every class nengo.simulator.SimulationData(raw)[source] Data structure used to access simulation data from the model. The main use case for this is to access Probe data; for example, probe_data = sim.data[my_probe]. However, it is also used to access the parameters of objects in the model; for example, encoder values for an ensemble can be accessed via encoders = sim.data[my_ens].encoders. This is like a view on the raw simulation data manipulated by the Simulator, which allows the raw simulation data to be optimized for speed while this provides a more user-friendly interface. Changed in version 3.0.0: Renamed from ProbeDict to SimulationData ## The build process¶ The build process translates a Nengo model to a set of data buffers (Signal instances) and computational operations (Operator instances) which implement the Nengo model defined with the frontend API. The build process is central to how the reference simulator works, and details how Nengo can be extended to include new neuron types, learning rules, and other components. 
Bekolay et al., 2014 provides a high-level description of the build process. For lower-level details and reference documentation, read on. nengo.builder.Model Stores artifacts from the build process, which are used by Simulator. nengo.builder.Builder Manages the build functions known to the Nengo build process. class nengo.builder.Model(dt=0.001, label=None, decoder_cache=None, builder=None)[source] Stores artifacts from the build process, which are used by Simulator. Parameters dtfloat, optional The length of a simulator timestep, in seconds. labelstr, optional A name or description to differentiate models. decoder_cacheDecoderCache, optional Interface to a cache for expensive parts of the build process. buildernengo.builder.Builder, optional A Builder instance to use for building. Defaults to a new Builder(). Attributes configConfig or None Build functions can set a config object here to affect sub-builders. decoder_cacheDecoderCache Interface to a cache for expensive parts of the build process. dtfloat The length of each timestep, in seconds. labelstr or None A name or description to differentiate models. operatorslist List of all operators created in the build process. All operators must be added to this list, as it is used by Simulator. paramsdict Mapping from objects to namedtuples containing parameters generated in the build process. probeslist List of all probes. Probes must be added to this list in the build process, as this list is used by Simulator. seededdict All objects are assigned a seed, whether the user defined the seed or it was automatically generated. ‘seeded’ keeps track of whether the seed is user-defined. We consider the seed to be user-defined if it was set directly on the object, or if a seed was set on the network in which the object resides, or if a seed was set on any ancestor network of the network in which the object resides. seedsdict Mapping from objects to the integer seed assigned to that object. 
sigdict A dictionary of dictionaries that organizes all of the signals created in the build process, as build functions often need to access signals created by other build functions. stepSignal The current step (i.e., how many timesteps have occurred thus far). timeSignal The current point in time. toplevelNetwork The top-level network being built. This is sometimes useful for accessing network elements after build, or for the network builder to determine if it is the top-level network. add_op(op)[source] Add an operator to the model. In addition to adding the operator, this method performs additional error checking by calling the operator’s make_step function. Calling make_step catches errors early, such as when signals are not properly initialized, which aids debugging. For that reason, we recommend calling this method over directly accessing the operators attribute. build(obj, *args, **kwargs)[source] Build an object into this model. See Builder.build for more details. Parameters objobject The object to build into this model. has_built(obj)[source] Returns True if the object has already been built in this model. Note Some objects (e.g. synapses) can be built multiple times, and therefore will always result in this method returning False even though they have been built. This check is implemented by checking if the object is in the params dictionary. Build functions should therefore add themselves to model.params if they cannot be built multiple times. Parameters objobject The object to query. class nengo.builder.Builder[source] Manages the build functions known to the Nengo build process. Consists of two class methods to encapsulate the build function registry. All build functions should use the Builder.register method as a decorator. For example:

    class MyRule(nengo.learning_rules.LearningRuleType):
        modifies = "decoders"
        ...

    @nengo.builder.Builder.register(MyRule)
    def build_my_rule(model, my_rule, rule):
        ...

registers a build function for MyRule objects. 
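The fail-fast behavior of Model.add_op described above can be imitated in a few lines. This is an illustrative miniature, not Nengo's code; TinyModel, GoodOp, and BadOp are invented names. Calling the operator's step-construction function at add time surfaces misconfiguration during the build rather than mid-simulation:

```python
class TinyModel:
    """Illustrative stand-in for the build-time validation in add_op."""

    def __init__(self):
        self.operators = []

    def add_op(self, op):
        # Calling make_step here catches errors early (e.g. signals
        # that were never initialized), which aids debugging.
        op.make_step()
        self.operators.append(op)


class GoodOp:
    def make_step(self):
        return lambda: None


class BadOp:
    def make_step(self):
        raise ValueError("signal not initialized")


model = TinyModel()
model.add_op(GoodOp())      # fine; the operator is recorded
try:
    model.add_op(BadOp())   # fails at build time, not at run time
except ValueError:
    pass
```

The same trade-off motivates the recommendation above: appending to the operators list directly would skip this check and defer the failure to the first simulation step.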
Build functions should not be called directly, but instead called through the Model.build method. Model.build uses the Builder.build method to ensure that the correct build function is called based on the type of the object passed to it. For example, to build the learning rule type my_rule from above, do:

    with nengo.Network() as net:
        ens_a = nengo.Ensemble(10, 1)
        ens_b = nengo.Ensemble(10, 1)
        my_rule = MyRule()
        connection = nengo.Connection(
            ens_a, ens_b, learning_rule_type=my_rule)

    model = nengo.builder.Model()
    model.build(my_rule, connection.learning_rule)

This will call the build_my_rule function from above with the arguments model, my_rule, connection.learning_rule. Attributes buildersdict Mapping from types to the build function associated with that type. classmethod build(model, obj, *args, **kwargs)[source] Build obj into model. This method looks up the appropriate build function for obj and calls it with the model and other arguments provided. Note that if a build function is not specified for a particular type (e.g., EnsembleArray), the type’s method resolution order will be examined to look for superclasses with defined build functions (e.g., Network in the case of EnsembleArray). This indirection (calling Builder.build instead of the build function directly) enables users to augment the build process in their own models, rather than having to modify Nengo itself. In addition to the parameters listed below, further positional and keyword arguments will be passed unchanged into the build function. Parameters modelModel The Model instance in which to store build artifacts. objobject The object to build into the model. classmethod register(nengo_class)[source] A decorator for adding a class to the build function registry. Raises a warning if a build function already exists for the class. Parameters nengo_classClass The type associated with the build function being decorated. 
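The type-based dispatch that Builder.build performs can be sketched with a plain dictionary registry. This is an illustrative miniature, not Nengo's internals (the registry, register, and build names are invented); walking the method resolution order is what lets a subclass such as EnsembleArray fall back to the build function registered for Network:

```python
# Illustrative sketch of MRO-based build dispatch (not Nengo's code).
registry = {}


def register(cls):
    """Decorator associating a build function with a type."""
    def decorator(fn):
        registry[cls] = fn
        return fn
    return decorator


def build(obj, *args):
    # Walk the method resolution order so that a subclass without its
    # own build function falls back to its superclass's.
    for cls in type(obj).__mro__:
        if cls in registry:
            return registry[cls](obj, *args)
    raise TypeError(f"no build function for {type(obj).__name__}")


class Network:
    pass


class EnsembleArray(Network):  # no build function of its own
    pass


@register(Network)
def build_network_sketch(net):
    return f"built {type(net).__name__} via the Network build function"
```

Here build(EnsembleArray()) finds no entry for EnsembleArray, falls through to Network in the MRO, and calls build_network_sketch, mirroring the indirection described above.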
### Basic operators¶ Operators represent calculations that will occur in the simulation. This code adapted from sigops/operator.py and sigops/operators.py (https://github.com/jaberg/sigops). This modified code is included under the terms of their license: Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. nengo.builder.Operator Base class for operator instances understood by Nengo. nengo.builder.operator.TimeUpdate Updates the simulation step and time. nengo.builder.operator.Reset Assign a constant value to a Signal. nengo.builder.operator.Copy Assign the value of one signal to another, with optional slicing. nengo.builder.operator.ElementwiseInc Increment signal Y by A * X (with broadcasting). nengo.builder.operator.reshape_dot Checks if the dot product needs to be reshaped. 
nengo.builder.operator.DotInc Increment signal Y by dot(A, X). nengo.builder.operator.SparseDotInc Like DotInc but A is a sparse matrix. nengo.builder.operator.BsrDotInc Increment signal Y by dot(A, X) using block sparse row format. nengo.builder.operator.SimPyFunc Apply a Python function to a signal, with optional arguments. class nengo.builder.Operator(tag=None)[source] Base class for operator instances understood by Nengo. During one simulator timestep, a Signal can experience 1. at most one set operator (optional) 2. any number of increments 3. any number of reads 4. at most one update in this specific order. A set defines the state of the signal at time $$t$$, the start of the simulation timestep. That state can then be modified by increment operations. A signal’s state will only be read after all increments are complete. The state is then finalized by an update, which denotes the state that the signal should be at time $$t + dt$$. Each operator must keep track of the signals that it manipulates, and which of these four types of manipulations is done to each signal so that the simulator can order all of the operators properly. Note There are intentionally no valid default values for the reads, sets, incs, and updates properties to ensure that subclasses explicitly set these values. Parameters tagstr, optional A label associated with the operator, for debugging purposes. Attributes tagstr or None A label associated with the operator, for debugging purposes. property incs Signals incremented by this operator. Increments will be applied after sets (if it is set), and before reads. property reads Signals that are read and not modified by this operator. property sets Signals set by this operator. Sets occur first, before increments. A signal that is set here cannot be set or updated by any other operator. property updates Signals updated by this operator. Updates are the last operation to occur to a signal. 
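The per-timestep ordering that operators must respect (sets, then increments, then reads, then updates) can be illustrated with a plain NumPy array standing in for a signal's live data; the values here are invented for the sketch:

```python
import numpy as np

# Illustrative only: one signal over one simulated timestep.
y = np.zeros(3)

# 1. at most one set: defines the state of the signal at time t
y[...] = 1.0

# 2. any number of increments, applied after the set
y[...] += np.array([0.5, 0.5, 0.5])
y[...] += np.array([0.25, 0.25, 0.25])

# 3. reads happen only after all increments are complete
read_value = y.copy()       # every element is now 1.75

# 4. at most one update: the state the signal takes at time t + dt
y[...] = read_value * 2.0
```

A reader observing the signal mid-timestep would see intermediate values; the ordering guarantees that reads always see the fully set-and-incremented state.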
init_signals(signals)[source] Initialize the signals associated with this operator. The signals will be initialized into signals. Operator subclasses that use extra buffers should create them here. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. class nengo.builder.operator.TimeUpdate(step, time, tag=None)[source] Updates the simulation step and time. Implements step[...] += 1 and time[...] = step * dt. A separate operator is used (rather than a combination of Copy and DotInc) so that other backends can manage these important parts of the simulation state separately from other signals. Parameters stepSignal The signal associated with the integer step counter. timeSignal The signal associated with the time (a float, in seconds). tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [step, time] 2. incs [] 3. reads [] 4. updates [] Attributes stepSignal The signal associated with the integer step counter. tagstr or None A label associated with the operator, for debugging purposes. timeSignal The signal associated with the time (a float, in seconds). make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. 
rngnumpy.random.RandomState Random number generator for stochastic operators. class nengo.builder.operator.Reset(dst, value=0, tag=None)[source] Assign a constant value to a Signal. Implements dst[...] = value. Parameters dstSignal The Signal to reset. valuefloat, optional The constant value to which dst is set. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [dst] 2. incs [] 3. reads [] 4. updates [] Attributes dstSignal The Signal to reset. tagstr or None A label associated with the operator, for debugging purposes. valuefloat The constant value to which dst is set. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. class nengo.builder.operator.Copy(src, dst, src_slice=None, dst_slice=None, inc=False, tag=None)[source] Assign the value of one signal to another, with optional slicing. Implements: • dst[:] = src • dst[dst_slice] = src[src_slice] (when dst_slice or src_slice is not None) • dst[dst_slice] += src[src_slice] (when inc=True) Parameters dstSignal The signal that will be assigned to (set). srcSignal The signal that will be copied (read). dst_sliceslice or list, optional Slice or list of indices associated with dst. src_sliceslice or list, optional Slice or list of indices associated with src incbool, optional Whether this should be an increment rather than a copy. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [] if inc else [dst] 2. incs [dst] if inc else [] 3. reads [src] 4. updates [] Attributes dstSignal The signal that will be assigned to (set). 
dst_slicelist or None Indices associated with dst. srcSignal The signal that will be copied (read). src_slicelist or None Indices associated with src. tagstr or None A label associated with the operator, for debugging purposes. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. class nengo.builder.operator.ElementwiseInc(A, X, Y, tag=None)[source] Increment signal Y by A * X (with broadcasting). Implements Y[...] += A * X. Parameters ASignal The first signal to be multiplied. XSignal The second signal to be multiplied. YSignal The signal to be incremented. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [] 2. incs [Y] 3. reads [A, X] 4. updates [] Attributes ASignal The first signal to be multiplied. tagstr or None A label associated with the operator, for debugging purposes. XSignal The second signal to be multiplied. YSignal The signal to be incremented. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. nengo.builder.operator.reshape_dot(A, X, Y, tag=None)[source] Checks if the dot product needs to be reshaped. Also does a bunch of error checking based on the shapes of A and X. 
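The elementwise increment described above is a broadcast multiply-accumulate. A minimal NumPy sketch of what ElementwiseInc's step function effectively computes (the arrays are invented for illustration):

```python
import numpy as np

A = np.array([2.0])             # scalar-like signal, broadcast against X
X = np.array([1.0, 2.0, 3.0])
Y = np.zeros(3)

# ElementwiseInc implements Y[...] += A * X, with NumPy broadcasting
Y[...] += A * X                 # Y is now [2., 4., 6.]
```

Writing through `Y[...]` (rather than rebinding `Y`) matters: the simulator's live arrays are updated in place so that views onto the same base signal stay consistent.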
class nengo.builder.operator.DotInc(A, X, Y, reshape=None, tag=None)[source] Increment signal Y by dot(A, X). Implements Y[...] += np.dot(A, X). Note Currently, this only supports matrix-vector multiplies for compatibility with NengoOCL. Parameters ASignal The first signal to be multiplied (a matrix). XSignal The second signal to be multiplied (a vector). YSignal The signal to be incremented. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [] 2. incs [Y] 3. reads [A, X] 4. updates [] Attributes ASignal The first signal to be multiplied. tagstr or None A label associated with the operator, for debugging purposes. XSignal The second signal to be multiplied. YSignal The signal to be incremented. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. class nengo.builder.operator.SparseDotInc(A, X, Y, tag=None)[source] Like DotInc but A is a sparse matrix. New in version 3.0.0. class nengo.builder.operator.BsrDotInc(A, X, Y, indices, indptr, reshape=None, tag=None)[source] Increment signal Y by dot(A, X) using block sparse row format. Implements Y[...] += np.dot(A, X), where A is an instance of scipy.sparse.bsr_matrix. Note Requires SciPy. Note Currently, this only supports matrix-vector multiplies for compatibility with NengoOCL. Parameters A(k, r, c) Signal The signal providing the k data blocks with r rows and c columns. X(k * c) Signal The signal providing the k column vectors to multiply with. Y(k * r) Signal The signal providing the k column vectors to update. indicesndarray Column indices, see scipy.sparse.bsr_matrix for details. 
indptrndarray Column index pointers, see scipy.sparse.bsr_matrix for details. reshapebool Whether to reshape the result. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [] 2. incs [Y] 3. reads [A, X] 4. updates [] Attributes A(k, r, c) Signal The signal providing the k data blocks with r rows and c columns. indicesndarray Column indices, see scipy.sparse.bsr_matrix for details. indptrndarray Column index pointers, see scipy.sparse.bsr_matrix for details. reshapebool Whether to reshape the result. tagstr or None A label associated with the operator, for debugging purposes. X(k * c) Signal The signal providing the k column vectors to multiply with. Y(k * r) Signal The signal providing the k column vectors to update. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. class nengo.builder.operator.SimPyFunc(output, fn, t, x, tag=None)[source] Apply a Python function to a signal, with optional arguments. Implements output[...] = fn(*args) where args can include the current simulation time t and an input signal x. Note that output may also be None, in which case the function is called but no output is captured. Parameters outputSignal or None The signal to be set. If None, the function is still called. fncallable The function to call. tSignal or None The signal associated with the time (a float, in seconds). If None, the time will not be passed to fn. xSignal or None An input signal to pass to fn. If None, an input signal will not be passed to fn. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. 
sets [] if output is None else [output] 2. incs [] 3. reads ([] if t is None else [t]) + ([] if x is None else [x]) 4. updates [] Attributes fncallable The function to call. outputSignal or None The signal to be set. If None, the function is still called. tSignal or None The signal associated with the time (a float, in seconds). If None, the time will not be passed to fn. tagstr or None A label associated with the operator, for debugging purposes. xSignal or None An input signal to pass to fn. If None, an input signal will not be passed to fn. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. ### Signals¶ nengo.builder.signal.is_sparse Check if obj is a sparse matrix. nengo.builder.Signal Represents data or views onto data within a Nengo simulation. nengo.builder.signal.SignalDict Map from Signal -> ndarray. nengo.builder.signal.is_sparse(obj)[source] Check if obj is a sparse matrix. class nengo.builder.Signal(initial_value=None, shape=None, name=None, base=None, readonly=False, offset=0)[source] Represents data or views onto data within a Nengo simulation. Signals are tightly coupled to NumPy arrays, which is how live data is represented in a Nengo simulation. Signals provide a view onto the important metadata of the live NumPy array, and maintain the original value of the array in order to reset the simulation to the initial state. Parameters initial_valuearray_like The initial value of the signal. Much of the metadata tracked by the Signal is based on this array as well (e.g., dtype). namestr, optional Name of the signal. Primarily used for debugging. 
If None, the memory location of the Signal will be used. baseSignal, optional The base signal, if this signal is a view on another signal. Linking the two signals with the base argument is necessary to ensure that their live data is also linked. readonlybool, optional Whether this signal and its related live data should be marked as readonly. Writing to these arrays will raise an exception. offsetint, optional For a signal view this gives the offset of the view from the base initial_value in bytes. This might differ from the offset of the NumPy array view provided as initial_value if the base is a view already (in which case the signal base offset will be 0 because it starts where the view starts. That NumPy view can have an offset of itself). property base (Signal) The base signal, if this signal is a view. Linking the two signals with the base argument is necessary to ensure that their live data is also linked. property dtype (numpy.dtype) Data type of the signal (e.g., float64). property elemoffset (int) Offset of data from base in elements. property elemstrides (int) Strides of data in elements. property initial_value (numpy.ndarray) Initial value of the signal. Much of the metadata tracked by the Signal is based on this array as well (e.g., dtype). property is_view (bool) True if this Signal is a view on another Signal. property itemsize (int) Size of an array element in bytes. property name (str) Name of the signal. Primarily used for debugging. property nbytes (int) Number of bytes consumed by the signal. property ndim (int) Number of array dimensions. property offset (int) Offset of data from base in bytes. For a signal view this gives the offset of the view from the base initial_value in bytes. This might differ from the offset of the NumPy array view provided as initial_value if the base is a view already (in which case the signal base offset will be 0 because it starts where the view starts. That NumPy view can have an offset of itself). 
property readonly (bool) Whether associated live data can be changed. property shape (tuple) Tuple of array dimensions. property size (int) Total number of elements. property strides (tuple) Strides of data in bytes. property sparse (bool) Whether the signal is sparse. may_share_memory(other)[source] Determine if two signals might overlap in memory. This comparison is not exact and errs on the side of false positives. See numpy.may_share_memory for more details. Parameters otherSignal The other signal we are investigating. reshape(*shape)[source] Return a view on this signal with a different shape. Note that reshape cannot change the overall size of the signal. See numpy.reshape for more details. Any number of integers can be passed to this method, describing the desired shape of the returned signal. class nengo.builder.signal.SignalDict[source] Map from Signal -> ndarray. This dict subclass ensures that the ndarray values aren’t overwritten; instead, data are written into them, so that the arrays are never copied, since copying wastes time and space. Use init to set the ndarray initially. init(signal)[source] Set up a permanent mapping from signal -> data. reset(signal)[source] Reset ndarray to the base value of the signal that maps to it. ### Network builder nengo.builder.network.build_network Builds a Network object into a model. nengo.builder.network.seed_network Populate seeding dictionaries for all objects in a network. nengo.builder.network.build_network(model, network, progress=None)[source] Builds a Network object into a model. The network builder does this by mapping each high-level object to its associated signals and operators one-by-one, in the following order: 1. Ensembles, nodes, neurons 2. Subnetworks (recursively) 3. Connections, learning rules 4. Probes Before calling any of the individual objects’ build functions, random number seeds are assigned to objects that did not have a seed explicitly set by the user. 
Whether the seed was assigned manually or automatically is tracked, and the decoder cache is only used when the seed is assigned manually. Parameters modelModel The model to build into. networkNetwork The network to build. progressProgress, optional Object used to track the build progress. Note that this will only affect top-level networks. Notes Sets model.params[network] to None. nengo.builder.network.seed_network(network, seeds, seeded, base_rng=numpy.random)[source] Populate seeding dictionaries for all objects in a network. This includes all subnetworks. New in version 3.0.0. Parameters networkNetwork The network containing all objects to set seeds for. seeds{object: int} Pre-existing map from objects to seeds for those objects. Will be modified in-place, but entries will not be overwritten if already set. seeded{object: bool} Pre-existing map from objects to a boolean indicating whether they have a fixed seed either themselves or from a parent network (True), or whether the seed is randomly generated (False). Will be modified in-place, but entries will not be overwritten if already set. base_rngnp.random.RandomState Random number generator to use to set the seeds. ### Connection builder¶ nengo.builder.connection.BuiltConnection Collects the parameters generated in build_connection. nengo.builder.connection.get_eval_points Get evaluation points for connection. nengo.builder.connection.get_targets Get target points for connection with given evaluation points. nengo.builder.connection.build_linear_system Get all arrays needed to compute decoders. nengo.builder.connection.build_decoders Compute decoders for connection. nengo.builder.connection.solve_for_decoders Solver for decoders. nengo.builder.connection.slice_signal Apply a slice operation to given signal. nengo.builder.connection.build_solver Apply decoder solver to connection. nengo.builder.connection.build_no_solver Special builder for NoSolver to skip unnecessary steps. 
nengo.builder.connection.build_connection Builds a Connection object into a model. class nengo.builder.connection.BuiltConnection[source] Collects the parameters generated in build_connection. These are stored here because in the majority of cases the equivalent attribute in the original connection is a Distribution. The attributes of a BuiltConnection are the full NumPy arrays used in the simulation. See the Connection documentation for more details on each parameter. Parameters eval_pointsndarray Evaluation points. solver_infodict Information dictionary returned by the Solver. weightsndarray Connection weights. May be synaptic connection weights defined in the connection’s transform, or a combination of the decoders automatically solved for and the specified transform. transformndarray The transform matrix. Create new instance of BuiltConnection(eval_points, solver_info, weights, transform) nengo.builder.connection.get_eval_points(model, conn, rng)[source] Get evaluation points for connection. nengo.builder.connection.get_targets(conn, eval_points, dtype=None)[source] Get target points for connection with given evaluation points. nengo.builder.connection.build_linear_system(model, conn, rng)[source] Get all arrays needed to compute decoders. nengo.builder.connection.build_decoders(model, conn, rng)[source] Compute decoders for connection. nengo.builder.connection.solve_for_decoders(conn, gain, bias, x, targets, rng)[source] Solver for decoders. Factored out from build_decoders for use with the cache system. nengo.builder.connection.slice_signal(model, signal, sl)[source] Apply a slice operation to given signal. nengo.builder.connection.build_solver(model, solver, conn, rng)[source] Apply decoder solver to connection. nengo.builder.connection.build_no_solver(model, solver, conn, rng)[source] Special builder for NoSolver to skip unnecessary steps. nengo.builder.connection.build_connection(model, conn)[source] Builds a Connection object into a model. 
A brief summary of what happens in the connection build process, in order: 1. Solve for decoders. 2. Combine transform matrix with decoders to get weights. 3. Add operators for computing the function or multiplying neural activity by weights. 4. Call build function for the synapse. 5. Call build function for the learning rule. 6. Add operator for applying learning rule delta to weights. Some of these steps may be altered or omitted depending on the parameters of the connection, in particular the pre and post types. Parameters modelModel The model to build into. connConnection The connection to build. Notes Sets model.params[conn] to a BuiltConnection instance. ### Ensemble builder¶ nengo.builder.ensemble.BuiltEnsemble Collects the parameters generated in build_ensemble. nengo.builder.ensemble.gen_eval_points Generate evaluation points for ensemble. nengo.builder.ensemble.get_activities Get output of ensemble neurons for given evaluation points. nengo.builder.ensemble.get_gain_bias Compute concrete gain and bias for ensemble. nengo.builder.ensemble.build_ensemble Builds an Ensemble object into a model. class nengo.builder.ensemble.BuiltEnsemble[source] Collects the parameters generated in build_ensemble. These are stored here because in the majority of cases the equivalent attribute in the original ensemble is a Distribution. The attributes of a BuiltEnsemble are the full NumPy arrays used in the simulation. See the Ensemble documentation for more details on each parameter. Parameters eval_pointsndarray Evaluation points. encodersndarray Normalized encoders. interceptsndarray X-intercept of each neuron. max_ratesndarray Maximum firing rates for each neuron. scaled_encodersndarray Normalized encoders scaled by the gain and radius. This quantity is used in the actual simulation, unlike encoders. gainndarray Gain of each neuron. biasndarray Bias current injected into each neuron. 
Create new instance of BuiltEnsemble(eval_points, encoders, intercepts, max_rates, scaled_encoders, gain, bias) nengo.builder.ensemble.gen_eval_points(ens, eval_points, rng, scale_eval_points=True, dtype=None)[source] Generate evaluation points for ensemble. nengo.builder.ensemble.get_activities(built_ens, ens, eval_points)[source] Get output of ensemble neurons for given evaluation points. nengo.builder.ensemble.get_gain_bias(ens, rng=numpy.random, dtype=None)[source] Compute concrete gain and bias for ensemble. nengo.builder.ensemble.build_ensemble(model, ens)[source] Builds an Ensemble object into a model. A brief summary of what happens in the ensemble build process, in order: 1. Generate evaluation points and encoders. 2. Normalize encoders to unit length. 3. Determine bias and gain. 4. Create neuron input signal. 5. Add operator for injecting bias. 6. Call build function for neuron type. 7. Scale encoders by gain and radius. 8. Add operators for multiplying decoded input signal by encoders and incrementing the result in the neuron input signal. 9. Call build function for injected noise. Some of these steps may be altered or omitted depending on the parameters of the ensemble, in particular the neuron type. For example, most steps are omitted for the Direct neuron type. Parameters modelModel The model to build into. ensEnsemble The ensemble to build. Notes Sets model.params[ens] to a BuiltEnsemble instance. ### Learning rule builders¶ nengo.builder.learning_rules.SimPES Calculate connection weight change according to the PES rule. nengo.builder.learning_rules.SimBCM Calculate connection weight change according to the BCM rule. nengo.builder.learning_rules.SimOja Calculate connection weight change according to the Oja rule. nengo.builder.learning_rules.SimVoja Simulates a simplified version of Oja’s rule in the vector space. nengo.builder.learning_rules.get_pre_ens Get the input Ensemble for connection. 
nengo.builder.learning_rules.get_post_ens Get the output Ensemble for connection. nengo.builder.learning_rules.build_or_passthrough Builds the obj on signal, or returns the signal if obj is None. nengo.builder.learning_rules.build_learning_rule Builds a LearningRule object into a model. nengo.builder.learning_rules.build_bcm Builds a BCM object into a model. nengo.builder.learning_rules.build_oja Builds an Oja object into a model. nengo.builder.learning_rules.build_voja Builds a Voja object into a model. nengo.builder.learning_rules.build_pes Builds a PES object into a model. class nengo.builder.learning_rules.SimPES(pre_filtered, error, delta, learning_rate, tag=None)[source] Calculate connection weight change according to the PES rule. Implements the PES learning rule of the form $\Delta \omega_{ij} = \frac{\kappa}{n} e_j a_i$ where • $$\kappa$$ is a scalar learning rate, • $$n$$ is the number of presynaptic neurons, • $$e_j$$ is the error for the jth output dimension, and • $$a_i$$ is the activity of a presynaptic neuron. New in version 3.0.0. Parameters pre_filteredSignal The presynaptic activity, $$a_i$$. errorSignal The error signal, $$e_j$$. deltaSignal The synaptic weight change to be applied, $$\Delta \omega_{ij}$$. learning_ratefloat The scalar learning rate, $$\kappa$$. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [] 2. incs [] 3. reads [pre_filtered, error] 4. updates [delta] Attributes pre_filteredSignal The presynaptic activity, $$a_i$$. errorSignal The error signal, $$e_j$$. deltaSignal The synaptic weight change to be applied, $$\Delta \omega_{ij}$$. learning_ratefloat The scalar learning rate, $$\kappa$$. tagstr, optional A label associated with the operator, for debugging purposes. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. 
Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. class nengo.builder.learning_rules.SimBCM(pre_filtered, post_filtered, theta, delta, learning_rate, tag=None)[source] Calculate connection weight change according to the BCM rule. Implements the Bienenstock-Cooper-Munro learning rule of the form $\Delta \omega_{ij} = \kappa a_j (a_j - \theta_j) a_i$ where • $$\kappa$$ is a scalar learning rate, • $$a_j$$ is the activity of a postsynaptic neuron, • $$\theta_j$$ is an estimate of the average $$a_j$$, and • $$a_i$$ is the activity of a presynaptic neuron. Parameters pre_filteredSignal The presynaptic activity, $$a_i$$. post_filteredSignal The postsynaptic activity, $$a_j$$. thetaSignal The modification threshold, $$\theta_j$$. deltaSignal The synaptic weight change to be applied, $$\Delta \omega_{ij}$$. learning_ratefloat The scalar learning rate, $$\kappa$$. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [] 2. incs [] 3. reads [pre_filtered, post_filtered, theta] 4. updates [delta] Attributes deltaSignal The synaptic weight change to be applied, $$\Delta \omega_{ij}$$. learning_ratefloat The scalar learning rate, $$\kappa$$. post_filteredSignal The postsynaptic activity, $$a_j$$. pre_filteredSignal The presynaptic activity, $$a_i$$. tagstr or None A label associated with the operator, for debugging purposes. thetaSignal The modification threshold, $$\theta_j$$. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. 
rngnumpy.random.RandomState Random number generator for stochastic operators. class nengo.builder.learning_rules.SimOja(pre_filtered, post_filtered, weights, delta, learning_rate, beta, tag=None)[source] Calculate connection weight change according to the Oja rule. Implements the Oja learning rule of the form $\Delta \omega_{ij} = \kappa (a_i a_j - \beta a_j^2 \omega_{ij})$ where • $$\kappa$$ is a scalar learning rate, • $$a_i$$ is the activity of a presynaptic neuron, • $$a_j$$ is the activity of a postsynaptic neuron, • $$\beta$$ is a scalar forgetting rate, and • $$\omega_{ij}$$ is the connection weight between the two neurons. Parameters pre_filteredSignal The presynaptic activity, $$a_i$$. post_filteredSignal The postsynaptic activity, $$a_j$$. weightsSignal The connection weight matrix, $$\omega_{ij}$$. deltaSignal The synaptic weight change to be applied, $$\Delta \omega_{ij}$$. learning_ratefloat The scalar learning rate, $$\kappa$$. betafloat The scalar forgetting rate, $$\beta$$. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [] 2. incs [] 3. reads [pre_filtered, post_filtered, weights] 4. updates [delta] Attributes betafloat The scalar forgetting rate, $$\beta$$. deltaSignal The synaptic weight change to be applied, $$\Delta \omega_{ij}$$. learning_ratefloat The scalar learning rate, $$\kappa$$. post_filteredSignal The postsynaptic activity, $$a_j$$. pre_filteredSignal The presynaptic activity, $$a_i$$. tagstr or None A label associated with the operator, for debugging purposes. weightsSignal The connection weight matrix, $$\omega_{ij}$$. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. 
rngnumpy.random.RandomState Random number generator for stochastic operators. class nengo.builder.learning_rules.SimVoja(pre_decoded, post_filtered, scaled_encoders, delta, scale, learning_signal, learning_rate, tag=None)[source] Simulates a simplified version of Oja’s rule in the vector space. See Learning new associations for details. Parameters pre_decodedSignal Decoded activity from presynaptic ensemble, $$a_i$$. post_filteredSignal Filtered postsynaptic activity signal. scaled_encodersSignal 2d array of encoders, multiplied by scale. deltaSignal The synaptic weight change to be applied, $$\Delta \omega_{ij}$$. scalendarray The length of each encoder. learning_signalSignal Scalar signal to be multiplied by learning_rate. Expected to be either 0 or 1 to turn learning off or on, respectively. learning_ratefloat The scalar learning rate. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [] 2. incs [] 3. reads [pre_decoded, post_filtered, scaled_encoders, learning_signal] 4. updates [delta] Attributes deltaSignal The synaptic weight change to be applied, $$\Delta \omega_{ij}$$. learning_ratefloat The scalar learning rate. learning_signalSignal Scalar signal to be multiplied by learning_rate. Expected to be either 0 or 1 to turn learning off or on, respectively. post_filteredSignal Filtered postsynaptic activity signal. pre_decodedSignal Decoded activity from presynaptic ensemble, $$a_i$$. scalendarray The length of each encoder. scaled_encodersSignal 2d array of encoders, multiplied by scale. tagstr or None A label associated with the operator, for debugging purposes. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. 
rngnumpy.random.RandomState Random number generator for stochastic operators. nengo.builder.learning_rules.get_pre_ens(conn)[source] Get the input Ensemble for connection. nengo.builder.learning_rules.get_post_ens(conn)[source] Get the output Ensemble for connection. nengo.builder.learning_rules.build_or_passthrough(model, obj, signal)[source] Builds the obj on signal, or returns the signal if obj is None. nengo.builder.learning_rules.build_learning_rule(model, rule)[source] Builds a LearningRule object into a model. A brief summary of what happens in the learning rule build process, in order: 1. Create a delta signal for the weight change. 2. Add an operator to increment the weights by delta. 3. Call build function for the learning rule type. The learning rule system is designed to work with multiple learning rules on the same connection. If only one learning rule were to be applied to the connection, then we could directly modify the weights, rather than calculating the delta here and applying it in build_connection. However, with multiple learning rules, we must isolate each delta signal in case calculating the delta depends on the weights themselves, making the calculation depend on the order of the learning rule evaluations. Parameters modelModel The model to build into. ruleLearningRule The learning rule to build. Notes Sets model.params[rule] to None. nengo.builder.learning_rules.build_bcm(model, bcm, rule)[source] Builds a BCM object into a model. Calls synapse build functions to filter the pre and post activities, and adds a SimBCM operator to the model to calculate the delta. Parameters modelModel The model to build into. bcmBCM Learning rule type to build. ruleLearningRule The learning rule object corresponding to the learning rule type. Notes Does not modify model.params[] and can therefore be called more than once with the same BCM instance. nengo.builder.learning_rules.build_oja(model, oja, rule)[source] Builds an Oja object into a model. 
Calls synapse build functions to filter the pre and post activities, and adds a SimOja operator to the model to calculate the delta. Parameters modelModel The model to build into. ojaOja Learning rule type to build. ruleLearningRule The learning rule object corresponding to the learning rule type. Notes Does not modify model.params[] and can therefore be called more than once with the same Oja instance. nengo.builder.learning_rules.build_voja(model, voja, rule)[source] Builds a Voja object into a model. Calls synapse build functions to filter the post activities, and adds a SimVoja operator to the model to calculate the delta. Parameters modelModel The model to build into. vojaVoja Learning rule type to build. ruleLearningRule The learning rule object corresponding to the learning rule type. Notes Does not modify model.params[] and can therefore be called more than once with the same Voja instance. nengo.builder.learning_rules.build_pes(model, pes, rule)[source] Builds a PES object into a model. Calls synapse build functions to filter the pre activities, and adds a SimPES operator to the model to calculate the delta. Parameters modelModel The model to build into. pesPES Learning rule type to build. ruleLearningRule The learning rule object corresponding to the learning rule type. Notes Does not modify model.params[] and can therefore be called more than once with the same PES instance. ### Neuron builders¶ nengo.builder.neurons.SimNeurons Set a neuron model output for the given input current. nengo.builder.neurons.build_neurons Builds a NeuronType object into a model. nengo.builder.neurons.build_rates_to_spikes Builds a RatesToSpikesNeuronType object into a model. class nengo.builder.neurons.SimNeurons(neurons, J, output, state=None, tag=None)[source] Set a neuron model output for the given input current. Implements neurons.step(dt, J, **state). Parameters neuronsNeuronType The NeuronType, which defines a step function. JSignal The input current. 
outputSignal The neuron output signal that will be set. statelist, optional A list of additional neuron state signals set by step. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [output] + state 2. incs [] 3. reads [J] 4. updates [] Attributes JSignal The input current. neuronsNeuronType The NeuronType, which defines a step function. outputSignal The neuron output signal that will be set. statelist A list of additional neuron state signals set by step. tagstr or None A label associated with the operator, for debugging purposes. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. nengo.builder.neurons.build_neurons(model, neurontype, neurons, input_sig=None, output_sig=None)[source] Builds a NeuronType object into a model. This function adds a SimNeurons operator connecting the input current to the neural output signals, and handles any additional state variables defined within the neuron type. Parameters modelModel The model to build into. neurontypeNeuronType Neuron type to build. neuronsNeurons The neuron population object corresponding to the neuron type. Notes Does not modify model.params[] and can therefore be called more than once with the same NeuronType instance. nengo.builder.neurons.build_rates_to_spikes(model, neurontype, neurons)[source] Builds a RatesToSpikesNeuronType object into a model. This function adds two SimNeurons operators. The first one handles simulating the base_type, converting input signals into rates. The second one takes those rates as input and emits spikes. Parameters modelModel The model to build into. 
neurontypeRatesToSpikesNeuronType Neuron type to build. neuronsNeurons The neuron population object corresponding to the neuron type. Notes Does not modify model.params[] and can therefore be called more than once with the same NeuronType instance. ### Node builder¶ nengo.builder.node.build_node Builds a Node object into a model. nengo.builder.node.build_node(model, node)[source] Builds a Node object into a model. The node build function is relatively simple. It involves creating input and output signals, and connecting them with an Operator that depends on the type of node.output. Parameters modelModel The model to build into. nodeNode The node to build. Notes Sets model.params[node] to None. ### Probe builder¶ nengo.builder.probe.conn_probe Build a “connection” probe type. nengo.builder.probe.signal_probe Build a “signal” probe type. nengo.builder.probe.build_probe Builds a Probe object into a model. nengo.builder.probe.conn_probe(model, probe)[source] Build a “connection” probe type. Connection probes create a connection from the target, and probe the resulting signal (used when you want to probe the default output of an object, which may not have a predefined signal). nengo.builder.probe.signal_probe(model, key, probe)[source] Build a “signal” probe type. Signal probes directly probe a target signal. nengo.builder.probe.build_probe(model, probe)[source] Builds a Probe object into a model. Under the hood, there are two types of probes: connection probes and signal probes. Connection probes are those that are built by creating a new Connection object from the probe’s target to the probe, and calling that connection’s build function. Creating and building a connection ensures that the result of probing the target’s attribute is the same as would result from that target being connected to another object. Signal probes are those that are built by finding the correct Signal in the model and calling the build function corresponding to the probe’s synapse. 
Parameters modelModel The model to build into. probeProbe The probe to build. Notes Sets model.params[probe] to a list. Simulator appends to that list when running a simulation. ### Process builder¶ nengo.builder.processes.SimProcess Simulate a process. nengo.builder.processes.build_process Builds a Process object into a model. class nengo.builder.processes.SimProcess(process, input, output, t, mode='set', state=None, tag=None)[source] Simulate a process. Parameters processProcess The Process to simulate. inputSignal or None Input to the process, or None if no input. outputSignal or None Output from the process, or None if no output. tSignal The signal associated with the time (a float, in seconds). modestr, optional Denotes what type of update this operator performs. Must be one of 'update', 'inc' or 'set'. tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [output] if output is not None and mode=='set' else [] 2. incs [output] if output is not None and mode=='inc' else [] 3. reads [t, input] if input is not None else [t] 4. updates [output] if output is not None and mode=='update' else [] Attributes inputSignal or None Input to the process, or None if no input. modestr Denotes what type of update this operator performs. outputSignal or None Output from the process, or None if no output. processProcess The Process to simulate. tSignal The signal associated with the time (a float, in seconds). tagstr or None A label associated with the operator, for debugging purposes. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. 
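SimProcess’s mode parameter controls how the built step function writes to the output signal on each timestep. The difference between 'set' and 'inc' can be sketched with plain NumPy (an illustrative sketch under assumed names, not Nengo’s actual implementation; the 'update' mode, whose write is deferred to the end of the timestep, is omitted for brevity):

```python
import numpy as np

def make_step(output, make_sample, mode="set"):
    # Hypothetical sketch of SimProcess's write modes (not Nengo's actual code).
    # "set" overwrites the output signal each timestep; "inc" accumulates into
    # it, so several operators can contribute to the same signal.
    def step_set():
        output[...] = make_sample()
    def step_inc():
        output[...] += make_sample()
    return step_inc if mode == "inc" else step_set

out = np.zeros(2)
sample = lambda: np.ones(2)

step = make_step(out, sample, mode="set")
step()
step()
print(out)  # "set" overwrites on every call: [1. 1.]

make_step(out, sample, mode="inc")()
print(out)  # "inc" adds on top of the existing value: [2. 2.]
```

Writing through `output[...]` rather than rebinding the name mirrors how SignalDict keeps live arrays in place instead of copying them.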
nengo.builder.processes.build_process(model, process, sig_in=None, sig_out=None, mode='set')[source] Builds a Process object into a model. Parameters modelModel The model to build into. processProcess Process to build. sig_inSignal, optional The input signal, or None if no input signal. sig_outSignal, optional The output signal, or None if no output signal. mode“set” or “inc” or “update”, optional The mode of the built SimProcess. Notes Does not modify model.params[] and can therefore be called more than once with the same Process instance. ### Transform builders¶ nengo.builder.transforms.multiply Matrix-matrix multiply, interpreting vectors as diagonal matrices. nengo.builder.transforms.build_dense Build a Dense transform object. nengo.builder.transforms.build_sparse Build a Sparse transform object. nengo.builder.transforms.build_convolution Build a Convolution transform object. nengo.builder.transforms.ConvInc Apply convolutional weights to input signal. nengo.builder.transforms.build_no_transform Build a NoTransform transform object. nengo.builder.transforms.multiply(x, y)[source] Matrix-matrix multiply, interpreting vectors as diagonal matrices. nengo.builder.transforms.build_dense(model, transform, sig_in, decoders=None, encoders=None, rng=numpy.random)[source] Build a Dense transform object. nengo.builder.transforms.build_sparse(model, transform, sig_in, decoders=None, encoders=None, rng=numpy.random)[source] Build a Sparse transform object. nengo.builder.transforms.build_convolution(model, transform, sig_in, decoders=None, encoders=None, rng=numpy.random)[source] Build a Convolution transform object. class nengo.builder.transforms.ConvInc(W, X, Y, conv, tag=None)[source] Apply convolutional weights to input signal. New in version 3.0.0. Parameters WSignal The convolutional weights (a.k.a. the kernel). XSignal The input signal. YSignal Output signal to be incremented. convConvolution The Convolution object being applied. 
tagstr, optional A label associated with the operator, for debugging purposes. Notes 1. sets [] 2. incs [Y] 3. reads [W, X] 4. updates [] Attributes WSignal The convolutional weights. XSignal The input signal. YSignal Output signal to be incremented. convConvolution The Convolution object being applied. tagstr, optional A label associated with the operator, for debugging purposes. make_step(signals, dt, rng)[source] Returns a callable that performs the desired computation. This method must be implemented by subclasses. To fully understand what an operator does, look at its implementation of make_step. Parameters signalsSignalDict A mapping from signals to their associated live ndarrays. dtfloat Length of each simulation timestep, in seconds. rngnumpy.random.RandomState Random number generator for stochastic operators. nengo.builder.transforms.build_no_transform(model, transform, sig_in, decoders=None, encoders=None, rng=numpy.random)[source] Build a NoTransform transform object. ### Decoder cache¶ Caching capabilities for a faster build process. nengo.cache.get_fragment_size Get fragment size in cross-compatible way. nengo.cache.safe_stat Does os.stat, but fails gracefully in case of an OSError. nengo.cache.safe_remove Does os.remove, but fails gracefully in case of an OSError. nengo.cache.safe_makedirs Try to make directories, but continue on error. nengo.cache.check_dtype Check that array is a standard dtype. nengo.cache.check_seq Check that all objects in list are fingerprintable. nengo.cache.check_mapping Check that all values in dict are fingerprintable. nengo.cache.check_attrs Check that all attributes of obj are fingerprintable. nengo.cache.Fingerprint Fingerprint of an object instance. nengo.cache.CacheIndex Cache index mapping keys to (filename, start, end) tuples. nengo.cache.WriteableCacheIndex Writable cache index mapping keys to files. nengo.cache.DecoderCache Cache for decoders. 
nengo.cache.NoDecoderCache Provides the same interface as DecoderCache without caching. nengo.cache.get_default_decoder_cache Get the default decoder cache implementation based on config settings. nengo.cache.get_fragment_size(path)[source] Get fragment size in cross-compatible way. nengo.cache.safe_stat(path)[source] Does os.stat, but fails gracefully in case of an OSError. nengo.cache.safe_remove(path)[source] Does os.remove, but fails gracefully in case of an OSError. nengo.cache.safe_makedirs(path)[source] Try to make directories, but continue on error. nengo.cache.check_dtype(ndarray)[source] Check that array is a standard dtype. nengo.cache.check_seq(tpl)[source] Check that all objects in list are fingerprintable. nengo.cache.check_mapping(mapping)[source] Check that all values in dict are fingerprintable. nengo.cache.check_attrs(obj)[source] Check that all attributes of obj are fingerprintable. class nengo.cache.Fingerprint(obj)[source] Fingerprint of an object instance. A fingerprint is equal for two instances if and only if they are of the same type and have the same attributes. The fingerprint will be used as identification for caching. Parameters objobject Object to fingerprint. Notes Not all objects can be fingerprinted. In particular, custom classes are tricky to fingerprint, as their implementation can change without changing their fingerprint, since the type and attributes may stay the same. In order to ensure that only safe objects are fingerprinted, this class maintains the class attribute WHITELIST that contains all types that can be safely fingerprinted. If you want your custom class to be fingerprinted, call the whitelist class method and pass in your class. Attributes fingerprinthash A unique fingerprint for the object instance. classmethod supports(obj)[source] Determines whether obj can be fingerprinted. Uses the whitelist method and runs the check function associated with the type of obj. 
classmethod whitelist(typ, fn=None)[source] Whitelist the type given in typ. Will run the check function fn on objects if provided. class nengo.cache.CacheIndex(cache_dir)[source] Cache index mapping keys to (filename, start, end) tuples. Once instantiated, the cache index has to be used in a with block to allow access. The index will not be loaded before the with block is entered. This class only provides read access to the cache index. For write access use WriteableCacheIndex. Parameters cache_dirstr Path where the cache is stored. Notes Under the hood, the cache index is stored as a pickle file. The pickle file contains two objects, which are read sequentially: the version tuple, and the index dictionary mapping keys to (filename, start, end) tuples. Examples from nengo.cache import CacheIndex, WriteableCacheIndex to_cache = ("gotta cache 'em all", 151) # create index file with WriteableCacheIndex(cache_dir) as index: index[hash(to_cache)] = ("file1", 0, 1) # set an item with CacheIndex(cache_dir) as index: filename, start, end = index[hash(to_cache)] Attributes cache_dirstr Path where the cache is stored. index_pathstr Path to the cache index file. versiontuple Version code of the loaded cache index. The first element gives the format of the cache and the second element gives the pickle protocol used to store the index. Note that a cache index will always be written in the newest version with the highest pickle protocol. VERSION (class attribute)int Highest supported version, and version used to store the cache index. class nengo.cache.WriteableCacheIndex(cache_dir)[source] Writable cache index mapping keys to files. The updated cache file will be written when the with block is exited. The initial read and the write on exit of the with block are locked against concurrent access with a file lock. The lock will be released within the with block. Parameters cache_dirstr Path where the cache is stored. 
Examples

from nengo.cache import WriteableCacheIndex

to_cache = ("gotta cache 'em all", 151)
key = hash(to_cache)

with WriteableCacheIndex(cache_dir) as index:
    index[key] = ("file1", 0, 1)       # set an item
    del index[key]                     # remove an item by key
    index.remove_file_entry("file1")   # remove an item by filename

remove_file_entry(filename)[source] Remove entries mapping to filename. sync()[source] Write changes to the cache index back to disk. The call to this function will be locked by a file lock. class nengo.cache.DecoderCache(readonly=False, cache_dir=None)[source] Cache for decoders. Hashes the arguments to the decoder solver and stores the result in a file which will be reused in later calls with the same arguments. Be aware that decoders should not use any global state, but only values passed in and attributes of the object instance. Otherwise the wrong solver results might get loaded from the cache. Parameters readonlybool Indicates that already existing items in the cache will be used, but no new items will be written to disk in case of a cache miss. cache_dirstr or None Path to the directory in which the cache will be stored. It will be created if it does not exist. Will use the value returned by get_default_dir if None. static get_default_dir()[source] Returns the default location of the cache. Returns str get_files()[source] Returns all of the files in the cache. Returns list of (str, int) tuples get_size()[source] Returns the size of the cache with units as a string. Returns str get_size_in_bytes()[source] Returns the size of the cache in bytes as an int. Returns int invalidate()[source] Invalidates the cache (i.e. removes all cache files). shrink(limit=None)[source] Reduces the size of the cache to meet a limit. Parameters limitint, optional Maximum size of the cache in bytes. remove_file(path)[source] Removes the file at path from the cache. wrap_solver(solver_fn)[source] Takes a decoder solver and wraps it to use caching.
Parameters solver_fnfunc Decoder solver to wrap for caching. Returns func Wrapped decoder solver. class nengo.cache.NoDecoderCache[source] Provides the same interface as DecoderCache without caching. nengo.cache.get_default_decoder_cache()[source] Get the default decoder cache implementation based on config settings. ### Optimizer¶ Operator graph optimizers. nengo.builder.optimizer.optimize Optimizes the operator graph by merging operators. nengo.builder.optimizer.OpMergePass Manages a single optimization pass. nengo.builder.optimizer.OpInfo Analyze and store extra information about operators. nengo.builder.optimizer.OpsToMerge Analyze and store extra information about a list of ops to be merged. nengo.builder.optimizer.OpMerger Manages the op merge classes known to the optimizer. nengo.builder.optimizer.Merger Base class for all op merge classes. nengo.builder.optimizer.ResetMerger Merge Reset ops. nengo.builder.optimizer.CopyMerger Merge Copy ops. nengo.builder.optimizer.ElementwiseIncMerger Merge ElementwiseInc ops. nengo.builder.optimizer.DotIncMerger Merge DotInc ops. nengo.builder.optimizer.SimNeuronsMerger Merge SimNeurons ops. nengo.builder.optimizer.SigMerger Merge signals. nengo.builder.optimizer.groupby Groups the given list by the value returned by keyfunc. nengo.builder.optimizer.optimize(model, dg)[source] Optimizes the operator graph by merging operators. This reduces the number of iterators to iterate over in slow Python code (as opposed to fast C code). The resulting merged operators will also operate on larger chunks of sequential memory, making better use of CPU caching and prefetching. The optimization algorithm has worst-case complexity $$O(n^2 + e)$$, where $$n$$ is the number of operators and $$e$$ is the number of edges in the dependency graph. In practice the run time will be much better because not all $$n^2$$ pairwise combinations of operators will be evaluated. A grouping depending on the operator type and view bases is done with dictionaries.
This grouping can be done in amortized linear time and reduces the actual worst-case runtime of the optimization algorithm to $$O(gm^2 + e)$$, where $$g$$ is the number of groups and $$m$$ is the number of elements in a group. Moreover, information about memory alignment will be used to cut the inner loop short in many cases, giving a runtime much closer to linear in most cases. Note that this function modifies both model and dg. Parameters modelnengo.builder.Model Builder output to optimize. dgdict Dict of the form {a: {b, c}} where b and c depend on a, specifying the operator dependency graph of the model. class nengo.builder.optimizer.OpMergePass(dg)[source] Manages a single optimization pass. perform_merges()[source] Go through all operators and merge them where possible. Parameters only_merge_ops_with_viewbool Limit merges to operators with views. perform_merges_for_subset(subset)[source] Performs operator merges for a subset of operators. Parameters subsetlist Subset of operators. perform_merges_for_view_subset(subset)[source] Perform merges for a subset of operators with the same view base. Parameters subsetlist Subset of operators. These need to have the same view base (can be None if it is None for all) for their first signal in all_signals. merge(tomerge)[source] Merges the given operators. This method will also update op_replacements, sig_replacements, and the internal list of merged operators to prevent further merges on the same operators before all required operators and signals have been replaced. class nengo.builder.optimizer.OpInfo[source] Analyze and store extra information about operators. class nengo.builder.optimizer.OpsToMerge(initial_op, merged, merged_dependents, dependents)[source] Analyze and store extra information about a list of ops to be merged. class nengo.builder.optimizer.OpMerger[source] Manages the op merge classes known to the optimizer. class nengo.builder.optimizer.Merger[source] Base class for all op merge classes.
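The dict-based grouping step that optimize uses to avoid testing all $$n^2$$ operator pairs can be sketched in pure Python. This is a simplified illustration, not the optimizer's code: it buckets operators only by type, whereas the real pass also keys on view bases.

```python
from collections import defaultdict

def group_operators(ops, keyfunc=type):
    """Bucket operators by a group key in amortized linear time.

    Only operators within the same bucket need to be compared for
    merging, which is what reduces the pairwise cost from O(n^2)
    to O(g * m^2) over g groups of size m.
    """
    groups = defaultdict(list)
    for op in ops:
        groups[keyfunc(op)].append(op)
    return groups

# Hypothetical stand-ins for the builder's Reset and Copy operators.
class Reset: pass
class Copy: pass

ops = [Reset(), Copy(), Reset()]
groups = group_operators(ops)
assert len(groups[Reset]) == 2
assert len(groups[Copy]) == 1
```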
static merge_dicts(*dicts)[source] Merges the given dictionaries into a single dictionary. This function assumes and enforces that no keys overlap. class nengo.builder.optimizer.ResetMerger[source] Merge Reset ops. class nengo.builder.optimizer.CopyMerger[source] Merge Copy ops. class nengo.builder.optimizer.ElementwiseIncMerger[source] Merge ElementwiseInc ops. class nengo.builder.optimizer.DotIncMerger[source] Merge DotInc ops. class nengo.builder.optimizer.SimNeuronsMerger[source] Merge SimNeurons ops. class nengo.builder.optimizer.SigMerger[source] Merge signals. static check(signals, axis=0)[source] Checks that all signals can be concatenated along a given axis. For views, this includes also a check that the signals have a common base and agree on the strides. In comparison to the check_* functions, this function does not throw exceptions and allows for either signals or signal views. static check_signals(signals, axis=0)[source] Checks that all signals can be merged along a given axis. If this is not possible, or any signals are views, a ValueError will be raised. static check_views(signals, axis=0)[source] Checks that all signal views can be merged along a given axis. If this is not possible, or any signals are not views, a ValueError will be raised. signals must be ordered by the offset into the base signal. static merge(signals, axis=0)[source] Merges multiple signals or signal views into one contiguous signal. Note that if any of the signals are linked to another signal (by being the base of a view), the merged signal will not reflect those links anymore. Parameters signalssequence Signals to merge. Must not contain views. axisint, optional Axis along which to concatenate the signals. Returns merged_signalSignal The merged signal. replacementsdict Dictionary mapping from the old signals to new signals that are a view into the merged signal. Used to replace old signals. 
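The (merged signal, replacements) return shape of SigMerger.merge can be illustrated with plain NumPy arrays. This is a conceptual sketch under the assumption that a "signal" is just an ndarray; the real Signal class carries names, bases, and offsets that are omitted here.

```python
import numpy as np

def merge_signals(signals, axis=0):
    """Concatenate arrays into one contiguous block and return views
    into it as replacements for the originals (mirroring the
    (merged_signal, replacements) return of SigMerger.merge)."""
    merged = np.concatenate(signals, axis=axis)
    replacements, offset = {}, 0
    for i, sig in enumerate(signals):
        n = sig.shape[axis]
        index = [slice(None)] * merged.ndim
        index[axis] = slice(offset, offset + n)
        replacements[i] = merged[tuple(index)]  # a view, not a copy
        offset += n
    return merged, replacements

a, b = np.zeros(3), np.ones(2)
merged, views = merge_signals([a, b])
assert merged.shape == (5,)
assert views[1].base is merged  # replacement is a view into `merged`
np.testing.assert_array_equal(views[0], a)
```

Because every replacement is a view into one contiguous buffer, downstream operators touch sequential memory, which is the cache-friendliness benefit described above.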
static merge_signals(signals, axis=0)[source] Merges multiple signals into one contiguous signal. Note that if any of the signals are linked to another signal (by being the base of a view), the merged signal will not reflect those links anymore. Parameters signalssequence Signals to merge. Must not contain views. axisint, optional Axis along which to concatenate the signals. Returns merged_signalSignal The merged signal. replacementsdict Dictionary mapping from the old signals to new signals that are a view into the merged signal. Used to replace old signals. static merge_views(signals, axis=0)[source] Merges multiple signal views into one contiguous signal view. Parameters signalssequence Signals to merge. Must only contain views. axisint, optional Axis along which to concatenate the signals. Returns merged_signalSignal The merged signal. replacementsdict Dictionary mapping from the old signals to new signals that are a view into the merged signal. Used to replace old signals. nengo.builder.optimizer.groupby(lst, keyfunc=<function <lambda>>)[source] Groups the given list by the value returned by keyfunc. Similar to itertools.groupby, but returns a dict, and does not depend on the order of the input list. ## Exceptions¶ nengo.exceptions.NengoException Base class for Nengo exceptions. nengo.exceptions.NengoWarning Base class for Nengo warnings. nengo.exceptions.ValidationError A ValueError encountered during validation of a parameter. nengo.exceptions.ReadonlyError A ValidationError occurring because a parameter is read-only. nengo.exceptions.BuildError A ValueError encountered during the build process. nengo.exceptions.ObsoleteError A feature that has been removed in a backwards-incompatible way. nengo.exceptions.MovedError A feature that has been moved elsewhere. nengo.exceptions.ConfigError A ValueError encountered in the config system. nengo.exceptions.SpaModuleError An error in how SPA keeps track of modules.
nengo.exceptions.SpaParseError An error encountered while parsing a SPA expression. nengo.exceptions.SimulatorClosed Raised when attempting to run a closed simulator. nengo.exceptions.SimulationError An error encountered during simulation of the model. nengo.exceptions.SignalError An error dealing with Signals in the builder. nengo.exceptions.FingerprintError An error in fingerprinting an object for cache identification. nengo.exceptions.NetworkContextError An error with the Network context stack. nengo.exceptions.Unconvertible Raised when a requested network conversion cannot be done. nengo.exceptions.CacheIOError An IO error in reading from or writing to the decoder cache. nengo.exceptions.TimeoutError A timeout occurred while waiting for a resource. nengo.exceptions.NotAddedToNetworkWarning A NengoObject has not been added to a network. nengo.exceptions.CacheIOWarning A non-critical issue in accessing files in the cache. exception nengo.exceptions.NengoException[source] Base class for Nengo exceptions. NengoException instances should not be created; this base class exists so that all exceptions raised by Nengo can be caught in a try / except block. exception nengo.exceptions.NengoWarning[source] Base class for Nengo warnings. exception nengo.exceptions.ValidationError(msg, attr, obj=None)[source] A ValueError encountered during validation of a parameter. exception nengo.exceptions.ReadonlyError(attr, obj=None, msg=None)[source] A ValidationError occurring because a parameter is read-only. exception nengo.exceptions.BuildError[source] A ValueError encountered during the build process. exception nengo.exceptions.ObsoleteError(msg, since=None, url=None)[source] A feature that has been removed in a backwards-incompatible way. exception nengo.exceptions.MovedError(location=None)[source] A feature that has been moved elsewhere. New in version 3.0.0. exception nengo.exceptions.ConfigError[source] A ValueError encountered in the config system.
exception nengo.exceptions.SpaModuleError[source] An error in how SPA keeps track of modules. exception nengo.exceptions.SpaParseError[source] An error encountered while parsing a SPA expression. exception nengo.exceptions.SimulatorClosed[source] Raised when attempting to run a closed simulator. exception nengo.exceptions.SimulationError[source] An error encountered during simulation of the model. exception nengo.exceptions.SignalError[source] An error dealing with Signals in the builder. exception nengo.exceptions.FingerprintError[source] An error in fingerprinting an object for cache identification. exception nengo.exceptions.NetworkContextError[source] An error with the Network context stack. exception nengo.exceptions.Unconvertible[source] Raised when a requested network conversion cannot be done. exception nengo.exceptions.CacheIOError[source] An IO error in reading from or writing to the decoder cache. exception nengo.exceptions.TimeoutError[source] A timeout occurred while waiting for a resource. exception nengo.exceptions.NotAddedToNetworkWarning(obj)[source] A NengoObject has not been added to a network. exception nengo.exceptions.CacheIOWarning[source] A non-critical issue in accessing files in the cache.
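The point of the common NengoException base class — that every error Nengo raises can be caught in a single try / except block — can be sketched as follows. The minimal class definitions below are illustrative stand-ins that mirror the hierarchy, not Nengo's own code.

```python
class NengoException(Exception):
    """Base class for all Nengo exceptions (stand-in)."""

class SimulatorClosed(NengoException):
    """Raised when attempting to run a closed simulator (stand-in)."""

class BuildError(NengoException, ValueError):
    """A ValueError encountered during the build process (stand-in)."""

def run_closed_simulator():
    # Hypothetical operation that fails with a specific Nengo error.
    raise SimulatorClosed("Simulator cannot run because it is closed")

try:
    run_closed_simulator()
except NengoException as e:  # one handler catches every Nengo error
    caught = type(e).__name__

assert caught == "SimulatorClosed"
# Errors that double as builtins remain catchable as those builtins:
assert issubclass(BuildError, ValueError)
```

Note how BuildError inherits from both the Nengo base class and ValueError, matching its description above as "a ValueError encountered during the build process".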
Please use this identifier to cite or link to this item: http://hdl.handle.net/2381/18535 Title: Detection of an X-ray periodicity in the Narrow-line Seyfert 1 galaxy Mrk 766 with XMM-Newton Authors: Boller, T.; Keil, R.; Trümper, J.; O'Brien, P. T.; Reeves, J.; Page, M. First Published: Jan-2001 Publisher: EDP Sciences for European Southern Observatory (ESO) Citation: Astronomy & Astrophysics, 2001, 365 (1) Abstract: We have analyzed the timing properties of the Narrow-line Seyfert 1 galaxy Mrk 766 observed with XMM-Newton during the PV phase. The source intensity changes by a factor of 1.3 over the 29 000 s observation. If the soft excess is modeled by a black body component, as indicated by the EPIC pn data, the luminosity of the black body component scales with its temperature according to $L \sim T^4$. This requires a lower limit "black body size" of about $1.3 \times 10^{25}\ \mathrm{cm}^2$. In addition, we report the detection of a strong periodic signal at $2.4 \times 10^{-4}\ \mathrm{Hz}$. Simulations of light curves with the observed time sequence and phase randomized for a red noise spectrum clearly indicate that the periodicity peak is intrinsic to the distant AGN. Furthermore, its existence is confirmed by the EPIC MOS and RGS data. The spectral fitting results show that the black body temperature and the absorption by neutral hydrogen remain constant during the periodic oscillations. This observational fact tends to rule out models in which the intensity changes are due to hot spots orbiting the central black hole. Precession according to the Bardeen-Petterson effect or instabilities in the inner accretion disk may provide explanations for the periodic signal. DOI Link: 10.1051/0004-6361:20000083 ISSN: 0004-6361 Links: http://hdl.handle.net/2381/18535 http://www.aanda.org/articles/aa/abs/2001/01/aaxmm30/aaxmm30.html Version: Publisher Version Status: Peer-reviewed Type: Journal Article Rights: Copyright © 2001 ESO. Reproduced with permission from Astronomy & Astrophysics, © ESO.
Appears in Collections: Published Articles, Dept. of Physics and Astronomy
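The $L \sim T^4$ scaling quoted in the abstract is the Stefan-Boltzmann law for a black body of emitting area $A$; solving for the area is what yields a lower limit on the "black body size". This is a standard one-step rearrangement, not a re-derivation of the paper's numerical result:

```latex
L = \sigma A T^{4}
\quad\Longrightarrow\quad
A = \frac{L}{\sigma T^{4}} \gtrsim 1.3 \times 10^{25}\ \mathrm{cm}^{2}
```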
Intuitive content of Loop Gravity - Rovelli's program Astronomy Sci Advisor PF Gold P: 23,232 http://arxiv.org/abs/1301.6210 Embedding loop quantum cosmology without piecewise linearity Jonathan Engle (Submitted on 26 Jan 2013) An important goal is to understand better the relation between full loop quantum gravity (LQG) and the simplified, reduced theory known as loop quantum cosmology (LQC), directly at the quantum level. Such a firmer understanding would increase confidence in the reduced theory as a tool for formulating predictions of the full theory, as well as permitting lessons from the reduced theory to guide further development in the full theory. The present paper constructs an embedding of the usual state space of LQC into that of standard LQG, that is, LQG based on piecewise analytic paths. The embedding is well-defined even prior to solving the diffeomorphism constraint, at no point is a graph fixed, and at no point is the piecewise linear category used. This motivates for the first time a definition of operators in LQC corresponding to holonomies along non-piecewise-linear paths, without changing the usual kinematics of LQC in any way. The new embedding intertwines all operators corresponding to such holonomies, and all elements in its image satisfy an operator equation which classically implies homogeneity and isotropy. The construction is made possible by a recent result proven by Fleischhack. 18 pages http://arxiv.org/abs/1301.6173 Scale Anomaly as the Origin of Time Julian Barbour, Matteo Lostaglio, Flavio Mercati (Submitted on 25 Jan 2013) We explore the problem of time in quantum gravity in a point-particle analogue model of scale-invariant gravity. If quantized after reduction to true degrees of freedom, it leads to a time-independent Schrödinger equation. As with the Wheeler-DeWitt equation, time disappears, and a frozen formalism that gives a static wavefunction on the space of possible shapes of the system is obtained.
However, if one follows the Dirac procedure and quantizes by imposing constraints, the potential that ensures scale invariance gives rise to a conformal anomaly, and the scale invariance is broken. A behaviour closely analogous to renormalization-group (RG) flow results. The wavefunction acquires a dependence on the scale parameter of the RG flow. We interpret this as time evolution and obtain a novel solution of the problem of time in quantum gravity. We apply the general procedure to the three-body problem, showing how to fix a natural initial value condition, introducing the notion of complexity. We recover a time-dependent Schrödinger equation with a repulsive cosmological force in the 'late-time' physics and we analyse the role of the scale-invariant Planck constant. We suggest that several mechanisms presented in this model could be exploited in more general contexts. 31 pages, 5 figures http://arxiv.org/abs/1301.6259 Inconsistencies from a Running Cosmological Constant Herbert W. Hamber, Reiko Toriumi (Submitted on 26 Jan 2013) We examine the general issue of whether a scale-dependent cosmological constant can be consistent with general covariance, a problem that arises naturally in the treatment of quantum gravitation, where coupling constants generally run as a consequence of renormalization group effects. The issue is approached from several points of view, which include the manifestly covariant functional integral formulation, covariant continuum perturbation theory about two dimensions, the lattice formulation of gravity, and the non-local effective action and effective field equation methods. In all cases we find that the cosmological constant cannot run with scale, unless general covariance is explicitly broken by the regularization procedure. Our results are expected to have some bearing on current quantum gravity calculations, but more generally should apply to phenomenological approaches to the cosmological vacuum energy problem. 34 pages.
http://arxiv.org/abs/1301.6483 Coupling dimers to CDT - conceptual issues Lisa Glaser (Submitted on 28 Jan 2013) Causal dynamical triangulations allows for a non-perturbative approach to quantum gravity. In this article a solution for dimers coupled to CDT is presented, and some of the conceptual problems that arise are reflected upon. 3 pages. To appear in the Proceedings of the 13th Marcel Grossmann Meeting on General Relativity brief mention: http://arxiv.org/abs/1301.6440 The Preon Sector of the SLq(2) (Knot) Model Robert J. Finkelstein (Submitted on 28 Jan 2013) We describe a Lagrangian defining the preon sector of the knot model. The preons are the elements of the fundamental representation of SLq(2). They exactly agree with the preons conjectured by Harari and Shupe. The coupling constants and masses required by this Lagrangian are in principle experimentally measurable... 26 Pages http://arxiv.org/abs/1301.6795 Inhomogeneous loop quantum cosmology with matter Daniel Martín-de Blas, Mercedes Martín-Benito, Guillermo A. Mena Marugán (Submitted on 28 Jan 2013) The linearly polarized Gowdy T3 model with a massless scalar field with the same symmetries as the metric is quantized by applying a hybrid approach. The homogeneous geometry degrees of freedom are loop quantized, a fact which leads to the resolution of the cosmological singularity, while a Fock quantization is employed for both matter and gravitational inhomogeneities. Owing to the inclusion of the massless scalar field, this system allows us to model flat Friedmann-Robertson-Walker cosmologies filled with inhomogeneities propagating in one direction. It provides a perfect scenario to study the quantum back-reaction between the inhomogeneities and the polymeric homogeneous and isotropic background. 4 pages, for the proceedings of the Loops 11-Madrid conference.
http://arxiv.org/abs/1301.7466 Report on the session QG4 of the 13th Marcel Grossmann Meeting Jorge Pullin, Parampreet Singh (Submitted on 30 Jan 2013) We summarize the talks presented at the QG4 session (loop quantum gravity: cosmology and black holes) of the 13th Marcel Grossmann Meeting held in Stockholm, Sweden. 5 pages, to appear in the proceedings http://arxiv.org/abs/1301.7688 Shape Dynamics and Gauge-Gravity Duality Henrique Gomes, Tim Koslowski (Submitted on 31 Jan 2013) The dynamics of gravity can be described by two different systems. The first is the familiar spacetime picture of General Relativity; the other is the conformal picture of Shape Dynamics. We argue that the bulk equivalence of General Relativity and Shape Dynamics is a natural setting to discuss familiar bulk/boundary dualities. We discuss consequences of the Shape Dynamics description of gravity, as well as the issue of why the bulk equivalence is not explicitly seen in the General Relativity description of gravity. 4 pages, contribution to the 13th Marcel Grossmann Meeting brief mention: http://arxiv.org/abs/1301.7750 Quantization maps, algebra representation and non-commutative Fourier transform for Lie groups Carlos Guedes, Daniele Oriti, Matti Raasakka (Submitted on 31 Jan 2013) http://arxiv.org/abs/1302.0254 The pre-inflationary dynamics of loop quantum cosmology: Confronting quantum gravity with observations Ivan Agullo, Abhay Ashtekar, William Nelson (Submitted on 1 Feb 2013) Using techniques from loop quantum gravity, the standard theory of cosmological perturbations was recently generalized to encompass the Planck era. We now apply this framework to explore pre-inflationary dynamics. The framework enables us to isolate and resolve the true trans-Planckian difficulties, with interesting lessons both for theory and observations.
Specifically, for a large class of initial conditions at the bounce, we are led to a self-consistent extension of the inflationary paradigm over the 11 orders of magnitude in density and curvature, from the big bounce to the onset of slow roll. In addition, for a narrow window of initial conditions, there are departures from the standard paradigm, with novel effects, such as a modification of the consistency relation between the ratio of the tensor to scalar power spectrum and the tensor spectral index, as well as a new source for non-Gaussianities, which could extend the reach of cosmological observations to the deep Planck regime of the early universe. 64 pages, 15 figures http://arxiv.org/abs/1302.0168 Warm inflation in loop quantum cosmology: a model with a general dissipative coefficient Xiao-Min Zhang, Jian-Yang Zhu (Submitted on 1 Feb 2013) A general form of warm inflation with the dissipative coefficient Γ = Γ0 (φ/φ0)^n (T/τ0)^m in loop quantum cosmology is studied. In this case, we obtain conditions for the existence of a warm inflationary attractor in the context of loop quantum cosmology by using the method of stability analysis. The two cases when the dissipative coefficient is independent (m=0) and dependent (m≠0) on temperature are analyzed specifically. In the latter case, we use the new power spectrum which should be used when considering temperature dependence in the dissipative coefficient. We find that the thermal effect is enhanced in the case m>0. As in standard inflation in loop quantum cosmology, we also reach the conclusion that the quantum effect leaves a tiny imprint on the cosmic microwave background (CMB) sky. 12 pages, accepted for publication in Phys. Rev. D http://arxiv.org/abs/1212.5226 Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results G. Hinshaw, D. Larson, E. Komatsu, D. N. Spergel, C. L. Bennett, J. Dunkley, M. R. Nolta, M. Halpern, R. S. Hill, N. Odegard, L. Page, K. M. Smith, J.
L. Weiland, B. Gold, N. Jarosik, A. Kogut, M. Limon, S. S. Meyer, G. S. Tucker, E. Wollack, E. L. Wright (Submitted on 20 Dec 2012 (v1), last revised 30 Jan 2013 (this version, v2)) We present cosmological parameter constraints based on the final nine-year WMAP data, in conjunction with additional cosmological data sets. The WMAP data alone, and in combination, continue to be remarkably well fit by a six-parameter LCDM model. When WMAP data are combined with measurements of the high-l CMB anisotropy, the BAO scale, and the Hubble constant, the densities, Ωbh², Ωch², and ΩΛ, are each determined to a precision of ~1.5%. The amplitude of the primordial spectrum is measured to within 3%, and there is now evidence for a tilt in the primordial spectrum at the 5σ level, confirming the first detection of tilt based on the five-year WMAP data. At the end of the WMAP mission, the nine-year data decrease the allowable volume of the six-dimensional LCDM parameter space by a factor of 68,000 relative to pre-WMAP measurements. We investigate a number of data combinations and show that their LCDM parameter fits are consistent. New limits on deviations from the six-parameter model are presented, for example: the fractional contribution of tensor modes is limited to r < 0.13 (95% CL); the spatial curvature parameter is limited to -0.0027 (+0.0039/-0.0038); the summed mass of neutrinos is Σmν < 0.44 eV (95% CL); and the number of relativistic species is found to be 3.84 ± 0.40 when the full data are analyzed. The joint constraint on Neff and the primordial helium abundance agrees with the prediction of standard Big Bang nucleosynthesis. We compare recent PLANCK measurements of the Sunyaev-Zel'dovich effect with our seven-year measurements, and show their mutual agreement. Our analysis of the polarization pattern around temperature extrema is updated.
This confirms a fundamental prediction of the standard cosmological model and provides a striking illustration of acoustic oscillations and adiabatic initial conditions in the early universe. 31 pages, 12 figures For enlightening comment on the latest WMAP estimates see http://resonaances.blogspot.com/2013...os-in-sky.html http://arxiv.org/abs/1302.0724 Death and resurrection of the zeroth principle of thermodynamics Hal M. Haggard, Carlo Rovelli (Submitted on 4 Feb 2013) The zeroth principle of thermodynamics in the form "temperature is uniform at equilibrium" is notoriously violated in relativistic gravity. Temperature uniformity is often derived from the maximization of the total number of microstates of two interacting systems under energy exchanges. Here we discuss a generalized version of this derivation, based on informational notions, which remains valid in the general context. The result is based on the observation that the time taken by any system to move to a distinguishable (nearly orthogonal) quantum state is a universal quantity that depends solely on the temperature. At equilibrium the net information flow between two systems must vanish, and this happens when two systems transit the same number of distinguishable states in the course of their interaction. 5 pages, 2 figures brief mention: http://arxiv.org/abs/1302.0451 Macroscopic superpositions and black hole unitarity Stephen D.H. Hsu (Submitted on 3 Feb 2013) We discuss the black hole information problem, including the recent claim that unitarity requires a horizon firewall, emphasizing the role of decoherence and macroscopic superpositions. We consider the formation and evaporation of a large black hole as a quantum amplitude, and note that during intermediate stages (e.g., after the Page time), the amplitude is a superposition of macroscopically distinct (and decohered) spacetimes, with the black hole itself in different positions on different branches.
Small but semiclassical observers (who are themselves part of the quantum amplitude) that fall into the hole on one branch will miss it entirely on other branches and instead reach future infinity. This observation can reconcile the subjective experience of an infalling observer with unitarity. We also discuss implications for the nice slice formulation of the information problem, and for complementarity. 3 pages, 1 figure. PF Gold P: 1,961 http://arxiv.org/abs/1302.1357 A consistent Horava gravity without extra modes and equivalent to general relativity at the linearized level J. Bellorin, A. Restuccia, A. Sotomayor (Submitted on 6 Feb 2013) We consider a Horava theory that has a consistent structure of constraints and propagates two physical degrees of freedom. The Lagrangian includes the terms of Blas, Pujolas and Sibiryakov. The theory can be obtained from the general Horava formulation by setting lambda = 1/3. This value of lambda is protected in the quantum formulation of the theory by the presence of a constraint. The theory has two second-class constraints that are absent for other values of lambda. They remove the extra scalar mode. There is no strong-coupling problem in this theory since there is no extra mode. We perform explicit computations on a model that puts together a z=1 term and the IR effective action. We also show that the lowest-order perturbative version of the IR effective theory has dynamics identical to that of linearized general relativity. Therefore, this theory is smoothly recovered at the deepest IR without discontinuities in the physical degrees of freedom.
http://arxiv.org/abs/1302.1245 Dynamical behaviors of FRW Universe containing a positive/negative potential scalar field in loop quantum cosmology Xiao Liu, Kui Xiao, Jian-Yang Zhu (Submitted on 6 Feb 2013) The dynamical behaviors of an FRW Universe containing a positive/negative potential scalar field in the loop quantum cosmology scenario are discussed. The method of phase-plane analysis is used to investigate the stability of the Universe. It is found that the stability properties in this situation are quite different from the classical cosmology case. For a positive potential scalar field coupled with a barotropic fluid, the cosmological autonomous system has five fixed points and one of them is stable if the adiabatic index γ satisfies 0<γ<2. This leads to the fact that the Universe has just one bounce point instead of the singularity; the bounce lies in the quantum-dominated area and is caused by the quantum geometry effect. There are four fixed points if one considers a scalar field with a negative potential, but none of them is stable. Therefore, the Universe has two kinds of bounce points: one is caused by the quantum geometry effect and the other is caused by the negative potential, and the Universe may enter a classical re-collapse after the quantum bounce. This hints that the spatially flat FRW Universe containing a negative potential scalar field is cyclic. 6 pages, 2 figures, accepted for publication in General Relativity and Gravitation brief mention: http://arxiv.org/abs/1302.1312 Fixed Functionals in Asymptotically Safe Gravity Maximilian Demmel, Frank Saueressig, Omar Zanusso (Submitted on 6 Feb 2013) We summarize the status of constructing fixed functionals within the f(R)-truncation of Quantum Einstein Gravity in three spacetime dimensions.
Focusing on curvatures much larger than the IR-cutoff scale, it is shown that the fixed point equation admits three different scaling regimes: for classical and quantum dominance the equation becomes linear and has power-law solutions, while the balanced case gives rise to a generalized homogeneous equation whose order is reduced by one and whose solutions are non-analytical. 4 pages, to appear in Proceedings of the Thirteenth Marcel Grossman Meeting on General Relativity http://arxiv.org/abs/1302.1206 Thermality and Heat Content of horizons from infinitesimal coordinate transformations Bibhas Ranjan Majhi, T. Padmanabhan (Submitted on 5 Feb 2013) http://arxiv.org/abs/1302.1498 "The Waters I am Entering No One yet Has Crossed": Alexander Friedman and the Origins of Modern Cosmology Ari Belenkiy (Submitted on 6 Feb 2013) Ninety years ago, in 1922, Alexander Friedman (1888-1925) demonstrated for the first time that the General Relativity equations admit non-static solutions and thus the Universe may expand, contract, collapse, and even be born. The fundamental equations he derived still provide the basis for the current cosmological theories of the Big Bang and the Accelerating Universe. Later, in 1924, he was the first to realize that General Relativity allows the Universe to be infinite. Friedman's ideas initially met strong resistance from Einstein, yet from 1931 he became their staunchest supporter. This essay connects Friedman's cosmological ideas with the 1998-2004 results of the astronomical observations that led to the 2011 Nobel Prize in Physics. It also describes Friedman's little known topological ideas of how to check General Relativity in practice and compares his contributions to those of Georges Lemaitre. Recently discovered corpus of Friedman's writings in the Ehrenfest Archives at Leiden University sheds some new light on the circumstances surrounding his 1922 work and his relations with Paul Ehrenfest. 26 pages, 11 figures. 
Accepted for publication in the proceedings of the conference "Origins of the Expanding Universe: 1912-1932", M. J. Way & D. Hunter, eds., ASP Conf. Ser., Vol. 471, in press http://arxiv.org/abs/1302.1781 Self-Energy in the Lorentzian EPRL-FK Spin Foam Model of Quantum Gravity Aldo Riello (Submitted on 7 Feb 2013) We calculate the most divergent contribution to the self-energy (or "melonic") graph in the context of the Lorentzian EPRL-FK Spin Foam model of Quantum Gravity. We find that such a contribution is logarithmically divergent in the cut-off over the SU(2)-representation spins when one chooses the face amplitude guaranteeing the face-splitting invariance of the foam. We also find that the dependence on the boundary data is different from that of the bare propagator. This fact has its origin in the non-commutativity of the EPRL-FK Y-map with the projector onto SL(2,C)-invariant states. In the course of the paper, we discuss in detail the approximations used during the calculations, their geometrical interpretation as well as the physical consequences of our result. 55 pages, 8 figures http://arxiv.org/abs/1302.1841 Cosmological Parameters from Pre-Planck CMB Measurements Erminia Calabrese, Renée A. Hlozek, Nick Battaglia, Elia S. Battistelli, J. Richard Bond, Jens Chluba, Devin Crichton, Sudeep Das, Mark J. Devlin, Joanna Dunkley, Rolando Dünner, Marzieh Farhang, Megan B. Gralla, Amir Hajian, Mark Halpern, Matthew Hasselfield, Adam D. Hincks, Kent D. Irwin, Arthur Kosowsky, Thibaut Louis, Tobias A. Marriage, Kavilan Moodley, Laura Newburgh, Michael D. Niemack, Mike R. Nolta, Lyman A. Page, Neelima Sehgal, Blake D. Sherwin, Jonathan L. Sievers, Cristóbal Sifón, David N. Spergel, Suzanne T. Staggs, Eric R.
Switzer, Ed Wollack (Submitted on 7 Feb 2013) Recent data from the WMAP, ACT and SPT experiments provide precise measurements of the cosmic microwave background temperature power spectrum over a wide range of angular scales. The combination of these observations is well fit by the standard, spatially flat LCDM cosmological model, constraining six free parameters to within a few percent. The scalar spectral index, ns = 0.9678 ± 0.0088, is less than unity at the 3.6 sigma level, consistent with simple models of inflation. The damping tail of the power spectrum at high resolution, combined with the amplitude of gravitational lensing measured by ACT and SPT, constrains the effective number of relativistic species to be Neff = 3.24 ± 0.39, in agreement with the standard model's three species of light neutrinos. 5 pages, 4 figures There is a slight inconsistency with the range of Neff given in a similar paper by some of the same people a couple of days ago. See post #1893 about WMAP9 paper http://arxiv.org/abs/1212.5226 . See page 17, and Table 7: Neff = 3.84 ± 0.40 (with all relevant data sets combined). brief mention: http://arxiv.org/abs/1302.1617 What if Planck's Universe isn't flat? Philip Bull, Marc Kamionkowski (Submitted on 6 Feb 2013) http://arxiv.org/abs/1302.1860 On cosmic hair and "de Sitter breaking" in linearized quantum gravity Ian A. Morrison (Submitted on 7 Feb 2013) http://arxiv.org/abs/1302.2173 Quantum Gravity via Causal Dynamical Triangulations J. Ambjorn, A. Goerlich, J. Jurkiewicz, R. Loll (Submitted on 8 Feb 2013) "Causal Dynamical Triangulations" (CDT) represent a lattice regularization of the sum over spacetime histories, providing us with a non-perturbative formulation of quantum gravity. The ultraviolet fixed points of the lattice theory can be used to define a continuum quantum field theory, potentially making contact with quantum gravity defined via asymptotic safety.
We describe the formalism of CDT, its phase diagram, and the quantum geometries emerging from it. We also argue that the formalism should be able to describe a more general class of quantum-gravitational models of Horava-Lifshitz type. 31 pages. To appear in "Handbook of Spacetime", Springer Verlag. http://arxiv.org/abs/1302.2181 Quantum Spacetime, from a Practitioner's Point of View J. Ambjorn, S. Jordan, J. Jurkiewicz, R. Loll (Submitted on 9 Feb 2013) We argue that theories of quantum gravity constructed with the help of (Causal) Dynamical Triangulations have given us the most informative, quantitative models to date of quantum spacetime. Most importantly, these are derived dynamically from nonperturbative and background-independent quantum theories of geometry. In the physically relevant case of four spacetime dimensions, the ansatz of Causal Dynamical Triangulations produces - from a fairly minimal set of quantum field-theoretic inputs - an emergent spacetime which macroscopically looks like a de Sitter universe, and on Planckian scales possesses unexpected quantum properties. Important in deriving these results is a regularized version of the theory, in which the quantum dynamics is well defined, can be studied with the help of numerical Monte Carlo methods, and can be extrapolated to infinite lattice volumes. 7 pages, 5 figures, submission to Multicosmofun '12, Szczecin. http://arxiv.org/abs/1302.2210 The transfer matrix method in four-dimensional causal dynamical triangulations J. Ambjorn, J. Gizbert-Studnicki, A.T. Goerlich, J. Jurkiewicz, R. Loll (Submitted on 9 Feb 2013) The Causal Dynamical Triangulation model of quantum gravity (CDT) is a proposition to evaluate the path integral over space-time geometries using a lattice regularization with a discrete proper time and geometries realized as simplicial manifolds. The model admits a Wick rotation to imaginary time for each space-time configuration.
Using computer simulations we determined the phase structure of the model and discovered that it predicts a de Sitter phase with a four-dimensional spherical semi-classical background geometry. The model has a transfer matrix, relating spatial geometries at adjacent (discrete lattice) times. The transfer matrix uniquely determines the theory. We show that the measurements of the scale factor of the (CDT) universe are well described by an effective transfer matrix where the matrix elements are labelled only by the scale factor. Using computer simulations we determine the effective transfer matrix elements and show how they relate to an effective minisuperspace action at all scales. 6 pages, 6 figures, contribution to the MULTIVERSE conference, Szczecin, Poland, September 2012 brief mention: http://arxiv.org/abs/1302.2440 Universality of 2d causal dynamical triangulations J. Ambjorn, A. Ipsen (Submitted on 11 Feb 2013) The formalism of Causal Dynamical Triangulations (CDT) attempts to provide a non-perturbative regularization of quantum gravity, viewed as an ordinary quantum field theory. In two dimensions one can solve the lattice theory analytically and the continuum limit is universal, not depending on the details of the lattice regularization. 11 pages http://arxiv.org/abs/1302.2285 Quantum Gravity: Meaning and Measurement John Stachel, Kaća Bradonjić (Submitted on 10 Feb 2013) A discussion of the meaning of a physical concept cannot be separated from discussion of the conditions for its ideal measurement. We assert that quantization is no more than the invocation of the quantum of action in the explanation of some process or phenomenon, and does not imply an assertion of the fundamental nature of such a process. This leads to an ecumenical approach to the problem of quantization of the gravitational field. There can be many valid approaches,.. 
We advocate an approach to general relativity based on the unimodular group, which emphasizes the physical significance and measurability of the conformal and projective structures. ... 24 pages; Submitted to Studies in the History and Philosophy of Modern Physics special Quantum Gravity issue http://arxiv.org/abs/1302.2151 Lanczos-Lovelock models of gravity T. Padmanabhan, D. Kothawala (Submitted on 8 Feb 2013) Lanczos-Lovelock models of gravity represent a natural and elegant generalization of Einstein's theory of gravity to higher dimensions. They are characterized by the fact that the field equations only contain up to second derivatives of the metric even though the action functional can be a quadratic or higher degree polynomial in the curvature tensor. Because these models share several key properties of Einstein's theory they serve as a useful set of candidate models for testing the emergent paradigm for gravity. This review highlights several geometrical and thermodynamical aspects of Lanczos-Lovelock models which have attracted recent attention. http://arxiv.org/abs/1302.2336 Constraints of NonCommutative Spectral Action from Gravity Probe B Gaetano Lambiase, Mairi Sakellariadou, Antonio Stabile (Submitted on 10 Feb 2013) Noncommutative spectral geometry offers a purely geometric explanation for the standard model of particle physics, including a geometric explanation for the origin of the Higgs field. Within this framework, gravity together with the electroweak and the strong forces are all described as purely gravitational forces on a unified noncommutative spacetime. In this letter, we infer a constraint on the parameter characterising the coupling constants at unification, by linearising the field equations in the limit of weak gravitational fields generated by a rotating gravitational source and by making use of the recent experimental data obtained by Gravity Probe B.
We find a lower bound on the Weyl term appearing in the noncommutative spectral action, namely \beta > 1/ (10^6 m), which is much stronger than any limit imposed so far to curvature squared terms. http://arxiv.org/abs/1302.2383 Surface gravities for non-Killing horizons Bethan Cropp (SISSA/INFN), Stefano Liberati (SISSA/INFN), Matt Visser (Victoria University of Wellington) (Submitted on 11 Feb 2013) There are many logically and computationally distinct characterizations of the surface gravity of a horizon, just as there are many logically rather distinct notions of horizon. Fortunately, in standard general relativity, for stationary horizons, most of these characterizations are degenerate. However, in modified gravity, or in analogue spacetimes, horizons may be non-Killing or even non-null, and hence these degeneracies can be lifted. We present a brief overview of the key issues, specifically focusing on horizons in analogue spacetimes and universal horizons in modified gravity. http://arxiv.org/abs/1302.2613 Nonviolent information transfer from black holes: a field theory parameterization Steven B. Giddings (Submitted on 11 Feb 2013) A candidate parameterization is introduced, in an effective field theory framework, for the quantum information transfer from a black hole that is necessary to restore unitarity. This in particular allows description of the effects of such information transfer in the black hole atmosphere, for example seen by infalling observers. In the presence of such information transfer, it is shown that infalling observers need not experience untoward violence. Moreover, the presence of general moderate-frequency couplings to field modes with high angular momenta offers a mechanism to enhance information transfer rates, commensurate with the increased energy flux, when a string is introduced to "mine" a black hole. Generic such models for nonviolent information transfer predict extra energy flux from a black hole, beyond that of Hawking. 
http://arxiv.org/abs/1302.2810 Four-dimensional Causal Dynamical Triangulations and an effective transfer matrix Andrzej Görlich (Submitted on 12 Feb 2013) Causal Dynamical Triangulations is a background independent approach to quantum gravity. We show that there exists an effective transfer matrix labeled by the scale factor which properly describes the evolution of the quantum universe. In this framework no degrees of freedom are frozen, but the obtained effective action agrees with the minisuperspace model. Comments: To appear in the Proceedings of the 13th Marcel Grossmann Meeting on General Relativity http://arxiv.org/abs/1302.2849 Disappearance and emergence of space and time in quantum gravity Daniele Oriti (Submitted on 12 Feb 2013) We discuss the hints for the disappearance of continuum space and time at microscopic scale. These include arguments for a discrete nature of them or for a fundamental non-locality, in a quantum theory of gravity. We discuss how these ideas are realized in specific quantum gravity approaches. Turning the problem around, we then consider the emergence of continuum space and time from the collective behaviour of discrete, pre-geometric atoms of quantum space, understanding spacetime as a kind of "condensate", and we present the case for this emergence process being the result of a phase transition, dubbed "geometrogenesis". We discuss some conceptual issues of this scenario and of the idea of emergent spacetime in general. As a concrete example, we outline the GFT framework for quantum gravity, and illustrate a tentative procedure for the emergence of spacetime in this framework. Last, we re-examine the conceptual issues raised by the emergent spacetime scenario in light of this concrete example.
http://arxiv.org/abs/1302.2850 The universal path integral Seth Lloyd, Olaf Dreyer (Submitted on 12 Feb 2013) Path integrals represent a powerful route to quantization: they calculate probabilities by summing over classical configurations of variables such as fields, assigning each configuration a phase equal to the action of that configuration. This paper defines a universal path integral, which sums over all computable structures. This path integral contains as sub-integrals all possible computable path integrals, including those of field theory, the standard model of elementary particles, discrete models of quantum gravity, string theory, etc. The universal path integral possesses a well-defined measure that guarantees its finiteness, together with a method for extracting probabilities for observable quantities. The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures. http://arxiv.org/abs/1302.2687 Massive gravity as a limit of bimetric gravity Prado Martin-Moruno (Victoria University of Wellington), Valentina Baccetti (Victoria University of Wellington), Matt Visser (Victoria University of Wellington) (Submitted on 12 Feb 2013) Massive gravity may be viewed as a suitable limit of bimetric gravity. The limiting procedure can lead to an interesting interplay between the "background" and "foreground" metrics in a cosmological context. The fact that in bimetric theories one always has two sets of metric equations of motion continues to have an effect even in the massive gravity limit. Thus, solutions of bimetric gravity in the limit of vanishing kinetic term are also solutions of massive gravity, but the contrary statement is not necessarily true. 
http://arxiv.org/abs/1302.2731 Quantum correlations which imply causation Joseph Fitzsimons, Jonathan Jones, Vlatko Vedral (Submitted on 12 Feb 2013) In ordinary, non-relativistic, quantum physics, time enters only as a parameter and not as an observable: a state of a physical system is specified at a given time and then evolved according to the prescribed dynamics. While the state can, and usually does, extend across all space, it is only defined at one instant of time, in conflict with special relativity where space and time are treated on an equal footing. Here we ask what would happen if we defined the notion of the quantum density matrix for multiple spatial and temporal measurements. We introduce the concept of a pseudo-density matrix which treats space and time indiscriminately. This matrix in general fails to be positive for timelike separated measurements, motivating us to define a measure of causality that discriminates between spacelike and timelike correlations. Important properties of this measure, such as monotonicity under local operations, are proved. Two qubit NMR experiments are presented that illustrate how a temporal pseudo-density matrix approaches a genuinely allowed density matrix as the amount of decoherence is increased between two consecutive measurements. http://arxiv.org/abs/1302.2928 Modulated Ground State of Gravity Theories with Stabilized Conformal Factor Alfio Bonanno, Martin Reuter (Submitted on 12 Feb 2013) We discuss the stabilization of the conformal factor by higher derivative terms in a conformally reduced $R+R^2$ Euclidean gravity theory. The flat spacetime is unstable towards the condensation of modes with nonzero momentum, and they "condense" in a modulated phase above a critical value of the coupling $\beta$ of the $R^2$ term.
By employing a combination of variational, numerical and lattice methods we show that in the semiclassical limit the corresponding functional integral is dominated by a single nonlinear plane wave of frequency $\approx 1/(\sqrt{\beta}\,\ell_P)$, where $\ell_P$ is the Planck length. We argue that the ground state of the theory is characterized by a spontaneous breaking of translational invariance at Planckian scales. http://arxiv.org/abs/1302.3226 Solution to the cosmological constant problem T. Padmanabhan, Hamsa Padmanabhan (Submitted on 13 Feb 2013) The current, accelerated, phase of expansion of our universe can be modeled in terms of a cosmological constant. A key issue in theoretical physics is to explain the extremely small value of the dimensionless parameter Λ L_P^2 ~ 3.4 × 10^-122, where L_P is the Planck length. We show that this value can be understood in terms of a new dimensionless parameter N, which counts the number of modes inside a Hubble volume crossing the Hubble radius, from the end of inflation until the beginning of the accelerating phase. Theoretical considerations suggest that N = 4π. On the other hand, N is related to ln(Λ L_P^2) and two other parameters which will be determined by high energy particle physics: (a) the ratio between the number densities of photons and matter and (b) the energy scale of inflation. For realistic values of (n_γ/n_m) ~ 4.3 × 10^10 and E_inf ~ 10^15 GeV, our postulate N = 4π leads to the observed value of the cosmological constant. This provides a unified picture of cosmic evolution relating the early inflationary phase to the late accelerating phase. 15 pages; 2 figures http://arxiv.org/abs/1302.3406 Spontaneous Lorentz Violation in Gauge Theories A. P. Balachandran, S. Vaidya (Submitted on 14 Feb 2013) Frohlich, Morchio and Strocchi long ago proved that Lorentz invariance is spontaneously broken in QED because of infrared effects.
We develop a simple model where consequences of this breakdown can be explicitly and easily calculated. For this purpose, the superselected U(1) charge group of QED is extended to a superselected "Sky" group containing direction-dependent gauge transformations at infinity. It is the analog of the Spi group of gravity. As Lorentz transformations do not commute with Sky, they are spontaneously broken. These abelian considerations and model are extended to non-Abelian gauge symmetries. Basic issues regarding the observability of twisted non-Abelian gauge symmetries and of the asymptotic ADM symmetries of quantum gravity are raised. http://arxiv.org/abs/1302.3833 Loop Quantum Cosmology Ivan Agullo, Alejandro Corichi (Submitted on 15 Feb 2013) This Chapter provides an up to date, pedagogical review of some of the most relevant advances in loop quantum cosmology. We review the quantization of homogeneous cosmological models, their singularity resolution and the formulation of effective equations that incorporate the main quantum corrections to the dynamics. We also summarize the theory of quantized metric perturbations propagating in those quantum backgrounds. Finally, we describe how this framework can be applied to obtain a self-consistent extension of the inflationary scenario to incorporate quantum aspects of gravity, and to explore possible phenomenological consequences. 52 pages, 5 figures. To appear as a Chapter of "The Springer Handbook of Spacetime," edited by A. Ashtekar and V. Petkov. (Springer-Verlag, at Press). http://arxiv.org/abs/1302.1496 Standard Model Higgs field and energy scale of gravity F.R. Klinkhamer (Submitted on 6 Feb 2013 (v1), last revised 14 Feb 2013 (this version, v3)) The effective potential of the Higgs scalar field in the Standard Model may have a second degenerate minimum at an ultrahigh vacuum expectation value.
This second minimum then determines, by radiative corrections, the values of the top-quark and Higgs-boson masses at the standard minimum corresponding to the electroweak energy scale. An argument is presented that this ultrahigh vacuum expectation value is proportional to the energy scale of gravity, E_{Planck} \equiv \sqrt{\hbar c^5/G_N}, considered to be characteristic of a spacetime foam. In the context of a simple model, the existence of kink-type wormhole solutions places a lower bound on the ultrahigh vacuum expectation value and this lower bound is of the order of E_{Planck}. http://arxiv.org/abs/1302.3680 Quantum Gravity on a Quantum Computer? Achim Kempf (Submitted on 15 Feb 2013) EPR-type measurements on spatially separated entangled spin qubits allow one, in principle, to detect curvature. The entanglement of the vacuum state is also affected by curvature. Here, we ask if the curvature of spacetime can be expressed entirely in terms of the spatial entanglement structure of the vacuum. This would open up the prospect that quantum gravity could be simulated on a quantum computer and that quantum information techniques could be fully employed in the study of quantum gravity. http://arxiv.org/abs/1302.3648 Causality and non-equilibrium second-order phase transitions in inhomogeneous systems A. del Campo, T. W. B. Kibble, W. H. Zurek (Submitted on 14 Feb 2013) When a second-order phase transition is crossed at a finite rate, the evolution of the system stops being adiabatic as a result of the critical slowing down in the neighborhood of the critical point. In systems with a topologically nontrivial vacuum manifold, disparate local choices of the ground state lead to the formation of topological defects. The universality class of the transition imprints a signature on the resulting density of topological defects: it obeys a power law in the quench rate, with an exponent dictated by a combination of the critical exponents of the transition.
In inhomogeneous systems the situation is more complicated, as the spontaneous symmetry breaking competes with bias caused by the influence of the nearby regions that already chose the new vacuum. As a result, the choice of the broken symmetry vacuum may be inherited from the neighboring regions that have already entered the new phase. This competition between the inherited and spontaneous symmetry breaking enhances the role of causality, as the defect formation is restricted to a fraction of the system where the front velocity surpasses the relevant sound velocity and the phase transition remains effectively homogeneous. As a consequence, the overall number of topological defects can be substantially suppressed. When the fraction of the system is small, the resulting total number of defects is still given by a power law related to the universality class of the transition, but exhibits a more pronounced dependence on the quench rate. This enhanced dependence complicates the analysis but may also facilitate experimental test of defect formation theories. http://arxiv.org/abs/1302.5265 The loop quantum gravity black hole Rodolfo Gambini, Jorge Pullin (Submitted on 21 Feb 2013) We quantize spherically symmetric vacuum gravity without gauge fixing the diffeomorphism constraint. Through a rescaling, we make the algebra of Hamiltonian constraints Abelian and therefore the constraint algebra is a true Lie algebra. This allows the completion of the Dirac quantization procedure using loop quantum gravity techniques. We can construct explicitly the exact solutions of the physical Hilbert space annihilated by all constraints. New observables living in the bulk appear at the quantum level (analogous to spin in quantum mechanics) that are not present at the classical level and are associated with the discrete nature of the spin network states of loop quantum gravity.
The resulting quantum space-times resolve the singularity present in the classical theory inside black holes. The new observables that arise suggest a possible resolution for the "firewall" problem of evaporating black holes. Comments: 4 pages http://arxiv.org/abs/1302.5273 There exist no 4-dimensional geodesically equivalent metrics with the same stress-energy tensor Volodymir Kiosak, Vladimir S. Matveev (Submitted on 21 Feb 2013) We show that if two 4-dimensional metrics of arbitrary signature on one manifold are geodesically equivalent (i.e., have the same geodesics considered as unparameterized curves) and are solutions of the Einstein field equation with the same stress-energy tensor, then they are affinely equivalent or flat. Under the additional assumption that the metrics are complete or the manifold is closed, the result survives in all dimensions >2. http://arxiv.org/abs/1302.5162 On CCC-predicted concentric low-variance circles in the CMB sky V. G. Gurzadyan, R. Penrose (Submitted on 21 Feb 2013) A new analysis of the CMB, using WMAP data, supports earlier indications of non-Gaussian features of concentric circles of low temperature variance. Conformal cyclic cosmology (CCC) predicts such features from supermassive black-hole encounters in an aeon preceding our Big Bang. The significance of individual low-variance circles in the true data has been disputed; yet a recent independent analysis has confirmed CCC's expectation that CMB circles have a non-Gaussian temperature distribution. Here we examine concentric sets of low-variance circular rings in the WMAP data, finding a highly non-isotropic distribution. A new "sky-twist" procedure, directly analysing WMAP data, without appeal to simulations, shows that the prevalence of these concentric sets depends on the rings being circular, rather than even slightly elliptical, numbers dropping off dramatically with increasing ellipticity.
This is consistent with CCC's expectations; so also is the crucial fact that whereas some of the rings' radii are found to reach around $15^\circ$, none exceed $20^\circ$. The non-isotropic distribution of the concentric sets may be linked to previously known anomalous and non-Gaussian CMB features.
# Work done to move spring displacement

1. Oct 15, 2011

### gunster

1. The problem statement, all variables and given/known data
A spring has a relaxed length of 5 cm and a stiffness of 95 N/m. How much work must you do to change its length from 2 cm to 10 cm?
k = 95, Lnull = 0.05, delta x = 0.1 - 0.02 = 0.08

2. Relevant equations
F = -kx
W = Fd cos θ

3. The attempt at a solution
I honestly have tried everything and am beginning to think I am way off the mark and missed something. But what I tried was W = Fd cos θ where F = -kx. Therefore, since the force changes direction once the displacement is past the relaxed spring length, I used:
W = -95 * (0.05-0.02) * (0.05-0.02) cos 0 + -95 * (0.1-0.5) * (0.1-0.5) cos 180
But that was apparently completely wrong. Any help please?

2. Oct 15, 2011

### DukeLuke

$W = F d \cos \theta$ is only valid if the force is constant over the distance (not a function of x, as in this case). Your force is a function of x, so you will have to integrate to get the work. It's possible you can solve the problem with an energy approach if integrals are beyond your course material.
$$W = \int F(x) dx$$

3. Oct 15, 2011

### gunster

EDIT: nvm, realized my mistake; I was supposed to subtract. Thanks a lot for reminding me force is not constant XD

Last edited: Oct 15, 2011
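DukeLuke's suggestion can be checked numerically. The sketch below (my own, not from the thread) evaluates $W = \int kx\,dx = \tfrac{k}{2}(x_2^2 - x_1^2)$, with displacements measured from the relaxed length; variable names are mine.

```python
# Work to change an ideal Hooke spring's length between two values.
# Displacements are measured from the relaxed length, so the work done
# on the spring is W = (k/2)(x2^2 - x1^2), whatever the sign of x1.

k = 95.0        # stiffness, N/m
L0 = 0.05       # relaxed length, m
x1 = 0.02 - L0  # initial displacement: -0.03 m (compressed)
x2 = 0.10 - L0  # final displacement:   +0.05 m (stretched)

W = 0.5 * k * (x2**2 - x1**2)
print(f"W = {W:.4f} J")  # W = 0.0760 J
```

Note that the compressed starting point contributes with a *minus* sign, which is the subtraction gunster mentions in the last post.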
## ICPC ECNA 2005 G - Swamp Things View as PDF Points: 7 Time limit: 1.0s Memory limit: 256M Problem type Allowed languages Ada, Assembly, Awk, Brain****, C, C#, C++, COBOL, CommonLisp, D, Dart, F#, Forth, Fortran, Go, Groovy, Haskell, Intercal, Java, JS, Kotlin, Lisp, Lua, Nim, ObjC, OCaml, Octave, Pascal, Perl, PHP, Pike, Prolog, Python, Racket, Ruby, Rust, Scala, Scheme, Sed, Swift, TCL, Text, Turing, VB, Zig ##### ICPC East Central NA Regional Contest 2005, Problem G Hugh F. Oh, in his never-ending quest to prove the existence of extraterrestrials, has gotten hold of a number of nighttime photographs taken by a research group that is examining glowing swamp gas. Hugh wants to see if any of the photos show, not swamp gas, but Little Grey Men in glowing suits. The photographs consist of bright dots appearing against a black background. Unfortunately, at the time the photos were taken, trains were travelling through the area (there is a train trestle over the swamp), and occasional lights from the train windows also appear in the photographs. Hugh, being a fastidious researcher, wants to eliminate these spots from the images. He can't tell from the photos exactly where the tracks are, or from what direction the photos were taken, but he knows that the tracks in that area are perfectly straight, so he's decided on the following approach: he will find the line with the maximum number of spots lying on it and, if there are four or more spots on the line, he will eliminate those points from his calculations, assuming that those are windows on the train. If two or more lines have the maximum number of points, Hugh will just randomly select one such set and delete it from the photo (he's not all that fastidious - after all, he believes in Little Grey Men). If there are fewer than four points lying along a common line, Hugh will assume that there is no train in the photograph and won't delete any points. Please write a program for him to process a set of photographs. 
#### Input Specification

There will be a series of test cases. Each test case is one photograph described by a line containing a positive integer , the number of distinct spots in the photograph, followed by lines containing the integer coordinates of the spots, one pair per line. All coordinates are between and . The last photo description is followed by a line containing a zero, marking the end of the input. This line should not be processed.

#### Output Specification

For each test case, output the photo number followed by the number of points eliminated from the photograph. Imitate the sample output below.

#### Sample Input

    6
    0 1
    0 2
    1 2
    2 2
    4 5
    5 6
    4
    3 5
    4 4
    6 5
    7 4
    0

#### Sample Output

    Photo 1: 4 points eliminated
    Photo 2: 0 points eliminated
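One natural way to attack this (a sketch of my own, not the judges' reference solution) is a quadratic slope-counting pass: for each spot, bucket every other spot by its reduced direction vector and take the largest bucket, which gives the largest collinear set.

```python
from math import gcd
from collections import defaultdict

def points_eliminated(points):
    """Size of the largest collinear set if it has >= 4 spots, else 0."""
    best = 1
    for i, (x1, y1) in enumerate(points):
        slopes = defaultdict(int)
        for x2, y2 in points[i + 1:]:
            dx, dy = x2 - x1, y2 - y1
            g = gcd(dx, dy)                  # reduce the direction vector
            dx, dy = dx // g, dy // g
            if dx < 0 or (dx == 0 and dy < 0):
                dx, dy = -dx, -dy            # canonical sign
            slopes[(dx, dy)] += 1
        if slopes:
            best = max(best, 1 + max(slopes.values()))
    return best if best >= 4 else 0

# the two photos from the sample input
print(points_eliminated([(0, 1), (0, 2), (1, 2), (2, 2), (4, 5), (5, 6)]))  # 4
print(points_eliminated([(3, 5), (4, 4), (6, 5), (7, 4)]))                  # 0
```

In the first photo the line y = x + 1 passes through four spots, so all four are eliminated; the second photo has no four collinear spots, so nothing is removed.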
Line Integral over Vector Field?

1. May 15, 2009

Not exactly a homework problem, a problem from a sample test. I'm boning up for my qualifying exam.

1. The problem statement, all variables and given/known data

Consider the vector field: F = (ax + by)i + (cx + dy)j where a, b, c, d are constants. Let C be the circle of radius r centered at the origin and going around the origin one turn in the mathematically positive direction starting from the positive x-axis. A parameterization for C is x = r cos t, y = r sin t, (z = 0), where $0 \leq t \leq 2\pi$. Find the integral $\int_{C} \mathbf{F} \cdot d\mathbf{R}$ for any values of a, b, c, d (the answer may depend on a, b, c, d).

2. Relevant equations

3. The attempt at a solution

The rust is killing me. I remember that I need line integrals to solve the problem, but the setup isn't coming out of the fog.

2. May 15, 2009

Dick

In terms of your parametrization of C, dR is (-r*sin(t)dt, r*cos(t)dt). Do you see why? Now express the vector F in terms of t and take the dot product. You'll wind up with two integrals dt to do. Any clearer?
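Carrying Dick's hint through: F·dR = (ar cos t + br sin t)(−r sin t) dt + (cr cos t + dr sin t)(r cos t) dt, the cross terms integrate to zero over a full turn, and the sin² and cos² terms each contribute π, leaving π r²(c − b). A quick numerical sanity check of that closed form (the helper name is mine):

```python
from math import cos, sin, pi

def line_integral(a, b, c, d, r, steps=100000):
    """Midpoint-rule evaluation of the line integral of
    F = (a*x + b*y, c*x + d*y) around x = r*cos(t), y = r*sin(t)."""
    total, dt = 0.0, 2 * pi / steps
    for k in range(steps):
        t = (k + 0.5) * dt
        x, y = r * cos(t), r * sin(t)
        dx, dy = -r * sin(t) * dt, r * cos(t) * dt   # dR = (x'(t), y'(t)) dt
        total += (a * x + b * y) * dx + (c * x + d * y) * dy
    return total

# closed form works out to pi * r**2 * (c - b)
print(abs(line_integral(1.0, 2.0, 5.0, 3.0, 2.0) - pi * 2.0**2 * (5.0 - 2.0)) < 1e-6)  # True
```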
# Optimal strategy for Jackpot Rock Paper Scissors

Jackpot Rock Paper Scissors is a gambling variant of Rock Paper Scissors, wherein ties result in the wager being carried forward into a jackpot. If a player plays the same hand (rock, paper or scissors) 3 or more consecutive turns and wins on that turn, they win the total jackpot, and the game ends. Each turn, the player must wager $1. The possible outcomes:

• Winning: You get $2 (your money back, and the opponent's)
• Losing: You lose the $1 bet amount (it's given to the opponent)
• Tie: You and your opponent each lose $1. The jackpot increases by 2 dollars.

To prevent problem gambling (and infinite games), if 1000 turns elapse and the game has not ended, the jackpot is split 50-50. The goal of the game is to maximize your own earnings -- not necessarily to win the jackpot. Running a simple simulation, playing purely randomly -- I get these statistics:

    --- Random vs Random (over 1000000 games) ---
    Expected Return: -0.0212
    Average turns: 48.355552
    Average jackpot: 32.245392
    Biggest jackpot: 440
    Longest Game: 600 turns
    Biggest Win: 210
    ----------------------

However, I have trivially been able to beat pure random play in my simulations (give more bias to a play that would win you the jackpot), and then trivially been able to beat that strategy. What I have not been able to do, however, is come up with a strategy that I can not beat myself. So I put it to you: for such a game, what would be the optimal strategy?

• how does one win the jackpot? May 5 '14 at 21:13
• By winning with a hand that has been played 3 or more consecutive times. I have updated the description to make it more clear May 5 '14 at 22:11
• If a player makes the same move twice in a row, it seems like the opponent should almost always play to defeat that move the third time to prevent the jackpot. May 7 '14 at 20:47
• That's my impression too. And that means that all games will continue for 1000 turns and be null. I don't see a way to increase your earnings.
– user65203 May 7 '14 at 20:57
• @AlexZorn .. that strategy can easily be beaten. If I ever play two in a row (say 2 rock), and you are using that strategy, I know you will play paper. In which case, I will play scissors. I might not get the jackpot, but I'll make extra money. Remember, the goal is to maximize the amount of money you win ... not necessarily win the jackpot. May 8 '14 at 1:08

Call the strategies of rock, paper, and scissors A, B, and C: C beats B beats A beats C. Label the possible positions in this game with $2(n-1)$ dollars in the pot as either: $T_{n-1}$ if the previous result was a tie; $G_{n-1}$ if player I has a winning streak of 1 using strategy A (where A could be any of rock, paper or scissors); or $H_{n-1}$ if player I has a winning streak of 2 using strategy A (and will thus take the pot if he wins the next round using strategy A). We are using $n-1$ for the amount in the pot rather than $n$ for algebraic convenience when presenting the sub-game matrices. (Clearly, this omits the cases where player II has a winning streak, but the values of those positions $G^{-}_{n-1}$ and $H^{-}_{n-1}$ are just the negatives of the values of the corresponding $G_{n-1}$ and $H_{n-1}$ positions.)

Let the value of any position in this game be the optimum game value minus the result that would happen if the players were to quit and immediately split the pot. Then by symmetry $V(T_{n-1}) = 0$ for all $n > 0$, and the optimal strategy for any $T_{n-1}$ position is the trivial one of choosing A, B, or C each with probability 1/3. Because the value of $T_{n-1}$ is zero, the values and strategies in the position analyses are unaffected if we say that we don't care how the $2(n-1)$ dollars got into the pot, and that any tie ends the game with the players splitting the pot. So the game overall, for some value of $n-1$, is characterized by which of 4 positions you are at.
From $G_{n-1}$ the game may end (by a tie, in which case the players restart the game at $T_n$, but since that has zero value we don't care), transition to $H_{n-1}$ with player I gaining a dollar, or transition to $G^{-}_{n-1}$ with player II gaining a dollar. From $H_{n-1}$ the game may end by a tie, transition to $G_{n-1}$ with player I gaining a dollar having won using strategy B or C, transition to $G^{-}_{n-1}$ with player II gaining a dollar, or end with player I winning the pot (gaining $n$ dollars altogether) by winning using strategy A.

For $n=1$, with no money in the pot, there is no advantage at all to having a winning streak, the position values are all zero, and the optimal strategies are the trivial equal-probability strategies. All the interesting features are in games where there is something in the pot. The two game matrices (for $n>1$) are:
$$G_{n-1} = \left( \begin{array}{ccc} 0 & -g & h \\ +g & 0 & -g \\ -g & g & 0 \end{array} \right)$$
(where for convenience we introduce $g \equiv 1+V(G_{n-1})$ and $h \equiv 1+V(H_{n-1})$) and
$$H_{n-1} = \left( \begin{array}{ccc} 0 & -g & n \\ +g & 0 & -g \\ -g & g & 0 \end{array} \right)$$
To solve $G_{n-1}$ we do the usual mantra of subtracting columns and taking determinants:
$$G_{n-1}: \begin{array}{ccc|cc|c} 0 & -g & h & g & -h & 3g^2 \\ +g & 0 & -g & g & 2g & g^2 + 2hg \\ -g & g & 0 & -2g & -g & 2g^2 + hg \\ \hline -g & -g & h+g & & & \\ g & -2g & h & & & \\ \hline 2g^2 + hg & g^2 + 2hg & 3g^2 & & & \end{array}$$
The game value is $\frac{hg-g^2}{3h+6g}$. So
$$g = 1 + V(G_{n-1}) = 1 + \frac{hg-g^2}{3h+6g}$$
Similarly, we solve $H_{n-1}$:
$$H_{n-1}: \begin{array}{ccc|cc|c} 0 & -g & n & g & -n & 3g^2 \\ +g & 0 & -g & g & 2g & g^2 + 2ng \\ -g & g & 0 & -2g & -g & 2g^2 + ng \\ \hline -g & -g & n+g & & & \\ g & -2g & n & & & \\ \hline 2g^2 + ng & g^2 + 2ng & 3g^2 & & & \end{array}$$
The game value is $\frac{ng-g^2}{3n+6g}$.
So
$$h = 1 + \frac{ng-g^2}{3n+6g}$$
Before working with these two equations, we can look at the situation for $n$ very large. There, $h = 1 + g/3 + O(1/n)$, and we can substitute to find that
$$\begin{array}{l} g = \frac{15+\sqrt{1053}}{46} \approx 1.031521 \\ V(G_{\infty}) \approx 0.031521 \\ h = 1 + g/3 \approx 1.34384 \\ V(H_{\infty}) \approx 0.34384 \end{array}$$
That is, with a very large pot, if player I has a winning streak of 2 wins playing strategy A, player II will very rarely (probability $\frac{3g}{3n+6g}$) risk using strategy C, which could result in losing the whole pot. Player I will almost always choose strategies B (about 2/3 of the time) or C (about 1/3), and the game is favorable to player I with a value of about $+\frac{1}{3}$. And if player I has just a 1-game winning streak, then the roughly 1.34 reward in the upper right hand corner (going over to $H_{n-1}$ with another A win) biases the game in player I's favor, but only by about 0.03.

Okay, now look at the case for $n$ not large enough to ignore $1/n$ effects. Substituting the expression for $h$ into the expression for $g$ we find
$$40g^3 + (23n-21)g^2 - (15n+18)g - 9n = 0$$
So for example, if there is $2 \times 1$ dollars in the pot ($n = 2$) then the value of the game to a player with a 1-game winning streak is $V(G_1) \approx 0.008118$ and the value of a 2-game winning streak is $V(H_1) \approx 0.082991$. In game $G_1$ the strategy for player I is to choose A (the winning streak choice), B (which beats A), and C in ratio
$$3g : g + 2h : 2g + h = 3.024 : 3.172 : 3.099$$
and in game $H_1$ the ratios are
$$3g : g + 2n : 2g + n = 3.024 : 5.008 : 4.016$$
The game values for $n = 11$ ($2 \times 10$ dollars in the pot) are $V(G_{10}) = 0.024413$ and $V(H_{10}) = 0.261048$; the strategies for player I in game $H_{10}$ are in ratio
$$3g : g + 2n : 2g + n = 3.073 : 23.024 : 13.049 \approx 1 : 7 : 4$$
and for player II they are in ratio
$$2g + n : g + 2n : 3g = 13.049 : 23.024 : 3.073 \approx 4 : 7 : 1$$
for A, B and C in that order.
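The quoted values can be reproduced numerically. A small sketch (my own, not part of the answer): bisect the cubic on [1, 2], which brackets the relevant root since the cubic is negative at g = 1 and positive at g = 2 for n > 1, then recover h.

```python
def streak_values(n):
    """V(G_{n-1}) and V(H_{n-1}): find the root g > 1 of
    40g^3 + (23n-21)g^2 - (15n+18)g - 9n = 0 by bisection,
    then h = 1 + (n*g - g^2)/(3n + 6g)."""
    f = lambda g: 40 * g**3 + (23 * n - 21) * g**2 - (15 * n + 18) * g - 9 * n
    lo, hi = 1.0, 2.0          # f(1) = 1 - n < 0 and f(2) = 200 + 53n > 0
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    g = (lo + hi) / 2
    h = 1 + (n * g - g * g) / (3 * n + 6 * g)
    return g - 1, h - 1

print(streak_values(2))    # about (0.008118, 0.082991)
print(streak_values(11))   # about (0.024413, 0.261048)
```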
This question is the real question asked on StackOverflow. I'm here to review my answer and see how I can optimize it.

### Here is the answer text:

This is a basic approach, but it proposes a proof of concept of what might be done. I do it using Bash along with the usage of the GCC -fsyntax-only option. Here is the bash script:

    #!/bin/bash
    while IFS='' read -r line || [[ -n "$line" ]]; do
        LINE=$(echo "$line" | grep -oP "(?<=//).*")
        if [[ -n "$LINE" ]]; then
            echo "$LINE" | gcc -fsyntax-only -xc -
            if [[ $? -eq 0 ]]; then
                sed -i "/$LINE/d" ./$1
            fi
        fi
    done < "$1"

The approach I followed here was reading each line from the code file, then grepping the text after the // delimiter (if it exists) with the regex (?<=//).* and passing that to the gcc -fsyntax-only command to check whether it's a correct C/C++ statement or not. Notice that I've used the argument -xc - to pass the input to GCC from stdin (see my answer here to understand more). An important note: the c in -xc - specifies the language, which is C in this case; if you want it to be C++ you shall change it to -xc++. Then, if GCC was able to successfully parse the statement (i.e., it's a legitimate C/C++ statement), I directly remove it using sed -i from the file passed. Running it on your example (but after removing <- commented code from the third line to make it a legitimate statement):

    // Those parameters control foo and bar... <- valid comment
    int t = 5; // int t = 10;
    int k = 2*t;

Output (in the same file):

    // Those parameters control foo and bar... <- valid comment
    int t = 5;
    int k = 2*t;

(if you want to add your modifications in a different file, just remove the -i from sed -i) The script can be called just like: ./script.sh file.cpp; it may show several GCC errors while it runs, but that is the expected behavior.

• echo | grep is unwarranted. bash understands regular expressions ("$line" =~ regex), and can do simple substitutions: line=${line#[[:space:]]*\/\/} removes leading whitespace, followed by the comment, just what we are after.
• Replacing the file while reading it looks suspicious. I recommend to have a destination file, and copy valid lines (and don't copy undesired ones). A perk benefit is that forking sed is not needed anymore.

A side note: the script makes a false positive in cases like

    // Notice that
    // some_valid_c_code; // doesn't work, because etc

That part of the comment would be recognized as dead code, and the output will be

    // Notice that
    // doesn't work, because etc

• Your side note, you mean the code is going to remove all of the comments? – Andrew Naguib Jan 11 at 20:40
• @AndrewNaguib No. See edit – vnp Jan 11 at 20:41
• Yeah, true. But, who would write a comment in such a way (i.e., "some_valid_c_code;")? :) – Andrew Naguib Jan 11 at 20:42
• @AndrewNaguib Me, for starters. And I've seen it in the wild. That is quite a good way to explain some not-so-obvious design decisions. – vnp Jan 11 at 20:47
• @AndrewNaguib Thanks for pointing out the typo; fixed. The operator returns 0 or 1 depending on match success or failure. Use it in if [[ "$line" =~ regex ]]. – vnp Jan 11 at 21:12

### Look for corner cases

This command is fragile, there are several ways in which it can malfunction:

    sed -i "/$LINE/d" ./$1

For example:

• If the dead code contains /, it will break the sed command, because / within /.../d must be escaped.
• It doesn't target accurately the line to remove. It removes all lines that match $LINE. If there are lines in the file that are similar enough to a dead code that appears somewhere else, they will be removed too.

Both of these problems can be fixed by tracking the line numbers that should be deleted, and then using those with the d command of sed, instead of pattern matching.

The pattern "(?<=//).*" used by the grep is not strict enough, and may incorrectly match lines that are not dead code, for example:

    int x = 1; // some comment
    char * s = "foo // bar";

### Double-quote variables used in command line arguments

How many bugs can you spot here?
    while IFS='' read -r line || [[ -n "$line" ]]; do
        somecmd ./$1
    done < "$1"

I see at least:

• It doesn't handle absolute paths correctly. When $1 is an absolute path, then ./$1 and "$1" are likely different files, except in the lucky case when the working directory is /.
• ./$1 is not properly quoted, so if $1 contains spaces or shell meta-characters, the command will fail.

The solution is simple: quote properly and use the same path consistently: somecmd "$1". In addition, it's usually a good idea to assign command line arguments to variables with descriptive names at the top of a script, and then refer to them by that name, instead of having $1 scattered at multiple places in the script.

### Use the exit code of commands directly in conditional statements

    somecmd
    if [[ $? -eq 0 ]]; then
        ...
    fi

You can write:

    if somecmd; then
        ...
    fi

Simpler and quite natural!

### Avoid sed -i ... somefile in a loop

Repeatedly rewriting the content of a file in a loop looks dangerous.

### Use here-strings

Usually echo "..." | somecommand can be rewritten as somecommand <<< "...", using here-strings, and saving an echo and a pipe. Then, depending on somecommand, better options may be available, such as using [[ ... =~ ... ]] for pattern matching instead of grep (as @vnp mentioned), or running grep on a larger outer scope (as demonstrated in the previous point).

### Alternative implementation

Consider this alternative implementation that fixes the above issues and bad practices.

    #!/usr/bin/env bash

    input=$1
    sed_commands=()
    line_num=1

    while IFS= read -r line || [[ "$line" ]]; do
        if [[ "$line" =~ ^[[:space:]]+// ]]; then
            if gcc -fsyntax-only -xc - <<< "$line"; then
                sed_commands+=(-e "${line_num}d")
            fi
        fi
        ((line_num++))
    done < "$input"

    sed "${sed_commands[@]}" -i "$input"

The weakness of this alternative is that if there are enough dead code lines in the input, then the maximum argument count limit of the shell may be reached in the final sed command.
When that becomes a realistic issue, it can be optimized to handle that.
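The line-number-tracking idea generalizes beyond the shell. Here is a hedged Python sketch of the same dead-comment removal, where a crude ends-with-a-semicolon heuristic stands in for the gcc -fsyntax-only check (the function name and heuristic are mine, not part of the reviewed script):

```python
import re

def strip_dead_comments(text, looks_like_code=None):
    """Drop '//' comments whose body looks like a C statement.
    looks_like_code is a stand-in for a real syntax check such as
    piping the comment body through `gcc -fsyntax-only -xc -`."""
    if looks_like_code is None:
        looks_like_code = lambda s: s.rstrip().endswith((';', '}'))
    out = []
    for line in text.splitlines():
        m = re.search(r'//(.*)$', line)
        if m and looks_like_code(m.group(1)):
            line = line[:m.start()].rstrip()
            if not line:        # the whole line was a dead-code comment
                continue
        out.append(line)
    return '\n'.join(out)

src = '// Those parameters control foo and bar\nint t = 5; // int t = 10;\nint k = 2*t;'
# the dead trailing comment on the second line is removed; real comments survive
print(strip_dead_comments(src))
```

Because edits are collected in memory and written once, the sed quoting and repeated-rewrite issues above disappear; the false positive vnp raises (a real comment that itself ends in a semicolon) still applies.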
A common fixed point of weakly commuting multivalued mappings. (English) Zbl 0664.54031

Let (X,d) be a metric space and let CB(X) denote the family of all closed and bounded subsets of X. It is well known that CB(X) is a metric space with the Hausdorff metric H. Following the author we call the mappings f and g from X into CB(X) weakly commuting if $H(fg(x), gf(x)) \leq H(f(x), g(x))$ for every $x \in X$, provided H(fg(x), gf(x)) is defined. A theorem on a common fixed point for two weakly commuting maps is proved.

Reviewer: L. Gorniewicz

MSC:
54H25 Fixed-point and coincidence theorems (topological aspects)
47H10 Fixed-point theorems

Keywords: weakly commuting
## 7.3 The Symmetric Approximation: f'(x) ~ (f(x+d) - f(x-d)) / (2d)

Using this formula for the "d-approximation" to the derivative is much more efficient than using the naive formula $\frac{f(x+d) - f(x)}{d}$.

Why is it better? The answer is that the "symmetric formula" is exactly right if f is a quadratic function, which means that the error made by it is proportional to $d^2$ or less as $d$ decreases. The naive formula is wrong for quadratics and makes an error that is proportional to $d$.

How come? Suppose $f$ is a quadratic: $f(x) = ax^2 + bx + c$. Then we get $f(x+d) = a(x+d)^2 + b(x+d) + c$ and
$$\frac{f(x+d) - f(x-d)}{2d} = \frac{4axd + 2bd}{2d} = 2ax + b$$
On the other hand, we get
$$\frac{f(x+d) - f(x)}{d} = 2ax + b + ad$$
This means that the symmetric approximation is exact for any value of $d$ for any quadratic; no need to make $d$ small; and this is not true for the asymmetric formula.

In general, if our function being differentiated, $f(x+d)$, can be expanded in a power series in $d$, the first error in our symmetric formula comes from cubic terms, and will be proportional to $d^2$. The reason this happens is that the $d^2$ term in $f(x+d) - f(x-d)$ cancels itself out, being the same in both terms. The same thing happens for all even power terms, by the way; the errors in this approximation to the derivative all come from odd power terms in the power series expansion of $f$ about $x$.

Thus, if we replace $d$ by $d/2$, the error in the symmetric approximation will decline by a factor of 4, while the asymmetric formula has an error which declines only by a factor of 2 when we divide $d$ by 2. And so, the symmetric formula approaches the true answer for the derivative much faster than the naive asymmetric one does, as we decrease $d$. Now we ask: can we get even faster convergence?
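These rates are easy to observe numerically. Take f(x) = x³ (a cubic, so the symmetric rule is no longer exact) and halve d; the symmetric error drops by about 4 and the naive error by about 2:

```python
def sym(f, x, d):
    return (f(x + d) - f(x - d)) / (2 * d)

def asym(f, x, d):
    return (f(x + d) - f(x)) / d

f = lambda x: x**3          # f'(1) = 3 exactly
e_sym  = [abs(sym(f, 1.0, d) - 3.0)  for d in (0.1, 0.05)]
e_asym = [abs(asym(f, 1.0, d) - 3.0) for d in (0.1, 0.05)]
print(e_sym[0] / e_sym[1])    # close to 4: error shrinks like d^2
print(e_asym[0] / e_asym[1])  # close to 2: error shrinks like d
```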
## Applicationes Mathematicae

2007 | 34 | 2 | 143-167

### Existence of solutions to the nonstationary Stokes system in $H_{-μ}^{2,1}$, μ ∈ (0,1), in a domain with a distinguished axis. Part 2. Estimate in the 3d case

Abstract (EN): We examine the regularity of solutions to the Stokes system in a neighbourhood of the distinguished axis under the assumptions that the initial velocity v₀ and the external force f belong to some weighted Sobolev spaces. It is assumed that the weight is the (-μ)th power of the distance to the axis. Let $f ∈ L_{2,-μ}$, $v₀ ∈ H_{-μ}¹$, μ ∈ (0,1). We prove an estimate of the velocity in the $H_{-μ}^{2,1}$ norm and of the gradient of the pressure in the norm of $L_{2,-μ}$. We apply the Fourier transform with respect to the variable along the axis and the Laplace transform with respect to time. Then we obtain two-dimensional problems with parameters. Deriving an appropriate estimate with a constant independent of the parameters and using estimates in the two-dimensional case yields the result. The existence and regularity in a bounded domain will be shown in another paper.

Published: 2007, pages 143-167.

Affiliations:
• Institute of Mathematics, Polish Academy of Sciences, Śniadeckich 8, 00-956 Warszawa, Poland
• Institute of Mathematics and Cryptology, Military University of Technology, Kaliskiego 2, 00-908 Warszawa, Poland
## Sunday, September 11, 2016

### Mapping out the train routes in India

I had nothing better to do on a Sunday morning so I made a map. If you have been following my blog, you will know that I am trying to make sense of the Indian Railways: why trains run late and whether there's any way we can learn about the cause of the delays, based on patterns in the delays. Towards that goal, I posted two blog posts that described how and where I was collecting my data from and a first look at the data I was gathering. Even from the preliminary look, it can be seen that there are specific routes/stations that are causing a delay along a train's route. And the delays were induced on multiple runs at the same station, meaning that it wasn't simply a one time thing.

Moving on, another way to look at the problem is to understand how crowded the railway lines are. By mapping all train movement in India, we will be able to understand how crowded specific routes are and if they're crowded at a specific time of the day. By adding delay information from multiple trains to the map, we will also be able to prove with good certainty which routes are crowded and are leading to large delays. I took a small step in that direction today by mapping out the routes of a few trains. Below you will find one such map/image, where the red lines correspond to the routes of a few trains. Note that this isn't the complete roster of trains that are run by the Indian Railways; it is but a very very small subset. But the process by which such a map can be made is scalable; I used a small subset just to make this proof of concept. While this image isn't interactive, running the code actually creates an interactive image, which displays station codes when the user hovers over the route. I modified an available example that produced a USA flight paths map using plotly. The code can be found below.
    import glob
    import plotly.plotly as py
    import pandas as pd

    files = glob.glob('routes/*')

    df_stations = pd.DataFrame(columns=['station', 'lat', 'long'])
    df_station_paths = pd.DataFrame(columns=['start_lon', 'start_lat', 'end_lon', 'end_lat'])

    for file in files:
        # read one route file: station code, latitude, longitude per row
        df_stations_temp = pd.read_csv(file, names=['station', 'lat', 'long'], na_values=[0.0])
        df_stations_temp = df_stations_temp.dropna(axis=0, how='any')
        df_station_paths_temp = pd.DataFrame(
            [[df_stations_temp.iloc[i]['long'], df_stations_temp.iloc[i]['lat'],
              df_stations_temp.iloc[i+1]['long'], df_stations_temp.iloc[i+1]['lat']]
             for i in range(len(df_stations_temp)-1)],
            columns=['start_lon', 'start_lat', 'end_lon', 'end_lat'])
        df_stations = pd.concat([df_stations, df_stations_temp], ignore_index=True)
        df_station_paths = pd.concat([df_station_paths, df_station_paths_temp], ignore_index=True)

    stations = [dict(
        type = 'scattergeo',
        locationmode = 'India',
        lon = df_stations['long'],
        lat = df_stations['lat'],
        hoverinfo = 'text',
        text = df_stations['station'],
        mode = 'markers',
        marker = dict(
            size = 2,
            color = 'rgb(255, 0, 0)',
            line = dict(
                width = 3,
                color = 'rgba(68, 68, 68, 0)'
            )
        ))]

    station_paths = []
    for i in range(len(df_station_paths)):
        station_paths.append(dict(
            type = 'scattergeo',
            locationmode = 'India',
            lon = [df_station_paths['start_lon'][i], df_station_paths['end_lon'][i]],
            lat = [df_station_paths['start_lat'][i], df_station_paths['end_lat'][i]],
            mode = 'lines',
            line = dict(
                width = 1,
                color = 'red',
            ),
            opacity = 1.,
        ))

    layout = dict(
        showlegend = False,
        height = 1000,
        geo = dict(
            scope = 'India',
            projection = dict(type = 'azimuthal equal area'),
            showland = True,
            landcolor = 'rgb(243, 243, 243)',
            countrycolor = 'rgb(204, 204, 204)',
        ),
    )

    fig = dict(data=station_paths+stations, layout=layout)
    py.iplot(fig, filename='d3-station-paths')

To briefly go over the code, the route files for individual trains were stored in routes/train_number.csv and each file contained three columns - station code along route, lat, long.
Note that the locations and station codes along the route of a specific train were acquired using RailwayAPI. From each file, the above code first creates a Pandas DataFrame, which is then manipulated to create a new DataFrame that contains the train's path/route. These two DataFrames are finally modified and passed on to plotly, which creates the above map.

The map is far from perfect. For starters, like I mentioned earlier, it is but a small subset of all trains available. Secondly, the lat/long data seems to be faulty, because there seem to be stray lines that deviate from a train's actual route in the map. I am trying to look for a better source of information than RailwayAPI. I am trying to get a list of all trains run by Indian Railways. I am trying to find an official source of information, from the Indian Railways. I am trying to find an easier way to make such a map, and make it interactive. If there's something I can do to make the map/processing better, point it out to me! I'd love to hear comments/feedback. Until the next time ...

## Sunday, August 21, 2016

### A tale of two trains : The Indian Railways

Last week, I started collecting the running status of a few (<10) trains every day. I wrote a blog post last week about how I was collecting the data, if you want to know more. Now, let's look at what I've collected so far. (Open the following images in a new page to take a better look at which stations are the most problematic and understand the general trend better.)

Train 18029 - runs from Lokmanyatilak (Mumbai) to Shalimar. This train is mostly representative of what happens with the rest of the trains discussed below. There are stations en route where the train makes up for lost time and then it loses any gains made. But, for the most part, I guess the delays are acceptable, given that they're within an hour of expected arrival time.

Train 12809 - runs from Mumbai CST to Howrah JN.
This train was a little surprising because it's different compared to the rest of the lot. The train almost always makes up for delays at the start of the route. There are a few places where there's a drastic reduction in delay but the gains are offset a few stations later (thrice)!

Train 12322 - runs from Mumbai CST to Howrah JN. This train displays two interesting trends. The first is that even though there are stations en route where the train makes up for lost time (twice), it gets delayed again almost immediately. The second interesting trend is that beyond a certain point en route, the delay persists, and in 2/4 cases, the train can't make up for lost time.

Train 12616 - runs from Delhi to Chennai Central. The interesting thing to note here is that there are points en route where the train makes up for lost time - but it gets delayed again almost immediately, negating any reduction in delay.

Train 12424 - runs from New Delhi to Dibrugarh Town via Guwahati. This train is just sad. At no point en route does it show any prospect of making up lost time, if it's late.

Train 14056 - runs from Delhi to Dibrugarh via Guwahati. The running status of the train looks a little weird, doesn't it? After a certain point, the delays become very predictable instead of random. That is because I was asking for the running status of the train at the wrong time - when the train was still en route. Of course, if I ask for the running status while a train is en route, all I will get is the estimated delay at future stations. Which is the reason behind long horizontal lines followed by dips.

Train 15910 - runs from Lalgarh JN to Dibrugarh via Guwahati. The running status of the above train shows the same behavior as the earlier one (14056), i.e. asking for the running status while the train is still en route WILL give me faulty estimates of delay beyond the current position of the train.
And of course, it's in the Indian Railways' best interests to estimate no delay instead of providing more accurate estimates. That's all for now folks. I know, we didn't learn too much about why the delays are being caused or what routes lead to the most delay, but we'll get there. I think. I'll try. I'll post the code I used to analyze the data and generate the plots tomorrow. If you can glean anything more from the plots above or have any other comments that you'd like to pass on to me, I'm all ears.

## Sunday, August 14, 2016

### Dude, Where's my Train? - The Indian Railways.

I have a friend who was traveling from New Delhi to Guwahati and due to certain constraints, she had to take train number 12502. She had made further plans of traveling from Guwahati based on the assumption that she would reach Guwahati at the expected arrival time. You all know where this is going. The train was late and she had to make changes to her travel plans. We all have either known someone who went through this or personally gone through this ourselves.

Usually, trains run by the Indian Railways are not more than an hour late. There are ones that run perfectly on time too. And then there are also trains that are multiple hours late, sometimes even > 6! Which I don't think is acceptable. And because I had nothing better to do on a Sunday, I set about to do something about it. If you guys have read a few of my earlier blog posts, you know where this is going. I'm going to write some code that will help me automate something. Or get some data. Or make a plot or a map. This too will be more of the same, in the context of trains run by the Indian Railways.

Let's start with something simple - trains that are cancelled. Every day, the Indian Railways announces Fully/Partially Cancelled Trains for that day. It doesn't state a reason as to why they were cancelled. And I never really had a reason to check this list before. Until now. Let me first state what I want to do.
I want to see if there are trains that have been cancelled every day. And then look for reasons as to why they might be. Think about it. Why would the Indian Railways even list a train if it's being cancelled every day? Are there costs involved with taking a train off of service or rotation? Are there costs involved with maintaining a train, even though it's being cancelled every day? To find out, let's write some code. One other way to find out would be to manually make a list of trains cancelled every day from this website but I'm wayyy too lazy for that.

In order to automate the task of getting every day's list of cancelled trains, I used the RailwayAPI website. These people are not affiliated with the Indian Railways but they seem to be offering most of the information I need. I've cross checked that the list these RailwayAPI people are returning me is the same one that Indian Railways displays, so the data from RailwayAPI seems to be correct. With that, let's move on to some code.

    from api_key import API_KEY as MY_API_KEY
    import urllib
    import json
    from datetime import datetime

    today = datetime.today()
    day = today.day
    month = today.month
    year = today.year

    URL_TEMPLATE = "http://api.railwayapi.com/cancelled/date/{day}-{month}-{year}/apikey/{APIKEY}/"
    url = URL_TEMPLATE.format(day=day, month=month, year=year, APIKEY=MY_API_KEY)

    # parse the JSON body of the response
    response = json.load(urllib.urlopen(url))

    filename = "{day}-{month}-{year}.can".format(day=day, month=month, year=year)
    with open(filename, 'w') as f:
        for train in response['trains']:
            f.write("{} \n".format(train['train']['number']))

Let me explain what is happening in the above code. The API_KEY in the first line is used to let me access their data. The same way Facebook recognizes you using a username and password, the RailwayAPI people recognize me using the API_KEY. I had to register with them to get my API_KEY and I can't display it publicly. You too can get one by signing up with them.
Which is why I'm importing it as a variable that was defined in another file. Moving on. The URL_TEMPLATE is what I need to query to get information on cancelled trains. You can see that there are placeholders for the date in the URL, which are filled in in the next step. The request is made using the urllib library available in the Python Standard Library. The next few steps are simply making the request, getting a response and converting the response into a meaningful format. The last few steps involve writing the response to a file. The complete response contains information about where the train departs from and what its destination is, what its number is and so on. I don't need all of that information. I just want the train number. Which is what I'm writing to the file in the last line. You will need to look at the response yourself to understand its structure.

All I had to do now was run this code every day. But because I'm too lazy to manually run this code every day, I automated that too. Cron comes to the rescue. I mentioned cron in one of my earlier posts. It's a way to run tasks/jobs periodically on Linux/OSX. I simply had to add

    01 00 * * * /usr/bin/python /path/to/script/query_for_cancelled.py

to my list of automated cron tasks/jobs, which can be accessed using

    crontab -e

Again, all I'm doing is call the script query_for_cancelled using Python (/usr/bin/python is the full path to the executable) at 00:01 AM every day. There ends the code. Now for a preliminary look at the results. I queried for the trains cancelled on August 03, 04, 05 and 06 and then got the trains that were cancelled on all of those days, a few of which are - 18235, 18236, 19943, 19944, 22183, 22184, 22185, 22186, 24041, 24042. Notice that there are 5 pairs in total, where two numbers in a pair only differ in the last digit. If you don't know already, last digit changes differentiate trains from A to B and B to A. All of those trains are a little weird.
Especially the last one - 24041 and 24042. It's a train that is supposed to travel 25 KMs. Umm, what?

## Wednesday, July 27, 2016

### Playing around with errors in Python - NameErrors

Let's start with NameErrors, one of the more common errors that a newcomer to Python will come across. It is reported when Python can't find a local or global variable in the code. One reason this might pop up is that a variable is being referred to outside of its namespace. To give you an example:

a = 10

def test_f():
    a = 20
    print a

test_f()
print a

Let's walk through the code. After defining the variable a and the function test_f, you would naively expect the test_f() function call to change the value of a to 20 and print 20. You expect the print statement after the function call to also print 20, because you expect the function call to have changed the value of a. But if you try running the code for yourself, you'll notice that the final print statement prints 10. This is where namespaces come into the picture: the a assigned inside test_f is a new local variable that shadows the global a, so the global one is never touched.

def test_f():
    b = 20
    print b

test_f()
print b

The call to the test_f function will set and print the variable b, but the print statement afterwards will throw a NameError, because outside of the test_f function, the variable b isn't defined.

Let's look at another example, this time in the context of classes in Python.

class test_c:
    b = 20
    def test_f(self):
        print b

test_c().test_f()

Let me explain the last statement first and then the rest of the example. test_c() creates an instance of the class test_c and test_c().test_f() calls the test_f method on that instance. Naively, you would expect the code to print 20, which is the value of the variable b in the class. But instead, you will get a NameError, telling you that the variable b isn't defined. The solution to this problem is to refer to b as self.b inside any of the methods defined on test_c, which tells Python that this variable belongs to the class on which the method is defined.
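A minimal sketch of that fix, using the same toy class as above (with the print wrapped in parentheses so it also runs on Python 3):

```python
class test_c:
    b = 20

    def test_f(self):
        # self.b makes Python look the name up on the instance and then
        # on the class, so no NameError is raised
        print(self.b)
        return self.b

test_c().test_f()
```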
There are definitely a lot more ways in which you can make Python throw a NameError at you, but I wanted to use the NameError to introduce the concept of namespaces in Python. That's all for now. And as always, I am thankful for any feedback on the writing style and/or content. Until next time ...

[1]. hilite.me was used to create the inline code blocks.
[2]. You can refer to the official Python documentation on namespaces for more information.
[3]. A Python shell can be accessed on the official Python page. A more comprehensive editor can be found here.

## Monday, July 25, 2016

### Pocket reading list - Week 4.1 of July.

The Ukrainian Hacker Who Became the FBI’s Best Weapon—And Worst Nightmare - What I find most amazing about this article is what the hacker says about his fellow hackers: that all they want is a job, and if they found one that paid well and was stable, they wouldn't have much need to hack and make money the illegal way. This is, in my opinion, broadly true of a sizable part of the population that defaults to stealing and cheating to make a livelihood, because they never had the option to work towards a legal, stable livelihood and now have to make ends meet one way or another.

Canada’s \$6.9 Billion Wildfire Is the Size of Delaware—and Still Out of Control - Just another reminder that Nature is a force out of our control. After settling down in every remote corner of the Earth and moving to the top of any and every food chain, we humans might feel invincible. But events like this remind us that the nature around us is very fragile and can be disturbed beyond the point of return. This wasn't a man-made problem, afaik. But there have been wildfires caused by people leaving behind cigarettes, partly burnt camp fires and what not. And wildfires actually do the forest some good, in the sense that they clean the area and lead to new growth.
But people now live in close proximity to such areas, and our settlements have given rise to conditions in which a wildfire can jump from one region to the next, leveling a larger swath of land than would have been possible had humans not meddled with the ecosystem.

Refusing to Be Measured - We are in the era of big data, and all around us people are coming up with products to quantify things, in this case the productivity of a faculty member. Such quantification can be a good thing, for the faculty member, as it can help them understand whether or not they're growing year-to-year, and for the institute, to understand whether it's spending its resources intelligently or if there are avenues to improve; but the exercise might lead to problems if the quantification process is faulty. If one uses solely journal publications to quantify a faculty member, one discards their teaching abilities. And that is to say nothing of the fact that the publishing industry is, in itself, a mess, and that it can take many months to years and numerous revisions for a paper to get accepted in a journal.

Solving a Century-Old Typographical Mystery - One more interesting story to have come out of the world-wide web and the digitization of literature, specifically old papers in this case. This is the account of one man who searched through old papers looking for ads from that era. The article points out, and I agree, that ads from a bygone era throw light on some interesting things about that time, things which might not be inferred from texts or serious literary works.

How Typography Can Save Your Life - The more I've read, the more I'm interested in how typography matters in day-to-day life. I remember reading about why Comic Sans is hated all around, which concluded that Comic Sans wasn't made for HD monitors and that when it was introduced, it was in fact the most legible font.
A change in font can change a reader's mood or set the tone for the article.
# Confusion with metric components in spherical coordinate system

I know that by definition the transition from the basis vectors $e_{a}$ of one coordinate system to the basis vectors $e_{a^\prime}$ of another coordinate system is performed via the following expression: $$e_{a^\prime}=\Lambda^b_{\ a^\prime}e_b, \qquad \Lambda^b_{\ a^\prime}=\frac{\partial x^{b}}{\partial x^{a^\prime}},$$ where $e_b$ are the basis vectors of the original coordinate system. But when I want to find the basis vectors of the spherical coordinate system via the Cartesian coordinate system by the definition above, I come across the following problem. The spherical coordinates are expressed as $x=r\sin\theta\cos\phi$, $y=r\sin\theta\sin\phi$, $z=r\cos\theta$, so I perform the steps shown in the definition: $$\partial_r=\frac{\partial x}{\partial r}\partial_x +\frac{\partial y}{\partial r}\partial_y + \frac{\partial z}{\partial r}\partial_z.$$ But according to my book (Relativity Demystified) this is not correct and should be done the other way around, namely: $$\partial_r=\frac{\partial r}{\partial x}\partial_x +\frac{\partial r}{\partial y}\partial_y + \frac{\partial r}{\partial z}\partial_z.$$ But I see no sense in their approach or how they got to the correct answer. Is it a misprint or my misunderstanding?

This is really a mistake in Relativity Demystified, example 2-5.
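For what it's worth, carrying out the questioner's (correct) prescription explicitly for the $r$ basis vector gives the standard result, consistent with the spherical line element:

```latex
% e_{a'} = (\partial x^b / \partial x^{a'}) e_b  with  x^{a'} = r:
e_r = \frac{\partial x}{\partial r}\,e_x
    + \frac{\partial y}{\partial r}\,e_y
    + \frac{\partial z}{\partial r}\,e_z
    = \sin\theta\cos\phi\,e_x + \sin\theta\sin\phi\,e_y + \cos\theta\,e_z ,
\qquad
g_{rr} = e_r \cdot e_r
       = \sin^2\theta\,(\cos^2\phi + \sin^2\phi) + \cos^2\theta = 1 ,
% which matches ds^2 = dr^2 + r^2\,d\theta^2 + r^2\sin^2\theta\,d\phi^2.
```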
Journal ArticleDOI

# A finite difference method for an initial–boundary value problem with a Riemann–Liouville–Caputo spatial fractional derivative

01 Jan 2021 - Journal of Computational and Applied Mathematics (North-Holland) - Vol. 381, article 113020

TL;DR: A fractional Friedrichs’ inequality is derived and is used to prove that the problem approaches a steady-state solution when the source term is zero, and it is proved that the scheme converges with first order in the maximum norm.

Abstract: An initial–boundary value problem with a Riemann–Liouville–Caputo space fractional derivative of order α ∈ (1, 2) is considered, where the boundary conditions are reflecting. A fractional Friedrichs’ inequality is derived and is used to prove that the problem approaches a steady-state solution when the source term is zero. The solution of the general problem is approximated using a finite difference scheme defined on a uniform mesh and the error analysis is given in detail for typical solutions which have a weak singularity near the spatial boundary x = 0. It is proved that the scheme converges with first order in the maximum norm. Numerical results are given that corroborate our theoretical results for the order of convergence of the difference scheme, the approach of the solution to steady state, and mass conservation.

Topics: Fractional calculus (60%), Boundary value problem (58%), Finite difference method (54%), Rate of convergence (54%), Singularity (53%)

### 1. Introduction

• The problem considered in this paper is inspired by [1], which gives a lengthy discussion of various types of fractional initial-boundary value problem and the boundary conditions that are appropriate for each type.
• Here the authors assume that the function v is such that the definitions make sense.
• In [1] the quantity $D^{\alpha-1}_{C,x} u$ is called the Caputo fractional flux.
• Three numerical examples are given in Section 5, to illustrate their theoretical results.
• Note that C can take different values in different places.

### 2. Some properties of the solution

• In this section the authors first derive a Friedrichs’ inequality for Caputo derivatives (Lemma 1).
• Lemma 1 (Friedrichs’ inequality for Caputo derivatives).
• A related result was obtained in [1], but using a very technical argument.
• Corollary 1 (Stability and uniqueness of solution).

### 3. Finite difference scheme

• To discretise the spatial derivative in (1a) the authors follow [8], where the two-point boundary value problem corresponding to (1a) was considered.
• The time derivative in (1a) is discretised by the backward Euler method.
• Thus, the diagonal entries of A are positive and its off-diagonal entries are nonpositive.
• This completes the inductive step and the proof.

### 4. Error estimate for scheme

• In this section an error bound for their difference scheme is derived.
• To convert these bounds to an error estimate for the computed solution, the authors shall employ a barrier function whose construction is discussed in Section 4.3.
• For the time-dependent problem (1), one expects that the classical time derivative will not affect the behaviour of the spatial derivatives.
• The mesh function $\{\tilde\Psi^n_m\}$, $m = 0,\dots,M$, $n = 0,\dots,N$, is called a discrete barrier function.

### 5. Numerical experiments

• The authors present numerical results for three examples.
• In the first example, the exact solution is known and satisfies the bounds (17); its numerical results illustrate the error estimates of Theorem 1.
• In the third example, convergence to steady state and mass conservation are discussed.
• The computed orders of convergence again agree with Theorem 1.
A finite difference method for an initial-boundary value problem with a Riemann-Liouville-Caputo spatial fractional derivative

José Luis Gracia^a, Martin Stynes^b

^a IUMA and Department of Applied Mathematics, University of Zaragoza, Spain.
^b Applied and Computational Mathematics Division, Beijing Computational Science Research Center, Haidian District, Beijing 100193, P.R. China.

Abstract

An initial-boundary value problem with a Riemann-Liouville-Caputo space fractional derivative of order $\alpha \in (1, 2)$ is considered, where the boundary conditions are reflecting. A fractional Friedrichs’ inequality is derived and is used to prove that the problem approaches a steady-state solution when the source term is zero. The solution of the general problem is approximated using a finite difference scheme defined on a uniform mesh and the error analysis is given in detail for typical solutions which have a weak singularity near the spatial boundary $x = 0$. It is proved that the scheme converges with first order in the maximum norm. Numerical results are given that corroborate our theoretical results for the order of convergence of the difference scheme, the approach of the solution to steady state, and mass conservation.

Keywords: fractional differential equation, time-dependent problem, Riemann-Liouville-Caputo fractional derivative, weak singularity, discrete comparison principle, finite difference scheme, steady-state problem

1. Introduction

The problem considered in this paper is inspired by [1], which gives a lengthy discussion of various types of fractional initial-boundary value problem and the boundary conditions that are appropriate for each type. In particular, we shall focus on the “Caputo fractional flux” and reflecting boundary conditions of [1, Section 6].

Corresponding author: m.stynes@csrc.ac.cn. Preprint submitted to Elsevier, October 21, 2020.

Set $\Omega := (0, L)$ and $Q := \Omega \times (0, T]$.
For $(x, t) \in Q$ and constant $r > 0$, define the Riemann-Liouville integral operator $I^r_x$ of order $r$ by
$$I^r_x v(x,t) := \frac{1}{\Gamma(r)} \int_{s=0}^{x} (x-s)^{r-1} v(s,t)\, ds.$$
Then for any positive constant $\beta$ with $n-1 < \beta < n$, where $n$ is a positive integer, the Caputo fractional derivative of order $\beta$ is defined by
$$D^{\beta}_{C,x} v(x,t) := I^{n-\beta}_x \frac{\partial^n v}{\partial x^n}(x,t).$$
Here we assume that the function $v$ is such that the definitions make sense. Let $\alpha$ be constant with $1 < \alpha < 2$. In this paper, we examine the initial-boundary value problem
$$u_t - D^{\alpha}_{RLC,x} u = f \quad \text{for } (x,t) \in Q, \tag{1a}$$
$$u(x, 0) = \phi(x) \quad \text{for } x \in \Omega, \tag{1b}$$
$$D^{\alpha-1}_{C,x} u(0,t) = D^{\alpha-1}_{C,x} u(L,t) = 0 \quad \text{for } t \in (0, T], \tag{1c}$$
where $D^{\alpha}_{RLC,x}$ is the Riemann-Liouville-Caputo fractional derivative of order $\alpha$, which is defined by
$$D^{\alpha}_{RLC,x} u(x,t) := \frac{\partial}{\partial x} D^{\alpha-1}_{C,x} u(x,t) \quad \text{for } x > 0 \text{ and } 0 < t \le T.$$
This hybrid fractional derivative $D^{\alpha}_{RLC,x}$ has been suggested by several researchers, from both modelling and mathematical viewpoints; see [8] for references. In [1] the quantity $D^{\alpha-1}_{C,x} u$ is called the Caputo fractional flux. The left boundary condition in (1c) is defined by
$$0 = D^{\alpha-1}_{C,x} u(0,t) := \lim_{x \to 0^+} D^{\alpha-1}_{C,x} u(x,t).$$
It is suitable for certain physical models [3] and removes a troublesome singularity from the solution $u(x,t)$ at $x = 0$; see [1, 5, 8] and Remark 1 below. Furthermore, both boundary conditions in (1c) are reflecting and ensure that mass is conserved; see the discussion in [1].

Remark 1. In [8] it is shown that $D^{\alpha-1}_{C,x} u(0,t) = 0$ in (1c) is equivalent to the classical Neumann boundary condition $u_x(0,t) = 0$ if $D^{\alpha}_{RLC,x} u(\cdot,t) \in C[0,L]$. Our analysis in Sections 3 and 4 assumes that the solution $u$ of problem (1) has this regularity, so for convenience our numerical method will discretise $u_x(0,t) = 0$ instead of $D^{\alpha-1}_{C,x} u(0,t) = 0$; but the other boundary condition $D^{\alpha-1}_{C,x} u(L,t) = 0$ of (1c) cannot be simplified in the same way and must be handled directly.

Remark 2. For the function $x^{\beta}$ with $\alpha - 1 < \beta < 1$, one has [4, p.
193]
$$D^{\alpha-1}_{C,x} x^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+2)}\, x^{\beta-\alpha+1} \quad \text{and} \quad \frac{d}{dx} x^{\beta} = \beta x^{\beta-1}.$$
Here $\beta - \alpha + 1 > 0$ while $\beta - 1 < 0$, i.e., for these functions $x^{\beta}$, the boundary condition $D^{\alpha-1}_{C,x} u(0,t) = 0$ of (1c) does not imply the Neumann boundary condition $u_x = 0$, unlike the situation described in Remark 1. Thus replacing $D^{\alpha-1}_{C,x} u(0,t) = 0$ by $u_x(0,t) = 0$ means we exclude functions $u(x,t)$ that behave like a multiple of $x^{\beta}$ as $x \to 0$ for some fixed value of $t$. But for such functions, in (1a) one has $D^{\alpha}_{RLC,x} u(x,t) \sim O(x^{\beta-\alpha})$ near $x = 0$ with $\beta - \alpha < 0$, so we are merely excluding certain singularities in the terms of the differential equation.

In [2, Proposition 19] it is proved that problem (1) with $f \equiv 0$ is well-posed in the Banach space $L^1(0,L)$ for each value of $t$. We shall assume a reasonable amount of smoothness of the solution; see equation (17) below. Our aim is to approximate the solution $u$ of problem (1) by a finite difference method whose analysis requires these bounds on derivatives.

The structure of the paper is as follows. In Section 2 we discuss some properties of the solution $u$ of problem (1). A fractional Friedrichs’ inequality for Caputo fractional derivatives is established and is used to prove that $u$ converges to the steady-state solution when $f \equiv 0$. A finite difference scheme for solving (1) on a uniform mesh is defined in Section 3 and it is shown that it satisfies a discrete comparison principle. In Section 4 this principle and an appropriate barrier function are used to prove that the solution of the finite difference scheme converges to $u$ with first order in the discrete maximum norm. Three numerical examples are given in Section 5, to illustrate our theoretical results.

Notation: Denote by $AC[0,L]$ the set of absolutely continuous functions on $[0,L]$ and by $L^p(0,L)$ the usual Lebesgue space with norm $\|\cdot\|_{L^p(0,L)}$.
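As an editorial sanity check on the closed form quoted in Remark 2, the Caputo derivative $D^{\alpha-1}_{C,x} x^{\beta}$ can be evaluated by direct quadrature and compared with $\Gamma(\beta+1)/\Gamma(\beta-\alpha+2)\,x^{\beta-\alpha+1}$. This stdlib-only sketch is not from the paper; the sample values $\alpha = 1.5$, $\beta = 0.7$ are an arbitrary choice satisfying $\alpha - 1 < \beta < 1$:

```python
from math import gamma, sin, cos, pi

alpha, beta = 1.5, 0.7          # sample values with alpha - 1 < beta < 1
mu = alpha - 1.0                # Caputo order alpha - 1, here 0.5

def caputo_xbeta(x, n=200_000):
    """Caputo derivative of order mu of f(s) = s**beta, evaluated at x.
    Substituting s = x*t and then t = sin(theta)**2 turns the weakly
    singular integral (1/Gamma(1-mu)) * int_0^x (x-s)^(-mu) * beta*s^(beta-1) ds
    into a proper integral over [0, pi/2], approximated by the midpoint rule."""
    h = (pi / 2) / n
    total = 0.0
    for k in range(n):
        th = (k + 0.5) * h
        total += 2.0 * sin(th) ** (2 * beta - 1) * cos(th) ** (1 - 2 * mu)
    integral = total * h        # equals the Beta function B(beta, 1 - mu)
    return beta * x ** (beta - mu) * integral / gamma(1.0 - mu)

def closed_form(x):
    # Gamma(beta+1)/Gamma(beta-alpha+2) * x^(beta-alpha+1), as in Remark 2
    return gamma(beta + 1.0) / gamma(beta - alpha + 2.0) * x ** (beta - alpha + 1.0)
```

The two agree to several decimal places, matching the identity $\beta\,B(\beta, 2-\alpha)/\Gamma(2-\alpha) = \Gamma(\beta+1)/\Gamma(\beta-\alpha+2)$.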
Throughout the paper, $C$ denotes a generic constant that can depend on the data of the problem (1) but is independent of the mesh used for its numerical solution. Note that $C$ can take different values in different places.

2. Some properties of the solution

In this section we first derive a Friedrichs’ inequality for Caputo derivatives (Lemma 1). Hence, in Lemma 2, we prove convergence of the solution $u$ to the steady-state solution $\frac{1}{L}\int_{x=0}^{L} \phi(x)\, dx$ in the special case when $f \equiv 0$. This lemma implies uniqueness of the solution to (1) and stability of this solution in terms of perturbations of the initial condition; see Corollary 1.

Lemma 1 (Friedrichs’ inequality for Caputo derivatives). Let $\beta \in (0,1)$. Suppose that $v \in AC[0,L]$ with $\int_0^L v\, dx = 0$ and $\|D^{\beta}_{C,x} v\|_{L^2(0,L)} < \infty$. Then
$$\|v\|_{L^2(0,L)} \le \frac{L^{\beta}}{\Gamma(\beta+1)} \|D^{\beta}_{C,x} v\|_{L^2(0,L)}. \tag{2}$$

Proof. For all $x \in (0, L]$, by [4, Theorem 3.8] one has
$$z(x) := v(x) - v(0) = (I^{\beta}_x D^{\beta}_{C,x} v)(x) = (\omega_{\beta} * D^{\beta}_{C,x} v)(x),$$
where $*$ denotes convolution and $\omega_{\beta}(x) := x^{\beta-1}/\Gamma(\beta)$. Hence, similarly to [6, Lemma 2.6], we get
$$\|z\|_{L^2(0,L)} = \|\omega_{\beta} * D^{\beta}_{C,x} v\|_{L^2(0,L)} \le \|\omega_{\beta}\|_{L^1(0,L)} \|D^{\beta}_{C,x} v\|_{L^2(0,L)} = \frac{L^{\beta}}{\Gamma(\beta+1)} \|D^{\beta}_{C,x} v\|_{L^2(0,L)}, \tag{3}$$
using Young’s inequality for convolutions. But
$$\|z\|^2_{L^2(0,L)} = \int_{x=0}^{L} \left( v^2(x) - 2v(0)v(x) + v^2(0) \right) dx = \|v\|^2_{L^2(0,L)} + L v^2(0),$$
since $\int v\, dx = 0$. Thus, (3) implies (2).

Remark 3. If $v \in AC[0,L]$ and $0 < \beta < 1$, then $\|D^{\beta}_{C,x} v\|_{L^p(0,L)} < \infty$ for $1 \le p < 1/\beta$ by [4, Lemma 2.12]. Thus in Lemma 1 the hypothesis $v \in AC[0,L]$ implies the hypothesis $\|D^{\beta}_{C,x} v\|_{L^2(0,L)} < \infty$ when $\beta < 1/2$.

The next lemma shows that when $f \equiv 0$, the solution of problem (1) converges (in the $L^2(0,L)$ sense) to the steady-state solution as $t \to \infty$. A related result was obtained in [1, Appendix] (see Remark 5 below), but using a very technical argument. Our simpler proof is based on the Friedrichs’ inequality of Lemma 1.

Lemma 2. Assume that $f \equiv 0$ in (1). Assume also that the solution of problem (1) satisfies $u(x, \cdot) \in AC[0,T]$ for each $x$ and $u_x(\cdot, t) \in AC[0,L]$ for each $t$.
Then
$$\left\| u(x,t) - \frac{1}{L}\int_{x=0}^{L} \phi(x)\, dx \right\|_{L^2(0,L)} \to 0 \quad \text{as } t \to \infty.$$

Proof. Set
$$v(x,t) = u(x,t) - \frac{1}{L}\int_{x=0}^{L} \phi(x)\, dx.$$
Then
$$v_t - D^{\alpha}_{RLC,x} v = 0 \quad \text{for } (x,t) \in Q, \tag{4a}$$
$$v(x,0) = \phi(x) - \frac{1}{L}\int_{x=0}^{L} \phi(x)\, dx \quad \text{for } x \in (0,L), \tag{4b}$$
$$D^{\alpha-1}_{C,x} v(0,t) = D^{\alpha-1}_{C,x} v(L,t) = 0 \quad \text{for } t \in (0,T]. \tag{4c}$$
Observe that $v$ has the additional property that $\int_{x=0}^{L} v(x,0)\, dx = 0$. Note first that mass is conserved, i.e., for each $t$ we have
$$\int_{x=0}^{L} v(x,t)\, dx = \int_{x=0}^{L} v(x,0)\, dx = 0; \tag{5}$$
to see this, integrate $v_t - D^{\alpha}_{RLC,x} v = 0$ over $[0,L] \times [0,t]$ and use (4c) to eliminate the $x$-derivative terms.

Let $t \in [0,T]$ be arbitrary. Let $k$ be a constant that is chosen later. Multiply (4a) by $e^{kt} v(x,t)$ then integrate over $[0,L] \times [0,t]$. Using $e^{kt} v v_t = e^{kt}(v^2)_t/2$ and integration by parts in time and in space, we get
$$0 = \frac{1}{2}\int_{x=0}^{L} \left( e^{kt} v^2(x,t) - v^2(x,0) \right) dx - \frac{1}{2}\int_{s=0}^{t} k e^{ks} \int_{x=0}^{L} v^2(x,s)\, dx\, ds + \int_{s=0}^{t} e^{ks} \int_{x=0}^{L} v_x(x,s)\, D^{\alpha-1}_{C,x} v(x,s)\, dx\, ds. \tag{6}$$
These integrations by parts are justified since by hypothesis $v(x,\cdot) \in AC[0,T]$ for each $x$ and $v_x(\cdot,t) \in AC[0,L]$ for each $t$, so $D^{\alpha-1}_{C,x} v(\cdot,t) = I^{2-\alpha}_x v_x(\cdot,t) \in AC[0,L]$ by [9, Lemma 2.3]. But by [10, Lemma 3.1] one has
$$\int_{x=0}^{L} v_x(x,s)\, D^{\alpha-1}_{C,x} v(x,s)\, dx = \int_{x=0}^{L} v_x(x,s)\, (I^{2-\alpha}_x v_x)(x,s)\, dx \ge \cos\!\left( \frac{(2-\alpha)\pi}{2} \right) \int_{x=0}^{L} \left( I^{1-\alpha/2}_x v_x \right)^2(x,s)\, dx$$

##### Citations

Posted Content

Abstract: This paper derives physically meaningful boundary conditions for fractional diffusion equations, using a mass balance approach. Numerical solutions are presented, and theoretical properties are reviewed, including well-posedness and steady state solutions. Absorbing and reflecting boundary conditions are considered, and illustrated through several examples. Reflecting boundary conditions involve fractional derivatives. The Caputo fractional derivative is shown to be unsuitable for modeling fractional diffusion, since the resulting boundary value problem is not positivity preserving.
39 citations

##### References

Journal ArticleDOI

Abstract: We discuss existence, uniqueness, and structural stability of solutions of nonlinear differential equations of fractional order. The differential operators are taken in the Riemann–Liouville sense and the initial conditions are specified according to Caputo's suggestion, thus allowing for interpretation in a physically meaningful way. We investigate in particular the dependence of the solution on the order of the differential equation and on the initial condition, and we relate our results to the selection of appropriate numerical schemes for the solution of fractional differential equations.

2,775 citations

Journal ArticleDOI

Abstract: In this article a theoretical framework for the Galerkin finite element approximation to the steady state fractional advection dispersion equation is presented. Appropriate fractional derivative spaces are defined and shown to be equivalent to the usual fractional dimension Sobolev spaces $H^s$. Existence and uniqueness results are proven, and error estimates for the Galerkin approximation derived. Numerical results are included that confirm the theoretical estimates. © 2005 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2005

639 citations

Journal ArticleDOI

TL;DR: The final convergence result shows clearly how the regularity of the solution and the grading of the mesh affect the order of convergence of the difference scheme, so one can choose an optimal mesh grading.

Abstract: A reaction-diffusion problem with a Caputo time derivative of order $\alpha\in (0,1)$ is considered. The solution of such a problem is shown in general to have a weak singularity near the initial time $t=0$, and sharp pointwise bounds on certain derivatives of this solution are derived. A new analysis of a standard finite difference method for the problem is given, taking into account this initial singularity.
This analysis encompasses both uniform meshes and meshes that are graded in time, and includes new stability and consistency bounds. The final convergence result shows clearly how the regularity of the solution and the grading of the mesh affect the order of convergence of the difference scheme, so one can choose an optimal mesh grading. Numerical results are presented that confirm the sharpness of the error analysis.

343 citations

Journal ArticleDOI

Abstract: A class of nonlocal models based on the use of fractional derivatives (FDs) is proposed to describe nondiffusive transport in magnetically confined plasmas. FDs are integro-differential operators that incorporate in a unified framework asymmetric non-Fickian transport, non-Markovian (“memory”) effects, and nondiffusive scaling. To overcome the limitations of fractional models in unbounded domains, we use regularized FDs that allow the incorporation of finite-size domain effects, boundary conditions, and variable diffusivities. We present an α-weighted explicit/implicit numerical integration scheme based on the Grünwald–Letnikov representation of the regularized fractional diffusion operator in flux-conserving form. In sharp contrast with the standard diffusive model, the strong nonlocality of fractional diffusion leads to a linear-in-time response for a decaying pulse at short times. In addition, an anomalous fractional pinch is observed, accompanied by the development of an uphill transport region where the “effective” diffusivity becomes negative. The fractional flux is in general asymmetric and, for steady states, it has a negative (toward the core) component that enhances confinement and a positive component that increases toward the edge and leads to poor confinement. The model exhibits the characteristic anomalous scaling of the confinement time, τ, with the system’s size, L, τ ∼ L^α, of low-confinement-mode plasmas, where 1 < α < 2 is the order of the FD operator.
Numerical solutions of the model with an off-axis source show that the fractional inward transport gives rise to profile peaking reminiscent of what is observed in tokamak discharges with auxiliary off-axis heating. Also, cold-pulse perturbations to steady states in the model exhibit fast, nondiffusive propagation phenomena that resemble perturbative experiments.

100 citations

Posted Content

Abstract: In this article we investigate the solution of the steady-state fractional diffusion equation on a bounded domain in $\mathbb{R}^{1}$. From an analysis of the underlying model problem, we postulate that the fractional diffusion operator in the modeling equations is neither the Riemann-Liouville nor the Caputo fractional differential operator. We then find a closed form expression for the kernel of the fractional diffusion operator which, in most cases, determines the regularity of the solution. Next we establish that the Jacobi polynomials are pseudo eigenfunctions for the fractional diffusion operator. A spectral type approximation method for the solution of the steady-state fractional diffusion equation is then proposed and studied.

88 citations
# BGVAR: Bayesian Global Vector Autoregression #### 07 April 2022 Abstract This document describes the BGVAR library to estimate Bayesian Global vector autoregressions (GVAR) with different prior specifications and stochastic volatility. The library offers a fully fledged toolkit to conduct impulse response functions, forecast error variance and historical error variance decompositions. To identify structural shocks in a given country model or joint regional shocks, the library offers simple Cholesky decompositions, generalized impulse response functions and zero and sign restrictions – the latter of which can also be put on the cross-section. We also allow for different structures of the GVAR like including different weights for different variables or setting up additional country models that determine global variables such as oil prices. Last, we provide functions to conduct and evaluate out-of-sample forecasts as well as conditional forecasts that allow for the setting of a future path for a particular variable of interest. The toolbox requires R>=3.5. # Introduction This vignette describes the BGVAR package that allows for the estimation of Bayesian global vector autoregressions (GVARs). The focus of the vignette is to provide a range of examples that demonstrate the full functionality of the library. It is accompanied by a more technical description of the GVAR framework. Here, it suffices to briefly summarize the main idea of a GVAR, which is a large system of equations designed to analyze or control for interactions across units. Most often, these units refer to countries and the interactions between them arise through economic and financial interdependencies. Also in this document, the examples we provide contain cross-country data. In principle, however, the GVAR framework can be applied to other units, such as regions, firms, etc. 
The following examples show how the GVAR can be used either to estimate spillover effects from one country to another, or alternatively, to look at the effects of a domestic shock while controlling for global factors.

In a nutshell, the GVAR consists of two stages. In the first, $$N$$ vector autoregressive (VAR) models are estimated, one per unit. Each equation in a unit model is augmented with foreign variables that control for global factors and later link the unit-specific models. Typically, these foreign variables are constructed using exogenous, bilateral weights, stored in an $$N \times N$$ weight matrix. The classical framework of Pesaran, Schuermann, and Weiner (2004) and Dees et al. (2007) proposes estimating these country models in vector error correction form, while in this package we take a Bayesian stance and estimation is carried out using VARs. The user can transform the data into stationary form prior to estimation or estimate the model in levels. The BGVAR package also allows us to include a trend to get trend-stationary data. In the second step, the single country models are combined under the assumption that they are linked via the exogenous weights, to yield a global representation of the model. This representation of the model is then used to carry out impulse response analysis and forecasting.

This vignette consists of four blocks: getting started and data handling, estimation, structural analysis and forecasting. In the next part, we discuss which data formats the BGVAR library can handle. We then proceed by showing examples of how to estimate a model using different Bayesian shrinkage priors – for references see Crespo Cuaresma, Feldkircher, and Huber (2016) and Feldkircher and Huber (2016). We also discuss how to run diagnostic and convergence checks and examine the main properties of the model. In the third section, we turn to structural analysis, either using recursive (Cholesky) identification or sign restrictions.
We will also discuss structural and generalized forecast error variance decompositions and historical decompositions. In the last section, we show how to compute unconditional and conditional forecasts with the package.

# Getting Started

We start by installing the package from CRAN and attaching it with

oldpar <- par(no.readonly=TRUE)
set.seed(1)
library(BGVAR)
## Warning: package 'BGVAR' was built under R version 4.0.5

To ensure reproducibility of the examples that follow, we have set a particular seed (for R's random number generator). As with every R library, the BGVAR package provides built-in help files which can be accessed by typing ? followed by the function / command of interest. It also comes along with four example data sets: two of them correspond to the quarterly data set used in Feldkircher and Huber (2016) (eerData, eerDataspf), one is at monthly frequency (monthlyData). For convenience we also include the data that come along with the Matlab GVAR toolbox of Smith, L.V. and Galesi, A. (2014), pesaranData. We include the 2019 vintage (Mohaddes and Raissi 2020).

We start illustrating the functionality of the BGVAR package by using the eerData data set from Feldkircher and Huber (2016). It contains 76 quarterly observations for 43 countries over the period from 1995Q1 to 2013Q4. The euro area (EA) is included as a regional aggregate. We can load the data by typing

data(eerData)

This loads two objects: eerData, which is a list object of length $$N$$ (i.e., the number of countries) and W.trade0012, which is an $$N \times N$$ weight matrix.
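To see what such a weight matrix is used for in the first stage, here is a generic NumPy sketch of forming foreign variables as weighted cross-sections; the toy data and names are made up for illustration, and BGVAR performs this construction internally in R:

```python
import numpy as np

rng = np.random.default_rng(1)

N, T = 4, 20                    # toy number of units and time periods
y = rng.normal(size=(T, N))     # one endogenous series per unit

# row-standardised weight matrix with zero diagonal, like W.trade0012
W = rng.uniform(size=(N, N))
np.fill_diagonal(W, 0.0)
W = W / W.sum(axis=1, keepdims=True)

# foreign variable for unit i: weighted average of the other units,
# y_star[t, i] = sum_j W[i, j] * y[t, j]
y_star = y @ W.T
```

Each unit model is then a VAR in its own variables augmented with its column of y_star, which is what links the country models in the second stage.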
We can have a look at the names of the countries contained in eerData

names(eerData)
## [1] "EA" "US" "UK" "JP" "CN" "CZ" "HU" "PL" "SI" "SK" "BG" "RO" "EE" "LT" "LV"
## [16] "HR" "AL" "RS" "RU" "UA" "BY" "GE" "AR" "BR" "CL" "MX" "PE" "KR" "PH" "SG"
## [31] "TH" "IN" "ID" "MY" "AU" "NZ" "TR" "CA" "CH" "NO" "SE" "DK" "IS"

and at the names of the variables contained in a particular country by

colnames(eerData$UK)
## [1] "y" "Dp" "rer" "stir" "ltir" "tb"

We can zoom into each country by accessing the respective slot of the data list:

head(eerData$US)
## y Dp rer stir ltir tb poil
## [1,] 4.260580 0.007173874 4.535927 0.0581 0.0748 -0.010907595 2.853950
## [2,] 4.262318 0.007341077 4.483116 0.0602 0.0662 -0.010637081 2.866527
## [3,] 4.271396 0.005394799 4.506013 0.0580 0.0632 -0.007689327 2.799958
## [4,] 4.278025 0.006218849 4.526343 0.0572 0.0589 -0.008163675 2.821479
## [5,] 4.287595 0.007719866 4.543933 0.0536 0.0591 -0.008277170 2.917315
## [6,] 4.301597 0.008467671 4.543933 0.0524 0.0672 -0.009359032 2.977115

Here, we see that the global variable, oil prices (poil), is attached to the US country model. This corresponds to the classical GVAR set-up used among others in Pesaran, Schuermann, and Weiner (2004) and Dees et al. (2007). We also see that, in general, each country model $$i$$ can contain a different set of variables $$k_i$$, as opposed to the requirements of a balanced panel. The GVAR toolbox relies on one important naming convention, though: it is assumed that neither the country names nor the variable names contain a . [dot]. The reason is that the program internally has to collect and separate the data more than once and, in doing that, it uses the . to separate countries / entities from variables. To give a concrete example, the slot in the eerData list referring to the USA should not be labelled U.S.A., nor should any of the variable names contain a .
The toolbox also allows the user to submit the data as a $$T \times k$$ data matrix, with $$k=\sum^N_{i=1} k_i$$ denoting the sum of endogenous variables in the system. We can switch from data representation in list form to matrix form by using the function list_to_matrix (and vice versa using matrix_to_list). To convert the eerData we can type: bigX<-list_to_matrix(eerData) For users who want to submit data in matrix form, the above-mentioned naming convention implies that the column names of the data matrix have to include the name of the country / entity and the variable name, separated by a . For example, for the converted eerData data set, the column names look like: colnames(bigX)[1:10] ## [1] "EA.y" "EA.Dp" "EA.rer" "EA.stir" "EA.ltir" "EA.tb" "US.y" ## [8] "US.Dp" "US.rer" "US.stir" with the first part of each column name indicating the country (e.g., EA) and the second the variable (e.g., y), separated by a . Regardless of whether the data are submitted as a list or as a big matrix, the underlying data can be either of matrix class or time series classes such as ts or xts. Finally, we look at the second important ingredient to build our GVAR model, the weight matrix. Here, we use annual bilateral trade flows (including services), averaged over the period from 2000 to 2012. This implies that the $$ij^{th}$$ element of $$W$$ contains trade flows from unit $$i$$ to unit $$j$$. These weights can also be made symmetric by calculating $$\frac{(W_{ij}+W_{ji})}{2}$$. Using trade weights to establish the links in the GVAR goes back to the early GVAR literature (Pesaran, Schuermann, and Weiner 2004) but is still used in the bulk of GVAR studies. Other weights, such as financial flows, have been proposed in Eickmeier and Ng (2015) and examined in Feldkircher and Huber (2016). Another approach is to use estimated weights as in Feldkircher and Siklos (2019). The weight matrix should have rownames and colnames that correspond to the $$N$$ country names contained in Data.
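The symmetrization just mentioned is a one-liner in R. A sketch, assuming W.trade0012 has been loaded via data(eerData):

```r
# Symmetric trade weights: average the flows from i to j and from j to i
W.sym <- (W.trade0012 + t(W.trade0012)) / 2
# After symmetrizing, rows generally no longer sum to one, so the matrix
# has to be row-standardized again before it can be used in a GVAR
W.sym <- W.sym / rowSums(W.sym)
```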
head(W.trade0012) ## EA US UK JP CN CZ ## EA 0.0000000 0.13815804 0.16278169 0.03984424 0.09084817 0.037423312 ## US 0.1666726 0.00000000 0.04093296 0.08397042 0.14211997 0.001438531 ## UK 0.5347287 0.11965816 0.00000000 0.02628600 0.04940218 0.008349458 ## JP 0.1218515 0.21683444 0.02288576 0.00000000 0.22708532 0.001999762 ## CN 0.1747925 0.19596384 0.02497009 0.15965721 0.00000000 0.003323641 ## CZ 0.5839067 0.02012227 0.03978617 0.01174212 0.03192080 0.000000000 ## HU PL SI SK BG RO ## EA 0.026315925 0.046355019 0.0088805499 0.0140525286 0.0054915888 0.0147268739 ## US 0.001683935 0.001895003 0.0003061785 0.0005622383 0.0002748710 0.0007034870 ## UK 0.006157917 0.012682611 0.0009454295 0.0026078946 0.0008369228 0.0031639564 ## JP 0.002364775 0.001761420 0.0001650431 0.0004893263 0.0001181310 0.0004293428 ## CN 0.003763771 0.004878752 0.0005769658 0.0015252866 0.0006429077 0.0019212312 ## CZ 0.025980933 0.062535144 0.0058429207 0.0782762640 0.0027079942 0.0080760690 ## EE LT LV HR AL ## EA 0.0027974288 3.361644e-03 1.857555e-03 0.0044360005 9.328127e-04 ## US 0.0002678272 4.630261e-04 2.407372e-04 0.0002257508 2.057213e-05 ## UK 0.0009922865 1.267497e-03 1.423142e-03 0.0004528439 3.931010e-05 ## JP 0.0002038361 9.363053e-05 9.067431e-05 0.0001131534 4.025852e-06 ## CN 0.0004410996 5.033345e-04 4.041432e-04 0.0006822057 1.435258e-04 ## CZ 0.0009807047 2.196688e-03 1.107030e-03 0.0027393932 1.403948e-04 ## RS RU UA BY GE AR ## EA 2.430815e-03 0.06112681 0.0064099317 1.664224e-03 0.0003655903 0.005088057 ## US 7.079076e-05 0.01024815 0.0008939856 2.182909e-04 0.0001843835 0.004216105 ## UK 2.730431e-04 0.01457675 0.0010429571 4.974630e-04 0.0001429294 0.001387324 ## JP 2.951168e-05 0.01725841 0.0007768546 4.392221e-05 0.0001062724 0.001450359 ## CN 1.701277e-04 0.02963668 0.0038451574 5.069157e-04 0.0001685347 0.005817928 ## CZ 1.638111e-03 0.03839817 0.0082099438 1.447486e-03 0.0002651330 0.000482138 ## BR CL MX PE KR PH ## EA 0.018938315 0.0051900915 0.011455138 
0.0019463003 0.018602006 0.0034965601 ## US 0.020711342 0.0064996880 0.140665729 0.0037375799 0.033586979 0.0076270807 ## UK 0.007030657 0.0016970182 0.003832710 0.0005204850 0.010183755 0.0025279396 ## JP 0.010910951 0.0077091104 0.011164908 0.0020779123 0.079512267 0.0190560664 ## CN 0.025122526 0.0102797021 0.010960261 0.0040641411 0.103135774 0.0150847984 ## CZ 0.001898955 0.0002425703 0.001538938 0.0001152155 0.005248484 0.0008790869 ## SG TH IN ID MY AU ## EA 0.012319690 0.007743377 0.016629452 0.0065409485 0.009631702 0.010187442 ## US 0.017449474 0.012410910 0.014898397 0.0079866535 0.017364286 0.011578348 ## UK 0.011309096 0.006146707 0.013461838 0.0032341466 0.006768142 0.012811822 ## JP 0.029885052 0.044438961 0.010319951 0.0369586674 0.035612054 0.047921306 ## CN 0.029471018 0.024150334 0.024708981 0.0201353186 0.033336788 0.036066785 ## CZ 0.002867306 0.003136170 0.003568422 0.0009949029 0.002695855 0.001291178 ## NZ TR CA CH NO SE ## EA 0.0017250647 0.028935117 0.012886035 0.065444998 0.025593188 0.041186900 ## US 0.0023769166 0.004969085 0.213004968 0.013343786 0.003975441 0.006793041 ## UK 0.0022119334 0.013099627 0.020699895 0.026581778 0.033700469 0.022629934 ## JP 0.0048098149 0.002560745 0.020149431 0.010014273 0.002895884 0.004375537 ## CN 0.0030625686 0.006578520 0.019121787 0.007657646 0.002804712 0.005853722 ## CZ 0.0001703354 0.006490032 0.001808704 0.013363416 0.003843421 0.013673889 ## DK IS ## EA 0.025035373 1.163498e-03 ## US 0.003135419 2.750740e-04 ## UK 0.013518119 1.119179e-03 ## JP 0.003207880 2.637944e-04 ## CN 0.003990731 7.806304e-05 ## CZ 0.007521567 1.490167e-04 The countries in the weight matrix should be in the same order as in the data list: all(colnames(W.trade0012)==names(eerData)) ## [1] TRUE The weight matrix should be row-standardized and the diagonal elements should be zero: rowSums(W.trade0012) ## EA US UK JP CN CZ HU PL SI SK BG RO EE LT LV HR AL RS RU UA BY GE AR BR CL MX ## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 
1 1 1 1 1 1 ## PE KR PH SG TH IN ID MY AU NZ TR CA CH NO SE DK IS ## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 diag(W.trade0012) ## EA US UK JP CN CZ HU PL SI SK BG RO EE LT LV HR AL RS RU UA BY GE AR BR CL MX ## 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ## PE KR PH SG TH IN ID MY AU NZ TR CA CH NO SE DK IS ## 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Note that through row-standardization, the final matrix is typically not symmetric (even when using the symmetric weights as raw input). In what follows, we restrict the dataset to contain only three countries, EA, US and RU, and adjust the weight matrix accordingly. We do this only for illustrative purposes, to save time and storage in this document: cN<-c("EA","US","RU") eerData<-eerData[cN] W.list<-lapply(W.list,function(l){l<-apply(l[cN,cN],2,function(x)x/rowSums(l[cN,cN]))}) This results in the same dataset as available in testdata. In order to make BGVAR easier to handle for users working and organising data in spreadsheets via Excel, we provide our own reader function relying on the readxl package. In this section we provide some code to write the example datasets to Excel spreadsheets, and then show how to read the data back from Excel. Hence, we provide an easy-to-follow approach with an example of how the data should be organised in Excel. We start by exporting the data to Excel. The spreadsheet should be organised as follows. Each sheet consists of the data set for one particular country, hence the naming of the sheets with the country names is essential. In each sheet, you should provide the time in the first column of the matrix, followed by one column per variable.
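Returning briefly to the country subsetting above: the subset-and-restandardize step generalizes to any country selection. A hedged sketch (the helper name restrict_W is ours, not part of BGVAR):

```r
# Hypothetical helper: restrict a weight matrix to a subset of countries
# and re-standardize so that each row again sums to one
restrict_W <- function(W, cN) {
  Wn <- W[cN, cN]
  Wn / rowSums(Wn)
}
W.small <- restrict_W(W.trade0012, c("EA", "US", "RU"))
# rows sum to one, diagonal stays zero
stopifnot(all(abs(rowSums(W.small) - 1) < 1e-12), all(diag(W.small) == 0))
```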
In the following, we will export the eerData data set to Excel (write.xlsx is provided by the xlsx package, coredata by the zoo package): library(xlsx) time <- as.character(seq.Date(as.Date("1995-01-01"),as.Date("2013-10-01"),by="quarter")) for(cc in 1:length(eerData)){ x <- coredata(eerData[[cc]]) rownames(x) <- time write.xlsx(x = x, file="./excel_eerData.xlsx", sheetName = names(eerData)[cc], col.names=TRUE, row.names=TRUE, append=TRUE) } which will create in your current working directory an Excel file named excel_eerData.xlsx. This can then be read into R with the BGVAR package as follows: eerData_read <- excel_to_list(file = "./excel_eerData.xlsx", first_column_as_time=TRUE, skipsheet=NULL, ...) which creates a list in the style of the original eerData data set. The first argument file has to be a valid path to an Excel file. The second argument first_column_as_time is a logical indicating whether the first column in each spreadsheet is a time index, while the skipsheet argument can be specified to leave out specific sheets (either as a vector of strings or numeric indices). If you want to transform the list object to a matrix, you can use the command list_to_matrix, or transform it back to a list with matrix_to_list: eerData_matrix <- list_to_matrix(eerData_read) eerData_list <- matrix_to_list(eerData_matrix) # Estimation The main function of the BGVAR package is its bgvar function. The unique feature of this toolbox is that we use Bayesian shrinkage priors, optionally with stochastic volatility, to estimate the country models in the GVAR. In its current version, three priors for the country VARs are implemented: • Non-conjugate Minnesota prior (MN, Litterman 1986; Koop and Korobilis 2010) • Stochastic Search Variable Selection prior (SSVS, George, Sun, and Ni 2008) • Normal-Gamma prior (NG, Huber and Feldkircher 2019) The first two priors are described in more detail in Crespo Cuaresma, Feldkircher, and Huber (2016).
For a more technical description of the Normal-Gamma prior see Huber and Feldkircher (2019) and for an application in the GVAR context Feldkircher and Siklos (2019). For the variances we can assume homoskedasticity or time variation (stochastic volatility). For the latter, the library relies on the stochvol package of Kastner (2016). We start with estimating our toy model using the NG prior, the reduced eerData data set and the adjusted W.trade0012 weight matrix: model.1<-bgvar(Data=eerData, W=W.trade0012, draws=100, burnin=100, plag=1, prior="NG", hyperpara=NULL, SV=TRUE, thin=1, trend=TRUE, hold.out=0, eigen=1 ) The default prior specification in bgvar is to use the NG prior with stochastic volatility and one lag for both the endogenous and weakly exogenous variables (plag=1). In general, due to its high cross-sectional dimension, the GVAR can allow for very complex univariate dynamics and it might thus not be necessary to increase the lag length considerably as in a standard VAR (Burriel and Galesi 2018). The setting hyperpara=NULL implies that we use the standard hyperparameter specification for the NG prior; see the help files for more details. Other standard specifications that should be submitted by the user comprise the number of posterior draws (draws) and burn-ins (burnin, i.e., the draws that are discarded). To ensure that the MCMC estimation has converged, a high number of burn-ins is recommended (say 15,000 to 30,000). Saving the full set of posterior draws can eat up a lot of storage. To reduce this, we can use a thinning interval which stores only every thin$$^{th}$$ draw of the global posterior output. For example, with thin=10 and draws=5000 posterior draws, the amount of MCMC draws stored is 500. trend=TRUE implies that the model is estimated using a trend. Note that regardless of the trend specification, each equation always automatically includes an intercept term. Expert users might want to make further adjustments. These have to be provided via a list (expert).
For example, to speed up computation, it is possible to invoke parallel computing in R. The number of available cpu cores can be specified via cores. Ideally this number is equal to the number of units $$N$$ (expert=list(cores=N)). Based on the user’s operating system, the package then either uses parLapply (Windows platform) or mclapply (non-Windows platform) to invoke parallel computing. If cores=NULL, the unit models are estimated sequentially in a loop (via R’s lapply function). To use other / own apply functions, pass them on via the argument applyfun. As another example, we might be interested in inspecting the output of the $$N$$ country models in more detail. To do so, we could provide expert=list(save.country.store=TRUE), which allows saving the whole posterior distribution of each unit / country model. For storage reasons, the default is set to FALSE and only the posterior medians of the single country models are reported. Note that even in this case, the whole posterior distribution of the global model is stored. We estimated the above model with stochastic volatility (SV=TRUE). There are several reasons why one may want to let the residual variances change over time. First and foremost, most sample periods used in macroeconometrics are nowadays rather volatile, including severe recessions. Hence, accounting for time variation might improve the fit of the model (Primiceri 2005; Sims and Zha 2006; Dovern, Feldkircher, and Huber 2016; Huber 2016). Second, the specification implemented in the toolbox nests the homoskedastic case. It is thus a good choice to start with the more general case when first confronting the model with the data. For structural analysis such as the calculation of impulse responses, we take the variance covariance matrix with the median volatilities (over the sample period) on its diagonal.
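Putting the expert options described above together, a parallel run could look like the following sketch (the core count is our assumption; choose it for your machine, and note that both expert settings are optional):

```r
# Sketch: estimate the unit models in parallel and keep the full
# posterior of each country model
n_cores <- min(length(eerData), parallel::detectCores() - 1)
model.par <- bgvar(Data = eerData, W = W.trade0012,
                   plag = 1, draws = 100, burnin = 100, prior = "NG",
                   expert = list(cores = n_cores, save.country.store = TRUE))
```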
If we want to look at the volatilities of the first equation (y) in the euro area country model, we can type: model.1$cc.results$sig$EA[,"EA.y","EA.y"] To discard explosive draws, we can compute the eigenvalues of the reduced form of the global model, written in its companion form. Unfortunately, this can only be done once the single models have been estimated and stacked together (and hence not directly built into the MCMC algorithm for the country models). To discard draws that lead to eigenvalues greater than 1.05, set eigen=1.05. We can look at the 10 largest eigenvalues by typing: model.1$stacked.results$F.eigen[1:10] ## [1] 0.9835544 0.9523397 0.9795195 0.9907462 0.9863376 0.9955975 0.9789576 ## [8] 0.9892011 0.9966557 0.9918312 Last, we have used the default option hold.out=0, which implies that we use the full sample period to estimate the GVAR. For the purpose of forecast evaluation, hold.out could be set to a positive number, which then would imply that the last hold.out observations are reserved as a hold-out sample and not used to estimate the model. ## Model Output and Diagnostic Checks Having estimated the model, we can summarize the outcome in various ways. First, we can use the print method print(model.1) ## --------------------------------------------------------------------------- ## Model Info: ## Prior: Normal-Gamma prior (NG) ## Number of lags for endogenous variables: 1 ## Number of lags for weakly exogenous variables: 1 ## Number of posterior draws: 100/1=100 ## Size of GVAR object: 0.5 Mb ## Trimming leads to 34 (34%) stable draws out of 100 total draws.
## --------------------------------------------------------------------------- ## Model specification: ## ## EA: y, Dp, rer, stir, ltir, tb, y*, Dp*, rer*, stir*, ltir*, tb*, poil**, trend ## US: y, Dp, rer, stir, ltir, tb, poil, y*, Dp*, rer*, stir*, ltir*, tb*, trend ## RU: y, Dp, rer, stir, tb, y*, Dp*, rer*, stir*, ltir*, tb*, poil**, trend This just prints the submitted arguments of the bgvar object along with the model specification for each unit. The asterisks indicate weakly exogenous variables, double asterisks exogenous variables and variables without asterisks the endogenous variables per unit. The summary method provides a more detailed way to analyze the output. It computes descriptive statistics like convergence properties of the MCMC chain, serial autocorrelation in the errors and the average pairwise correlation of cross-unit residuals. summary(model.1) ## --------------------------------------------------------------------------- ## Model Info: ## Prior: Normal-Gamma prior (NG) ## Number of lags for endogenous variables: 1 ## Number of lags for weakly exogenous variables: 1 ## Number of posterior draws: 100/1=100 ## Number of stable posterior draws: 34 ## Number of cross-sectional units: 3 ## --------------------------------------------------------------------------- ## Convergence diagnostics ## Geweke statistic: ## 92 out of 360 variables' z-values exceed the 1.96 threshold (25.56%).
## --------------------------------------------------------------------------- ## F-test, first order serial autocorrelation of cross-unit residuals ## Summary statistics: ## ========= ========== ====== ## \ # p-values in % ## ========= ========== ====== ## >0.1 5 27.78% ## 0.05-0.1 1 5.56% ## 0.01-0.05 3 16.67% ## <0.01 9 50% ## ========= ========== ====== ## --------------------------------------------------------------------------- ## Average pairwise cross-unit correlation of unit-model residuals ## Summary statistics: ## ======= ======== ======== ========== ========== ======== ======== ## \ y Dp rer stir ltir tb ## ======= ======== ======== ========== ========== ======== ======== ## <0.1 0 (0%) 3 (100%) 2 (66.67%) 1 (33.33%) 2 (100%) 3 (100%) ## 0.1-0.2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) ## 0.2-0.5 3 (100%) 0 (0%) 1 (33.33%) 2 (66.67%) 0 (0%) 0 (0%) ## >0.5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) ## ======= ======== ======== ========== ========== ======== ======== ## --------------------------------------------------------------------------- We can now have a closer look at the output provided by summary. The header contains some basic information about the prior used to estimate the model, how many lags, posterior draws and countries. The next line shows Geweke’s CD statistic, which is calculated using the coda package. Geweke’s CD assesses practical convergence of the MCMC algorithm. In a nutshell, the diagnostic is based on a test for equality of the means of the first and last part of a Markov chain (by default we use the first 10% and the last 50%). If the samples are drawn from the stationary distribution of the chain, the two means are equal and Geweke’s statistic has an asymptotically standard normal distribution. The test statistic is a standard Z-score: the difference between the two sample means divided by its estimated standard error. 
The standard error is estimated from the spectral density at zero and so takes into account any autocorrelation. The test statistic shows that only a small fraction of all coefficients did not converge. Increasing the number of burn-ins can help decrease this fraction further. The statistic can also be calculated by typing conv.diag(model.1). The next model statistic is the likelihood of the global model. This statistic can be used for model comparison. Next, to assess whether first-order serial autocorrelation is present, we provide the results of a simple F-test. The table shows the share of p-values that fall into different significance categories. Since the null hypothesis is that of no serial correlation, we would like to have as many large ($$>0.1$$) p-values as possible. The statistics show that already with one lag, serial correlation is modest in most equations’ residuals. This could be the case since we have estimated the unit models with stochastic volatility. To further decrease serial correlation in the errors, one could increase the number of lags via plag. The last part of the summary output contains a statistic of cross-unit correlation of (posterior median) residuals. One assumption of the GVAR framework is that of negligible cross-unit correlation of the residuals. Significant correlations prohibit structural and spillover analysis (Dees et al. 2007). In this example, correlation is reasonably small. Some other useful methods the BGVAR toolbox offers include the coef (or coefficients as its alias) method to extract the $$k \times k \times plag$$ matrix of reduced form coefficients of the global model. Via the vcov command, we can access the global variance covariance matrix and the logLik() function allows us to gather the global log likelihood (as provided by the summary command).
Fmat <- coef(model.1) Smat <- vcov(model.1) lik <- logLik(model.1) Last, we can have a quick look at the in-sample fit using either the posterior median of the country models’ residuals (global=FALSE) or those of the global solution of the GVAR (global=TRUE). The in-sample fit can also be extracted by using fitted(). Here, we show the in-sample fit of the euro area model (global=FALSE). yfit <- fitted(model.1) plot(model.1, global=FALSE, resp="EA") We can estimate the model with two further priors on the unit models, the SSVS prior and the Minnesota prior. To give a concrete example, the SSVS prior can be invoked by typing: model.ssvs.1<-bgvar(Data=eerData, W=W.trade0012, draws=100, burnin=100, plag=1, prior="SSVS", hyperpara=NULL, SV=TRUE, thin=1, trend=TRUE, eigen=1, expert=list(save.shrink.store=TRUE) ) One feature of the SSVS prior is that it allows us to look at the posterior inclusion probabilities to gauge the importance of particular variables. For example, we can have a look at the PIPs of the euro area model by typing: model.ssvs.1$cc.results$PIP$PIP.cc$EA ## y Dp rer stir ltir tb ## y_lag1 1.00 0.80 0.92 0.23 0.17 0.76 ## Dp_lag1 0.28 0.34 0.21 0.22 0.42 0.30 ## rer_lag1 0.53 0.41 1.00 1.00 0.23 0.68 ## stir_lag1 0.98 0.25 0.61 1.00 0.19 0.70 ## ltir_lag1 0.62 0.87 0.34 0.31 1.00 0.76 ## tb_lag1 0.28 0.33 0.37 0.31 0.18 1.00 ## y* 0.95 0.61 0.39 0.14 0.13 0.04 ## Dp* 0.25 0.26 0.20 0.03 0.09 0.54 ## rer* 0.10 0.22 1.00 0.65 0.16 0.27 ## stir* 0.18 0.85 0.14 0.08 0.22 0.16 ## ltir* 0.06 0.39 0.59 0.88 0.07 0.30 ## tb* 0.15 1.00 0.13 0.14 0.23 0.39 ## poil** 0.13 1.00 0.29 0.20 0.16 0.04 ## y*_lag1 0.43 0.25 0.54 0.19 0.11 0.69 ## Dp*_lag1 0.92 0.26 0.28 0.38 0.21 0.13 ## rer*_lag1 0.18 0.27 0.93 0.12 0.10 0.14 ## stir*_lag1 0.27 0.20 0.32 0.57 0.16 0.45 ## ltir*_lag1 1.00 1.00 0.47 1.00 0.29 0.50 ## tb*_lag1 0.09 1.00 0.65 0.88 0.23 0.68 ## poil**_lag1 0.15 0.73 0.48 0.23 0.08 0.13 ## cons 0.28 0.43 0.74 0.17 0.09 1.00 ## trend 0.89 0.65 0.56 0.04 0.23 
1.00 The equations in the EA country model can be read column-wise with the rows representing the associated explanatory variables. The example shows that besides other variables, the trade balance (tb) is an important determinant of the real exchange rate (rer). We can also have a look at the average of the PIPs across all units: model.ssvs.1$cc.results$PIP$PIP.avg ## y Dp rer stir ltir tb poil ## y_lag1 1.00000000 0.5266667 0.4000000 0.30333333 0.200 0.6200000 0.17 ## Dp_lag1 0.20666667 0.5033333 0.1533333 0.18666667 0.285 0.3366667 0.34 ## rer_lag1 0.24333333 0.5700000 1.0000000 0.70333333 0.185 0.8533333 0.98 ## stir_lag1 0.99333333 0.1533333 0.4400000 1.00000000 0.140 0.3033333 0.29 ## ltir_lag1 0.44000000 0.4650000 0.3050000 0.23500000 1.000 0.5000000 0.28 ## tb_lag1 0.47333333 0.4833333 0.4866667 0.36000000 0.120 1.0000000 0.02 ## y* 0.44666667 0.2500000 0.4866667 0.41333333 0.175 0.3733333 1.00 ## Dp* 0.25333333 0.4766667 0.2366667 0.04666667 0.395 0.6333333 1.00 ## rer* 0.46333333 0.4666667 0.7800000 0.40333333 0.580 0.4366667 0.34 ## stir* 0.26333333 0.3933333 0.2300000 0.13666667 0.145 0.2366667 0.26 ## ltir* 0.07333333 0.4766667 0.2866667 0.39333333 0.160 0.2166667 0.14 ## tb* 0.32666667 0.8500000 0.1266667 0.16666667 0.200 0.3500000 1.00 ## poil** 0.56500000 0.5600000 0.2950000 0.29000000 0.160 0.3850000 NaN ## y*_lag1 0.32000000 0.1300000 0.5466667 0.42000000 0.150 0.4000000 0.90 ## Dp*_lag1 0.48000000 0.1566667 0.1566667 0.23333333 0.255 0.1600000 0.22 ## rer*_lag1 0.34666667 0.4600000 0.8133333 0.10333333 0.550 0.2233333 0.10 ## stir*_lag1 0.44666667 0.1733333 0.3333333 0.30333333 0.160 0.2900000 0.07 ## ltir*_lag1 0.40666667 0.4066667 0.2533333 0.39000000 0.340 0.2866667 0.21 ## tb*_lag1 0.15333333 0.7300000 0.2633333 0.40000000 0.215 0.3400000 0.23 ## poil**_lag1 0.22500000 0.4100000 0.3750000 0.22000000 0.080 0.1400000 NaN ## cons 0.47000000 0.3100000 0.6100000 0.26000000 0.080 0.7066667 0.42 ## trend 0.59666667 0.4033333 0.2333333 0.28000000 
0.305 0.4733333 0.87 ## poil_lag1 0.17000000 0.1400000 0.0700000 0.95000000 0.420 0.3000000 1.00 This shows that the same determinants for the real exchange rate appear as important regressors in other country models. ## Different Specifications of the Model In this section we explore different specifications of the structure of the GVAR model. Other specification choices that relate more to the time series properties of the data, such as specifying different lags and priors, are left for the reader to explore. We will use the SSVS prior and judge the different specifications by examining the posterior inclusion probabilities. As a first modification, we could use different weights for different variable classes as proposed in Eickmeier and Ng (2015). For example, we could use financial weights to construct weakly exogenous variables of financial factors and trade weights for real variables. The eerData set provides us with a list of different weight matrices that are described in the help files. Now we specify the sets of variables to be weighted: variable.list<-list();variable.list$real<-c("y","Dp","tb");variable.list$fin<-c("stir","ltir","rer") We can then re-estimate the model and hand over the variable.list via the argument expert: # weights for first variable set tradeW.0012, for second finW0711 model.ssvs.2<-bgvar(Data=eerData, W=W.list[c("tradeW.0012","finW0711")], plag=1, draws=100, burnin=100, prior="SSVS", SV=TRUE, eigen=1, expert=list(variable.list=variable.list,save.shrink.store=TRUE), trend=TRUE ) Another specification would be to include a foreign variable only when its domestic counterpart is missing. For example, when working with nominal bilateral exchange rates we probably do not want to also include their weighted average (which corresponds to something like an effective exchange rate). Using the previous model we could place an exclusion restriction on foreign long-term interest rates using Wex.restr, which is again handed over via expert.
The following includes foreign long-term rates only in those country models where no domestic long-term rates are available: # does include ltir* only when ltir is missing domestically model.ssvs.3<-bgvar(Data=eerData, W=W.trade0012, plag=1, draws=100, burnin=100, prior="SSVS", SV=TRUE, eigen=1, expert=list(Wex.restr="ltir",save.shrink.store=TRUE), trend=TRUE ) print(model.ssvs.3) ## --------------------------------------------------------------------------- ## Model Info: ## Prior: Stochastic Search Variable Selection prior (SSVS) ## Number of lags for endogenous variables: 1 ## Number of lags for weakly exogenous variables: 1 ## Number of posterior draws: 100/1=100 ## Size of GVAR object: 0.5 Mb ## Trimming leads to 28 (28%) stable draws out of 100 total draws. ## --------------------------------------------------------------------------- ## Model specification: ## ## EA: y, Dp, rer, stir, ltir, tb, y*, Dp*, rer*, stir*, tb*, poil**, trend ## US: y, Dp, rer, stir, ltir, tb, poil, y*, Dp*, rer*, stir*, tb*, trend ## RU: y, Dp, rer, stir, tb, y*, Dp*, rer*, stir*, tb*, poil**, trend Last, we could also use a different specification of oil prices in the model. Currently, the oil price is determined endogenously within the US model. Alternatively, one could set up a stand-alone oil price model with additional variables that feeds the oil price back into the other economies as an exogenous variable (Mohaddes and Raissi 2019). The model structure would then look something like in the Figure below: For that purpose we have to remove oil prices from the US model and attach them to a separate slot in the data list. This slot has to have its own country label. We use ‘OC’ for “oil country”. eerData2<-eerData eerData2$OC<-eerData$US[,c("poil"),drop=FALSE] # move oil prices into own slot eerData2$US<-eerData$US[,c("y","Dp", "rer" , "stir", "ltir","tb")] # exclude it from US model Now we have to specify a list object that we label OC.weights.
The list has to consist of three slots with the following names: weights, variables and exo: OC.weights<-list() OC.weights$weights<-rep(1/3, 3) names(OC.weights$weights)<-names(eerData2)[1:3] # last one is OC model, hence only until 3 OC.weights$variables<-c(colnames(eerData2$OC),"y") # first entry, endog. variables, second entry weighted average of y from the other countries to proxy demand OC.weights$exo<-"poil" The first slot, weights, should be a vector of weights that sum up to unity. In the example above, we simply use $$1/N$$; other weights could include purchasing power parities (PPP). The weights are used to aggregate specific variables that in turn enter the oil model as weakly exogenous. The second slot, variables, should specify the names of the endogenous and weakly exogenous variables that are used in the OC model. In the oil price example, we include the oil price (poil) as an endogenous variable (not contained in any other country model) and a weighted average using weights of output (y) to proxy world demand as a weakly exogenous variable. Next, we specify via exo which of the endogenous variables of the OC model are fed back into the other country models. In this example we specify poil. Last, we put all this information in a further list called OE.weights (other entity weights). This is done to allow for multiple other entity models (i.e., an oil price model, a joint monetary union model, etc.). It is important that the list entry has the same name as the other entity model, in our example OC. # other entities weights with same name as new oil country OE.weights <- list(OC=OC.weights) Now we can re-estimate the model where we pass on OE.weights via the expert argument. model.ssvs.4<-bgvar(Data=eerData2, W=W.trade0012, plag=1, draws=100, burnin=100, prior="SSVS", SV=TRUE, expert=list(OE.weights=OE.weights, save.shrink.store=TRUE), trend=TRUE ) and can compare the results of the four models by e.g., looking at the average PIPs.
aux1<-model.ssvs.1$cc.results$PIP$PIP.avg;aux1<-aux1[-nrow(aux1),1:6] aux2<-model.ssvs.2$cc.results$PIP$PIP.avg;aux2<-aux2[-nrow(aux2),1:6] aux3<-model.ssvs.3$cc.results$PIP$PIP.avg;aux3<-aux3[-nrow(aux3),1:6] aux4<-model.ssvs.4$cc.results$PIP$PIP.avg;aux4<-aux4[-nrow(aux4),1:6] heatmap(aux1,Rowv=NA,Colv=NA, main="Model 1") heatmap(aux2,Rowv=NA,Colv=NA, main="Model 2") heatmap(aux3,Rowv=NA,Colv=NA, main="Model 3") heatmap(aux4,Rowv=NA,Colv=NA, main="Model 4") We could also compare the models based on their fit, the likelihood, information criteria such as the DIC, residual properties or their forecasting performance. # Impulse response functions The package allows the computation of three different types of dynamic responses, namely generalized impulse response functions (GIRFs) as in Pesaran and Shin (1998), orthogonalized impulse response functions using a Cholesky decomposition of the variance covariance matrix and, finally, impulse response functions given a set of user-specified sign restrictions. ## Recursive Identification and GIRFs Most GVAR applications deal with locally identified shocks. This implies that the shock of interest is orthogonal to the other shocks in the same unit model and hence can be interpreted in a structural way. There is still correlation between the shocks of the unit models; these responses (the spillovers) are hence not fully structural (Eickmeier and Ng 2015). Hence, some GVAR applications favor generalized impulse response functions, which per se do not rely on an orthogonalization. In BGVAR, responses to both types of shocks can be easily analyzed using the irf function. This function takes as input a model object (x) and the impulse response horizon (n.ahead); the default identification method is the recursive identification scheme via the Cholesky decomposition. Further arguments can be passed on using the wrapper expert and are discussed in the help files.
The following computes impulse responses to all N shocks with unit scaling, using the default recursive (Cholesky) identification: irf.chol<-irf(model.ssvs.1, n.ahead=24, expert=list(save.store=FALSE)) The results are stored in irf.chol$posterior, which is a four-dimensional array: $$K \times n.ahead \times nr.of shocks \times Q$$, with Q referring to the 50%, 68% and 95% quantiles of the posterior distribution of the impulse response functions. The posterior median of responses to the first shock could be accessed via irf.chol$posterior[,,1,"Q50"] Note that this example was for illustrative purposes; in most instances, we would be interested in a particular shock, and calculating responses to all shocks in the system is rather inefficient. Hence, we can provide the irf function with more information. To be more precise, let us assume that we are interested in an expansionary monetary policy shock (i.e., a decrease in short-term interest rates) in the US country model. For that purpose, we can set up a shockinfo object, which contains information about which variable we want to shock (shock), the size of the shock (scale), the specific identification method (ident), and whether it is a shock applied in a single country or in multiple countries (global). We can use the helper function get_shockinfo() to set up such a dummy object, which we can subsequently modify according to our needs. The following lines of code are used for a negative 100 bp shock applied to US short-term interest rates: # US monetary policy shock - Cholesky shockinfo_chol<-get_shockinfo("chol") shockinfo_chol$shock<-"US.stir" shockinfo_chol$scale<--100 # US monetary policy shock - GIRF shockinfo_girf<-get_shockinfo("girf") shockinfo_girf$shock<-"US.stir" shockinfo_girf$scale<--100 The shockinfo objects for Cholesky and GIRFs look exactly the same but additionally carry an attribute which classifies the particular identification scheme.
If we compare them, we notice that both have three columns defining the shock, the scale and whether it is defined as a global shock. But we also see that the attributes differ, which is important for the identification in the irf function.

shockinfo_chol
## shock scale global
## 1 US.stir -100 FALSE
shockinfo_girf
## shock scale global
## 1 US.stir -100 FALSE

Now, we identify a monetary policy shock with recursive identification:

irf.chol.us.mp<-irf(model.ssvs.1, n.ahead=24, shockinfo=shockinfo_chol, expert=list(save.store=TRUE))

The results are stored in irf.chol.us.mp. In order to save the complete set of draws, one can activate the save.store argument by setting it to TRUE within the expert settings (note: this may require a lot of storage).

names(irf.chol.us.mp)
## [1] "posterior" "ident" "shockinfo" "rot.nr" "struc.obj" "model.obj"
## [7] "IRF_store"

Again, irf.chol.us.mp$posterior is a $$K \times n.ahead \times nr.of shocks \times 7$$ array, and the last dimension contains the 50%, 68% and 95% credible intervals along with the posterior median. If save.store=TRUE, IRF_store contains the full set of impulse response draws and you can calculate additional quantiles of interest. We can plot the complete responses of a particular country by typing:

plot(irf.chol.us.mp, resp="US", shock="US.stir")

The plot shows the posterior median response (solid, black line) along with the 50% (dark grey) and 68% (light grey) credible intervals. We can also compare the Cholesky responses with GIRFs. For that purpose, let us look at a GDP shock.
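Before turning to the model, the coincidence of recursive responses and GIRFs for a shock to the first-ordered variable can be checked directly in base R. The sketch below uses a hypothetical 2x2 covariance matrix, not output from the model above: on impact, the Cholesky responses to the first shock equal the GIRF column for that shock.

```r
# Toy base-R check: for a shock to the first-ordered variable, the
# impact responses under a Cholesky factorization coincide with the
# generalized impulse responses of Pesaran and Shin (1998).
Sigma <- matrix(c(1.0, 0.3,
                  0.3, 0.5), nrow = 2, byrow = TRUE)  # hypothetical covariance

L <- t(chol(Sigma))                   # lower-triangular Cholesky factor
chol.impact <- L[, 1]                 # impact responses to shock 1

girf.impact <- Sigma[, 1] / sqrt(Sigma[1, 1])  # Sigma e_1 / sqrt(sigma_11)

all.equal(chol.impact, girf.impact, check.attributes = FALSE)  # TRUE
```

For shocks to variables ordered further down, the two generally differ.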
# cholesky
shockinfo_chol <- get_shockinfo("chol", nr_rows = 2)
shockinfo_chol$shock <- c("US.stir","US.y")
shockinfo_chol$scale <- c(1,1)
# generalized impulse responses
shockinfo_girf <- get_shockinfo("girf", nr_rows = 2)
shockinfo_girf$shock <- c("US.stir","US.y")
shockinfo_girf$scale <- c(1,1)
# Recursive US GDP
irf.chol.us.y<-irf(model.ssvs.1, n.ahead=24, shockinfo=shockinfo_chol)
# GIRF US GDP
irf.girf.us.y<-irf(model.ssvs.1, n.ahead=24, shockinfo=shockinfo_girf)
plot(irf.chol.us.y, resp="US.y", shock="US.y")
plot(irf.girf.us.y, resp="US.y", shock="US.y")
plot(irf.chol.us.y, resp="US.rer", shock="US.y")
plot(irf.girf.us.y, resp="US.rer", shock="US.y")

We see that the responses are similar. This is not surprising, because we have shocked the first variable in the US country model (y) and there are no timing restrictions on the remaining variables (they are all affected without any lag). In that case, the orthogonalized impulse responses and the GIRFs coincide. Last, we could also look at a joint, or global, shock. For example, we could be interested in the effects of a simultaneous decrease in output across major economies, such as the G-7 and Russia. For that purpose, we have to set the global column to TRUE. The following lines illustrate the joint GDP shock:

shockinfo<-get_shockinfo("girf", nr_rows = 3)
shockinfo$shock<-c("EA.y","US.y","RU.y")
shockinfo$global<-TRUE
shockinfo$scale<--1
irf.global<-irf(model.ssvs.1, n.ahead=24, shockinfo=shockinfo)
plot(irf.global, resp=c("US.y","EA.y","RU.y"), shock="Global.y")

## Identification with Zero- and Sign-Restrictions

In this section, we identify the shocks locally with sign restrictions. For that purpose, we will use another example data set and estimate a new GVAR. This data set contains one-year-ahead GDP, inflation and short-term interest rate forecasts for the USA. The forecasts come from the Survey of Professional Forecasters (SPF) database.
data("eerData")
eerData<-eerData[cN]
W.trade0012<-W.trade0012[cN,cN]
W.trade0012<-apply(W.trade0012,2,function(x)x/rowSums(W.trade0012))
# append expectations data to US model
temp <- cbind(USexpectations, eerData$US)
colnames(temp) <- c(colnames(USexpectations),colnames(eerData$US))
eerData$US <- temp
model.ssvs.eer<-bgvar(Data=eerData, W=W.trade0012, plag=1, draws=100, burnin=100, prior="SSVS", SV=TRUE)

For now, we start with the identification of two standard shocks in economics in the US model, namely an aggregate demand and an aggregate supply shock. While the shockinfo object was optional when using Cholesky identification / GIRFs, it is mandatory when working with sign restrictions. We proceed in two steps: first, we create a dummy object with get_shockinfo("sign") that contains information on the general shock setting, and then we add sign restrictions one-by-one using add_shockinfo(). The following illustrates this:

shockinfo<-get_shockinfo("sign")
shockinfo<-add_shockinfo(shockinfo, shock="US.y",
                         restriction="US.Dp", sign=">", horizon=1, prob=1, scale=1)
shockinfo<-add_shockinfo(shockinfo, shock="US.Dp",
                         restriction="US.y", sign="<", horizon=1, prob=1, scale=1)

In add_shockinfo we provide information on which variable to shock (shock), on which responses to put the sign restrictions (restriction), the direction of the restriction (sign) and the horizon over which these restrictions should hold (horizon). Note that the shock is always positive, but can be rescaled by scale. The argument prob allows you to specify a share of the draws for which the restrictions have to hold. This argument might be useful when working with cross-sectional sign restrictions, where the idea is that some restrictions only have to hold on average or for a certain share. The default is prob=1.
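Under the hood, sign identification amounts to drawing orthogonal rotation matrices Q and keeping only those for which the rotated impact matrix L Q satisfies the restrictions. The following is a minimal base-R sketch of that accept/reject idea with a hypothetical 2x2 covariance matrix — an illustration of the principle, not BGVAR's internal code:

```r
# Illustration only: accept/reject search over random orthogonal rotations.
set.seed(1)
Sigma <- matrix(c(1.0, 0.2,
                  0.2, 0.8), nrow = 2)   # hypothetical covariance matrix
L <- t(chol(Sigma))                      # lower-triangular impact matrix

draw_rotation <- function(n) {
  # QR decomposition of a Gaussian matrix yields an orthogonal Q
  qr.out <- qr(matrix(rnorm(n * n), n, n))
  qr.Q(qr.out) %*% diag(sign(diag(qr.R(qr.out))))
}

repeat {
  Q <- draw_rotation(2)
  impact <- L %*% Q
  # keep the draw if shock 1 moves variable 1 up and variable 2 down
  if (impact[1, 1] > 0 && impact[2, 1] < 0) break
}
impact
```

In higher dimensions, and with restrictions at several horizons, finding an admissible rotation takes many more tries, which is what the MaxTries argument discussed next controls.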
If we want to add more restrictions to a particular shock, we can simply provide vectors instead of scalars:

add_shockinfo(shockinfo, shock="US.Dp",
              restriction=c("US.y", "US.stir"), sign=c("<","<"),
              horizon=c(1,1), prob=c(1,1), scale=1)

Note that increasing the number of restrictions (on the variables or the horizon) will lead to more precise inference; however, finding a suitable rotation matrix will become substantially harder. We then invoke the irf() command to compute the impulse responses. The function draws rotation matrices using the algorithm of Rubio-Ramirez, Waggoner, and Zha (2010). In case we specify additional zero restrictions (see the next example below), we use the algorithm of Arias, Rubio-Ramirez, and Waggoner (2018). By default, we use one CPU core (cores=NULL) and do not store the full set of responses (save.store=FALSE). The maximum number of rotation matrices sampled per MCMC draw before we jump to the next draw can be specified by MaxTries.

irf.sign<-irf(model.ssvs.eer, n.ahead=24, shockinfo=shockinfo, expert=list(MaxTries=100, save.store=FALSE, cores=NULL))

We can infer the number of successful rotation matrices by looking at

irf.sign$rot.nr
## [1] "For 46 draws out of 71 draws, a rotation matrix has been found."

plot(irf.sign, resp=c("US.y","US.Dp"), shock="US.y")
plot(irf.sign, resp=c("US.y","US.Dp"), shock="US.Dp")

Several recent papers advocate the inclusion of survey data in a VAR. Castelnuovo and Surico (2010) show that including inflation expectations mitigates the price puzzle (i.e., the counterintuitive positive movement of inflation in response to a monetary tightening). D'Amico and King (2015) go one step further and argue that expectations should always be included in a VAR model since they contain information that is not available in standard macroeconomic data. They also show how to make inference with survey data in a VAR framework and propose so-called rationality conditions.
For an application in a GVAR context, see Boeck, Feldkircher, and Siklos (2021). In a nutshell, these conditions put restrictions on actual data to match the expectations either on average over (ratio.avg) or at the end of (ratio.H) the forecast horizon. Let us look at a concrete example.

shockinfo<-get_shockinfo("sign")
shockinfo<-add_shockinfo(shockinfo, shock="US.stir_t+4",
                         restriction=c("US.Dp_t+4","US.stir","US.y_t+4","US.stir_t+4","US.Dp_t+4","US.y_t+4"),
                         sign=c("<","0","<","ratio.avg","ratio.H","ratio.H"),
                         horizon=c(1,1,1,5,5,5), prob=1, scale=1)
irf.sign.zero<-irf(model.ssvs.eer, n.ahead=20, shockinfo=shockinfo, expert=list(MaxTries=100, save.store=TRUE))

The figure below shows the results for short-term interest rates (stir) and output (y).

# rationality condition: US.stir_t+4 on impact is equal to average of IRF of
# US.stir between horizon 2 and 5
matplot(cbind(irf.sign.zero$IRF_store["US.stir_t+4",1,,1],
              irf.sign.zero$IRF_store["US.stir",1,,1]),
        type="l",ylab="",main="stir",lwd=2,xaxt="n")
axis(side=1,at=c(1:5,9,13,17,21,25),labels=c(0:4,8,12,16,20,24))
legend("topright",lty=c(1,2),c("expected","actual"),lwd=2,bty="n",col=c("black","red"))
segments(x0=2,y0=1,x1=5,y1=1,lwd=2,lty=3,col="grey")
points(1,1,col="grey",pch=19,lwd=4)
abline(v=c(2,5),lty=3,col="grey",lwd=2)
# rationality condition: US.y_t+4 on impact is equal to H-step ahead IRF
# of US.y in horizon 5
matplot(cbind(irf.sign.zero$IRF_store["US.y_t+4",1,,1],
              irf.sign.zero$IRF_store["US.y",1,,1]),
        type="l",ylab="",main="y",lwd=2,xaxt="n")
axis(side=1,at=c(1:5,9,13,17,21,25),labels=c(0:4,8,12,16,20,24))
legend("topright",lty=c(1,2),c("expected","actual"),lwd=2,bty="n",col=c("black","red"))
yy<-irf.sign.zero$IRF_store["US.y_t+4",1,1,1]
segments(x0=1,y0=yy,x1=5,y1=yy,lwd=2,lty=3,col="grey")
abline(v=c(1,5),col="grey",lty=3)
points(1,yy,col="grey",pch=19,lwd=4)
points(5,yy,col="grey",pch=19,lwd=4)

Impulse responses that refer to observed data are in red (dashed), and the ones referring to expected
data in black. The condition we have imposed on short-term interest rates (top panel) was that observed rates should equal the shock to expected rates on average over the forecast horizon (one year, i.e., on impact plus 4 quarters). The respective period is marked by the two vertical grey lines. Put differently, the average of the red-dashed line over the forecast horizon has to equal the expectation shock on impact (grey dot). For output, shown in the bottom panel, we have by contrast imposed a condition that has to hold exactly at the forecast horizon. The red line, the impulse response of observed output, has to meet the impact response of expected output at $$h=5$$. In the figure, these two points are indicated by the two grey dots. The last example we look at is how to put restrictions on the cross-section. Chudik and Fidora (2011) and Cashin et al. (2014) argue that a major advantage of GVARs is that they allow one to place restrictions on variables from different countries, which should further sharpen inference. They apply cross-sectional restrictions to identify oil supply and demand shocks, placing the restrictions on oil-importing countries' GDP. Here, we follow Feldkircher, Gruber, and Huber (2020), who use cross-sectional restrictions to identify a term spread shock in the euro area. Since they use separate country models for the members of the euro area, the joint monetary policy has to be modeled explicitly. One idea that has been put forth in recent applications is to set up an additional country model for the joint monetary policy in the euro area. In the next example, we follow Georgiadis (2015) and set up an ECB model that determines euro area interest rates according to a Taylor rule. This idea follows the set-up of the additional oil price model and can be summarized graphically in the picture below.
We can look at the data by typing:

data(monthlyData);monthlyData$OC<-NULL
names(monthlyData)
## [1] "BG" "CZ" "DK" "HR" "HU" "PL" "RO" "SE" "GB" "AT" "BE" "DE" "GR" "ES" "FI"
## [16] "FR" "IE" "IT" "NL" "PT" "US" "CN" "JP" "RU" "TR" "CA" "EB"
# list of weights of other entities with same name as additional country model
OE.weights <- list(EB=EA.weights)
EA_countries <- c("AT", "BE", "DE","ES", "FI","FR")
# "IE", "IT", "NL", "PT","GR","SK","MT","CY","EE","LT","LV")

To estimate the GVAR with an 'EB' country model, we have to specify additional arguments, similar to the example with the oil price model discussed above. The monthlyData set already comes along with a pre-specified list EA.weights with the mandatory slots weights, variables and exo. The specification implies that the euro area monetary policy model (EB) includes EAstir, total.assets, M3 and ciss as endogenous variables (these are contained in monthlyData$EB). We use PPP weights contained in weights to aggregate output (y) and prices (p) from euro area countries and include them as weakly exogenous variables. Euro area short-term interest rates (EAstir) and the ciss indicator (ciss), specified in exo, are then passed on as exogenous variables to the remaining countries. Finally, we put EA.weights into the OE.weights list, label the slot EB (matching the name of the additional country model in names(monthlyData)), and estimate the model:

monthlyData <- monthlyData[c(EA_countries,"EB")]
W<-W[EA_countries,EA_countries]
W<-apply(W,2,function(x)x/rowSums(W))
OE.weights$EB$weights <- OE.weights$EB$weights[names(OE.weights$EB$weights)%in%EA_countries]
# estimates the model
model.ssvs<-bgvar(Data=monthlyData, W=W, draws=200, burnin=200, plag=1, prior="SSVS", eigen=1.05, expert=list(OE.weights=OE.weights))

##
## Start estimation of Bayesian Global Vector Autoregression.
##
## Prior: Stochastic Search Variable Selection prior.
## Lag order: 1 (endo.), 1 (w. exog.)
## Stochastic volatility: enabled.
## Number of cores used: 1.
## Thinning factor: 1. This means every draw is saved.
## Hyperparameter setup:
## No hyperparameters are chosen, default setting applied.
##
## Estimation of country models starts...
## Model: 1 / 7 done.
## Model: 2 / 7 done.
## Model: 3 / 7 done.
## Model: 4 / 7 done.
## Model: 5 / 7 done.
## Model: 6 / 7 done.
## Model: 7 / 7 done.
## Estimation done and took 0 mins 9 seconds.
## Start stacking:
##
## Stacking finished.
## Computation of BGVAR yields 199 (100%) draws (active trimming).
## Needed time for estimation of bgvar: 0 mins 9 seconds.
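As a quick aside, the row-normalization idiom used when preparing the weight matrices above — apply(W, 2, function(x) x/rowSums(W)) — can be sanity-checked on a toy matrix in base R; the W below is hypothetical:

```r
# Toy check: dividing each column elementwise by rowSums(W)
# makes every row of the normalized weight matrix sum to one,
# while keeping the zero diagonal intact.
W <- matrix(c(0, 2, 1,
              3, 0, 1,
              1, 1, 0), nrow = 3, byrow = TRUE)
W.norm <- apply(W, 2, function(x) x / rowSums(W))
rowSums(W.norm)   # 1 1 1
diag(W.norm)      # 0 0 0
```

This is exactly the property bgvar expects of W: zero diagonal and rows summing to unity.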
We can now impose a joint shock on long-term interest rates for selected countries, using sign restrictions on the cross-section, with the following lines of code:

# imposes sign restrictions on the cross-section and for a global shock
# (long-term interest rates)
shockinfo<-get_shockinfo("sign")
for(cc in c("AT","BE","FR")){
  shockinfo<-add_shockinfo(shockinfo, shock=paste0(cc,".ltir"),
                           restriction=paste0(cc,c(".ip",".p")),
                           sign=c("<","<"), horizon=c(1,1), prob=c(0.5,0.5),
                           scale=c(-100,-100), global=TRUE)
}

We can have a look at the restrictions by inspecting the shockinfo object:

shockinfo
## shock restriction sign horizon scale prob global
## 1 AT.ltir AT.ip < 1 -1 0.5 TRUE
## 2 AT.ltir AT.p < 1 -1 0.5 TRUE
## 3 BE.ltir BE.ip < 1 -1 0.5 TRUE
## 4 BE.ltir BE.p < 1 -1 0.5 TRUE
## 5 FR.ltir FR.ip < 1 -1 0.5 TRUE
## 6 FR.ltir FR.p < 1 -1 0.5 TRUE

Note the column prob. Here, we have specified that the restrictions have to hold only for half of the countries. We could make the restrictions stricter by increasing this share. We can now compute the impulse responses using the same function as before.

irf.sign.ssvs<-irf(model.ssvs, n.ahead=24, shockinfo=shockinfo, expert=list(MaxTries=500))

To verify the sign restrictions, type:

irf.sign.ssvs$posterior[paste0(EA_countries[-c(3,12)],".ltir"),1,1,"Q50"]
## AT.ltir BE.ltir ES.ltir FI.ltir FR.ltir
## -1.0000000 -0.9933509 -0.9632447 -0.8812171 -1.0231577
irf.sign.ssvs$posterior[paste0(EA_countries,".ip"),1,1,"Q50"]
## AT.ip BE.ip DE.ip ES.ip FI.ip FR.ip
## -0.002559368 -0.043602492 -0.006863765 -0.004440010 0.001735155 -0.034581966

The following plots the output responses for selected euro area countries.
plot(irf.sign.ssvs, resp=c("AT.ip"), shock="Global.ltir")
plot(irf.sign.ssvs, resp=c("BE.ip"), shock="Global.ltir")
plot(irf.sign.ssvs, resp=c("DE.ip"), shock="Global.ltir")
plot(irf.sign.ssvs, resp=c("ES.ip"), shock="Global.ltir")

## Forecast Error Variance Decomposition (FEVD)

Forecast error variance decompositions indicate the amount of information each variable contributes to the other variables in the autoregression. They are calculated by examining how much of the forecast error variance of each variable can be explained by exogenous shocks to the other variables. In a system with fully orthogonalized errors, the shares of the FEVD sum up to 1. In the GVAR context, however, since we identify a shock only locally in a particular country model and there is still a certain degree of residual correlation, the shares typically exceed unity. By contrast, a fully orthogonalized system, obtained for example by means of a Cholesky decomposition, would yield shares that sum up to unity but entails assumptions that are probably hard to defend. In the case of the Cholesky decomposition, this would imply timing restrictions, i.e., which variables in which units are affected immediately and which only with a lag. One way of fixing this is to use generalized forecast error variance decompositions (GFEVDs). As with GIRFs, these are independent of the ordering but, since the shocks are not orthogonalized, yield shares that exceed unity. Recently, Lanne and Nyberg (2016) proposed a way of scaling the GFEVDs, which has the nice property that the shares sum up to 1 and the results are independent of the ordering of the variables in the system. To calculate them, we can use the gfevd command. We can either use a running mean (running=TRUE) or the full set of posterior draws. The latter is computationally very expensive.
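The scaling itself is easy to see in isolation: each row of unscaled GFEVD shares is divided by its row total, so the scaled shares sum to one by construction. A toy base-R sketch with a hypothetical matrix of unscaled shares (variables in rows, shocks in columns):

```r
# Toy sketch of the Lanne-Nyberg scaling (hypothetical numbers):
# divide each row of unscaled GFEVD shares by its row total so
# that the scaled shares sum to one.
gfevd.raw <- matrix(c(0.70, 0.25, 0.15,
                      0.10, 0.80, 0.20,
                      0.05, 0.30, 0.90), nrow = 3, byrow = TRUE)
gfevd.ln <- gfevd.raw / rowSums(gfevd.raw)   # row-wise normalization
rowSums(gfevd.ln)   # 1 1 1
```

Note that the unscaled row totals exceed unity here, mirroring the GVAR situation described above.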
#calculates the LN GFEVD
gfevd.us.mp=gfevd(model.ssvs.eer,n.ahead=24,running=TRUE,cores=4)$FEVD
##
## Start computing generalized forecast error variance decomposition of Bayesian Global Vector Autoregression.
##
## Start computation on 4 cores (71 stable draws in total).
## Size of IRF object: 0.1 Mb
## Needed time for computation: 0 mins 2 seconds.
# get position of EA
idx<-which(grepl("EA.",dimnames(gfevd.us.mp)[[2]]))
own<-colSums(gfevd.us.mp["EA.y",idx,])
foreign<-colSums(gfevd.us.mp["EA.y",-idx,])
barplot(t(cbind(own,foreign)),legend.text=c("own","foreign"))

The plot above shows a typical pattern: On impact and in the first periods, EA variables (own) explain a large share of the GFEVD. Over time, and through the lag structure of the model, other countries' variables show up more strongly as important determinants of the EA output error variance. In case we want to focus on a single country model, whose shocks we have fully identified using either a Cholesky decomposition or sign restrictions, we can compute a simple forecast error variance decomposition (FEVD). This can be done by using the command fevd(). Since the computation is very time-consuming, the FEVDs are based on the posterior median only (as opposed to calculating FEVDs for each MCMC draw or using a running mean). In case the underlying shock has been identified via sign restrictions, the corresponding rotation matrix is the one that fulfills the sign restrictions at the point estimate of the posterior median of the reduced-form coefficients (stored in irf.obj$struc.obj$Rmed). Alternatively, one can submit a rotation matrix using the option R.

# Unconditional and Conditional Forecasts

In this section, we demonstrate how the package can be used for forecasting. We distinguish between unconditional and conditional forecasting.
Typical applications of unconditional forecasting are to select a model from a range of candidate models or for out-of-sample forecasting. Conditional forecasts can be used for scenario analysis by comparing a forecast with a fixed future path of a variable of interest to its unconditional forecast.

## Unconditional Forecasts

Since the GVAR framework was developed to capture cross-country dependencies, it can handle a rich set of dynamics and interdependencies. This can also be useful for forecasting, either of global components (e.g., global output) or of country-specific variables controlling for global factors. Pesaran, Schuermann, and Smith (2009) show that the GVAR yields competitive forecasts for a range of macroeconomic and financial variables. Crespo Cuaresma, Feldkircher, and Huber (2016) demonstrate that Bayesian shrinkage priors can help improve GVAR forecasts, and Dovern, Feldkircher, and Huber (2016) and Huber (2016) provide evidence of further gains in forecast performance from using GVARs with stochastic volatility. To compute forecasts with the BGVAR package, we use the command predict. To be able to evaluate the forecast, we have to specify the size of the hold-out sample when estimating the model. Here, we choose a hold-out sample of 8 observations by setting hold.out=8 (the default value is hold.out=0):

model.ssvs.h8<-bgvar(Data=eerData, W=W.trade0012, draws=500, burnin=500,
                     plag=1, prior="SSVS", hyperpara=NULL, SV=TRUE,
                     thin=1, trend=TRUE, hold.out=8, eigen=1)

The forecasts can then be calculated using the predict function. We calculate forecasts up to 8 periods ahead by setting n.ahead=8:

fcast <- predict(model.ssvs.h8, n.ahead=8, save.store=TRUE)

The forecasts are stored in fcast$fcast, which also contains the credible intervals of the predictive posterior distribution. We can evaluate the forecasts against the retained observations by looking at the root mean squared errors (RMSEs) or log-predictive scores (LPS).
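Conceptually, the RMSE evaluation compares the retained hold-out observations with the point forecasts, variable by variable. A base-R sketch with hypothetical numbers (not the package's exact implementation):

```r
# Hand-rolled RMSE for one variable over a hold-out sample of four
# periods (hypothetical values; the package helpers work on the full
# forecast object instead).
actual   <- c(0.5, 0.7, 0.6, 0.8)   # retained hold-out observations
forecast <- c(0.4, 0.6, 0.7, 0.7)   # posterior point forecasts
sqrt(mean((actual - forecast)^2))   # 0.1
```

The LPS additionally takes forecast uncertainty into account by evaluating the predictive density at the realized values.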
lps.h8 <- lps(fcast)
rmse.h8 <- rmse(fcast)

The objects lps.h8 and rmse.h8 then each contain an $$8 \times k$$ matrix with the LPS scores / RMSEs for each variable in the system over the forecast horizon. Last, we can visualize the forecasts by typing

plot(fcast, resp="US.Dp", cut=8)

with cut denoting the number of realized data points that should be shown in the plot prior to the start of the forecasts.

## Conditional Forecasts

Similar to the structural analysis, it is possible to compute conditional forecasts, with the conditioning imposed on a variable identified in a country model. For that purpose, we use the methodology outlined in Waggoner and Zha (1999), applied in the GVAR context by Feldkircher, Huber, and Moder (2015). The following lines set up a conditional forecast that holds inflation in the US country model fixed for five periods at its last observed value in the sample.

# matrix with constraints
constr <- matrix(NA,nrow=fcast$n.ahead,ncol=ncol(model.ssvs.h8$xglobal))
colnames(constr) <- colnames(model.ssvs.h8$xglobal)
# set "US.Dp" for five periods on its last value
constr[1:5,"US.Dp"] <- model.ssvs.h8$xglobal[nrow(model.ssvs.h8$xglobal),"US.Dp"]
# compute conditional forecast (hard restriction)
cond_fcast <- predict(model.ssvs.h8, n.ahead=8, constr=constr, constr_sd=NULL)

We could impose the same restrictions as "soft conditions" that account for uncertainty by drawing from a Gaussian distribution with the conditional forecast in constr as mean and standard deviations given in the matrix constr_sd, of the same size as constr.
# add uncertainty to conditional forecasts
constr_sd <- matrix(NA,nrow=fcast$n.ahead,ncol=ncol(model.ssvs.h8$xglobal))
colnames(constr_sd) <- colnames(model.ssvs.h8$xglobal)
constr_sd[1:5,"US.Dp"] <- 0.001
# compute conditional forecast with soft restrictions
cond_fcast2 <- predict(model.ssvs.h8, n.ahead=8, constr=constr, constr_sd=constr_sd)

We can then compare the results:

plot(cond_fcast, resp="US.Dp", cut=10)
plot(cond_fcast2, resp="US.Dp", cut=10)

with cut denoting the number of realized data points that should be shown in the plot prior to the start of the conditioning.

# Appendix

## Function Arguments

bgvar

Main arguments and description of the function bgvar.

• Data: Either a
  • list object of length $$N$$ that contains the data. Each element of the list refers to a country / entity. The number of columns (i.e., variables) in each country model can be different. The $$T$$ rows (i.e., number of time observations), however, need to be the same for each country. Country and variable names are not allowed to contain a . [dot].
  • matrix of dimension $$T \times k$$, with $$k$$ denoting the sum of all endogenous variables of the system. The column names should consist of two parts, separated by a . [dot]. The first part should denote the country / entity and the second part the name of the variable. Country and variable names are not allowed to contain a . [dot].
• W: An $$N \times N$$ weight matrix with 0 elements on the diagonal and rows that sum up to unity, or a list of weight matrices. See the help files for getweights for more details.
• plag: Number of lags used (the same for domestic, exogenous and weakly exogenous variables). Default set to plag=1.
• draws: Number of draws saved. Default set to draws=5000.
• burnin: Number of burn-ins. Default set to burnin=5000.
• prior: Either "SSVS", "MN" or "NG". See details below. Default set to prior="NG".
• SV: If set to TRUE, models are fitted with stochastic volatility using the stochvol and GIGrvg packages.
Due to storage issues, the whole history of the $$T$$ variance-covariance matrices is not kept. Consequently, the BGVAR package shows only one set of impulse responses (with the variance-covariance matrix based on the median volatilities over the sample period) instead of $$T$$ sets. Specify SV=FALSE to turn SV off.
• hold.out: Defines the hold-out sample. Default is no hold-out sample, thus set to zero.
• thin: A thinning interval which keeps every 'thin'th draw from the posterior output. For example, thin=10 saves every tenth draw. Default set to thin=1.
• hyperpara: A list object that defines the hyperparameters when the prior is set to either "MN", "SSVS" or "NG".
  • "miscellaneous:"
    • a_1 is the prior hyperparameter for the inverted gamma prior (shape) (set a_1 = b_1 to a small value for the standard uninformative prior). Default is set to a_1=0.01.
    • b_1 is the prior hyperparameter for the inverted gamma prior (rate). Default is set to b_1=0.01.
    • prmean is the prior mean on the first own lag of the autoregressive coefficient; the standard value is prmean=1 for non-stationary data. The prior mean for the remaining autoregressive coefficients is automatically set to 0.
    • bmu If SV=TRUE, this is the prior hyperparameter for the mean of the log-volatilities. Default is bmu=0.
    • Bmu If SV=TRUE, this is the prior hyperparameter for the variance of the mean of the log-volatilities. Default is Bmu=0.
    • a0 If SV=TRUE, this is the hyperparameter for the Beta prior on the persistence parameter of the log-volatilities. Default is a0=25.
    • b0 If SV=TRUE, this is the hyperparameter for the Beta prior on the persistence parameter of the log-volatilities. Default is b0=1.5.
    • Bsigma If SV=TRUE, this is the hyperparameter for the Gamma prior on the variance of the log-volatilities. Default is Bsigma=1.
  • "MN"
    • shrink1 Starting value of shrink1. Default set to 0.1.
    • shrink2 Starting value of shrink2. Default set to 0.2.
    • shrink3 Hyperparameter of shrink3.
Default set to 100.
    • shrink4 Starting value of shrink4. Default set to 0.1.
  • "SSVS"
    • tau0 is the prior variance associated with the normal prior on the regression coefficients if a variable is NOT included (spike; tau0 should be close to zero).
    • tau1 is the prior variance associated with the normal prior on the regression coefficients if a variable is included (slab; tau1 should be large).
    • kappa0 is the prior variance associated with the normal prior on the covariances if a covariance equals zero (spike; kappa0 should be close to zero).
    • kappa1 is the prior variance associated with the normal prior on the covariances if a covariance is unequal to zero (slab; kappa1 should be large).
    • p_i is the prior inclusion probability for each regression coefficient (default is 0.5).
    • q_ij is the prior inclusion probability for each covariance (default is 0.5).
  • "NG"
    • e_lambda Prior hyperparameter for the Gamma prior on the lag-specific shrinkage components; the standard value is e_lambda=1.5.
    • d_lambda Prior hyperparameter for the Gamma prior on the lag-specific shrinkage components; the standard value is d_lambda=1.
    • tau_theta Parameter of the Normal-Gamma prior that governs the heaviness of the tails of the prior distribution. A value of tau_theta=1 would lead to the Bayesian LASSO. The default value differs per entity and is set to tau_theta=1/log(M), where M is the number of endogenous variables per entity.
    • sample_tau If set to TRUE, tau_theta is sampled.
• eigen Set to TRUE if you want to compute the largest eigenvalue of the companion matrix for each posterior draw. If the modulus of the eigenvalue is significantly larger than unity, the model is unstable; unstable draws with an eigenvalue exceeding one are then excluded. If eigen is set to a numeric value, then this value corresponds to the maximum allowed eigenvalue. The default is set to $$1.05$$ (which excludes all posterior draws for which the eigenvalue of the companion matrix was larger than $$1.05$$ in modulus).
• Ex For including truly exogenous variables in the model. Either a
  • list object of maximum length N that contains the data. Each element of the list refers to a country/entity and has to match the country/entity names in Data. If no truly exogenous variables are added to the respective country/entity model, omit the entry. The T rows (i.e., number of time observations), however, need to be the same for each country. Country and variable names are not allowed to contain a . [dot] since this is our naming convention.
  • matrix object of dimension T times number of truly exogenous variables. The column names should consist of two parts, separated by a . [dot]. The first part should denote the country / entity name and the second part the name of the variable. Country and variable names are not allowed to contain a . [dot].
• trend If set to TRUE a deterministic trend is added to the country models.
• expert Expert settings, must be provided as list. Default is set to NULL.
  • variable.list In case W is a list of weight matrices, specify here which set of variables should be weighted by which weighting matrix. Default is set to NULL.
  • OE.weights: Default value is NULL. Can be used to provide information on how to handle additional country models (other entities). Additional country models can be used to endogenously determine variables that are (weakly) exogenous for the majority of the other country models. As examples, one could think of an additional oil price model (Mohaddes and Raissi 2019) or a model for the joint euro area monetary policy (Georgiadis 2015; Feldkircher, Gruber, and Huber 2020). The data for these additional country models has to be contained in Data. The number of additional country models is unlimited. Each list entry of OE.weights has to be named similar to the name of the additional country model contained in Data.
Each slot of OE.weights has to contain the following information: + weights a vector of weights with names relating to the countries for which data should be aggregated. Can also relate to a subset of countries contained in the data. + variables a vector of variable names that should be included in the additional country model. Variables that are not contained in the data slot of the extra country model are assumed to be weakly exogenous for the additional country model (aggregated with weights). + exo a vector of variable names that should be fed into the other countries as (weakly) exogenous variables. + Wex.restr A character vector that contains variables that should only be specified as weakly exogenous if not contained as endogenous variable in a particular country. An example that has often been used in the literature is to place these restrictions on nominal exchange rates. Default is NULL, in which case all weakly exogenous variables are treated symmetrically. See function getweights for more details. + save.country.store If set to TRUE the function also returns the container of all draws of the individual country models. Significantly raises the object size of the output and the default is thus set to FALSE. + save.shrink.store If set to TRUE the function also inspects posterior output of shrinkage coefficients. Default set to FALSE. + save.vola.store If set to TRUE the function also inspects posterior output of coefficients associated with the volatility process. Default set to FALSE. + use_R Boolean whether estimation should fall back on the R version; otherwise the Rcpp version is used (default). + applyfun Allows for a user-specific apply function, which has to have the same interface as lapply. If cores=NULL then lapply is used; if set to a numeric, parallel::parLapply() is used on Windows platforms and parallel::mclapply() on non-Windows platforms. + cores Specifies the number of cores which should be used. Default is set to NULL, in which case lapply is used. 
• verbose If set to FALSE it suppresses printing messages to the console. Below, find some example code for all three priors. # load dataset data(eerData) # Minnesota prior and two different weight matrices and no SV # weights for first variable set tradeW.0012, for second finW0711 variable.list <- list() variable.list$real <- c("y","Dp","tb") variable.list$fin <- c("stir","ltir","rer") Hyperparm.MN <- list(a_i = 0.01, # prior for the shape parameter of the IG b_i = 0.01 # prior for the scale parameter of the IG ) model.MN<-bgvar(Data=eerData, draws=200, burnin=200, plag=1, hyperpara=Hyperparm.MN, prior="MN", thin=1, eigen=TRUE, SV=TRUE, expert=list(variable.list=variable.list)) # SSVS prior Hyperparm.ssvs <- list(tau0 = 0.1, # coefficients: prior variance for the spike # (tau0 << tau1) tau1 = 3, # coefficients: prior variance for the slab # (tau0 << tau1) kappa0 = 0.1, # covariances: prior variance for the spike # (kappa0 << kappa1) kappa1 = 7, # covariances: prior variance for the slab # (kappa0 << kappa1) a_1 = 0.01, # prior for the shape parameter of the IG b_1 = 0.01, # prior for the scale parameter of the IG p_i = 0.5, # prior inclusion probability of coefficients q_ij = 0.5 # prior inclusion probability of covariances ) model.ssvs<-bgvar(Data=eerData, draws=100, burnin=100, plag=1, hyperpara=Hyperparm.ssvs, prior="SSVS", thin=1, eigen=TRUE) # Normal Gamma prior data(monthlyData) monthlyData$OC<-NULL Hyperparm.ng<-list(d_lambda = 1.5, # coefficients: prior hyperparameter for the NG-prior e_lambda = 1, # coefficients: prior hyperparameter for the NG-prior prmean = 0, # prior mean for the first lag of the AR coefficients a_1 = 0.01, # prior for the shape parameter of the IG b_1 = 0.01, # prior for the scale parameter of the IG tau_theta = .6, # (hyper-)parameter for the NG sample_tau = FALSE # estimate a? 
) model.ng<-bgvar(Data=monthlyData, W=W, draws=200, burnin=100, plag=1, hyperpara=Hyperparm.ng, prior="NG", thin=2, eigen=TRUE, SV=TRUE, expert=list(OE.weights=list(EB=EA.weights))) ## Function Arguments irf() • x: An object fitted by function bgvar. • n.ahead: Forecasting horizon. • shockinfo Dataframe with additional information about the nature of shocks. Depending on the ident argument, the dataframe has to be specified differently. In order to get a dummy version for each identification scheme use get_shockinfo. • quantiles Numeric vector with posterior quantiles. Default is set to compute the median along with 68%/80%/90% confidence intervals. • expert Expert settings, must be provided as list. Default is set to NULL. • MaxTries Numeric specifying the maximal number of tries for finding a rotation matrix with sign-restrictions. Attention: setting this number very large may result in very long computation times. • save.store If set to TRUE the full posterior of both impulse responses and rotation matrices is returned. Default is FALSE in order to save storage. • use_R Boolean whether IRF computation should fall back on the R version; otherwise the Rcpp version is used (default). • applyfun In case use_R=TRUE, this allows for a user-specific apply function, which has to have the same interface as lapply. If cores=NULL then lapply is used; if set to a numeric, parallel::parLapply() is used on Windows platforms and parallel::mclapply() on non-Windows platforms. • cores Numeric specifying the number of cores which should be used; "all" and "half" are also possible. By default only one core is used. • verbose If set to FALSE it suppresses printing messages to the console. Below, find some further examples. 
# First example, a US monetary policy shock, quarterly data library(BGVAR) data(eerData) model.eer<-bgvar(Data=eerData,W=W.trade0012,draws=500,burnin=500,plag=1,prior="SSVS",thin=10,eigen=TRUE,trend=TRUE) # generalized impulse responses shockinfo<-get_shockinfo("girf") shockinfo$shock<-"US.stir"; shockinfo$scale<--100 irf.girf.us.mp<-irf(model.eer, n.ahead=24, shockinfo=shockinfo) # cholesky identification shockinfo<-get_shockinfo("chol") shockinfo$shock<-"US.stir"; shockinfo$scale<--100 # sign restrictions shockinfo <- get_shockinfo("sign") sign=c("<","<"), horizon=c(1,1), scale=1, prob=1) # sign restrictions with relaxed cross-country restrictions shockinfo <- get_shockinfo("sign") # restriction for other countries holds to 75% sign=c("<","<","<"), horizon=1, scale=1, prob=c(1,0.75,0.75)) sign=c("<","<","<"), horizon=1, scale=1, prob=c(1,0.75,0.75)) # Example with zero restriction (Arias et al., 2018) and # rationality conditions (D'Amico and King, 2017). data("eerDataspf") plag=1, prior="SSVS", eigen=TRUE) shockinfo <- get_shockinfo("sign") restriction=c("US.Dp_t+4","US.stir","US.y_t+4"), sign=c("<","0","<"), horizon=1, prob=1, scale=1) # rationality condition: US.stir_t+4 on impact is equal to average of # IRF of US.stir between horizon 1 to 4 sign="ratio.avg", horizon=5, prob=1, scale=1) # rationality condition: US.Dp_t+4 on impact is equal to IRF of US.Dp at horizon 4 sign="ratio.H", horizon=5, prob=1, scale=1) # rationality condition: US.y_t+4 on impact is equal to IRF of US.y at horizon 4 sign="ratio.H", horizon=5, prob=1, scale=1) # regulate maximum number of tries with expert settings expert=list(MaxTries=10)) par(oldpar) # References Arias, Jonas E., Juan F. Rubio-Ramirez, and Daniel F. Waggoner. 2018. “Inference Based on Structural Vector Autoregressions Identified with Sign and Zero Restrictions: Theory and Applications.” Econometrica 86 (2): 685–720. https://doi.org/10.3982/ECTA14468. Boeck, Maximilian, Martin Feldkircher, and Pierre Siklos. 2021. 
“International Effects of Forward Guidance.” Oxford Bulletin of Economics and Statistics forthcoming. Burriel, Pablo, and Alessandro Galesi. 2018. “Uncovering the heterogeneous effects of ecb unconventional monetary policies across euro area countries.” European Economic Review 101: 201–29. https://ideas.repec.org/p/bde/wpaper/1631.html. Cashin, Paul, Kamiar Mohaddes, Maziar Raissi, and Mehdi Raissi. 2014. “The differential effects of oil demand and supply shocks on the global economy.” Energy Economics 44 (C): 113–34. https://ideas.repec.org/a/eee/eneeco/v44y2014icp113-134.html. Castelnuovo, Efrem, and Paolo Surico. 2010. “Monetary Policy, Inflation Expectations and the Price Puzzle.” The Economic Journal 120 (549): 1262–83. https://doi.org/10.1111/j.1468-0297.2010.02368.x. Chudik, Alexander, and Michael Fidora. 2011. “Using the global dimension to identify shocks with sign restrictions.” Working Paper Series No. 1318. European Central Bank. Crespo Cuaresma, Jesús, Martin Feldkircher, and Florian Huber. 2016. “Forecasting with Global Vector Autoregressive Models: A Bayesian Approach.” Journal of Applied Econometrics 31 (7): 1371–91. https://doi.org/10.1002/jae.2504. D’Amico, Stefania, and Thomas B. King. 2015. “What Does Anticipated Monetary Policy Do?” Working Paper Series WP-2015-10. Federal Reserve Bank of Chicago. https://ideas.repec.org/p/fip/fedhwp/wp-2015-10.html. Dees, Stephane, Filippo di Mauro, Hashem M. Pesaran, and L. Vanessa Smith. 2007. “Exploring the international linkages of the euro area: a global VAR analysis.” Journal of Applied Econometrics 22 (1). Dovern, Jonas, Martin Feldkircher, and Florian Huber. 2016. “Does Joint Modelling of the World Economy Pay Off? Evaluating Global Forecasts from a Bayesian Gvar.” Journal of Economic Dynamics and Control 70: 86–100. https://doi.org/10.1016/j.jedc.2016.06.006. Eickmeier, Sandra, and Tim Ng. 2015. “How Do Us Credit Supply Shocks Propagate Internationally? 
A Gvar Approach.” European Economic Review 74: 128–45. https://doi.org/10.1016/j.euroecorev.2014.11.011. Feldkircher, Martin, Thomas Gruber, and Florian Huber. 2020. “International effects of a compression of euro area yield curves.” Journal of Banking & Finance 113: 11–14. Feldkircher, Martin, and Florian Huber. 2016. “The international transmission of US shocks – Evidence from Bayesian global vector autoregressions.” European Economic Review 81 (C): 167–88. https://ideas.repec.org/a/eee/eecrev/v81y2016icp167-188.html. Feldkircher, Martin, and Pierre Siklos. 2019. “Global inflation dynamics and inflation expectations.” International Review of Economics & Finance 64: 217–41. Feldkircher, M., F. Huber, and I. Moder. 2015. “Towards a New Normal: How Different Paths of Us Monetary Policy Affect the World Economy.” Economic Notes 44 (3): 409–18. https://doi.org/10.1111/ecno.12041. George, Edward I, Dongchu Sun, and Shawn Ni. 2008. “Bayesian stochastic search for VAR model restrictions.” Journal of Econometrics 142 (1): 553–80. Georgiadis, Georgios. 2015. “Examining Asymmetries in the Transmission of Monetary Policy in the Euro Area: Evidence from a Mixed Cross-Section Global Var Model.” European Economic Review 75 (C): 195–215. http://EconPapers.repec.org/RePEc:eee:eecrev:v:75:y:2015:i:c:p:195-215. Huber, Florian. 2016. “Density Forecasting Using Bayesian Global Vector Autoregressions with Stochastic Volatility.” International Journal of Forecasting 32 (3): 818–37. Huber, Florian, and Martin Feldkircher. 2019. “Adaptive Shrinkage in Bayesian Vector Autoregressive Models.” Journal of Business & Economic Statistics 37 (1): 27–39. https://doi.org/10.1080/07350015.2016.1256217. Kastner, Gregor. 2016. “Dealing with Stochastic Volatility in Time Series Using the R Package stochvol.” Journal of Statistical Software 69 (1): 1–30. https://doi.org/10.18637/jss.v069.i05. Koop, Gary, and Dimitris Korobilis. 2010. 
“Bayesian Multivariate Time Series Methods for Empirical Macroeconomics.” Foundations and Trends(R) in Econometrics 3 (4): 267–358. https://doi.org/10.1561/0800000013. Lanne, Markku, and Henri Nyberg. 2016. “Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models.” Oxford Bulletin of Economics and Statistics 78 (4): 595–603. https://doi.org/10.1111/obes.12125. Litterman, R. B. 1986. “Forecasting with Bayesian vector autoregressions - Five years of experience.” Journal of Business and Economic Statistics 5: 25–38. Mohaddes, Kamiar, and Mehdi Raissi. 2019. “The US oil supply revolution and the global economy.” Empirical Economics 57: 1515–1546. Mohaddes, K., and M. Raissi. 2020. “Compilation, Revision and Updating of the Global Var (Gvar) Database, 1979Q2-2019Q4.” University of Cambridge: Judge Business School (mimeo). https://www.mohaddes.org/gvar. Pesaran, M. Hashem, Til Schuermann, and L. Vanessa Smith. 2009. “Forecasting economic and financial variables with global VARs.” International Journal of Forecasting 25 (4): 642–75. Pesaran, M. Hashem, Til Schuermann, and S. M. Weiner. 2004. “Modeling Regional Interdependencies Using a Global Error-Correcting Macroeconometric Model.” Journal of Business and Economic Statistics, American Statistical Association 22: 129–62. Pesaran, M. Hashem, and Yongcheol Shin. 1998. “Generalized Impulse Response Analysis in Linear Multivariate Models.” Economics Letters 58 (1): 17–29. http://ideas.repec.org/a/eee/ecolet/v58y1998i1p17-29.html. Primiceri, Giorgio E. 2005. “Time Varying Structural Vector Autoregressions and Monetary Policy.” The Review of Economic Studies 72 (3): 821–52. Rubio-Ramirez, Juan F., Daniel F. Waggoner, and Tao Zha. 2010. “Structural Vector Autoregressions: Theory of Identification and Algorithms for Inference.” Review of Economic Studies 77 (2): 665–96. http://ideas.repec.org/a/oup/restud/v77y2010i2p665-696.html. Sims, Christopher A, and Tao Zha. 2006. 
“Were There Regime Switches in Us Monetary Policy?” The American Economic Review 96 (1): 54–81. Smith, L. Vanessa, and Alessandro Galesi. 2014. “GVAR Toolbox.” https://sites.google.com/site/gvarmodelling/gvar-toolbox. Waggoner, Daniel F, and Tao Zha. 1999. “Conditional Forecasts in Dynamic Multivariate Models.” Review of Economics and Statistics 81 (4): 639–51.
# plotting 3D image for the "grazing goat and silo" problem I am looking for the code to generate a 3D graphic with this area rotated around its PQ-axis: The red circle represents a silo centered at point P. A goat is tethered at point Q on the edge of the silo, with a tether length less than half the circumference of the silo, so that the area the goat is able to roam over is smaller than a circle. I want to generate a 3D version of this figure, where the 2D figure above is rotated around the PQ axis. The result should look similar to (but distinctly different from) this. This example is just two simple spheres, not the more complicated figure I'm looking for. • Hi, welcome to MMA.SE. Take your equations and read documentation of ParametricPlot3D or SphericalPlot3D. You are going to be interested in Opacity too. – Kuba Apr 11 '16 at 7:35 • Obviously I read this and have no clue, that's why i am asking here. Apr 11 '16 at 7:39 • This is "obvious" only for you because we only see two links in your question. – Kuba Apr 11 '16 at 7:48 • – user9660 Apr 11 '16 at 8:02 • I wonder why people are setting this question "on hold" today as "unclear", after it was perfectly answered by Jason yesterday. Apr 12 '16 at 14:08 What you need to do here is to generate a ParametricPlot to give the 2D goat/silo problem, and then we can rotate it with RevolutionPlot3D. From reading the page on MathWorld, we can see that we need to make a circle involute to describe the portion of the area where the goat's circle is limited by the presence of the silo. In Cartesian coordinates, this is written $$x = r \left( \cos \theta + \theta\sin \theta \right)$$ $$y = r \left( \sin \theta - \theta\cos \theta \right)$$ where $r$ is the radius of the silo and $\theta$ is the angle of rotation. At the point where $\theta=l/r$, where $l$ is the length of the rope, the goat can then move in an area described by a simple half circle. 
So we will run a ParametricPlot as a function of t where t can go from 0 to l/r, and we'll go ahead and set r=1 for simplicity. First we need the silo, With[{lr = π}, ParametricPlot[{ {Cos[2 π t/lr], Sin[2 π t/lr]} } , {t, 0, lr}] ] Next we bring in the circle involute, With[{lr = 2}, ParametricPlot[{ {Cos[2 π t/lr], Sin[2 π t/lr]}, {Cos[t] + t Sin[t], Sin[t] - t Cos[t]} } , {t, 0, lr}] ] Now for the portion of the goat's area where it is unimpeded by the silo. This is a half circle with radius l centered at the point {Cos[l], Sin[l]} with an initial angle of l, With[{lr = 2}, ParametricPlot[{ {Cos[2 π t/lr], Sin[2 π t/lr]}, {Cos[t] + t Sin[t], Sin[t] - t Cos[t]}, {Cos[lr] + lr Sin[lr + (π t)/lr], -lr Cos[lr + (π t)/lr] + Sin[lr]} } , {t, 0, lr}] ] Finally, to get the last part of the circle, I'll use ReflectionTransform With[{lr = 2}, ParametricPlot[{ {Cos[2 π t/lr], Sin[2 π t/lr]}, {Cos[t] + t Sin[t], Sin[t] - t Cos[t]}, {Cos[lr] + lr Sin[lr + (π t)/lr], -lr Cos[lr + (π t)/lr] + Sin[lr]}, ReflectionTransform[{-Sin[lr], Cos[lr]}][{Cos[t] + t Sin[t], Sin[t] - t Cos[t]}] } , {t, 0, lr}] ] The next step is to rotate everything so that the tethering point stays fixed, and add a Point to show that location, then use Manipulate to show the effects of the tether size, Manipulate[ Module[{silo, circinv1, circinv2, tether, t}, silo = {Cos[2 π t/l], Sin[2 π t/l]}; circinv1 = {Cos[t] + t Sin[t], Sin[t] - t Cos[t]}; circinv2 = ReflectionMatrix[{-Sin[l], Cos[l]}].circinv1; tether = {-1 - l Sin[(π t)/l], l Cos[(π t)/l]}; {circinv1, circinv2} = RotationTransform[π - l] /@ {circinv1, circinv2}; ParametricPlot[{ silo, tether, circinv1, circinv2 }, {t, 0, l}, PlotStyle -> {Red, Blue, Blue, Blue}, Epilog -> {PointSize[Large], Point[{{-1, 0}}]}, PlotRange -> {{-4.5, 2}, {-3.5, 3.5}}]] , {{l, .1}, .1, π, .01}] To turn this into a 3D plot, we will use RevolutionPlot3D With[{lr = .5 π}, RevolutionPlot3D[{ {Cos[2 π t/lr], Sin[2 π t/lr], 0}, {Cos[t] + t Sin[t], Sin[t] - t Cos[t], 0}, 
{Cos[lr] + lr Sin[lr + (π t)/(2 lr)], -lr Cos[lr + (π t)/(2 lr)] + Sin[lr], 0} } , {t, 0, lr}, {θ, 0, 2 π}, RevolutionAxis -> {Cos[lr], Sin[lr], 0}, PlotRange -> All, Boxed -> False, Axes -> False] ] Here is an animation for the 3D graphic, Manipulate[ silo = {Cos[2 π t/l], Sin[2 π t/l]}; circinv1 = RotationMatrix[π - l].{Cos[t] + t Sin[t], Sin[t] - t Cos[t]}; tether = {-1 - l Sin[(π t)/(2 l)], l Cos[(π t)/(2 l)]}; {silo, tether, circinv1} = Join[#, {0}] & /@ {silo, tether, circinv1}; RevolutionPlot3D[Evaluate@{ silo, tether, circinv1 }, {t, 0, l}, {θ, 0, 2 π}, RevolutionAxis -> {1, 0, 0}, PlotStyle -> {{Red}, {Opacity[0.5], Blue}, {Opacity[0.5], Blue}}, PlotRange -> {{-4.5, 2}, {-3.5, 3.5}, {-3.5, 3.5}}, Boxed -> False, Axes -> False] , {{l, .1}, .1, π, .05}] ## Area (and volume) available for the goat (space goat?) to graze in Most instances I find online talking about this problem use it as an example for integral calculus - finding the area available for grazing. You can find the explicit formula for the 2D problem by rewriting the parametric equations to explicitly depend on l and a (the tether length and the silo radius): {circinv, tether, silo} = {{a (-Cos[l/a - t] + t Sin[l/a - t]), a (t Cos[l/a - t] + Sin[l/a - t])}, {-a - l Sin[(a π t)/(2 l)], l Cos[(a π t)/(2 l)]}, {-a Cos[l/a - t], a Sin[l/a - t]}}; ParametricPlot[Evaluate[{tether, circinv, silo} /. {a -> 4, l -> 2}], {t, 1/2, 0}, AspectRatio -> .6] For the area, we find the area of the three regions, add together the circle involute and tether and subtract off the silo, then double it. Integrate[#2 D[#1, t], {t, l/a, 0}] & @@@ {tether, circinv, silo} Expand[2 (#1 + #2 - #3) & @@ %] (* {(l^2 π)/4, (a l)/2 + l^3/(6 a) - 1/4 a^2 Sin[(2 l)/a], -(1/4) a (-2 l + a Sin[(2 l)/a])} *) (* l^3/(3 a) + (l^2 π)/2 *) Which matches the formula from MathWorld. 
The volume in the 3D case is likewise easily found: Integrate[π #2^2 D[#1, t], {t, l/a, 0}] & @@@ {tether, circinv, silo} Expand[2 (#1 + #2 - #3) & @@ %] // FullSimplify (* {(2 l^3 π)/3, 1/12 a π (-64 a^2 + 36 l^2 + 63 a^2 Cos[l/a] + a^2 Cos[(3 l)/a]), 4/3 a^3 π (2 + Cos[l/a]) Sin[l/(2 a)]^4} *) (* 2/3 π (-18 a^3 + 9 a l^2 + 2 l^3 + 18 a^3 Cos[l/a]) *) $$\frac{2}{3} \pi \left(18 a^3 \cos \left(\frac{l}{a}\right)-18 a^3+9 a l^2+2 l^3\right)$$ But I haven't seen this anywhere else to check my work :-) • I wonder what happens to your 2D formula if l > a pi Apr 14 '16 at 22:51
# Given $(x-1)^3+3(x-1)^2-2(x-1)-4=a(x+1)^3+b(x+1)^2+c(x+1)+d$, find $(a,b,c,d)$ Given $$(x-1)^3+3(x-1)^2-2(x-1)-4=a(x+1)^3+b(x+1)^2+c(x+1)+d$$, find $$(a,b,c,d)$$ my attempt: $$(x+1)=(x-1)\frac{(x+1)}{(x-1)}$$ but this seems useless? I want to use synthetic division but I don't know how • You could simply substitute $x=y-1$ and expand the LHS. – Martin R Jul 24 at 11:46 • The LHS simplifies to $x^3 - 5x$. – Viktor Glombik Jul 24 at 11:48 It's $$(x+1-2)^3+3(x+1-2)^2-2(x+1-2)-4=(x+1)^3-3(x+1)^2-2(x+1)+4.$$ Can you end it now? If $$F(t)=c_0+c_1t+c_2t^2+\cdots$$ then $$c_n=\frac1{n!}\left.\frac{d^nF}{dt^n}\right|_{t=0}.$$ In case you don't know how to solve this most elegantly, there is still the straightforward possibility of expanding both sides. The LHS is given by $$x^3-5x,$$ whereas the RHS is $$ax^3 + (3a + b)x^2 + (3a + 2b + c)x + a + b + c + d.$$
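The substitution approach above is easy to sanity-check numerically. A minimal sketch using SymPy (not part of the original answers): rewriting the LHS in powers of $t = x+1$ amounts to substituting $x = t-1$ and collecting coefficients.

```python
import sympy as sp

x, t = sp.symbols('x t')
lhs = (x - 1)**3 + 3*(x - 1)**2 - 2*(x - 1) - 4

# Rewrite the LHS in powers of t = x + 1 by substituting x = t - 1
coeffs = sp.Poly(sp.expand(lhs.subs(x, t - 1)), t).all_coeffs()
print(coeffs)  # [1, -3, -2, 4]  ->  (a, b, c, d) = (1, -3, -2, 4)
```

This matches the expansion $(x+1)^3-3(x+1)^2-2(x+1)+4$ given in the first answer.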
# Conjecture re 32n^2 + 3n 1. May 8, 2010 ### ramsey2879 Conjecture: If P is odd, then there is one and only one number n in the set {1,2,3,...(P-1)} which satisfies the equation (32*n^2 + 3n) = 0 mod P. Can anyone help me with a proof of this? I have gone further and determined 4 equations for n based upon the value of P mod 8, but I will leave that for later. I would like to know if the conjecture is trivial first. 2. May 9, 2010 ### robert Ihnot This is just a linear equation since we can factor as (n)(32n+3), and n==0 is unacceptable. So we are left with $$32n+3\equiv 0 \bmod p$$ 3. May 9, 2010 ### Martin Rattigan Question doesn't say P is prime, just odd, so shouldn't this be fleshed out a bit? 4. May 9, 2010 ### CRGreathouse I think that's sufficient, since gcd(P, 32) = 1. 5. May 9, 2010 ### Martin Rattigan I wasn't talking about getting a solution of 32n+3=0(P). I thought the reasoning behind $n(32n+3)=0(P)\Rightarrow n=0(P)\vee 32n+3=0(P)$ was a bit sparse. This would be obvious if P were prime, but what's wrong with the following? 9(32.9+3)=0(27) 18(32.18+3)=0(27) Here 9 and 18 are different residues modulo 27, neither 0(27). Also 32n+3=0(27) is false for both n=9 and n=18. Last edited: May 9, 2010 6. May 10, 2010 ### robert Ihnot I was just thinking of primes, but, assumedly, your reasoning is correct. 7. May 11, 2010 ### RedGolpe I am not sure if I got the question right but with P=9, 32n^2+3n=0 (mod 9) for n=3 and n=6. 8. May 11, 2010 ### Martin Rattigan P=55; n=11,25 9. May 12, 2010 ### ramsey2879 Thanks everyone for the counter examples. The ones with 3|n should have been obvious. Last edited: May 12, 2010
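The counterexamples given in the thread are easy to reproduce with a brute-force search; a quick sketch (not from the thread itself):

```python
def roots(P):
    """All n in {1, ..., P-1} with 32*n^2 + 3*n == 0 (mod P)."""
    return [n for n in range(1, P) if (32 * n * n + 3 * n) % P == 0]

# For a prime P the solution is unique (since P | n(32n+3) forces
# 32n + 3 == 0 mod P when 0 < n < P), but composite odd P can have more.
print(roots(7))   # [1]
print(roots(9))   # [3, 6]
print(roots(55))  # includes 11 and 25, as noted in the thread
```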
## Real Analysis Exchange ### Weakly Symmetric Functions and Weakly Symmetrically Continuous Functions Kandasamy Muthuvel #### Abstract We prove that there exists a nowhere weakly symmetric function $f: \mathbb{R} \rightarrow \mathbb{R}$ that is everywhere weakly symmetrically continuous and everywhere weakly continuous. Existence of a nowhere weakly symmetrically continuous function $f: \mathbb{R} \rightarrow \mathbb{R}$ that is everywhere weakly symmetric remains open. #### Article information Source Real Anal. Exchange, Volume 40, Number 2 (2015), 455-458. Dates First available in Project Euclid: 4 April 2017
# Online IDE ? – do it yourself Jupyter Notebook is one of the most useful tools for data exploration, machine learning and fast prototyping. There are many plugins and projects which make it even more powerful: * jupyterlab-git * nbdev * jupyter debugger But sometimes you simply need an IDE … One of my favorite text editors is vim. It is lightweight, fast and with appropriate plugins it can be used as an IDE. Using a Dockerfile you can build a jupyter environment with fully equipped vim: FROM continuumio/miniconda3 RUN apt update && apt install curl git cmake ack g++ python3-dev vim-youcompleteme tmux -yq RUN sh -c "$(curl -fsSL https://raw.githubusercontent.com/qooba/vim-python-ide/master/setup.sh)" RUN conda install xeus-python jupyterlab jupyterlab-git -c conda-forge RUN jupyter labextension install @jupyterlab/debugger @jupyterlab/git RUN pip install nbdev RUN echo "alias ls='ls --color=auto'" >> /root/.bashrc CMD /bin/bash Now you can run the image: docker run --name jupyter -d --rm -p 8888:8888 -v $(pwd)/jupyter:/root/.jupyter -v $(pwd)/notebooks:/opt/notebooks qooba/miniconda3 /bin/bash -c "jupyter lab --notebook-dir=/opt/notebooks --ip='0.0.0.0' --port=8888 --no-browser --allow-root --NotebookApp.password='' --NotebookApp.token=''" In the jupyter lab start a terminal session, run bash (it works better in bash) and then vim. The online IDE is ready: # References [1] Top image Boskampi from Pixabay # FastAI with TensorRT on Jetson Nano IoT and AI are the hottest topics nowadays, and they meet on the Jetson Nano device. In this article I'd like to show how to use the FastAI library, which is built on top of PyTorch, on the Jetson Nano. Additionally I will show how to optimize the FastAI model for usage with TensorRT. You can find the code on https://github.com/qooba/fastai-tensorrt-jetson.git. # 1. Training Although the Jetson Nano is equipped with a GPU, it should be used as an inference device rather than for training purposes. 
Thus I will use another PC with the GTX 1050 Ti for the training. Docker gives flexibility when you want to try different libraries, thus I will use an image which contains the complete environment. Training environment Dockerfile: FROM nvcr.io/nvidia/tensorrt:20.01-py3 WORKDIR / RUN apt-get update && apt-get -yq install python3-pil RUN pip3 install jupyterlab torch torchvision RUN pip3 install fastai RUN DEBIAN_FRONTEND=noninteractive && apt update && apt install curl git cmake ack g++ tmux -yq RUN pip3 install ipywidgets && jupyter nbextension enable --py widgetsnbextension CMD ["sh","-c", "jupyter lab --notebook-dir=/opt/notebooks --ip='0.0.0.0' --port=8888 --no-browser --allow-root --NotebookApp.password='' --NotebookApp.token=''"] To use the GPU, additional NVIDIA drivers (included in the NVIDIA CUDA Toolkit) are needed. If you don't want to build your image simply run: docker run --gpus all --name jupyter -d --rm -p 8888:8888 -v $(pwd)/docker/gpu/notebooks:/opt/notebooks qooba/fastai:1.0.60-gpu Now you can use the pets.ipynb notebook (the code is taken from lesson 1 of the FastAI course) to train and export a pets classification model. from fastai.vision import * from fastai.metrics import error_rate path = untar_data(URLs.PETS) path_anno = path/'annotations' path_img = path/'images' fnames = get_image_files(path_img) # prepare data np.random.seed(2) pat = r'/([^/]+)_\d+.jpg$' bs = 16 data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs).normalize(imagenet_stats) # prepare model learner learn = cnn_learner(data, models.resnet34, metrics=error_rate) # train learn.fit_one_cycle(4) # export learn.export('/opt/notebooks/export.pkl') Finally you get the pickled pets model (export.pkl). # 2. Inference (Jetson Nano) The Jetson Nano device with the Jetson Nano Developer Kit already comes with Docker, thus I will use it to set up the inference environment. 
I have used the base image nvcr.io/nvidia/l4t-base:r32.2.1 and installed PyTorch and torchvision. If you have the JetPack 4.4 Developer Preview you can skip these steps and start with the base image nvcr.io/nvidia/l4t-pytorch:r32.4.2-pth1.5-py3. The FastAI installation on Jetson is more problematic because of the blis package. Finally I have found the solution here. Additionally I have installed the torch2trt package which converts PyTorch models to TensorRT. Finally I have used the tensorrt package from the JetPack, which can be found in /usr/lib/python3.6/dist-packages/tensorrt. The final Dockerfile is: FROM nvcr.io/nvidia/l4t-base:r32.2.1 WORKDIR / # install pytorch RUN apt update && apt install -y --fix-missing make g++ python3-pip libopenblas-base RUN wget https://nvidia.box.com/shared/static/ncgzus5o23uck9i5oth2n8n06k340l6k.whl -O torch-1.4.0-cp36-cp36m-linux_aarch64.whl RUN pip3 install Cython RUN pip3 install numpy torch-1.4.0-cp36-cp36m-linux_aarch64.whl # install torchvision RUN apt update && apt install libjpeg-dev zlib1g-dev git libopenmpi-dev openmpi-bin -yq RUN git clone --branch v0.5.0 https://github.com/pytorch/vision torchvision RUN cd torchvision && python3 setup.py install # install fastai RUN pip3 install jupyterlab ENV TZ=Europe/Warsaw RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone && apt update && apt -yq install npm nodejs python3-pil python3-opencv RUN apt update && apt -yq install python3-matplotlib RUN git clone https://github.com/NVIDIA-AI-IOT/torch2trt.git /torch2trt && mv /torch2trt/torch2trt /usr/local/lib/python3.6/dist-packages && rm -r /torch2trt COPY tensorrt /usr/lib/python3.6/dist-packages/tensorrt RUN pip3 install --no-deps fastai RUN git clone https://github.com/fastai/fastai /fastai RUN apt update && apt install libblas3 liblapack3 liblapack-dev libblas-dev gfortran -yq RUN curl -LO https://github.com/explosion/cython-blis/files/3566013/blis-0.4.0-cp36-cp36m-linux_aarch64.whl.zip && unzip 
blis-0.4.0-cp36-cp36m-linux_aarch64.whl.zip && rm blis-0.4.0-cp36-cp36m-linux_aarch64.whl.zip COPY blis-0.4.0-cp36-cp36m-linux_aarch64.whl . RUN pip3 install scipy pandas blis-0.4.0-cp36-cp36m-linux_aarch64.whl spacy fastai scikit-learn CMD ["sh","-c", "jupyter lab --notebook-dir=/opt/notebooks --ip='0.0.0.0' --port=8888 --no-browser --allow-root --NotebookApp.password='' --NotebookApp.token=''"] As before you can skip the docker image build and use the ready image: docker run --runtime nvidia --network app_default --name jupyter -d --rm -p 8888:8888 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix -v $(pwd)/docker/jetson/notebooks:/opt/notebooks qooba/fastai:1.0.60-jetson Now we can open a jupyter notebook on the Jetson and move the pickled model file export.pkl from the PC. The notebook jetson_pets.ipynb shows how to load the model. import torch from torch2trt import torch2trt from fastai.vision import * from fastai.metrics import error_rate learn = load_learner('/opt/notebooks/') learn.model.eval() model=learn.model if torch.cuda.is_available(): input_batch = input_batch.to('cuda') model.to('cuda') Additionally we can optimize the model using the torch2trt package: x = torch.ones((1, 3, 224, 224)).cuda() model_trt = torch2trt(learn.model, [x]) Let's prepare example input data: import urllib url, filename = ("https://github.com/pytorch/hub/raw/master/dog.jpg", "dog.jpg") try: urllib.URLopener().retrieve(url, filename) except: urllib.request.urlretrieve(url, filename) from PIL import Image from torchvision import transforms input_image = Image.open(filename) preprocess = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) input_tensor = preprocess(input_image) input_batch = input_tensor.unsqueeze(0) Finally we can run predictions for the PyTorch and TensorRT models: x=input_batch y = model(x) y_trt = model_trt(x) and compare PyTorch and TensorRT performance: def 
prediction_time(model, x): import time times = [] for i in range(20): start_time = time.time() y = model(x) delta = (time.time() - start_time) times.append(delta) mean_delta = np.array(times).mean() fps = 1/mean_delta print('average(sec):{},fps:{}'.format(mean_delta,fps)) prediction_time(model,x) prediction_time(model_trt,x) where for: * PyTorch – average(sec):0.0446, fps:22.401 * TensorRT – average(sec):0.0094, fps:106.780 The TensorRT model is almost 5 times faster, thus it is worth using torch2trt. # References [1] Top image DrZoltan from Pixabay # Azuronet – Warsaw .NET & Azure Meetup #2 With big pleasure I would like to invite you to join Azuronet – .NET & Azure Meetup #2 in Warsaw, where I will talk (in Polish) about the Milla project and give you some insights into the world of chatbots and intelligent assistants. # Live and let her speak – congratulations for the Milla chatbot I am pleased to hear that the first Polish banking chatbot with which you can make a transfer was awarded in a competition organized by Gazeta Bankowa. With Milla you can talk in the Bank Millennium mobile application. Currently, Milla can speak (text to speech), listen (automatic speech recognition) and understand what you write to her (intent detection with slot filling). This is not a sponsored post 🙂 but I've been developing Milla for the last few months and I'm really happy that I had the opportunity to do this. Have a nice talk with Milla. # Quantum teleportation do it yourself with Q# Quantum computing nowadays is one of the hottest topics in the computer science world. Recently IBM unveiled the IBM Q System One: a 20-qubit quantum computer which is touted as "the world's first fully integrated universal quantum computing system designed for scientific and commercial use". In this article I'd like to show the quantum teleportation phenomenon. I will use the Q# language designed by Microsoft to simplify creating quantum algorithms. 
In this example I have used a quantum simulator which I have wrapped with a REST API and put into a Docker image. Quantum teleportation allows moving a quantum state from one location to another. Shared quantum entanglement between two particles in the sending and receiving locations is used to do this without having to move physical particles along with it.

# 1. Theory

Let's assume that we want to send a message, a specific quantum state described using Dirac notation:

$$|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$$

Additionally we have two entangled qubits, the first in Laboratory 1 and the second in Laboratory 2:

$$|\phi^+\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$$

thus we start with the input state:

$$|\psi\rangle|\phi^+\rangle=(\alpha|0\rangle+\beta|1\rangle)(\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle))$$

$$|\psi\rangle|\phi^+\rangle=\frac{\alpha}{\sqrt{2}}|000\rangle + \frac{\alpha}{\sqrt{2}}|011\rangle + \frac{\beta}{\sqrt{2}}|100\rangle + \frac{\beta}{\sqrt{2}}|111\rangle$$

To send the message we start with two operations: applying the CNOT gate and then the Hadamard gate. The CNOT gate flips the second qubit only if the first qubit is 1.
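The algebra worked through in the following steps can also be checked numerically. Below is a minimal numpy sketch of the whole protocol (entangle, CNOT, Hadamard, measure, correct); the message amplitudes are illustrative, and any normalised pair works:

```python
import numpy as np

# Qubit order: (message, Laboratory 1, Laboratory 2).
alpha, beta = 0.6, 0.8j                 # arbitrary normalised message amplitudes
psi = np.array([alpha, beta])           # the message state |psi>

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)        # NOT gate
Z = np.diag([1.0, -1.0]).astype(complex)             # Z gate
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def lift(gate, q):
    # Lift a single-qubit gate onto qubit q of the 3-qubit space.
    mats = [I2, I2, I2]
    mats[q] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def cnot(control, target):
    # CNOT on the 3-qubit space, built as a permutation matrix.
    M = np.zeros((8, 8), dtype=complex)
    for i in range(8):
        bits = [(i >> (2 - q)) & 1 for q in range(3)]
        if bits[control]:
            bits[target] ^= 1
        M[bits[0] * 4 + bits[1] * 2 + bits[2], i] = 1
    return M

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
state = np.kron(psi, phi_plus)          # |psi> ⊗ |phi+>
state = cnot(0, 1) @ state              # CNOT: message controls Laboratory 1 qubit
state = lift(H, 0) @ state              # Hadamard on the message qubit

recovered = []
for m0 in (0, 1):                       # all four measurement outcomes
    for m1 in (0, 1):
        # Amplitudes of Laboratory 2's qubit given outcome |m0 m1>.
        there = state[[m0 * 4 + m1 * 2, m0 * 4 + m1 * 2 + 1]]
        there = there / np.linalg.norm(there)
        if m1:
            there = X @ there           # NOT correction
        if m0:
            there = Z @ there           # Z correction
        recovered.append(there)
```

In every measurement branch the corrected Laboratory 2 qubit ends up equal to the original message state, which is exactly the conclusion of the derivation below.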
Applying the CNOT gate (with the message qubit as control and the Laboratory 1 qubit as target) transforms the input state into:

$$\frac{\alpha}{\sqrt{2}}|000\rangle + \frac{\alpha}{\sqrt{2}}|011\rangle + \frac{\beta}{\sqrt{2}}|110\rangle + \frac{\beta}{\sqrt{2}}|101\rangle$$

The Hadamard gate changes states as follows:

$$|0\rangle \rightarrow \frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$$

and

$$|1\rangle \rightarrow \frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)$$

Applying the Hadamard gate to the message qubit results in:

$$\frac{\alpha}{\sqrt{2}}(\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle))|00\rangle + \frac{\alpha}{\sqrt{2}}(\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle))|11\rangle + \frac{\beta}{\sqrt{2}}(\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle))|10\rangle + \frac{\beta}{\sqrt{2}}(\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle))|01\rangle$$

and:

$$\frac{1}{2}(\alpha|000\rangle+\alpha|100\rangle+\alpha|011\rangle+\alpha|111\rangle+\beta|010\rangle-\beta|110\rangle+\beta|001\rangle-\beta|101\rangle)$$

which we can write as:

$$\frac{1}{2}(|00\rangle(\alpha|0\rangle+\beta|1\rangle)+|01\rangle(\alpha|1\rangle+\beta|0\rangle)+|10\rangle(\alpha|0\rangle-\beta|1\rangle)+|11\rangle(\alpha|1\rangle-\beta|0\rangle))$$

Then we measure the states of the first two qubits (the message qubit and the Laboratory 1 qubit), which gives one of four results:

• $|00\rangle$, which simplifies the equation to $|00\rangle(\alpha|0\rangle+\beta|1\rangle)$ and indicates that the qubit in Laboratory 2 is in state $\alpha|0\rangle+\beta|1\rangle$
• $|01\rangle$, which simplifies the equation to $|01\rangle(\alpha|1\rangle+\beta|0\rangle)$ and indicates that the qubit in Laboratory 2 is in state $\alpha|1\rangle+\beta|0\rangle$
• $|10\rangle$, which simplifies the equation to $|10\rangle(\alpha|0\rangle-\beta|1\rangle)$ and indicates that the qubit in Laboratory 2 is in state $\alpha|0\rangle-\beta|1\rangle$
• $|11\rangle$, which simplifies the equation to $|11\rangle(\alpha|1\rangle-\beta|0\rangle)$ and indicates that the qubit in Laboratory 2 is in state $\alpha|1\rangle-\beta|0\rangle$

Now we have to send the result over a classical channel
from Laboratory 1 to Laboratory 2. Finally, we know what transformation to apply to the qubit in Laboratory 2 to make its state equal to the message qubit state:

$$|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$$

If the Laboratory 2 qubit is in state:

• $\alpha|0\rangle+\beta|1\rangle$ – we don't need to do anything.
• $\alpha|1\rangle+\beta|0\rangle$ – we need to apply the NOT gate.
• $\alpha|0\rangle-\beta|1\rangle$ – we need to apply the Z gate.
• $\alpha|1\rangle-\beta|0\rangle$ – we need to apply the NOT gate followed by the Z gate.

These operations transform the Laboratory 2 qubit state into the initial message qubit state; thus we have moved the quantum state from Laboratory 1 to Laboratory 2 without moving the particle itself.

# 2. Code

Now it's time to show quantum teleportation using the Q# language. I have used the Microsoft Quantum Development Kit to run the Q# code inside a .NET Core application. Additionally I have added an nginx proxy with an Angular GUI which helps to show the results. Everything was put inside Docker to simplify the setup. Before you start, you will need git, docker and docker-compose installed on your machine (https://docs.docker.com/get-started/).

To run the project we have to clone the repository and run it using docker-compose:

git clone https://github.com/qooba/quantum-teleportation-qsharp.git
cd quantum-teleportation-qsharp
docker-compose -f app/docker-compose.yml up

Now we can open http://localhost:8020/ in the browser. Then we can put a message in Laboratory 1 and click the Teleport button, which triggers the teleportation process sending the message to Laboratory 2. The text is converted into an array of bits, and each bit is sent to Laboratory 2 using quantum teleportation. In the first step we encode the incoming message using the X gate:

if (message) {
    X(msg);
}

Then we prepare the entanglement between the qubits in Laboratory 1 and Laboratory 2.
H(here);
CNOT(here, there);

In the second step we apply the CNOT and Hadamard gates to send the message:

CNOT(msg, here);
H(msg);

Finally we measure the message qubit and the Laboratory 1 qubit:

if (M(msg) == One) {
    Z(there);
}

if (M(here) == One) {
    X(there);
}

If the message qubit is in state $|1\rangle$, we need to apply the Z gate to the Laboratory 2 qubit. If the Laboratory 1 qubit is in state $|1\rangle$, we need to apply the X gate to the Laboratory 2 qubit. This information must be sent over a classical channel to Laboratory 2. Now the Laboratory 2 qubit state is equal to the initial message qubit state, and we can check it:

if (M(there) == One) {
    set measurement = true;
}

This kind of communication is secure, because even if someone intercepts the classically transmitted bits, it is still impossible to decode the message.

# Boosting Elasticsearch with machine learning – Elasticsearch, RankLib, Docker

Elasticsearch is a powerful search engine. Its distributed architecture makes it possible to build scalable full-text search solutions, and it provides a comprehensive query language. Despite this, sometimes the engine and its search results are not enough to meet users' expectations. In such situations it is possible to boost search quality using machine learning algorithms. In this article I will show how to do this using the RankLib library and the LambdaMART algorithm. Moreover, I have created a ready-to-use platform which:

1. Indexes the data
2. Helps to label the search results in a user-friendly way
3. Trains the model
4. Deploys the model to Elasticsearch
5. Helps to test the model

The whole project is set up on Docker using docker-compose, thus you can set it up very easily. The platform is based on the Elasticsearch Learning to Rank plugin. I have also used the Python example described in this project.
Before you start, you will need docker and docker-compose installed on your machine (https://docs.docker.com/get-started/). To run the project you have to clone it:

git clone https://github.com/qooba/elasticsearch-learning-to-rank.git

Then, to make Elasticsearch work, you need to create a data folder with the appropriate access rights:

cd elasticsearch-learning-to-rank/
mkdir docker/elasticsearch/esdata1
chmod g+rwx docker/elasticsearch/esdata1
chgrp 1000 docker/elasticsearch/esdata1

Finally you can run the project:

docker-compose -f app/docker-compose.yml up

Now you can open http://localhost:8020/.

# 1. Architecture

There are three main components:

A. The nginx reverse proxy with the Angular app
B. The Flask Python app which orchestrates the whole ML solution
C. Elasticsearch with the Learning to Rank plugin installed

### A. Nginx

I have used the nginx reverse proxy to expose the Flask API and the Angular GUI which helps with going through the whole process.

ngnix.config

server {
    listen 80;
    server_name localhost;
    root /www/data;

    location / {
        autoindex on;
    }

    location /images/ {
        autoindex on;
    }

    location /js/ {
        autoindex on;
    }

    location /css/ {
        autoindex on;
    }

    location /training/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_pass http://training-app:5090;
    }
}

### B. Training app

This is the core of the project. It exposes an API for:

• Indexing
• Labeling
• Training
• Testing

It calls Elasticsearch directly to get the data and do the modifications. Because training with RankLib requires Java, the Dockerfile for this part contains a default-jre installation. Additionally it downloads RankLib-2.8.jar and tmdb.json (which is used as the default data source) from http://es-learn-to-rank.labs.o19s.com/.
Dockerfile

FROM python:3

RUN \
    apt update && \
    apt-get -yq install default-jre

# copy the application sources (requirements.txt, app.py, feature files)
COPY . .

RUN pip install -r requirements.txt

EXPOSE 5090

CMD ["python", "-u", "app.py"]

### C. Elasticsearch

As mentioned before, this is an Elasticsearch instance with the Learning to Rank plugin installed.

Dockerfile

FROM docker.elastic.co/elasticsearch/elasticsearch:6.2.4

RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install \
    -b http://es-learn-to-rank.labs.o19s.com/ltr-1.1.0-es6.2.4.zip

All layers are composed with docker-compose.yml:

version: '2.2'
services:
  elasticsearch:
    build: ../docker/elasticsearch
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ../docker/elasticsearch/esdata1:/usr/share/elasticsearch/data
    networks:
      - esnet
  training-app:
    build: ../docker/training-app
    networks:
      - esnet
    depends_on:
      - elasticsearch
    environment:
      - ES_HOST=http://elasticsearch:9200
      - ES_INDEX=tmdb
      - ES_TYPE=movie
    volumes:
  nginx:
    image: "nginx:1.13.5"
    ports:
      - "8020:80"
    volumes:
      - ../docker/frontend-reverse-proxy/conf:/etc/nginx/conf.d
      - ../docker/frontend-reverse-proxy/www/data:/www/data
    depends_on:
      - elasticsearch
      - training-app
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
networks:
  esnet:

# 2. Platform

The platform helps to run and understand the whole process through four steps:

A. Indexing the data
B. Labeling the search results
C. Training the model
D. Testing the trained model

### A. Indexing

The first step is straightforward, thus I will summarize it briefly.
As mentioned before, the default data source is taken from the tmdb.json file, but it can simply be changed using the ES_DATA environment variable in docker-compose.yml:

training-app:
  environment:
    - ES_HOST=http://elasticsearch:9200
    - ES_INDEX=tmdb
    - ES_TYPE=movie
    - ES_FEATURE_SET_NAME=movie_features
    - ES_MODEL_NAME=test_6
    - ES_MODEL_TYPE=6
    - ES_METRIC_TYPE=ERR@10

After clicking Prepare Index, the data is taken from the ES_DATA file and indexed in Elasticsearch.

ES_HOST – the Elasticsearch URL
ES_USER/ES_PASSWORD – Elasticsearch credentials; by default authentication is turned off
ES_INDEX/ES_TYPE – index/type name for the data from the ES_DATA file
ES_FEATURE_SET_NAME – name of the container for the defined features (described later)
ES_MODEL_NAME – name of the trained model kept in Elasticsearch (described later)
ES_MODEL_TYPE – algorithm used to train the model (described later)
ES_METRIC_TYPE – metric type (described later)

We can train and keep multiple models in Elasticsearch, which can be used for A/B testing.

### B. Labeling

Supervised learning algorithms like learning to rank need labeled data, thus in this step I will focus on this area. First of all I have to prepare the file label_list.json, which contains the list of queries to label, e.g.:

[
  "rambo",
  "terminator",
  "babe",
  "die hard",
  "goonies"
]

When the file is ready I can go to the second tab (Step 2 Label). For each query item the platform prepares the result candidates, which have to be graded from 0 to 4. You have to go through the whole list, and at the last step the labeled movies are saved to a file:

# grade (0-4) queryid docId title
#
# Use them to populate your query templates
#
# qid:1: rambo
# qid:2: terminator
# qid:3: babe
# qid:4: die hard
#
# https://sourceforge.net/p/lemur/wiki/RankLib%20File%20Format/
#
4 qid:1 # 7555 Rambo
4 qid:1 # 1370 Rambo III
4 qid:1 # 1368 First Blood
4 qid:1 # 1369 Rambo: First Blood Part II
0 qid:1 # 31362 In the Line of Duty: The F.B.I.
Murders
0 qid:1 # 13258 Son of Rambow
0 qid:1 # 61410 Spud
4 qid:2 # 218 The Terminator
4 qid:2 # 534 Terminator Salvation
4 qid:2 # 87101 Terminator Genisys
4 qid:2 # 61904 Lady Terminator
...

Each labeling cycle is saved to a separate file: timestamp_judgments.txt

### C. Training

Now it is time to use the labeled data to make Elasticsearch much smarter. To do this we have to define the candidate features. The features list is defined in the files 1-4.json in the training-app directory. Each feature file is an Elasticsearch query, e.g. the {{keywords}} template parameter (the searched text) matched against the title property:

{
  "query": {
    "match": {
      "title": "{{keywords}}"
    }
  }
}

In this example I have used 4 features:

– title matches the keywords
– overview matches the keywords
– the keywords are a prefix of title
– the keywords are a prefix of overview

I can add more features without code modification; the list of features is defined and read using the naming pattern (1-n.json). Now I can go to the Step 3 Train tab and simply click the Train button. At the first stage the training app takes all feature files and builds the feature set, which is saved in Elasticsearch (the ES_FEATURE_SET_NAME environment variable defines the name of this set). In the next step the latest labeling file (ordered by the timestamp) is processed (for each labeled item the feature values are loaded), e.g.

4 qid:1 # 7555 Rambo

The app takes the document with id=7555 and gets the Elasticsearch score for each defined feature. The Rambo example is translated into:

4 qid:1 1:12.318446 2:10.573845 3:1.0 4:1.0 # 7555 rambo

which means that the score of feature 1 is 12.318446 (and respectively 10.573845, 1.0, 1.0 for features 2, 3 and 4). This format is readable for the RankLib library, and the training can be performed. The full list of parameters is available at https://sourceforge.net/p/lemur/wiki/RankLib/.
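The translation of a labeled judgment plus per-feature scores into a RankLib training line can be sketched as below. The feature scores are the values from the Rambo example above; in the platform they would be fetched from Elasticsearch for each document, and the helper name to_ranklib is hypothetical:

```python
def to_ranklib(grade, qid, doc_id, feature_scores, keywords):
    # RankLib expects: "<grade> qid:<qid> 1:<f1> 2:<f2> ... # <docId> <comment>"
    feats = " ".join("%d:%s" % (i + 1, s) for i, s in enumerate(feature_scores))
    return "%d qid:%d %s # %s %s" % (grade, qid, feats, doc_id, keywords)

line = to_ranklib(4, 1, 7555, [12.318446, 10.573845, 1.0, 1.0], "rambo")
# -> "4 qid:1 1:12.318446 2:10.573845 3:1.0 4:1.0 # 7555 rambo"
```

Each labeled document becomes one such line, and the resulting file is what RankLib consumes for training.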
The ranker type is chosen using the ES_MODEL_TYPE parameter:

– 0: MART (gradient boosted regression tree)
– 1: RankNet
– 2: RankBoost
– 4: Coordinate Ascent
– 6: LambdaMART
– 7: ListNet
– 8: Random Forests

The default value is LambdaMART (6). Additionally, by setting ES_METRIC_TYPE we can choose the optimization metric. Possible values:

– MAP
– NDCG@k
– DCG@k
– P@k
– RR@k
– ERR@k

The default value is ERR@10. Finally we obtain the trained model, which is deployed to Elasticsearch. The project can deploy multiple trained models; the deployed model name is defined by ES_MODEL_NAME.

### D. Testing

In the last step we can test the trained and deployed model. We can choose the model using the ES_MODEL_NAME parameter. It is used in the search query and can be different in each request, which is useful when we need to perform A/B testing. Happy searching 🙂

# Tensorflow meets C# Azure function

Tensorflow meets C# Azure function and … In this post I would like to show how to deploy a TensorFlow model with a C# Azure Function. I will use TensorFlowSharp, the .NET bindings to the TensorFlow library, to create an HTTP endpoint which recognizes images.

## Code

dotnet new classlib
dotnet add package TensorFlowSharp -v 1.9.0

Then create the file TensorflowImageClassification.cs. Here I have defined the HTTP entry point for the Azure Function (the Run method). The q query parameter is taken from the URL and used as the URL of the image which will be recognized. The solution analyzes the image using a convolutional neural network arranged with the Inception architecture. The function will automatically download the trained Inception model, thus the function's first run will take a little bit longer. The model will be saved to D:\home\site\wwwroot\. The convolutional neural network graph will be kept in memory (graphCache), thus the function doesn't have to read the model on every request.
On the other hand, the input image tensor has to be prepared and preprocessed for every request (ConstructGraphToNormalizeImage). Finally I can run the command:

dotnet publish

which creates the package for the function deployment.

## Azure function

To deploy the code I will create the Azure Function (Consumption plan) with the HTTP trigger. Additionally I will set the function entry point; the function.json will be defined as: Kudu will be used to deploy the already prepared package. Additionally I have to deploy libtensorflow.dll from /runtimes/win7-x64/native (otherwise Azure Functions won't load it). The bin directory should look like: Finally I can test the Azure Function: the function recognizes the image and returns the label with the highest probability.

# Another brick in the … recommendation system – Databricks in action

Today I'd like to investigate Databricks. I will show how it works and how to prepare a simple recommendation system using the collaborative filtering algorithm, which can help match products to the expectations and preferences of users. Collaborative filtering is extremely useful when we know the relations (e.g. ratings) between products and users but it is difficult to indicate the most significant features.

## Databricks

First of all, I have to set up the Databricks service. I can use Microsoft Azure Databricks or Databricks on AWS, but the best way to start is to use the Community edition.

### Data

In this example I use the MovieLens small dataset to create recommendations for the movies. After unzipping the package I use the ratings.csv file. On the main page of Databricks click Upload Data and select the file. The file will be located on DBFS (the Databricks File System) and will have the path /FileStore/tables/ratings.csv. Now I can start model training.
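Once trained, a collaborative-filtering model of this kind reduces to a feature vector per user and per movie, and the predicted rating is simply their scalar product. A minimal numpy illustration, with made-up rank-3 vectors standing in for the ones exported from the model:

```python
import numpy as np

# Illustrative latent-feature vectors (rank = 3); in practice these
# would be exported from the trained model rather than hand-written.
user_features = np.array([0.9, -0.2, 0.4])
movie_features = np.array([4.0, 1.0, 2.0])

# predicted rating = dot(user, movie) = 0.9*4.0 - 0.2*1.0 + 0.4*2.0 = 4.2
predicted_rating = float(np.dot(user_features, movie_features))
```

Because the prediction is just a dot product, it can be recomputed outside Databricks, for example in a relational database after prefiltering candidates with business rules.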
### Notebook

The data is ready, thus in the next step I can create a Databricks notebook (the New Notebook option on the Databricks main page), similar to a Jupyter notebook. Using Databricks I can prepare the recommendation in a few steps. First of all I read and parse the data; because the data file contains a header row, I additionally have to cut it off. In the next step I split the data into a training part, which will be used to train the model, and a testing part for model evaluation. I can not only create the ratings for each user/product pair but also export the user and product (in this case movie) features. The features are in general meaningless latent factors, but deeper analysis and intuition can give them meaning, e.g. movie genre. The number of features is defined by the rank parameter of the training method. The user/product rating is defined as the scalar product of the user and product feature vectors. This gives us the ability to use them outside Databricks, e.g. in a relational database we can prefilter the movies using defined business rules and then order them using the user/product features. Finally I have shown how to save the user and product features as JSON and put them into Azure Blob storage.

# Hello from serverless messenger chatbot

Messenger chatbots are becoming more and more popular. They can help us order pizzas, ask about the weather or check the news. In this article I would like to show you how to build a simple Messenger chatbot in Python and run it on AWS Lambda. Additionally, we will use the wit.ai service to add natural language understanding functionality and make it more intelligent. To build the Messenger chatbot I will need a Facebook app and a Facebook page. The whole communication goes through a Facebook page, thus I need to create one. I will need the page id, which you can find at the bottom of your page:

### Settings

I will copy the AppId and AppSecret, which will be needed in the next steps:

### Messenger product

Then I will add the Messenger product and set it up.
### Webhook

Finally I have to set up the webhooks for the Messenger product. To finish this step I need to set up our chatbot on AWS Lambda. I also have to provide the verify token, which will be used to validate our endpoint.

## AWS Lambda

Now I will prepare my chatbot endpoint and set it up on AWS Lambda.

### Trigger

For my chatbot I need to configure an API Gateway trigger. I have to choose open security, otherwise I won't be able to call it from Messenger.

### Code

I also need to provide the code which will handle the Messenger webhook and send the response. I will simply put the code in the online editor. Let's take a look at the code:

### Configuration

Below I have to set up the environment variables:

verify_token – the verification token (I use KeePass to generate it) which we will use in the webhook setup
access_token – the value from the Messenger webhook page setup

Now I'm ready to finish the webhook configuration: I use the API Gateway URL as the Callback URL together with the verify_token I have just generated.

## Natural language understanding

Messenger gives an easy way to add natural language understanding functionality. To add this I simply configure it on the Messenger product setup page. Here I can choose from already trained models, but I will go further and create a custom model. Messenger will create the new wit.ai project for me. On wit.ai I can simply add some intents (like: hungry) and additional information which can be retrieved from the phrase (like: I want some pizza). The Messenger/wit.ai integration is very smooth. Let's analyze the webhook JSON I get when I type "I want to eat pizza". After the wit.ai integration, the nlp object is added. Now I can get the recognized intent with some confidence (like: hungry) and additional entities (like: dish). Finally I can talk with my chatbot 🙂
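The shape of the webhook handler described above can be sketched as follows. This is a hedged sketch, not the post's actual code (which was embedded in the online editor): it assumes API Gateway's Lambda proxy event shape, and VERIFY_TOKEN stands in for the verify_token environment variable.

```python
import json

VERIFY_TOKEN = "my-generated-verify-token"  # placeholder for the env variable

def lambda_handler(event, context):
    if event.get("httpMethod") == "GET":
        # Messenger's one-time verification handshake: echo hub.challenge
        # back only when the verify token matches.
        params = event.get("queryStringParameters") or {}
        if params.get("hub.verify_token") == VERIFY_TOKEN:
            return {"statusCode": 200, "body": params.get("hub.challenge", "")}
        return {"statusCode": 403, "body": "invalid verify token"}
    # Regular webhook event: collect the incoming message texts.
    payload = json.loads(event.get("body") or "{}")
    texts = [m.get("message", {}).get("text")
             for entry in payload.get("entry", [])
             for m in entry.get("messaging", [])]
    # ...here the Send API would be called using access_token...
    return {"statusCode": 200, "body": json.dumps({"received": texts})}
```

Messenger retries deliveries that don't get a 200 response, so the handler should always return 200 for well-formed webhook events.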
# How to calculate resistivity of coaxial cylinder

1. Jun 18, 2012

### jacobier

Is there any model for the resistivity of two tightly attached coaxial cylinders? For example, a copper core wire is coated with a layer of aluminum. How do I calculate the final resistivity along the axis?

2. Jun 19, 2012

### Hassan2

There is a method for this, but that's related to the radial resistance (the insulator is considered imperfect). The resistance along the axis seems much simpler to me. Just plug the cross-sectional area, length and conductivity of both the core and the shield into the related formula ($R=\frac{l}{\sigma S}$), and add together the resistance of the core and the shield. Hopefully I'm not missing something!

3. Jun 19, 2012

### haruspex

I don't think you mean add the resistances together. These are in parallel here, so it's the conductances that add. Btw, in the OP, you mean resistance, not resistivity. Resistivity is a property of the material, independent of shape and size.

4. Jun 19, 2012

### jacobier

Thank you very much. Actually I mean "resistance". I have been looking for a solution to this for a long time. Probably the problem is not as complicated as I thought, so no one was interested in talking about it.

5. Jun 19, 2012

### Hassan2

Actually what I had in mind is that the two, with a load connected to the end, form a series circuit; that's why I added the "resistances" together. (The OP asked about the resistance along the axis.)

6. Jun 19, 2012

### Hassan2

7. Jun 19, 2012

### haruspex

I get the feeling you have the wrong model for the set-up. There's a copper core and an Al coating. The axial current will consist of some current in each, in parallel. At each end there may be some radial flow, but I'm assuming we can ignore that.

8. Jun 19, 2012

### marcusl

haruspex is correct that the conductances add. To do it with resistance instead, separately calculate the resistance of a length of the copper core and of the same length of hollow Al tube.
The total resistance is the parallel combination of the two, using the usual formula for resistors in parallel.

9. Jun 19, 2012

### Hassan2

marcusl and haruspex, in the attached figure, aren't the series resistances of the core and the shield added together to give the cable resistance? Sorry, I understand that this is a very simple question, but I would like to know what I am not getting. Thanks.

Attached: coax.jpg

Last edited: Jun 20, 2012

10. Jun 20, 2012

### haruspex

Ah - it was me that had the wrong set-up in mind. Yes, they're in series.

11. Jun 20, 2012

### nasu

But this is a coaxial cable and not what is described in the OP. The copper core is not "coated with layer of aluminum"; the conductors are separated by an insulator. The OP describes two tightly attached coaxial cylinders.

12. Jun 20, 2012

### Hassan2

You are right, nasu. I got it wrong from the beginning. They are parallel then. Many thanks.
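The thread's conclusion (conductances add for the tightly attached core and coating) can be checked with a quick calculation. The dimensions and 1 m length below are illustrative, not from the thread:

```python
import math

rho_cu = 1.68e-8            # resistivity of copper, ohm·m
rho_al = 2.65e-8            # resistivity of aluminum, ohm·m
r1, r2, L = 1.0e-3, 1.5e-3, 1.0   # core radius, outer radius, length (m)

# R = rho * L / S for each conductor separately
R_core = rho_cu * L / (math.pi * r1**2)
R_coat = rho_al * L / (math.pi * (r2**2 - r1**2))

# The two paths carry the axial current in parallel, so conductances add:
R_total = 1.0 / (1.0 / R_core + 1.0 / R_coat)
```

As expected, R_total comes out smaller than either path alone (about 3 mΩ for these numbers, versus roughly 5.3 mΩ for the bare copper core).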
asin

y = asin(x)

Description

y = asin(x) returns the Inverse Sine (sin-1) of the elements of x. The asin function operates element-wise on arrays. For real elements of x in the interval [-1,1], asin(x) returns values in the interval [-pi/2,pi/2]. For real elements of x outside the interval [-1,1] and for complex values of x, asin(x) returns complex values. All angles are in radians.

Examples

Inverse Sine of a Value

`asin(0.5)`
```ans = 0.5236```

Inverse Sine of a Vector of Complex Values

Find the inverse sine of the elements of vector x. The asin function acts on x element-wise.

```x = [0.5i 1+3i -2.2+i];
y = asin(x)```
```y =
   0.0000 + 0.4812i   0.3076 + 1.8642i  -1.1091 + 1.5480i```

Graph of the Inverse Sine Function

Graph the inverse sine over the domain [-1, 1].

```x = -1:.01:1;
plot(x,asin(x))
grid on
```

Inverse Sine

The inverse sine is defined as

$\sin^{-1}(z)=-i\log\left[iz+\left(1-z^{2}\right)^{1/2}\right].$
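The logarithmic definition above can be cross-checked numerically against a library implementation. Here is a small sketch in Python (used in place of MATLAB) comparing the formula with cmath.asin, using the complex value from the vector example:

```python
import cmath

def asin_via_log(z):
    # sin^-1(z) = -i * log(i*z + (1 - z^2)^(1/2)), principal branch
    return -1j * cmath.log(1j * z + (1 - z * z) ** 0.5)

z = 1 + 3j
assert abs(asin_via_log(z) - cmath.asin(z)) < 1e-12
# Matches the documented result asin(1+3i) ≈ 0.3076 + 1.8642i,
# and asin_via_log(0.5) ≈ 0.5236 = pi/6, as in the scalar example.
```

Both paths use the principal branch, so they agree everywhere on the complex plane up to floating-point error.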
## Introduction

G-protein-coupled receptors (GPCRs) have evolved to transduce signals from the outside of a cell to the inside, thereby allowing the cell to respond to changes in its environment1. As a consequence of their role as transducers, GPCRs feature at least two interaction sites: one on the extracellular side, sensing the signalling agents (from photons to peptides), the other on the intracellular side, providing a place for the effector proteins to bind2. As the repertoires of extracellular signalling agents and intracellular effector proteins are quite limited, these sites are oftentimes conserved within a receptor subclass. This can pose a challenge to ligand and drug discovery efforts when the treatment of an ailment requires the selective targeting of a particular receptor subtype. An example of such a challenge is provided by the β1- and β2-adrenergic receptors (β1- and β2AR), which differ only by a Phe/Tyr substitution in their orthosteric sites. Blockade of the β1AR in heart by beta-blockers (such as bisoprolol) is desired for cardiovascular disease, but antagonising the β2AR in lung tissue is detrimental for chronic obstructive pulmonary disease or asthma. Conversely, stimulation of the β2AR (by e.g. salmeterol) helps asthma patients but potentially damages their heart through concomitant agonism of the β1AR3. As a possible way of circumventing this challenge of highly similar pockets, the targeting of allosteric pockets is billed as a sensible alternative4. Due to the nature of GPCRs as bundles of seven transmembrane helices that are only relatively loosely coupled5, one could indeed expect that a ligand binding to one of these pockets is able to modulate the response of a receptor. Moreover, it is generally claimed—but has never been shown—that these alternative pockets share lower sequence homology4. There are examples of individual ligands binding to non-orthosteric sites on a few receptors (e.g. refs.
6,7,8,9), but it is currently unknown to what extent such binding sites exist across the receptorome and how different or similar they are in shape and sequence. In this work, we therefore identify and analyse the ensemble of all discernible pockets—the pocketome—of 557 GPCR structures of 113 different receptors. We discover potential pockets by exhaustive docking of small molecular probes, taking into account the different electrostatics of the solvent-exposed and transmembrane parts of the receptors, and compare these data across all receptors. Based on class A and B1 structures in active and inactive conformations, we compute residue contacts including both backbone and side chain atoms. In doing so, we identify interhelical residue contacts crucial for an active or inactive state of both class A and class B1 GPCRs (we follow the nomenclature in IUPHAR’s “Guide to Pharmacology” and refer to classes of GPCRs rather than families). We are then able to show that known and as-of-yet-untargeted (orphan) allosteric sites (abbreviated as KS and OS, respectively, in the following) contain such contacts of importance, speaking to the likelihood of their functional relevance. These computational investigations are strengthened with experimental studies of two model class A receptors, the muscarinic acetylcholine receptor M3 (M3R) and the β2AR. Through mutations of two pockets that have not been targeted by a synthetic ligand before, we demonstrate that the residues forming these pockets are indeed involved in receptor activation after stimulation with an orthosteric agonist. Last, but not least, we compare the sequence similarity of the most frequently occurring pockets, thereby providing a quantitative assessment of their overall selectivity potential. This therefore represents the currently most exhaustive analysis of the GPCR pocketome, spanning receptors from classes A, B1, B2, C, D1, and F. 
## Results

### Probe docking & conversion to volumes

Our definition of a pocket is based on the computational docking of small molecules (probes; while the probes we used are probably too small to bind strongly to a receptor by themselves, they represent chemical moieties that are typical for GPCR ligands and are thus suited to investigate the details of cavities on receptors) to the surface of each GPCR structure individually. We therefore first show the results of our docking calculations and the conversion to volumes before turning to the identified hotspots (the pockets) themselves. Please note that, for our approach, we did not consider dimerisation of the 7TM bundle (as has been described for class C GPCRs), but rather docked to the individual monomers. Moreover, we treated each receptor structure as rigid. Exhaustively docking the 40 small, chemically diverse molecular probes (see Methods and Supplementary Table 1) into 557 structures from 113 distinct receptors, we obtained 1,621,367 poses in total (a more detailed description of the statistics is provided in the Supplementary Notes and Supplementary Fig. 1). We provide a list of all analysed structures together with the docking files as Supplementary Data 1 (ref. 10). To analyse the vast number of docked molecules in a statistical manner, we used our volumetric averaging algorithm (see Methods) in order to transform the poses of each docking into visualisable probe density maps. These maps are divided into equal volume elements, each of them giving information about how often a probe atom occupied a particular region. On average, each of the obtained maps consisted of between 1,000,000 and 3,500,000 volume elements. Since we wanted to investigate the density maps for trends across the different receptor classes, maps of individual receptors were added up for each class to yield a single map with higher populations overall.
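The paper's own volumetric averaging algorithm is described in its Methods and is not reproduced here; the sketch below merely illustrates the underlying idea of binning pooled probe-atom coordinates into a regular grid of volume elements. The function name, grid spacing, padding, and toy coordinates are our assumptions, not the authors' implementation.

```python
import numpy as np

def poses_to_density(atom_coords, spacing=1.0, padding=2.0):
    """Bin probe-atom coordinates into a regular 3-D occupancy grid.

    atom_coords : (N, 3) array of atom positions (Å), pooled from all
    docking poses of one structure. Each voxel counts how often a probe
    atom occupied that region. Returns the grid and its bin edges.
    """
    lo = atom_coords.min(axis=0) - padding
    hi = atom_coords.max(axis=0) + padding
    bins = [np.arange(l, h + spacing, spacing) for l, h in zip(lo, hi)]
    grid, edges = np.histogramdd(atom_coords, bins=bins)
    return grid, edges

# toy example: 1000 "probe atoms" clustered around two hotspots
rng = np.random.default_rng(0)
coords = np.vstack([
    rng.normal(loc=(0.0, 0.0, 0.0), scale=0.8, size=(600, 3)),
    rng.normal(loc=(8.0, 0.0, 0.0), scale=0.8, size=(400, 3)),
])
grid, _ = poses_to_density(coords)
print(int(grid.sum()))  # 1000 — every atom falls into exactly one voxel
```

Class-averaged maps, as described above, would then simply be element-wise sums of per-receptor grids defined on a common reference frame.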
### General distribution of pockets

The class-specific density maps provided with this work can be visualised using PyMOL (see Supplementary Data 2 (ref. 10) for the grid files, template, and README) and might aid a reader with the following description. Said density maps reveal multiple contiguous regions that represent common cavities on the surface of all GPCRs analysed in this study (Fig. 1). Particularly for class A GPCRs, these pockets are distributed in a notably symmetric manner: both at the intra- and extracellular end of the 7TM bundle, pockets can be seen between each pair of adjacent helices. The density maps for the other classes are somewhat less well-defined and more scattered overall. This is due to the lower numbers of structures and therefore poorer statistics, as individual structures—and possible deviations in them—carry a relatively higher weight than for the more numerous class A structures. Here, we present only those pockets that we will discuss and examine in depth, whereas the rest are described in the Supplementary Notes. We chose to focus on three of the largest and—by our analysis—best-defined orphan sites and contrast them with an equal number of known sites, which we picked because they are clearly defined and because they host synthetic ligands. While the vast majority of sites defined by the densities is located at the outward-facing receptor portion (i.e. receptor residues in contact with the membrane), we were also able to identify regions of density inside the 7TM bundle. In each class, a large interhelical site (Interhelical Binding Site 1, IBS1) and adjacent secondary binding pockets (IBS2 and IBS3) can clearly be discerned. Whereas IBS1 represents the classic orthosteric site in class A GPCRs, it forms—together with the extracellular domain (ECD)—the peptide binding site in class B GPCRs. Furthermore, IBS2 and IBS3 are two known exosites in class A GPCRs.
Since the orthosteric site of class C receptors is located in the extracellular Venus flytrap (VFT) domain, IBS1 is commonly referred to as an allosteric site in class C receptors. Our methodology was able to correctly depict the size and shape of these known pockets for different classes, and we therefore hypothesised that the other pockets identified in this work can indeed also host ligands. By aligning our density maps with each other, one can see that the average IBS1 for class C receptors protrudes significantly deeper than the one of class B1, which again goes slightly deeper than the one in class A. This is perfectly consistent with experimental evidence11. Due to the overall higher flexibility and thus often worse resolution of extra- and intracellular loops, pockets found within these regions will not be further analysed or discussed. Comparing the densities on the outward-facing receptor portion for all analysed GPCR classes, we assigned pocket identifiers to several volumes that appeared well-defined and clearly distinct from their neighbouring densities. This facilitated later analysis and provided the means for a common orientation and discussion. However, since not only the GPCR structures themselves but also the density map shapes differ across the classes, the reader’s view on whether a particular region is an individual pocket might differ from ours. That being said, our general conclusions are independent of any such small differences in definitions. The full list of pockets is presented in Table 1 and Supplementary Table 2. Going around the 7TM bundle, one can observe regions of density at the upper and lower ends between helices V and VI. These sites are referred to as KS12 and OS5, respectively. For some classes, another separated hotspot resides right between these two sites (OS4). At the lower end of the 7TM bundle, OS5 shows a large spot for classes A and F.
When directly compared to class A, the density of class B1 is subdivided into multiple regions. While for classes B2 and C a small hotspot is visible, class D1 only shows some fragmented density in front of helix V. Another larger spot is visible between helices I and VII above helix VIII for classes A, B1, C, and F (OS9). The class B2 and D1 maps only show a small spot in this region, which might be due to the lack of helix VIII in the available structures. Encouragingly, we identified density near the region of the sodium binding pocket (SODIUM) for some classes. While classes A, B1, and F show somewhat weaker densities, the class C IBS1 extends down into this region, which makes it clearly defined. Lastly, two regions of density were found at the intracellular portion of the 7TM bundle for all classes. Here, one spot could be identified as the G-protein binding site between helices II, III, V and VI (GPROT). Adjacent to it, density for KS11 resides between helices I, II, VII, and VIII. Despite the fact that we only considered monomeric subunits of the 7TM bundle in our calculations, our methodology was also able to reveal all dimerisation interfaces, which have predominantly been described for class C GPCRs. The conserved helix VI-helix VI dimerisation interface in active-state class C receptors encompasses KS7, KS8, KS9, and partially OS5 and is known to bind positive allosteric modulators (PAMs)12,13. Two other dimerisation interfaces can be found between helix III-helix IV (mGlu2) or helix III-helix V (GABAB) in inactive-state class C GPCRs14,15. While the former is mainly formed by residues at the extracellular end of the helices and is thus represented by KS2, the latter dimerisation interface is located in the region of KS5.
### GPCR states can be described by their residue contact network

In order to provide evidence that it is possible to achieve modulation of receptor function with a ligand binding to one of the allosteric pockets, we investigated to what extent these pockets are formed by residues that also participate in contact patterns specific for an active or inactive conformation of the receptor. The rationale is that residues which are involved in crucial state-specific contacts are more susceptible to interference by a ligand. In Fig. 2 and Supplementary Fig. 2, the principal component analyses of the class A and B1 residue contacts are shown, respectively. Here, we decided to focus on the first two components, since they contributed the most to the overall variance as shown in Supplementary Fig. 3 and revealed a clear separation of activation states. Along the diagonal of the PC1 vs. PC2 plot for class A, we identified a distribution of states ranging all the way from structures classified as active to those classified as inactive, with intermediate structures positioned in between, congruent with the assignment of states in GPCRdb16. To a certain degree, the large accumulation of structures in the bottom left shows a mixture of the three classifications. Interestingly, structures classified as inactive are spread over a wide range of values of PC1, with only small differences in PC2, while active and intermediate structures display greater variance along PC2. Our contact-map-based PCA seems to indicate a slightly different view of activation compared to the assignment in GPCRdb, which is based on helix II-helix VI distance cutoffs, the presence of G-protein or arrestin, and further similarity measurements. The re-calculated PCA for those points that are clearly active or inactive according to our measures shows that one principal component is sufficient to explain the difference between the residue contacts of clearly active and inactive structures.
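The general shape of such an analysis can be sketched as follows: build a binary contact map per structure from per-residue atom coordinates, flatten the maps into a structures × contacts matrix, and project it onto its leading principal components via an SVD. The 4.5 Å cutoff, function names, and toy data below are illustrative assumptions; the actual contact definition and PCA settings used in the paper are given in its Methods.

```python
import numpy as np

def contact_map(coords, cutoff=4.5):
    """Binary residue contact map from per-residue atom coordinates.

    coords : list of (n_atoms_i, 3) arrays, one per residue (backbone and
    side chain atoms). Residues i and j are 'in contact' if any atom pair
    is closer than `cutoff` Å (assumed cutoff, for illustration only).
    """
    n = len(coords)
    cm = np.zeros((n, n), dtype=float)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i][:, None, :] - coords[j][None, :, :], axis=-1)
            cm[i, j] = cm[j, i] = float(d.min() < cutoff)
    return cm

def pca(X, k=2):
    """Project the rows of X (structures × flattened contacts) onto k PCs."""
    Xc = X - X.mean(axis=0)                       # mean-centre each contact
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * S[:k]                       # scores on the first k PCs

# toy data: 6 'structures' as upper-triangle contact vectors, two groups
rng = np.random.default_rng(1)
active = rng.random((3, 45)) < 0.2
inactive = rng.random((3, 45)) < 0.2
inactive[:, :10] = True                           # group-specific contacts
X = np.vstack([active, inactive]).astype(float)
proj = pca(X, k=2)
print(proj.shape)  # (6, 2)
```

With real data, the rows of `proj` would be the points in a PC1 vs. PC2 plot such as Fig. 2, and the loadings (`Vt`) would indicate which individual residue contacts drive the separation of states.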
The PCA for class B1 contacts (Supplementary Fig. 2) shows that the structures classified as active or inactive are separated along the second principal component. Notably, four structures are separated from the others along the first principal component. As only one B1 structure with an assignment as an intermediate conformation in the GPCRdb was available at the time of this analysis, it was not included in the PCA. As for class A, the class B1 PCA was re-calculated considering only those structures belonging to the groups of points clearly classifiable as active or inactive. The four outliers described before were not considered in this recalculation. As expected, the PCA now shows a separation of the states along the most important first principal component. Based on our analysis for two GPCR classes, we show that the structural state of a receptor by GPCRdb definition is closely linked to its entire residue contact network. However, we point out that a non-negligible number of class A GPCR structures with a GPCRdb-assignment as active or inactive would fall into the intermediate classification by our contact map categorisation (351 out of 417 class A structures). Hence, the residue contact map of a given structure might provide additional information on top of the GPCRdb definition of a conformational state based on interhelical distances and the type of co-crystallised ligand. Finally, we used the well-separated groups of structures from the re-calculated PCA (Fig. 2, right panel) to extract the most important and conserved active- and inactive-state-specific contacts for each of the sites of interest. We focused on contacts formed between residues of two distinct helices, since such contacts could potentially be targeted by a ligand.

### Identification of known pockets

As mentioned in the general description (above and Supplementary Notes), we found all the allosteric binding sites already known from crystallographic experiments (e.g. refs.
6,7,8,9), which can be considered an excellent validation of the general applicability of our docking-based approach (see Table 1). In this section, we focus on one exemplary site, describe its conservation across the receptorome and explain possible modes of action by using our residue contact data. Two more sites are discussed in the Supplementary Notes. This known pocket, KS2, is located at the outward-facing residues of the upper ends of helices III and IV. While this site is only known for two class A GPCRs, namely the free fatty acid receptor FFAR1 and protease-activated receptor-2 (PAR2), our density maps show that it seems to be conserved across all GPCR classes. In order to further validate this finding, we analysed the receptorome-wide sequence identity and similarity of residues forming this site by using the definition of Table 2. While the matrices in Supplementary Figs. 4 and 5 show that the overall identity is rather low, the similarity based on physicochemical properties is much higher with an average value above 50% (Supplementary Fig. 6). We then investigated the interactions of known ligands with KS2 and compared them to our residue contact analysis for class A and class B1 GPCRs. Two cases are known from the available structural data: In the case of the FFAR1 (PDB: 4PHU17, 5TZR18, 5TZY18), the agonists fasiglifam and MK-8666 penetrate between the upper ends of helices III and IV coming from the inner portion of the receptor. While being anchored by polar contacts in the orthosteric region, hydrophobic interactions are dominant in KS2. A structure for the PAR2 (PDB: 5NDZ19) reveals a different mode of binding. Here, the allosteric antagonist AZ3451 stacks against the outward-facing portion of helices III and IV while only making one polar contact to residue 3.30.
Instead of pushing the two helices apart, this allosteric ligand seems to hold them together, mainly through hydrophobic interactions. Our class A contact analysis for known sites shown in Supplementary Fig. 7 revealed multiple helix III-IV contacts crucial for an inactive conformation of the receptor such as 3.23–4.61, 3.27–4.61, 3.30–4.60, and 3.34–4.58. Furthermore, this analysis indicated one highly conserved active state contact, 3.30–4.61.

### Identification of orphan pockets

Here, we focus on two orphan sites that could represent binding sites for allosteric modulators. They are among the best-defined and largest-volume sites that emerged from our analysis based on the previously mentioned docking of small molecular probes. A third site is discussed in Supplementary Notes. Since no structural data of ligands binding to these regions is known yet, we will describe the pockets based on their amino acid sequences and our class A and class B1 residue contact analysis. Similar to the three known sites described in more detail in this work, these pockets also reside in the outward-facing portion of the receptor. As shown in Supplementary Fig. 8, they mainly consist of hydrophobic residues. This is expected, since most of their volume lies within the membrane portion of the GPCRs. The first orphan site discussed here, OS5, is located at the lower portion of the 7TM bundle between helices V and VI. Density in this region was conserved across all GPCR classes. However, our receptorome-wide sequence similarity analysis reveals that the physicochemical properties slightly differ across the classes. While class A and B1 receptors share a high sequence similarity with each other in this region, GPCRs belonging to classes C and F only show high sequence similarity within their respective subclass. This fact might explain the different shapes of the densities.
In direct comparison to the other classes, the class F density is shifted more towards the intracellular side of the GPCR. Hence, the receptorome-wide definition of OS5 (Table 2) might not be suitable for class F receptors. In contrast, the position and shape of OS9 between helices I, VII and VIII is highly conserved across all GPCR structures. The physicochemical properties of this site are more conserved across classes A, B1 and C, with an average sequence similarity above 50%. Again, one exception is class F with a much lower homology to the other classes. In order to identify the impact of OS5 and OS9 on the biological function of the receptors, we also conducted mutation studies with the M3R and β2AR. A visualisation of the locations of the residues mutated in our experiments is shown in Supplementary Fig. 9. Of note, we chose the residues such that their side chains are pointing into the sites, and are thus available for interaction with a ligand. For the M3R, we constructed double or quadruple mutants, where two or four, respectively, of the residues that form these pockets were changed (residues mutated in a particular mutant are connected by a grey vertical bar in Fig. 3). These mutants were tested in a BRET-based G-protein activation assay as well as in a FRET-based β-arrestin2 recruitment assay. The summary of the resulting data is shown in Fig. 3. Regarding OS5, our results show a clear increase in the logEC50 values of the concentration-response curve of acetylcholine-induced Gαq activation relative to M3R wt (Fig. 3, left, and Supplementary Fig. 11A). Of note, except for one double mutation, all others resulted in shifts of at least one log unit. In addition, a marked decrease in the efficacy of acetylcholine to recruit β-arrestin2 to the mutant M3Rs was observed, with only a small effect on the logEC50 value.
This is consistent with our finding that residues forming this pocket are involved in rearrangements important for receptor activation as shown in the middle panel of Fig. 3. The same is true for OS9, where a similar right shift of Gαq activation in the mutants occurs (Fig. 3 and Supplementary Fig. 11B). Similarly, the extent of the recruitment of β-arrestin2 is substantially reduced for all mutations of OS9 (Fig. 3 and Supplementary Fig. 12), without major effects on the logEC50 values, suggesting a major impact of the mutants on agonist efficacy. By demonstrating that the partial agonist arecoline led to a greatly diminished G-protein activation even under saturating conditions (Supplementary Fig. 11C), we confirmed that the mutations indeed strongly affect the efficacy of muscarinic agonists. To further support our findings, we also mutated individual OS5 and OS9 residues in the β2AR. The results are summarised in Fig. 4 and full curves are shown in Supplementary Fig. 13. Similar to our findings for the M3R, our results show an increase in the logEC50 values of adrenaline-induced activation of two different biosensors (Gαs and β-arrestin2) and a decrease in efficacy relative to the β2AR wt. Our β2AR signalling results show that the logEC50 is increased for either Gαs or β-arrestin2 or both for half the mutations across the OS5 and OS9 pockets. Similarly, for about half of the mutations, the efficacy for β-arrestin2 recruitment is reduced. As was the case for the M3R, these results further strengthen the assumption that OS5 and OS9 are indeed physiologically relevant. In order to obtain deeper insight into the possible reasons behind the interference of residues in OS5 and OS9 with GPCR activation, we compared the mutational data with our class A contact analysis (middle panel of Figs. 3 and 4). For OS5, we found that the mutated residues were frequently involved in conserved active and inactive state contacts. 
Starting with residue 5.54, our analysis revealed two contacts (3.43–5.54 and 5.54–6.44) important for an inactive and active state of the receptor, respectively. The decrease of function upon mutating 5.54 indicates that 5.54–6.44 might be a contact crucial for receptor activation. The same holds true for residues 5.58, 5.61, and 6.41. In our contact analysis, we found that the region formed by these residues includes numerous contacts important for an active state. While the microswitch contacts 3.50–5.58 and 5.58–6.40 are known for their importance for an active conformation of the receptor, 5.61 forms two active-state contacts, with 3.55 and 6.33. Furthermore, residue 6.41 makes exactly one crucial active-state contact, namely to 5.55. Overall, our mutagenesis experiments and residue contact analysis suggest that a considerable fraction of the residues of OS5 are involved in key active-state contacts. These residues could therefore be addressed by a synthetic allosteric modulator in order to reduce receptor activity. A largely similar picture holds true for OS9. Here, mutating the conserved residues 7.50 and 8.50 led to a decrease in receptor function. Per our contact analysis, both residues are involved in several important active-state contacts such as 2.50–7.50 and 7.54–8.50. Finally, we mutated two residues (6.34 and 6.37) that are not part of the two sites described here by themselves, but are involved in relevant active-state contacts to OS5 and OS9 residues, namely 6.34–5.66 and 5.62–6.37. These mutations led to a decrease in Gαq activation in the M3R, speaking to the occurrence of a second-shell effect.

### Occupancy of known allosteric pockets

In order to obtain deeper insight into the properties of the allosteric pockets, we analysed the known sites for the occurrence of molecules besides (synthetic) ligands designed for them, i.e. focusing on crystallisation additives and co-purified substances.
We tabulated all structure determination adjuvants resolved in any of the 557 structures investigated in this work (Supplementary Table 3). This analysis revealed that in around 35% of the investigated structures, the known sites host additional chemical compounds besides the added orthosteric and allosteric ligands. In particular, a recurrence of cholesterol (or cholesteryl hemisuccinate), oleic acid and glyceryl monooleate can be noticed, all of them adjuvants in purification and/or structure determination processes. Binding is more frequent to the more superficial pockets. To avoid bias introduced by the fact that not all adjuvants are present in all buffers, we also calculated the background distribution for all substances. We found that for 12.4% (69 out of 557) of the structures, the known pockets are occupied by at least one type of crystallisation adjuvant. We grouped the additives into five categories: surfactants, steroids (cholesterol and derivatives), fatty acids, polymers (predominantly PEG and PPG), and anions. This allows us to deduce a preference of the pockets for certain types of components. Of course, not every known pocket was occupied by a component, and, as mentioned above, only a relatively small set of structures out of the total 557 contained at least one occupied pocket. Hence, we assume that the presence of a particular chemical moiety can be interpreted as a preference of a pocket rather than a random occurrence attributable to the crystallisation conditions. In other words, if a component was stable enough in order to be resolved in a pocket (and given that the electron density was sufficient to determine this), it constitutes a binding event. A more detailed analysis of the chart (Fig. 5) elucidates a rather clear picture: The preference of KS11 for anions (26%, or 6 out of 23) is evident, probably fuelled by the interactions with R3.50 in this pocket. The next most frequent category is polymers (17.4% or 4 out of 23).
In contrast, KS5 hosts an abundance of fatty acids, with approximately 70% occurrence (45 out of 64), followed by the substantially smaller percentage of ≈6% (4 out of 64) of steroids. Known pockets KS7 and KS8 are also mostly populated by fatty acids, but less frequently than in the previous case, as they are only observed up to ≈43% (or 13 out of 30). In both cases, the preference for this category is favoured by the presence of several aromatic and hydrophobic residues in the middle sections of helix III and helix IV. KS2 is the pocket with the highest value of occupancy overall (12.4% or 69 out of 557), and in this case as well, the most recurrent category is fatty acids (≈48% or 33 out of 69), followed by steroids (≈32% or 22 out of 69). KS10 and KS1 are populated the least, only 3 structures each (i.e. less than 1%) contain a component—fatty acids in both cases (33% or 1 out of 3 and 100% or 3 out of 3 of the occupied structures, respectively).

## Discussion

In our study, we have determined the occurrence of pockets across the largest part of the currently available structural G-protein-coupled receptorome. Because the ultimate goal of this research is to identify ligands for these pockets through which receptors might be modulated in their activity, we chose the docking of small molecular probes, i.e. chemically valid compounds, as a method, rather than definitions of a pocket based on the protein surface. Thus, we optimised our definition towards what a potential ligand would see. As mentioned before, the probes by themselves are likely too small to bind with reasonable affinity. However, one can certainly imagine that connecting individual probes with appropriate linkers will lead to higher-affinity ligands, a future direction of research. We explicitly accounted for the different environments of the various receptor portions, viz.
the membrane-embedded core and the solvent-exposed ends, by using a docking method where the dielectric constant of the environment can be set to the appropriate values, thus avoiding artefacts. A specifically adapted aggregation method allowed us to average over the probe docking calculations to all 557 GPCR structures. Hence, the pockets we identified exist in the majority of receptors and the pocketome we present in Fig. 1 thus constitutes a representation of the shape and distribution of frequently observable cavities. In addition, we were able to make statements on the conservation of pockets across receptor classes. Our findings strongly suggest that while the pockets are quite dissimilar in sequence space, they are more similar than generally assumed when focusing on the properties of the amino acids rather than their identity. This will merit attention when designing selective ligands, so as not to rely too much on nondirectional interactions. Despite this overall conservation, we can show that some sites are more conserved regarding their shape and physicochemical properties than others. This is also borne out at the level of the presence of crystallisation additives in the pockets. Of course, our analysis is based on rigid receptor structures and it stands to reason that some of these sites will undergo rearrangements upon changes of receptor conformation. This will need appropriate attention during ligand design. Still, the physicochemical characteristics should be exploitable for the design of class- or type-specific allosteric modulators, which has implications when designing ligands for such pockets in order to avoid unintended polypharmacology. Most of the sites identified in this work are located on the outward-facing portion of the 7TM bundle within the membrane; water molecules close to an allosteric ligand were only observed in 14 of the 71 structures that featured an allosteric ligand and were therefore not investigated further.
It cannot be ruled out that they play a role in certain sites or for certain ligands, but this is likely more important for sites that are directly accessible by the solvent such as the G-protein binding site or KS11. We are convinced that the number of structures investigated is such that the general trends observed, at least for class A and class B1, will hold even as new structures become available. In fact, we reran our analysis pipeline shortly before submission, approximately nine months after the first time, and did not observe noticeable changes in the pocket definitions. During this time, the number of class A and B1 structures available increased from 404 to 455 and 46 to 55, respectively. While we found all pockets that have previously been localised through structure determination with a ligand, we also identified several that have not yet successfully been targeted, at least according to publicly available data. To demonstrate that the two most prominent orphan pockets OS5 and OS9 have potential as target sites for small-molecule modulators, we mutated several of the residues lining these two pockets in two model GPCRs, the M3R and the β2AR. In both cases, mutations had a robust effect on both G-protein activation as well as β-arrestin recruitment. Of note, we did not only observe substantial right-shifts of up to more than 10-fold of the concentration-response curves for G-protein activation (indicating the need for higher orthosteric agonist concentrations to achieve similar levels of stimulation), but also decreases in the maximum level of response for β-arrestin. These results indeed point towards the modulatory potential of ligands binding at these sites. Moreover, given the fact that logEC50-values were shifted in opposite directions for Gαq and β-arrestin2 at the M3R, one might speculate that pathway-selective ligands could be designed for these pockets. 
A receptor region similar to OS9 has also recently been investigated in the angiotensin II type 1 receptor20, where it has been termed a cryptic site. As mutations of amino acids led to changes in Gαq and β-arrestin2 responses similar to what we show for our receptors, this lends further credibility to our finding that OS9 is a pan-class A pocket. In addition, we compared the experimental findings to our contact analysis, which is based on the residue contact maps of 557 GPCR structures. We found that ligands addressing the two pockets could potentially act as negative allosteric modulators (NAMs) by disrupting contacts crucial for an active state. This was expected, since both OS5 and OS9 reside near the G-protein binding site, which is known for undergoing profound rearrangements upon receptor activation. However, it also seems plausible that by strategically targeting the inactive-state contacts or stabilising active-state contacts, one could potentially design positive allosteric modulators (PAMs) for both pockets. Our probe docking allows us to make observations also for several of the known pockets. In the case of KS2, the available structural data from the FFAR1 and the PAR2 suggests that either positive or negative modulation of agonism at a receptor could be achieved in this pocket by separating or keeping in place, respectively, the upper ends of helices III and IV with a small molecule ligand. This hypothesis is supported by our contact analysis shown in Supplementary Fig. 7. KS2 contains highly conserved inactive-state contacts between helices III and IV, e.g. 3.23–4.61, 3.27–4.61, 3.30–4.60, and 3.34–4.58, and one crucial active-state contact, 3.30–4.61. By either breaking these contacts or keeping them intact with hydrophobic interactions, an allosteric ligand binding to KS2 could modulate the activation state of a GPCR. A hypothesis for KS5 can be found in the Supplementary Discussion. 
Similar rationales as for KS2 above can be applied to the design of ligands for the orphan sites. Of course, in these cases, the challenges will be to first demonstrate that each orphan site (beyond OS5 and OS9) can indeed be exploited to modulate receptor function by small-molecule ligands, to unequivocally determine the binding locations of such ligands, and to design assays that are fast yet precise enough to be utilised in their optimisation. We certainly hope that the three-dimensional atlas laid out in this work will aid the community in achieving this goal.

## Methods

Unless stated otherwise, all operations in this workflow were scripted using Python 3.7 and Bash. The Python packages requests (version 2.25)21 and urllib3 (version 1.25.11) were used in order to access the REST API of websites listed below. The retrieved data was handled using pandas (version 1.1.4)22. All protein structures and sequence data were handled in Biopython (version 1.78)23 and BioPandas (version 0.2.7)24. Mathematical operations were carried out using NumPy (version 1.19.4)25. Open-source PyMOL (version 2.3.0)26 and Visual Molecular Dynamics (VMD, version 1.9.3)27 were used for the visualisation and further editing of protein structures and volumes. Any other type of data was visualised using plotnine (version 0.8.0)28 and RStudio (version 1.4.1717)29.

### Collection of structural information

Information about all available GPCR structures was fetched from the GPCRdb30, UniProt31, and the Protein Data Bank32 by using our information retrieval pipeline33. The data most relevant for our work was extracted from GPCRdb and included the PDB identification code, UniProt entry name, class, activation state and preferred chain of each GPCR structure. This data was enriched by fetching the accession numbers and canonical amino acid sequences from UniProt.
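The kind of metadata table this step produces can be sketched as below. The two PDB codes are real β2AR structures, but the record field names (`pdb_code`, `state`, `preferred_chain`, …) are plausible assumptions only; the actual schema of the GPCRdb REST service (fetched in the paper via requests) should be checked before relying on them.

```python
import json

# hypothetical sample of what fetched structure records might look like --
# field names are assumed, not taken from the live GPCRdb API
SAMPLE = json.loads("""
[
  {"pdb_code": "3SN6", "protein": "adrb2_human", "state": "active",   "preferred_chain": "R"},
  {"pdb_code": "2RH1", "protein": "adrb2_human", "state": "inactive", "preferred_chain": "A"}
]
""")

def index_by_state(records):
    """Group structure records by activation state, keeping the PDB codes."""
    table = {}
    for rec in records:
        table.setdefault(rec["state"], []).append(rec["pdb_code"])
    return table

by_state = index_by_state(SAMPLE)
print(by_state)  # {'active': ['3SN6'], 'inactive': ['2RH1']}
```

Grouping by activation state in this way is what enables the later per-state analyses (e.g. the contact comparison between clearly active and clearly inactive structures).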
In order to correctly assign solvent-accessible and intra-membrane regions at a later stage of the workflow, information about the positioning of GPCR residues relative to the membrane (inside or outside) was also included. For a more convenient and uniform handling of GPCR amino acid sequences, the canonical amino acid sequences were mapped to their respective class-specific GPCR numbering scheme. By accessing the GPCRdb generic residue number tables34, the Ballesteros-Weinstein35, Wootten36, Pin37, Wang38, and fungal numbering schemes were utilised for class A, B, C, F and D1 GPCRs, respectively. Finally, the PDB-formatted structures were retrieved from the Protein Data Bank.

### Preparation of structures for docking

For each structure, the transmembrane portion and adjacent motifs belonging to the GPCR were separated from all non-native insertions (i.e. non-GPCR proteins, water, other small molecules) by using the information about the preferred chain retrieved from the GPCRdb and the DBREF tag in the PDB file. In the case of dimeric GPCR structures, only one of the monomeric subunits was considered for docking. Residues listed in the SEQADV section as expression tags and insertions were not considered. For residues that were resolved in multiple conformations, only the first conformation was extracted. Structures that contained a faulty or non-uniform DBREF or SEQADV section were manually corrected before extraction. After visually inspecting the extracted portions, the structures were prepared by using the Molecular Operating Environment (MOE, version 2020.09) software39. Here, incomplete residues were built utilising the “Structure Preparation” function. Termini and chain breaks that contained only one atom were removed. The built-in method “Protonate3D” was used to assign protonation states to histidine and cysteine residues. For consistency, all other residues were assigned their most frequent protonation state under physiological conditions.
Preparations were continued using CHARMM together with the CHARMM36 protein force field40. Termini and breaks were capped by adding ACE and NME caps to the N- and C-terminal ends, respectively. Hydrogen atoms were placed with the HBUILD command. In order to remove overly close van der Waals contacts, an energy minimisation was carried out for each structure with a short 20-step steepest-descent optimisation followed by an adopted-basis Newton-Raphson optimisation until convergence. In order to keep as much original structural information as possible, only the side chains of formerly incomplete residues and the backbone and caps of terminal residues were allowed to move. Hydrogen atoms were rebuilt using HBUILD again after all previous operations. Then, structures were aligned by using the “cealign” algorithm as implemented in PyMOL. Finally, structures were converted to MOL2 file format using UCSF Chimera41. The correct CHARMM atom types and charges were reassigned based on the information from the CHARMM PSF output file.

### Preparation of molecular probes for docking

In order to exhaustively scan the receptors for possible binding sites, a diverse set of small molecular probes was assembled. Diversity was achieved by including probes with different physicochemical properties such as size, charge and hydrogen bond acceptor/donor distribution. Forty probes were selected as representatives of different functional groups (Supplementary Table 1) and their protonation states were calculated under physiological conditions using the ChemAxon Software Solution (Calculator Plugins, Marvin 20.10)42. MOL2 3D-conformers were generated with OpenEye’s OMEGA2 using default settings43. Next, CGenFF4.0 parameters were generated for each probe by using the CGenFF webservice accessible via https://cgenff.umaryland.edu/44. In order to update the MOL2 files with the CGenFF parameters and prepare a SEED 4.1.2-ready library, scripts from the SEED 4.1.2 repository45 were used.
### Docking calculations with SEED

For each structure, two docking calculations were carried out using SEED 4.1.245. For the first docking calculation, only the intramembranous residues were considered and the dielectric constant of the surrounding medium was set to 3.0 in order to better reflect the lipid bilayer. The second docking calculation only considered the solvent-accessible residues and the solvent dielectric constant was set to 78.5, the value for water. The SEED search algorithm works by exhaustively matching multiple copies of each molecular probe to the polar and apolar portions of the defined region, treating the protein as rigid. The poses are then spatially clustered and evaluated with energy models that also account for receptor and fragment desolvation. The maximum number of allowed clusters per probe was set to 2000 and only the best-ranked pose per cluster was considered for the output. All other parameters and settings were used with their default values.

### Extraction of molecular features

In order to aggregate and average the information from the SEED45 docking calculations to volumes, custom software was developed and applied46. Within this tool, docking poses are searched for substructures relevant for protein:ligand interactions using RDKit 2020.09.1.047 and the Cartesian coordinates, atom types, molecule identity, and substructure are stored. The substructures are hydrogen bond donors, hydrogen bond acceptors, aromatic atoms, halogen atoms, basic substructures, acidic substructures, aliphatic rings and an everything substructure (SMARTS are listed in Supplementary Table 4). The docking poses output by SEED were used to construct three-dimensional grids of a user-specified voxel spacing sv (0.5 Å in this work), encompassing all molecules. For each substructure investigated, a separate grid was constructed.
To reduce the influence of arbitrary parameters such as the precise grid placement and voxel boundaries, each occurrence of a substructure was not only recorded in the grid voxel it was directly located in, but also—with a fractional value—in neighbouring voxels. A distance-dependent dampening factor ensured that the majority of the change introduced in the grid is still recorded close to the grid voxel the substructure was primarily located in. In practice, each recording operation will affect four different types of grid voxels: The centre voxel (in which the substructure is located); six directly adjacent voxels sharing a surface with the centre voxel, at a distance d of sv; twelve voxels at $$d=\sqrt{2}{s}_{v}$$; and eight voxels at $$d=\sqrt{3}{s}_{v}$$. In each grid voxel of a type, an equal change v is introduced, which is multiplied by a dampening factor t and a distance penalty of 1/d — except for the centre voxel, where no dampening factor is applied. The change v is chosen such that the overall change introduced in the grid is equal to 1, i.e. $$\mathop{\sum}\nolimits_{{{{{{{{\rm{voxels}}}}}}}}}v\cdot t\cdot 1/d=1$$. In the present work, the variables used led to 83.34% of each change being applied to the neighbouring voxels and 16.66% to the centre voxel. Using this data, it is possible to average and visualise the areas in which each feature is frequently represented for any number of docking calculations. The grids can either be exported as a PDB file containing dummy atoms at the voxel centres that correspond to a user-given percentage of the sum of each grid or by exporting a grid file using GridDataFormats (https://griddataformats.readthedocs.io/en/latest/gridData/formats.html) which can be opened in commonly used molecular visualisation tools. It is also possible to calculate and save the grids of a single docking calculation and then combine multiple docking calculation outputs. 
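The weighting scheme described above can be sketched in a few lines of Python. The dampening factor `t` is not stated explicitly in the text, so the value below is purely illustrative; the shares are normalised so that the total change introduced in the grid equals 1:

```python
import numpy as np

def voxel_weights(sv=0.5, t=0.13):
    """Distribute a unit change over a voxel and its 26 neighbours.

    The 6/12/8 neighbour voxels at distances sv, sqrt(2)*sv and
    sqrt(3)*sv each receive a share proportional to t/d; the centre
    voxel's share carries no dampening factor. The value of t here is
    a hypothetical placeholder, not the one used in the paper.
    """
    shares = {"centre": 1.0}
    for count, d in ((6, sv), (12, np.sqrt(2) * sv), (8, np.sqrt(3) * sv)):
        shares[count] = count * t / d   # total share of this voxel type
    total = sum(shares.values())
    return {k: v / total for k, v in shares.items()}
```

By construction the shares sum to 1; tuning `t` shifts weight between the centre voxel and its neighbours.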
The potential problem of grids that are not aligned is solved by constructing a master grid that encompasses all individual grids. The values of grid voxels in the single grids are then added to the master grid using the volume overlap of each grid voxel with the eight respective grid voxels of the master grid. In this way, we were able to calculate average grids and volumes across arbitrary combinations of structures, e.g. for each GPCR class.

### Definition of allosteric pockets

In the following, our approach of obtaining a generalised, receptorome-wide definition for each site discussed in this work is described. First, reference structures were selected for each class (A: 1F88 [https://doi.org/10.2210/pdb1F88/pdb], B1: 5EE7 [https://doi.org/10.2210/pdb5EE7/pdb], C: 7CA3 [https://doi.org/10.2210/pdb7CA3/pdb], F: 4JKV [https://doi.org/10.2210/pdb4JKV/pdb]). Since class B2 and D1 structures were heavily underrepresented in our data set, they were not considered for this generalised definition. In addition to our density maps, all structures that have an allosteric ligand were visualised. Then, all structures and maps were aligned to rhodopsin (PDB: 1F88 [https://doi.org/10.2210/pdb1F88/pdb]). The region around each density and allosteric ligand was examined and matched with the residues of the reference structures. For better comparability and in order to obtain a receptor-wide definition, the site definitions for each class were converted to Ballesteros-Weinstein numbers35 using the GPCRdb residue tables. For each site, only the residues that occurred for at least two classes were used for the final definition.

### Sequence analysis

The amino acid sequence of known and orphan allosteric pockets described here was analysed across the receptorome in order to determine the degree of conservation of these pockets in the GPCR spectrum. Only sequences of receptors that are structurally resolved were taken into account.
For each of the sites discussed in this work, the amino acid sequence was extracted for each receptor by using the site definitions described above and the GPCRdb residue tables. For each pair of receptors, the sequence identity and sequence similarity were calculated. In order to determine the sequence similarity, the following classifications were used: polar, apolar, positively or negatively charged, aromatic. Furthermore, the overall site polarity was calculated by averaging the ratio of polar and apolar amino acids of each pocket across all receptors analysed.

### Occupancy of known allosteric pockets

In order to verify whether the known allosteric sites identified here are also occupied by other types of compounds that could influence our results, e.g. crystallisation additives, an alignment of all 557 investigated GPCR structures was performed, using rhodopsin as the main template. Every binding site occupied by a known allosteric ligand was visually inspected. The occurrence of the different components in the selected known pockets was then collected, grouped, and analysed. A text-based analysis of the crystallisation conditions stated in all PDB files was also performed to retrieve the background distribution and use it as a reference.

### Materials for the M3R

DMEM, penicillin/streptomycin, FCS, L-glutamine, PBS and trypsin-EDTA were purchased from Capricorn Scientific GmbH, Ebsdorfergrund, Germany. Poly-L-Lysine hydrobromide, PEI and acetylcholine iodide were acquired from Sigma-Aldrich, Merck KGaA, Darmstadt, Germany. Arecoline hydrobromide was purchased from TCI Chemicals, Eschborn, Germany. Coelenterazine h was obtained from NanoLight Technologies, Pinetop, USA.

### Plasmids for the M3R

cDNAs encoding Gαq-YFP48, Gβ149, GRK250, β-arrestin2-mTurq51, and M3-mCit52 were described previously. The human M3R was obtained from the Missouri S&T cDNA Resource Center. The DNA for pNluc-Gγ2 was a kind gift from Dr. N. Lambert (Augusta University, Georgia, USA).
The cDNA encoding mCit-β-arrestin2-mTurq was analogously cloned as described for similar reference constructs in Dorsch et al.53 The M3R mutants were generated from these plasmids by mutagenesis using the primers listed in Supplementary Table 5. The M3Rs containing four mutations were cloned analogously in two steps. The M3R-mCit mutants were generated in the same way.

### Cell culture and transfection, M3R

All experiments were performed in HEK293T cells. Cells were cultured at 37 °C and 5% CO2 in Dulbecco’s Modified Eagle’s Medium (4.5 g/L glucose), supplemented with 100 units/mL penicillin, 0.1 mg/mL streptomycin, 2 mM L-glutamine and 10% FCS. The cells were transiently transfected in a 6 cm dish using linear polyethylenimine (PEI) 25 kDa as the transfecting agent. For the Gαq activation experiments, HEK293T cells were transfected with the following quantities of plasmids encoding for the respective proteins: 1.5 μg M3R wt/M3R mutants, 2.4 μg Gαq-YFP, 0.75 μg Gβ1, 0.75 μg GRK2, and 0.3 μg pNluc-Gγ2. For β-arrestin2 recruitment, the cells were transfected with the following quantities of plasmids encoding for the respective proteins: 1.5 μg M3R-mCit/M3R-mCit mutants, 1.5 μg β-arrestin2-mTurq, and 0.75 μg GRK2. For the quantification of relative expression levels, 1.5 μg of plasmid encoding for mCit-β2AR-mTurq were transfected. The ratio of DNA to PEI was 1 to 3. For 1 μg DNA, 50 μL DMEM w/o FCS were added to the DNA and PEI solutions. Both solutions were mixed, incubated at 20 °C for 30 min protected from light, and afterwards added to the HEK293T cells in a 6 cm dish. For the BRET-based Gαq activation, cells were counted after 24 h and 16000 cells/well were seeded into a poly-L-lysine coated 96-well plate (Greiner 96 Flat White). For the FRET-based β-arrestin2 recruitment, the cells were plated into six-well plates with poly-L-lysine coated 25 mm coverslips 24 h after transfection.
All experiments were performed 48 h after transfection at room temperature. ### BRET-based measurements of the M3R Transiently transfected adherent HEK293T cells were measured in a 96-well plate with a Spark 20M Multimode Microplate Reader (Tecan), using the luciferase reporter Nluc54. Gαq activation was assessed with Gαq-YFP/Gβ1/pNluc-Gγ2 biosensors in the presence of M3R wt/M3R mutants55. Fluorescence and luminescence intensities were acquired using the Spark-Control application and the BRET emission ratio was calculated as the YFP signal (light emission between 520 nm and 700 nm) divided by the Nluc signal (light emission between 415 nm and 485 nm). In general, sixteen wells were measured in one round. Cells were washed once with extracellular buffer (137 mM NaCl, 5.4 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 10 mM HEPES, pH 7.3) and 80 μL of a 3.07 μM solution of Coelenterazine h in buffer were added to every well. After 10 min of incubation, 10 cycles of baseline measurement were performed with a duration of about 6.5 min altogether. The measurement was paused shortly, 20 μL buffer or agonist in buffer were added and 10 cycles of agonist measurement were performed. Afterwards 20 μL agonist in a saturating concentration was added and 10 cycles of BRET measurement were performed once again. The agonist-induced change in BRET emission ratio was calculated as the difference in average values of the third cycles before and after adding the agonist. The additional change in BRET emission ratio induced by a saturating concentration of acetylcholine was calculated as the difference in average values of the third cycles before and after adding the saturating concentration of acetylcholine. The maximum change in BRET emission ratio was calculated as the sum of the agonist-induced change and the additional change induced by the saturating concentration of acetylcholine. The agonist-induced change was normalised to the maximum change in BRET emission ratio for every well. 
Concentration response curves were fitted by GraphPad Prism 8.3 with variable slopes. The bottom was constrained to 0 and the top to 1 for the concentration response curves of acetylcholine. In order to calculate the cut-off area shown in Fig. 3, all single concentration response curves of M3R wt were plotted individually and the minimal and maximal EC50 values of wt measurements were identified.

### Single-cell FRET imaging of the M3R

A FRET-based assay was used to measure the agonist-induced interaction between β-arrestin2-mTurq and M3R-mCit wt/mutant sensors56. The measurements were performed as previously described by Milde et al.,57 except where declared otherwise, using an inverted fluorescence microscope (Eclipse Ti, Nikon, Germany). The cells were excited with an LED excitation system (pE-2; CoolLED, UK) at 425 nm and 500 nm. The intensity of both LEDs was set to 2%. The fluorescence intensity was measured using the software NIS-Elements advanced research (Nikon Corporation) and the image recording frequency was set to 2 Hz. The FRET emission ratio was calculated as mCitrine intensity divided by mTurquoise intensity upon excitation of mTurquoise at 425 nm and plotted over time. All fluorescence data were corrected for background fluorescence, bleed-through and false mCitrine excitation using Excel 2019. The measurements were additionally baseline-corrected for photobleaching, using OriginPro 2018 (Originlab, USA). The cells were constantly superfused with either extracellular buffer (described in BRET-based Measurement) or acetylcholine. Every cell was stimulated for 30 s with each concentration of acetylcholine. The concentration-dependent change in FRET emission ratio was calculated as the average value of the last 5 s of stimulation with each concentration of acetylcholine. Concentration response curves were fitted by GraphPad Prism 8.3 with variable slopes.
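The curve fitting itself was done in GraphPad Prism; an equivalent variable-slope fit can be sketched in Python. The synthetic data and the crude grid-search fitter below are illustrative stand-ins, not the actual analysis:

```python
import numpy as np

def hill(conc, ec50, nh, bottom=0.0, top=1.0):
    # Four-parameter logistic ("variable slope") model; for the
    # acetylcholine curves the bottom was constrained to 0, the top to 1.
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** nh)

# Hypothetical concentration-response data (concentrations in mol/L)
conc = np.logspace(-9, -3, 13)
resp = hill(conc, ec50=1e-6, nh=1.0)

# Crude grid search standing in for Prism's nonlinear regression
candidates = ((e, n) for e in np.logspace(-8, -4, 81)
                     for n in np.linspace(0.5, 2.0, 31))
ec50_fit, nh_fit = min(
    candidates, key=lambda p: np.sum((hill(conc, *p) - resp) ** 2))
```

A proper analysis would use a nonlinear least-squares solver instead of a grid, but the model and the constraints are the same.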
### Quantification and correction of relative expression levels for the experiments with M3R

The relative expression level of M3R-mCit and β-arrestin2-mTurquoise was corrected, using the construct mCit-β-arrestin2-mTurq for calibration of the stoichiometry. mTurquoise was excited at 425 nm whereas mCitrine was excited at 500 nm. The fluorescence intensity was measured and corrected for background fluorescence. The calibration factor was calculated as FmCitrine/FmTurquoise. For each single-cell measurement, the factor was calculated in the same way. Because the relative expression level of M3R-mCit and β-arrestin2-mTurquoise influences the extent of the FRET signal, it was corrected to an equal stoichiometry. Therefore, the factor of every single-cell FRET-measurement was divided by the calibration factor (Supplementary Fig. 16) and every measurement was multiplied by its individual reciprocal (Supplementary Fig. 12).

### Plasmids and mutagenesis for the β2AR

Human β2AR (ADRB2 except for R16 and Q27) and all biosensor constructs were assembled in pcDNA3.1. The β2AR was codon-optimised and a sequence encoding a SNAP tag and an N-terminal signal sequence were cloned in at the N-terminus. The biosensor plasmids are based on genes encoding Renilla luciferase (RlucII) and a GFP, either GFP10 or Renilla GFP (rGFP), with the RlucII on the Gαs or β-arrestin2, respectively, and the GFP on Gγ1 and a membrane anchor (CAAX), respectively58,59,60. Wild-type Gβ1 was used for the G-protein activation assay. Single-point mutants of the β2AR were generated as described in an earlier work61, using the primers listed in Supplementary Table 5. Non-alanine amino acids were mutated to alanine; native alanine residues were mutated to glycine. Primers were designed using custom software62 (available at: https://github.com/dmitryveprintsev/AAScan).
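The correction amounts to rescaling each cell's measured donor/acceptor ratio by the calibrator's ratio. All numbers in this sketch are hypothetical:

```python
import numpy as np

calib = 1.8                            # F_mCitrine / F_mTurquoise of the
                                       # calibration construct (hypothetical)
f_cit, f_turq = 2.4, 1.0               # one cell's intensities (hypothetical)
factor = (f_cit / f_turq) / calib      # expression ratio relative to 1:1
trace = np.array([1.00, 1.05, 1.20])   # FRET emission ratios over time
corrected = trace * (1.0 / factor)     # multiply by the reciprocal
```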
### BRET-based Signalling Assays of the β2AR

All assays used human embryonic kidney (HEK)-293 SL cells (a gift from Stephane Laporte). Cells were grown at 37 °C with 5% CO2 in DMEM with 4.5 g/L glucose, L-glutamine, and 10% newborn calf serum (NCS, Wisent BioProducts, Canada) and penicillin-streptomycin (PS, Wisent BioProducts, Canada). Two days prior to measurements, cells were transfected using polyethyleneimine (PEI, Polysciences Inc., Canada, No. 23966), with a ratio between PEI and DNA of 3:1. Afterwards, 20000 cells per well were seeded into white Cellstar PS 96-well cell culture plates (Greiner Bio-One, Germany). On the day of the measurement, medium was removed and Tyrode’s buffer (137 mM NaCl, 0.9 mM KCl, 1 mM MgCl2, 11.9 mM NaHCO3, 3.6 mM NaH2PO4, 25 mM HEPES, 5.5 mM glucose, 1 mM CaCl2, pH 7.4) was added, followed by incubation at 37 °C for at least 30 min. Ten minutes before measurement, adrenaline was added, with concentrations ranging from 31.6 nM to 3.16 mM in half-log steps, as well as a buffer control. Five minutes prior to measurement, coelenterazine 400a (DeepBlueC, Nanolight Technology) was added to a final concentration of 5 μM. Coelenterazine 400a was initially dissolved in DMSO and diluted into Tyrode’s buffer with 1% Pluronic F-127 for increased solubility. BRET was measured in a Synergy Neo microplate reader (Biotek) using detection at 410 nm and 515 nm. All experiments were performed at least in biological triplicate. Cut-off areas shown in Fig. 4 were calculated in a manner similar to that for the M3R. Data analysis was done with RStudio 2021.09.2+382 (utilising R 4.1.2 and packages tidyverse 1.3.1 and drc 3.0-1).

### Calculation of residue contact maps

For all class A and class B1 structures, residue contact maps were calculated only considering the transmembrane portions.
A contact was defined to occur when the distance between any two atoms of two distinct residues was smaller than the sum of their van der Waals radii plus a buffer distance of 0.5 Å. In order to prevent sampling of local contacts, which might introduce noise at later stages of the analysis, contacts between residues less than four positions apart in sequence and where one of the atoms involved was in the backbone were not considered.

### Creation of contact fingerprints

In order to describe the residue contact distribution of each GPCR in a simplistic manner, a class-specific contact fingerprint was calculated for every structure. First, for class A and B1 GPCRs, the set of all residue-residue contacts that occurred in at least one of their members was compiled. Only residues that were found in all analysed structures were considered. Further, this set of contacts was treated as a fingerprint in which the individual bits were set to either 1 or 0 depending on whether or not a particular contact occurred in a structure. Finally, for each of our 557 structures, the appropriate class-specific fingerprint was calculated, the aggregate of which was then used to determine activation networks.

### Principal component analysis of fingerprints

A class-specific principal component analysis was carried out based on the contact fingerprints using the Python package scikit-learn (version 0.23.2)63. Here, each structure can be seen as a sample while each contact can be seen as a variable. The contribution of each of the first 10 principal components to the overall variance of the data was plotted and evaluated. Then, PCA plots were created for the principal components that explained most of the variance. The data points, each of them representing one PDB structure, were coloured according to their activation state as ascribed in the GPCRdb.
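The paper performs this step with scikit-learn; the sketch below computes the same quantities with NumPy's SVD, using random binary fingerprints as a stand-in for the real 557-structure contact data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Binary contact fingerprints: one row per structure, one column per contact
X = rng.integers(0, 2, size=(100, 500)).astype(float)

# PCA via SVD of the mean-centred fingerprint matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                        # PC scores: one point per structure
explained = S**2 / np.sum(S**2)       # fraction of variance per component
loadings = Vt                         # coefficients: per-contact weights
```

Colouring `scores[:, :2]` by activation state reproduces the kind of plot described in the text; the sign and magnitude of the rows of `loadings` are what the contact analysis later interprets.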
Principal components that showed a clear separation between the active and the inactive state were used in order to classify contacts important for the respective state. For each contact, the sign and the absolute value of the pertaining principal component coefficient gave the necessary information about the state and importance, respectively. The procedures described above and the PCA were carried out again on those structures to which a clear inactive or active state could be assigned in the first PCA. This ensured that no intermediate structures were considered for the following analysis. For the class A PCA, we decided on structures with PC1 values larger than 7 and PC2 values larger than 7.5 for the inactive and active state, respectively. For class B1, structures with PC1 values larger than 3 were not considered for the re-calculation. Since the re-calculated class B1 PCA still showed two outliers, we eliminated them (PDB: 6NIY [https://doi.org/10.2210/pdb6NIY/pdb] and 6P9X [https://doi.org/10.2210/pdb6P9X/pdb]) from the PCA before continuing with the network analysis.

### Contact analysis

For both classes, the PCA coefficients were used in order to estimate the importance of a contact for a certain receptor state. Here, we focused on those contacts that were formed between residues in KS2, KS5, KS8, OS5, OS6 and OS9. Contacts between two residues that belong to the same helix were not considered. By investigating the re-calculated PCA plots (Fig. 2 and Supplementary Fig. 2), each contact was considered either as an active or inactive contact depending on the sign of its corresponding PCA coefficient. The PCA coefficients were normalised to their highest absolute value such that they ranged from 0 (not important) to ±1 (important). For each pocket, the residue contacts together with their normalised PCA coefficient were plotted.
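The normalisation step is a one-liner; with hypothetical coefficients:

```python
import numpy as np

coeffs = np.array([0.04, -0.10, 0.02, -0.05])  # hypothetical PC coefficients
norm = coeffs / np.max(np.abs(coeffs))         # now in [-1, 1]; the sign
                                               # keeps the active/inactive label
```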
### Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Coq code for this post available here. Note that the proof scripts are meant to be stepped through interactively in Coq.

After having gone through most of the first 3 books in Software Foundations, and proving various theorems in classical real analysis in Coq, I decided to formalize some basic stuff from Algebra: Chapter 0, in particular the following statement.

Proposition 2.1. Assume $$A$$ is non-empty, let $$f : A \to B$$ be a function. $$f$$ has a left-inverse if and only if it is injective.

This is a pretty trivial statement with a pretty trivial proof on paper (try it!), so I expected the Coq proof to be quite easy as well. What I didn’t know was that it would take me through a short journey through mathematical foundations, computability and some philosophy of mathematics as well. Why was it harder than expected, and what does it say about mathematics as a field? I will replicate the dead-ends I ran into here, to illustrate various points.

## A reasonable start

On the first attempt, I translated the statements as directly as possible into Coq. The forward direction is easy.

```coq
Definition injective {A B} (f : A -> B) :=
  forall a' a'', a' <> a'' -> f a' <> f a''.

Definition left_inverse {A B} (f : A -> B) g := forall a, g (f a) = a.
Definition right_inverse {A B} (f : A -> B) g := forall b, f (g b) = b.

Lemma P_2_1 {A B} (f : A -> B) (s : A) :
  (exists g, left_inverse f g) <-> injective f.
Proof.
  unfold left_inverse, injective.
  split; intros H.
  - intros a' a'' eq. destruct H as [g Hg]. congruence.
  -
```

Now we prove the reverse direction, that if a function is injective, then it has a left-inverse. We have the following proof state:

```
A : Type
B : Type
f : A -> B
s : A
H : forall a' a'' : A, a' <> a'' -> f a' <> f a''
============================
exists g : B -> A, forall a : A, g (f a) = a
```

The goal is `exists g : B -> A, ...`. In Coq we prove such statements by providing the function $$g$$ then showing indeed it is a left-inverse.
The paper proof constructed such a $$g$$ as well, so just translate! Here is the paper proof:

$$(\Longleftarrow)$$ Now assume $$f : A\to B$$ is injective. In order to construct a function $$g : B\to A$$ we have to assign a unique value $$g(b)\in A$$ for each element $$b\in B$$. For this, choose any fixed element $$s\in A$$ (which we can do because $$A\neq\emptyset$$); then set

$$g(b) = \begin{cases} a & \text{if } b = f(a) \text{ for some } a \in A,\\ s & \text{if } b \notin \operatorname{im} f. \end{cases}$$

## Solving the halting problem

We have a function $$g$$ that, given a $$b$$, returns an $$a$$ if such an $$a$$ satisfying $$b = f(a)$$ exists, or returns the fixed element $$s$$ of $$A$$. Except, the devil is in the details, or in this case, in the word if. We have to make a decision depending on whether something is in the image or not. Actually, if we can do this for arbitrary sets $$A$$ and $$B$$, we can solve the halting problem. Here’s a short proof:

Assume we can always tell if $$b : B$$ is in the image of $$f$$ or not. Let $$M$$ be a Turing machine. Define $$f : \mathbb{N} \to \mathbb{N}$$ where $$f(n) = 0$$ if $$M$$ halts in $$n$$ steps, otherwise return $$n+1$$. This is obviously injective. So, what would $$g(0)$$ give? It would return the number of steps that it takes for $$M$$ to halt, or the fixed element if $$M$$ diverges. But $$M$$ is an arbitrary Turing machine, so that means we can solve the halting problem. (In fact you can also prove LEM from the theorem, see Appendix B.)

In type theory, all the functions we ever write are computable by construction, so it actually turns out to be impossible to prove this lemma as stated. Thus, we need to strengthen our hypothesis, by assuming that the condition is decidable.
In the Coq standard library, there is the following datatype:

```coq
(** [sumor] is an option type equipped with the justification of why
    it may not be a regular value *)

Inductive sumor (A:Type) (B:Prop) : Type :=
  | inleft : A -> A + {B}
  | inright : B -> A + {B}
 where "A + { B }" := (sumor A B) : type_scope.
```

Essentially this is an option type that gives the value of type $$A$$ or a proof why such a value cannot be produced. This is important because we want to use the left of the sum as the return value for the left-inverse of $$f$$, and the right of the sum as a single bit to decide to return the default value $$s$$.

```coq
(* Property of decidability of being in the image *)
Definition im_dec {A} {B} (f : A -> B) :=
  forall b, { a | f a = b } + ~ (exists a, f a = b).

(* If being in the image of an injective function f is decidable,
   it has a left inverse. *)
Lemma injective_left_inverse {A B} (s : A) (f : A -> B) :
  im_dec f -> injective f -> { g | left_inverse f g }.
Proof.
  unfold im_dec, injective; intros dec inj.
  exists (fun b => match dec b with
                   | inl (exist _ a _) => a
                   | inr _ => s
                   end).
  intros a'; destruct (dec (f a')) as [[a Ha] | contra].
```

With this assumption, we can continue the proof, but then we are stuck at the following proof state.

```
A : Type
B : Type
s : A
f : A -> B
dec : forall b : B, {a : A | f a = b} + (~ (exists a : A, f a = b))
inj : forall a' a'' : A, a' <> a'' -> f a' <> f a''
a', a : A
Ha : f a = f a'
============================
a = a'
```

We need to prove $$a = a'$$, which follows from $$f$$ being injective; however, note that the hypothesis inj states the contrapositive claim. In an undergrad discrete math class, one quickly learns about the contrapositive law in classical logic, $$(P \to Q) \leftrightarrow (\neg Q \to \neg P)$$. It turns out that in type theory (more generally, in intuitionistic logic), the forward implication is provable but the backward implication requires double negation elimination, which is equivalent to LEM.
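The intuitionistically valid direction is a two-line exercise; this sketch is my own addition, not from the original development:

```coq
(* The contrapositive direction that holds without LEM. *)
Lemma contrapositive (P Q : Prop) : (P -> Q) -> (~ Q -> ~ P).
Proof.
  intros pq nq p. exact (nq (pq p)).
Qed.
```

The reverse direction, $$(\neg Q \to \neg P) \to (P \to Q)$$, is exactly where double negation elimination would be needed.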
As a result, we can prove a slightly more general theorem if we use the following definition of injective. The proof that our definition of injectivity implies the one used in the book is trivial.

```coq
(* New definition of injective *)
Definition injective {A B} (f : A -> B) :=
  forall a' a'', f a' = f a'' -> a' = a''.

(* Book's definition *)
Definition injective' {A B} (f : A -> B) :=
  forall a' a'', a' <> a'' -> f a' <> f a''.

(* injective implies injective' *)
Theorem injective_injective' : forall {A B} (f : A -> B),
  injective f -> injective' f.
Proof. cbv; auto. Qed.
```

With all this done, we can finally prove the backwards direction. One last twist is that we’ll use a sigma type and make the proof transparent using Defined. so that we can extract the computational content later.

```coq
Lemma injective_left_inverse {A B} (s : A) (f : A -> B) :
  im_dec f -> injective f -> { g | left_inverse f g }.
Proof.
  unfold injective, left_inverse, im_dec.
  intros dec inj.
  (* It's decidable to check if b is in the image or not *)
  exists (fun b => match dec b with
                   | inl (exist _ a _) => a
                   | inr _ => s
                   end).
  intros a'.
  destruct (dec (f a')) as [[a Ha] | contra].
  - apply inj; auto.
  - exfalso. apply contra. exists a'; auto.
Defined.
```

## Recap and final thoughts

So, what did we learn from this not-so-trivial proof of a trivial theorem?

- when we used the truth-value of properties of values over arbitrary types (in this case, checking if an element of $$B$$ is in the image of a function), we might want this property to be decidable
- sometimes we can restate things in a more general way that makes it easier to prove in type theory, while still being classically equivalent

This experience left me feeling a bit philosophical. Note some of these points are subjective, and I speak from my perspective as an undergraduate in pure math and CS.

We lost a bit of symmetry. What used to be a simple iff is now two separate lemmas, where the backward direction takes a proof that finding a preimage is decidable.
Did that make matters worse? I don't think so. I think the central question is: what do we want from this theorem? If we can obtain left-inverses of any injective function, presumably we would want to compute with the left-inverse! If we used the law of the excluded middle anywhere, we would lose computability. But by paying careful attention and making minor adjustments to the theorem, we have preserved computability, and in fact we can use this to find left-inverses of injective functions (see Appendix A).

What does this say about using constructive type theory as a foundation for mathematics at large? This is a difficult question that far more qualified researchers such as Andrej Bauer can answer better than I can (I highly recommend his talk on constructive mathematics). My naïve view is that once a foundation such as set theory is fixed, it is inevitable that "implementation details" will be used to the fullest extent. You can "just" check if something is in the image of an arbitrary function and voilà, you have your well-defined function! You can "just" construct the set of all Turing machines that halt. Not a word is said about whether any of this is computable. I ran into an analogous situation when trying to formalize concepts from differential topology: the problem was right at my feet, as I couldn't prove that a function restricted to its whole domain is equal to the original function, something taken for granted in the classical world. Or how about quotients? They are ubiquitous in mathematics; one can take quotients of groups, rings, topological spaces and more. See this passionate discussion regarding setoid hell.

On the other hand, type theory feels like the right logical system to work in for CS. Most proofs in CS are constructive anyway, making them very easy to translate into Coq. You also get nicer properties: all functions are computable, all functions are Scott-continuous, or even topologically continuous.
You can also extract computational content from proofs. See Appendix A for how the left-inverse of the successor function can be obtained, something you couldn't easily do in a set-theoretic setting.

Do we have to give up LEM to have all these things? Not necessarily. To quote Bauer from his talk,

> Constructive mathematics keeps silent about the law of the excluded middle. We do not accept it, we do not deny it, we just don't use it. Particular cases of law of excluded middle might be OK, but you have to establish them first.

While $$P \vee \neg P$$ is not provable for an arbitrary proposition $$P$$, if you can show it for some particular $$P$$ (or a family of such $$P$$'s), you regain classical reasoning. For further excellent discussion on this topic, I recommend this Zulip discussion regarding LEM and decidability.

## Appendix A: Extracting the computational content of the proof

Using the proofs-as-programs principle, we can in fact obtain left-inverses of functions, provided that being in the image is decidable and that the function is injective.

```coq
Definition eq_dec A := forall (a1 a2 : A), a1 = a2 \/ a1 <> a2.

Lemma nat_eq_dec : eq_dec nat.
Proof.
  unfold eq_dec.
  induction a1; destruct a2; auto.
  destruct (IHa1 a2); auto using f_equal.
Qed.

Definition succ (n : nat) := S n.

Definition pred' : nat -> nat.
Proof.
  refine (fun n => _ (injective_left_inverse 0 succ _ _)).
  - intros H. destruct H as [g Hg]. exact (g n).
  - unfold im_dec. induction b.
    + right. intros H. destruct H; discriminate.
    + left. refine (exist _ b _). reflexivity.
  - unfold injective. intros a' a'' H. inversion H; auto.
Defined.

Eval compute in (pred' 1000).
(* => 999 *)
```

Exercise (4 stars): define `double n = n + n` and derive its left-inverse `halve` in a similar manner. You'll need to prove that being in the image of `double` is decidable (hint: parity) and that it's injective, along with some additional lemmas as you see fit. You might want to use the following induction principle to help.
Don't forget to make some proofs transparent so that `Eval compute in (halve 1000).` reduces to `500`! Send me an email if you solve it!

```coq
(* Idea: if a property P holds on 0 and 1, and on n+2 whenever it holds
   on n, then it holds for all n. *)
Definition nat_ind2 :
  forall (P : nat -> Type),
    P 0 ->
    P 1 ->
    (forall n : nat, P n -> P (S (S n))) ->
    forall n : nat, P n :=
  fun P P0 P1 PSS =>
    fix f (n : nat) :=
      match n with
      | 0 => P0
      | 1 => P1
      | S (S n') => PSS n' (f n')
      end.
```

## Appendix B: Proving LEM

Here's something wild: we can prove LEM from the backward direction of the original theorem (assuming proof irrelevance)!

```coq
Require Import ProofIrrelevance.

Lemma inj_left_inverse_implies_lem :
  (forall {A B} (f : A -> B), A -> injective f -> exists g, left_inverse f g) ->
  (forall (P : Prop), P \/ ~ P).
Proof.
  unfold left_inverse.
  intros H P.
  set (f := fun a : P + True =>
              match a with
              | inl _ => inl I
              | inr _ => inr I
              end).
  pose proof (H _ _ f (inr I)).
  assert (Hf : injective f).
  { unfold injective; intros.
    destruct a', a''; try discriminate.
    - f_equal. apply proof_irrelevance.
    - destruct t, t0. reflexivity. }
  specialize (H0 Hf).
  destruct H0 as [g Hg].
  destruct (g (inl I)) eqn:E; auto.
  right. intros a. destruct t.
  replace (inl I) with (f (inl a)) in E by auto.
  rewrite Hg in E. inversion E.
Qed.
```
## Introduction

Over the past few years, metal halide perovskites (MHPs) have attracted immense attention due to their extraordinary performance as the light absorbing layer in solar cells1,2. This exciting family of semiconductors exhibits large carrier mobilities3,4, long charge carrier lifetimes5,6, and a linear absorption coefficient over 10^5 cm−1 above the bandgap1. Recently, appreciable nonlinear absorption coefficients (and refractive indices) have been reported, rendering these materials of interest for nonlinear photonics7, including two-photon-pumped lasers8,9 and saturable absorption-based ultrafast pulsed lasers10. As a typical nonlinear process, two-photon absorption (2PA) features deep penetration depths and a quadratic dependence on the intensity, providing opportunities for bio-imaging11,12, photodynamic therapy13, three-dimensional optical data storage14, microfabrication, testing the influence of environmental gases15, and up-conversion lasing and amplification16. In addition, quantifying the wavelength dependence of the 2PA process is of fundamental interest, since 2PA can yield detailed information on the energy-band structure of crystalline solids, which may not be accessible by ordinary optical absorption spectroscopy. Transposing the initial concept proposed by M. Göppert17 to a semiconductor, the simultaneous absorption of two photons can lead to an excitation of an electron from the valence band (VB) to the conduction band (CB) via a virtual state. The generation rate of charge carriers, n0, for the 2PA process is given by

$$\frac{\mathrm{d}n_0}{\mathrm{d}t} = \frac{\beta I^2}{2\hbar\omega}$$ (1)

where β (cm W−1) is the 2PA coefficient, I (W cm−2) is the light intensity entering the sample and $$\hbar\omega$$ (J) is the incident photon energy. Since the absorption coefficient of 2PA is typically low, high intensities are required.
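For a square pulse of duration Δt, Eq. (1) integrates to n0 = βI²Δt/(2ħω). A minimal sketch of this relation and its quadratic intensity dependence is given below; all numeric inputs are hypothetical placeholders, not values from this work.

```python
# Carrier density generated by 2PA for a square pulse, integrating Eq. (1):
#     n0 = beta * I**2 * dt / (2 * hbar*omega)
# All numbers below are illustrative, not measured values.

E_CHARGE = 1.602176634e-19  # J per eV (elementary charge)

def n0_two_photon(beta_cm_per_W, intensity_W_cm2, pulse_s, photon_eV):
    """Carrier density (cm^-3) generated by 2PA during a square pulse."""
    photon_J = photon_eV * E_CHARGE
    return beta_cm_per_W * intensity_W_cm2 ** 2 * pulse_s / (2 * photon_J)

# Example: beta = 1 cm/MW, I = 1e8 W/cm^2, 3 ns pulse, 1.3 eV photons.
n0 = n0_two_photon(1e-6, 1e8, 3e-9, 1.3)
# Doubling the intensity quadruples n0, the signature of a two-photon process.
```

The quadratic scaling is exactly what distinguishes 2PA from the first-order (SLA) signal discussed in the Results section.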
Despite the importance of revealing the 2PA spectrum18, only a few studies have been devoted to the 2PA properties of MHPs, performed by the Z-scan or optimized Z-scan technique and yielding β values differing by several orders of magnitude between different MHPs or even between the same perovskites19,20,21. Furthermore, all Z-scan based measurements were performed at a single fixed wavelength. Wavelength-dependent characterization was achieved by photoluminescence excitation spectroscopy, however, only up to $$\hbar\omega/E_{\mathrm{g}}$$ = 0.524 (ref. 22). Three-photon absorption was also evidenced below the threshold for 2PA in CsPbBr3 (ref. 23). In this work, we have recorded the 2PA spectrum of methylammonium lead iodide perovskite (CH3NH3PbI3) polycrystalline thin films using the time-resolved microwave conductivity (TRMC) technique, as well as of CH3NH3PbBr3 thin films and single crystals, and CsPbI3 thin films. From the photoconductance induced by a nanosecond laser pulse, the initial number of photogenerated charge carriers is obtained for wavelengths ranging between 0.49 and 1 times the bandgap energy $$(0.49E_{\mathrm{g}} < \hbar \omega < E_{\mathrm{g}})$$. It has been postulated that the Z-scan technique is prone to overestimating the value of β7, since free carrier absorption can lead to an additional reduction of the transmitted light24. Since with the TRMC technique excess charges are probed by microwaves instead of light, this problem is surmounted. A two-step upward trend in the 2PA spectrum is observed in MHPs, which is explained for CH3NH3PbI3 by a combination of multiple bandgap transitions derived from a symmetry-based empirical tight-binding model: a primary bandgap of 1.58 eV and a secondary bandgap above 2.25 eV owing to the spin-orbit coupling induced band splitting. Apart from 2PA we identify sub-bandgap linear absorption (SLA) at photon energies close to the band edge.
Furthermore, we investigated for CH3NH3PbI3 the impact of the tetragonal-to-orthorhombic phase transition on the 2PA coefficient, β, by changing the temperature.

## Results

### Sub-bandgap absorption processes

Thin films of CH3NH3PbI3 (about 200 nm thick) were spin-coated on quartz. The X-ray diffraction pattern (Supplementary Fig. 1a) of the CH3NH3PbI3 film displays strong reflections for the <110> and <220> planes, confirming the formation of highly crystalline CH3NH3PbI3. The 1PA spectrum (Supplementary Fig. 2a) shows a cut-off wavelength of 785 nm, indicating a bandgap energy of 1.58 eV. To probe 2PA in the CH3NH3PbI3 film, the sample was measured by the time-resolved microwave conductivity (TRMC) technique25,26 for a wide range of photon energies varying from 0.775 to 1.55 eV. The light intensity was attenuated by an array of neutral density filters, yielding light intensities varying from $$I_{\mathrm{N}0}$$ = 2 × 10^11 to 2 × 10^15 photons cm−2 per pulse. The TRMC technique can be used to study the dynamics of photoinduced charge carriers in low-conductivity semiconductor materials in an electrodeless way. The photoconductance, ΔG, of the samples was deduced from the laser-induced change in absorbed microwave power, ΔP, normalized by the incident power, P, according to

$$\frac{\Delta P(t)}{P} = -K\Delta G(t)$$ (2)

where K is the sensitivity factor. To compare the photoconductance traces recorded at different intensities and wavelengths, we normalized ΔG by the incident photon intensity, $$I_{\mathrm{N}0}$$, yielding $$\frac{\Delta G}{e\beta_0 I_{\mathrm{N}0}}$$. Here, e is the elementary charge and β0 is a dimensionless constant of the microwave cell. In Fig. 1a traces for different laser wavelengths and intensities are shown. For all photon energies, ΔG increases rapidly on excitation, followed by a slow decay due to recombination or immobilization of charges in trap states5.
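Eq. (2) is inverted trivially to recover the photoconductance from the measured fractional power change. A one-line sketch, with a purely illustrative sensitivity factor K (not the calibrated value of the actual cavity):

```python
# Invert Eq. (2): dP/P = -K * dG   =>   dG = -(dP/P) / K.
# K is the cavity sensitivity factor; the value used below is illustrative.

def photoconductance(dP_over_P, K):
    """Convert a fractional microwave power change into a conductance change (S)."""
    return -dP_over_P / K

# A laser-induced fractional power change of -1e-3 with K = 2e4
# corresponds to a photoconductance of roughly 5e-8 S.
dG = photoconductance(-1e-3, 2e4)
```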
Interestingly, for photon energies of 1.45 eV the traces for all intensities overlap, while for energies of 1.3 eV we observe a gradual increase in signal size with intensity. At 1.42 eV an intermediate regime is visible. Next, we plotted the intensity-normalized maximum photoconductance values $$\frac{\Delta G_{\mathrm{max}}}{e\beta_0 I_{\mathrm{N}0}}$$ versus the incident intensity $$I_{\mathrm{N}0}$$ for three different photon energies in Fig. 1b. At 1.45 eV the values of $$\frac{\Delta G_{\mathrm{max}}}{e\beta_0 I_{\mathrm{N}0}}$$ are almost constant with intensity, suggesting a first-order excitation process. This process is explained by optically induced transitions of electrons from the VB to sub-bandgap levels, or from the latter to the CB, as depicted in Fig. 1c, and is denoted sub-bandgap linear absorption (SLA). In contrast, at 1.3 eV a clear linear dependence of $$\frac{\Delta G_{\mathrm{max}}}{e\beta_0 I_{\mathrm{N}0}}$$ on $$I_{\mathrm{N}0}$$ is observed, which implies that the conductance is proportional to $$I_{\mathrm{N}0}^2$$, in agreement with the 2PA process. In Fig. 1d, the mechanism is illustrated, showing the generation of a charge carrier pair on absorbing two photons. At 1.42 eV an intermediate regime is visible: at low intensities ($$I_{\mathrm{N}0} < 2 \times 10^{14}$$ photons cm−2 per pulse), $$\frac{\Delta G_{\mathrm{max}}}{e\beta_0 I_{\mathrm{N}0}}$$ is almost constant. On increasing $$I_{\mathrm{N}0}$$ the signal gradually increases, demonstrating the transition from predominantly SLA to the 2PA process. The lowest detectable photon energy is found to be 0.8 eV, in close agreement with the energy threshold of 0.79 eV for the 2PA process.
In short, the optical absorption below the bandgap by the CH3NH3PbI3 film is explained as follows: in the far below-bandgap regime (0.8–1.4 eV), photoinduced charge carriers are predominantly generated by the 2PA process; a transition regime is found between 1.4 eV and 1.45 eV, where both SLA and 2PA contribute to the signal; in the near band-edge regime (1.45–1.55 eV) only SLA was detected, due to much higher densities of sub-bandgap levels compared to those in the far below-bandgap regime. We realize that the transition regime might shift depending on the quality of the MHP film. However, the differences in this transition regime observed for five different CH3NH3PbI3 films were less than 0.02 eV.

### Wavelength dependence of 2PA coefficient β

To extract the 2PA coefficient β, the concentration of initially photogenerated charge carriers n0 is first obtained from the maximum photoconductance $$\Delta G_{\mathrm{max}}$$ by

$$n_0 = \frac{\Delta G_{\mathrm{max}}}{e\,\Sigma\mu\,\beta_0 L}$$ (3)

where Σμ is the sum of the electron and hole mobilities, β0 is the dimensionless constant of the microwave cell, and L is the sample thickness. As justified in Supplementary Fig. 3 and Supplementary Note 2, we assume that the initial signal is not affected by recombination, from which the number of photoinduced charge carriers, n0, is deduced. Since Σμ is an intrinsic property of the sample, we have deduced this value from a TRMC measurement above the bandgap (Supplementary Fig. 4)26,27. Next, we calculated the 2PA coefficient, β, according to

$$\beta = \frac{2 n_0 \Delta t}{I_{\mathrm{N}0}^2 \hbar\omega}$$ (4)

Here, Δt is the full width at half-maximum (FWHM) of the laser pulse. To obtain an accurate laser pulse width, a sub-nanosecond photodetector was used to record the pulse duration (Supplementary Fig. 5 and Supplementary Note 3).
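The two-step extraction described by Eqs. (3) and (4) can be sketched as follows. Every numeric value here is an illustrative placeholder, not a parameter of the actual setup; with these placeholders the result happens to fall in the cm MW−1 range quoted below.

```python
# Sketch of the beta extraction (Eqs. 3 and 4): first convert the maximum
# photoconductance into an initial carrier density, then into the 2PA
# coefficient. All parameter values are illustrative placeholders.

E = 1.602176634e-19  # elementary charge, C (also J per eV)

def carrier_density(dG_max, mob_sum_cm2_Vs, beta0, thickness_cm):
    """Eq. 3: n0 = dG_max / (e * Sigma_mu * beta0 * L), in cm^-3."""
    return dG_max / (E * mob_sum_cm2_Vs * beta0 * thickness_cm)

def beta_2pa(n0_cm3, pulse_fwhm_s, I_N0_photons_cm2, photon_eV):
    """Eq. 4: beta = 2 * n0 * dt / (I_N0**2 * hbar*omega), in cm W^-1."""
    return 2 * n0_cm3 * pulse_fwhm_s / (I_N0_photons_cm2 ** 2 * photon_eV * E)

n0 = carrier_density(dG_max=1e-8, mob_sum_cm2_Vs=30, beta0=1, thickness_cm=200e-7)
beta = beta_2pa(n0, pulse_fwhm_s=3e-9, I_N0_photons_cm2=1e15, photon_eV=1.3)
# For these inputs beta is of order 3e-6 cm W^-1, i.e. a few cm MW^-1.
```

Note that because β scales as the inverse square of I_N0 at fixed n0, an accurate intensity calibration (including the reflection correction discussed next) matters twice as much, in relative terms, as the calibration of any linear factor.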
To obtain the actual intensity entering the sample, the light intensity measured by the power meter was corrected for reflection at the air/film interface (Supplementary Fig. 6 and Supplementary Note 4). Basically, the values of β (cm W−1) can be derived from the slope of $$\frac{\Delta G_{\mathrm{max}}}{e\beta_0 I_{\mathrm{N}0}}$$ versus $$I_{\mathrm{N}0}$$ in Fig. 1b. Figure 2a displays β as a function of photon energy, showing a rise by about two orders of magnitude with increasing photon energy. Considering that the reported β for CH3NH3PbX3 (X = Cl, Br, I) varies from $$2.5 \times 10^{-4}$$ cm MW−1 ($$\hbar\omega/E_{\mathrm{g}}$$ = 0.524) to 272 cm MW−1 ($$\hbar\omega/E_{\mathrm{g}}$$ ~ 0.76)19,22, excluding the negative values21, we can state that our values ranging from 0.18 cm MW−1 ($$\hbar\omega/E_{\mathrm{g}}$$ = 0.506) to 15.8 cm MW−1 ($$\hbar\omega/E_{\mathrm{g}}$$ = 0.886) are plausible. The most commonly used model for explaining the wavelength dependence of β is the classical semiconductor scaling law24, given by

$$\beta = a\frac{1}{E_{\mathrm{g}}^3} f(\hbar\omega/E_{\mathrm{g}}) = a\frac{1}{E_{\mathrm{g}}^3}\frac{\left(\frac{2\hbar\omega}{E_{\mathrm{g}}} - 1\right)^{3/2}}{\left(\frac{2\hbar\omega}{E_{\mathrm{g}}}\right)^5}$$ (5)

where a (cm MW−1) is a product of several constants including the linear refractive index and material-independent constants. To further clarify the wavelength dependence of β, we first compare the wavelength-dependent function $$f(\hbar\omega/E_{\mathrm{g}})$$ given by the scaling law (Eq. 5) with our data. A bandgap energy, Eg, of 1.58 eV obtained from the 1PA absorption spectrum was used for the fitting. As shown by the blue curve in Fig. 2a, the large deviation mostly stems from the high-energy range. The experimental wavelength dependence of β shows a sharp rise at excitation energies close to 0.5 Eg, followed by a second upward trend.
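The spectral shape f(ħω/Eg) of the scaling law (Eq. 5) is easy to evaluate numerically. The short sketch below also locates its maximum, which falls at ħω/Eg = 5/7 ≈ 0.71, the "0.7 Eg" maximum that the experimental data contradict:

```python
# The scaling-law spectral function of Eq. (5):
#     f(x) = (2x - 1)^(3/2) / (2x)^5,  x = hbar*omega / Eg,  defined for x > 1/2.
# A coarse numerical scan confirms that f peaks at x = 5/7 ~ 0.714.

def f_scaling(x):
    """Dimensionless 2PA spectral shape of the classical scaling law."""
    if x <= 0.5:
        return 0.0  # below the two-photon threshold
    return (2 * x - 1) ** 1.5 / (2 * x) ** 5

xs = [0.5 + 1e-4 * i for i in range(1, 5001)]  # scan x from 0.5001 to 1.0
x_peak = max(xs, key=f_scaling)                # ~0.714 (analytically 5/7)
```

Setting d(ln f)/dx = 3/(2x − 1) − 5/x = 0 gives x = 5/7 analytically, so the scaling law predicts a decreasing β as ħω/Eg approaches 1, opposite to the second upward trend observed here.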
Although the band gaps of CH3NH3PbBr3 and CsPbI3 are different, basically similar trends are observed, as shown in Supplementary Fig. 7 and Supplementary Note 5. For MHPs the absorption at energies higher than 0.9 Eg is dominated by SLA and is therefore not presented here. Interestingly, the second upward contribution at energies higher than 0.7 Eg is opposite to the trend predicted by the scaling law, which predicts that β should reach a maximum at 0.7 Eg. Obviously, the scaling law is not directly applicable to MHPs, which is not surprising as it was designed for direct zinc blende semiconductors28,29. In fact, the scaling law is based on a parabolic three-band model located at the Γ-point (k = (0, 0, 0)), which comprises a two-fold degenerate VB and a single CB24. However, the electronic band structure close to the bandgap has been found to be fundamentally different for CH3NH3PbI3, where, in the absence of relativistic effects, the CB is degenerate instead of the VB30. The strong spin-orbit coupling that results from the heavy metal atom leads to spin-orbit split-off (SO) bands at the bottom of the CB, with heavy (HE) and light electron (LE) states lying at higher energies31 (Fig. 2d). Moreover, the lowest energetic transition occurs at the R-point (k = (1/2, 1/2, 1/2)) for the cubic phase of MHPs32. Besides, apart from a discussion of the matrix elements29, the validity of the downward trend of the classical scaling law as $$\hbar\omega/E_{\mathrm{g}}$$ approaches 1 (Fig. 2a) has rarely been addressed in the literature. It is therefore imperative to introduce a model suitable to reflect the main features of MHPs. Figure 2b shows the electronic band structure computed from the empirical tight-binding model recently developed for the cubic phase of MHPs33. This empirical model was designed to yield electronic band structures over the entire Brillouin zone, not restricted to the proximity of the R-point.
It compares well with more advanced first-principles calculations including many-body (e.g., sc-GW) corrections for interband electronic transitions up to about 2 eV above the electronic bandgap33. This model allows computing various properties at a low computational cost, similar to that of an effective mass model, but including the effect of spin-orbit coupling. Here, we compute the 2PA coefficients of CH3NH3PbI3 (see Supplementary Note 1 for details). Notably, although the parabolic approximation usually holds for k values close to the high-symmetry points, the non-parabolicity of the CB and VB has to be considered for photon energies away from the bandgap34. The empirical tight-binding model includes such non-parabolicity effects in a natural way. This non-parabolicity is expected to contribute to the wavelength dependence of β for energies higher than 0.7 Eg. Consistent with our experimental data, the 2PA spectrum computed using the tight-binding model exhibits additional contributions related to the HE and LE bands (Fig. 2c, solid line). They may enter both the summation over the final states (index c in Supplementary Equation 4) and that over intermediate virtual s states (index s in Supplementary Equation 4 and Fig. 2d). Therefore, it is interesting to analyze the various contributions in detail. The dashed line in Fig. 2c represents the 2PA curve obtained by considering only the optical transitions to the bottom of the conduction band (SO bands), while HE and LE bands are still considered as possible intermediate s states (Fig. 2d). By comparison to the solid line in Fig. 2c, we may therefore directly trace back the origin of the enhanced 2PA above 2.25 eV (Fig. 2a) to the presence of secondary gaps in the band structure induced by the spin-orbit coupling. Next, the dash-dotted line in Fig. 2c represents the 2PA curve obtained by also removing the HE and LE bands from the list of possible intermediate s states.
By comparison to the dashed line, we conclude that the HE and LE bands also influence the 2PA over the entire energy range as intermediate virtual states (Fig. 2d). This can be considered a second, indirect, effect of the spin-orbit coupling. Finally, the dotted line represents a further restriction of the initial and intermediate states to the top of the valence bands. Comparison to the complete calculation (solid line) evidences that a computation of the 2PA restricted merely to band edge states is not justified. Thus, it is essential to also consider remote intermediate states. Notably, it was impossible to include more bands away from the band edge states using our empirical tight-binding approach. This intrinsic limitation of the tight-binding model might in principle be overcome by switching to a more complete and more accurate DFT calculation. However, the computational cost of such a 2PA DFT computation, including many-body corrections to yield accurate band gaps as well as effective masses, would be extremely high and, to the best of our knowledge, has not yet been implemented in available computational software.

### Bandgap dependence of 2PA coefficient β

In classical semiconductors, the dependence of the 2PA coefficient, β, on the electronic bandgap has been discussed by considering various semiconductors with different band gaps24. Here, we might expect changes in β in MHPs due to temperature-induced changes in the bandgap and the presence of temperature-related structural phase transitions (Supplementary Fig. 8)34,35. Hence, we investigated β in CH3NH3PbI3 by using a temperature-controlled microwave cell. The values of Eg at each temperature were determined by the cut-off wavelength from the linear absorption spectra (Supplementary Fig. 9). In contrast to the TRMC results at room temperature, at lower temperatures the sub-bandgap absorption process at 1.43 eV is dominated by the 2PA process (Supplementary Fig. 10). As depicted in Fig.
3a, in the tetragonal phase β at a given photon energy increases as the bandgap energy becomes smaller with decreasing temperature. This is consistent with the smaller 2PA coefficients reported at wider bandgap energies in CH3NH3PbI3 powder36 as well as CsPbBr3 quantum dots37. This is basically the same trend as observed in Fig. 2a. Moreover, after rescaling the β coefficient for its variation as a function of Eg and $$\hbar\omega$$ according to the scaling law (Eq. 5), we do not observe a constant value over the entire energy range (Fig. 3b). We attribute this to additional contributions related to the HE and LE bands, confirming again the importance of relativistic effects for the 2PA process. For orthorhombic CH3NH3PbI3 (squares in Fig. 3a), β is invariably smaller than the values obtained for the tetragonal phase, which can be associated with the wider bandgap in the orthorhombic phase. In addition, it has been reported that both the M-point and R-point electronic states are folded back to the Γ-point in the orthorhombic phase due to a reduction of the Brillouin zone volume38,39. Hence, we expect that the 2PA in the orthorhombic phase should correspond to transitions at the Γ-point. Although it has been shown that the oscillator strengths (Kane energies) remain comparable in the tetragonal and orthorhombic phases39, it is important to note that the matrix element (e.g., comparable to the Kane energy that enters the prefactor a in Eq. 5), the density of states, as well as the allowed transitions for 2PA will be different in the orthorhombic phase40. Therefore, the smaller values of β observed in the orthorhombic phase can be explained by a combination of the above-mentioned factors.

## Discussion

In summary, using the TRMC technique, we recorded the 2PA spectra of different metal halide perovskites, which demonstrate a two-step upward trend suggesting a primary bandgap in correspondence with the 1PA bandgap and an additional secondary bandgap.
For CH3NH3PbI3, sub-bandgap linear absorption is found to be the dominant process for photon energies close to the band edge $$(\hbar\omega > 0.9E_{\mathrm{g}})$$ due to a high density of trap states. A purpose-built tight-binding model is designed to rationalize the experimental 2PA spectra of CH3NH3PbI3, which allows accounting empirically for spin-orbit coupling and dispersion non-parabolicity over the entire Brillouin zone. Computed β values are in good agreement with experimental values over the entire investigated spectral region. The model reveals that the additional contribution to the 2PA starting at 2.25 eV for tetragonal CH3NH3PbI3 can be attributed to a secondary bandgap at the R-point. This is traced back to the HE and LE bands that are separated from the SO conduction band edge states at the R-point as a result of relativistic effects. It is further demonstrated that they also contribute indirectly to the 2PA response over the entire energy range as intermediate virtual states. A negative correlation between the values of β and the 1PA bandgap, associated with temperature and structural phase, is found. The bandgap-dependent results confirm that the scaling law is inapplicable to 2PA in MHPs. Overall, the simultaneous implementation of a 2PA experiment and a tight-binding model shows great promise for gaining in-depth insight into the band structure of MHPs.

## Methods

### Preparation of the CH3NH3PbI3 Films

13.93 mL CH3NH2 (4.24 g, 0.137 mol, 40% in methanol) was mixed with 15 mL HI (14.52 g, 0.114 mol, 57 wt% in water) in a 250 mL round-bottom flask. The mixture was immersed in an ice-water bath for 4 h with stirring. Subsequently, the precipitate was recovered by evaporation at 55 °C for 1 h. To purify the product, the CH3NH3I was washed with diethyl ether three times and dried at 60 °C in a vacuum oven for 24 h. CH3NH3PbI3 thin films were prepared by the one-step method.
Quartz substrates were washed with soap and de-ionized water, and sequentially soaked in acetone and ethanol with ultrasonic cleaning for 15 min, followed by a 10 min oxygen plasma cleaning. 477 mg CH3NH3I (0.003 mol) and 379 mg lead acetate trihydrate (Pb(CH3COO)2·3H2O) (0.001 mol) were dissolved in 1.54 mL DMF at a concentration of 37 wt% and stirred for over 30 min. Subsequently, 120 μL of precursor solution was dropped on a cleaned substrate and then spin-coated at 2000 rpm for 45 s in a nitrogen atmosphere. After the substrate was dried at room temperature for 15 min and annealed at 100 °C for 5 min, the samples were stored in a nitrogen-filled glove box before further characterization.

### TRMC measurements

Nanosecond tunable laser pulses were produced by an integrated optical parametric oscillator (OPO) system (EKSPLA NT342 B-SH/SFG). All samples measured in the near-infrared (NIR) regime were always first measured at 500 nm to obtain reference TRMC results for estimating Σμ. Two extra filters (630 and 665 nm) were employed to avoid interference from visible light during the NIR measurements. After the measurements, the sample was measured at 500 nm again, preferably under the same intensities as before, to evaluate the effect of NIR irradiation on the sample. Only samples with stable performance were used for further data analysis. All TRMC traces were averaged at least 200 times to minimize the inaccuracy induced by the power meter and disturbances in the laser pulse. For temperature-dependent TRMC measurements, the experiments were carried out using another setup equipped with a liquid nitrogen cryostat. To switch from the visible regime to the NIR regime, the alignment in the system was adjusted because of the different generation principles of the laser pulse. Upon cooling down from room temperature (RT), the sample was measured at 220, 180, and 100 K.
The temperature was maintained for about 10 min before each measurement to ensure the system was at equilibrium.

### Optical characterization

Absorption spectra of perovskite thin films were acquired with a PerkinElmer LAMBDA 1050 UV/Vis/NIR spectrometer equipped with a 150 mm InGaAs integrating sphere. The thin film sample was placed in a holder between the input light and the integrating sphere to measure the fraction of transmitted light FT, and was clamped by a center-mount accessory at an angle of 15° inside the sphere to obtain the total fraction of reflected and transmitted light FR + FT. Similarly, the fraction of reflected light FR was detected by placing the sample behind the integrating sphere. A Labsphere Spectralon Reflectance Standard, which provides close to 100% reflection, was used for the calibration of the reflection measurement.

### Tight-binding modeling

The tight-binding model is a flexible, empirical, symmetry-based atomistic model, widely used for classical semiconductors, which has recently been adapted to MHPs in ref. 33. This atomistic method aims at empirically describing the chemical bonding in halide perovskites and the electronic band structure over the entire Brillouin zone, while keeping the computational effort at the level of multiband effective mass approaches and allowing descriptions of nanostructures of up to a few million atoms41. Its standard limitations are the lack of long-range interactions, on-site optical matrix elements, and descriptions of excitonic effects. For the last two features, not yet implemented for MHPs and also rarely taken into account for conventional semiconductors42, explicit representations of atomic orbitals would be required43. Such an extension of the model could lead to a considerable increase in the computational cost, especially when targeting nonlinear optical responses.
Compared to the original work33, the current tight-binding parameters have been only slightly adapted, to better match the energetic position of the contributions to the 2PA related to transitions involving the HE and LE conduction band states. In addition, second-nearest-neighbor overlap integrals have also been added for the p-orbitals to better capture the dispersion of the valence band33.
# Euclidean and cosine distance for unit vectors

The Euclidean distance between two vectors p and q is the length of the line segment that connects them (here and in all following formulas the sum is over all dimensions of the vectors, i.e., if we have n dimensions the sum ranges from i=1 to n):

$d_{\text{euclid}}(\vec{p},\vec{q}) = |\vec{p} - \vec{q}| = \sqrt{\sum_i (p_i - q_i)^2}$

Using the binomial expansion, we can write this as follows:

$d_{\text{euclid}}(\vec{p},\vec{q}) = \sqrt{\sum_i p_i^2 - 2\sum_i p_i q_i +\sum_i q_i^2}$

Unit vectors have a length of 1 (by definition); length is calculated as the Euclidean norm, that is, the Euclidean distance of a vector to the zero vector, i.e., the square root of the sum of all squared entries in the vector:

$|\vec{p}| = d_{\text{euclid}}(\vec{p},\vec{0}) = \sqrt{\sum_i (p_i-0)^2 } = \sqrt{\sum_i p_i^2 }$

If something is 1, its square is also 1:

$\sqrt{\sum_i p_i^2 } = 1 \Leftrightarrow \sum_i p_i^2 = 1$

We can now replace the squared sums over all vector elements in the formula for the Euclidean distance with 1:

$d_{\text{euclid}}(\vec{p},\vec{q}) = \sqrt{1 - 2\sum_i p_i q_i + 1} = \sqrt{2 - 2\sum_i p_i q_i} = \sqrt{2(1 - \sum_i p_i q_i)}$

Now let's see how the cosine distance is defined. The more common thing to do is to calculate the cosine similarity of two vectors, the cosine of the angle between them:

$s_{\text{cosine}}(\vec{p},\vec{q}) = \frac{\vec{p} \cdot \vec{q}}{|\vec{p}| |\vec{q}|} = \frac{\sum_i p_i q_i}{|\vec{p}| |\vec{q}|}$

As we have unit vectors, we can get rid of the division by the lengths (which are always 1), so the formula simplifies to the dot product of the two vectors:

$s_{\text{cosine}}(\vec{p},\vec{q}) = \sum_i p_i q_i$

When we have a vector space where the entries correspond to occurrences of terms in a document, all entries are positive, so the value of the cosine similarity will always be between zero and one.
This means we can define the cosine distance as:

$d_{\text{cosine}}(\vec{p},\vec{q}) = 1 - s_{\text{cosine}}(\vec{p},\vec{q}) = 1 - \sum_i p_i q_i$

So let’s put it together. Say we have two vectors v and w and we know that, measured with Euclidean distance, v is closer to some other point p than w is*:

$d_{\text{euclid}}(\vec{p},\vec{v}) \leq d_{\text{euclid}}(\vec{p},\vec{w})$

We can now replace the Euclidean distance with the formula from above, square both sides (which doesn’t change the inequality, as both sides are non-negative) and get rid of the factor two that appears on both sides:

$\sqrt{2(1 - \sum_i p_i v_i)} \leq \sqrt{2(1 - \sum_i p_i w_i)}$

$\Leftrightarrow 2(1 - \sum_i p_i v_i) \leq 2(1 - \sum_i p_i w_i)$

$\Leftrightarrow 1 - \sum_i p_i v_i \leq 1 - \sum_i p_i w_i$

What we are left with is the cosine distance! So, putting start and end together, what we have shown is:

$d_{\text{euclid}}(\vec{p},\vec{v}) \leq d_{\text{euclid}}(\vec{p},\vec{w}) \Leftrightarrow d_{\text{cosine}}(\vec{p},\vec{v}) \leq d_{\text{cosine}}(\vec{p},\vec{w})$

This doesn’t mean that the Euclidean distance and the cosine distance between two vectors are the same number. But whenever you are only interested in relative distances (that is, you only want to know which of two vectors is closer to something than the other) and your vectors are normalized to unit length, then the result will be the same whether you use cosine or Euclidean distance.

* The text says “closer” and not “closer or the same” and that is actually what I wanted to say, but there seems to be some strange bug in this LaTeX plugin that doesn’t allow the < sign in a formula... so we'll take the less-or-equal sign and just ignore the equal part.

This entry was posted in Machine Learning by swk. Bookmark the permalink.
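To see this concretely, here is a small Python sketch (the vectors are illustrative, not from the post) checking both the identity $d_{\text{euclid}}^2 = 2\,d_{\text{cosine}}$ and the ranking equivalence:

```python
import math

def euclid(p, q):
    # Euclidean distance between two vectors
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def cosine_dist(p, q):
    # cosine distance for unit vectors: 1 minus the dot product
    return 1.0 - sum(pi * qi for pi, qi in zip(p, q))

def normalize(v):
    # scale a vector to unit length (Euclidean norm 1)
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

p = normalize([1.0, 2.0, 3.0])
v = normalize([1.0, 2.0, 2.5])
w = normalize([3.0, 0.5, 1.0])

# the identity d_euclid^2 = 2 * d_cosine holds for unit vectors...
assert abs(euclid(p, v) ** 2 - 2 * cosine_dist(p, v)) < 1e-9

# ...so rankings under the two distances always agree
assert (euclid(p, v) <= euclid(p, w)) == (cosine_dist(p, v) <= cosine_dist(p, w))
```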
## About swk I am a software developer, data scientist, computational linguist, teacher of computer science and above all a huge fan of LaTeX. I use LaTeX for everything, including things you never wanted to do with LaTeX. My latest love is LilyPond, aka LaTeX for music. I'll post at irregular intervals about cool stuff, stupid hacks and annoying settings I want to remember for the future.
$$3^{(3^3)} = 3^{27} = 7625597484987\quad\quad (3^3)^3 = 27^3 = 19683$$ The difference grows rapidly for larger values: $$4^{(4^4)} = 4^{256} \sim 10^{154} \quad\quad (4^4)^4 = 256^4 \sim 10^{9}$$ However, for $2$ the values are the same: $$2^{(2^2)} = 2^{4} = 16\quad\quad (2^2)^2 = 4^2 = 16$$ The natural extensions of the definition are either 'powers evaluated from the right' or 'powers evaluated from the left'. The difference for a stack of four powers is gigantic: $$(((3^3)^3)^3) = ((27)^3)^3 = (19683)^3 \sim 10^{12}$$ $$(3^{(3^{(3^{(3)})})}) = (3^{(3^{27})}) = (3^{(7.6\times 10^{12})}) \sim 10^{3.6\times 10^{12}}$$ Setting the two four-stacks equal, $b^{(b^3)} = b^{(b^{(b^b)})}$, requires (for $b \neq 1$) that $b^3 = b^{(b^b)}$, i.e. $b^b = 3$; both definitions of stacking four numbers therefore lead to the same value when the base is $b \approx 1.8254550$.
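The quoted values are easy to verify programmatically; a quick Python check of the base-3, base-2 and base-4 numbers above:

```python
import math

# right-evaluated vs left-evaluated towers for base 3
assert 3 ** (3 ** 3) == 7625597484987
assert (3 ** 3) ** 3 == 19683

# for base 2 both conventions agree
assert 2 ** (2 ** 2) == (2 ** 2) ** 2 == 16

# orders of magnitude for base 4: 4^(4^4) ~ 10^154, (4^4)^4 ~ 10^9
assert int(256 * math.log10(4)) == 154
assert int(4 * math.log10(256)) == 9
```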
# Questions

- Minimal prime ideals and Axiom of Choice (revised version) — 533 views; last activity jun 4 at 0:07, Will Sawin
- On one-point Lindelöfication of topological spaces — 292 views; last activity jun 18 at 9:22, AliReza Olfati
- Ring isomorphism $\Phi \colon C(X) \to C(Y)$ and zero dimensionality of $X$ — 256 views; last activity jul 2 at 8:45, KP Hart
- Existence of a non-submetrizable topological space $(X, \tau)$ — 177 views; last activity may 10 '12 at 13:18, Ramiro de la Vega
- About subspaces of $F$-spaces — 236 views; last activity jun 5 at 22:22
- A question about some special compactifications of $\mathbb{R}$ — 309 views; last activity may 2 '12 at 20:16, Gjergji Zaimi
- separability of commutative rings — 215 views; last activity sep 15 at 14:33, Charles Staats
- special extremally disconnected spaces with only finite isolated points — 348 views; last activity nov 25 at 1:34, Goldstern
- Existence of algebraic closure and Axiom of choice [closed] — 219 views; last activity sep 18 at 16:00, AliReza Olfati
# Mathematical expressions using short multiplication formulas

### Mathematical expressions using short multiplication formulas

Solve these mathematical expressions using short multiplication formulas:

(2x-3)(2x+3)-(x-3)(x+3) = ?

x(x+3)(x-3)-(x+1)(x+2)(x-1) = ?

sisoni

### Re: Mathematical expressions using short multiplication formulas

For both expressions, use this short multiplication formula: $$(a - b)(a + b) = a^2 - b^2$$

$$(2x-3)(2x+3)-(x-3)(x+3) = ((2x)^2 - 3^2) - (x^2 - 3^2) = 4x^2 - 9 - x^2 + 9 = 3x^2$$

Math Tutor

### Re: Mathematical expressions using short multiplication formulas

x(x+3)(x-3)-(x+1)(x+2)(x-1) = x((x+3)(x-3)) - ((x+1)(x-1))(x+2) =

= x(x^2 - 9) - (x^2 - 1)(x+2) = x^3 - 9x - (x^3 + 2x^2 - x - 2) = x^3 - 9x - x^3 - 2x^2 + x + 2 = -2x^2 - 8x + 2

John

### Re: Mathematical expressions using short multiplication formulas

The formula is (a+b)(a-b) = a^2 - b^2, so you can solve these rational-expression problems yourself. Thanks.

Guest
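Both simplifications can be spot-checked numerically; a short Python sketch (the sample values of x are arbitrary):

```python
def lhs1(x):
    # (2x-3)(2x+3) - (x-3)(x+3), expected to equal 3x^2
    return (2*x - 3)*(2*x + 3) - (x - 3)*(x + 3)

def lhs2(x):
    # x(x+3)(x-3) - (x+1)(x+2)(x-1), expected to equal -2x^2 - 8x + 2
    return x*(x + 3)*(x - 3) - (x + 1)*(x + 2)*(x - 1)

for x in (-2, 0, 1, 5):
    assert lhs1(x) == 3*x**2
    assert lhs2(x) == -2*x**2 - 8*x + 2
```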
# Math Help - Width and Height of Solid

1. ## Width and Height of Solid

The length of a rectangular solid is 7. The width of the solid is 2 more than the height. The volume of the solid is 105. Find the width and the height of the solid.

2. Hello,

Originally Posted by magentarita
The length of a rectangular solid is 7. The width of the solid is 2 more than the height. The volume of the solid is 105. Find the width and the height of the solid.

Let l, w, h be respectively the length, the width and the height of the solid. We know that the volume of such a shape is defined as being: V=l*w*h. We know that w=h+2 ("the width is 2 more than the height"). So $105=7(h+2)h$, hence $h^2+2h-15=0$, that is $(h+5)(h-3)=0$. Taking the positive root, $h=3$ is the height. Add 2 to get the width, $w=5$.

3. ## ok....

Originally Posted by Moo
Hello,

Let l, w, h be respectively the length, the width and the height of the solid. We know that the volume of such a shape is defined as being: V=l*w*h. We know that w=h+2 ("the width is 2 more than the height"). So $105=7(h+2)h$, hence $h^2+2h-15=0$, that is $(h+5)(h-3)=0$. Taking the positive root, $h=3$ is the height. Add 2 to get the width, $w=5$.

A very nice reply.
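Reading the problem statement as w = h + 2, the answer can be checked numerically with a quick Python sketch of the quadratic 7·h·(h+2) = 105:

```python
import math

# 7*h*(h+2) = 105  =>  h^2 + 2h - 15 = 0
a, b, c = 1, 2, -15
h = (-b + math.sqrt(b*b - 4*a*c)) / (2*a)  # positive root
w = h + 2

assert h == 3.0 and w == 5.0
assert 7 * w * h == 105  # volume checks out
```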
Definition of Double: to multiply by 2. When a number is multiplied by 2, we say that it is doubled. To get a double of a number, we add the same number to itself; for example, double of 2 is 2 + 2 = 4.

Example: Michelle has 4 marbles and Jane has double the marbles that Michelle has. How many marbles does Jane have? Double 4 is 8, so Jane has 8 marbles. Likewise, if John has 4 candies and Tom has double the candies that John has, then Tom has 2 × 4 = 8 candies.

Doubles plus 1 and doubles minus 1, also called near-doubles strategies: the addition of any two consecutive numbers can be done by using doubles plus 1 or doubles minus 1, and it is easy to remember the numbers that we get by doubling one-digit numbers. The number 3 is one more than 2, so we can write 3 as 2 + 1 and show the addition 2 + 3 as one more than double 2: 2 + 3 = (2 + 2) + 1 = 5. The number 6 is one less than 7, so we can write 6 as 7 − 1 and show the addition 7 + 6 as one less than double 7: 7 + 6 = (7 + 7) − 1 = 13. Since 2 + 3 is the same as 3 + 2, we can apply either near-doubles strategy to find the sum, and the idea can also be extended to numbers that are not immediately next to each other. Any doubled number is a double fact, such as 8 + 8 = 16; double facts are most commonly used with small numbers, usually less than 12. Relatedly, an addend of an addition sentence will be the subtrahend/difference of the corresponding subtraction sentence.

A worked example: y = 12x. What happens to y when x is doubled? A. y will be doubled; B. y will be multiplied by 4; C. y will increase by 2; D. y will decrease by 2. Step 1: y = 12x. Step 2: y = 12(2x) [x is doubled]. Step 3: 24x = double the original value of y [24x = 2(12x) = 2y]. Step 4: So, when x is doubled, y is also doubled. Correct Answer: A.

Double also appears elsewhere in mathematics. A double root is a root of a polynomial equation with multiplicity 2 (it also refers to a zero of a polynomial function with multiplicity 2); in differential equations, a double root (or double eigenvalue) of the characteristic equation yields only a single solution, so a second solution is needed. A double line graph is a line graph with two lines connecting points to show a continuous change; just like in a line graph, the lines can ascend and descend. The double factorial, symbolized by two exclamation marks (!!), is a quantity defined for all integers greater than or equal to −1: for an even integer n, it is the product of all even integers less than or equal to n but greater than or equal to 2; for an odd integer p, it is the product of all odd integers less than or equal to p but greater than or equal to 1. The definite integral can be extended to functions of more than one variable, for example a function of two variables z = f(x, y); the definition of a double integral over a rectangular region R can be motivated through a hair-density example. The double negative deals with signed numbers: two negatives combine to give a positive value.

In everyday language, double (adjective) means: twofold in size, amount, number, extent, etc. (a double portion; a new house double the size of the old one); composed of two equal or similar parts, in a pair (a double egg cup); designed for two users (a double room); folded in two, composed of two layers (double paper); having a twofold relation or character, dual; consisting of two usually combined members or parts (an egg with a double yolk). Etymologically it comes via Old French doubler from Latin duplare, duplus. Double time is a marching pace of 180 three-foot steps per minute, or a rate of pay, as for overtime work, that is twice the regular rate. A double can also be a double measure of spirits ("Two whiskies, and make it doubles, please"), a look-alike or body double, a doppelgänger, or a bet which combines two selections. A double bind is a psychological predicament in which a person receives conflicting messages from a single source. In astronomy, the brightest star in the constellation of Scorpio is actually a double star system approximately 424 light years from Earth; its name, Antares, literally means "not Mars", from anti (not) and Ares, the Greek name for the planet Mars.

In programming, a double is a double-precision, 64-bit floating-point data type. Double-precision floating-point format (sometimes called FP64 or float64, and standardized as binary64 in IEEE 754) usually occupies 64 bits in computer memory and represents a wide dynamic range of numeric values by using a floating radix point. It accommodates 15 to 16 significant decimal digits, with a range of approximately ±5.0 × 10^−324 to ±1.7 × 10^308; more precisely, negative values of type double lie between −1.79769 × 10^308 and −2.22507 × 10^−308, and positive values between 2.22507 × 10^−308 and 1.79769 × 10^308. The int type also deals with numeric data, but it serves a different purpose. MATLAB constructs the double data type according to IEEE Standard 754 for double precision. In Java, the Double class wraps a value of the primitive type double in an object; an object of type Double contains a single field whose type is double, and the class provides methods for converting a double to a String and a String to a double, as well as other useful constants and methods. Just as decimal fractions are unable to precisely represent some fractional values (such as 1/3 or π), binary fractions are unable to represent some fractional values: for example, 1/10, which is represented precisely by .1 as a decimal fraction, is represented by .000110011... as a binary fraction, with the pattern "0011" repeating to infinity. Rounding methods such as Round(Double, Int32, MidpointRounding) round a double-precision floating-point value to a specified number of fractional digits, using the specified rounding convention for midpoint values.
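The binary-fraction behaviour described above is easy to observe in practice; a small Python sketch (Python floats are IEEE 754 binary64 doubles):

```python
import sys

# 1/10 has no exact binary representation, so repeated addition drifts slightly
x = sum(0.1 for _ in range(10))
assert x != 1.0               # not exactly 1.0
assert abs(x - 1.0) < 1e-9    # but the error is tiny

# binary64 guarantees 15 significant decimal digits and a huge range
assert sys.float_info.dig == 15
assert sys.float_info.max > 1.7e308
```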
# Frame Rates for beginners!

## Recommended Posts

Hi everyone! I recently decided to start learning C++, specifically for game programming, and have had some great fun working through various tutorials and reading posts on this forum (you all seem very knowledgeable and helpful!!). I have a question which is probably really stupid, but I hope you don't mind me asking. It's a bit tricky to explain, but I'll do my best.... I've been using the tutorials from http://www.directxtutorial.com/ which seem to be quite well used, and everything is working as expected. However my question is to do with frame rates; let's say the main code is as follows (I won't put the whole thing here):

```cpp
// include the basic windows header files and the Direct3D header file
#include <windows.h>
#include <windowsx.h>
#include <d3d9.h>
#include <d3dx9.h>

// define the screen resolution and keyboard macros
#define SCREEN_WIDTH 640
#define SCREEN_HEIGHT 480
#define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0)
#define KEY_UP(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 0 : 1)

// include the Direct3D Library files
#pragma comment (lib, "d3d9.lib")
#pragma comment (lib, "d3dx9.lib")

// global declarations
LPDIRECT3D9 d3d;                         // the pointer to our Direct3D interface
LPDIRECT3DDEVICE9 d3ddev;                // the pointer to the device class
LPDIRECT3DVERTEXBUFFER9 t_buffer = NULL; // the pointer to the vertex buffer

// function prototypes
void initD3D(HWND hWnd);    // sets up and initializes Direct3D
void render_frame(void);    // renders a single frame
void cleanD3D(void);        // closes Direct3D and releases memory
void init_graphics(void);

// 3D declarations
struct CUSTOMVERTEX {FLOAT X, Y, Z; DWORD COLOR;};
#define CUSTOMFVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)

// the WindowProc function prototype
LRESULT CALLBACK WindowProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);

// the entry point for any Windows program
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    HWND hWnd;
    WNDCLASSEX wc;

    ZeroMemory(&wc, sizeof(WNDCLASSEX));
    wc.cbSize = sizeof(WNDCLASSEX);
    wc.style = CS_HREDRAW | CS_VREDRAW;
    wc.lpfnWndProc = (WNDPROC)WindowProc;
    wc.hInstance = hInstance;
    wc.hCursor = LoadCursor(NULL, IDC_ARROW);
    wc.lpszClassName = L"WindowClass";
    RegisterClassEx(&wc);

    hWnd = CreateWindowEx(NULL, L"WindowClass", L"Our Direct3D Program",
                          WS_EX_TOPMOST | WS_POPUP, 0, 0,
                          SCREEN_WIDTH, SCREEN_HEIGHT,
                          NULL, NULL, hInstance, NULL);
    ShowWindow(hWnd, nCmdShow);

    // set up and initialize Direct3D
    initD3D(hWnd);

    // enter the main loop:
    MSG msg;
    while(TRUE)
    {
        DWORD starting_point = GetTickCount();

        if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT)
                break;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        render_frame();

        // check the 'escape' key
        if(KEY_DOWN(VK_ESCAPE))
            PostMessage(hWnd, WM_DESTROY, 0, 0);

        while ((GetTickCount() - starting_point) < 25);
    }

    // clean up DirectX and COM
    cleanD3D();

    return msg.wParam;
}
```

OK, so this bit:

```cpp
while ((GetTickCount() - starting_point) < 25);
```

makes sure it only updates the frame
once every 25ms or thereabouts. So taking it out should mean that it updates as fast as the computer can handle. To this code I've added a basic framerate counter. So when I take out the 'limiter', it still only runs at about 100 frames per second. Now I know my PC plays some older games waaayyy faster than that, so the question I have is: what in this code limits the frame rate? My guess is perhaps something to do with the way Windows prioritises applications, or perhaps that's just the nature of DirectX? I'm sorry for such a silly question, but any thoughts would be gratefully received. I guess 100fps is pretty much fast enough for anything, but I'm working on a pinball game with a bit of 'real world physics', and the faster the 'framerate' (or I guess you'd say main loop rate) the more accurate my physics will be. Just to say I'm convinced it's not the limit of my PC, because a) the frame rate of some older games I have, and b) I've expanded the code to display multiple objects, with loads of key inputs, text boxes etc., running at 1600x1200, and it still runs at the same framerate as when it is completely basic like above - ???!! Hope this makes sense? cheers, Joe

Thanks for that ToohrVyk - that's really quite interesting. If that is what's limiting my 'frame rate', I guess I need to introduce something to my code that does this: 1) Turns off 'Vsync' 2) Executes the main loop as often as it possibly can but 3) Only uses render_frame() at the correct time, i.e. the main loop runs v fast (thus the physics becomes more accurate), but the rendering only happens say 85 times per second (if that's my refresh rate) to ensure there's no tearing. This sounds really good - I'll research how to do this; out of interest, does anyone else think it's a different issue? thanks again! Joe

Are you using GetTickCount to measure fps?
On average it's only accurate to ~10ms, which means it can't measure framerates > 100 or so. You'll have to use a different timing function, like QueryPerformanceCounter/QueryPerformanceFrequency or timeGetTime.

Also one other thing... This line here:

```cpp
PostMessage(hWnd, WM_DESTROY, 0, 0);
```

It's not really a big deal for a simple program with one window, but this isn't how you destroy a window. You use the DestroyWindow function. That function actually cleans up resources created for the window, which won't happen if you just send yourself a WM_DESTROY message.

Thanks MJP, yes, I measure the 'loop rate' using QueryPerformanceCounter/QueryPerformanceFrequency, and (I should have posted that code here, but I'm at work at the moment!) just before the render_frame() I use counter++, and work out the fps (or lps!) by comparing the counter int to the results of the 'realtime' counter above. In fact, I have gone as far as displaying how many microseconds each loop takes - it's about 13,333 - and as mentioned it doesn't change significantly no matter how many polys & objects I throw at it; this is why I reckon something is 'holding up' each loop? Perhaps I'm just being obsessive! thanks for all the help so far, cheers, joe

Quote: Original post by Beginner_Joe
1) Turns off 'Vsync'
2) Executes the main loop as often as it possibly can
but
3) Only uses render_frame() at the correct time

This is a possibility. However, what's wrong with:

1) Gets the current time
2) Computes the world state for the current time
3) Renders the frame using the world state and with VSync

Regardless of VSync, you'll still get a framerate that's reasonable enough for collecting input, and you can avoid VSync problems entirely by using triple buffering if it really annoys you.
The physics loop doesn't have to run in real-time—it only has to run as if it happened in real-time.

Quote: Original post by Beginner_Joe

```cpp
#define SCREEN_WIDTH 640
#define SCREEN_HEIGHT 480
#define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0)
#define KEY_UP(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 0 : 1)
```

Is this still how C++ is taught? C++ has had const and inline functions since 1983 or something.

Thanks ToohrVyk, that's of course a good point. Actually that's how my prog works at the moment (e.g. for the ball position it works out what the accel would be at the point in time the function is called, from that the velocity and from that the world position). And in 99.9% of circumstances this is fine (although for collisions I 'back-engineer' to the time of collision, and update before and after etc.), however in some circumstances I want to figure out changes smaller than the current delta (approx 13ms). Plus I'm thinking there must be a way to do it cos other progs I've used don't seem to have this 'limit' (i.e. played Severance the other day and it runs at 400fps...!)

DevFred - I'm sure there are 100 better ways to do things than how I'm learning! Let's just say the tutorials that I'm doing are about the right level for me (I did used to program the Speccy in hex dumps, but I was 10 at the time, and for some reason things seemed easier then...?!!)

If you just want to disable VSYNC then use D3DPRESENT_INTERVAL_IMMEDIATE as the value for PresentationInterval in the D3DPRESENT_PARAMETERS struct you pass to CreateDevice.

Quote: Original post by Beginner_Joe
however in some circumstances I want to figure out changes smaller than the current delta (approx 13ms)

You're limiting yourself to thinking in terms of "one update step per frame".
Suppose for a minute that you want your physics (very simple and easy to compute) to update every 2 milliseconds, but your frames are only rendered every 14 milliseconds. What do you do? You just run seven updates before every frame! In short:

```cpp
int updates = 0;
float start_time = time();
while (running)
{
    int expected_updates = (time() - start_time) * UPDATES_PER_SECOND;
    while (updates < expected_updates)
    {
        run_update();
        ++updates;
    }
    render_frame();
}
```

This guarantees time-independence, a great update frequency for your physics, and is still simple.

Quote: Plus I'm thinking there must be a way to do it cos other progs I've used don't seem to have this 'limit' (i.e. played severance the other day and it runs at 400fps...!)

To do what? Display a very high value followed by "FPS"? Unless it has a very clear impact on the gameplay (and at 85FPS, chances are that it won't), don't bother with it.

Quote: DevFred - I'm sure there are 100 better ways to do things than how I'm learning! let's just say the tutorials that I'm doing are about the right level for me

The issue is with them being outdated. The correct version of those lines would be:

```cpp
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
inline bool KEY_DOWN(int vk_code) { return GetAsyncKeyState(vk_code) & 0x8000; }
inline bool KEY_UP(int vk_code) { return !(GetAsyncKeyState(vk_code) & 0x8000); }
```

hahaha - genius, yes of course ToohrVyk that makes perfect sense! I have to get used to not thinking 'step-by-step'. Comment noted re fps of >100 fps etc. - you're right, what impact does it have?! The solution you suggest is exactly what I need; the fps doesn't matter, but I want the time delta for calculating motion to be as small as possible. And thanks for the corrections re #define / const. I'm getting my head round what the difference is (with a little assistance from 'C++ for Dummies'). I've only been learning C++ for 3 weeks, so I've had a lot to take in!
I only understood what the point of pointers was about 5 days ago.... ;) Thanks again for your time & help, I'm really grateful for you assisting this rookie! Now I'd better get back to my day job (which is recruiting IT people!). And one of the reasons why I'm learning C++? I've been asked by games companies to recruit coders & artists for them, been out to meet a few studios, and thought 'that is a job that I want'. I don't care what the entry level pays; I'm imagining working in a job that I really enjoy. One day, one day!

Excellent - of course, once I disabled VSync in the code (and after a bit of puzzlement, in my graphics card driver global settings as well!) the app runs at 2000fps. Which I don't need, obviously. This means you guys were right: it's the VSync. So the aim is now to follow the advice, and run a loop until the monitor is ready to refresh. I don't want to guesstimate the number of loops, and I want to squeeze as many loops as I can between frames. A bit of research has shown up D3DRASTER_STATUS and GetRasterStatus as possible ways of timing this right. Would anyone be kind enough to show me a bit of code that would allow me to check the raster status each loop so I can call the frame rendering at exactly the right time? I've looked at the MSDN but I'm getting confused, and can't understand how to get anything meaningful! Hope you don't mind me asking this, cheers, Joe
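The fixed-timestep accumulator discussed in this thread can be sketched deterministically; here is a small Python version (Python used just for brevity; the frame durations are made up, and `run_update()` / `render_frame()` stand in for the real calls):

```python
STEP_MS = 2                     # fixed physics step: 2 ms
frame_ms = [14, 13, 15, 14]     # pretend durations of four rendered frames

accumulator = 0
updates = 0
for dt in frame_ms:
    accumulator += dt                 # time produced by the last frame
    while accumulator >= STEP_MS:     # consume it in fixed-size steps
        updates += 1                  # run_update() would go here
        accumulator -= STEP_MS
    # render_frame() would go here

# 56 ms of simulated time / 2 ms per step = 28 physics updates
assert updates == 28
```

The point is that physics accuracy depends only on STEP_MS, not on how fast frames render.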
# Homework Help: Modeling a vibrating string

1. Apr 28, 2010

### Mechdude

1. The problem statement, all variables and given/known data

Hi, I'm reading Guenther & Lee, "Partial Differential Equations of Mathematical Physics and Integral Equations". In the first chapter, second section I think, on "Small vibrations of an elastic string", they give this argument:

1. Consider a string of length L; the string is assumed to be a continuum and tied to posts at $x=0$ and $x=L$. A continuous density function, $\rho$, is given, whose integral over any segment of the string gives the mass of the segment. The string is perfectly elastic, and vibrations are very small.

2. An axis perpendicular to the x axis is constructed at $x=0$; the equilibrium position of the string is the horizontal segment $0 \leq x \leq L$. The position of a given point which was at x during equilibrium will be $u(x,t)$ at time t. If time is kept constant, the function gives the shape of the string at that instant.

3. The function $\rho_0 (x)$ denotes the density at equilibrium, and $\rho (x,t)$ the density at time t. As the string stretches, the density will change; if we focus on an arbitrary interval between $x=x_1$ and $x=x_2$ along the string, we find that the mass m in this interval satisfies:

$$\int_{x_1}^{x_2} \rho_0 (x) dx = \int^{x_2}_{x_1} \rho (x,t) [ 1 + u^{2}_{x} (x,t) ]^{\frac{1}{2}} dx$$

This last expression has me stumped; it seems to me like he pulled it right from under his sleeve. Why would the mass satisfy that expression? I think the squared term is a partial derivative with respect to x, imho.

2. Relevant equations

3. The attempt at a solution

Last edited: Apr 28, 2010
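One way to motivate the expression (a hint, not part of the original thread): the arc-length element of the displaced string is $ds = [1 + u_x^2(x,t)]^{1/2}\,dx$, and $\rho(x,t)$ is mass per unit length measured along the stretched string, so conservation of the segment's mass between $x_1$ and $x_2$ reads

```latex
\int_{x_1}^{x_2} \rho_0(x)\,dx
  \;=\; \int_{\text{segment}} \rho(x,t)\,ds
  \;=\; \int_{x_1}^{x_2} \rho(x,t)\,\bigl[1 + u_x^2(x,t)\bigr]^{1/2}\,dx
```

and the squared term is indeed the partial derivative $u_x = \partial u/\partial x$.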
thdl

# THDL

THDL is a collection of tools (hidden under a single program) for easing work with the VHDL language. It is (and will always be) based solely on text processing, with no semantic analysis. Such an approach draws a clear line between what might be included and what will never be supported. The 'T' in 'THDL' stands for 'Text'. However, do not read THDL as "Text Hardware Description Language" and do not treat it as such. Check the wiki.
### Intensity mapping of [C II] emission from early galaxies

#### Abstract

The intensity mapping of the [CII] 157.7 $\mu$m fine-structure emission line represents an ideal experiment to probe star formation activity in galaxies, especially in those that are too faint to be individually detected. Here, we investigate the feasibility of such an experiment for $z > 5$ galaxies. We construct the $L_{\rm CII} - M_{\rm h}$ relation from observations and simulations, then generate mock [CII] intensity maps by applying this relation to halo catalogs built from large-scale N-body simulations. Maps of the extragalactic far-infrared (FIR) continuum, referred to as "foreground", and of the CO rotational transition lines and [CI] fine-structure lines, referred to as "contamination", are produced as well. We find that, at 316 GHz (corresponding to $z_{\rm CII} = 5$), the mean intensities of the extragalactic FIR continuum, the [CII] signal, all CO lines from $J=1$ to 13 and the two [CI] lines are $\sim 3\times10^5$ Jy sr$^{-1}$, $\sim 1200$ Jy sr$^{-1}$, $\sim 800$ Jy sr$^{-1}$ and $\sim 100$ Jy sr$^{-1}$, respectively. We discuss a method that allows us to subtract the FIR continuum foreground by removing a spectrally smooth component from each line of sight, and to suppress the CO/[CI] contamination by discarding pixels that are bright in contamination emission. The $z > 5$ [CII] signal comes mainly from halos in the mass range $10^{11-12}\,M_\odot$; as this mass range is narrow, intensity mapping is an ideal experiment to investigate these early galaxies. In principle such a signal is accessible to a ground-based telescope with a 6 m aperture, a 150 K system temperature and a $128\times128$-pixel FIR camera in 5000 hr of total integration time; however, it is difficult to perform such an experiment with currently available telescopes.

astro-ph.GA; astro-ph.CO
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated. Use this identifier to cite or link to this document: https://hdl.handle.net/11384/63523
4 The mathematics of single object stereology - part 1

Properties of discrete and continuous random variables, sampling theory, random position and orientation.

The roots of stereology, which may be defined as the statistical inference of geometric parameters from sampled information, are found in the fields of statistics and geometrical probability. Whenever stereology is used to estimate a geometric parameter, it is important to have some idea of how accurate and precise the estimate is. Bias is a measure of the accuracy of an estimator. After an infinite number of trials, an unbiased estimator that exhibits finite variance will converge on the correct (true) result. The archery targets shown in Figure 4.1 illustrate the concepts of bias and precision. The objective is to hit the centre of the target. The shots fired at the uppermost targets are unbiased. As the number of shots fired increases, their mean position tends towards the centre of the target. The shots fired at the left-hand targets are precise. Alternatively put, the variance of the set of numbers representing the distances between each shot and their mean position is small.

Figure 4.1: Illustration of bias and precision. The shots fired at the uppermost targets are unbiased, while those fired at the left-hand targets are precise.

Stereological estimators described in this chapter are unbiased. The estimation of precision is a more complex matter and is the subject of Section 4.10. Stereology works by interrogating objects under investigation with geometric test probes such as points, straight lines and curved lines. There are two approaches to stereology: design-based and model-based (Cruz-Orive, 1997; Karlsson and Cruz-Orive, 1992, Section 2.2). The design-based approach requires that test probes hit an object (or finite, non-random population of objects) with a well-defined mechanism of randomness.
The model-based approach is applicable when the target object (or structure) can be regarded as a realisation of a random set of particles which is homogeneous. For an object to be homogeneous, the mean contents of the object in some window must not depend on the position of the window in space. In this case, test probes may be arbitrarily located, as the randomness required to estimate geometric parameters is "contained" within the target object.

As an example of these concepts, consider the problem of determining the area, $$A$$, of the unit circle, $$C$$. The area is, of course, $$\pi$$. One method for tackling the problem is shown in Figure 4.2(a). $$C$$ is broken into strips of width $$\mathrm{\delta}x$$ and height $$L = 2 \sqrt{1 - x^2}$$. The area of the circle is determined by the equation \begin{align} A = 2 \int_{-1}^{1} \sqrt{1 - x^2} \mathrm{d}x = \pi. \tag{4.1} \end{align} Suppose a grid of points is overlain on the circle, as shown in Figure 4.2(b). The total number of points falling within the circle, multiplied by the area per point, is an unbiased estimate of the area $$A$$. In effect, the systematic sampling of the circle by a grid of points is a numerical integration of (4.1). The design-based stereological approach, which is employed throughout this thesis, stipulates that the grid of points must hit the object with a well-defined mechanism of randomness. This is achieved by randomly positioning the grid of points with respect to the stationary circle. Refining the grid, by reducing the distance between points, and taking the average of repeated estimations are two ways of increasing precision. Both methods involve counting more points.

Figure 4.2: (a) The area of the unit circle can be determined by solving the appropriate integral. (b) The area of the unit circle can be estimated by point counting. The dotted square shows the area associated with each point.
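The point-counting estimate just described can be simulated with a minimal sketch (the grid spacing, number of trials and seed below are illustrative choices, not values from the text):

```python
import random

def estimate_circle_area(spacing=0.1, trials=100, seed=0):
    """Estimate the area of the unit circle by point counting.

    A square grid with the given spacing is overlain on the circle at
    a uniformly random offset (the design-based mechanism of
    randomness); the number of grid points falling inside the circle,
    times the area per point, estimates the area.  Averaging over
    repeated random offsets increases precision.
    """
    rng = random.Random(seed)
    area_per_point = spacing * spacing
    half_span = int(2.0 / spacing) + 2   # enough grid lines to cover [-1, 1]
    estimates = []
    for _ in range(trials):
        ox = rng.uniform(0.0, spacing)   # uniform random offset of the grid
        oy = rng.uniform(0.0, spacing)
        count = 0
        for i in range(-half_span, half_span + 1):
            for j in range(-half_span, half_span + 1):
                x, y = ox + i * spacing, oy + j * spacing
                if x * x + y * y <= 1.0:
                    count += 1
        estimates.append(count * area_per_point)
    return sum(estimates) / len(estimates)
```

With a spacing of 0.1, each trial counts roughly 314 points, and the average over repeated randomly offset grids settles close to $$\pi$$ - the two routes to precision mentioned above.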
This chapter summarises some of the fundamental results regarding the stereology of single, bounded objects (Cruz-Orive, 1997). Much of the work is based on a book called "Geometrical Probability" by Kendall and Moran (1963). Section 4.1 describes random variables and this is followed by a discussion concerning the representation of orientations in the plane and in $$\mathbb{R}^3$$. This groundwork is followed by the derivations of various stereological estimators. Section 4.3 opens by considering the problems of area and volume estimation by point counting. An important result, named after the Italian mathematician Bonaventura Cavalieri, is arrived at. In Section 4.4, the classical two-dimensional (2D) problem of Buffon's needle (1777) is tackled. Section 4.5 applies the theory of Buffon's needle to the problem of estimating the unknown length of a bounded, rectifiable curve in the plane (Steinhaus, 1930). A curve is rectifiable if it has a well-defined length. Curves that are smooth (infinitely differentiable or $$C^{\infty}$$) are rectifiable. An example of a non-rectifiable curve is the coastline of Britain which exhibits some degree of self-similarity and is said to be fractal (Mandelbrot, 1967). The extension of Buffon's needle to three dimensions (Section 4.6) motivates the problems of estimating the length of a bounded rectifiable curve in $$\mathbb{R}^3$$ (Saltykov, 1946) (Section 4.7) and of estimating the area of a bounded piecewise smooth surface in three dimensional (3D) space (Saltykov, 1945) (Section 4.8). Section 4.9 describes the problem of estimating the length of a bounded rectifiable curve in $$\mathbb{R}^3$$ from total vertical projections. Finally, the precision of various stereological estimators is investigated in Section 4.10. 4.1 Random variables In this section probability theory and random variables are introduced. Key results that underpin further sections are given here. 
If more information is required, the book "Probability and Random Processes" by Grimmett & Stirzaker (1982) is a good place to start. Suppose a needle is dropped onto a floor made up of planks of wood. The needle may or may not intersect one of the joints between the planks. A single throw of the needle is called an experiment or trial. There are two possible outcomes. Either the needle intersects a joint(s) or it lands between the joints. By repeating the experiment a large number of times, the probability $$\mathbf{P}$$ of a particular outcome or event can be calculated. Let $$A$$ be the event "needle intersects a joint". Let $$N\left(A\right)$$ be the number of occurrences of $$A$$ over $$n$$ trials. As $$n \to \infty$$, $$N\left(A\right) / n$$ converges to the probability that $$A$$ occurs, $$\mathbf{P}\left(A\right)$$, on any particular trial. On occasions, a probability can be assigned to an outcome without experiment. Section 4.4 describes such an approach, attributable to Buffon (1777), to the needle throwing experiment. The set of all possible outcomes of an experiment is called the sample space and is denoted by $$\Omega$$. A random variable is a function $$X : \Omega \to \mathbb{R}$$. Uppercase letters will be used to represent generic random variables, whilst lowercase letters will be used to represent possible numerical values of these variables. To describe the probability of possible values of $$X$$, consider the following definition. The distribution function of a random variable $$X$$ is the function $$F_X : \mathbb{R} \to \left[0, 1\right]$$ given by $$F_X\left(x\right) = \mathbf{P}\left(X \leq x\right)$$. 4.1.1 Discrete random variables The random variable $$X$$ is discrete if it takes values in some countable subset $$\left\{x_1, x_2, ...\right\}$$, only, of $$\mathbb{R}$$. The distribution function of such a random variable has jump discontinuities at the values $$x_1, x_2, ...$$ and is constant in between. 
The function $$f_X : \mathbb{R} \to \left[0, 1\right]$$ given by $$f_X\left(x\right) = \mathbf{P}\left(X = x\right)$$ is called the (probability) mass function of $$X$$. The mean value, or expectation, or expected value of $$X$$ with mass function $$f_X$$, is defined to be \begin{align} \mathbf{E}\left(X\right) &= \sum\limits_{x} x f_X\left(x\right) \\ &= \sum\limits_{x} x \mathbf{P}\left(X = x\right). \tag{4.2} \end{align} The expected value of $$X$$ is often written as $$\mu$$. It is often of great interest to measure the extent to which a random variable $$X$$ is dispersed. The variance of $$X$$ or $$\mathrm{Var}\left(X\right)$$ is defined as follows: $$\mathrm{Var}\left(X\right) = \mathbf{E}\left(\left(X - \mathbf{E}\left(X\right)\right)^2\right). \tag{4.3}$$ The variance of $$X$$ is often written as $$\sigma^2$$, while its positive square root is called the standard deviation. Since $$X$$ is discrete, (4.3) can be re-expressed accordingly: \begin{align} \sigma^2 &= \mathrm{Var}\left(X\right) \\ &= \mathbf{E}\left(\left(X - \mu\right)^2\right) \\ &= \sum\limits_{x}\left(x - \mu\right)^2 f_X\left(x\right). \tag{4.4} \end{align} In the special case where the mass function $$f_X\left(x\right)$$ is constant and $$X$$ takes $$n$$ real values, (4.4) reduces to a well known equation determining the variance of a set of $$n$$ numbers: $$\sigma^2 = \frac{1}{n} \sum\limits_{x}\left(x - \mu\right)^2. \tag{4.5}$$ Events $$A$$ and $$B$$ are said to be independent if and only if the incidence of $$A$$ does not change the probability of $$B$$ occurring. An equivalent statement is $$\mathbf{P}\left(A \cap B\right) = \mathbf{P}\left(A\right)\mathbf{P}\left(B\right)$$. Similarly, the discrete random variables $$X$$ and $$Y$$ are called independent if the numerical value of $$X$$ does not affect the distribution of $$Y$$. In other words, the events $$\left\{X = x\right\}$$ and $$\left\{Y = y\right\}$$ are independent for all $$x$$ and $$y$$.
The joint distribution function $$F_{X, Y} : \mathbb{R}^2 \to \left[0, 1\right]$$ of $$X$$ and $$Y$$ is given by $$F_{X, Y}\left(x, y\right) = \mathbf{P}\left(X \leq x \text{ and } Y \leq y\right)$$. Their joint mass function $$f_{X, Y} : \mathbb{R}^2 \to \left[0, 1\right]$$ is given by $$f_{X, Y}\left(x, y\right) = \mathbf{P}\left(X = x \text{ and } Y = y\right)$$. $$X$$ and $$Y$$ are independent if and only if $$f_{X, Y}\left(x, y\right) = f_X\left(x\right)f_Y\left(y\right)$$ for all $$x, y \in \mathbb{R}$$. Consider an archer, shooting arrows at the target shown in Figure 4.3. Suppose the archer is a very poor shot and hits the target randomly - in other words, target regions of equal area will have the same probability of being hit. For simplicity, it is assumed the archer always hits the target. If the archer is allowed to fire two arrows, the sample space $$\Omega = \left\{ \begin{array}{ccccc} AA, & AB, & AC, & AD, & AE, \\ BA, & BB, & \text{...} & DD, & DE,\\ EA, & EB, & EC, & ED, & EE \end{array} \right\}.$$ Figure 4.3: An archery target. A hit in region A scores 4 points, B scores 3 points, C scores 2 points, D scores 1 point and E scores nothing. Let the variable $$X\left(\omega\right)$$ represent the score of a particular outcome. The scoring guidelines outlined in Figure 4.3 imply $$\begin{array}{rcl} X\left(AA\right) & = & 8, \\ X\left(AB\right) = X\left(BA\right) & = & 7, \\ X\left(AC\right) = X\left(BB\right) = X\left(CA\right) & = & 6, \\ \text{...} & & \\ X\left(CE\right) = X\left(DD\right) = X\left(EC\right) & = & 2, \\ X\left(DE\right) = X\left(ED\right) & = & 1, \\ X\left(EE\right) & = & 0. \end{array}$$ Clearly $$X$$ is a discrete random variable, mapping the sample space $$\Omega$$ to scores (real numbers). The probability that an arrow hits a target region is directly proportional to the area of the region. The regions A to E are annuli with inner and outer radii as shown in Figure 4.3. 
The probabilities of hitting A to E are 1/25, 3/25, 5/25, 7/25 and 9/25 respectively. The mass function of $$X$$, $$f_X\left(x\right)$$, is then $$\begin{array}{rcl} f_X\left(0\right) & = & \mathbf{P}\left(X = 0\right) \\ & = & \mathbf{P}\left(\text{Hit E}\right)\mathbf{P}\left(\text{Hit E}\right) \\ & = & 81 / 625, \\ f_X\left(1\right) & = & \mathbf{P}\left(X = 1\right) \\ & = & 2 \cdot \mathbf{P}\left(\text{Hit D}\right)\mathbf{P}\left(\text{Hit E}\right) \\ & = & 126 / 625, \\ f_X\left(2\right) & = & \mathbf{P}\left(X = 2\right) \\ & = & 2 \cdot \mathbf{P}\left(\text{Hit C}\right)\mathbf{P}\left(\text{Hit E}\right) + \\ & & \mathbf{P}\left(\text{Hit D}\right)\mathbf{P}\left(\text{Hit D}\right) \\ & = & 139 / 625, \\ & & \text{...} \end{array}$$ From (4.2), the expected value of $$X$$ is $$\begin{array}{rcl} \mathbf{E}\left(X\right) & = & 0 \cdot 81 / 625 + \\ & & 1 \cdot 126 / 625 + \\ & & 2 \cdot 139 / 625 + \\ & & 3 \cdot 124 / 625 + \\ & & \text{...} \\ & = & 2.4. \end{array}$$ From (4.4), the variance of $$X$$ is $$\begin{array}{rcl} \mathrm{Var}\left(X\right) & = & 2.4^2 \cdot 81/625 + \\ & & 1.4^2 \cdot 126/625 + \\ & & 0.4^2 \cdot 139/625 + \\ & & 0.6^2 \cdot 124/625 + \\ & & \text{...} \\ & = & 2.72. \end{array}$$ The distribution function of $$X$$, $$F_X\left(x\right)$$, is then $$\begin{array}{rcccl} F_X\left(0\right) & = & \mathbf{P}\left(X \leq 0\right) & = & f_X\left(0\right), \\ F_X\left(1\right) & = & \mathbf{P}\left(X \leq 1\right) & = & f_X\left(1\right) + f_X\left(0\right), \\ F_X\left(2\right) & = & \mathbf{P}\left(X \leq 2\right) & = & f_X\left(2\right) + f_X\left(1\right) + \\ & & & & f_X\left(0\right), \\ & & & & \text{...} \end{array}$$ The distribution function $$F_X\left(x\right)$$ is shown in Figure 4.4. Figure 4.4: The distribution function $$F_X$$ of $$X$$ for the archery target. 
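The mass function, expectation and variance of the archery example can be reproduced exactly with a short sketch (an illustration added here, using exact rational arithmetic; the region and probability tables are taken directly from the text):

```python
from fractions import Fraction
from itertools import product

# Score per region and probability of hitting it.  The regions are
# annuli of radii 1..5, so the areas - and hence the probabilities -
# are in the ratio 1 : 3 : 5 : 7 : 9, as stated above.
score = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0}
prob = {"A": Fraction(1, 25), "B": Fraction(3, 25), "C": Fraction(5, 25),
        "D": Fraction(7, 25), "E": Fraction(9, 25)}

# Mass function of the two-arrow score X over the sample space Omega.
f = {}
for r1, r2 in product(score, repeat=2):
    x = score[r1] + score[r2]
    f[x] = f.get(x, Fraction(0)) + prob[r1] * prob[r2]

mean = sum(x * p for x, p in f.items())               # E(X), eq. (4.2)
var = sum((x - mean) ** 2 * p for x, p in f.items())  # Var(X), eq. (4.4)
```

This gives $$f_X\left(0\right) = 81/625$$, $$\mathbf{E}\left(X\right) = 2.4$$ and $$\mathrm{Var}\left(X\right) = 2.72$$, in agreement with the calculations above.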
4.1.2 Continuous random variables

The random variable $$X$$ is continuous if its distribution function can be expressed as \begin{align} F_X\left(x\right) &= \mathbf{P}\left(X \leq x\right) \\ &= \int_{-\infty}^{x} f_X\left(u\right) \mathrm{d}u, x \in \mathbb{R}, \tag{4.6} \end{align} for some integrable function $$f_X : \mathbb{R} \to \left[0, \infty\right)$$. In this case, $$f_X$$ is called the (probability) density function of $$X$$. The fundamental theorem of calculus and (4.6) imply \begin{align} \mathbf{P}\left(a \leq X \leq b\right) &= F_X\left(b\right) - F_X\left(a\right) \\ &= \int_a^b f_X\left(x\right) \mathrm{d}x. \end{align} $$f_X\left(x\right)\delta x$$ can be thought of as the element of probability $$\mathbf{P}\left(x \leq X \leq x + \delta x\right)$$ where \begin{align} \mathbf{P}\left(x \leq X \leq x + \delta x\right) &= F_X\left(x + \delta x\right) - F_X\left(x\right) \\ &\approx f_X\left(x\right) \delta x. \tag{4.7} \end{align} If $$B_1$$ is a measurable subset of $$\mathbb{R}$$ (such as a line segment or union of line segments) then $$\mathbf{P}\left(X \in B_1\right) = \int_{B_1} f_X\left(x\right) \mathrm{d}x \tag{4.8}$$ where $$\mathbf{P}\left(X \in B_1\right)$$ is the probability that the outcome of this random choice lies in $$B_1$$. The expected value (or expectation) of $$X$$ with density function $$f_X$$ is $$\mu = \mathbf{E}X = \int_{-\infty}^{\infty} x f_X\left(x\right) \mathrm{d}x \tag{4.9}$$ whenever this integral exists. The variance of $$X$$ or $$\mathrm{Var}\left(X\right)$$ is defined by the already familiar (4.3). Since $$X$$ is continuous, (4.3) can be re-expressed accordingly: \begin{align} \sigma^2 &= \mathrm{Var}\left(X\right) \\ &= \mathbf{E}\left(\left(X - \mu\right)^2\right) \\ &= \int_{-\infty}^{\infty}\left(x - \mu\right)^2 f_X\left(x\right) \mathrm{d}x.
\tag{4.10} \end{align} The joint distribution function of the continuous random variables $$X$$ and $$Y$$ is the function $$F_{X, Y} : \mathbb{R}^2 \to \left[0, 1\right]$$ given by $$F_{X, Y}\left(x, y\right) = \mathbf{P}\left(X \leq x, Y \leq y\right)$$. $$X$$ and $$Y$$ are (jointly) continuous with joint (probability) density function $$f_{X, Y} : \mathbb{R}^2 \to \left[0, \infty\right)$$ if $$F_{X, Y}\left(x, y\right) = \int_{-\infty}^y \int_{-\infty}^x f_{X, Y}\left(u, v\right) \mathrm{d}u \mathrm{d}v$$ for each $$x, y \in \mathbb{R}$$. The fundamental theorem of calculus suggests the following result $$\mathbf{P}\left(a \leq X \leq b, c \leq Y \leq d\right)$$ $$= F_{X, Y}\left(b, d\right) - F_{X, Y}\left(a, d\right)$$ $$- F_{X, Y}\left(b, c\right) + F_{X, Y}\left(a, c\right)$$ $$= \int_{c}^{d} \int_{a}^{b} f_{X, Y}\left(x, y\right) \mathrm{d}x \mathrm{d}y.$$ $$f_{X, Y}\left(x, y\right) \delta x \delta y$$ can be thought of as the element of probability $$\mathbf{P}\left(x \leq X \leq x + \delta x, y \leq Y \leq y + \delta y\right)$$ where $$\mathbf{P}\left(x \leq X \leq x + \delta x, y \leq Y \leq y + \delta y\right)$$ $$= F_{X, Y}\left(x + \delta x, y + \delta y\right) - F_{X, Y}\left(x, y + \delta y\right)$$ $$- F_{X, Y}\left(x + \delta x, y\right) + F_{X, Y}\left(x, y\right)$$ $$\approx f_{X, Y}\left(x, y\right) \delta x \delta y. \tag{4.11}$$ If $$B_2$$ is a measurable subset of $$\mathbb{R}^2$$ (such as a rectangle or union of rectangles and so on) then $$\mathbf{P}\left(\left(X, Y\right) \in B_2\right) = \int \int_{B_2} f_{X, Y}\left(x, y\right) \mathrm{d}x \mathrm{d}y \tag{4.12}$$ where $$\mathbf{P}\left(\left(X, Y\right) \in B_2\right)$$ is the probability that the outcome of this random choice lies in $$B_2$$. $$X$$ and $$Y$$ are independent if and only if $$\left\{X \leq x\right\}$$ and $$\left\{Y \leq y\right\}$$ are independent events for all $$x, y \in \mathbb{R}$$.
If $$X$$ and $$Y$$ are independent, $$F_{X, Y}\left(x, y\right) = F_X\left(x\right) F_Y\left(y\right)$$ for all $$x, y \in \mathbb{R}$$. An equivalent condition is $$f_{X, Y}\left(x, y\right) = f_X\left(x\right)f_Y\left(y\right)$$ whenever $$F_{X, Y}$$ is differentiable at $$\left(x, y\right)$$. An example of a continuous random variable can be found in the needle throwing described earlier. A needle is thrown onto the floor and lands with random angle $$\omega$$ relative to some fixed axis. The sample space $$\Omega = \left[0, 2\pi\right)$$. The angle $$\omega$$ is equally likely in the real interval $$\left[0, 2\pi\right)$$. Therefore, the probability that the angle lies in some interval is directly proportional to the length of the interval. Consider the continuous random variable $$X\left(\omega\right) = \omega$$. The distribution function of $$X$$, shown graphically in Figure 4.5, is $$\begin{array}{rclcl} F_X\left(0\right) & = & \mathbf{P}\left(X \leq 0\right) & = & 0, \\ F_X\left(x\right) & = & \mathbf{P}\left(X \leq x\right) & = & x / 2\pi \\ & & & & \left(0 \leq x \lt 2\pi\right), \\ F_X\left(2\pi\right) & = & \mathbf{P}\left(X \leq 2\pi\right) & = & 1. \\ \end{array}$$ The density function, $$f_X$$, of $$F_X$$ is as follows: $$F_X\left(x\right) = \int_{-\infty}^{x} f_X\left(u\right) \mathrm{d}u$$ where $$f_X\left(u\right) = \begin{cases} 1/2\pi & \text{if } 0 \leq u \lt 2\pi \\ 0 & \text{otherwise} \end{cases}$$ Figure 4.5: The distribution function $$F_X$$ of $$X$$ for the needle.

4.1.3 Further properties of random variables

In this section, fundamental results regarding the expectation and variance of random variables (discrete or continuous) are stated and proved. For some constant $$c \in \mathbb{R}$$ and random variable $$X$$, $$\mathbf{E}\left(cX\right) = c\mathbf{E}\left(X\right). \tag{4.13}$$ The proof of (4.13) is trivial.
From (4.2), for $$X$$ discrete \begin{align} \mathbf{E}\left(cX\right) &= \sum_x cx \cdot f_X\left(x\right) \\ &= c \sum_x x \cdot f_X\left(x\right) \\ &= c \mathbf{E}\left(X\right), \end{align} while from (4.9), for $$X$$ continuous \begin{align} \mathbf{E}\left(cX\right) &= \int_{-\infty}^{\infty} cx \cdot f_X\left(x\right) \mathrm{d}x \\ &= c \int_{-\infty}^{\infty} x \cdot f_X\left(x\right) \mathrm{d}x \\ &= c \mathbf{E}\left(X\right). \end{align} $$\tag*{\blacksquare}$$ For discrete or continuous random variables $$X$$ and $$Y$$, $$\mathbf{E}\left(X + Y\right) = \mathbf{E}\left(X\right) + \mathbf{E}\left(Y\right). \tag{4.14}$$ The proof of (4.14) is as follows. Suppose $$X$$ and $$Y$$ have joint mass function $$f_{X,Y} : \mathbb{R}^2 \to \left[0, 1\right]$$ given by $$f_{X,Y}\left(x, y\right) = \mathbf{P}\left(X = x \text{ and } Y = y\right)$$. Then, for $$X$$ and $$Y$$ discrete, an extension of (4.2) gives $$\begin{array}{rcl} \mathbf{E}\left(X + Y\right) & = & \sum_x \sum_y \left(x + y\right) \cdot f_{X, Y}\left(x, y\right) \\ & = & \sum_x \sum_y x \cdot f_{X, Y}\left(x, y\right) + \\ & & \sum_x \sum_y y \cdot f_{X, Y}\left(x, y\right) \\ & = & \sum_x x \sum_y f_{X, Y}\left(x, y\right) + \\ & & \sum_y y \sum_x f_{X, Y}\left(x, y\right) \\ & = & \sum_x x \cdot f_X\left(x\right) + \sum_y y \cdot f_Y\left(y\right) \\ & = & \mathbf{E}X + \mathbf{E}Y. \end{array}$$ Noting (4.9), for $$X$$ and $$Y$$ continuous the proof begins $$\mathbf{E}\left(X + Y\right) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left(x + y\right) \cdot f_{X, Y}\left(x, y\right) \mathrm{d}x \mathrm{d}y$$ where $$f_{X, Y}\left(x, y\right) : \mathbb{R}^2 \to \left[0, \infty\right)$$ is the joint density function of $$X$$ and $$Y$$. The proof then proceeds in a similar way to the discrete case with summations replaced by integrations. 
$$\tag*{\blacksquare}$$ For discrete or continuous independent random variables $$X$$ and $$Y$$, $$\mathbf{E}\left(XY\right) = \mathbf{E}\left(X\right) \cdot \mathbf{E}\left(Y\right). \tag{4.15}$$ The proof of (4.15) is first presented for discrete random variables $$X$$ and $$Y$$. Let $$X$$ and $$Y$$ have joint mass function $$f_{X,Y} : \mathbb{R}^2 \to \left[0, 1\right]$$ given by $$f_{X,Y}\left(x, y\right) = \mathbf{P}\left(X = x \text{ and } Y = y\right)$$. If $$X$$ and $$Y$$ are independent, then (by definition) the probability of $$Y$$ occurring is not affected by the occurrence or non-occurrence of $$X$$. For $$X$$ and $$Y$$ independent, \begin{align} \mathbf{P}\left(X = x \text{ and } Y = y\right) &= \mathbf{P}\left(\left(X = x\right) \cap \left(Y = y\right)\right) \\ &= \mathbf{P}\left(X = x\right) \cdot \mathbf{P}\left(Y = y\right), \end{align} so that $$f_{X,Y}\left(x, y\right) = f_X\left(x\right) \cdot f_Y\left(y\right)$$. Therefore, \begin{align} \mathbf{E}\left(XY\right) &= \sum_x \sum_y xy \cdot f_{X, Y}\left(x, y\right) \\ &= \sum_x \sum_y xy \cdot f_X\left(x\right) f_Y\left(y\right) \\ &= \sum_x \left\{ x \cdot f_X\left(x\right) \cdot \sum_y y \cdot f_Y\left(y\right) \right\} \\ &= \sum_x \left\{ x \cdot f_X\left(x\right) \cdot \mathbf{E}Y\right\} \\ &= \mathbf{E}X \cdot \mathbf{E}Y. \end{align} For $$X$$ and $$Y$$ continuous, the proof begins $$\mathbf{E}\left(XY\right) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} xy \cdot f_{X, Y}\left(x, y\right) \mathrm{d}x \mathrm{d}y$$ where $$f_{X, Y} : \mathbb{R}^2 \to \left[0, \infty\right)$$ is the joint density function of $$X$$ and $$Y$$. The proof then proceeds in a similar way to the discrete case with summations replaced by integrations. $$\tag*{\blacksquare}$$ For the discrete or continuous random variable $$X$$, \begin{align} \sigma^2 &= \mathbf{E}\left(\left(X - \mathbf{E}\left(X\right)\right)^2\right) \\ &= \mathbf{E}\left(X^2\right) - \left(\mathbf{E}\left(X\right)\right)^2.
\tag{4.16} \end{align} The proof of (4.16) holds for $$X$$ discrete or continuous. As a shorthand notation, let $$\mu = \mathbf{E}\left(X\right)$$. Then, $$\begin{array}{rcl} \mathbf{E}\left(\left(X - \mu\right)^2\right) & = & \mathbf{E}\left(X^2 - 2 \cdot \mu \cdot X + \mu^2\right) \\ & = & \mathbf{E}\left(X^2\right) - 2 \cdot \mu \cdot \mathbf{E}\left(X\right) + \mu^2 \\ & & \text{from (4.13) and (4.14)} \\ & = & \mathbf{E}\left(X^2\right) - \mu^2 \\ & = & \mathbf{E}\left(X^2\right) - \left(\mathbf{E}\left(X\right)\right)^2. \end{array}$$ $$\tag*{\blacksquare}$$ For the discrete or continuous random variable $$X$$ and the constant $$c \in \mathbb{R}$$, $$\mathrm{Var}\left(cX\right) = c^2 \cdot \mathrm{Var}\left(X\right). \tag{4.17}$$ The proof of (4.17) holds for $$X$$ discrete or continuous. Again, let $$\mu = \mathbf{E}\left(X\right)$$. Then, \begin{align} \mathrm{Var}\left(cX\right) &= \mathbf{E}\left(\left(cX - c\mu\right)^2\right) \\ &= \mathbf{E}\left(c^2 \cdot \left(X - \mu\right)^2\right) \\ &= c^2 \cdot \mathbf{E}\left(\left(X - \mu\right)^2\right) \\ &= c^2 \cdot \mathrm{Var}\left(X\right). \end{align} $$\tag*{\blacksquare}$$ Finally, for discrete or continuous independent random variables $$X$$ and $$Y$$, $$\mathrm{Var}\left(X + Y\right) = \mathrm{Var}\left(X\right) + \mathrm{Var}\left(Y\right). \tag{4.18}$$ The proof of (4.18) holds for $$X$$ and $$Y$$ discrete or continuous. As a shorthand notation, let $$\mu_X = \mathbf{E}\left(X\right)$$ and $$\mu_Y = \mathbf{E}\left(Y\right)$$.
Then, $$\mathrm{Var}\left(X + Y\right) = \mathbf{E}\left\{\left(\left(X + Y\right) - \left(\mu_X + \mu_Y\right)\right)^2\right\}$$ $$\begin{array}{rcl} & = & \mathbf{E}\left\{\left(\left(X - \mu_X\right) + \left(Y - \mu_Y\right)\right)^2\right\} \\ & = & \mathbf{E}\left\{\begin{array}{c}\left(X - \mu_X\right)^2 + \\ 2\left(X - \mu_X\right) \cdot \left(Y - \mu_Y\right) + \\ \left(Y - \mu_Y\right)^2\end{array}\right\} \\ & = & \mathbf{E}\left(\left(X - \mu_X\right)^2\right) + \\ & & 2\mathbf{E}\left(\left(X - \mu_X\right) \cdot \left(Y - \mu_Y\right)\right) + \\ & & \mathbf{E}\left(\left(Y - \mu_Y\right)^2\right) \\ & = & \mathrm{Var}\left(X\right) + \\ & & 2\mathbf{E}\left(\left(X - \mu_X\right) \cdot \left(Y - \mu_Y\right)\right) + \\ & & \mathrm{Var}\left(Y\right). \end{array}$$ However, from (4.15), if $$X$$ and $$Y$$ are independent random variables, then $$\mathbf{E}\left(\left(X - \mu_X\right) \cdot \left(Y - \mu_Y\right)\right)$$ $$= \mathbf{E}\left(X - \mu_X\right) \cdot \mathbf{E}\left(Y - \mu_Y\right) = 0.$$ $$\tag*{\blacksquare}$$ 4.1.4 Sampling theory As an introduction to sampling theory, consider the problem of estimating the average IQ of students attending the University of Liverpool. To test the entire group, or population, of students would take too long. Instead, it is decided that tests should be handed out to a sample of the student population. From the sample, results regarding the population can be statistically inferred. The reliability of the survey depends on whether the sample is properly chosen. IQ scores range between 0 and 200. The set of all possible scores can be represented by the sample space $$\Omega = \left\{ 0, 1, 2, ..., 200\right\}$$. Let the variable $$X\left(\omega\right) = \omega$$ represent a particular outcome after completing a test. Clearly $$X$$ is a discrete random variable. An alphabetic roll call of students is used to select a systematic sample. 
The list is first split into groups of $$k$$ students (where $$k$$ is an integer greater than 1). If the population size, $$N$$, is not a multiple of $$k$$, the last group has a smaller size than $$k$$. Next a random integer, $$r$$, between 0 and $$k - 1$$ is chosen. Students are included in the sample if their position in the roll call is congruent to $$r$$ modulo $$k$$. Let the size of the sample be $$n$$. On receipt of $$n$$ tests, each student in the sample is assigned a score, $$x_i$$, in the range 0 to 200 where $$x_i$$ is the value of a random variable $$X_i$$. The sample mean is a random variable defined by $$\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i$$ whose value is $$\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i.$$ $$X_1$$, ..., $$X_n$$ are independent random variables whose distribution functions are the same as that of the population, which has mean $$\mu$$ and variance $$\sigma^2$$. The expected value of the sample mean, $$\mathbf{E}\left(\bar{X}\right)$$, is the population mean, $$\mu$$, because $$\mathbf{E}\left(\bar{X}\right) = \frac{1}{n}\left(\sum_{i=1}^n \mathbf{E}\left(X_i\right)\right) = \frac{1}{n}\left(n\mu\right) = \mu.$$ Furthermore, $$X_1$$, ..., $$X_n$$ have variance $$\sigma^2$$ and so the variance of $$\bar{X}$$, $$\mathrm{Var}\left(\bar{X}\right)$$, is \begin{align} \mathbf{E}\left(\left(\bar{X} - \mu\right)^2\right) &= \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) \\ &= \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}\left(X_i\right) \\ &= \frac{1}{n^2} n \sigma^2 \\ &= \frac{\sigma^2}{n}. \tag{4.19} \end{align} As the sample size increases, the variation or scatter of the sample means tends to zero. From (4.5) the random variable, $$S^2$$, giving the sample variance is $$S^2 = \frac{1}{n} \sum_{i=1}^n \left(X_i - \bar{X}\right)^2.$$ It turns out that the sample variance, $$S^2$$, is not an unbiased estimator of the population variance, $$\sigma^2$$. $$S^2$$ underestimates $$\sigma^2$$ by a factor of $$(n-1)/n$$ so that $$\mathbf{E}\left(S^2\right) = \frac{n-1}{n} \sigma^2.
\tag{4.20}$$ The proof of (4.20) is as follows. Consider the term $$X_i - \bar{X} = \left(X_i - \mu\right) - \left(\bar{X} - \mu\right)$$. Then, $$\left(X_i - \bar{X}\right)^2 = \left(X_i - \mu\right)^2 - 2\left(X_i - \mu\right)\left(\bar{X} - \mu\right) + \left(\bar{X} - \mu\right)^2$$ and so \begin{align} \sum_{i=1}^n \left(X_i - \bar{X}\right)^2 &= \sum_{i=1}^n\left(X_i - \mu\right)^2 - \\ & 2 \left(\bar{X} - \mu\right) \sum_{i=1}^n\left(X_i - \mu\right) + \\ & \sum_{i=1}^n\left(\bar{X} - \mu\right)^2 \\ &= \sum_{i=1}^n\left(X_i - \mu\right)^2 - \\ & 2n\left(\bar{X} - \mu\right)^2 + \\ & n\left(\bar{X} - \mu\right)^2. \tag{4.21} \end{align} Equation (4.19) together with the expectation of (4.21) give \begin{align} \mathbf{E}\left(\sum_{i=1}^n\left(X_i - \bar{X}\right)^2\right) &= \mathbf{E}\left(\sum_{i=1}^n\left(X_i - \mu\right)^2\right) - \\ & n\mathbf{E}\left(\left(\bar{X} - \mu\right)^2\right) \\ &= n\sigma^2 - n\left(\frac{\sigma^2}{n}\right) \\ &= (n - 1)\sigma^2 \end{align} so that $$\mathbf{E}\left(S^2\right) = \frac{n-1}{n}\sigma^2$$ and $$\sigma^2 = \frac{n}{n-1}\mathbf{E}\left(S^2\right). \tag{4.22}$$ Equation (4.22) is an important result that is relevant to the experimental work of Chapters 7 and 8. It states that the population variance, $$\sigma^2$$, is equal to the expected sample variance, $$\mathbf{E}\left(S^2\right)$$, multiplied by $$n/(n - 1)$$. 4.2 Random position and orientation An orientation in the plane can be represented by points on the boundary of a circle. An orientation is simply an angle $$\mathit{\Phi} \in \left[0, 2\pi\right)$$. The orientation is said to be isotropic random (IR) if and only if the angle $$\mathit{\Phi}$$ is equally likely, i.e. uniform random (UR), in the interval $$\left[0, 2\pi\right)$$ (written $$\mathit{\Phi} \in \mathrm{UR}\left[0, 2\pi\right)$$). An equivalent statement is that the orientation, $$\mathit{\Phi}$$, is IR if and only if the density function of $$\mathit{\Phi}$$ is constant. 
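Equations (4.20)–(4.22) can be illustrated with a short simulation; a Python sketch (sample size, trial count, and distribution are arbitrary choices):

```python
import random
import statistics

random.seed(1)
n = 5            # small samples make the bias visible
trials = 100_000
sigma2 = 4.0     # population variance of the N(0, 2) population below

def s_squared(sample):
    """Sample variance with the 1/n definition of equation (4.5)."""
    xbar = sum(sample) / len(sample)
    return sum((x - xbar) ** 2 for x in sample) / len(sample)

mean_s2 = statistics.fmean(
    s_squared([random.gauss(0.0, 2.0) for _ in range(n)])
    for _ in range(trials)
)
# E(S^2) comes out near ((n - 1) / n) * sigma2 = 3.2 rather than 4.0,
# and multiplying by n / (n - 1) recovers the population variance.
```

The `n / (n - 1)` correction at the end is exactly equation (4.22).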
To determine the probability density function of $$\mathit{\Phi}$$, consider an arc on the unit circle given by $$\left\{ \mathit{\Phi} : \phi \leq \mathit{\Phi} \leq \phi + \delta\phi \right\}$$. The arc, of length $$\delta\phi$$, is shown in Figure 4.6(a). The probability of an orientation falling inside the arc is $$\mathbf{P}\left(\phi \leq \mathit{\Phi} \leq \phi + \delta\phi\right)$$ $$= \frac{\text{boundary length of arc}}{\text{boundary length of circle}}$$ $$= \frac{\delta\phi}{2\pi} \tag{4.23}$$ Equations (4.7) and (4.23) imply the density function is $$1/2\pi$$. Furthermore, the function is constant. The idea can be extended to three dimensions. Orientations in $$\mathbb{R}^3$$ can be represented by points on the surface of the unit sphere. Figure 4.6(b) shows how points on the unit sphere can be obtained by choosing $$\mathit{\Phi} \in \left[0, 2\pi\right)$$ and $$\mathit{\Theta} \in \left[0, \pi\right)$$. To determine an IR orientation, choose a point uniform randomly on the surface of the sphere (so that regions of equal area have the same probability of containing the point). It does not suffice to choose $$\mathit{\Phi}$$ and $$\mathit{\Theta}$$ uniform randomly in the intervals $$\left[0, 2\pi\right)$$ and $$\left[0, \pi\right)$$ respectively. Figure 4.7(a) shows 10,000 orientations chosen this way. The orientations cluster about the north and south poles. Figure 4.6: (a) An orientation in the plane (represented by the angle $$\phi$$). (b) An orientation in $$\mathbb{R}^3$$ (represented by the angles $$\phi$$ and $$\theta$$). Consider a surface patch on the unit sphere given by $$\left\{\left(\mathit{\Phi}, \mathit{\Theta}\right) : \phi \leq \mathit{\Phi} \leq \phi + \delta\phi, \theta \leq \mathit{\Theta} \leq \theta + \delta\theta\right\}$$. For $$\delta\phi$$ and $$\delta\theta$$ small, the patch approximates a rectangle with dimensions $$\sin\left(\theta\right)\delta\phi$$ by $$\delta\theta$$.
Suppose $$\mathit{\Phi}$$ and $$\mathit{\Theta}$$ are chosen uniform randomly in the intervals $$\left[0, 2\pi\right)$$ and $$\left[0, \pi\right)$$ respectively. The probability of the chosen orientation falling inside the patch is $$\mathbf{P}\left(\phi \leq \mathit{\Phi} \leq \phi + \delta\phi, \theta \leq \mathit{\Theta} \leq \theta + \delta\theta\right)$$ $$= \frac{\text{surface area of patch}}{\text{surface area of sphere}}$$ $$= \frac{\sin\left(\theta\right)\delta\phi\delta\theta}{4\pi} \tag{4.24}$$ Equations (4.11) and (4.24) imply the joint density function of $$\mathit{\Phi}$$ and $$\mathit{\Theta}$$ is $$\sin\left(\theta\right) / 4\pi$$. To pick IR orientations the density function must remain constant. Clearly $$\mathbf{P}\left(\delta\theta\right) = \sin\left(\theta\right)\delta\theta$$. Consider the sine-weighting or change of variable $$\theta = \arccos\left(1 - 2u\right)$$, where $$u$$ is uniform random in the interval [0, 1). Differentiating, $$\mathrm{d}\theta = \frac{2}{\sqrt{1 - \left(1 - 2u\right)^2}} \mathrm{d}u$$ and $$\sin\left(\theta\right)$$ becomes $$\sin\left(\arccos\left(1 - 2u\right)\right) = \sqrt{1 - \left(1 - 2u\right)^2}$$. Now $$\mathbf{P}\left(\delta\theta\right) = \sin\left(\theta\right)\delta\theta = 2\,\delta u$$, which is constant in $$u$$. Figure 4.7(b) shows 10,000 IR orientations. An object (such as a straight line or curve) in some domain $$D \subset \mathbb{R}^2$$ is said to be Isotropic Uniform Random (IUR) if it is IR and has UR position within $$D$$. The latter condition holds if a single identifiable point $$P$$ on the object has UR position within $$D$$. This is equivalent to asking that the coordinates of $$P$$ be chosen so that regions of equal area have the same probability of containing $$P$$. Similarly an object such as a plane in some domain $$D \subset \mathbb{R}^3$$ is said to be IUR if it is IR and has UR position within $$D$$. Figure 4.7: 10,000 orientations chosen (a) non-isotropically and (b) isotropically.
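The sine-weighted sampling of $$\mathit{\Theta}$$ can be sketched in code; a Python version (the isotropy check on the $$z$$-coordinate is our own illustration):

```python
import math
import random

random.seed(2)

def isotropic_orientation():
    """One isotropic random orientation on the unit sphere: phi is uniform
    on [0, 2*pi) and theta = arccos(1 - 2u) is the sine-weighted polar
    angle, so patches of equal area are equally likely."""
    phi = random.uniform(0.0, 2.0 * math.pi)
    theta = math.acos(1.0 - 2.0 * random.random())
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

points = [isotropic_orientation() for _ in range(100_000)]
# Isotropy check: z = cos(theta) should be uniform on [-1, 1], so the band
# |z| < 0.5 (half the sphere's surface area) should catch about half the points.
frac = sum(1 for (_, _, z) in points if abs(z) < 0.5) / len(points)
```

Sampling theta uniformly on [0, pi) instead reproduces the polar clustering of Figure 4.7(a).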
# Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) $\bf{\text{Solution Outline:}}$ To solve the given equation, $|n-3|=|3-n| ,$ use the definition of absolute value equality. Then use the properties of equality to isolate the variable. $\bf{\text{Solution Details:}}$ Since $|x|=|y|$ implies $x=y \text{ or } x=-y,$ the equation above is equivalent to \begin{array}{l}\require{cancel} n-3=3-n \\\\\text{OR}\\\\ n-3=-(3-n) .\end{array} Solving each equation results in \begin{array}{l}\require{cancel} n-3=3-n \\\\ n+n=3+3 \\\\ 2n=6 \\\\ n=\dfrac{6}{2} \\\\ n=3 \\\\\text{OR}\\\\ n-3=-(3-n) \\\\ n-3=-3+n \\\\ 0=0 \text{ (TRUE)} .\end{array} Since the second equation reduced to a TRUE statement for every $n,$ the solution set is the set of all real numbers.
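The conclusion can be spot-checked numerically, since $|x|=|-x|$ for every real $x$; a quick Python sketch (the sample values are arbitrary):

```python
# |n - 3| = |3 - n| for every real n, because 3 - n = -(n - 3) and |x| = |-x|.
samples = [-10.0, -3.5, 0.0, 1e-9, 2.9, 3.0, 3.1, 42.0]
all_equal = all(abs(n - 3) == abs(3 - n) for n in samples)
```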
# Tag Info 18 An unloaded power transformer is a pure inductance. As you increase the load, the power factor improves. So, take a Variac, plug an incandescent light into it, and you should be able to dial up any power factor you like. 8 The cheaper LED lamps - the ones that use a capacitive dropper rather than a switch-mode power supply - can have a very poor power factor. The power factor could be as low as 0.2. Buy a bunch of them, and wire them in parallel to get a significant current draw. 5 Take a look at figure 6 on that page (I've added the red lines): - So, when the input supply voltage is constant and the output voltage and load current is constant (i.e. the circuit is operating in "steady state"), the two current changes (during inductor charge and discharge) are equal. This mode of operation is called continuous conduction ... 4 Here, "steady state" means periodic steady state. In other words, all node voltages and branches currents have waveforms of the form f(t) = f(t+kT), where k is an integer and T is the period. During the periodic steady state, the (ideal) inductor current must increase linearly when a positive voltage is being applied and decrease linearly when a ... 3 In this context, steady state means that the circuit has completed the start up phase, and if left undisturbed, it will continue to operate in the future as it does now. In other words, the behavior of the circuit will look the same now, 10 seconds from now, or 10 hours from now. Of course, a circuit will not in reality continue to operate the same way ... 3 You can buy inductors with a wide range of current and inductance ratings. An unloaded induction motor has a low power factor. If you use a single-phase motor, you can probably disconnect the capacitor if it has one. Unloaded, it should start with a little manual twist of the shaft. You may have difficulty with the distorted magnetizing current waveform if ... 2 Poor power factor in practice is usually inductance-based. 
On the other hand, inductors for testing purposes look rather bulky and expensive. If you are OK with capacitive load, the capacitors for asynchronous motors come in various capacitances and are rated above mains voltage. They are also rather cheap. You can combine them with incandescent lamps, space ... 2 In general, you can run this at 37V input and 380mA. This would still comply with the absolute maximum ratings where neither a maximum current nor maximum power dissipation are specified directly. But: as you might have seen, this device has some protection features, like short circuit and overtemperature protection. Those implicitly limit both the current (&... 2 How much current it can pass is one property, and how much it can heat up by the power dissipated is another property. You can input 5V and make 380mA 3.3V, and the regulator would heat up only at 0.65W, which is certainly doable. If you input 37V, and output output 3.3V at 380mA, the poor chip has to dissipate 12.8W, which is completely absurd amount of ... 2 Power = Voltage * Current I'm assuming that you have a continuous forward current of 400 mA. Note that these diodes are very small so they heat up very quickly. So even if the current is 400 mA for a short time, that would need to be really short like less than 1 second if you wait a long time (many minutes) before doing that again. Now look up the forward ... 2 It's not that 'a lot of commercial transformers like sinusoid'. It's simply that every transformer has issues (usually heating or vibration) with frequencies over the one it is designed for. So a 50Hz transformer would have to handle 100Hz, 150Hz, 200Hz and so on. The EN 61xxx standard (in EU, the US would probably have some FCC related stuff) mandate a ... 2 I'm confused though as on the datasheet there is no voltage mentioned, only a max phase current of 2.1A and same holding torque. 
But on the line right under the amps/phase of 2.1A, there's a line that says the coil resistance is $1.6\Omega$. $2.1\mathrm A \cdot 1.6\Omega = 3.36 \mathrm V$, so there you go. I know that voltage applied to a stepper, ... 2 In general, for a circuit where the output is switched on and off, how can we analyze the power factor? In general, the power factor in such a circuit is not the power factor defined as the cosine of the angle between the current and voltage but as the real power divided by the apparent power. In this case, the apparent power is the total RMS voltage ... 1 However when the micro-controller are being manufactured what determines their operational voltage? All ICs are fabricated in a certain manufacturing process. That process is the recipe to make the transistors and other components. That process together with the design of the transistors (large or small) also determines what the maximum allowed voltages are ... 1 Both cases are protected against reverse voltage. That's what Q1 is there for. That will not change just by swapping Q3. The n-mosfet is definitely more suited for your purpose, since the bipolar transistor requires some current to be polarized; if you want to have low quiescent currents, then the mosfet is the way to go. 1 You can replace it with a MOS transistor. No problem. The BJT had an internal resistive divider to increase the threshold where the switch "flips". I am wondering why it was implemented inside the BJT. The transistor is less versatile this way. But I also do not see any reason why a BJT should be used here. The current it has to supply is very low, ... 1 An isolation transformer is just a normal transformer that has the same input voltage as output voltage. It is also usually fused or has a circuit breaker. You can make one by putting two transformers back-to-back such as 120:24 and 24:120. It's not usually a great thing to do unless your current requirements are modest and you have no other option. Good ...
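The dissipation arithmetic in the linear-regulator answers above is worth capturing as a one-liner; a Python sketch (the helper name is ours):

```python
def linear_regulator_dissipation(v_in, v_out, i_out):
    """Power burned in a linear regulator's pass element: the full load
    current flows across the input-output voltage drop."""
    return (v_in - v_out) * i_out

# The two cases discussed above (3.3 V output at 380 mA):
p_low = linear_regulator_dissipation(5.0, 3.3, 0.380)    # about 0.65 W: fine
p_high = linear_regulator_dissipation(37.0, 3.3, 0.380)  # about 12.8 W: far too much
```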
1 All torque is proportional to current. Back-EMF, which rises with velocity, reduces the effective applied voltage, so torque drops slightly with rising step rates. Voltage sets the current via $I = V/R_{DC}$ (with $R_{DC} = 1.6\,\Omega$) at standstill, and via $V = L\,\mathrm{d}I/\mathrm{d}t$ (with $L = 3\,\mathrm{mH}$) dynamically, i.e. $I$ is the integral of the applied voltage. 1 But why do some devices have a current limit and some a power limit? The current limit will be determined by the maximum allowable current density in the bonding wires and internal components of the chip. As current density increases so will heating and there has to be a physical limit to that. The power limit will also be due to thermal considerations but ... 1 You don't provide any details about your FTDI board. But if a USB to serial converter has a 5V pin, there is no reason to assume it is a power input. The 5V pin would be expected to give out the 5V USB supply provided by the PC. When that is connected to the 5V node on the Nucleo it will power it up, the same as connecting it directly to USB from the ... 1 "Off the shelf" requests for specific devices make for shopping questions, which are forbidden, but it's ok to ask what the name of such a device is. In this case, "Power Factor Load Bank", "Resistive/Reactive Load Bank" or separate resistive and reactive banks will work for you. They look expensive. A large inductor has a ...
# Large cardinals and constructible universe We know that if $V=L$ holds, then $|\cal{P}(\omega)|=|\cal{P}(\omega)\cap \textrm{L}|=\aleph_1$ whereas, in the presence of a measurable cardinal (in fact, even Ramsey) $|\cal{P}(\omega)\cap \textrm{L}|=\aleph_0$. I remark that the cardinalities are of course computed in (the corresponding) $V$. The first is just the fact that the constructible universe satisfies CH, while the second has to do with the fact that in the presence of a measurable, $\omega_1^{L}<\omega_1$, i.e. the existence of large cardinals makes the relative $\omega_1^{L}$ "drop" below its "maximum possible" value (which is attained, if you want, in the "extreme case" when $V=L$). My question is, what can we say, in general, about the behaviour of $\omega_1^{L}$ given axioms of increasing strength above (or equal to, in strength) $V\neq L$? In particular, what happens if we just assume $V\neq L$? - Each of the following implies that (the true) $\omega_1$ is inaccessible in $L$, and hence that there are only countably many constructible reals: • The proper forcing axiom • There is a Ramsey cardinal • $0^\#$ exists • All projective sets are Lebesgue measurable • All $\Sigma^1_3$-sets are Lebesgue measurable (EDIT: These are just some of the well-known examples that came to my mind. This list is neither exhaustive nor canonical.) The mere existence of a nonconstructible set, or even a nonconstructible real, does not imply that $\omega_1^L$ is countable. There are many forcing notions in $L$ which do not collapse $\omega_1$: adding one or many Cohen reals, destroying Souslin trees, etc. Each such forcing (over L) results in a model where $\omega_1=\omega_1^L$. In fact, "Martin's axiom plus continuum is arbitrarily large" is consistent with $\omega_1^L=\omega_1$. (But also with $\omega_1^L<\omega_1$.)
ADDED: Preserving $\aleph_1$ of the ground model (which may or may not be the constructible universe $L$) is a key component in many independence proofs concerned with the theory of the reals. The "countable chain condition", which is enjoyed by all the forcings I mentioned above, is a property of forcing notions that guarantees preservation of $\aleph_1$; there are several other (weaker) properties which also suffice, most prominently (Baumgartner's) "Axiom A" and (Shelah's) "properness". - Also, my favorite, if every uncountable $\Pi^1_1$-set has a perfect subset then $\aleph_1$ is inaccessible in $L$. – Dave Marker Jul 12 '11 at 14:06 Thanks! This result (of Solovay? or is it older?) predates Shelah's theorem on $\Sigma^1_3$-measurability. Was this perhaps the first large cardinal whose existence follows from a statement in descriptive set theory? – Goldstern Jul 12 '11 at 21:29 According to Kanamori's book (p.135) Specker gets at least part of the credit for the result mentioned by Dave. – Ali Enayat Jul 12 '11 at 23:55 Thanks a lot everybody! – kvagk Jul 13 '11 at 10:40
## Section 6.2 Lead–Sheet Symbols

Lead–sheet symbols (also known as “lead–sheet notation” and “lead–sheet chord symbols”) are often used as shorthand for chords in popular music and jazz. These symbols allow a guitarist or pianist to choose how to “voice” the chords, i.e., how they want to arrange the notes. Lead–sheet symbols for triads communicate the root and quality of a chord.

| Lead–Sheet Symbol | Chord Quality | Notes in the Chord |
| --- | --- | --- |
| $\left.\text{F}\right.$ | major | $\text{F}$–$\text{A}$–$\text{C}$ |
| $\left.\text{G}\text{m}\right.$ | minor | $\text{G}$–$\text{B}^♭$–$\text{D}$ |
| $\left.\text{D}^{\circ}{}\right.$ | diminished | $\text{D}$–$\text{F}$–$\text{A}^♭$ |
| $\left.\text{C}{+}\right.$ | augmented | $\text{C}$–$\text{E}$–$\text{G}^♯$ |

Here is a musical example with lead–sheet symbols and guitar tablature. As you can see in the example above, major triads are represented by an uppercase letter ($\left.\text{A}\right.$, $\left.\text{E}\right.$, and $\left.\text{D}\right.$) while minor triads are represented with the root in uppercase followed by a lowercase “m” (e.g., $\left.\text{F}^♯{}\text{m}\right.$). Diminished triads are represented by including the diminished symbol ($\left.\text{}^{\circ}{}\right.$) after the chord root (e.g., $\left.\text{C}^{\circ}{}\right.$) while augmented triads are represented by including the augmented symbol after the root ($\left.\text{C}{+}\right.$).
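The root-plus-quality scheme lends itself to a small program. A Python sketch (a hypothetical helper, not from the text; it spells every note with sharps for brevity, so $\text{B}^♭$ appears as A# and $\text{A}^♭$ as G#; a fuller implementation would handle enharmonic spelling):

```python
# Triads as semitone offsets above the root. Sharp-only spelling is an
# assumption made for brevity; B-flat comes out as A#, A-flat as G#.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
QUALITY_INTERVALS = {
    "major":      (0, 4, 7),
    "minor":      (0, 3, 7),
    "diminished": (0, 3, 6),
    "augmented":  (0, 4, 8),
}

def triad(root, quality):
    """Spell the chord tones for a lead-sheet root and quality."""
    start = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(start + step) % 12] for step in QUALITY_INTERVALS[quality]]

# F major -> F, A, C and C augmented -> C, E, G#, matching the table above.
```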
The algebra of boolean satisfiability They who are acquainted with the present state of the theory of Symbolical Algebra, are aware, that the validity of the processes of analysis does not depend upon the interpretation of the symbols which are employed, but solely upon the laws of their combination. The Mathematical Analysis of Logic, George Boole (1847) Many roads lead to boolean satisfiability. In spite of its theoretical intractability and because of its surprising practical feasibility, the problem has acquired a central importance in logic and computer science. It is easy to state: given a Boolean formula $f$, does there exist an interpretation that satisfies $f$? In other words, it is the problem of determining if there exists an assignment of boolean values to the variables of $f$ that makes it true. Boolean satisfiability, which we will just write as SAT from now on, is the paradigmatic NP-complete problem. Despite this theoretical barrier, SAT-solving has undergone a revolution in this millennium, with modern solvers now able to handle instances containing hundreds of thousands or even millions of literals. What is there to say about SAT from an algebraic perspective? Surely everything has already been said a long time ago. George Boole worked out the algebraic laws of classical propositional logic in the 19th century. What new ideas could we possibly contribute here? Certainly, most of what I have written below will look like a reformulation of old ideas, in a new language. But I am writing them here in the hope that others with similar interests find them useful and beautiful. This post is a teaser for a preprint with Tao Gu and Fabio Zanasi. I am also grateful for many helpful discussions with Guillaume Boisseau about the equational theory. (Actually, Guillaume has also started a Youtube channel on diagrammatic algebra, so go check it out!)
SAT is not algebraic Let's start with a remark that will be obvious to many, but was not for me at first: SAT is not a problem that can be stated in the algebraic theory of boolean algebras. Let me explain. An algebraic theory is a logical language in which we can form terms using variables and $n$-ary function symbols (including constants, which are just $0$-ary functions), and give axioms as equations between terms with free variables. A model of an algebraic theory is simply a set equipped with bona-fide functions of the appropriate arity interpreting each of the function symbols and satisfying the required axioms. Thanks to Boole, we know that classical propositional logic can be formulated as an algebraic theory, whose models are what we now call boolean algebras. There are several presentations of the algebraic theory of boolean algebras. One possible choice uses two binary function symbols $\lor,\land$ for disjunction and conjunction, one unary function symbol $\bar{\cdot}$ for negation, and two constants $0,1$. The associated axioms are well-known and I won't list them all here. One example is that the conjunction of the same variable appearing twice, in positive and negative form, is equal to false: $x\land \bar{x} = 0$ This is the algebraic counterpart of the propositional inference rule that allows us to derive a contradiction from... well, contradictory assumptions. Note that we can think of the free variables appearing in an axiom as implicitly universally quantified: $\forall x (x\land \bar{x} = 0)$ This is because, in first-order logic, the two formulations make no difference in terms of what we can prove. Indeed, if we can prove a statement $P$ containing some free variable $x$ (meaning that we are not assuming anything about $x$), we can immediately derive a proof of $\forall x. P$. Conversely, if we have a proof of $\forall x. P$, we can instantiate it by substituting any term for $x$.
There are many other presentations of the algebraic theory of boolean algebras: for example, it is well-known that a single binary function denoting NAND suffices (albeit with much less intuitive axioms). What matters is that the models of these different theories are all the same. But wait – didn't I say that we are only allowed to use terms involving free variables for axioms? Satisfiability involves some existential quantifiers, so it is not clear how we could state it in an algebraic theory. This is what I meant when I wrote above that "SAT is not algebraic". Given a term $f$ in the theory of boolean algebras, containing free variables $x_1,\dots, x_n$, the logical statement that $f$ is satisfiable is $\exists x_1\dots\exists x_n (f = 1)$ which is outside of the algebraic realm. It is not difficult to see that this statement is equivalent to $\lnot(\forall x_1\dots\forall x_n (\bar{f} = 1))$ The first-order formula $\forall x_1\dots\forall x_n (\bar{f} = 1)$ can be seen as the equivalent unquantified statement $\bar{f} = 1$ which is algebraic. So if we can prove $\bar{f} = 1$, we know that $f$ is unsatisfiable. But if we cannot, this does not mean we have proven $\lnot(\forall x_1\dots\forall x_n (\bar{f} = 1))$. Not finding a proof of something is different from having a proof of its negation! Boolean formulas have different canonical forms, with different use cases. For satisfiability, formulas are typically given in conjunctive normal form (CNF), i.e. as a conjunction of disjunctions of literals (variables or negated variables). The conjuncts are typically called clauses. The dual form, disjunctive normal form (DNF) – a disjunction of conjunctions of literals – is trivial for satisfiability, since one can read all satisfying assignments of a DNF formula $f$ from its disjuncts directly. This suggests one way of dealing with SAT algebraically: use the axioms of boolean algebra to rewrite a CNF formula $f$ to DNF.
In this form, each disjunct gives a satisfying assignment of the formula, and $f$ is satisfiable iff it has a non-contradictory term (one that does not contain $x\land\lnot x$). But this approach is not so useful if all we care about is satisfiability, as it requires computing all assignments in the process, potentially writing down a number of terms exponential in the size of $f$. We don't want to compute all satisfying assignments of a given $f$, we just care about the existence or non-existence of one. This is where the existential quantification comes in. It is what allows us to forget some information in the process of looking for such an assignment. Consider for example $f =(C\lor x)\land (\lnot x\lor D)$. Then $\exists x f = C\lor D$ since we don't really care about the value of $x$ anymore – $f$ is satisfiable iff $C\lor D$ is (this is an application of the important resolution rule. We'll come back to this below). Hopefully this has convinced you that the algebraic theory of boolean algebras is not a convenient theory in which to encode SAT. Of course, we can just use a first-order existential statement to encode it, and use the full might of first-order logic to reason about SAT instances. But I'm looking for something that is more closely tailored to SAT itself, sufficiently expressive to encode the problem, but not much more. More importantly, I am looking for a genuinely algebraic treatment, in which simple equational axioms are sufficient to derive the (un)satisfiability of any given instance. I will show below that an algebraic treatment of SAT is possible, if we are willing to move to a different syntax. Unsurprisingly, this syntax will be diagrammatic. SAT is algebraic, diagrammatically Why would we need a diagrammatic syntax? As we saw in the previous section, the algebraic theory of boolean algebras is insufficiently expressive. So we're looking for some other formal system.
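The resolution step just used can be verified exhaustively. A small Python sketch, treating $C$ and $D$ as single boolean placeholders (an assumption; in the general rule they stand for arbitrary clauses):

```python
from itertools import product

def exists_x(f):
    """Existentially quantify the last argument of a boolean function."""
    return lambda *args: f(*args, False) or f(*args, True)

# f = (C or x) and (not x or D), with C and D as placeholder variables.
f = lambda c, d, x: (c or x) and ((not x) or d)
resolvent = lambda c, d: c or d

# Resolution: exists x. f is logically equivalent to C or D.
ok = all(exists_x(f)(c, d) == resolvent(c, d)
         for c, d in product([False, True], repeat=2))
```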
Existential quantification is what gives SAT a particularly relational flavour: we are not simply evaluating boolean functions, but checking that there exists some assignment for which the function evaluates to true. This is a fundamentally relational, not functional constraint. In my experience, diagrammatic calculi are particularly well suited to express relational constraints. In fact, one way to think about string diagrams for relations is as a generalisation of standard algebraic syntax to the regular fragment of first-order logic, i.e. the fragment containing truth, conjunction, and existential quantification. In this setting, string diagrams have the advantage of highlighting key structural features, such as dependencies and connectivity between different sub-terms/diagrams. We'll proceed in two steps. First I'll present a diagrammatic syntax to encode sets of boolean constraints before explaining how we can represent SAT instances in this language. Then, I will give a number of axioms that we can use to derive the (un)satisfiability of any given instance purely equationally. A diagrammatic syntax for SAT The language I'll introduce now is a graphical notation for managing sets of boolean constraints expressed in CNF. A diagram with $n$ dangling wires at the top and $m$ at the bottom is interpreted as the set of satisfying assignments of some CNF formula over the set of variables $y_1,\dots,y_m,x_1,\dots,x_n$ (which we will see how to construct below). The constraint comes from setting the corresponding CNF formula to $1$. We will keep this implicit from now on, writing $(x_1\lor \bar x_2\lor x_3)\land (x_2\lor \bar x_1)$ instead of $(x_1\lor \bar x_2\lor x_3)\land (x_2\lor \bar x_1) = 1$. We will often use the CNF formula as a shorthand for its satisfying assignments, as it is a convenient notation for this set. 
But our diagrams are interpreted as sets; if you prefer, though, you can also think of them as CNF formulas quotiented by the equivalence relation that identifies any two formulas that have the same satisfying assignments. We have a few simple generating nodes, listed below. • A plain wire is used to encode an implication between two variables, $\bar y\lor x$: $DIAGRAM$ • A white node constrains the associated variables to satisfy $\bar y_1 \lor \dots\lor \bar y_n \lor x_1 \lor \dots \lor x_m$: $DIAGRAM$ A few special cases. A white node with only one bottom leg and no top leg represents the constraint $\bar y_1$ (which forces this variable to be false) and, conversely, a white node with a single top leg but no bottom leg represents the constraint $x_1$ (which forces this variable to be true). A white node with no dangling wires at all is the contradiction, which has no satisfying assignment (and therefore represents the empty set). • A ternary black node with two bottom legs can be thought of as copying its top leg, enforcing $(\bar y_1\lor x)\land (\bar y_2\lor x)$: $DIAGRAM$ • Dually, a ternary black node with one bottom leg and two top legs is interpreted as $(\bar y \lor x_1)\land (\bar y \lor x_2)$: $DIAGRAM$ • Unary black nodes place no constraints on their associated variable: $DIAGRAM$ We also need to explain how to interpret composite diagrams. • Placing any two diagrams side by side (without connecting any of their wires) corresponds to taking the product of the two associated relations: $DIAGRAM$ At the level of formulas, the same operation corresponds to taking the conjunction of the two associated CNF formulas, which is still in CNF. • Things get a bit more interesting when we connect two wires.
In general, we can connect several wires together in a single operation, which we depict as vertical composition and interpret as relational composition: $DIAGRAM$ At the level of the formulas, this is interpreted by first identifying the two opposite literals that share the same wire, and existentially quantifying over this variable: $DIAGRAM$ If the semantics of the first diagram $c$ is given by a CNF formula of the form $C\land (C_1\lor x)\land\dots\land(C_k\lor x)$ where the $C_i$ are all the clauses in which the variable $x$ appears and the diagram $d$ is given by $D\land (D_1\lor y)\land\dots\land(D_l\lor y)$, where the $D_i$ are all the clauses in which the variable $y$ appears, then joining the $x$-wire and the $y$-wire gives $\exists z(C\land (C_1\lor z)\land\dots\land(C_k\lor z)\land D\land (D_1\lor z)\land\dots\land(D_l\lor z))$ with $z$ fresh to avoid capturing some other variable. One last thing: we're allowed to cross wires as long as we preserve the connectivity of the different parts. This is just variable naming/management. Depicting SAT instances Given these generators and the two rules to compose them, we can represent any SAT instance as a diagram. Before giving you the general procedure, let's see it at work on two simple examples. The first is $\exists x\exists y(\bar x\lor y)\land (x\lor \bar y)$. Here, we have two variables that appear as negative and positive literals. So we will need two white nodes – one for $x,\bar x$ and one for $y,\bar y$, each with two bottom wires, to encode the relationship $x\lor \bar x$ and $y\lor \bar y$: $DIAGRAM$ Each of these wires corresponds to a literal that appears precisely once in one of the two clauses. We can just connect them directly to two white nodes representing each of the two clauses in $(\bar x\lor y)\land (x\lor \bar y)$. This gives: $DIAGRAM$ For another example, let's consider the formula $\exists x\exists y(x\lor y)\land (\bar x\lor y)\land (x\lor \bar y)\land (\bar x\lor \bar y)$.
Applying the same principle as above, we need two white nodes with two bottom wires each. We now have four clauses, so we will need four white nodes, each with two top wires, one for each of the variables appearing in the relevant clause: $DIAGRAM$ Each literal appears in two clauses, so we have more wires at the bottom of the diagram than at the top. To deal with this, we can simply duplicate each of the wires to connect the corresponding literal to the appropriate clause and get the diagram encoding the whole SAT instance: $DIAGRAM$ From these two examples, it is clear how one can use the same idea to encode arbitrary SAT instances. First, juxtapose as many white nodes with two bottom wires (and no top wires) as there are variables in the formula. Then, for each clause add a white node with as many top wires as it contains literals (and no bottom wires). Finally, connect the clauses to literals, duplicating or discarding wires where necessary. This takes the following form: $DIAGRAM$ Monotonicity and negation. Some readers might have noticed something strange about our encoding of SAT: in the diagrammatic syntax, we can never enforce that a variable is the negation of another, as we do not have unrestricted access to a negation operation. Here, negation is a change of direction of a wire. This leads to an apparent mismatch between the diagrammatic and standard CNF representation of a SAT instance. A variable appearing as a positive and a negative literal in a given instance appears as a bottom and a top wire connected to some white nodes. The relationship between the two occurrences is given by the clause $\bar y\lor x$, for two different variables $y$ and $x$. This leaves the possibility of setting both variables to $0$, which satisfies $\bar y\lor x$ but does not match the intuition that these two variables should be negated versions of each other.
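For the small examples above, a brute-force satisfiability check makes a useful reference point; a Python sketch (the signed-integer clause encoding, in the style of DIMACS, is our own convention):

```python
from itertools import product

def satisfiable(cnf, n_vars):
    """Brute-force SAT: cnf is a list of clauses, each clause a list of
    signed integers (i means variable i is true, -i that it is false)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in cnf):
            return True
    return False

# First example: (not x or y) and (x or not y) -- satisfiable (take x = y).
sat1 = satisfiable([[-1, 2], [1, -2]], 2)
# Second example: all four two-literal clauses over x, y -- unsatisfiable.
sat2 = satisfiable([[1, 2], [-1, 2], [1, -2], [-1, -2]], 2)
```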
However, we can show that the assignment $y=1, x=0$ does not affect the satisfiability of the overall diagram representing the corresponding SAT instance. An issue can only arise if the assignment $y=1, x=0$ is required to make the diagram satisfiable. But this can never be the case, because $y$ always appears negatively as $\bar y$ in any clause and $x$ always appears positively. This implies that the satisfiability of the clauses in which they appear will only depend on the other variables, and therefore leaves the overall satisfiability of the diagram unaffected.

## Axioms for SAT

We don't just want to draw SAT instances as diagrams in this syntax; we would also like to reason about them directly at the level of the diagrams, without having to compile them to their CNF interpretation. In particular, we would like to derive the (un)satisfiability of a diagrammatic instance purely equationally, as we do when reasoning about standard algebraic syntax.

The axioms we'll need all take the form of equations between diagrams with potentially dangling wires. This is the diagrammatic counterpart of axioms between algebraic terms involving free variables. The difference is that a diagrammatic axiom can involve some composite diagram with wires that are not dangling. At the semantic level, this allows us to specify equations that involve existentially quantified variables, while at the syntactic level we are reasoning purely equationally. This is where, in my opinion, the power of diagrammatic algebra lies (more on this in the additional remarks at the end of the post).

Some of these axioms can look daunting for the uninitiated. But, as with algebraic theories, there is a recurring cast of familiar characters that one learns to recognise.
First of all, as I explained earlier, the diagrammatic syntax generalises the usual algebraic term syntax, so if you're familiar with the latter, you will recognise the usual suspects, like monoids, semirings, or rings, some commutative, some idempotent, etc. Diagrammatic syntax also allows us to speak about dual operations, like co-monoids, that can cohabit(!) with their standard algebraic counterparts, and interact with them in different ways. I'll introduce the cast progressively.

The copying and discarding nodes together form a commutative comonoid. This means that they satisfy mirrored versions of the associativity, unitality and commutativity axioms of commutative monoids. $DIAGRAM$ Unsurprisingly, their mirrored versions satisfy the same axioms, mirrored about the vertical axis: $DIAGRAM$

White nodes form what is called a commutative Frobenius algebra. This is a very convenient structure, which has a natural diagrammatic axiomatisation – every connected diagram of white nodes can be collapsed to just a single node, keeping the dangling wires fixed: $DIAGRAM$ This axiom is sound for our semantics because $\exists x.(C\lor x)\land (D\lor \bar x)$ is equivalent to $C\lor D$. It is thus the diagrammatic translation of the resolution rule in classical propositional logic!

Notice that when we collapse several white nodes to a single one, we may introduce loops, if the original diagram is not simply connected: $DIAGRAM$ This is interpreted as $\exists x (C\lor x\lor \bar x)$ for some clause $C$. We can always satisfy $x\lor \bar x$ and therefore the whole clause $C\lor x\lor \bar x$, regardless of $C$'s value. Hence we can remove all constraints associated with $C$. Diagrammatically, this amounts to adding the following axioms: $DIAGRAM$

Another special case: when the network has no dangling wires, it represents a contradiction. For example, $DIAGRAM$ is interpreted as $\exists x(x\land \bar x)$, which is clearly contradictory.
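Because $C$ and $D$ stand for the truth values of whole clauses, both of these semantic claims (the resolution equivalence and the loop case) range over finitely many cases and can be checked exhaustively. A small Python sanity check, not part of the original post:

```python
from itertools import product

def exists_x(pred):
    # ∃x φ(x): true iff φ holds for some Boolean value of x
    return pred(True) or pred(False)

# resolution / Frobenius collapse: ∃x (C ∨ x) ∧ (D ∨ ¬x)  ≡  C ∨ D
resolution_ok = all(
    exists_x(lambda x: (C or x) and (D or not x)) == (C or D)
    for C, D in product([False, True], repeat=2)
)

# loop case: ∃x (C ∨ x ∨ ¬x) is always satisfiable, regardless of C
loop_ok = all(exists_x(lambda x: C or x or not x) for C in [False, True])
```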
By the principle of explosion, a contradiction allows us to derive anything we want. Diagrammatically, this principle is translated as a rule that allows us to disconnect any wire: $DIAGRAM$

An important principle in logic is that conjunctions distribute over disjunctions. Even though our language deals with constraints in CNF, the distributivity of the underlying lattice is witnessed by the following laws that allow us to push white and black nodes past each other: $DIAGRAMS$ A similar form of distributivity holds in a different direction: $DIAGRAMS$

## Completeness

With Tao, we have shown that (one version of) this set of axioms is complete for the given semantics in terms of Boolean formulas. This means that if any two diagrams have the same semantics, we can show that they are equal using only these equations. In other words, you can forget about the semantics, and just reason diagrammatically! In fact, some of the axioms I have given can be derived from more fundamental interactions between the generators, involving inequalities instead of equalities (cf. remark below). However, I tried to keep things a bit simpler in this blog post. For the details, check out our forthcoming paper, which should be coming to the arXiv soon. All I will say for the moment is that the completeness proof is reminiscent of quantifier elimination algorithms.

## Inequalities vs equalities

So far, I have not mentioned inequalities. Formally, the expressive power of our equational theory does not change by adding inequalities. In fact, the inequality relation can be defined using just equality and the black nodes: $DIAGRAM$ So every time I write an inequality in what follows, you can read it as shorthand for an equality of this form.

## SAT-solving

Modern SAT solvers rely on Conflict-Driven Clause Learning (CDCL), itself an improved version of the Davis–Putnam–Logemann–Loveland (DPLL) algorithm. There is a known correspondence between these two algorithms and restricted classes of resolution proofs. The latter is easy to understand in the diagrammatic syntax.
It proceeds by progressively merging clauses that contain literals appearing in positive and negative form, until the diagram can be reduced to a single white node with no dangling wires – a contradiction (cf. above). This is perhaps best understood using inequalities, rather than the equalities of our axiomatisation. $DIAGRAM$

Beyond standard SAT-solving heuristics, there is something potentially interesting about the algebraic approach I have sketched in this blog post: each axiom of our equational theory identifies rewrites of SAT instances that preserve the information we have about the instance. Contrast this with the naive approach to SAT-solving by computing every possible assignment. In doing so, for every value that we compute we learn some information that we then discard before computing a new value for the next assignment. In this process, some information is learned but immediately forgotten. The reason why modern SAT-solving algorithms, like DPLL and CDCL, generally work better than this brute-force approach (though obviously not in the worst case) is that, seen as resolution proofs, they learn new clauses at each step, and thus update their state of knowledge about the instance. Is it crazy to hope that, by identifying algebraic laws underlying deductions in SAT-solving, we will be able to explore different heuristics that are guaranteed to preserve the information we have already learned in the process of solving a given instance? Maybe.

## But really, why a diagrammatic syntax?

An alternative to using string diagrams is to move to an algebraic syntax with binders, as is often needed to model the syntax of programming languages with abstraction (e.g. the lambda calculus). Here, we would use the binders for existential quantifiers. Terms of an algebraic syntax with binders can be seen as syntax trees with back pointers, so that the syntax is no longer one of labelled trees but of certain directed graphs.
I would argue that the diagrams of this post are in fact a convenient notation for these graphs. The main difference is that, in our string diagrammatic syntax, the direction of the dangling wires enforces a type discipline that distinguishes positive and negative literals. Due to the monotonicity of our semantics, we do not have access to unrestricted negation, as I mentioned above. Neither do we have access to unrestricted existential quantification – we can only existentially quantify pairs of opposite literals. This discipline is healthy, as it reduces the complexity of the diagrams and of the algebraic axioms (this is how we get the Frobenius law, for example, which seriously simplifies the treatment of clauses).
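As a final sanity check on the CNF semantics used throughout the post, the two worked instances from the examples above can be decided by brute force. The signed-integer clause encoding below is my own convention for this sketch, not part of the diagrammatic calculus:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT: a clause is a list of literals, +i for variable i
    and -i for its negation (variables are numbered 1..n_vars)."""
    for assignment in product([False, True], repeat=n_vars):
        def value(lit):
            v = assignment[abs(lit) - 1]
            return v if lit > 0 else not v
        if all(any(value(l) for l in clause) for clause in clauses):
            return True
    return False

# ∃x∃y (¬x ∨ y) ∧ (x ∨ ¬y): satisfiable (take x = y)
ex1 = [[-1, 2], [1, -2]]
# all four two-variable clauses: every assignment falsifies one of them
ex2 = [[1, 2], [-1, 2], [1, -2], [-1, -2]]
```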
# Programming Assignment 1: Getting Started

Given: Thursday, September 10th, 2015
Due: Thursday, September 24th, 2015 at 11:59pm

## Part 1: Normal Integrator

Follow the preliminaries step-by-step guide. Compile Wakame and create your first Wakame class (the shading normal integrator). Once you are finished, render the scene in data/scenes/pa1/ajax-normal.xml and show a side-by-side comparison against the reference scenes/pa1/ref/ajax-normal.pfm in your report.

## Part 2: Average Visibility Integrator

In this exercise you will implement a new integrator named AverageVisibility (bound to the name "av" in the XML scene description language) which derives from Integrator to visualize the average visibility of surface points seen by a camera, while ignoring the actual material parameters (i.e., the surface's Bsdf).

### Implementing the Average Visibility Integrator

Take a look at the wakame.util.Warp class. You will find that it implements the static method:

public static void sampleUniformHemisphere(Sampler sampler, Vector3d northPole, Tuple3d output)

which takes an instance of the Sampler class and the direction of the north pole and returns a uniformly distributed random vector on the surface of a unit hemisphere (of radius 1) oriented in the direction of the north pole.

Please use this function to implement a new kind of integrator, which computes the average visibility at every surface point visible to the camera. This should be implemented as follows: first find the surface intersected by the camera ray, as was done in the previous example. When there is no intersection, return Color3d(1.0, 1.0, 1.0). Otherwise, you must now compute the average visibility. Using the intersection point its.p, the world-space shading normal its.shFrame.n, and the provided sampler, generate a point on the hemisphere and trace a ray in this direction.
The ray should have a user-specifiable length that can be passed via an XML declaration as follows: <integrator type="av"> <float name="length" value="... ray length ..."/> </integrator> The integrator should return Color3d(0.0,0.0,0.0) if the ray segment is occluded and Color3d(1.0,1.0,1.0) otherwise. ### Validation The directory data/scenes/pa1 contains several example scenes that you can use to try your implementation. These scenes invoke your integrator many times for each pixel, and the (random) binary visibility values are accumulated to approximate the average visibility of surface points. Make sure that you can reproduce the reference images in data/scenes/pa1/ref/ajax-av-1024spp.pfm and data/scenes/pa1/ref/sponza-av-1024spp.pfm by rendering: data/scenes/pa1/ajax-av.xml and data/scenes/pa1/sponza-av.xml In addition you should pass all the tests in data/scenes/pa1/test-av.xml. Finally provide a side by side comparison with the reference images in your report. ## Part 3: Direct Illumination Integrator ### Point Lights Before starting, read the source code of wakame.emitter.Emitter class to study its interface. Implement a PointLight class which derives from Emitter and implements an infinitesimal light source which emits light uniformly in all directions. Note that an empty Emitter interface already exists. Your task is to find a good abstraction that can be used to store necessary information related to light sources and query it at render-time from an Integrator instance. (You can use the PBRT textbook as a guide for this.) You will also have to store constructed emitters in the Scene (currently, an exception is being thrown when a light source is added to the scene). Parametrize your point light with a Color3d power (Watts) and the world space position (Point3d) of the point light. See data/scenes/pa1/sponza-direct.xml for how these parameters should be used in your XML files. 
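The assignment does not fix a falloff convention for the point light, so the following is only a sketch under one common choice (the PBRT convention): a point light with power Φ watts has radiant intensity I = Φ/(4π), and a receiver at distance r sees the contribution Φ/(4π r²). The class and method names here are mine, not Wakame's:

```python
import math

class PointLight:
    """Hypothetical point-light emitter: total power in watts plus a
    world-space position, following the PBRT intensity convention."""
    def __init__(self, power, position):
        self.power = power        # (r, g, b) emitted power in watts
        self.position = position  # (x, y, z) world-space position

    def sample_incident(self, p):
        """Contribution arriving at surface point p, plus the unit
        direction towards the light and the distance (for shadow rays)."""
        d = tuple(lc - pc for lc, pc in zip(self.position, p))
        dist = math.sqrt(sum(c * c for c in d))
        wi = tuple(c / dist for c in d)
        # intensity I = power / (4π), attenuated by squared distance
        li = tuple(c / (4.0 * math.pi * dist * dist) for c in self.power)
        return li, wi, dist
```

An integrator would then multiply this contribution by the BSDF value and the cosine term, and trace a shadow ray of length dist to test visibility.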
### Direct Illumination Integrator

Create an integrator called Direct, which renders the scene taking into account direct illumination from light sources. Direct.Li will be called multiple times for each camera ray, and the results will be internally averaged by Wakame. It is expected to return a single estimate of the incident radiance along the camera ray, which is given as a parameter. The equation this integrator solves is the standard rendering equation: $$L_o(p, \omega_o) = L_e(p, \omega_o) + \int_{S^2} f(p, \omega_o, \omega_i) L_d(p, \omega_i) |\cos \theta_i| \ \mathrm{d} \omega_i$$ where

• $L_o(p, \omega_o)$ is the outgoing light from point $p$ in direction $\omega_o$,
• $L_e(p, \omega_o)$ is the light emitted from point $p$ in direction $\omega_o$ (in case the point $p$ is on a light source),
• $f(p, \omega_o, \omega_i)$ is the BSDF at point $p$,
• $L_d(p, \omega_i)$ is the light incident on point $p$ from direction $\omega_i$ that comes directly from the light sources, and
• $\theta_i$ is the angle that the vector $\omega_i$ makes with the normal at point $p$.

For the purposes of this exercise you can safely assume that there will be no emission at the first intersection. In other words, $L_e(p,\omega_o)$ is always zero. At the first camera ray intersection, compute the incident irradiance from all your point lights in the scene, and multiply that by the BSDF and the cosine term between the shading normal and the direction towards the light source. For this exercise you will only need to use the already implemented Diffuse BSDF.

### Validation

Make sure that you can reproduce the reference image in data/scenes/pa1/ref/sponza-direct-4spp.exr by rendering data/scenes/pa1/sponza-direct.xml. You should also pass all tests in data/scenes/pa1/test-direct.xml. Finally, provide a side-by-side comparison with the reference image in your report.

## Part 4: Sample Warping

In this part, you will generate sample points on various domains: disks, spheres, hemispheres, and a few more.
There is an interactive visualization and testing tool to make working with point sets as intuitive as possible. Note that all the work you do in this assignment will serve as building blocks in later assignments, when we apply Monte Carlo integration to render images.

Run the Java class wakame.app.WarpTest, which will launch the interactive warping tool. It allows you to visualize the behavior of different warping functions given a range of input point sets (independent, grid, and stratified).

This part is split into several subparts; in each case, you will be asked to implement a sample warping scheme and an associated probability density function. It is crucial that both are consistent with respect to each other (i.e., that warped samples have exactly the distribution described by the density function you implemented). Otherwise, errors would arise if we used inconsistent warpings for Monte Carlo integration. The warping test tool comes with a $\chi^2$ test to check this consistency condition.

*The input point set (stratified samples passed through a "no-op" warp function). This point set passed the test for uniformity.*

*A more interesting case that you will implement (with a grid visualization of the mapping). This warping passed the tests as well.*

Implement the missing functions in wakame.util.Warp. This class consists of various warp methods that take as input a 2D point $(s, t) \in [0, 1) \times [0, 1)$ (and maybe some other domain-specific parameters) and return the warped 2D (or 3D) point in the new domain. Each method is accompanied by another method that returns the probability density with which a sample was picked. Our default implementations all throw an exception, which shows up as an error message in the graphical user interface. Note that the PBRT textbook also contains considerable information on this topic.
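To make the consistency requirement concrete, here is a minimal sketch of the $\chi^2$ idea for one warp, using the standard polar mapping for the unit disk (r = √s, θ = 2πt; one common derivation, though your implementation may differ). Since r² is uniform on [0, 1) for a uniform disk, a grid of inputs must put exactly the same number of samples in every equal-width bin on r²:

```python
import math

def square_to_uniform_disk(s, t):
    # polar warp: radius sqrt(s), angle 2*pi*t yields a uniform disk
    r, theta = math.sqrt(s), 2.0 * math.pi * t
    return r * math.cos(theta), r * math.sin(theta)

n, k = 1000, 10
counts = [0] * k
for i in range(n):
    s = (i + 0.5) / n                       # grid inputs along s
    x, y = square_to_uniform_disk(s, 0.3)   # r depends only on s
    counts[min(int((x * x + y * y) * k), k - 1)] += 1

# chi-square statistic against the expected uniform bin counts n/k
chi2 = sum((c - n / k) ** 2 / (n / k) for c in counts)
```

With random inputs the statistic would instead be compared against a $\chi^2$ quantile, which is what the tool's test does.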
#### Warp.squareToUniformDisk and Warp.squareToUniformDiskPdf

Implement a method that transforms uniformly distributed 2D points on the unit square into uniformly distributed points on a planar disk with radius 1 centered at the origin. Next, implement a probability density function that matches your warping scheme.

#### Warp.squareToUniformSphere and Warp.squareToUniformSpherePdf

Implement a method that transforms uniformly distributed 2D points on the unit square into uniformly distributed points on the unit sphere centered at the origin. Implement a matching probability density function.

#### Warp.squareToUniformHemisphere and Warp.squareToUniformHemispherePdf

Implement a method that transforms uniformly distributed 2D points on the unit square into uniformly distributed points on the unit hemisphere centered at the origin and oriented in direction $(0, 0, 1)$. Add a matching probability density function.

#### Warp.squareToUniformSphericalCap and Warp.squareToUniformSphericalCapPdf

Implement a method that transforms uniformly distributed 2D points on the unit square into uniformly distributed points on the spherical cap centered at the origin and oriented in direction $(0, 0, 1)$. Add a matching probability density function. A spherical cap is the subset of a unit sphere whose directions make an angle of less than $\theta$ with the central direction. Note that the above functions expect $\cos\theta$ as a parameter.

#### Warp.squareToCosineHemisphere and Warp.squareToCosineHemispherePdf

Transform your 2D point to a point distributed on the unit hemisphere with a cosine density function $p(\theta)=\frac{\cos\theta}{\pi},$ where $\theta$ is the angle between a point on the hemisphere and the north pole.

#### Warp.squareToBeckmann and Warp.squareToBeckmannPdf

Transform your 2D point to a point distributed on the unit hemisphere with a cosine-weighted Beckmann density function (we will describe applications of this distribution in class soon).
Including the cosine weighting, this distribution is given by the following expression: $p(\theta,\phi) = D(\theta) \cos{\theta} = \frac{e^{\frac{-\tan^2{\theta}}{\alpha^2}}}{\pi\, \alpha^2 \cos^4 \theta }\cos \theta$ where $\alpha$ is a user-specified roughness parameter and $\theta$ is the angle between a direction on the hemisphere and the north pole. Begin by computing the cumulative distribution $P(\theta,\phi)$ of $p(\theta,\phi)$ and use that to compute the integral of $p(\theta,\phi)$ over the entire hemisphere, i.e.: $I = P(\pi/2,2\pi) = \int_0^{2\pi} \int_0^{\frac{\pi}{2}} p(\theta,\phi) \sin{\theta} ~ d\theta ~ d\phi$ Verify that your result integrates to 1. Use the inversion method to turn the CDF into a method for sampling points that match the Beckmann distribution. Show the steps in your report. You should pass all $\chi^2$ tests of the above warpings and include screenshots in your report.
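For orientation, here is one standard route through the requested derivation (worth re-deriving yourself for the report). Integrating out $\phi$ first,

$$I = \int_0^{2\pi}\!\!\int_0^{\pi/2} \frac{e^{-\tan^2\theta/\alpha^2}}{\pi\,\alpha^2\cos^4\theta}\,\cos\theta\,\sin\theta\,d\theta\,d\phi = \int_0^{\pi/2} \frac{2}{\alpha^2}\,e^{-\tan^2\theta/\alpha^2}\,\frac{\tan\theta}{\cos^2\theta}\,d\theta,$$

and the substitution $u=\tan^2\theta/\alpha^2$, with $du = \frac{2\tan\theta}{\alpha^2\cos^2\theta}\,d\theta$, turns this into $\int_0^\infty e^{-u}\,du = 1$, as required. The same computation gives the marginal CDF $P(\theta) = 1 - e^{-\tan^2\theta/\alpha^2}$, so the inversion method applied to uniform $(s, t)$ yields $\theta = \arctan\sqrt{-\alpha^2\ln(1-s)}$ and $\phi = 2\pi t$.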
# Compute the differential of a form

From Munkres, "Analysis on Manifolds". Consider the form $\omega = xy\,dx + 3\,dy - yz\,dz$. Check by direct computation that $d(d\omega) = 0$. Can someone show me how to do it, because I don't seem to be getting how to compute these differentials...

If $\omega= f(x,y)dx+g(x,y)dy$, can you compute $d\omega$ in terms of the partial derivatives of $f$ and $g$? The idea is the same for three variables. – Davide Giraudo Nov 27 '12 at 20:59
If $\omega = \sum f_i\,dx_i$, then $d\omega = \sum (df_i)\wedge dx_i$. – Neal Nov 27 '12 at 21:25
Ok, I got it. I was a little confused with the variables, but I think I finally understood it. – Ormi Nov 27 '12 at 21:32
You can answer your question. – Davide Giraudo Nov 27 '12 at 21:49

Differentiating means exactly what you do with usual functions. The only additional rules are that $d(dx)=d(dy)=d(dz)=0$ and $dx\wedge dx=dy\wedge dy=dz\wedge dz=0$; the latter implies $dx\wedge dy=-dy\wedge dx$, etc. $$d\omega = d(xy\,dx) + 3\,d(dy) - d(yz\,dz) \\ = y\,dx\wedge dx +x\,dy\wedge dx + 0 - z\,dy\wedge dz - y\,dz\wedge dz \\ = 0 -x\,dx\wedge dy - z\,dy\wedge dz - 0 \\ = -x\,dx\wedge dy - z\,dy\wedge dz.$$ $$d(d\omega) = -dx\wedge dx\wedge dy - dz\wedge dy\wedge dz = 0+0=0.$$
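The computation above is easy to verify numerically, without symbolic machinery: write $\omega = f\,dx + g\,dy + h\,dz$, express the coefficients of $d\omega$ and $d(d\omega)$ through partial derivatives, and evaluate at a sample point. The sketch below uses central differences, which are exact up to rounding for the low-degree polynomials involved:

```python
def partial(fn, i, pt, h=1e-4):
    # central-difference partial derivative in coordinate i
    lo = [c - h if j == i else c for j, c in enumerate(pt)]
    hi = [c + h if j == i else c for j, c in enumerate(pt)]
    return (fn(hi) - fn(lo)) / (2 * h)

# omega = f dx + g dy + h dz with f = xy, g = 3, h = -yz
f = lambda p: p[0] * p[1]
g = lambda p: 3.0
h_ = lambda p: -p[1] * p[2]

# d(omega) = (g_x - f_y) dx^dy + (h_x - f_z) dx^dz + (h_y - g_z) dy^dz
A = lambda p: partial(g, 0, p) - partial(f, 1, p)   # should be -x
B = lambda p: partial(h_, 0, p) - partial(f, 2, p)  # should be 0
C = lambda p: partial(h_, 1, p) - partial(g, 2, p)  # should be -z
# d(d(omega)) = (A_z - B_y + C_x) dx^dy^dz, which should vanish
ddw = lambda p: partial(A, 2, p) - partial(B, 1, p) + partial(C, 0, p)

pt = [1.3, -0.7, 2.1]
```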
# How and where was the notion of a primitive root formulated before Gauss?

Gauss credits Euler (and I think some others) with having known of the existence of primitive roots. How did these predecessors of Gauss formulate the notion of a primitive root without a concept of congruence? In what works, and in what context? In particular, since the notion of a primitive root seems quite unnatural to me if you don't have a notion of congruence (I mean, the shortest definition I can think of is "A number $a$ such that for every possible remainder $r$ there is an integer $n$ such that $a^n$ is $r$ more than a multiple of $p$"), what led Euler and other pre-Gaussian mathematicians to consider such a concept?

## 1 Answer

According to Dickson's history book, Lambert in 1769 was the first to grasp the concept, by claiming that for any prime $p$ there was a number $g$ such that $g^{p-1}-1$ was divisible by $p$, but $g^e-1$ was not for any $0<e<p-1$. Euler coined the term "primitive root" in 1773, when he attempted to prove Lambert's claim, but his proof had a flaw. He also listed primitive roots for primes up to 41, but noted that he had no general way of finding them. Lagrange gave a result about polynomials in 1777 that fills the gap, and Lagrange in 1785 connected primitive roots to roots of unity, where the concept arises very naturally. Gauss himself was partly motivated by roots of unity, since he wanted to prove that certain polygons were constructible when a primitive root of unity could be expressed using only square roots. Gauss only defined congruences in their modern form in Disquisitiones Arithmeticae (completed in 1798 but published only in 1801), where he gave two proofs of the existence of primitive roots for primes.

• His work on regular polygons is not why he cared about primitive roots mod primes.
It's his work on decimals that shows the importance of primitive roots: for any prime $p$ other than $2$ or $5$, the period length of the decimal expansion for $1/p$ is the order of $10 \bmod p$, so this period length is at most $p-1$ (in fact, it divides $p-1$) and it can be $p-1$ if and only if $10$ is a primitive root mod $p$. This is why Lambert, before Gauss, wanted to know whether there is always some primitive root mod $p$. See hal.archives-ouvertes.fr/halshs-00663295/document. – KCd Feb 10 '15 at 22:30
• @KCd Nice, I didn't know about decimals. But one does not exclude the other. He was working on inscribable polygons around the same time (1796), and that involved, in his words, "intensive consideration of the relation of all the roots to one another on arithmetical ground", i.e. in terms of congruences. Constructions of polygons are directly related to finding primitive roots "on arithmetical ground". jstor.org/stable/2972265?seq=1#page_scan_tab_contents – Conifold Feb 11 '15 at 1:27
• Sure, constructing regular polygons using unmarked straightedge and compass is related to complex roots of unity, but I don't see why the phenomenon of primitive roots in modular arithmetic (particularly modulo primes) is linked to that. In any case, the question was why the notion of a primitive root mod primes came up before Gauss, and the study of periods in decimal expansions of rational numbers was one motivation. – KCd Feb 11 '15 at 2:35
• @KCd Primitive roots modulo a prime are needed to form quadratic equations (from so-called Gauss periods) that express cyclotomic roots via nested square roots when the prime is of the form $2^m+1$, see a sketch of Gauss's proof in $\S$5 of journals.cambridge.org/… Gauss gives the construction in section VII of Disquisitiones Arithmeticae, after proving the existence of primitive roots modulo primes in section III.
– Conifold Feb 11 '15 at 19:14 • @KCd please consider writing an answer of your own: It seems like you have some interesting things to say about this issue! :) – Danu Feb 14 '15 at 8:28
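KCd's point about decimals is easy to check computationally: the period of the decimal expansion of $1/p$ equals the multiplicative order of $10 \bmod p$, and is maximal exactly when $10$ is a primitive root. A small Python sketch (the helper functions are my own):

```python
def mult_order(a, p):
    # smallest k >= 1 with a^k ≡ 1 (mod p); assumes gcd(a, p) = 1
    k, v = 1, a % p
    while v != 1:
        v = (v * a) % p
        k += 1
    return k

def decimal_period(p):
    # repeating-block length of 1/p via long-division remainders
    seen, r, i = {}, 1, 0
    while r not in seen:
        seen[r] = i
        r = (r * 10) % p
        i += 1
    return i - seen[r]
```

For $p = 7$ both give $6 = p - 1$, so $10$ is a primitive root mod $7$ and $1/7 = 0.\overline{142857}$ has maximal period; for $p = 11$ the order is only $2$, matching $1/11 = 0.\overline{09}$.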
# Meeting Our Long-Term Energy Needs through Federal R&D

By Senator Pete V. Domenici
July 2006

The funds we spend on research and development (R&D) for new energy technologies are some of the most important dollars in the federal budget. But we have a problem – federal funding for energy R&D has been declining for years, and it is not being made up by increased private sector R&D expenditures. There is a vital need for a bipartisan effort to increase federal R&D funding for energy technology, to leverage those funds with increased private sector investment, and to work with the Executive Branch to bring new energy technologies quickly to the marketplace. In the last year, we have taken important steps to implement this vision.

Over the period from 1978 to 2004, federal appropriations for energy R&D fell from $6.4 billion to $2.75 billion in constant year-2000 dollars, a reduction of nearly 60 percent. Even worse, federal and private sector expenditures combined are less than one percent of total energy sales. Private sector investment in energy R&D fell from about $4 billion in 1990 to about $2 billion today. Of our nation's high-technology industries, energy is the least intensive in terms of R&D. Consider, for comparison, that private sector R&D investments equal about 12 percent of sales in the pharmaceuticals industry, and about 15 percent of sales in the airline industry. It is past time to reverse that trend.

Last August, Congress enacted the first comprehensive energy legislation in 12 years – the Energy Policy Act of 2005. Already we are seeing results. But the challenges we face are long-term – they will require continued hard work for years to come. To this end, the Act strengthens our commitment to investing in energy-related R&D. In all, it calls for $24.2 billion in funding over the next three years for research programs in energy technology and energy-related science.
The Energy Policy Act also provides a framework for a balanced set of programs in energy research, development, demonstration, and commercial application. Previously, the Secretary of Energy had no guidance in choosing research topics and program components for energy R&D. The Act addresses this problem, establishing clear guidelines for research programs in energy efficiency, renewable energy, fossil energy, and nuclear energy technologies. With the Energy Policy Act, the Department will be better able to manage our R&D investments. The Act creates a new Under Secretary for Science to serve as the primary science and technology advisor to the Secretary of Energy. The new Under Secretary is responsible for monitoring civilian research and development programs, and advising the Secretary in managing national laboratories supporting basic research. The Under Secretary for Science will also ensure that the Department remains focused on our long-term energy goals. In particular, we need to build bridges between basic science and applied energy functions. This is vital for crossover applications — so that areas in applied energy where we need scientific breakthroughs are addressed. An example is the workshop held in 2003 that produced the report on Basic Research Needs for the Hydrogen Economy. The new Under Secretary for Science should help ensure that more of this kind of bridging work is undertaken by the Department and that the Department gives it high priority. While our nation must increase domestic energy production, we must also increase our production of new energy technologies. And these technologies must move from laboratory to market, or we will be no closer to realizing a stronger energy economy. Crossing this “Valley of Death” is not easy. Even technologies with obvious commercial potential often confound attempts to find successful markets. 
Federal funding for energy R&D is critical, but we also need policies that encourage greater private sector investment. The Energy Policy Act strengthens Department of Energy efforts to partner with private companies interested in lab-developed technologies. The Act establishes a technology transfer coordinator to advise the Secretary on technology transfer and commercialization. It also creates a technology commercialization fund with a budget of about $25 million annually. That federal funding will be seed money to leverage private sector investments through partnerships with local businesses. Helping laboratories "spin off" technologies to the private sector will lead to new businesses, job creation, and a more innovative economy.

The Energy Policy Act also gives the Department of Energy new authority to hold prize competitions in "grand challenge" areas of energy technology. The Department can use this authority to accelerate progress in challenging areas – such as hydrogen and fuel cell vehicles and carbon capture and storage. This prize authority is modeled after that used successfully by the Defense Advanced Research Projects Agency (DARPA). DARPA spurred private sector investment in robotics technologies, for example, through a well-publicized race through the Mojave Desert. The X-Prize stands as another example of the successful use of prize authority. This $10 million privately funded award produced the first successful space flight ever achieved without public support. These prizes encourage multiple teams to undertake novel approaches, and they generate significant private sector investment due to their inherent prestige.

We need to encourage high-technology industries, including energy sector industries, to increase their R&D investments. Legislation that I introduced with my Senate colleagues Jeff Bingaman (D-NM) and Lamar Alexander (R-TN) will do just that.
The Protecting America's Competitive Edge through Finance (PACE-Finance) Act will modernize and make permanent the R&D tax credit. After two decades of extending the tax credit for just a year or two in advance, it is time to give industry the certainty it needs. This certainty will lead to greater spending on R&D, leading to more innovation and a stronger, more competitive economy.

In his State of the Union address earlier this year, the President announced his Advanced Energy Initiative (AEI). The President's Advanced Energy Initiative builds on the Energy Policy Act by identifying key technologies where we will focus our efforts. The purpose of the AEI is to reduce our national dependence on foreign sources of energy, including the natural gas we use to heat our homes and the crude oil we rely upon to fuel our cars. To support this initiative, the President has requested an overall 22 percent increase in fiscal year 2007 funding for the development of key technologies.

Under the President's Initiative, we will invest in technologies for zero-emission coal-fired power plants. These plants will capture and store pollutants and carbon dioxide rather than releasing them into the atmosphere. We will continue our support for revolutionary new solar and wind technologies, to make them more cost competitive. Through the Global Nuclear Energy Partnership, we will develop a nuclear fuel cycle that enhances energy security while addressing proliferation concerns.

The AEI emphasizes the importance of advanced transportation technologies. To accelerate consumer adoption of hybrid-electric vehicles, the administration has committed to increasing the energy storage and the lifetimes of batteries for these vehicles. To achieve greater use of homegrown renewable fuels, the initiative will develop advanced technologies to make competitively priced ethanol from cellulosic biomass, such as agricultural and forestry residues, trees, and grasses.
Moreover, President Bush three years ago gave Americans the vision of a hydrogen future free from reliance on foreign oil. The Energy Policy Act moves us toward that future with an authorization of over $3 billion in research on hydrogen and hydrogen fuel cells.

Our nation has a bright energy future. Greater public and private investment in energy R&D will produce a suite of new technologies that will make our energy sector cleaner, more secure, and more resilient. We laid the groundwork in the Energy Policy Act, and by following through on the President's vision of the Advanced Energy Initiative we will meet the energy challenges that lie ahead.

Senator Pete V. Domenici (R, NM) chairs the Senate Energy & Natural Resources Committee and the Senate Appropriations Energy & Water subcommittee. He is serving in his sixth term.
## anonymous 4 years ago Prove that dy/dx=dy/du*du/dx 1. lalaly 2. anonymous This is the chain rule. Proofs of the chain rule. First proof. One proof of the chain rule begins with the definition of the derivative: $(f \circ g)'(a) = \lim_{x \to a} \frac{f(g(x)) - f(g(a))}{x - a}$. Assume for the moment that g(x) does not equal g(a) for any x near a. Then the previous expression is equal to the product of two factors: $\frac{f(g(x)) - f(g(a))}{g(x) - g(a)} \cdot \frac{g(x) - g(a)}{x - a}$. When g oscillates near a, then it might happen that no matter how close one gets to a, there is always an even closer x such that g(x) equals g(a). For example, this happens for $g(x) = x^2 \sin(1/x)$ near the point a = 0. Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function Q as follows: $Q(y) = \frac{f(y) - f(g(a))}{y - g(a)}$ for $y \neq g(a)$, and $Q(g(a)) = f'(g(a))$. We will show that the difference quotient for f ∘ g is always equal to: $Q(g(x)) \cdot \frac{g(x) - g(a)}{x - a}$. Whenever g(x) is not equal to g(a), this is clear because the factors of g(x) - g(a) cancel. When g(x) equals g(a), then the difference quotient for f ∘ g is zero because f(g(x)) equals f(g(a)), and the above product is zero because it equals f′(g(a)) times zero. So the above product is always equal to the difference quotient, and to show that the derivative of f ∘ g at a exists and to determine its value, we need only show that the limit as x goes to a of the above product exists and determine its value. To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are Q(g(x)) and (g(x) - g(a)) / (x - a). The latter is the difference quotient for g at a, and because g is differentiable at a by assumption, its limit as x tends to a exists and equals g′(a). It remains to study Q(g(x)). Q is defined wherever f is. Furthermore, because f is differentiable at g(a) by assumption, Q is continuous at g(a). g is continuous at a because it is differentiable at a, and therefore Q ∘ g is continuous at a. 
So its limit as x goes to a exists and equals Q(g(a)), which is f′(g(a)). This shows that the limits of both factors exist and that they equal f′(g(a)) and g′(a), respectively. Therefore the derivative of f ∘ g at a exists and equals f′(g(a))g′(a). Second proof. Another way of proving the chain rule is to measure the error in the linear approximation determined by the derivative. This proof has the advantage that it generalizes to several variables. It relies on the following equivalent definition of differentiability at a point: A function g is differentiable at a if there exists a real number g′(a) and a function ε(h) that tends to zero as h tends to zero, and furthermore $g(a + h) - g(a) = g'(a)h + \varepsilon(h)h$. Here the left-hand side represents the true difference between the value of g at a and at a + h, whereas the right-hand side represents the approximation determined by the derivative plus an error term. In the situation of the chain rule, such a function ε exists because g is assumed to be differentiable at a. Again by assumption, a similar function also exists for f at g(a). Calling this function η, we have $f(g(a) + k) - f(g(a)) = f'(g(a))k + \eta(k)k$. The above definition imposes no constraints on η(0), even though it is assumed that η(k) tends to zero as k tends to zero. If we set η(0) = 0, then η is continuous at 0. Proving the theorem requires studying the difference f(g(a + h)) − f(g(a)) as h tends to zero. The first step is to substitute for g(a + h) using the definition of differentiability of g at a: f(g(a + h)) − f(g(a)) = f(g(a) + g'(a)h + ε(h)h) − f(g(a)). The next step is to use the definition of differentiability of f at g(a). This requires a term of the form f(g(a) + k) for some k. In the above equation, the correct k varies with h. Set $k_h = g'(a)h + \varepsilon(h)h$ and the right-hand side becomes $f(g(a) + k_h) - f(g(a))$. Applying the definition of the derivative gives: $f(g(a + h)) - f(g(a)) = f'(g(a))k_h + \eta(k_h)k_h$. To study the behavior of this expression as h tends to zero, expand $k_h$. 
After regrouping the terms, the right-hand side becomes: $f'(g(a))g'(a)h + \left[f'(g(a))\varepsilon(h) + \eta(k_h)g'(a) + \eta(k_h)\varepsilon(h)\right]h$. Because ε(h) and η(k_h) tend to zero as h tends to zero, the bracketed terms tend to zero as h tends to zero. Because the above expression is equal to the difference f(g(a + h)) − f(g(a)), by the definition of the derivative f ∘ g is differentiable at a and its derivative is f′(g(a))g′(a). The role of Q in the first proof is played by η in this proof. They are related by the equation: $Q(y) = f'(g(a)) + \eta(y - g(a))$. The need to define Q at g(a) is analogous to the need to define η at zero. However, the proofs are not exactly equivalent. The first proof relies on a theorem about products of limits to show that the derivative exists. The second proof does not need this because showing that the error term vanishes proves the existence of the limit directly. 3. anonymous I just did a copy paste from Wikipedia!!!!! 4. anonymous Wikipedia is awesome XD nice one Aron 5. anonymous THAT'S why SOPA is silly XD 6. anonymous I don't understand the part (on http://web.mit.edu/wwmath/calculus/differentiation/chain-proof.html) after it says "Differentiability implies continuity; therefore..." Why does $du \rightarrow 0$ as $dx \rightarrow 0$? And how does that get inserted into the equations? I don't know why, but I don't understand the wikipedia explanation.
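The identity being proved can also be sanity-checked numerically; here is a small illustrative Python sketch (the choice f(u) = sin u, g(x) = x² is arbitrary, not from the thread), comparing a central finite difference of f(g(x)) with f′(g(x))·g′(x):

```python
import math

# Chain rule check for f(u) = sin(u), g(x) = x**2 at an arbitrary point.
def composite(x):
    return math.sin(x ** 2)

x, h = 1.3, 1e-6
numeric = (composite(x + h) - composite(x - h)) / (2 * h)  # central difference
chain = math.cos(x ** 2) * (2 * x)                         # f'(g(x)) * g'(x)
print(abs(numeric - chain) < 1e-6)  # True
```

The central difference has error of order h², so the agreement to within 1e-6 is exactly what the chain rule predicts.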
# Scripting: to block or not to block [closed] Context: I'm making a little "program your robot army" sort of game in Java in which the player writes Lua scripts (which are then run by LuaJ) to program their robots to do stuff. So far there are two objectives: 1. Keep It Simple, Stupid, and 2. React to the environment. To take a simple example of what I currently have, the player should be able to tell their robot to go to a particular location with something like robot:goTo(5, 8) Like Colobot, the scripting engine effectively halts (blocks) at that instruction until the robot has reached its destination. I like this -- it meets the first requirement nicely, in that you can't do anything until goTo decides to return. It's also simple on the Java side -- I can basically do while (/* condition */) yield() and just resume the coroutine every tick (then maintaining state in local variables is trivial). But: what if that resource at (5,8) disappears, or an enemy comes near, or energy runs low? There's no opportunity to bail -- the robot has to reach (5,8) before it can do anything else. The main other idea I can think of is to produce some sort of loop: while not robot:near(5,8) do if robot:enemyNearby() then break -- and do some other stuff, presumably else robot:moveTo(5,8) end end where moveTo this time sets the velocity/acceleration and then immediately returns, without blocking until arrival. I could have robot:goTo return the actual coroutine (either wrapped, so you just call it, or directly, which requires calling coroutine.resume and passing the coroutine), which might look something like this: moveTask = robot:moveTo(5,8) while not robot:near(5,8) do if robot:enemyNearby() then break else -- or some other syntactic sugar end end These last two approaches look pretty similar, but reusing a coroutine as in the second example makes managing state (e.g. a path) trivial, at the cost of exposing coroutines to the player. 
To reiterate, the goal is to allow for writing simple scripts, in which a set of actions are executed in series (sorta like queued actions in The Sims), while simultaneously allowing more complex scripts with actions that may be interrupted by arbitrary events (nearby enemy, target moved, stats dropping). Any tips or ideas would be much appreciated! ## closed as primarily opinion-based by Alexandre Vaillancourt♦, Josh♦ Jul 7 '16 at 15:32 Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question. • Hi Max, please ask a specific question. This is a Q&A site, and open-ended questions aren't a good fit. – Sean Middleditch Aug 1 '15 at 6:07 • The question is "what's the best way to tackle this situation?" Even though it's somewhat open-ended, there can still be a best, well-thought out, comprehensive answer that could be Accepted. Lastly, is there a recommended alternative place to post, or simply "not here"? – Max Aug 1 '15 at 15:57 • "what's the best" never has a single answer. There is no best. :) I think you could reword this question to fit, but "here's two choices, which do you like most?" certainly doesn't. gamedev.net has traditional forums and there's the Chat feature of this site, too. – Sean Middleditch Aug 1 '15 at 19:24 • is there a recommended alternative place to post: This list might help. – Anko Aug 2 '15 at 0:05 You can make the moveTo() function a blocking function in the user script, but you should definitely not make it block your host program. This way you should have moveTo() return a boolean telling whether the destination was reached or the action was interrupted. I suppose this would be the simplest solution for the user. 
You could also have the moveTo() function just send the destination and actionToTake to your program and return immediately. Then your program can just move the robot towards that destination unless it gets interrupted; when/if it reaches the destination, it executes the action that was assigned (the action could be a function call, or you could predefine a set of possible actions). Multiple solutions: 1. Pass a callback to the moveTo function that can check other things from inside the loop and optionally stop it in the middle. 2. Create two new coroutines - one for movement, one for doing things while moving. Yield the main thread until either of the two has finished, abort the remaining one afterwards. 3. Turn movement into state, resolve it in parallel (possibly from outside Lua): robot.targetPos = {5,7}, then do the additional work in the same thread. Which one's 'best' for this case is up to you. When something bad happens while running a Java function from a Lua script, have that function exit by throwing an exception. There are several ways to run Lua from Java, so refer to the documentation for details about how to throw an exception from a Java function back to Lua. The downside is that the player will have to put lots of error checking into their code and wrap everything in pcall, which might be a bit unfun. Another option is to just do nothing when there is an error on the Java side and return silently to Lua. It's now up to the player to implement error-checking code by calling a getLastError Java method. If they don't, their program will continue as if nothing had happened and might simply not do what they want to do.
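A rough sketch of the resumable-task-plus-interrupt idea from the answers above, written here in Python with a generator standing in for a Lua coroutine (the names move_to, run, and the robot table are illustrative assumptions, not LuaJ or game API):

```python
# Illustrative sketch only: a movement "task" as a resumable generator,
# resumed once per game tick, with an interrupt check between resumes.
# This mirrors the Lua-coroutine approach; nothing here is real game API.

def move_to(robot, x, y):
    """Step one cell per tick towards (x, y), yielding between steps."""
    while (robot["x"], robot["y"]) != (x, y):
        robot["x"] += (x > robot["x"]) - (x < robot["x"])
        robot["y"] += (y > robot["y"]) - (y < robot["y"])
        yield  # hand control back to the game loop

def run(robot, task, interrupted):
    """Game loop: resume the task each tick; bail out when interrupted."""
    for tick, _ in enumerate(task):
        if interrupted(robot, tick):
            return "interrupted", tick
    return "arrived", None

robot = {"x": 0, "y": 0}
status, _ = run(robot, move_to(robot, 5, 8), lambda r, t: False)
print(status, robot)  # arrived {'x': 5, 'y': 8}
```

Passing a real predicate (say, an enemy-nearby check) interrupts the walk mid-way, and because the generator keeps its own state, the same task object can be resumed later -- which is exactly what makes the coroutine variant attractive over the hand-written polling loop.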
# Chapter 5 Graphs By default, it is possible to make a lot of graphs with R without the need of any external packages. However, in this chapter, we are going to learn how to make graphs using {ggplot2}, which is a very powerful package that produces amazing graphs. There is an entry cost to {ggplot2} as it works in a very different way than what you would expect, especially if you already know how to make plots with the basic R functions. But the resulting graphs are well worth the effort and once you know more about {ggplot2} you will see that in a lot of situations it is actually faster and easier. Another advantage is that making plots with {ggplot2} is consistent, so you do not need to learn anything specific to make, say, density plots. There are a lot of extensions to {ggplot2}, such as {ggridges} to create so-called ridge plots and {gganimate} to create animated plots. By the end of this chapter you will know how to do basic plots with {ggplot2} and also how to use these two extensions. ## 5.1 Resources Before showing some examples and the general functionality of {ggplot2}, I list here some online resources that I keep coming back to: When I first started using {ggplot2}, I had a cookbook approach to it; I tried to find examples online that looked like what I needed, copied and pasted the code and then adapted it to my case. The above resources are the ones I consulted and keep consulting in these situations (I also go back to past code I’ve written, of course). Don’t hesitate to skim these resources for inspiration and to learn more about some extensions to {ggplot2}. In the next subsections I am going to show you how to draw the most common plots, as well as show you how to customize your plots with {ggthemes}, a package that contains pre-defined themes for {ggplot2}. ## 5.2 Examples I think that the best way to learn how to use {ggplot2} is to jump right into it. Let’s first start with barplots. 
### 5.2.1 Barplots library(ggplot2) library(ggthemes) {ggplot2} is an implementation of the Grammar of Graphics by Wilkinson (2006), but you don’t need to read the book to start using it. If we go back to the Star Wars data (contained in dplyr), and wish to draw a barplot of the gender, the following lines are enough: ggplot(starwars, aes(gender)) + geom_bar() The first argument of the function is the data (called starwars in this example), and then the function aes(). This function is where you list the variables that you want to map to the aesthetics of the geoms functions. On the second line, you see that we use the geom_bar() function. This function creates a barplot of the gender variable. You can get different kinds of plots by using different geom_ functions. You can also provide the aes() argument to the geom_*() function: ggplot(starwars) + geom_bar(aes(gender)) The difference between these two approaches is that when you specify the aesthetics in the ggplot() function, all the geom_*() functions that follow will inherit these aesthetics. This is useful if you want to avoid writing the same code over and over again, but can be problematic if you need to specify different aesthetics to different geom_*() functions. This will become clear in a later example. You can add options to your plots, for instance, you can change the coordinate system in your barplot: ggplot(starwars, aes(gender)) + geom_bar() + coord_flip() This is the basic recipe to create plots using {ggplot2}: start with a call to ggplot() where you specify the data you want to plot, and optionally the aesthetics. Then, use the geom_*() function you need; if you did not specify the aesthetics in the call to the ggplot() function, do it here. Then, you can add different options, such as changing the coordinate system, changing the theme, the colour palette used, changing the position of the legend and much, much more. This chapter will only give you an overview of the capabilities of {ggplot2}. 
### 5.2.2 Scatter plots Scatter plots are very useful, especially if you are trying to figure out the relationship between two variables. For instance, let’s make a scatter plot of height vs weight of Star Wars characters: ggplot(starwars) + geom_point(aes(height, mass)) As you can see there is an outlier; a very heavy character! Star Wars fans already guessed it, it’s Jabba the Hutt. To make the plot easier to read, let’s remove this outlier: starwars %>% filter(!str_detect(name, "Jabba")) %>% ggplot() + geom_point(aes(height, mass)) There is a positive correlation between height and mass. Later, we are going to see how we can also plot a regression line that goes through the scatter plot, but first, let’s discover some other geom_*() functions. ### 5.2.3 Density geom_density() is the geom that allows you to get density plots: ggplot(starwars, aes(height)) + geom_density() ## Warning: Removed 6 rows containing non-finite values (stat_density). Let’s go into more detail now; what if you would like to plot the densities for females and males only (removing the droids from the data first)? This can be done by first filtering the data using dplyr and then mapping gender to an aesthetic: starwars %>% filter(gender %in% c("female", "male")) The above lines do the filtering; only keep rows where gender is in the vector c("female", "male"). This is much easier than having to write gender == "female" | gender == "male". Then, we pipe this dataset to ggplot: starwars %>% filter(gender %in% c("female", "male")) %>% ggplot(aes(height, fill = gender)) + geom_density() ## Warning: Removed 5 rows containing non-finite values (stat_density). Let’s take a closer look at the aes() function: I’ve added fill = gender. This means that there will be one density plot for each gender in the data, and each will be coloured accordingly. 
This is where {ggplot2} might be confusing; there is no need to write explicitly (even if it is possible) that you want the female density to be red and the male density to be blue. You just map the variable gender to this particular aesthetic. You conclude the plot by adding geom_density(), which in this case is the plot you want. We will see later how to change the colours of your plot. An alternative way to write this code is first to save the filtered data in a variable, and define the aesthetics inside the geom_density() function: filtered_data <- starwars %>% filter(gender %in% c("female", "male")) ggplot(filtered_data) + geom_density(aes(height, fill = gender)) ## Warning: Removed 5 rows containing non-finite values (stat_density). ### 5.2.4 Line plots For the line plots, we are going to use official unemployment data (the same as in the previous chapter, but with all the available years). Get it from here (downloaded from: http://www.statistiques.public.lu/stat/TableViewer/tableView.aspx?ReportId=12950&IF_Language=eng&MainTheme=2&FldrName=3&RFPath=91). Let’s plot the unemployment for the canton of Luxembourg only: unemp_lux_data <- import("datasets/unemployment/all/unemployment_lux_all.csv") unemp_lux_data %>% filter(division == "Luxembourg") %>% ggplot(aes(x = year, y = unemployment_rate_in_percent, group = 1)) + geom_line() Because line plots are 2D, you need to specify the y and x axes. There is also another option you need to add, group = 1. This is to tell aes() that the dots have to be connected with a single line. What if you want to plot more than one commune? unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette")) %>% ggplot(aes(x = year, y = unemployment_rate_in_percent, group = division, colour = division)) + geom_line() This time, I’ve specified group = division, which means that there has to be one line per commune in the variable division. I do the same for colours. 
I think the next example illustrates how {ggplot2} is actually brilliant; if you need to add a third commune, there is no need to specify anything else; no need to add anything to the legend, no need to specify a third colour etc: unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(x = year, y = unemployment_rate_in_percent, group = division, colour = division)) + geom_line() The three communes get mapped to the colour aesthetic, so whatever the number of communes, as long as there are enough colours, the communes will each get mapped to one of these colours. ### 5.2.5 Facets In some cases you have a factor variable that separates the data you wish to plot into different categories. If you want to have a plot per category, you can use the facet_grid() function. Careful though, this function does not take a variable as an argument, but a formula, hence the ~ symbol in the code below: starwars %>% mutate(human = case_when(species == "Human" ~ "Human", species != "Human" ~ "Not Human")) %>% filter(gender %in% c("female", "male"), !is.na(human)) %>% ggplot(aes(height, fill = gender)) + facet_grid(. ~ human) + #<--- this is a formula geom_density() ## Warning: Removed 4 rows containing non-finite values (stat_density). I first created a factor variable that specifies if a Star Wars character is human or not, and then used it for facetting. By changing the formula, you change how the facetting is done: starwars %>% mutate(human = case_when(species == "Human" ~ "Human", species != "Human" ~ "Not Human")) %>% filter(gender %in% c("female", "male"), !is.na(human)) %>% ggplot(aes(height, fill = gender)) + facet_grid(human ~ .) + geom_density() ## Warning: Removed 4 rows containing non-finite values (stat_density). Recall the categorical variable more_1 that we computed in the previous chapter? 
Let’s use it as a faceting variable: starwars %>% rowwise() %>% mutate(n_films = length(films)) %>% mutate(more_1 = case_when(n_films == 1 ~ "Exactly one movie", n_films != 1 ~ "More than 1 movie")) %>% mutate(human = case_when(species == "Human" ~ "Human", species != "Human" ~ "Not Human")) %>% filter(gender %in% c("female", "male"), !is.na(human)) %>% ggplot(aes(height, fill = gender)) + facet_grid(human ~ more_1) + geom_density() ## Warning: Removed 4 rows containing non-finite values (stat_density). ### 5.2.6 Pie Charts I am not a huge fan of pie charts, but sometimes this is what you have to do. So let’s see how you can create pie charts. First, let’s create a mock dataset with the function tibble::tribble() which allows you to create a dataset line by line: test_data <- tribble( ~id, ~var1, ~var2, ~var3, ~var4, ~var5, "a", 26.5, 38, 30, 32, 34, "b", 30, 30, 28, 32, 30, "c", 34, 32, 30, 28, 26.5 ) This data is not in the right format though: it is in the wide format, and we need it in the long format for it to work with {ggplot2}. For this, let’s use tidyr::gather() as seen in the previous chapter: test_data_long = test_data %>% gather(variable, value, starts_with("var")) Now, let’s plot this data, first by creating 3 bar plots: ggplot(test_data_long) + facet_wrap(~id) + geom_bar(aes(variable, value, fill = variable), stat = "identity") In the code above, I introduce a new option, called stat = "identity". By default, geom_bar() counts the number of observations of each category that is plotted, which is a statistical transformation. By adding stat = "identity", I force the statistical transformation to be the identity function, and thus plot the data as is. To create the pie chart, first we need to compute the share of each variable (var1, var2, etc…) within each id. To do this, we first group by id, then compute the total. Then we use a new function, ungroup(). 
After using ungroup() all the computations are done on the whole dataset instead of by group, which is what we need to compute the share: test_data_long <- test_data_long %>% group_by(id) %>% mutate(total = sum(value)) %>% ungroup() %>% mutate(share = value/total) Let’s take a look to see if this is what we wanted: print(test_data_long) ## # A tibble: 15 x 5 ## id variable value total share ## <chr> <chr> <dbl> <dbl> <dbl> ## 1 a var1 26.5 160. 0.165 ## 2 b var1 30 150 0.2 ## 3 c var1 34 150. 0.226 ## 4 a var2 38 160. 0.237 ## 5 b var2 30 150 0.2 ## 6 c var2 32 150. 0.213 ## 7 a var3 30 160. 0.187 ## 8 b var3 28 150 0.187 ## 9 c var3 30 150. 0.199 ## 10 a var4 32 160. 0.199 ## 11 b var4 32 150 0.213 ## 12 c var4 28 150. 0.186 ## 13 a var5 34 160. 0.212 ## 14 b var5 30 150 0.2 ## 15 c var5 26.5 150. 0.176 If you didn’t understand what ungroup() did, rerun the last few lines with and without it and inspect the output. To plot the pie chart, we create a barplot again, but specify polar coordinates: ggplot(test_data_long) + facet_wrap(~id) + geom_bar(aes(y = share, x = "", fill = variable), stat = "identity") + theme() + coord_polar("y", start = 0) As you can see, this typical pie chart is not very easy to read; compared to the barplots above, it is not easy to distinguish if a has a higher share than b or c. You can change the look of the pie chart, for example by specifying variable as the x: ggplot(test_data_long) + facet_wrap(~id) + geom_bar(aes(y = share, x = variable, fill = variable), stat = "identity") + theme() + coord_polar("x", start = 0) But as a general rule, avoid pie charts if possible. I find that pie charts are only interesting if you need to show proportions that are hugely unequal, to really emphasize the difference between said proportions. ### 5.2.7 Adding text to plots Sometimes you might want to add some text to your plots. 
This is possible with geom_text(): ggplot(test_data_long) + facet_wrap(~id) + geom_bar(aes(variable, value, fill = variable), stat = "identity") + geom_text(aes(variable, value + 1.5, label = value)) You can put anything after label = but in general what you want are the values, so that’s what I put there. But you can also refine it, imagine the values are actually in euros: ggplot(test_data_long) + facet_wrap(~id) + geom_bar(aes(variable, value, fill = variable), stat = "identity") + geom_text(aes(variable, value + 1.5, label = paste(value, "€"))) You can also achieve something similar with geom_label(): ggplot(test_data_long) + facet_wrap(~id) + geom_bar(aes(variable, value, fill = variable), stat = "identity") + geom_label(aes(variable, value + 1.5, label = paste(value, "€"))) ## 5.3 Customization Every plot you’ve seen until now was made with the default look of {ggplot2}. If you want to change the look, you can apply a theme, and a colour scheme. Let’s take a look at themes first by using the ones found in the package ggthemes. But first, let’s learn how to change the names of the axes and how to title a plot. ### 5.3.1 Changing titles, axes labels, options, mixing geoms and changing themes The name of this subsection is quite long, but this is because everything is kind of linked. Let’s start by learning what the labs() function does. To change the title of the plot, and of the axes, you need to pass the names to the labs() function: unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") + geom_line() What if you want to make the lines thicker? 
unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") + geom_line(size = 2) Each geom_*() function has its own options. Notice that the size=2 argument is not inside an aes() function. This is because I do not want to map a variable of the data to the size of the line; in other words, I do not want to make the size of the line proportional to a certain variable in the data. Recall the scatter plot we did earlier, where we showed that height and mass of Star Wars characters increased together? Let’s take this plot again, but make the size of the dots proportional to the birth year of the character: starwars %>% filter(!str_detect(name, "Jabba")) %>% ggplot() + geom_point(aes(height, mass, size = birth_year)) Making the size proportional to the birth year (the age would have been more informative) allows us to see a third dimension. It is also possible to “see” a fourth dimension, the gender for instance, by changing the colour of the dots: starwars %>% filter(!str_detect(name, "Jabba")) %>% ggplot() + geom_point(aes(height, mass, size = birth_year, colour = gender)) As I promised above, we are now going to learn how to add a regression line to this scatter plot: starwars %>% filter(!str_detect(name, "Jabba")) %>% ggplot() + geom_point(aes(height, mass, size = birth_year, colour = gender)) + geom_smooth(aes(height, mass), method = "lm") geom_smooth() adds a regression line, but only if you specify method = "lm" (“lm” stands for “linear model”). What happens if you remove this option? 
starwars %>% filter(!str_detect(name, "Jabba")) %>% ggplot() + geom_point(aes(height, mass, size = birth_year, colour = gender)) + geom_smooth(aes(height, mass)) ## geom_smooth() using method = 'loess' and formula 'y ~ x' By default, geom_smooth() does a non-parametric regression called LOESS (locally estimated scatterplot smoothing), which is more flexible. It is also possible to have one regression line by gender: starwars %>% filter(!str_detect(name, "Jabba")) %>% ggplot() + geom_point(aes(height, mass, size = birth_year, colour = gender)) + geom_smooth(aes(height, mass, colour = gender)) ## geom_smooth() using method = 'loess' and formula 'y ~ x' Because there are only a few observations for females, and because of the NAs, the regression lines are not very informative; but this was only an example to show you some options of geom_smooth(). Let’s go back to the unemployment line plots. For now, let’s keep the base {ggplot2} theme, but modify it a bit. For example, the legend placement is actually a feature of the theme. This means that if you want to change where the legend is placed you need to modify this feature of the theme. 
This is done with the function theme(): unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + theme(legend.position = "bottom") + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") + geom_line() What I also like to do is remove the title of the legend, because it is often superfluous: unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + theme(legend.position = "bottom", legend.title = element_blank()) + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") + geom_line() The legend title has to be an element_text object. element_text objects are used with theme to specify how text should be displayed. element_blank() draws nothing and assigns no space (not even blank space). 
If you want to keep the legend title but change it, you need to use element_text(): unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + theme(legend.position = "bottom", legend.title = element_text(colour = "red")) + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") + geom_line() If you want to change the word “division” to something else, you can do so by providing the colour argument to the labs() function: unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + theme(legend.position = "bottom") + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate", colour = "Administrative division") + geom_line() You could modify every feature of the theme like that, but there are built-in themes that you can use: unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + theme_minimal() + theme(legend.position = "bottom", legend.title = element_blank()) + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") + geom_line() For example in the code above, I have used theme_minimal() which I like quite a lot. 
You can also use themes from the ggthemes package, which even contains a Stata theme, if you like it: unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + theme_stata() + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") + geom_line() As you can see, theme_stata() has the legend on the bottom by default, because this is how the legend position is defined within the theme. However the legend title is still there. Let’s remove it: unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + theme_stata() + theme(legend.title = element_blank()) + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") + geom_line() ggthemes even features an Excel 2003 theme (don’t use it though): unemp_lux_data %>% filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>% ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) + theme_excel() + labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") + geom_line() You can create your own theme by using a simple theme, such as theme_minimal(), as a base and then adding your options. We are going to create one theme after we learn how to create our own functions, in Chapter 7. Then, we are going to create a package to share this theme with the world, and we are going to learn how to make packages in Chapter 9. ### 5.3.2 Colour schemes You can also change colour schemes, by specifying either scale_colour_*() or scale_fill_*() functions. scale_colour_*() functions change the colour aesthetic (used for points and lines), while scale_fill_*() functions change the fill aesthetic (used for areas, so for barplots for example). A colour scheme I like is the Highcharts colour scheme. 
```r
unemp_lux_data %>%
  filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
  ggplot(aes(year, unemployment_rate_in_percent,
             group = division, colour = division)) +
  theme_minimal() +
  scale_colour_hc() +
  theme(legend.position = "bottom",
        legend.title = element_blank()) +
  labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz",
       x = "Year",
       y = "Rate") +
  geom_line()
```

An example with a barplot:

```r
ggplot(test_data_long) +
  facet_wrap(~id) +
  geom_bar(aes(variable, value, fill = variable), stat = "identity") +
  geom_text(aes(variable, value + 1.5, label = value)) +
  theme_minimal() +
  scale_fill_hc()
```

It is also possible to define and use your own palette. To use your own colours, you can use `scale_colour_manual()` and `scale_fill_manual()` and specify the hex codes of the colours you want to use:

```r
unemp_lux_data %>%
  filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
  ggplot(aes(year, unemployment_rate_in_percent,
             group = division, colour = division)) +
  theme_minimal() +
  scale_colour_manual(values = c("#FF336C", "#334BFF", "#2CAE00")) +
  theme(legend.position = "bottom",
        legend.title = element_blank()) +
  labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz",
       x = "Year",
       y = "Rate") +
  geom_line()
```

To get the hex codes of colours, you can use this online tool. There is also a very nice package called `colourpicker` that allows you to pick colours from within RStudio. You do not even need to load it to use it, since it comes with an RStudio addin.

For a barplot you would do the same:

```r
ggplot(test_data_long) +
  facet_wrap(~id) +
  geom_bar(aes(variable, value, fill = variable), stat = "identity") +
  geom_text(aes(variable, value + 1.5, label = value)) +
  theme_minimal() +
  theme(legend.position = "bottom",
        legend.title = element_blank()) +
  scale_fill_manual(values = c("#FF336C", "#334BFF", "#2CAE00", "#B3C9C6", "#765234"))
```

For continuous variables, things are a bit different.
Let's first create a plot where we map a continuous variable to the colour argument of `aes()`:

```r
ggplot(diamonds) +
  geom_point(aes(carat, price, colour = depth))
```

To change the colour, we need to use `scale_color_gradient()` and specify a value for low values of the variable and a value for high values of the variable. For example, using the colours of the theme I made for my blog:

```r
ggplot(diamonds) +
  geom_point(aes(carat, price, colour = depth)) +
  scale_color_gradient(low = "#bec3b8", high = "#ad2c6c")
```

## 5.4 Saving plots to disk

There are two ways to save plots to disk: one through the Plots pane in RStudio and another using the `ggsave()` function. Using RStudio, navigate to the Plots pane and click on Export. You can then choose where to save the plot and various other options:

```r
knitr::include_graphics("pics/rstudio_save_plots.gif")
```

This is fine if you only generate one or two plots, but if you generate a large number of them, it is less tedious to use the `ggsave()` function:

```r
my_plot1 <- ggplot(my_data) +
  geom_bar(aes(variable))

ggsave("path/you/want/to/save/the/plot/to/my_plot1.pdf", my_plot1)
```

There are other options that you can specify, such as the width and height, resolution, units, etc.

## 5.5 Exercises

### Exercise 1

Load the `Bwages` dataset from the `Ecdat` package. Your first task is to create a new variable, `educ_level`, which is a factor variable that equals:

- "Primary school" if `educ == 1`
- "High school" if `educ == 2`
- "Some university" if `educ == 3`
- "Master's degree" if `educ == 4`
- "Doctoral degree" if `educ == 5`

Use `case_when()` for this. Then, plot a scatter plot of wages on experience, by education level. Add a theme that you like, and remove the title of the legend.

The scatter plot is not very useful, because you cannot make anything out. Instead, use another geom that shows you a non-parametric fit with confidence bands.

### References

Wilkinson, Leland. 2006. *The Grammar of Graphics*. Springer Science & Business Media.
# What's the difference between: operational, denotational and axiomatic semantics?

Recap of the terms from the dictionary:

- semantics: the study of meaning in a language (words, phrases, etc.) and of language constructs in programming languages (basically any syntactically valid part of a program that generates an instruction or a sequence of instructions that perform a specific task when executed by the CPU)
- operational: related to the activities involved in doing or producing something
- denotational: the main meaning of a word
- axiomatic: obviously true and therefore not needing to be proved

**Operational semantics**

> This says that the meaning of a language construct is specified by the computation it induces. It is of interest how the effect of a computation is produced.

My understanding of this is that it basically describes the meaning of all the operations involved in a program (from the most basic to the most complex). Examples:

- arithmetic operations: `1 + 1`, `10 ** 2`, `19 // 3`, etc. In this case it analyzes the meaning of the steps involved in producing a result given n operands and n operators. This can be further boiled down to what each operand means (so in my examples each number is defined in the domain of natural numbers [1, 2, ..., n], etc.)
- assignment operations: `x = 5`, `y = 5 ** 2`, `z = 10 ** 2 // 3 * (99 + 1024)`, etc. In this case it involves an evaluation of the value of the mathematical expression on the right and assigning it to the identifier on the left.
- augmented assignment operations: `x += y`, `z *= t`, etc. In this case it involves an evaluation of each identifier once, performing an arithmetic operation first, followed by an assignment operation last.
- etc.

**Denotational semantics**

> This says that meanings are modelled by mathematical objects that represent the effect of executing the constructs. It is of interest only the effect of a computation, not how it is produced.
My understanding of this is basically that it's related to mathematical functions, which take something as an input, do some computation which you don't care about, and produce a result, which you do care about. Since denotational means the main meaning, I take this as: the name of your functions/methods/identifiers should constrain the possible interpretations of what they do, ideally to be exact. Examples:

- `sort(iterable)`: should pretty much do what it says, take an unordered iterable as its input and return it ordered (you don't care how it does that under the hood)
- `min(iterable)`: should take an iterable and return the smallest value (you don't care how it does it)
- `max(iterable)`: should take an iterable and return the largest value (you don't care how it does it)
- etc.

**Axiomatic semantics**

> Some properties of the effect of executing the constructs are expressed as assertions. Some aspects of the executions may be ignored.

My understanding of this is that it's related to boolean algebra and logic. Examples:

- `expr1 and expr2`: if `expr1` is False then the entire boolean expression is False, it short-circuits, and `expr2` is not evaluated
- or even compound statements:

```
if expr1:
    ...
elif expr2:
    ...
elif expr3:
    ...
else:
    ...
```

The effect is the result of executing the above construct, and you assert its value based on whichever boolean expression yields true, the rest of them being ignored.

**Question:** Are my examples for each category of semantics accurate, and if not, can someone please provide some simple (not formal, as Wikipedia does it with mathematical formulas, etc.) examples of each semantic category? Examples that would match what we encounter in normal, day-to-day computer programs.

The gist of it is that semantics ties an identifier (word, symbol, sign) to its real meaning. The way it does this can be further boiled down to:

- Operational semantics ties any type of operation (arithmetic, assignment, etc.) to the computation involved.
- Denotational semantics ties identifiers to their meaning (so this is basically the most common one in programming). It's that when you define a function, it should do what it says. So for instance if you define a function named `add_numbers(x, y)`, it should add `x` to `y` and not multiply them.
- Axiomatic semantics ties the outcome of constructs to assertions: basically boolean algebra (short-circuits in logical expressions, code branching with if-elif-else, switch-cases, etc.).

I think your examples show you do somehow understand the basic points of the several styles of semantics. Still, note that the whole point of having a semantics of a programming language is to have a formal, mathematically rigorous description of the program behavior. That inherently involves math and several formulae; one can't really do without math. Math is used as the foundation of every science, and computer science is no exception to this fact.

You are correct when you say that the denotational semantics for a program is, very roughly, a function from its input to its output. E.g. "this program computes the factorial function on naturals". Things start being more complex here when we want to model that the program might not terminate, or when the result is not a simple number but rather a record (an object, if you prefer) containing functions, which in turn might not terminate (or return another record, which ...). The mathematical models which arise there become less and less trivial as soon as you allow for more complex data types (e.g. polymorphic functions).

Operational semantics of imperative programs roughly relates the initial variable state to the final one. E.g. "if we start the program with $$x=3,y=4$$ we end up with $$x=7,y=2$$". If the semantics is in "small step" style, we further get to see all the intermediate values of variables, as if we were running a debugger and stepping through each program line, so to speak.
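The "small step" idea can be made concrete with a toy interpreter. This is my own minimal sketch, not any standard formalism: expressions are plain Python tuples, and `step()` performs exactly one reduction, so every intermediate state is visible, just like stepping through a debugger.

```python
# A toy small-step operational semantics for arithmetic expressions.
# An expression is either an int (a value) or a tuple (op, left, right).

def step(expr):
    """Reduce the leftmost innermost redex by exactly one step."""
    if isinstance(expr, int):
        raise ValueError("values do not step")
    op, left, right = expr
    if not isinstance(left, int):
        return (op, step(left), right)
    if not isinstance(right, int):
        return (op, left, step(right))
    return {"+": left + right, "*": left * right}[op]

def trace(expr):
    """Run the small-step relation to a value, recording every state."""
    states = [expr]
    while not isinstance(expr, int):
        expr = step(expr)
        states.append(expr)
    return states

# (1 + 2) * (3 + 4): each intermediate expression appears in the trace.
states = trace(("*", ("+", 1, 2), ("+", 3, 4)))
```

Because every intermediate state is recorded, counting reduction steps (or multiplications) is trivial here, which is exactly the kind of information a pure input/output denotation would not expose.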
Such "small step" semantics allows one to count, say, the number of multiplications performed by the program, which cannot be seen in its denotation alone (however, note that, if we consider that count to be a kind of "observable output", we could define another denotational semantics returning that count as well; the semantics styles are rather flexible).

Axiomatic semantics instead roughly works with sets of states instead of single ones. It can express statements like "if we start the program with variables having any values satisfying $$x>y>5$$ we end up with $$x<100,y=x^2$$". Note that we do not know the exact value of each variable here, but only a property that the variables satisfy, expressed using logic.

That being said, if you want to achieve a solid understanding, you really have to read about the technical details: all the mathematical definitions and basic theorems. This is not hard, but it is not trivial either; typically, a single university course can cover the basics of each style.

• @George I would not say anything more than "the semantics of this program is not the intended one". This is true whatever semantics you use. Keep in mind that denotational/operational/axiomatic are semantics styles, that is, they are three distinct ways to define (essentially) the same thing, the program behavior. They are not defining three independent aspects of a program. So, if the semantics is wrong (not the intended one), it is usually wrong according to all the styles. We use three styles because each has its own strengths/weaknesses when we write proofs, not to obtain more information. – chi Mar 22 '19 at 16:35

• Thanks for your answer... so for instance if I have the following scenario: `def add_numbers(x, y): return x ** y`, and then I have something like `result = add_numbers(3, 3)`. Clearly this is a semantic error, because I'm expecting the function to add my 2 numbers and yield 6, but it will yield 27. But what kind of semantic error? Denotational or axiomatic?
Since you said axiomatic semantics deals with sets of data, I assume this would fall into denotational? In general, naming things (variables, constants, functions, classes, etc.) falls into denotational, correct? – George Mar 22 '19 at 16:35

• @George You may name your variables and functions however you like. The semantics of your program does not "see" those names. Of course, the choice of the name `add_numbers` for the function you gave would be highly confusing to any reader of your program, including yourself, but that has nothing to do with the semantics of the program. – Daniel Gerigk Nov 6 '19 at 20:35
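The axiomatic view described in the answer (quantifying over every state that satisfies a precondition) can also be illustrated in code. The triple below, {x >= 0} y := x * x {y >= x}, is my own toy example, and it is checked by brute force over a finite slice of the state space rather than proved:

```python
def program(state):
    """The construct being specified: y := x * x."""
    new_state = dict(state)
    new_state["y"] = new_state["x"] * new_state["x"]
    return new_state

def pre(state):   # precondition: a *set* of states, given as a predicate
    return state["x"] >= 0

def post(state):  # postcondition asserted about every final state
    return state["y"] >= state["x"]

# Axiomatic claim: every state satisfying `pre` ends in a state satisfying `post`.
holds = all(post(program({"x": x})) for x in range(-50, 51) if pre({"x": x}))

# A stronger postcondition, y > x, is not valid: squaring fixes 0 and 1.
counterexample = next(x for x in range(-50, 51) if not (x * x > x))
```

Note how nothing here pins down a single run: the specification constrains the whole set of runs whose initial state satisfies the precondition, which is the point of the axiomatic style.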
# Does the polarized Kagome antiferromagnet contain Dirac or Weyl points? I've been reading about frustrated quantum magnets lately and a prominent topic is the study of antiferromagnets on the Kagome lattice. A calculation of the spectrum for the sort of model I have in mind is presented in figure 1 (a) of this paper. The spectrum of the theory is very interesting, partly because there are often dispersionless bands, which is obviously odd. I want to know about a more banal part of the spectrum which I haven't seen discussed elsewhere: the linear band crossings of the top two bands at the $$K$$ point. These band crossings look like they could be a sign of some kind of topological non-triviality, like Dirac or Weyl points. However I haven't found any references discussing the nature of these gapless points, and it's also possible that they are just protected by some kind of symmetry. Either way, I would like to understand the nature of those gapless points, so if someone is aware of the origin of this degeneracy a discussion or reference on them would be very helpful. Thanks!
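For what it's worth, the crossing at $$K$$ already appears in the simplest related setting: the one-magnon problem on a fully polarised lattice reduces to a nearest-neighbour hopping (tight-binding) model, and on the kagome lattice that model has a flat band plus two dispersive bands that meet linearly at the zone corner. The sketch below is my own minimal version (lattice vectors, hopping sign t > 0, and units are my conventions, not the paper's):

```python
import numpy as np

t = 1.0                               # hopping scale (assumed)
a1 = np.array([1.0, 0.0])             # Bravais vectors of the underlying
a2 = np.array([0.5, np.sqrt(3) / 2])  # triangular lattice

def hamiltonian(k):
    """3x3 Bloch Hamiltonian: one orbital on each of the three kagome sites."""
    c1 = np.cos(k @ a1 / 2)
    c2 = np.cos(k @ a2 / 2)
    c3 = np.cos(k @ (a2 - a1) / 2)
    return -2 * t * np.array([[0, c1, c2],
                              [c1, 0, c3],
                              [c2, c3, 0]])

def bands(k):
    return np.linalg.eigvalsh(hamiltonian(k))   # eigenvalues, ascending

K = np.array([4 * np.pi / 3, 0.0])    # Brillouin-zone corner
e_K = bands(K)                        # two degenerate bands plus flat band at 2t

def splitting(eps):
    """Gap between the two lowest bands a distance eps from K."""
    e = bands(K + np.array([eps, 0.0]))
    return e[1] - e[0]
```

In this toy model the two lowest bands are exactly degenerate at $$K$$ and split linearly with distance from it (halving the distance halves the gap), which is the Dirac-cone behaviour asked about; whether the degeneracy in the actual magnon spectrum is symmetry-protected is a separate question this sketch cannot answer.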
GATE CSE 1999

**Question 1** (Discrete Mathematics: Probability Theory)

Suppose that the expectation of a random variable X is 5. Which of the following statements is true?

- A: There is a sample point at which X has the value 5.
- B: There is a sample point at which X has a value greater than 5.
- C: There is a sample point at which X has a value greater than or equal to 5.
- D: None of the above

**Question 2** (Discrete Mathematics: Relation)

The number of binary relations on a set with n elements is:

- A: $n^2$
- B: $2^n$
- C: $2^{n^2}$
- D: None of the above

**Question 3** (Discrete Mathematics: Combination)

The number of binary strings of n zeros and k ones in which no two ones are adjacent is:

- A: $^{n-1}C_k$
- B: $^nC_k$
- C: $^nC_{k+1}$
- D: None of the above

**Question 4** (Theory of Computation: Finite Automata)

Consider the regular expression (0 + 1)(0 + 1) ... n times. The minimum-state finite automaton that recognizes the language represented by this regular expression contains:

- A: n states
- B: n + 1 states
- C: n + 2 states
- D: None of the above

**Question 5** (Theory of Computation: Context Free Language)

Context-free languages are closed under:

- A: Union, intersection
- B: Union, Kleene closure
- C: Intersection, complement
- D: Complement, Kleene closure

**Question 6** (Theory of Computation: Push-down Automata)

Let $L_1$ be the set of all languages accepted by a PDA by final state and $L_2$ the set of all languages accepted by empty stack. Which of the following is true?

- A: $L_1 = L_2$
- B: $L_1 \supset L_2$
- C: $L_1 \subset L_2$
- D: None

**Question 7** (Digital Logic: Boolean Algebra)

Which of the following expressions is not equivalent to $\bar{x}$?

- A: x NAND x
- B: x NOR x
- C: x NAND 1
- D: x NOR 1

**Question 8** (Digital Logic: Boolean Algebra)

Which of the following functions implements the Karnaugh map shown below?
- A: $\bar{A}B + CD$
- B: $D(C+A)$
- C: $AD+\bar{A}B$
- D: $(C+D) (\bar{C}+D) + (A+B)$

**Question 9** (Operating System: Memory Management)

Listed below are some operating system abstractions (in the left column) and the hardware components (in the right column):

$\small \begin{array}{cl|cl}\hline \text{(A)}& \text{Thread} & \text{1.}& \text{Interrupt} \\\hline \text{(B)}& \text{Virtual address space} & \text{2.}& \text{Memory} \\\hline \text{(C)} &\text{File system} & \text{3.} &\text{CPU} \\\hline \text{(D)} &\text{Signal} & \text{4.}& \text{Disk} \\\hline \end{array}$

- A: (A) - 2, (B) - 4, (C) - 3, (D) - 1
- B: (A) - 1, (B) - 2, (C) - 3, (D) - 4
- C: (A) - 3, (B) - 2, (C) - 4, (D) - 1
- D: (A) - 4, (B) - 1, (C) - 2, (D) - 3

**Question 10** (Operating System: Disk Scheduling)

Which of the following disk scheduling strategies is likely to give the best throughput?

- A: Farthest cylinder next
- B: Nearest cylinder next
- C: First come first served
- D: Elevator algorithm
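Two of the counting questions above can be sanity-checked by brute force. The snippet below is my own illustration, not the official answer key: for Question 2 the enumeration matches $2^{n^2}$ (a binary relation is any subset of $S \times S$), and for Question 3 it matches $^{n+1}C_k$ (place the k ones into the n + 1 gaps around the zeros), which is not among the listed closed forms.

```python
from itertools import product
from math import comb

def count_relations(n):
    """Question 2: enumerate all subsets of the n*n pairs of S x S."""
    cells = n * n
    return sum(1 for _ in product([0, 1], repeat=cells))

def count_no_adjacent_ones(n, k):
    """Question 3: length-(n+k) binary strings with k ones, no two adjacent."""
    total = 0
    for bits in product("01", repeat=n + k):
        s = "".join(bits)
        if s.count("1") == k and "11" not in s:
            total += 1
    return total

# count_no_adjacent_ones(4, 2) == comb(5, 2) == 10, for example.
```

Enumerating all strings is exponential, so this only works for small n and k, but that is enough to discriminate between the candidate formulas.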
# zbMATH — the first resource for mathematics

Author ID: dehghan.mehdi
Published as: Deghan, Mehdi; Dehghan, M.; Dehghan, Mehdi
Homepage: https://aut.ac.ir/cv/2123/Mehdi%20Dehghan%20Takht%20Fooladi
External Links: MGP · Wikidata · ORCID · dblp
Documents Indexed: 561 Publications since 1993

#### Serials

76 Applied Mathematics and Computation
47 Computers & Mathematics with Applications
44 Numerical Methods for Partial Differential Equations
36 Journal of Computational and Applied Mathematics
34 Engineering Analysis with Boundary Elements
28 Applied Numerical Mathematics
22 International Journal of Computer Mathematics
21 Applied Mathematical Modelling
19 Computer Physics Communications
18 Mathematical Methods in the Applied Sciences
18 Mathematical and Computer Modelling
15 Journal of Vibration and Control
12 Chaos, Solitons and Fractals
12 Numerical Algorithms
10 Mathematical Problems in Engineering
10 Communications in Nonlinear Science and Numerical Simulation
9 Computer Methods in Applied Mechanics and Engineering
7 Computational and Applied Mathematics
7 Physica Scripta
6 Journal of Computational Physics
6 Kybernetes
6 International Journal for Numerical Methods in Biomedical Engineering
5 Mathematics and Computers in Simulation
5 Computational Mechanics
5 International Journal of Numerical Methods for Heat & Fluid Flow
4 Applied Mathematics Letters
4 Journal of Difference Equations and Applications
4 CMES. Computer Modeling in Engineering & Sciences
3 Physics Letters. A
3 Communications in Numerical Methods in Engineering
3 Nonlinear Dynamics
2 Applicable Analysis
2 Computers and Electrical Engineering
2 International Journal of Engineering Science
2 International Journal of Systems Science
2 International Journal for Numerical Methods in Engineering
2 Nonlinear Analysis. Theory, Methods & Applications.
Series A: Theory and Methods
2 Bulletin of the Iranian Mathematical Society
2 Linear Algebra and its Applications
2 Mediterranean Journal of Mathematics
2 Inverse Problems in Science and Engineering
2 Numerical Mathematics: Theory, Methods and Applications
1 International Journal of Modern Physics B
1 IMA Journal of Numerical Analysis
1 International Journal for Numerical Methods in Fluids
1 International Journal of Solids and Structures
1 Linear and Multilinear Algebra
1 Rocky Mountain Journal of Mathematics
1 Zhurnal Vychislitel’noĭ Matematiki i Matematicheskoĭ Fiziki
1 Computing
1 Numerical Functional Analysis and Optimization
1 Bulletin of the Korean Mathematical Society
1 Operations Research Letters
1 Acta Mathematica Hungarica
1 International Journal of Approximate Reasoning
1 Journal of Scientific Computing
1 Pattern Recognition
1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences
1 Pattern Recognition Letters
1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering
1 Bulletin of the Belgian Mathematical Society - Simon Stevin
1 Integral Transforms and Special Functions
1 ELA. The Electronic Journal of Linear Algebra
1 Mathematics and Mechanics of Solids
1 Far East Journal of Applied Mathematics
1 International Journal of Numerical Modelling
1 International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems
1 Nonlinear Analysis. Real World Applications
1 Mathematical Modelling and Analysis
1 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis
1 Journal of Systems Science and Complexity
1 Acta Mathematica Scientia. Series B.
(English Edition)
1 Journal of Numerical Mathematics
1 Journal of Concrete and Applicable Mathematics
1 EURASIP Journal on Advances in Signal Processing
1 Applied and Computational Mathematics
1 Computational Methods for Differential Equations

#### Fields

465 Numerical analysis (65-XX)
234 Partial differential equations (35-XX)
46 Integral equations (45-XX)
41 Fluid mechanics (76-XX)
37 Ordinary differential equations (34-XX)
36 Linear and multilinear algebra; matrix theory (15-XX)
34 Approximations and expansions (41-XX)
30 Difference and functional equations (39-XX)
21 Biology and other natural sciences (92-XX)
17 Mechanics of deformable solids (74-XX)
16 Real functions (26-XX)
14 Calculus of variations and optimal control; optimization (49-XX)
11 Special functions (33-XX)
9 Probability theory and stochastic processes (60-XX)
9 Statistical mechanics, structure of matter (82-XX)
8 Computer science (68-XX)
8 Systems theory; control (93-XX)
7 Dynamical systems and ergodic theory (37-XX)
7 Classical thermodynamics, heat transfer (80-XX)
6 Harmonic analysis on Euclidean spaces (42-XX)
6 Optics, electromagnetic theory (78-XX)
6 Quantum theory (81-XX)
5 General algebraic systems (08-XX)
4 Operations research, mathematical programming (90-XX)
4 Information and communication theory, circuits (94-XX)
3 Mechanics of particles and systems (70-XX)
2 Integral transforms, operational calculus (44-XX)
2 Functional analysis (46-XX)
2 Operator theory (47-XX)
2 Astronomy and astrophysics (85-XX)
2 Geophysics (86-XX)
2 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
1 Functions of a complex variable (30-XX)
1 Potential theory (31-XX)

#### Citations contained in zbMATH

500 Publications have been cited 10,903 times in 3,708 Documents

A new operational matrix for solving fractional-order differential equations.
Zbl 1189.65151 2010 Finite difference procedures for solving a problem arising in modeling and design of certain optoelectronic devices. Zbl 1089.65085 Dehghan, Mehdi 2006 A numerical method for solution of the two-dimensional sine-Gordon equation using the radial basis functions. Zbl 1155.65379 Dehghan, Mehdi; Shokri, Ali 2008 Solving nonlinear fractional partial differential equations using the homotopy analysis method. Zbl 1185.65187 Dehghan, Mehdi; Manafian, Jalil; Saadatmandi, Abbas 2010 Numerical solution of the nonlinear Klein-Gordon equation using radial basis functions. Zbl 1168.65398 Dehghan, Mehdi; Shokri, Ali 2009 On the convergence of He’s variational iteration method. Zbl 1120.65112 Tatari, Mehdi; Dehghan, Mehdi 2007 Solution of delay differential equations via a homotopy perturbation method. Zbl 1145.34353 Shakeri, Fatemeh; Dehghan, Mehdi 2008 On the solution of an initial-boundary value problem that combines Neumann and integral condition for the wave equation. Zbl 1059.65072 Dehghan, Mehdi 2005 The general coupled matrix equations over generalized bisymmetric matrices. Zbl 1187.65042 Dehghan, Mehdi; Hajarian, Masoud 2010 An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation. Zbl 1154.65023 Dehghan, Mehdi; Hajarian, Masoud 2008 The one-dimensional heat equation subject to a boundary integral specification. Zbl 1139.35352 Dehghan, Mehdi 2007 Parameter determination in a partial differential equation from the overspecified data. Zbl 1080.35174 Dehghan, M. 2005 Numerical simulation of two-dimensional sine-Gordon solitons via a local weak meshless technique based on the radial point interpolation method (RPIM). Zbl 1205.65267 Dehghan, Mehdi; Ghesmati, Arezou 2010 A tau approach for solution of the space fractional diffusion equation. Zbl 1228.65203 2011 An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices. 
Zbl 1185.65054 Dehghan, Mehdi; Hajarian, Masoud 2010 On generalized moving least squares and diffuse derivatives. Zbl 1252.65037 Mirzaei, Davoud; Schaback, Robert; Dehghan, Mehdi 2012 Efficient techniques for the second-order parabolic equation subject to nonlocal specifications. Zbl 1063.65079 Dehghan, Mehdi 2005 The sinc-Legendre collocation method for a class of fractional convection-diffusion equations with variable coefficients. Zbl 1250.65121 2012 A numerical method for solving the hyperbolic telegraph equation. Zbl 1145.65078 Dehghan, Mehdi; Shokri, Ali 2008 A meshless based method for solution of integral equations. Zbl 1202.65174 Mirzaei, Davoud; Dehghan, Mehdi 2010 A computational study of the one-dimensional parabolic equation subject to nonclassical boundary specifications. Zbl 1084.65099 Dehghan, Mehdi 2006 Meshless local Petrov-Galerkin (MLPG) method for the unsteady magnetohydrodynamic (MHD) flow through pipe with arbitrary wall conductivity. Zbl 1159.76034 Dehghan, Mehdi; Mirzaei, Davoud 2009 A numerical method for two-dimensional Schrödinger equation using collocation and radial basis functions. Zbl 1126.65092 Dehghan, Mehdi; Shokri, Ali 2007 Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations. Zbl 1227.65037 Dehghan, Mehdi; Hajarian, Masoud 2011 Numerical solution of the Klein-Gordon equation via He’s variational iteration method. Zbl 1179.81064 Shakeri, Fatemeh; Dehghan, Mehdi 2008 Solution of the second-order one-dimensional hyperbolic telegraph equation by using the dual reciprocity boundary integral equation (DRBIE) method. Zbl 1244.65137 Dehghan, Mehdi; Ghesmati, Arezou 2010 Inverse problem of diffusion equation by He’s homotopy perturbation method. Zbl 1110.35354 Shakeri, Fatemeh; Dehghan, Mehdi 2007 An approximation algorithm for the solution of the nonlinear Lane-Emden type equations arising in astrophysics using Hermite functions collocation method. 
Zbl 1216.65098 Parand, K.; Dehghan, Mehdi; Rezaei, A. R.; Ghaderi, S. M. 2010 On the solution of the non-local parabolic partial differential equations via radial basis functions. Zbl 1168.65403 Tatari, Mehdi; Dehghan, Mehdi 2009 The use of the decomposition procedure of Adomian for solving a delay differential equation arising in electrodynamics. Zbl 1159.78319 Dehghan, Mehdi; Shakeri, Fatemeh 2008 Dehghan, Mehdi; Hamidi, Asgar; Shakourifar, Mohammad 2007 Numerical solution of hyperbolic telegraph equation using the Chebyshev tau method. Zbl 1186.65136 2010 A high-order and unconditionally stable scheme for the modified anomalous fractional sub-diffusion equation with a nonlinear source term. Zbl 1287.65064 Mohebbi, Akbar; Abbaszadeh, Mostafa; Dehghan, Mehdi 2013 The construction of operational matrix of fractional derivatives using B-spline functions. Zbl 1276.65015 Lakestani, Mehrdad; Dehghan, Mehdi; Irandoust-Pakchin, Safar 2012 A numerical technique for solving fractional optimal control problems. Zbl 1228.65109 Lotfi, A.; Dehghan, Mehdi; Yousefi, S. A. 2011 The use of a meshless technique based on collocation and radial basis functions for solving the time fractional nonlinear Schrödinger equation arising in quantum mechanics. Zbl 1352.65397 Mohebbi, Akbar; Abbaszadeh, Mostafa; Dehghan, Mehdi 2013 A compact split-step finite difference method for solving the nonlinear Schrödinger equations with constant and variable coefficients. Zbl 1206.65207 Dehghan, Mehdi; Taleei, Ameneh 2010 Rational Legendre pseudospectral approach for solving nonlinear differential equations of Lane-Emden type. Zbl 1177.65100 Parand, K.; Shahini, M.; Dehghan, Mehdi 2009 Solution of a partial differential equation subject to temperature overspecification by He’s homotopy perturbation method. Zbl 1117.35326 Dehghan, Mehdi; Shakeri, Fatemeh 2007 A method for solving partial differential equations via radial basis functions: application to the heat equation. 
Zbl 1244.80024 Tatari, Mehdi; Dehghan, Mehdi 2010 An iterative algorithm for solving a pair of matrix equations $$AYB=E$$, $$CYD=F$$ over generalized centro-symmetric matrices. Zbl 1165.15301 Dehghan, Mehdi; Hajarian, Masoud 2008 An inverse problem of finding a source parameter in a semilinear parabolic equation. Zbl 0995.65098 Dehghan, Mehdi 2001 Combination of meshless local weak and strong (MLWS) forms to solve the two-dimensional hyperbolic telegraph equation. Zbl 1244.65147 Dehghan, Mehdi; Ghesmati, Arezou 2010 A not-a-knot meshless method using radial basis functions and predictor-corrector scheme to the numerical solution of improved Boussinesq equation. Zbl 1426.76569 Shokri, Ali; Dehghan, Mehdi 2010 The meshless local Petrov-Galerkin (MLPG) method for the generalized two-dimensional nonlinear Schrödinger equation. Zbl 1244.65139 Dehghan, Mehdi; Mirzaei, Davoud 2008 Finite iterative algorithms for the reflexive and anti-reflexive solutions of the matrix equation $$A_1X_1B_1+A_2X_2B_2=C$$. Zbl 1171.15310 Dehghan, Mehdi; Hajarian, Masoud 2009 Application of He’s homotopy perturbation method for non-linear system of second-order boundary value problems. Zbl 1162.34307 Saadatmandi, Abbas; Dehghan, Mehdi; Eftekhari, Ali 2009 Identification of a time-dependent coefficient in a partial differential equation subject to an extra measurement. Zbl 1069.65104 Dehghan, Mehdi 2005 Variational iteration method for solving a generalized pantograph equation. Zbl 1189.65172 2009 A moving least square reproducing polynomial meshless method. Zbl 1284.65137 Salehi, Rezvan; Dehghan, Mehdi 2013 The numerical solution of the non-linear integro-differential equations based on the meshless method. Zbl 1243.65154 Dehghan, Mehdi; Salehi, Rezvan 2012 Numerical solution of the system of second-order boundary value problems using the local radial basis functions based differential quadrature collocation method. 
# Functions of $\mathbb{R}^d$ preserving convexity of sets

Consider a function $$f : \mathbb{R}^d \to \mathbb{R}^d$$, with $$d\geq 2$$, such that:

• $$f$$ is injective,
• for any convex set $$A \subseteq \mathbb{R}^d$$, $$f(A)$$ is also convex.

What can we say about $$f$$? In particular, is $$f$$ necessarily affine? I tend to think yes, but I can't prove it.

The answer is yes; this is a known result:

Theorem 4. If $$V$$ and $$W$$ are real vector spaces with $$\dim V>1$$, and $$f:V\to W$$ is a one-to-one mapping which preserves convexity, then $$f$$ is either linear (if $$f(0)=0$$) or is the translate of a linear map.
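To build intuition for how rigid the convexity-preservation hypothesis is, one can check numerically that a simple injective but non-affine map fails to preserve convexity. The map $f(x,y)=(x,y^3)$ below is a hypothetical example chosen for illustration; it is not taken from the theorem above.

```python
def f(p):
    # An injective, non-affine map on R^2 (illustrative choice).
    x, y = p
    return (x, y ** 3)

def on_image_of_diagonal(q, tol=1e-9):
    # The image of the segment {(t, t) : 0 <= t <= 1} under f is the
    # curve {(t, t^3)}, so membership reduces to checking y = x^3.
    x, y = q
    return abs(y - x ** 3) < tol

p1 = f((0.5, 0.5))   # (0.5, 0.125)
p2 = f((1.0, 1.0))   # (1.0, 1.0)
mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)  # (0.75, 0.5625)

# p1 and p2 lie in the image of the segment, but their midpoint does not
# (0.75 ** 3 = 0.421875 != 0.5625), so the image of a convex set is not
# convex: f is injective yet not convexity-preserving.
print(on_image_of_diagonal(p1), on_image_of_diagonal(p2), on_image_of_diagonal(mid))
```

This is consistent with the theorem: any injective map that did preserve convexity would have to be affine, and $(x,y)\mapsto(x,y^3)$ is not.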
# Quantum circuits for quantum operations

September 7, 2016 - 11:00am

Speaker: Roger Colbeck
Institution: U. York

Every quantum gate can be decomposed into a sequence of single-qubit gates and controlled-NOTs. In many implementations, single-qubit gates are relatively 'cheap' to perform compared to C-NOTs (for instance, being less susceptible to noise), and hence it is desirable to minimize the number of C-NOT gates required to implement a circuit.

I will consider the task of constructing a generic isometry from m qubits to n qubits, while trying to minimize the number of C-NOT gates required. I will show a lower bound and then give an explicit gate decomposition that gets within a factor of about two of this bound. Through Stinespring's theorem this points to a C-NOT-efficient way to perform an arbitrary quantum operation. I will then discuss the case of quantum operations in more detail.

CSS 3100A
# tensor product and einsum in numpy

I am trying to understand the einsum function in NumPy. In this documentation, the last example is:

```python
>>> a = np.arange(60.).reshape(3,4,5)
>>> b = np.arange(24.).reshape(4,3,2)
>>> np.einsum('ijk,jil->kl', a, b)
array([[ 4400.,  4730.],
       [ 4532.,  4874.],
       [ 4664.,  5018.],
       [ 4796.,  5162.],
       [ 4928.,  5306.]])
>>> np.einsum(a, [0,1,2], b, [1,0,3], [2,3])
array([[ 4400.,  4730.],
       [ 4532.,  4874.],
       [ 4664.,  5018.],
       [ 4796.,  5162.],
       [ 4928.,  5306.]])
>>> np.tensordot(a, b, axes=([1,0],[0,1]))
array([[ 4400.,  4730.],
       [ 4532.,  4874.],
       [ 4664.,  5018.],
       [ 4796.,  5162.],
       [ 4928.,  5306.]])
```

I don't understand what's going on with the np.einsum('ijk,jil->kl', a, b) call. Can someone express it in a more explicit way, something like $$\sum_{???}a_{ijk}b_{ijk}$$? I'm not familiar with the tensor product, so that also contributes to my struggle here. I'm learning this to solve a problem of mine.

- You will find a very good explanation by @ajcr here – Ramon Crehuet Jan 11 at 20:53

The result is a new array c, with $$c_{kl} = \sum_{i,j} a_{ijk} b_{jil} .$$

- Thanks! And I found np.einsum('ijk,jil', a, b) also gives the same result, without the ->kl part. – LWZ Mar 21 '13 at 15:10
- The reason for this is that what comes after -> is how the output is treated. As you sum along the repeated indices $i$ and $j$, the output depends on $k$ and $l$; the default without the -> is to keep it this way. If you want to sum over those indices as well (sum the final columns or rows), you can do it like this: np.einsum('ijk,jil->k', a, b). If you want to sum over both columns and rows, you could do np.einsum('ijk,jil->', a, b). – Ramon Crehuet Jan 11 at 20:52
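One way to make the summation $c_{kl} = \sum_{i,j} a_{ijk} b_{jil}$ concrete (a sketch, assuming NumPy is available) is to compare `np.einsum` against the explicit nested loops it abbreviates:

```python
import numpy as np

a = np.arange(60.).reshape(3, 4, 5)
b = np.arange(24.).reshape(4, 3, 2)

# 'ijk,jil->kl' means: c[k, l] = sum over i and j of a[i, j, k] * b[j, i, l]
c = np.einsum('ijk,jil->kl', a, b)

# The same contraction written as explicit loops: k and l are the free
# (output) indices, i and j are the summed (repeated) indices.
c_loops = np.zeros((5, 2))
for k in range(5):
    for l in range(2):
        for i in range(3):
            for j in range(4):
                c_loops[k, l] += a[i, j, k] * b[j, i, l]

print(np.allclose(c, c_loops))  # True
print(c[0, 0])                  # 4400.0
```

The loop version is of course far slower; einsum (or tensordot) performs the same contraction in optimized compiled code.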
## Definition: Generalization of the Least Common Multiple

Let $(R,\cdot,+)$ be an integral domain with the multiplicative neutral element $1,$ and let $M\subseteq R$ be a finite subset. An element $a\in R$ is called the least common multiple of $M$ if and only if:

• $m\mid a\quad\forall m\in M$, i.e. all elements $m\in M$ are divisors of $a$, i.e. $a$ is a common multiple of $M$, and
• $m\mid a'\quad\forall m\in M\Rightarrow a\mid a'$, i.e. $a$ divides any other common multiple $a'$ of $M.$

We express these two conditions being fulfilled simultaneously for $a$ by writing $a=\operatorname{lcm}(M).$

created: 2019-06-27 21:08:24 | modified: 2019-06-27 21:50:10 | by: bookofproofs | references: [8250]
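In the integral domain $\mathbb{Z}$, both defining conditions can be checked concretely. A minimal sketch (the helper names below are mine, not part of the definition), using the identity $\operatorname{lcm}(a,b)\cdot\gcd(a,b)=|ab|$ and folding the pairwise lcm over the finite set:

```python
from functools import reduce
from math import gcd

def lcm2(a, b):
    # lcm of two nonzero integers via lcm(a, b) * gcd(a, b) = |a * b|
    return abs(a * b) // gcd(a, b)

def lcm(M):
    # Least common multiple of a finite set of nonzero integers,
    # folding the pairwise lcm over the set.
    return reduce(lcm2, M)

M = [4, 6, 10]
a = lcm(M)  # 60

# First condition: a is a common multiple of M.
assert all(a % m == 0 for m in M)

# Second condition: a divides any other common multiple a' of M
# (spot-checked on a few common multiples).
for a_prime in (60, 120, 180, 600):
    assert all(a_prime % m == 0 for m in M)  # a' is a common multiple
    assert a_prime % a == 0                  # and a divides it

print(a)  # 60
```

In a general integral domain the lcm, when it exists, is unique only up to multiplication by a unit; in $\mathbb{Z}$ the convention of taking the nonnegative representative fixes it.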
# Experimental study of the 2s and 2p populations of hydrogen atoms resulting from the interaction of 0.8-MeV/amu H$^+$, H$^0$ and H$_2^+$ projectiles with thin carbon foils

Document type: Journal articles
http://hal.in2p3.fr/in2p3-00023040
Contributor: Sylvie Flores
Submitted on: Tuesday, April 11, 2000 - 3:43:08 PM
Last modification on: Thursday, April 8, 2021 - 2:28:02 PM

### Identifiers

• HAL Id: in2p3-00023040, version 1

### Citation

A. Clouvas, M.J. Gaillard, J.-C. Poizat, J. Remillieux, A. Denis, et al. Experimental study of the 2s and 2p populations of hydrogen atoms resulting from the interaction of 0.8-MeV/amu H$^+$, H$^0$ and H$_2^+$ projectiles with thin carbon foils. Physical Review A, American Physical Society, 1985, 31, pp. 84-89. ⟨in2p3-00023040⟩
# Math Help - square of a function is not integrable

1. ## square of a function is not integrable Is there a function f: R->R that is integrable but f^2 is not? Why?

2. ## Re: square of a function is not integrable Originally Posted by parklover Is there a function f: R->R that is integrable but f^2 is not? Why? Theorem: If $f$ is an integrable function on $[a,b]$ then $f^2$ is an integrable function on $[a,b]$. P.S. In response to reply #3: it depends on how one defines the integral. Most often Riemann integrals require bounded functions. That example is not a bounded function. The term is improper integral.

3. ## Re: square of a function is not integrable Consider $f(x) =\begin{cases} \frac{1}{\sqrt{|x|}} & \mbox{if } -1\leq x<0 \mbox{ or } 0<x\leq 1 \\ 0 & \mbox{if } |x|>1 \end{cases}$ and choose any value you want for $x=0$.

4. ## Re: square of a function is not integrable Originally Posted by girdav Consider $f(x) =\begin{cases} \frac{1}{\sqrt{|x|}} & \mbox{if } -1\leq x<0 \mbox{ or } 0<x\leq 1 \\ 0 & \mbox{if } |x|>1 \end{cases}$ and choose any value you want for $x=0$. Sorry, I am slow. Why is it not integrable? I think ln(x)

5. ## Re: square of a function is not integrable The function $f$ is integrable but not its square. As you noticed, a primitive of $\frac 1x$ is $\ln x$, but if you compute $\int_{\varepsilon}^1\frac 1x\,dx$, you will find $-\ln \varepsilon$, and it has an infinite limit as $\varepsilon\to 0$.

6. ## Re: square of a function is not integrable Originally Posted by parklover Sorry, I am slow. Why is it not integrable? I think ln(x) The function $\frac{1}{\sqrt{|x|}}$ is not Riemann integrable on $[-1,0]$. However, as an improper integral, $\int_{ - 1}^0 {\frac{{dx}}{{\sqrt {\left| x \right|} }}}=2$. But $\int_{ - 1}^0 {\frac{{dx}}{{ {\left| x \right|} }}}$ does not exist even as an improper integral.
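The convergence claims in the last two replies can be illustrated numerically; a rough sketch (the midpoint-rule helper is mine, not from the thread), integrating $1/\sqrt{x}$ and its square $1/x$ over $[\varepsilon, 1]$ as $\varepsilon\to 0$:

```python
import math

def integral(f, eps, n=100000):
    """Crude midpoint rule for the integral of f over [eps, 1]."""
    h = (1.0 - eps) / n
    return sum(f(eps + (i + 0.5) * h) for i in range(n)) * h

for eps in (1e-2, 1e-4, 1e-6):
    i_f = integral(lambda x: 1.0 / math.sqrt(x), eps)   # integral of f
    i_f2 = integral(lambda x: 1.0 / x, eps)             # integral of f^2
    print(eps, round(i_f, 4), round(i_f2, 4))
# As eps shrinks, the first column approaches 2 - 2*sqrt(eps) -> 2 (so the
# improper integral of f exists), while the second grows like -ln(eps)
# without bound (so the improper integral of f^2 does not exist).
```

This mirrors reply #6 exactly: by symmetry, the same happens on $[-1, -\varepsilon]$.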
# Theory of Computing Blog Aggregator (The following blog post serves as an introduction to the following notes:) Black Holes, Hawking Radiation, and the Firewall There are many different types of “theoretical physicists.” There are theoretical astrophysicists, theoretical condensed matter physicists, and even theoretical biophysicists. However, the general public seems to be most interested in the exploits of what you might call “theoretical high energy theorists.” (Think Stephen Hawking.) The holy grail for theoretical high energy physicists (who represent only a small fraction of all physicists) would be to find a theory of quantum gravity. As it stands now, physicists have two theories of nature: quantum field theory (or, more specifically, the “Standard Model”) and Einstein’s theory of general relativity. Quantum field theory describes elementary particles, like electrons, photons, quarks, gluons, etc. General relativity describes the force of gravity, which is really just a consequence of the curvature of spacetime. Sometimes people like to say that quantum field theory describes “small stuff” like particles, while general relativity describes “big stuff” like planets and stars. This is maybe not the best way to think about it, though, because planets and stars are ultimately just made out of a bunch of quantum particles. Theoretical physicists are unhappy with having two theories of nature. In order to describe phenomena that depend on both quantum field theory and general relativity, the two theories must be combined in an “ad hoc” way. A so-called “theory of everything,” another name for the currently unknown theory of “quantum gravity,” would hypothetically be able to describe all the phenomena we know about. Just so we’re all on the same page: physicists don’t even have a fully worked out hypothesis. (“String theory,” a popular candidate, is still not even a complete “theory” in the normal sense of the word, although it could become one eventually.)
So what should these high energy theoretical physicists do if they want to discover what this theory of quantum gravity is? For the time being, nobody can think up an experiment that would be sensitive to quantum gravitational effects and is feasible with current technology. We are limited to so-called “thought experiments.” This brings us to Hawking radiation. In the 1970’s, Stephen Hawking considered what would happen to a black hole once quantum field theory was properly taken into account. (Of course, this involved a bit of “ad hoc” reasoning, as mentioned previously.) Hawking found that, much to everybody’s surprise, the black hole evaporated, releasing energy in the form of “Hawking radiation” (mostly low energy photons). More strangely, this radiation comes out exactly in the spectrum you would expect from something “hot.” For example, imagine heating a piece of metal. At low temperatures, it emits low energy photons invisible to the human eye. Once it gets hotter, it glows red, then yellow, then perhaps eventually blue. The spectrum of light emitted follows a very specific pattern. Amazingly, Hawking found that the radiation which black holes emit follows the exact same pattern. By analogy, they have a temperature too! This is more profound than you might realize. This is because things which have a temperature should also have an “entropy.” You see, there are two notions of “states” in physics: “microstates” and “macrostates.” A microstate gives you the complete physical information of what comprises a physical system. For example, imagine you have a box of gas, which contains many particles moving in a seemingly random manner. A “microstate” of this box would be a list of all the positions and momenta of every last particle in that box. This would be impossible to measure in practice. A “macrostate,” on the other hand, is a set of microstates.
You may not know the exact microstate your box of gas is in, but you can measure macroscopic quantities (like the total internal energy, volume, and particle number) and consider the set of all possible microstates with those measured quantities. The “entropy” of a macrostate is the logarithm of the number of possible microstates. If black holes truly are thermodynamic systems with some entropy, that means there should be some “hidden variables” or microstates to the black hole that we currently don’t understand. Perhaps if we understood the microstates of the black hole, we would be much closer to understanding quantum gravity! However, Hawking also discovered something else. Because the black hole is radiating out energy, its mass will actually decrease as time goes on. Eventually, it should disappear entirely. This means that the information of what went into the black hole will be lost forever. Physicists did not like this, however, because it seemed to them that the information of what went into the black hole should never be lost. Many physicists believe that the information of what went into the black hole should somehow be contained in the outgoing Hawking radiation, although they do not currently understand how. According to Hawking’s original calculation, the Hawking radiation depends only on a few parameters of the black hole (like its mass) and has nothing to do with the finer details of what went in, the exact “microstate” of what went in. However, physicists eventually realized a problem with the idea that the black hole releases its information in the form of outgoing Hawking radiation. The problem has to do with quantum mechanics. In quantum mechanics, it is impossible to clone a qubit. That means that if you threw a qubit into a black hole and then waited for it to eventually come out in the form of Hawking radiation, then the qubit could no longer be “inside” the black hole.
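The microstate/macrostate counting described above can be made concrete with a toy system that is not from the post: for $N$ coins, the macrostate "$k$ heads" has $\binom{N}{k}$ microstates, and its entropy is the logarithm of that count.

```python
from math import comb, log

# Toy macrostate/microstate count: N coins, macrostate = "k heads up".
# A microstate is the full list of which coins are heads; the entropy of
# a macrostate is the log of how many microstates realize it (k_B = 1).
N = 100
for k in (0, 10, 50):
    omega = comb(N, k)        # number of microstates with exactly k heads
    entropy = log(omega)      # Boltzmann entropy of the macrostate
    print(k, omega, round(entropy, 2))
```

The "all tails" macrostate ($k=0$) has a single microstate and hence zero entropy, while the balanced macrostate ($k=50$) has by far the most; this is exactly the sense in which a black hole's entropy counts its hidden microstates.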
However, if Einstein is to be believed, you should also be able to jump into the black hole and see the qubit on the inside. This seems to imply that the qubit is cloned, as it is present on both the inside and outside of the black hole. Physicists eventually came up with a strange fix called “Black Hole Complementarity” (BHC). According to BHC, for people outside the black hole the interior does not exist, while for people who have entered the black hole the outside ceases to exist. Both descriptions of the world are “correct” because once someone has entered the black hole, they will be unable to escape and compare notes with the person on the outside. Of course, it must be emphasized that BHC remains highly hypothetical. People have been trying to poke holes in it for a long time. The largest hole is the so-called “Firewall Paradox,” first proposed in 2012. Essentially, the Firewall Paradox tries to show that the paradigm of BHC is self-contradictory. In fact, it was able to use basic quantum mechanics to show that, under some reasonable assumptions, the interior of the black hole truly doesn’t exist, and that anyone who tries to enter would be fried at an extremely hot “firewall!” Now, I don’t think most physicists actually believe that black holes really have a firewall (although this might depend on what day of the week you ask them). The interesting thing about the Firewall Paradox is that it derives a seemingly crazy result from seemingly harmless starting suppositions. So these suppositions would have to be tweaked in the theory of quantum gravity in order to get rid of the firewall. This is all to say that all this thinking about black holes really might help physicists figure out something about quantum gravity. (Then again, who can really say for sure.) If you would like to know more about the Firewall Paradox, I suggest you read my notes, pasted at the top of this post!
The goal of the notes was to write an introduction to the Black Hole Information Paradox and Firewall Paradox that could be read by computer scientists with no physics background. The structure of the notes goes as follows:

1. Special relativity
2. General relativity
3. Quantum Field Theory (in which I ambitiously tell you what QFT actually is!)
4. Statistical Mechanics (this is the best part)

Because the information paradox touches on all areas of physics, I thought it was necessary to take a “zero background, infinite intelligence” approach, introducing all the necessary branches of physics (GR, QFT, Stat Mech) in order to understand what the deal with Hawking radiation really is, and why physicists think it is so important. I think it is safe to say that if you read these notes, you’ll learn a non-trivial amount of physics. by noahmiller5490 at January 20, 2019 06:00 AM UTC

### Resource-Aware Algorithms for Distributed Loop Closure Detection with Provable Performance Guarantees Authors: Yulun Tian, Kasra Khosoussi, Jonathan P. How Abstract: Inter-robot loop closure detection, e.g., for collaborative simultaneous localization and mapping (CSLAM), is a fundamental capability for many multirobot applications in GPS-denied regimes. In real-world scenarios, this is a resource-intensive process that involves exchanging observations and verifying potential matches. This poses severe challenges especially for small-size and low-cost robots with various operational and resource constraints that limit, e.g., energy consumption, communication bandwidth, and computation capacity. This paper presents resource-aware algorithms for distributed inter-robot loop closure detection. In particular, we seek to select a subset of potential inter-robot loop closures that maximizes a monotone submodular performance metric without exceeding computation and communication budgets.
We demonstrate that this problem is in general NP-hard, and present efficient approximation algorithms with provable performance guarantees. A convex relaxation scheme is used to certify near-optimal performance of the proposed framework in real and synthetic SLAM benchmarks.

### Tight Bounds on the Minimum Size of a Dynamic Monopoly Abstract: Assume that you are given a graph $G=(V,E)$ with an initial coloring, where each node is black or white. Then, in discrete-time rounds, all nodes simultaneously update their color following a predefined deterministic rule. This process is called two-way $r$-bootstrap percolation, for some integer $r$, if a node becomes black when it has at least $r$ black neighbors, and white otherwise. Similarly, in two-way $\alpha$-bootstrap percolation, for some $0<\alpha<1$, a node becomes black if at least an $\alpha$ fraction of its neighbors are black, and white otherwise. The two aforementioned processes are called respectively $r$-bootstrap and $\alpha$-bootstrap percolation if we require that a black node stays black forever. For each of these processes, we say a node set $D$ is a dynamic monopoly whenever the following holds: if all nodes in $D$ are black, then the graph eventually becomes fully black. We provide tight upper and lower bounds on the minimum size of a dynamic monopoly.

### Lower Bounds for Linear Decision Lists Abstract: We demonstrate a lower bound technique for linear decision lists, which are decision lists where the queries are arbitrary linear threshold functions. We use this technique to prove an explicit lower bound by showing that any linear decision list computing the function $MAJ \circ XOR$ requires size $2^{0.18 n}$. This completely answers an open question of Tur{\'a}n and Vatan [FoCM'97]. We also show that the spectral classes $PL_1, PL_\infty$, and the polynomial threshold function classes $\widehat{PT}_1, PT_1$, are incomparable to linear decision lists.
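The two-way $r$-bootstrap process in the dynamic-monopoly abstract is straightforward to simulate; a minimal sketch (graph and seed set chosen arbitrarily for illustration, not from the paper):

```python
def two_way_r_bootstrap(adj, black, r, rounds=100):
    """Two-way r-bootstrap percolation: in each synchronous round a node
    becomes black iff it has at least r black neighbors, white otherwise."""
    black = set(black)
    for _ in range(rounds):
        new_black = {v for v, nbrs in adj.items()
                     if sum(n in black for n in nbrs) >= r}
        if new_black == black:   # reached a fixed point
            break
        black = new_black
    return black

# 4-cycle 0-1-2-3-0: with r = 1, the adjacent pair {0, 1} is a dynamic
# monopoly (every node gains a black neighbor in round one).
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(two_way_r_bootstrap(cycle4, {0, 1}, r=1))  # -> {0, 1, 2, 3}
```

Note that because nodes can also turn white, not every seed spreads: on the same cycle, the non-adjacent pair {0, 2} just oscillates with {1, 3} forever.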
### Supportive Oracles for Parameterized Polynomial-Time Sub-Linear-Space Computations in Relation to L, NL, and P Authors: Tomoyuki Yamakami Abstract: We focus our attention on polynomial-time sub-linear-space computation for decision problems, which are parameterized by size parameters $m(x)$, where the informal term "sub-linear" means a function of the form $m(x)^{\varepsilon}\cdot polylog(|x|)$ on input instances $x$ for a certain absolute constant $\varepsilon\in(0,1)$ and a certain polylogarithmic function $polylog(n)$. The parameterized complexity class PsubLIN consists of all parameterized decision problems solvable simultaneously in polynomial time using sub-linear space. This complexity class is associated with the linear space hypothesis. There are no known inclusion relationships between PsubLIN and para-NL (nondeterministic log-space class), where the prefix "para-" indicates the natural parameterization of a given complexity class. Toward circumstantial evidence for the inclusions and separations of the associated complexity classes, we seek their relativizations. However, the standard relativization of Turing machines is known to violate the relationships L$\subseteq$NL=co-NL$\subseteq$DSPACE[O($\log^2{n}$)]$\cap$P. We instead consider special oracles, called NL-supportive oracles, which guarantee these relationships in the corresponding relativized worlds. This paper vigorously constructs such NL-supportive oracles that generate relativized worlds where, for example, para-L$\neq$para-NL$\nsubseteq$PsubLIN and para-L$\neq$para-NL$\subseteq$PsubLIN.

### Boolean matrix factorization meets consecutive ones property Authors: Nikolaj Tatti, Pauli Miettinen Abstract: Boolean matrix factorization is a natural and popular technique for summarizing binary matrices. In this paper, we study a problem of Boolean matrix factorization where we additionally require that the factor matrices have the consecutive ones property (OBMF).
A major application of this optimization problem comes from graph visualization: standard techniques for visualizing graphs are circular or linear layouts, where nodes are ordered in a circle or on a line. A common problem with visualizing graphs is clutter due to too many edges. The standard approach to deal with this is to bundle edges together and represent them as ribbons. We also show that we can use OBMF for edge bundling combined with circular or linear layout techniques. We demonstrate that not only is this problem NP-hard, but also that we cannot have a polynomial-time algorithm that yields a multiplicative approximation guarantee (unless P = NP). On the positive side, we develop a greedy algorithm where at each step we look for the best rank-1 factorization. Since even obtaining a rank-1 factorization is NP-hard, we propose an iterative algorithm where we fix one side and find the other, reverse the roles, and repeat. We show that this step can be done in linear time using pq-trees. We also extend the problem to the cyclic ones property and to symmetric factorizations. Our experiments show that our algorithms find high-quality factorizations and scale well.

### Morphological Simplification of Archaeological Fracture Surfaces Authors: Hanan ElNaghy, Leo Dorst Abstract: We propose to employ scale spaces of mathematical morphology to hierarchically simplify fracture surfaces of complementarily fitting archaeological fragments. This representation preserves contact and is insensitive to different kinds of abrasion affecting the exact complementarity of the original fragments. We present a pipeline for morphologically simplifying fracture surfaces, based on their Lipschitz nature; its core is a new embedding of fracture surfaces to simultaneously compute both closing and opening morphological operations, using distance transforms.

### Generating Pareto records Authors: James Allen Fill, Daniel Q.
Naiman Abstract: We present, (partially) analyze, and apply an efficient algorithm for the simulation of multivariate Pareto records. A key role is played by minima of the record-setting region (we call these generators) each time a new record is generated, and two highlights of our work are (i) efficient dynamic maintenance of the set of generators and (ii) asymptotic analysis of the expected number of generators at each time.

### The Pareto Record Frontier Authors: James Allen Fill, Daniel Q. Naiman Abstract: For iid $d$-dimensional observations $X^{(1)}, X^{(2)}, \ldots$ with independent Exponential$(1)$ coordinates, consider the boundary (relative to the closed positive orthant), or "frontier", $F_n$ of the closed Pareto record-setting (RS) region $\mbox{RS}_n := \{0 \leq x \in {\mathbb R}^d : x \not\prec X^{(i)} \mbox{ for all } 1 \leq i \leq n\}$ at time $n$, where $0 \leq x$ means that $0 \leq x_j$ for $1 \leq j \leq d$ and $x \prec y$ means that $x_j < y_j$ for $1 \leq j \leq d$. With $x_+ := \sum_{j = 1}^d x_j$, let $F_n^- := \min\{x_+ : x \in F_n\} \quad \mbox{and} \quad F_n^+ := \max\{x_+ : x \in F_n\},$ and define the width of $F_n$ as $W_n := F_n^+ - F_n^-.$ We describe typical and almost sure behavior of the processes $F^+$, $F^-$, and $W$. In particular, we show that $F^+_n \sim \ln n \sim F^-_n$ almost surely and that $W_n / \ln \ln n$ converges in probability to $d - 1$; and for $d \geq 2$ we show that, almost surely, the set of limit points of the sequence $W_n / \ln \ln n$ is the interval $[d - 1, d]$. We also obtain modifications of our results that are important in connection with efficient simulation of Pareto records. Let $T_m$ denote the time that the $m$th record is set. We show that $\tilde{F}^+_m \sim (d!
m)^{1/d} \sim \tilde{F}^-_m$ almost surely and that $W_{T_m} / \ln m$ converges in probability to $1 - d^{-1}$; and for $d \geq 2$ we show that, almost surely, the sequence $W_{T_m} / \ln m$ has $\liminf$ equal to $1 - d^{-1}$ and $\limsup$ equal to $1$.

### An analysis of the Geodesic Distance and other comparative metrics for tree-like structures Authors: Bernardo Lopo Tavares Abstract: Graphs are interesting structures: extremely useful to depict real-life problems, extremely easy to understand given a sketch, extremely complicated to represent formally, extremely complicated to compare. Phylogeny is the study of the relations between biological entities. From it, the interest in comparing tree graphs grew more than in other fields of science. Since there is no definitive way to compare them, multiple distances have been formalized over the years since the early sixties, when the first effective numerical method to compare dendrograms was described. This work consists of formalizing, completing (with original work), and giving a universal notation to analyze and compare the discriminatory power and time complexity of computing the thirteen metrics formalized here. We also present a new way to represent tree graphs, reach deeper into the details of the Geodesic Distance, and discuss its worst-case time complexity in a suggested implementation. Our contribution ends up as a clean, valuable resource for anyone looking for an introduction to comparative metrics for tree graphs.

### Postdoc at UT Austin (apply by February 5, 2019) from CCI: jobs UT Austin invites applications for a Postdoctoral Fellow in theoretical computer science for the 2019-20 academic year to work with David Zuckerman. Research interests should overlap with his: pseudorandomness, computational complexity, coding theory, and more. Applications will be considered until the position is filled, but review of applicants will begin on February 5.
Website: https://utaustin.wd1.myworkdayjobs.com/en-US/UTstaff/job/UT-MAIN-CAMPUS/Postdoctoral-Fellow_R_00001518 Email: diz@utexas.edu by shacharlovett at January 18, 2019 04:52 PM UTC from Theory Matters From the SIGACT executive committee: The deadlines to submit nominations for the Gödel Prize, Knuth Prize, and SIGACT Distinguished Service Award are coming soon. Calls for nominations for all three awards can be found at the links below. Note that March 1 is now the permanent deadline for SIGACT Distinguished Service Award nominations, this year and in future years. • Gödel Prize: deadline February 15, 2019 • Knuth Prize: deadline February 15, 2019 • SIGACT Distinguished Service Award: deadline March 1, every year (including 2019) Those who intend to submit a nomination for the Distinguished Service Award are strongly encouraged to inform the Selection Committee Chair at least two weeks in advance. by shuchic at January 18, 2019 02:37 PM UTC

### Upcoming Nomination Deadlines: Gödel, Knuth, and SIGACT Distinguished Service Award by robertkleinberg at January 18, 2019 03:35 AM UTC

### Orientations of infinite graphs from David Eppstein An orientation of an undirected graph is the directed graph that you get by assigning a direction to each edge. Several kinds of orientations have been studied.
For instance, in a graph with even vertex degrees, an Eulerian orientation makes the numbers of incoming and outgoing edges equal at each vertex. In a bridgeless graph, a strong orientation makes the resulting directed graph strongly connected. In finite connected graphs, every Eulerian orientation is strong, but that is untrue in infinite graphs. Consider, for instance, the graph of unit distances on the integers, which is Eulerian (every vertex has degree two) but has no strong orientation (every edge is a bridge). Even when a strong orientation exists, an Eulerian orientation might not be strong: the graph of distance-1 and distance-2 integers, with the orientation from smaller to larger numbers, is Eulerian but not strong. So when does an Eulerian strong orientation exist? The answer turns out to be: whenever the obvious obstacles are not present. Every bridgeless connected even-degree infinite graph has an Eulerian strong orientation. To prove this, we can use a convenient tool for dealing with infinite orientations by looking only at finite graphs, a result of Rado from 1949 that is simultaneously a predecessor and generalization of the De Bruijn–Erdős theorem on graph colorings. Suppose each element of some infinite set has a finite set of labels, and we choose an assignment of labels for each finite subset of that set. These choices may be inconsistent with each other, so there may be no way of labeling the whole set consistently with all of the choices. But Rado proved (assuming the axiom of choice) that there exists a global labeling that, on every finite subset, is consistent with the assignment to one of its finite supersets.
Another way of thinking about this is that if certain finite patterns must be avoided, and every finite subset has a labeling that avoids them, then some global labeling will also avoid them [1]. The De Bruijn–Erdős theorem is the case where the elements are vertices, the labels are colors, and the patterns to be avoided are pairs of equal color on adjacent vertices. In our case the set elements will be edges, the labels will be which way to orient each edge, and the choice of assignment will be some way of defining a “good” orientation for finite subgraphs. So suppose we’re given a finite subgraph of an infinite 2-edge-connected even-degree graph. What kind of orientation on this subgraph should we look for? A complication is that it might have bridges or odd-degree vertices. So we’ll try to come as close as we can to what we want, an Eulerian strong orientation, while taking those deficiencies into account. Let’s define a good orientation of the subgraph to be an orientation that is almost Eulerian, in that at each vertex the in-degree and out-degree differ by at most one, and almost strong, in that each edge of the subgraph that belongs to an undirected cycle also belongs to a directed cycle. Another way of stating the almost strong condition is that the strongly connected components of the oriented graph should coincide with the 2-edge-connected components, or blocks, of the undirected graph. The existence of a good orientation of an arbitrary finite undirected graph (also having some other stronger properties) was proven in the 1960s by Nash-Williams [2]. But now Rado’s method proves that every infinite graph also has a good orientation. If we find a global orientation that’s almost Eulerian on a supergraph of the star of neighbors of every vertex, then it must be almost Eulerian. And if it’s almost strong on a supergraph of every cycle, then it must be almost strong.
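As an aside, for a finite connected graph with all degrees even, an Eulerian orientation of the kind discussed here can be computed by walking an Euler tour and orienting each edge the way the tour traverses it; a sketch of Hierholzer's algorithm (my illustration, not from the post):

```python
def eulerian_orientation(edges):
    """Orient each edge of a finite connected even-degree multigraph along
    an Euler tour, so that in-degree = out-degree at every vertex."""
    from collections import defaultdict
    adj = defaultdict(list)                  # vertex -> list of (neighbor, edge id)
    for eid, (u, v) in enumerate(edges):
        adj[u].append((v, eid))
        adj[v].append((u, eid))
    used = [False] * len(edges)
    stack, tour = [edges[0][0]], []
    while stack:                             # Hierholzer: walk until stuck, backtrack
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()                     # discard edges already traversed
        if adj[v]:
            w, eid = adj[v].pop()
            used[eid] = True
            stack.append(w)
        else:
            tour.append(stack.pop())
    tour.reverse()
    # consecutive tour vertices give the orientation of each edge
    return list(zip(tour, tour[1:]))

# A 4-cycle: the result is one of the two directed 4-cycles.
print(eulerian_orientation([(0, 1), (1, 2), (2, 3), (3, 0)]))
```

Since the tour enters and leaves every vertex equally often, the resulting orientation is Eulerian, and (for a connected graph) it is also strong, matching the finite case described in the post.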
This proves the theorem that a bridgeless connected even-degree infinite graph has an Eulerian strong orientation, because every good orientation in these graphs must be Eulerian and strong. Nash-Williams already considered the infinite case of his orientation theorem, but wrote that the details were too heavy to include. Perhaps he didn’t know about Rado’s theorem (despite it being published in the same journal), which makes the extension from finite to infinite easy. The same method of extending finite to infinite orientations can also be used to study notions of graph sparseness including arboricity, degeneracy, and pseudoarboricity. Historically, the pseudoarboricity of infinite graphs came first. A graph has pseudoarboricity at most $k$ if it can be oriented so that each vertex has out-degree at most $k$ (or equivalently if its edges can be partitioned into $k$ subgraphs each with out-degree one). In the 1950 paper in which he first announced the De Bruijn–Erdős theorem, Erdős used it to prove that, when such an orientation is given, the resulting graph can be $(2k+1)$-colored [3]. But he didn’t write about the conditions under which such an orientation exists. Rado’s theorem shows that they are the same as in the finite case: an out-degree-$k$ orientation exists if and only if every finite $n$-vertex subgraph has at most $kn$ edges. A graph has degeneracy at most $k$ if it has an acyclic orientation in which each vertex has out-degree at most $k$. Unlike in the finite case, infinite graphs with low degeneracy might have high minimum degree; for instance, there exist graphs with degeneracy one in which all vertices have infinite degree. A finite $k$-degenerate graph can be $(k+1)$-colored greedily, and the De Bruijn–Erdős theorem shows that even in the infinite case such a coloring exists. Rado’s theorem shows that infinite graphs with pseudoarboricity $k$ are $2k$-degenerate, and that a graph is $k$-degenerate if and only if every finite subgraph has a vertex with degree at most $k$.
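The finite criterion just stated (every subgraph has a vertex of small degree) also yields the standard greedy way to compute the degeneracy of a finite graph: repeatedly delete a minimum-degree vertex. A sketch of this, as my illustration rather than anything from the post:

```python
def degeneracy(adj):
    """Degeneracy of a finite graph given as {vertex: neighbors}:
    repeatedly delete a minimum-degree vertex; the answer is the
    largest degree seen at deletion time."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # mutable copy
    best = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # minimum-degree vertex
        best = max(best, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)                         # delete v from the graph
        del adj[v]
    return best

# A star (a tree) is 1-degenerate; the complete graph K4 is 3-degenerate.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
print(degeneracy(star), degeneracy(k4))  # -> 1 3
```

Reversing the deletion order gives the acyclic out-degree-$k$ orientation used in the post's definition: orient each edge from the earlier-deleted endpoint to the later one.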
As in the case of Eulerian strong orientations, this involves checking the orientation only on two kinds of finite subgraphs, stars and cycles. Arboricity is based on forests, and there are multiple incompatible definitions of infinite forests. But the one we want to use is that a forest is a 1-degenerate graph. A graph has arboricity $k$ if its edges can be partitioned into $k$ forests. This is clearly at least as large as the pseudoarboricity (no Rado needed). Rado’s theorem shows that infinite graphs with arboricity $k$ are $(2k-1)$-degenerate, and that a graph has arboricity at most $k$ if and only if every finite $n$-vertex subgraph has at most $k(n-1)$ edges.

1. Rado, R. (1949), “Axiomatic treatment of rank in infinite sets”, Canad. J. Math. 1: 337–343.
2. Nash-Williams, C. St. J. A. (1960), “On orientations, connectivity and odd-vertex-pairings in finite graphs”, Canad. J. Math., 12: 555–567; —— (1969), “Well-balanced orientations of finite graphs and unobtrusive odd-vertex-pairings”, Recent Progress in Combinatorics (Proc. Third Waterloo Conf. on Combinatorics, 1968), New York: Academic Press, pp. 133–149.
3. Erdős, P. (1950), “Some remarks on set theory”, Proc. AMS, 1: 127–141.

by David Eppstein at January 17, 2019 08:39 PM UTC

### The Cost of Privacy [Image: billboard at the 2019 CES] Computer scientists tend to obsess about privacy and we've had a privacy/security debate for decades now. But now machine learning has really given us a whole new spin on what privacy protects and takes away. I take an open approach and basically allow Google to know everything about my life. Google knows where I've been--sometimes my Pixel asks me which store in a shopping center I visited and I give up that info. Google knows who I communicate with, what websites I visit, what music and movies I listen to and watch, all my photos, what temperature makes me comfortable and so on. What do I get? A Google ecosystem that knows me sometimes better than I know myself. Google works best when it learns and integrates.
I get asked to download maps for trips Google knows I'm about to take. I have Google Assistant throughout my house, in my phone, in my car, and it tailors answers and sometimes even the questions that I need answers to. If anything I wish there was further integration, like Google Voice should ring my office phone only when I'm in the office. Georgia Tech now forces us to use Microsoft Exchange for email. Outlook is not a bad email program but its capabilities, especially for search, do not work as well, and think of all that unused knowledge. I trust Google to keep my information safe, with a random password and 2-factor authentication, and even if someone did manage to break in they would find I'm a pretty boring person with an unhealthy obsession with opera (the musical form, not the browser). Doesn't work for everyone and companies should make it easy to keep your info secure. But I say go use your machine learning on me and find ways to make my life easier and more fun, and sure send me some targeted ads as payment. The Internets will find a way to discover you anyway, might as well take advantage. by Lance Fortnow (noreply@blogger.com) at January 17, 2019 05:45 PM UTC

### TR19-007 | Lower Bounds for Linear Decision Lists | Arkadev Chattopadhyay, Meena Mahajan, Nikhil Mande, Nitin Saurabh from ECCC papers
### DIMEA Days 2019 from CS Theory Events

June 12-13, 2019 Brno, Czech Republic https://www.fi.muni.cz/research/dimea/days19.html Registration deadline: March 31, 2019

by shacharlovett at January 17, 2019 10:51 AM UTC

### Gdańsk Summer School of Advanced Science on Algorithms for Discrete Optimization from CS Theory Events

July 6-12, 2019 Gdansk, Poland https://eti.pg.edu.pl/advanced-science-on-algorithms/advanced-science

The school aims at providing an opportunity for graduate and undergraduate students as well as young researchers to get together and attend advanced courses and talks on current topics in the field of algorithms and data structures for discrete optimization problems. During 8 days of the school, 6 advanced … Continue reading Gdańsk Summer School of Advanced Science on Algorithms for Discrete Optimization

by shacharlovett at January 17, 2019 10:51 AM UTC

### Nice summer school on random walks and complex networks from CS Theory Events

July 8-19, 2019 Nice, France https://math.unice.fr/~dmitsche/Summerschool/Summerschool.html Registration deadline: May 1, 2019

This 2-week summer school aims to explain concepts on random walks and random models for complex networks at a level suitable for second year master students and PhD students in mathematics and theoretical computer science.

by shacharlovett at January 17, 2019 10:51 AM UTC

### TR19-006 | Upper Bounds on Communication in terms of Approximate Rank | Anna Gal, Ridwan Syed from ECCC papers

We show that any Boolean function with approximate rank $r$ can be computed by bounded error quantum protocols without prior entanglement of complexity $O( \sqrt{r} \log r)$. In addition, we show that any Boolean function with approximate rank $r$ and discrepancy $\delta$ can be computed by deterministic protocols of complexity $O(r)$, and private coin bounded error randomized protocols of complexity $O((\frac{1}{\delta})^2 + \log r)$. Our results yield lower bounds on approximate rank.
We also obtain a strengthening of Newman's theorem with respect to approximate rank.

### TR19-005 | An Exponential Lower Bound on the Sub-Packetization of MSR Codes | Omar Alrabiah, Venkatesan Guruswami from ECCC papers

An $(n,k,\ell)$-vector MDS code is an $\mathbb{F}$-linear subspace of $(\mathbb{F}^\ell)^n$ (for some field $\mathbb{F}$) of dimension $k\ell$, such that any $k$ (vector) symbols of the codeword suffice to determine the remaining $r=n-k$ (vector) symbols. The length $\ell$ of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading $\ell/r$ field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization of at least $r^{k/r}$. Our main result is an almost tight lower bound showing that for an MSR code, one must have $\ell \ge \exp(\Omega(k/r))$. Previously, a lower bound of $\approx \exp(\sqrt{k/r})$, and a tight lower bound for a restricted class of optimal access MSR codes, were known. Our work settles a key question concerning MSR codes that has received much attention, with a short proof hinging on one key definition that is somewhat inspired by Galois theory.

### Motwani Postdoctoral Fellowship at Stanford Computer Science (apply by December 21, 2019) from CCI: jobs

The theory group at Stanford invites applications for the Motwani postdoctoral fellowship in theoretical computer science. Information and application instructions below. Applications will be accepted until the positions are filled, but review of applicants will begin after Jan 21.
Email: theory.stanford@gmail.com

by shacharlovett at January 16, 2019 06:32 PM UTC

### 9th PhD Summer School in Discrete Mathematics from CS Theory Events

June 30 – July 6, 2019 Rogla, Slovenia https://conferences.famnit.upr.si/event/12/

The summer school will offer two mini-courses: “Combinatorial limits and their applications in extremal combinatorics” and “Coxeter groups”, as well as invited talks and student talks

by shacharlovett at January 15, 2019 11:09 PM UTC

### 5th Algorithmic and Enumerative Combinatorics Summer School 2019 from CS Theory Events

July 29 – August 2, 2019 Hagenberg, Austria https://www3.risc.jku.at/conferences/aec2019/index.html

In the spirit of the AEC2014, AEC2015, AEC2016 and AEC2018, the goal of this summer school is to put forward the interplay between the fields of Enumerative Combinatorics, Analytic Combinatorics, and Algorithmics. This is a very active research area, which, aside from the three fields fueling … Continue reading 5th Algorithmic and Enumerative Combinatorics Summer School 2019

by shacharlovett at January 15, 2019 11:09 PM UTC

### Spring School and Workshop on Polytopes from CS Theory Events

March 11-15, 2019 Bochum, Germany https://www.rub.de/polytopes2019/ Registration deadline: February 11, 2019

The Spring School is designed for advanced bachelor, master and PhD students. The aim is to prepare the participants in such a way that they can follow the lectures of the subsequent workshop. It consists of three crash courses: Geometry of Polytopes, Monday (2x2h) … Continue reading Spring School and Workshop on Polytopes

by shacharlovett at January 15, 2019 11:08 PM UTC

from David Eppstein

For the new year, I’ve decided to try to get back into taking photos more frequently, and to make it lower-overhead I’m making individual Mastodon posts for some of them rather than writing a longer blog post for every batch of photos. So that’s why you see a couple of those images inline here.
by David Eppstein at January 15, 2019 10:07 PM UTC

### The Winding Road to Quantum Supremacy from Scott Aaronson

Greetings from QIP’2019 in Boulder, Colorado! Obvious highlights of the conference include Urmila Mahadev’s opening plenary talk on her verification protocol for quantum computation (which I blogged about here), and Avishay Tal’s upcoming plenary on his and Ran Raz’s oracle separation between BQP and PH (which I blogged about here). If you care, here are the slides for the talk I just gave, on the paper “Online Learning of Quantum States” by me, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak. Feel free to ask in the comments about what else is going on.

I returned a few days ago from my whirlwind Australia tour, which included Melbourne and Sydney; a Persian wedding that happened to be held next to a pirate ship (the Steve Irwin, used to harass whalers and adorned with a huge Jolly Roger); meetings and lectures graciously arranged by friends at UTS; a quantum computing lab tour personally conducted by 2018 “Australian of the Year” Michelle Simmons; three meetups with readers of this blog (or more often, readers of the other Scott A’s blog who graciously settled for the discount Scott A); and an excursion to Grampians National Park to see wild kangaroos, wallabies, koalas, and emus.

But the thing that happened in Australia that provided the actual occasion for this post is this: I was interviewed by Adam Ford in Carlton Gardens in Melbourne, about quantum supremacy, AI risk, Integrated Information Theory, whether the universe is discrete or continuous, and to be honest I don’t remember what else. You can watch the first segment, the one about the prospects for quantum supremacy, here on YouTube. My only complaint is that Adam’s video camera somehow made me look like an out-of-shape slob who needs to hit the gym or something.

Update (Jan.
16): Adam has now posted a second video on YouTube, wherein I talk about my “Ghost in the Quantum Turing Machine” paper, my critique of Integrated Information Theory, and more. And now Adam has posted yet a third segment, in which I talk about small, lighthearted things like existential threats to civilization and the prospects for superintelligent AI. And a fourth, in which I talk about whether reality is discrete or continuous.

Related to the “free will / consciousness” segment of the interview: the biologist Jerry Coyne, whose blog “Why Evolution Is True” I’ve intermittently enjoyed over the years, yesterday announced my existence to his readers, with a post that mostly criticizes my views about free will and predictability, as I expressed them years ago in a clip that’s on YouTube (at the time, Coyne hadn’t seen GIQTM or my other writings on the subject). Coyne also took the opportunity to poke fun at this weird character he just came across whose “life is devoted to computing” and who even mistakes tips for change at airport smoothie stands. Some friends here at QIP had a good laugh over the fact that, for the world beyond theoretical computer science and quantum information, this is what 23 years of research, teaching, and writing apparently boil down to: an 8.5-minute video clip where I spouted about free will, and also my having been arrested once in a comic mix-up at Philadelphia airport. Anyway, since then I had a very pleasant email exchange with Coyne—someone with whom I find myself in agreement much more often than not, and who I’d love to have an extended conversation with sometime despite the odd way our interaction started.

by Scott at January 15, 2019 06:51 PM UTC

### do we ever only care about the decision problem?

I know of only one case of that. (I had been thinking of this for a post; then Lance's post on search versus decision inspired me to write up these thoughts.)
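The search-to-decision equivalence at the heart of this question can be made concrete. Below is a minimal Python sketch of the standard self-reduction, using $k$-clique as the running example; the brute-force oracle, the dictionary-of-sets graph encoding, and all the names here are my own illustrative choices, not anything from the post:

```python
from itertools import combinations

def has_clique(graph, k):
    """Decision oracle: does `graph` (dict mapping vertex -> set of
    neighbors) contain a k-clique? Brute force here; in the reduction
    it is treated as a black box."""
    nodes = list(graph)
    return any(all(v in graph[u] for u, v in combinations(c, 2))
               for c in combinations(nodes, k))

def find_clique(graph, k):
    """Search via decision: try deleting each vertex in turn; if a
    k-clique still survives, the vertex was unnecessary and stays
    deleted. What remains at the end is exactly a k-clique."""
    if not has_clique(graph, k):
        return None
    g = {u: set(vs) for u, vs in graph.items()}
    for v in list(g):
        trial = {u: vs - {v} for u, vs in g.items() if u != v}
        if has_clique(trial, k):
            g = trial
    return set(g)
```

Beyond the initial check, `find_clique` makes only one oracle call per vertex, which is the sense in which search is no harder than decision up to polynomial factors.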
When teaching NP-completeness we often say: the problem we really care about is, for example, given a weighted graph and two vertices s and t, find the optimal way to go from s to t while hitting every node. But it's cleaner mathematically to look at the decision problem:

{ (G,s,t,C) : there is a Ham Path from s to t that costs \le C }

The search and decision problems are poly-time equivalent, so it's fine to just look at the decision. Indeed, if our interest is in lower bounds, then clearly if Decision is hard then Find is hard. But here are some questions about search vs decision in general, not just with regard to P vs NP.

1) Is there ever a case where the real world actually cares about the decision version? I can think of just one: given a number, is it PRIME? This is used in crypto. The real world does not need the witness that it's prime (or similar). They just want a prime. Any other cases?

2) How far apart can search and decision be? For NP, decision and search are poly-time equivalent. In other domains can they be very far apart? For example, does FINDING a k-clique or k-independent set in a graph on 2^{2k} vertices require roughly n^k steps (go through all k-sets), or can we do much better? I suspect this is unknown but would be delighted if a commenter tells me otherwise.

by GASARCH (noreply@blogger.com) at January 15, 2019 04:21 PM UTC

### Glauber's dynamics from bit-player

Roy J. Glauber, Harvard physics professor for 65 years, longtime Keeper of the Broom at the annual Ig Nobel ceremony, and winner of a non-Ig Nobel, has died at age 93. Glauber is known for his work in quantum optics; roughly speaking, he developed a mathematical theory of the laser at about the same time that device was invented, circa 1960. His two main papers on the subject, published in Physical Review in 1963, did not meet with instant acclaim; the Nobel committee's recognition of their worth came more than 40 years later, in 2005.
A third paper from 1963, titled “Time-dependent statistics of the Ising model,” also had a delayed impact. It is the basis of a modeling algorithm now called Glauber dynamics, which is well known in the cloistered community of statistical mechanics but deserves wider recognition. Before digging into the dynamics, however, let us pause for a few words about the man himself, drawn largely from the obituaries in the New York Times and the Harvard Crimson.

Glauber was a member of the first class to graduate from the Bronx High School of Science, in 1941. From there he went to Harvard, but left in his sophomore year, at age 18, to work in the theory division at Los Alamos, where he helped calculate the critical mass of fissile material needed for a bomb. After the war he finished his degree at Harvard and went on to complete a PhD under Julian Schwinger. After a few brief adventures in Princeton and Pasadena, he was back at Harvard in 1952 and never left. A poignant aspect of his life is mentioned briefly in a 2009 interview, where Glauber discusses the challenge of sustaining an academic career while raising two children as a single parent.

Here’s a glimpse of Glauber dynamics in action. Click the Go button, then try fiddling with the slider.

[Interactive simulation in the original post, with a Go button and a temperature slider.]

In the computer program that drives this animation, the slider controls a variable representing temperature. At high temperature (slide the control all the way to the right), you’ll see a roiling, seething mass of colored squares, switching rapidly and randomly between light and dark shades. There are no large-scale or long-lived structures.

Occasionally the end point is not a monochromatic field. Instead the panel is divided into broad stripes—horizontal, vertical, or diagonal. This is an artifact of the finite size of the lattice and the use of wraparound boundary conditions.
On an infinite lattice, the stripes would not occur. At low temperature (slide to the left), the tableau congeals into a few writhing blobs of contrasting color. Then the minority blobs are likely to evaporate, and you’ll be left with an unchanging, monochromatic panel. Between these extremes there’s some interesting behavior. Adjust the slider to a temperature near 2.27 and you can expect to see persistent fluctuations at all possible scales, from isolated individual blocks to patterns that span the entire array.

What we’re looking at here is a simulation of a model of a ferromagnet—the kind of magnet that sticks to the refrigerator. The model was introduced almost 100 years ago by Wilhelm Lenz and his student Ernst Ising. They were trying to understand the thermal behavior of ferromagnetic materials such as iron. If you heat a block of magnetized iron above a certain temperature, called the Curie point, it loses all traces of magnetization. Slow cooling below the Curie point allows it to spontaneously magnetize again, perhaps with the poles in a different orientation. The onset of ferromagnetism at the Curie point is an abrupt phase transition.

Lenz and Ising created a stripped-down model of a ferromagnet. In the two-dimensional version shown here, each of the small squares represents the spin vector of an unpaired electron in an iron atom. The vector can point in either of two directions, conventionally called up and down, which for graphic convenience are represented by two contrasting colors. There are $100 \times 100 = 10{,}000$ spins in the array. This would be a minute sample of a real ferromagnet. On the other hand, the system has $2^{10{,}000}$ possible states—quite an enormous number.

The essence of ferromagnetism is that adjacent spins “prefer” to point in the same direction. To put that more formally: The energy of neighboring spins is lower when they are parallel, rather than antiparallel.
For the array as a whole, the energy is minimized if all the spins point the same way, either up or down. Each spin contributes a tiny magnetic moment. When the spins are parallel, all the moments add up and the system is fully magnetized. If energy were the only consideration, the Ising model would always settle into a magnetized configuration, but there is a countervailing influence: Heat tends to randomize the spin directions. At infinite temperature, thermal fluctuations completely overwhelm the spins’ tendency to align, and all states are equally likely. Because the vast majority of those $2^{10{,}000}$ configurations have nearly equal numbers of up and down spins, the magnetization is negligible. At zero temperature, nothing prevents the system from condensing into the fully magnetized state. The interval between these limits is a battleground where energy and entropy contend for supremacy. Clearly, there must be a transition of some kind. For Lenz and Ising in the 1920s, the crucial question was whether the transition comes at a sharply defined critical temperature, as it does in real ferromagnets. A more gradual progression from one regime to the other would signal the model’s failure to capture important aspects of ferromagnet physics. In his doctoral dissertation Ising investigated the one-dimensional version of the model—a chain or ring of spins, each one holding hands with its two nearest neighbors. The result was a disappointment: He found no abrupt phase transition. And he speculated that the negative result would also hold in higher dimensions. The Ising model seemed to be dead on arrival. It was revived a decade later by Rudolf Peierls, who gave suggestive evidence for a sharp transition in the two-dimensional lattice. Then in 1944 Lars Onsager “solved” the two-dimensional model, showing that the phase transition does exist. 
The phase diagram looks like this: As the system cools, the salt-and-pepper chaos of infinite temperature evolves into a structure with larger blobs of color, but the up and down spins remain balanced on average (implying zero magnetization) down to the critical temperature $T_C$. At that point there is a sudden bifurcation, and the system will follow one branch or the other to full magnetization at zero temperature. If a model is classified as solved, is there anything more to say about it? In this case, I believe the answer is yes. The solution to the two-dimensional Ising model gives us a prescription for calculating the probability of seeing any given configuration at any given temperature. That’s a major accomplishment, and yet it leaves much of the model’s behavior unspecified. The solution defines the probability distribution at equilibrium—after the system has had time to settle into a statistically stable configuration. It doesn’t tell us anything about how the lattice of spins reaches that equilibrium when it starts from an arbitrary initial state, or how the system evolves when the temperature changes rapidly. It’s not just the solution to the model that has a few vague spots. When you look at the finer details of how spins interact, the model itself leaves much to the imagination. When a spin reacts to the influence of its nearest neighbors, and those neighbors are also reacting to one another, does everything happen all at once? Suppose two antiparallel spins both decide to flip at the same time; they will be left in a configuration that is still antiparallel. It’s hard to see how they’ll escape repeating the same dance over and over, like people who meet head-on in a corridor and keep making mirror-image evasive maneuvers. This kind of standoff can be avoided if the spins act sequentially rather than simultaneously. But if they take turns, how do they decide who goes first? 
Within the intellectual traditions of physics and mathematics, these questions can be dismissed as foolish or misguided. After all, when we look at the procession of the planets orbiting the sun, or at the colliding molecules in a gas, we don’t ask who takes the first step; the bodies are all in continuous and simultaneous motion. Newton gave us a tool, calculus, for understanding such situations. If you make the steps small enough, you don’t have to worry so much about the sequence of marching orders. However, if you want to write a computer program simulating a ferromagnet (or simulating planetary motions, for that matter), questions of sequence and synchrony cannot be swept aside. With conventional computer hardware, “let everything happen at once” is not an option. The program must consider each spin, one at a time, survey the surrounding neighborhood, apply an update rule that’s based on both the state of the neighbors and the temperature, and then decide whether or not to flip. Thus the program must choose a sequence in which to visit the lattice sites, as well as a sequence in which to visit the neighbors of each site, and those choices can make a difference in the outcome of the simulation. So can other details of implementation. Do we look at all the sites, calculate their new spin states, and then update all those that need to be flipped? Or do we update each spin as we go along, so that spins later in the sequence will see an array already modified by earlier actions? The original definition of the Ising model is silent on such matters, but the programmer must make a commitment one way or another. This is where Glauber dynamics enters the story. Glauber presented a version of the Ising model that’s somewhat more explicit about how spins interact with one another and with the “heat bath” that represents the influence of temperature. 
It’s a theory of Ising dynamics because he describes the spin system not just at equilibrium but also during transitional stages. I don’t know if Glauber was the first to offer an account of Ising dynamics, but the notion was certainly not commonplace in 1963. There’s no evidence Glauber was thinking of his method as an algorithm suitable for computer implementation. The subject of simulation doesn’t come up in his 1963 paper, where his primary aim is to find analytic expressions for the distribution of up and down spins as a function of time. (He did this only for the one-dimensional model.)

Nevertheless, Glauber dynamics offers an elegant approach to programming an interactive version of the Ising model. Assume we have a lattice of $N$ spins. Each spin $\sigma$ is indexed by its coordinates $x, y$ and takes on one of the two values $+1$ and $-1$. Thus flipping a spin is a matter of multiplying $\sigma$ by $-1$. The algorithm for updating the lattice looks like this:

Repeat $N$ times:

1. Choose a spin $\sigma_{x, y}$ at random.
2. Sum the values of the four neighboring spins, $S = \sigma_{x+1, y} + \sigma_{x-1, y} + \sigma_{x, y+1} + \sigma_{x, y-1}$. The possible values of $S$ are $\{-4, -2, 0, +2, +4\}$.
3. Calculate $\Delta E = 2 \, \sigma_{x, y} \, S$, the change in interaction energy if $\sigma_{x, y}$ were to flip.
4. If $\Delta E \lt 0$, set $\sigma_{x, y} = -\sigma_{x, y}$.
5. Otherwise, set $\sigma_{x, y} = -\sigma_{x, y}$ with probability $\exp(-\Delta E/T)$, where $T$ is the temperature.

Display the updated lattice.

Step 4 says: If flipping a spin will reduce the overall energy of the system, flip it. Step 5 says: Even if flipping a spin raises the energy, go ahead and flip it in a randomly selected fraction of the cases. The probability of such spin flips is the Boltzmann factor $\exp(-\Delta E/T)$. This quantity goes to $0$ as the temperature $T$ falls to $0$, so that energetically unfavorable flips are unlikely in a cold lattice.
The probability approaches $1$ as $T$ goes to infinity, which is why the model is such a seething mass of fluctuations at high temperature. (If you’d like to take a look at real code rather than pseudocode—namely the JavaScript program running the simulation above—it’s on GitHub.)

Glauber dynamics belongs to a family of methods called Markov chain Monte Carlo (MCMC) algorithms. The idea of Markov chains was an innovation in probability theory in the early years of the 20th century, extending classical probability to situations where the next event depends on the current state of the system. Monte Carlo algorithms emerged at post-war Los Alamos, not long after Glauber left there to resume his undergraduate curriculum. He clearly kept up with the work of Stanislaw Ulam and other former colleagues in the Manhattan Project.

Within the MCMC family, the distinctive feature of Glauber dynamics is choosing spins at random. The obvious alternative is to march methodically through the lattice by columns and rows, examining every spin in turn. That procedure can certainly be made to work, but it requires care in implementation. At low temperature the Ising process is very nearly deterministic, since unfavorable flips are extremely rare. When you combine a deterministic flip rule with a deterministic path through the lattice, it’s easy to get trapped in recurrent patterns. For example, a subtle bug yields the same configuration of spins on every step, shifted left by a single lattice site, so that the pattern seems to slide across the screen. Another spectacular failure gives rise to a blinking checkerboard, where every spin is surrounded by four opposite spins and flips on every time step. Avoiding these errors requires much fussy attention to algorithmic details. (My personal experience is that the first attempt is never right.) Choosing spins by throwing random darts at the lattice turns out to be less susceptible to clumsy mistakes.
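Spelled out in code, the random-site update rule takes only a few lines. Here is a minimal Python sketch of one sweep, written for this summary rather than taken from the post's GitHub repository; it assumes the lattice is a list of lists of ±1 spins with wraparound boundaries, and a temperature $T > 0$:

```python
import math
import random

def glauber_sweep(lattice, T, rng=random):
    """One sweep of Glauber dynamics: N single-spin updates on an
    n-by-n lattice of +1/-1 spins, wraparound boundaries, T > 0."""
    n = len(lattice)
    for _ in range(n * n):
        # 1. choose a spin at random
        x, y = rng.randrange(n), rng.randrange(n)
        s = lattice[x][y]
        # 2. sum the four neighboring spins (toroidal wraparound)
        S = (lattice[(x + 1) % n][y] + lattice[(x - 1) % n][y] +
             lattice[x][(y + 1) % n] + lattice[x][(y - 1) % n])
        # 3. energy change if this spin were to flip
        dE = 2 * s * S
        # 4-5. flip if energy drops, else with the Boltzmann probability
        if dE < 0 or rng.random() < math.exp(-dE / T):
            lattice[x][y] = -s
```

Repeated sweeps at $T \approx 2.27$ should reproduce the critical fluctuations described above; at very low $T$ the Boltzmann factor makes unfavorable flips vanishingly rare, so a uniform lattice essentially never changes.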
Yet, at first glance, the random procedure seems to have hazards of its own. In particular, choosing 10,000 spins at random from a lattice of 10,000 sites does not guarantee that every site will be visited once. On the contrary, a few sites will be sampled six or seven times, and you can expect that 3,679 sites (that’s $1/e \times 10{,}000$) will not be visited at all. Doesn’t that bias distort the outcome of the simulation? No, it doesn’t. After many iterations, all the sites will get equal attention.

The nasty bit in all Ising simulation algorithms is updating pairs of adjacent sites, where each spin is the neighbor of the other. Which one goes first, or do you try to handle them simultaneously? The column-and-row ordering maximizes exposure to this problem: Every spin is a member of such a pair. Other sequential algorithms—for example, visiting all the black squares of a checkerboard followed by all the white squares—avoid these confrontations altogether, never considering two adjacent spins in succession. Glauber dynamics is the Goldilocks solution. Pairs of adjacent spins do turn up as successive elements in the random sequence, but they are rare events. Decisions about how to handle them have no discernible influence on the outcome.

Years ago, I had several opportunities to meet Roy Glauber. Regrettably, I failed to take advantage of them. Glauber’s office at Harvard was in the Lyman Laboratory of Physics, a small isthmus building connecting two larger halls. In the 1970s I was a frequent visitor there, pestering people to write articles for Scientific American. It was fertile territory; for a few years, the magazine found more authors per square meter in Lyman Lab than anywhere else in the world. But I never knocked on Glauber’s door. Perhaps it’s just as well. I was not yet equipped to appreciate what he had to say. Now I can let him have the last word.
This is from the introduction to the paper that introduced Glauber dynamics:

If the mathematical problems of equilibrium statistical mechanics are great, they are at least relatively well-defined. The situation is quite otherwise in dealing with systems which undergo large-scale changes with time. The principles of nonequilibrium statistical mechanics remain in largest measure unformulated. While this lack persists, it may be useful to have in hand whatever precise statements can be made about the time-dependent behavior of statistical systems, however simple they may be.

by Brian Hayes at January 15, 2019 12:30 PM UTC

### faculty positions (at all levels) at TU Eindhoven (apply by March 1, 2019) from CCI: jobs

The Department of Mathematics and Computer Science at TU Eindhoven is looking for new faculty members to expand its academic staff. We welcome applications in all areas of computer science and at all levels, ranging from (tenure-track) assistant professor to full professor.

Website: https://jobs.tue.nl/en/vacancy/faculty-members-computer-science-assistant-associate-and-full-professor-level-418993.html Email: M.T.d.Berg@tue.nl

by shacharlovett at January 15, 2019 08:57 AM UTC

### Summer School on Geometric and Algebraic Combinatorics from CS Theory Events

June 17-28, 2019 Paris, France http://gac-school.imj-prg.fr/ Submission deadline: March 27, 2019 Registration deadline: March 27, 2019

This two-week summer school will present an overview of a selection of topics in the crossroads between geometry, algebra, and combinatorics.
It will consist of four one-week mini-courses given by leading experts, complemented with supervised exercise sessions, lectures by … Continue reading Summer School on Geometric and Algebraic Combinatorics

by shacharlovett at January 14, 2019 02:09 PM UTC

### One postdoc position available at Reykjavik University from Luca Aceto

Open Problems in the Equational Logic of Processes
School of Computer Science, Reykjavik University
One Postdoc Position

Applications are invited for one post-doctoral position at the School of Computer Science, Reykjavik University. The position is part of a three-year research project funded by the Icelandic Research Fund, under the direction of Luca Aceto (Gran Sasso Science Institute and Reykjavik University) and Anna Ingolfsdottir (Reykjavik University), in cooperation with Bas Luttik (TU Eindhoven) and Alexandra Silva (University College London). The overarching goal of this project is to solve some of the challenging open problems in the equational axiomatization of behavioural equivalences over process calculi. Interested applicants can contact Luca Aceto (email: luca@ru.is) for further details on the research proposal.

The successful candidate will benefit from, and contribute to, the research environment at the Icelandic Centre of Excellence in Theoretical Computer Science (ICE-TCS). For information about ICE-TCS and its activities, see the ICE-TCS web pages. Moreover, she/he will cooperate with Bas Luttik and Alexandra Silva during the project work and will benefit from the interaction with their research groups at TU Eindhoven and University College London. The postdoc will also have a chance to interact with Clemens Grabmayer and the CS group at the Gran Sasso Science Institute (http://cs.gssi.it/), L'Aquila, Italy.

Qualification requirements

Applicants for the postdoctoral position should have, or be about to hold, a PhD degree in Computer Science or closely related fields.
Previous knowledge of at least one of concurrency theory, process calculi, (structural) operational semantics and logic in computer science is highly desirable. Remuneration The wage for the postdoctoral position is 530,000 ISK (roughly 3,830  € at the present exchange rate) per month before taxes. (See http://payroll.is/en/ for information on what the wage will be after taxes.) The position is for two years, starting as soon as possible, and is renewable for another year, based on good performance and mutual satisfaction. Application details Interested applicants should send their CV, including a list of publications, in PDF to all the addresses below, together with a statement outlining their suitability for the project and the names of at least two referees. Luca Aceto email: luca@ru.is Anna Ingolfsdottir email: annai@ru.is We will start reviewing applications as soon as they arrive and will continue to accept applications until the position is filled. We strongly encourage interested applicants to send their applications as soon as possible and no later than 8 February 2019. by Luca Aceto (noreply@blogger.com) at January 14, 2019 10:10 AM UTC ### TR19-004 | UG-hardness to NP-hardness by Losing Half | Amey Bhangale, Subhash Khot from ECCC papers The $2$-to-$2$ Games Theorem of [KMS-1, DKKMS-1, DKKMS-2, KMS-2] implies that it is NP-hard to distinguish between Unique Games instances with assignment satisfying at least $(\frac{1}{2}-\varepsilon)$ fraction of the constraints $vs.$ no assignment satisfying more than $\varepsilon$ fraction of the constraints, for every constant $\varepsilon>0$. We show that the reduction can be transformed in a non-trivial way to give a stronger guarantee in the completeness case: For at least $(\frac{1}{2}-\varepsilon)$ fraction of the vertices on one side, all the constraints associated with them in the Unique Games instance can be satisfied. We use this guarantee to convert the known UG-hardness results to NP-hardness. We show: 1. 
Tight inapproximability of independent set in degree-$d$ graphs, within a factor of $\Omega\left(\frac{d}{\log^2 d}\right)$, where $d$ is a constant.

2. NP-hardness of approximating the Maximum Acyclic Subgraph problem within a factor of $\frac{2}{3}+\varepsilon$, improving the previous ratio of $\frac{14}{15}+\varepsilon$ by Austrin et al. [AMW15].

3. For any predicate $P^{-1}(1) \subseteq [q]^k$ supporting a balanced pairwise independent distribution, given a $P$-CSP instance with value at least $\frac{1}{2}-\varepsilon$, it is NP-hard to satisfy more than a $\frac{|P^{-1}(1)|}{q^k}+\varepsilon$ fraction of the constraints.

### Jean Bourgain 1954–2018 and Michael Atiyah 1929–2019

from Richard Lipton

A tribute to two premier analysts

From Flanders Today src1 and Ryle Trust Lecture src2

Baron Jean Bourgain and Sir Michael Atiyah passed away within the past three weeks. They became mathematical nobility by winning the Fields Medal, Atiyah in 1966 and Bourgain in 1994. Bourgain was created Baron by King Philippe of Belgium in 2015. Atiyah’s knighthood did not confer nobility, but he held the dynastic Order of Merit, which is limited to 24 living members and has had fewer than 200 total since its inception in 1902. Atiyah had been #2 by length of tenure, after Prince Philip and ahead of Prince Charles. Today we discuss how they ennobled mathematics by their wide contributions.

Bourgain was affiliated with IAS through the IBM John von Neumann Professorship. He had been battling cancer for a long time. Here is the middle section of the coat of arms he created for his 2015 investiture:

Detail from IAS source

The shield shows the beginning of an Apollonian circle packing, in which every radius is the reciprocal of an integer. This property continues as circles are recursively inscribed in the curvilinear regions—see this 2000 survey for a proof.
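The integer-radius property can be sanity-checked with Descartes' circle theorem, which relates the curvatures (reciprocal radii) of four mutually tangent circles. Here is a minimal Python sketch; the starting configuration $(-1, 2, 2, 3)$ is the classic integer Apollonian packing, chosen by us for illustration rather than taken from the coat of arms:

```python
import math

def fourth_curvature(k1, k2, k3):
    """Descartes' circle theorem: given three mutually tangent circles
    with curvatures k1, k2, k3, the two possible curvatures of a fourth
    mutually tangent circle are k1 + k2 + k3 +/- 2*sqrt(k1*k2 + k2*k3 + k3*k1)."""
    root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return k1 + k2 + k3 + root, k1 + k2 + k3 - root

# Classic integer packing: the outer circle has curvature -1 (negative
# because the others are internally tangent to it), containing two
# circles of curvature 2.
print(fourth_curvature(-1, 2, 2))  # (3.0, 3.0): the curvature-3 circle
print(fourth_curvature(2, 2, 3))   # (15.0, -1.0): next inscribed circle
```

Because the two solutions of the quadratic satisfy $k_4 + k_4' = 2(k_1+k_2+k_3)$, once one quadruple of curvatures is integral every recursively inscribed circle has integer curvature, i.e. integer-reciprocal radius.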
To quote Bourgain’s words accompanying his design:

The theory of these [packings] is today a rich mathematical research area, at the interface of hyperbolic geometry, dynamics, and number theory.

Bourgain’s affinity to topics we hold dear in computing theory is shown by this 2009 talk titled, “The Search for Randomness.” It covers not only PRNGs and crypto but also expander graphs and succinctness in quantum computing. He has also been hailed for his breadth across other mathematical areas and for his editorships of many journals. We will talk about a problem in analysis which he helped solve not by analytical means but by connecting the problem to additive combinatorics.

## From Analysis to Combinatorics

Sōichi Kakeya posed the problem of the minimum size of a subset of ${\mathbb{R}^{2}}$ in which a unit-length needle can be rotated through 360 degrees. Abram Besicovitch showed in 1928 that such sets can have Lebesgue measure ${\epsilon}$ for any ${\epsilon > 0}$. He had already shown that one can achieve measure zero with a weaker property, which he had used to show a strong failure of Fubini’s theorem for Riemann integrals: for all ${d}$ there is a measure-zero subset of ${\mathbb{R}^d}$ that contains a unit line segment in every direction. The surprise to many of us is that such strange sets would have important further consequences in analysis.

A 2008 survey in the AMS Bulletin by Izabella Łaba, titled “From Harmonic Analysis to Arithmetic Combinatorics,” brings out breakthrough contributions by Bourgain to conjectures and problems that involve further properties of these sets, which seem to retain Kakeya’s name:

Conjecture: A Kakeya set in ${\mathbb{R}^{d}}$ must have Hausdorff dimension ${d}$.

This and the formally weaker conjecture that the set must have Minkowski dimension ${d}$ are proved in ${\mathbb{R}^2}$ but open for all ${d \geq 3}$.
Bourgain first proved that the restriction conjecture of Elias Stein, which is about extensions of the Fourier transform from certain subspaces of functions from ${\mathbb{R}^d}$ to ${\mathbb{C}}$ to operators from ${L^q}$ to ${L^p}$ functions on ${\mathbb{R}^d}$, implies the Kakeya conjecture. It is likewise open for ${d,p \geq 3}$. As Łaba writes, associated estimates “with ${p > 2}$ require deeper geometrical information, and this is where we find Kakeya sets lurking under the surface.” What Bourgain showed is that the restriction estimates place constraints on sets of lower Hausdorff dimension that force them to align “tubes” along discrete directions that can be approximated via integer lattices. This led to the following “key lemma”: Lemma 1 Consider subsets ${S}$ of ${A \times B}$, where ${A}$ and ${B}$ are finite subsets of ${\mathbb{Z}^d}$, and define $\displaystyle S^{+} = \{a+b: (a,b) \in S\}, \qquad S^{-} = \{a - b: (a,b) \in S\}.$ For every ${C > 0}$ there is ${C' > 0}$ such that whenever ${|S^{+}| \leq Cn}$, where ${n = \max\{|A|,|B|\}}$, we have ${|S^{-}| \leq C'n^{2 - \frac{1}{13}}}$. To quote Łaba: “Bourgain’s approach, however, provided a way out. Effectively, it said that our hypothetical set would have structure, to the extent that many of its lines would have to be parallel instead of pointing in different directions. Not a Kakeya set, after all.” She further says: Bourgain’s argument was, to this author’s knowledge, the first application of additive number theory to Euclidean harmonic analysis. It was significant, not only because it improved Kakeya bounds, but perhaps even more so because it introduced many harmonic analysts to additive number theory, including [Terence] Tao who contributed so much to the subject later on, and jump-started interaction and communication between the two communities. 
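The objects in Lemma 1 are easy to experiment with. The short Python sketch below (the choice of $S$ is ours, a one-dimensional toy rather than anything from Bourgain's proof) takes $A = B$ to be an arithmetic progression, where both the sumset and the difference set stay linear in $n$, comfortably within the lemma's bounds:

```python
from itertools import product

def sumset_diffset(S):
    """Given a set S of pairs (a, b), return S+ = {a + b} and S- = {a - b}."""
    s_plus = {a + b for a, b in S}
    s_minus = {a - b for a, b in S}
    return s_plus, s_minus

# A structured example: A and B are progressions and S = A x B, so n = 10.
A = B = range(10)
S = set(product(A, B))
s_plus, s_minus = sumset_diffset(S)

# Both sets have 19 elements (sums 0..18, differences -9..9): |S+| <= 2n,
# and |S-| is far below the n^2 that an unstructured S could reach.
print(len(s_plus), len(s_minus))  # 19 19
```

This is only a picture of the lemma's hypothesis and conclusion, not its proof: the content of the lemma is that a small sumset *forces* the difference set below $n^2$, with exponent $2 - \frac{1}{13}$.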
The Green-Tao theorem [on primes] and many other developments might have never happened, were it not for Bourgain’s brilliant leap of thought in 1998. Among many sources, note this seminar sponsored by Fan Chung and links from Tao’s own memorial post. ## Michael Atiyah Michael Atiyah was also much more than an analyst—indeed, he was first a topologist and algebraic geometer. He was also a theoretical physicist. Besides all these scientific hats, he engaged with society at large. After heading Britain’s Royal Society from 1990 to 1995, he became president of the Pugwash Conferences on Science and World Affairs. This organization was founded by Joseph Rotblat and Bertrand Russell in the 1950s to avert nuclear war and misuse of science, and won the 1995 Nobel Peace Prize. The “misuse of science” aspect comes out separately in Atiyah’s 1999 article in the British Medical Journal titled, “Science for evil: the scientist’s dilemma.” It lays out a wider scope of ethical and procedural concerns than the original anti-war purpose. This is furthered in his 1999 book chapter, “The Social Responsibility of Scientists,” which laid out six points including: • First there is the argument of moral responsibility. If you create something you should be concerned with its consequences. This should apply as much to making scientific discoveries as to having children. • Scientists will understand the technical problems better than the average politician or citizen and knowledge brings responsibility. • [T]here is need to prevent a public backlash against science. 
• Self-interest requires that scientists must be fully involved in public debate and must not be seen as “enemies of the people.”

As he says in its abstract:

In my own case, after many years of quiet mathematical research, working out of the limelight, a major change occurred when unexpectedly I found myself president of the Royal Society, in a very public position, and expected to act as a general spokesman for the whole of science.

Within physics and mathematics, he also ventured into a debate that comes closer to the theory-as-social-process topic we have discussed on this blog. In 1994 he led a collection of community responses to a 1993 article by Arthur Jaffe and Frank Quinn that began with the question, “Is speculative mathematics dangerous?” Atiyah replied by saying he agreed with many of their points, especially the need to distinguish between results based on rigorous proofs and heuristic arguments,

…But if mathematics is to rejuvenate itself and break exciting new ground it will have to allow for the exploration of new ideas and techniques which, in their creative phase, are likely to be as dubious as in some of the great eras of the past. …[I]n the early stages of new developments, we must be prepared to act in more buccaneering style.

Now we cannot help recalling his claim last September to have heuristic arguments that would build toward a proof of the Riemann Hypothesis, which we covered in several posts. As we stated in our New Year’s post, nothing more of substance has come to our attention. We do not know how much more work was done on the promised longer paper. We now discuss briefly how his most famous work is starting to matter in algorithms and complexity.

## Indexes and Invariants

We will not try to go into even as much detail as we did for Kakeya sets about Atiyah’s signature contributions to topological K-theory, physical gauge theory, his celebrated index theorem with Isadore Singer, and much else.
But we can evoke reasons for us to be interested in the last. We start with the simple statement from the essay by John Rognes of Oslo that accompanied the 2004 Abel Prize award to Atiyah and Singer: Theorem 2 Let ${P(f) = 0}$ be a system of differential equations. Then $\displaystyle \text{analytical index}(P) = \text{topological index}(P).$ Here the analytical index equals the dimension ${d_k}$ of the kernel of ${P}$ minus the dimension ${d_c}$ of the co-kernel of ${P}$, which (again quoting Rognes) “is equal to the number of parameters needed to describe all the solutions of the equation, minus the number of relations there are between the expressions ${P(f)}$.” The topological index has a longer laundry list of items in its definition, but the point is, those items are usually all easily calculable. It is further remarkable that in many cases we can get ${d_k - d_c}$ without knowing how to compute ${d_k}$ and ${d_c}$ individually. The New York Times obituary quotes Atiyah from 2015: It’s a bit of black magic to figure things out about differential equations even though you can’t solve them. One thing it helps figure out is satisfiability. Besides cases where knowing the number of solutions does help in finding them, there are many theorems that needed only information about the number and the parameterization. We have an analogous situation in complexity theory with the lower bound theorem of Walter Baur and Volker Strassen, which we covered in this post: The number of multiplication gates needed to compute an arithmetical function ${f}$ is bounded below by a known constant times the log-base-2 of the maximum number of solutions to a system formed from the partial derivatives of ${f}$ and a certain number of linear equations, over cases where that number is finite. Furthermore, both theorems front on algebraic geometry and geometric invariant theory, whose rapid ascent in our field was witnessed by a workshop at IAS that we covered last June. 
That workshop mentioned not only Atiyah but also the further work in algebraic geometry by his student Frances Kirwan, who was contemporaneous with Ken while at Oxford. Thus we may see more of the kind of connections in which Atiyah delighted, as noted in current tributes and the “matchmaker” label which was promoted at last August’s ICM.

## Open Problems

Our condolences go out to their families and colleagues.

by RJLipton+KWRegan at January 13, 2019 06:11 AM UTC

### Postdoc position at Duke University (apply by February 28, 2019)

from CCI: jobs

The algorithms and computational economics research groups at Duke University invite applications for one or potentially more postdoctoral positions, starting on or after July 1, 2019. The AcademicJobsOnline URL has full details as well as an online application form. Email: kamesh@cs.duke.edu

by shacharlovett at January 11, 2019 08:32 PM UTC

I'm a bit disappointed that Michael describes al-Nayrizi's and Perigal's (families of) simple perfect squarings of the (flat square) torus as different. They are, in fact, identical—not just “congruent” or “equivalent” or “isomorphic”, but actually indistinguishable. They only look different because al-Nayrizi and Perigal cut the torus into a square in two different ways.

by Jeff Erickson at January 11, 2019 01:13 PM UTC

Nominations are due on February 28, 2019. This award is given to a student who defended a thesis in 2018. It is a prestigious award and is accompanied by a $1500 prize. In the past, the grand prize has been awarded to:

2017: Aviad Rubinstein, “Hardness of Approximation Between P and NP”
2016: Peng Shi, “Prediction and Optimization in School Choice”
2015: Inbal Talgam-Cohen, “Robust Market Design: Information and Computation”
2014: S.
Matthew Weinberg, “Algorithms for Strategic Agents”
2013: Balasubramanian Sivan, “Prior Robust Optimization”

And the award has had seven runners-up: Rachel Cummings, Christos Tzamos, Bo Waggoner, James Wright, Xi (Alice) Gao, Yang Cai, and Sigal Oren. You can find detailed information about the nomination process at: http://www.sigecom.org/awardd.html. We look forward to reading your nominations!

Your Award Committee,
Renato Paes Leme
Aaron Roth (Chair)
Inbal Talgam-Cohen

by Kevin Leyton-Brown at January 11, 2019 03:27 AM UTC

### 2019 SIGecom Dissertation Award: Call for Nominations

from Aaron Roth

Dear all, Please consider nominating graduating Ph.D. students for the SIGecom Dissertation Award. If you are a graduating student, consider asking your adviser or other senior mentor to nominate you. Nominations are due on February 28, 2019. This award is given to a student who defended a thesis in 2018. It is a prestigious award and is accompanied by a $1500 prize. You can find detailed information about the nomination process at: http://www.sigecom.org/awardd.html. We look forward to reading your nominations!

Renato Paes Leme
Aaron Roth (Chair)
Inbal Talgam-Cohen

by Aaron Roth (noreply@blogger.com) at January 10, 2019 08:26 PM UTC

### ANALCO, SOSA, SODA post

I spent the last few days at SODA-ANALCO-ALENEX-SOSA in San Diego. (Nice location choice, I'd say!) Here's some news.

This will be the last ANALCO (Analytic Algorithms and Combinatorics).
Apparently submissions have been decreasing, so they've decided it will halt and the work on these topics will go into SODA and other conferences. I'm not sure how to think of it -- I think we as a community have far too many conferences/workshops generally, but I think the SODA model of having ANALCO and ALENEX (and now SOSA, I imagine) folded cleanly into the main conference is an excellent one. I also like the ANALCO topics. But I can understand the time may have come to do something else. Thanks to everyone who worked to organize ANALCO and keep it going these many years.

It looks like SOSA (Symposium on Simplicity in Algorithms) will be taking its place in the SODA lineup. I co-chaired the symposium with Jeremy Fineman this year, the second for the symposium. I was surprised by the high quality of the submissions, and was then further surprised by the strong turnout at SODA. The room was quite full for the Tuesday afternoon sessions, and there were easily 75+ people at several of the talks. I do think there's a need for SOSA -- no other workshop/conference hits the theme of simplicity in our area, and it's a really nice fit with the rest of SODA. I'm hoping it will last, and in particular that they'll continue to have a good number of high quality submissions, but that depends on all of you. Ideally, there will be a positive feedback loop here -- now that there's a good home for this type of work (besides notes on the arxiv), people will be more inclined to write up and submit things to SOSA. For Tuesday's talks, I'll call out Josh Alman's great presentation on "An Illuminating Algorithm for the Light Bulb Problem" as my favorite for the day.

With ANALCO exiting, though, I think there's more room for additional satellite events at SODA, so hopefully some people will get creative.

If I had thought about it I should have live-blogged the business meeting.
I'd say as highlights, first, Sandy Irani presented the report of the ad hoc committee to combat harassment and discrimination in the theory of computing community. (See here for the report.) There was an overwhelming vote to adopt their recommendations going forward. It's good to see progress in addressing these community concerns. Second, Shuchi Chawla will be the next PC chair, and she brought forward a plan to have SODA PC members be allowed to submit papers (with a higher bar), which was voted on favorably as well.

I suppose the last note is that Jon Kleinberg's invited talk was the conference highlight you expect a Jon Kleinberg talk to be, with interesting results and models related to fairness and implicit bias. Thanks to SIAM and all the organizers for their hard work.

by Michael Mitzenmacher (noreply@blogger.com) at January 09, 2019 10:25 PM UTC

### Mixed-Integer Nonlinear Optimization meets Data Science

from CS Theory Events

June 25, 2018 – June 28, 2019
Ischia, Italy
http://www.iasi.cnr.it/minoa/big-data-school/

CNR-IASI, as part of the MINOA project, announces the school for PhD students and post-docs on the theme Mixed-Integer Nonlinear Optimization meets Data Science. The school will cover the following topics:

- Deep learning for AI
- Clustering for Big Data
- Machine Learning for Combinatorial … Continue reading Mixed-Integer Nonlinear Optimization meets Data Science

by shacharlovett at January 09, 2019 08:51 PM UTC
# I have two orthonormal vectors and a fixed vector b. I need to show orthogonality of the "error vector"

I have two orthonormal vectors $q_1$ and $q_2$ and a fixed vector $b$. The first part of the question was to find the optimal linear combination $\alpha q_1+\beta q_2$ that is closest to $b$ in the 2-norm sense. I believe that $x=Q^{T}b$. For the second part of the question I have the "error vector" $r=b-\alpha q_1-\beta q_2$ and I have to show this is orthogonal to $q_1$ and $q_2$. Can I do this by showing that the projection of $b$ into the range of $Q$ is in the range of $Q$? Am I even on the right track? Thanks

Let the error vector $\epsilon$ be defined by $$b = (q_1^\top b) q_1 + (q_2^\top b) q_2 + \epsilon \tag1$$ Taking inner products with $q_1$ and $q_2$ in $(1)$ and using orthonormality shows that $\epsilon$ is orthogonal to $q_1, q_2$, and in fact to any linear combination of $q_1, q_2$. We will use the Pythagorean theorem to show that $(q_1^\top b) q_1 + (q_2^\top b) q_2$ is the closest vector in the span of $\{q_1, q_2\}$. Consider $$b - \alpha q_1 -\beta q_2 = (q_1^\top b - \alpha) q_1 + (q_2^\top b -\beta) q_2 + \epsilon \tag 2$$ Since $(q_1^\top b - \alpha) q_1 + (q_2^\top b -\beta) q_2$ and $\epsilon$ are orthogonal, \begin{align}| b - \alpha q_1 -\beta q_2|^2 &= |(q_1^\top b - \alpha) q_1 + (q_2^\top b -\beta) q_2|^2 +|\epsilon|^2\\ &\ge |\epsilon|^2 = |b -(q_1^\top b) q_1 - (q_2^\top b) q_2|^2\end{align}

Since $x=Q^{T}b=\left(q_1^{T}b\right)q_1+\left(q_2^{T}b\right)q_2$, $\;\;\;r=b-\left(q_1^{T}b\right)q_1-\left(q_2^{T}b\right)q_2$; so you just need to show that $q_1^{T}r=0$ and $q_2^{T}r=0\;\;$ (using that $q_1$ and $q_2$ are orthonormal).
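The orthogonality claim is also easy to check numerically. A small Python sketch (all concrete vectors here are illustrative choices of ours, with the orthonormal pair built by one Gram–Schmidt step):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def scale(c, v):
    return [c * x for x in v]

def sub(u, v):
    return [x - y for x, y in zip(u, v)]

def normalize(v):
    n = dot(v, v) ** 0.5
    return scale(1 / n, v)

# Build an orthonormal pair q1, q2 in R^3 (Gram-Schmidt on two fixed vectors).
q1 = normalize([1.0, 2.0, 2.0])
u = [1.0, 0.0, 0.0]
q2 = normalize(sub(u, scale(dot(u, q1), q1)))

b = [3.0, -1.0, 4.0]
alpha, beta = dot(q1, b), dot(q2, b)                 # optimal coefficients
r = sub(sub(b, scale(alpha, q1)), scale(beta, q2))   # the "error vector"

# r is orthogonal to q1 and q2 up to floating-point rounding.
print(abs(dot(q1, r)) < 1e-12, abs(dot(q2, r)) < 1e-12)  # True True
```

Replacing $b$ by any other vector leaves the two checks true, which matches the algebraic argument: the choice $\alpha = q_1^{T}b$, $\beta = q_2^{T}b$ always kills both inner products.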
# Efficient data structure for set to union of sets mapping

What I have is the following:

• A mapping $f : key \rightarrow set$ that works in constant time
• An inefficient mapping $g : \{key\} \rightarrow \cup_{k \in \{key\}} f(k)$ where this union is slow

Is there a data structure that would make things faster? I could precalculate the desired output for every possible set of keys, but for a fairly large number of possible keys this isn't feasible ($O(2^n)$), even though lookup would then be fast enough. Currently I'm using a dictionary to get an efficient mapping $f$. For each value in a set of keys, I apply $f$ and take the union of the results. This is inefficient.

• Is $\{key\}$ a standard notation? – babou Mar 9 '15 at 18:15
• Set of keys? What's non-standard about it? Or would you prefer a more Java-like syntax Set<Key>? Or maybe $S$ where $S \in \mathcal{P}(O)$ where $O$ is a set of all possible keys? – Looft Mar 9 '15 at 21:26
• I suspect you'll need to tell us more about your application, as I suspect whether this can be sped up will depend upon the access patterns and workload. What can you tell us? I presume you want to speed up the calls to $g$. Is there any locality or structure in the arguments to $g$? Do you know the set of arguments to $g$ in advance (so they can be preprocessed in batch mode) or are they supplied in an online fashion (where you must answer the previous query before getting the next query)? How large is the universe of values that the output of $f$ comes from? etc. – D.W. Mar 9 '15 at 21:48
• Yes, I know them in advance, there are exactly 3,200,000 different keys (we can behave as if they are integers), set argument to $g$ can be of size from 1 to around 30,000. $f$ returns a set of integers also. I'd say that union of 30,000 results is the most expensive operation.
– Looft Mar 9 '15 at 22:07 • If I were to precompute the fetches for all possible subsets of the key-set, I'd have to go through $O(2^n)$ subsets. It's obvious that I could use the previous results/unions (DP) but I would still have an exponential number of inputs (stored) in the final mapping. – Looft Mar 10 '15 at 15:45
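Since the query sets are known in advance and the unions dominate the cost, one pragmatic middle ground (short of the infeasible $O(2^n)$ table) is to cache the result per query set, so a repeated query costs a single dictionary lookup. A Python sketch; the mapping `f` and the key/value numbers are toy stand-ins, not from the question:

```python
from functools import lru_cache

# Toy stand-in for the constant-time mapping f (illustrative data only).
f = {
    1: frozenset({10, 11}),
    2: frozenset({11, 12}),
    3: frozenset({12, 13}),
}

@lru_cache(maxsize=None)
def g(keys):
    """Union of f(k) over a frozenset of keys, memoized per query set.

    frozenset().union(*sets) does the n-way union in a single pass,
    which is cheaper in practice than a loop of pairwise unions.
    """
    return frozenset().union(*(f[k] for k in keys))

print(sorted(g(frozenset({1, 2}))))  # [10, 11, 12]
g(frozenset({1, 2}))                 # second call: served from the cache
```

The union itself is still $O(\text{total size})$ the first time; caching only helps if query sets repeat. If instead queries share large common sub-groups of keys, precomputing the union of each sub-group once and unioning those partial results is the corresponding middle ground.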
## September 24, 2009

### Homotopy Theory and Higher Algebraic Structures at UC Riverside

#### Posted by John Baez

This year the Fall Western Section Meeting of the American Mathematical Society will be held here at UC Riverside. Julie Bergner and I are running a session on homotopy theory, $n$-categories and related topics. If you’re anywhere nearby, I hope you drop by!

If you’re interested in our session, you may also like this one: It’ll include talks by Louis Kauffman, Mikhail Khovanov, Scott Carter, Masahico Saito, Scott Morrison and other people who live near the interface of topology, categories and physics. As if that weren’t enough, there’s also another session on knot theory:

The special session that Julie Bergner and I put together has a great lineup of talks. First, here are the vaguely $n$-categorical talks, listed in no particular order:

- Categorification via quiver varieties. Anthony Licata (with Sabin Cautis and Joel Kamnitzer). abstract.
- Categorifying quantum groups. Aaron D Lauda. abstract.
- 2-Quandles: categorified quandles. Alissa S. Crans. abstract.
- Group actions on categorified bundles. Weiwei Pan. abstract.
- A categorification of Hall algebras. Christopher Walker. abstract.
- A categorification of the Hecke algebra. Alexander E Hoffnung. abstract.
- 3-Categories for the working mathematician. Christopher L Douglas (with Andre Henriques). abstract.
- Mapping spaces in quasi-categories. David I. Spivak (with Daniel Dugger). abstract.
- String connections and supersymmetric sigma models. Konrad Waldorf. abstract.

As you can see, categorification is becoming a big business. Of course, a lot of this is due to the work of Mikhail Khovanov. And over in Alissa and Sam’s special session, he’s giving an hour-long talk entitled “Adventures in categorification”!

Second, here are the vaguely homotopy-theoretic talks.
Of course there’s no sharp dividing line, and I’m not trying to create one… I’m just trying to avoid a single enormous list of talks which none of you will read:

- Generating spaces for $S(n)$-acyclics. Aaron Leeman. abstract.
- A homotopy-theoretic view of Bott–Taubes integrals and knot spaces. Robin Koytcheff. abstract.
- On the $K$-theory of toric varieties. Christian Haesemeyer (with Guillermo Cortinas, Mark E Walker, and Charles A. Weibel). abstract.
- Homotopy colimits and the space of square-zero upper-triangular matrices. Jonathan W Lee. abstract.
- An application of equivariant $\mathbb{A}^1$-homotopy theory to problems in commutative algebra. T Benedict Williams. abstract.
- String topology and the based loop space. Eric J Malm. abstract.
- Unstable Vassiliev theory. Chad D Giusti. abstract.
- The Atiyah–Segal completion theorem in twisted $K$-theory. Anssi S. Lahtinen. abstract.
- Monoids of moduli spaces of manifolds. Soren Galatius. abstract.
- Relations amongst motivic Hopf elements. Daniel Dugger (with Daniel C. Isaksen). abstract.
- Real Johnson-Wilson theories. Maia Averett. abstract.
- Orbifolds and equivariant homotopy theory. Laura Scull (with Dorette Pronk). abstract.
- Universal Bott Samelson resolutions. Nitu R Kitchloo. abstract.

There will also be a double talk on ‘blob homology’. I don’t completely understand blob homology yet, but it seems to be a practical way of computing homology with coefficients in an $n$-category with duals. If so, it straddles the worlds of $n$-category theory and homotopy theory so neatly that it makes a mockery of the distinction:

- Blob homology 1. Scott Morrison (with Kevin Walker). abstract.
- Blob Homology 2. Kevin Walker (with Scott Morrison). abstract.

If you want to know when the talks are actually taking place, check out the schedules here.
Posted at September 24, 2009 4:39 PM UTC

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

The basic idea of blob homology should be that of factorization algebra, which in turn is not unsimilar to local nets in AQFT.

Posted by: Urs Schreiber on September 24, 2009 6:15 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

I agree there is significant similarity with ideas from factorization algebras and local nets, but I’m not sure that I would characterize this as the “basic idea”. One way of thinking of the blob complex is as a generalization of the Hochschild complex to higher categories and higher dimensional manifolds. One thinks of the Hochschild complex as associated to a 1-category and a 1-manifold (the circle). It’s a fairly small complex, analogous to cellular homology. The blob complex for the same input data (1-category and circle) yields a quasi-isomorphic but much much larger chain complex, analogous to singular homology. Its advantage over the Hochschild complex is that it is “local”. In higher dimensions this locality means that it is easy to (well-) define the blob complex of an n-category + n-manifold without choosing any sort of decomposition of the n-manifold.

Posted by: Kevin Walker on September 28, 2009 9:59 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

Thanks, Kevin. Maybe “basic idea” was the wrong term and “significant similarity” captures it better. In any case, we (whoever “we” is) should write an $n$Lab entry on blob homology.

Posted by: Urs Schreiber on September 29, 2009 7:48 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

I moved your comment to the wiki, for a start: $n$Lab: blob complex. What’s the canonical reference?
Posted by: Urs Schreiber on September 29, 2009 8:32 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

Kevin Walker and Scott Morrison are writing a huge paper on blob homology. For a while the canonical reference has been this short abstract. A more detailed draft is available if you know where to look, but I hear a much better version will be released in a few weeks, so I’ll let Kevin and Scott decide if they want to publicize that draft in the interim.

By the way, I keep wanting to type the phrase ‘blog homology’ — maybe we should invent that sometime.

Posted by: John Baez on September 29, 2009 11:23 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

There’s a more recent abstract/announcement in this Oberwolfach report. The current draft of the paper is too rough to be made public, but anyone who is motivated enough to track down my email address and send a personal request is welcome to a copy.

Urs: Thanks for making the nLab entry. Feel free to copy from or link to the above report. If you or anyone else has questions, I’ll be happy to attempt to answer them here or on the wiki.

Posted by: Kevin Walker on September 29, 2009 11:47 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

I added the link to $n$Lab:blob complex. I also wanted to add the link to the set of slides on your homepage, but couldn’t find them.

I’ll be happy to attempt to answer them here or on the wiki.

Great. Here are mine:

a) What’s the definition, precisely?

b) What are the first few interesting examples, in detail?

c) What can you say about the relation to Costello’s and to Lurie’s factorization algebras?

If you could add a bit of information on that to the wiki, that would be great (I have created dummy headlines for you already, you just need to paste in the text).
You’ll remember that I had to miss your talk in Oberwolfach, unfortunately. Which was/is a pity.

Posted by: Urs Schreiber on September 30, 2009 10:36 AM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

“a) What’s the definition, precisely?”

The Oberwolfach abstract contains a terse but fairly detailed/complete definition. I’d try to copy it here or at nLab, but I don’t have any experience with MathML.

“b) What are the first few interesting examples, in detail?”

There are not currently as many fully developed examples as I would like. The motivation for developing the general theory was to apply it to two examples: (1) Khovanov homology, which can be thought of as a 4-category (with strong duality), and (2) tight contact structures, which can be viewed as a 3-category (albeit with some complications coming from the smooth, as opposed to PL, structure). Both of these categories are non-semisimple, so one expects that blob homology might have something interesting to say.

Some special cases of blob homology provide interesting (but not new) examples. The degree 0 blob homology of an n-manifold is isomorphic to the (dual) Hilbert space of the n+1-dimensional TQFT constructed from the same n-category. The blob homology of a circle is isomorphic to the Hochschild homology of the input 1-category. More generally, if one views a commutative algebra C as an n-category which is trivial in dimensions 0 through n-1, then the blob homology based on C is conjecturally isomorphic to the higher Hochschild homology of C. (This conjecture is due to Thomas Tradler.) In the case that C is a truncated [multi-variable] polynomial algebra, one can relate blob homology to the usual homology of [colored] configuration spaces of points in the n-manifold in question.

“c) What can you say about the relation to Costello’s and to Lurie’s factorization algebras?”

Not as much as I would like.
This is on my list of things to think about, but it’s not at the top of that list at the moment. Given a 1-category, one can “ringify” it by taking the sum of all the morphism spaces and defining multiplication to be zero if range and domain do not agree. The resulting ring is Morita equivalent to the original 1-category. Factorization algebras make me think of the result of ringifying (or maybe E_n-algebra-ifying) an n-category. All of the lower order morphisms have been removed. When n=1 we know that rings are just as general as 1-categories from a Morita point of view. Is the same true for n>1? At least one knowledgeable person has told me he thinks the answer is “no”. If the answer is no then I would guess that factorization algebras can be viewed as yet another (strictly) special case of the blob complex. If, on the other hand, the answer is yes, then perhaps factorization algebras are an equivalent way of doing blob homology that is much easier to define. (I suspect, though, that to do concrete calculations one would want to have the lower order morphisms back in the picture.) I am very far from an expert on factorization algebras, so perhaps the above paragraph is all nonsense. If anyone else has thoughts on this topic, I’m eager to hear them. I’ll try to fill in some of the blank spots on the nLab wiki in the future. Posted by: Kevin Walker on October 6, 2009 8:05 PM | Permalink | Reply to this ### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside I don’t have any experience with MathML. No need to. None of us has. You type in ordinary LaTeX code. Have a quick look at the source code of a random entry. It’s very easy. Just dollar signs as usual and there you go. Posted by: Urs Schreiber on October 6, 2009 8:39 PM | Permalink | Reply to this ### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside tight contact structures, which can be viewed as a 3-category Kevin, Would you be willing to say a few more words about that? 
Posted by: Eugene Lerman on October 6, 2009 10:56 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

Sure. The somewhat tautological thing to say is that there is a 3-category where

* a 0-morphism is a point equipped with a germ of a contact structure on a 3-dimensional neighborhood of it;
* a 1-morphism is an arc equipped with a germ of a contact structure on a 3-dimensional neighborhood of it;
* a 2-morphism is a disk equipped with a germ of a contact structure on a 3-dimensional neighborhood of it;
* and a 3-morphism is a tight contact structure on a 3-ball (i.e. we declare non-tight (overtwisted) contact structures to be zero).

The rest of the 3-cat structure is given by the obvious geometric operations (gluing, restriction to boundary, etc.). I’ve swept a lot of details under the rug. One of the more important details is that for technical reasons one insists that the 1-morphisms be germs of contact structures near Legendrian arcs, not arbitrary arcs.

One of the interesting things about this 3-cat is that it can be given a completely combinatorial description. An old result (I forget whose) says that the germ of a contact structure near a surface in a contact 3-manifold is completely determined by the induced “characteristic foliation” of the surface (which is the singular foliation given by the intersection of the contact 2-planes and the tangent space of the surface). This means we have an uncountable number of 2-morphisms, which might at first be disappointing, but of course what we really care about is the number of isomorphism classes of 2-morphisms. A result of Giroux says that these isomorphism classes correspond bijectively to isotopy classes of “dividing curves” on the surface.
A set of dividing curves is a subdivision of the surface into positive and negative regions; roughly speaking, the positive region is the one where the orientation of the contact plane and the orientation of the surface agree (or at least are closer to agreeing than to disagreeing). This means that for a disk with fixed boundary condition, we have only finitely many (Catalan-number many) isomorphism classes of germ. (A closed loop in the dividing curves on the disk would imply that the germ of contact structure is not tight. This is not obvious.)

In other words, the contact 3-category can plausibly claim to be a categorification of the Temperley-Lieb 2-category with loop value d=0. Personally, I think this is very cool.

Papers by Ko Honda contain good background information on this topic. I’ve omitted many details, including tricky issues related to smoothing and unsmoothing corners.

Posted by: Kevin Walker on October 7, 2009 6:32 AM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

Very interesting. I presume a goal is to understand tight contact structures? Have you thought of doing something similar in dimension 5? I am thinking of Klaus Niederkruger’s plastikstufe. I understand that you don’t have the same infrastructure, like Giroux’s theorem etc., but it would be nice to have another tool for understanding contact structures in higher dimensions besides contact homology.

Posted by: Eugene Lerman on October 7, 2009 7:41 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

Yes, one of the goals is to understand tight contact structures. Another goal is to understand the structure of an interesting non-semisimple TQFT (or rather, decapitated TQFT – I’m not proposing to define the 4-dimensional part of the theory). I haven’t really thought about higher dimensions (too much remaining to do in dimension 3). I’ll take a look at the link you included.
Posted by: Kevin Walker on October 9, 2009 3:14 AM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

re blob homology: in his latest opus, Jacob Lurie now has a detailed discussion of his notion of topological chiral homology (see links there) and a quick remark

It should be closely related to the theory of blob homology studied by Morrison and Walker (p. 107).

I am hoping that eventually these two and factorization algebras will merge into a single thing. Would be a pity otherwise.

Posted by: Urs Schreiber on November 4, 2009 9:18 PM | Permalink | Reply to this

### blob homology

There is now a set of notes available by Scott Morrison and Kevin Walker on blob homology, which Kevin kindly linked to here.

Posted by: Urs Schreiber on November 16, 2009 11:42 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

Do you know if the lectures will be recorded for those who can’t make it to Riverside?

Posted by: anon on October 6, 2009 12:23 AM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

I’m pretty sure we won’t. We could, but I kind of doubt any of us will want to spend the weekend standing behind a camera — and we don’t have any slaves to do that for us. So, come to Riverside!

Posted by: John Baez on October 6, 2009 5:56 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

what! no grad students who could take turns? reminds me of JHC’s aphorism (approximately): grad students are machines for turning cellulose (i.e. paper) into theorems

Posted by: jim stasheff on October 7, 2009 2:45 PM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

This starts tomorrow. I'll be there! Maybe even on time?
Posted by: Toby Bartels on November 7, 2009 3:27 AM | Permalink | Reply to this

### Re: Homotopy Theory and Higher Algebraic Structures at UC Riverside

See you! Gotta walk over there now…

Posted by: John Baez on November 7, 2009 3:11 PM | Permalink | Reply to this
# DAX Statistical - STDEVX.S function

## Description

Evaluates an expression for each row of the table and returns the standard deviation of the expression, assuming that the table refers to a sample of the population.

## Syntax

STDEVX.S (<table>, <expression>)

## Parameters

| Sr.No. | Parameter & Description |
| --- | --- |
| 1 | **table**: A table or any DAX expression that returns a table of data. |
| 2 | **expression**: Any DAX expression that returns a single scalar value, where the expression is to be evaluated multiple times (for each row/context). |

## Return Value

A real number.

## Remarks

The DAX STDEVX.S function evaluates an expression for each row of the table and returns the standard deviation of the expression, assuming that the table refers to a sample of the population. If the table represents the entire population, use the DAX STDEVX.P function instead.

STDEVX.S uses the following formula:

$$\sqrt{\frac{\sum (x\:-\:\bar{x})^{2}}{n-1}}$$

where $\bar{x}$ is the average value of $x$ for the sample, and $n$ is the sample size.

Blank rows are filtered out of the resulting column and are not considered in the calculation. An error is returned if the resulting column contains fewer than 2 non-blank rows.

## Example

= STDEVX.S (West_Sales,West_Sales[Amount])
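For intuition, here is a minimal Python sketch of the same computation, with plain Python standing in for the DAX semantics. The `west_sales` table and its values are made up for illustration; blank results are modeled as `None`:

```python
import math

def stdevx_s(rows, expression):
    """Sample standard deviation of expression evaluated per row.

    Mirrors STDEVX.S semantics: blank (None) results are filtered out,
    and fewer than 2 non-blank results is an error.
    """
    values = [v for v in (expression(row) for row in rows) if v is not None]
    n = len(values)
    if n < 2:
        raise ValueError("STDEVX.S requires at least 2 non-blank rows")
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))

# Hypothetical West_Sales table with an Amount column
west_sales = [{"Amount": a} for a in (2, 4, 4, 4, 5, 5, 7, 9)]
print(stdevx_s(west_sales, lambda row: row["Amount"]))
```

Note the `n - 1` denominator, which is exactly what distinguishes the sample statistic (STDEVX.S) from the population statistic (STDEVX.P).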
## USACO 2017 January Contest, Gold

Contest has ended.

Bessie has gotten herself stuck on the wrong side of Farmer John's barn again, and since her vision is so poor, she needs your help navigating across the barn.

The barn is described by an $N \times N$ grid of square cells ($2 \leq N \leq 20$), some being empty and some containing impassable haybales. Bessie starts in the lower-left corner (cell $(1,1)$) and wants to move to the upper-right corner (cell $(N,N)$). You can guide her by telling her a sequence of instructions, each of which is either "forward", "turn left 90 degrees", or "turn right 90 degrees". You want to issue the shortest sequence of instructions that will guide her to her destination. If you instruct Bessie to move off the grid (i.e., into the barn wall) or into a haybale, she will not move and will skip to the next command in your sequence.

Unfortunately, Bessie doesn't know if she starts out facing up (towards cell $(1,2)$) or right (towards cell $(2,1)$). You need to give the shortest sequence of directions that will guide her to the goal regardless of which case is true. Once she reaches the goal, she will ignore further commands.

#### INPUT FORMAT (file cownav.in):

The first line of input contains $N$. Each of the $N$ following lines contains a string of exactly $N$ characters, representing the barn. The first character of the last line is cell $(1,1)$. The last character of the first line is cell $(N,N)$. Each character will either be an H to represent a haybale or an E to represent an empty square. It is guaranteed that cells $(1,1)$ and $(N,N)$ will be empty, and furthermore it is guaranteed that there is a path of empty squares from cell $(1,1)$ to cell $(N,N)$.

#### OUTPUT FORMAT (file cownav.out):

On a single line of output, output the length of the shortest sequence of directions that will guide Bessie to the goal, irrespective of whether she starts facing up or right.
#### SAMPLE INPUT:

3
EHE
EEE
EEE

#### SAMPLE OUTPUT:

9

In this example, the instructions "Forward, Right, Forward, Forward, Left, Forward, Left, Forward, Forward" will guide Bessie to the destination irrespective of her starting orientation.

Problem credits: Brian Dean
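The contest page does not include a solution; as a sketch of the standard approach (my own illustration, not the official analysis), one can run a BFS over the joint state of both starting hypotheses: keep one copy of Bessie that started facing up and one that started facing right, apply each command to both copies simultaneously, and stop when both copies are at the goal.

```python
from collections import deque

def shortest_command_sequence(n, rows):
    """rows[0] is the top row of the barn; start = lower-left, goal = upper-right."""
    goal = (0, n - 1)
    dr, dc = [-1, 0, 1, 0], [0, 1, 0, -1]   # 0=up, 1=right, 2=down, 3=left
    DONE = (goal[0], goal[1], 4)             # absorbing state once the goal is reached

    def step(state, cmd):
        r, c, d = state
        if d == 4:                           # at the goal: further commands are ignored
            return state
        if cmd == 'L':
            d = (d + 3) % 4
        elif cmd == 'R':
            d = (d + 1) % 4
        else:                                # 'F': a blocked move is simply skipped
            nr, nc = r + dr[d], c + dc[d]
            if 0 <= nr < n and 0 <= nc < n and rows[nr][nc] == 'E':
                r, c = nr, nc
        return DONE if (r, c) == goal else (r, c, d)

    start = ((n - 1, 0, 0), (n - 1, 0, 1))   # hypothesis A faces up, hypothesis B right
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == (DONE, DONE):                # both hypotheses have reached the goal
            return dist[s]
        for cmd in 'FLR':
            t = (step(s[0], cmd), step(s[1], cmd))
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return -1  # unreachable; the problem guarantees a path exists

print(shortest_command_sequence(3, ["EHE", "EEE", "EEE"]))
```

The state space has at most $(20 \cdot 20 \cdot 4 + 1)^2$ joint states, so the BFS is comfortably within limits for $N \leq 20$.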
# Questions on PRF

Define three hash functions:

1. $$H_1: \{0, 1\}^* \rightarrow \mathbb{G}$$, mapping $$x$$ into the group $$\mathbb{G}$$ of prime order $$q$$
2. $$H_2: \mathbb{G} \rightarrow \{0, 1\}^\tau$$
3. $$H_3: \{0, 1\}^* \times \mathbb{G} \rightarrow \{0, 1\}^\tau$$

In the random oracle model and under the DDH assumption, I noticed the PRF $$H_2(H_1(x)^k)$$ defined in this paper, where $$k$$ is the PRF key. Besides, I also saw other PRF variants such as $$H_3(x, H_1(x)^k)$$ in PAKE papers and $$H_1(x)^k$$ in Private Set Intersection papers (see also a related previous question). I have the following questions:

1. Is there any formal proof that they are PRFs?
2. If they are all PRFs in the random oracle model under the DDH assumption, what are the differences between the three constructions, in particular between $$H_2(H_1(x)^k)$$ and $$H_3(x, H_1(x)^k)$$?

• I think this needs a little more context to be answerable. For example, what is $H_2$? – Guut Boy Mar 5 at 8:00
• @Guut Boy thanks, have revised accordingly. – D.V. Mar 5 at 8:42

Is there any formal proof that they are PRFs?

In these applications (PSI, PAKE), it's usually not necessary to prove standalone PRF security. Sometimes you can get a secure PSI, for example, from something that is slightly weaker than a PRF (e.g., a PRF for a bounded number of queries). In this case, the constructions are standalone PRFs, and the proof would follow the same logic as the PSI/PAKE security proofs. I can give an idea of the proof for $$H(x)^k$$ (in prime-order cyclic groups):

• Consider a reduction algorithm $$R$$ that takes as input a triple of group elements $$(K,A,B)$$ and does the following: $$R$$ internally runs a PRF adversary and plays the role of the random oracle $$H$$ and the construction $$F(k,x) = H(x)^k$$. Whenever the adversary queries $$H(x)$$, respond with $$A^{r_x}$$; whenever the adversary queries $$F(x)$$, respond with $$B^{r_x}$$. Here $$r_x$$ is uniform for each distinct $$x$$.
• If $$(K=g^k, A=g^a, B=g^{ak})$$, then $$B^{r_x} = (g^{ak})^{r_x} = ((g^a)^{r_x})^k = H(x)^k$$, so the PRF adversary is seeing true outputs of the PRF.
• If $$B$$ is uniform in $$(K,A,B)$$, then each $$B^{r_x}$$ is uniform, so the PRF adversary is seeing random outputs from its PRF oracle.
• The two cases of $$R$$'s inputs $$(K,A,B)$$ are indistinguishable by the DDH assumption; this shows that the PRF construction is secure.

If they are all PRFs in the random oracle model under DDH assumption, what are the differences between the three constructions?

PSI/PAKE applications are interactive protocols. When we need security against malicious adversaries, there must be a simulator that watches what the adversary does and "explains" it (extracting an input to send to the ideal PSI/PAKE functionality on behalf of the adversary). In the random oracle model, the simulator also gets to observe all of the adversary's queries to the random oracle. So the reason to use something like $$H(x, stuff)$$ instead of $$H(stuff)$$ in such a protocol is to help the simulator extract. If $$x$$ is right there (as an input to the random oracle) then the simulator's job is much easier. This is a standard trick in malicious-secure PSI (in the random oracle model).

• Thank you, your answer really helps. I guess another consideration on $H_2$, $H_3$ is that in PAKE they want to obtain a bit-string rather than a group element, e.g., $H_2$, $H_3$ can be Key Derivation Functions for deriving the session key. In PSI, we have no need to do that since we only want to match elements. Towards malicious security, can the simulator extract $x$ from $H_1(x)$ if $H_1$ is modeled as a random oracle? – D.V. Mar 6 at 2:49
• Yes, the simulator can "recognize" $x$ from $H_1(x)$ in the random oracle model. But in PSI/PAKE you often have things like $H_1(x)^r$. If $r$ is random then this value perfectly hides $x$, even from the simulator. – Mikero Mar 6 at 17:09
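The central algebraic fact in the reduction above is that, on a DDH tuple, the simulated oracle answers are consistent: $B^{r_x} = ((g^a)^{r_x})^k = H(x)^k$. This can be checked numerically in a toy group; the sketch below uses the order-11 subgroup generated by $g = 2$ inside $\mathbb{Z}_{23}^*$, with parameters chosen purely for illustration (far too small for any real security):

```python
import random

p, q, g = 23, 11, 2            # 2 has order 11 modulo 23, so <g> has prime order q

k = random.randrange(1, q)     # PRF key (exponent)
a = random.randrange(1, q)     # discrete log of the simulated H-oracle base
K = pow(g, k, p)               # K = g^k
A = pow(g, a, p)               # A = g^a
B = pow(g, a * k % q, p)       # DDH tuple: B = g^{ak}

for _ in range(5):
    r_x = random.randrange(1, q)        # fresh exponent for each distinct query x
    H_x = pow(A, r_x, p)                # simulated random-oracle answer H(x) = A^{r_x}
    F_x = pow(B, r_x, p)                # simulated PRF answer F(x) = B^{r_x}
    # On a DDH tuple the simulation is perfect: F(x) = H(x)^k
    assert F_x == pow(H_x, k, p)
```

When $B$ is instead a uniform group element, each $B^{r_x}$ is uniform (since $r_x \not\equiv 0 \bmod q$ in a prime-order group), which is exactly the "random outputs" branch of the argument.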
# CDash Testing Dashboard

## Background

CMake is a cross-platform build system generator. On UNIX systems, CMake is generally used to generate Unix Makefiles. On Windows systems, CMake is generally used to generate Visual Studio projects. However, CMake is not limited to these generators and has support for Ninja, Xcode, and NMake, among others.

CTest is a component of CMake for testing. It provides a generic syntax for executing tests and features such as timeouts, pass/fail regexes, concurrent execution of tests, and test labeling and grouping, among many others. It is not a replacement for Python unittest, Google Test, etc.; CTest is a framework for executing those tests and, if desired, also testing and logging the build and configuration. For technical issues regarding CMake and CTest, please consult the CMake documentation.

CDash is an open source, web-based software testing server. CDash aggregates, analyzes and displays the results of software testing processes submitted from clients located around the world. Developers depend on CDash to convey the state of a software system, and to continually improve its quality. CDash is typically part of a larger software process that integrates Kitware's CMake, CTest, and CPack tools, as well as other external packages used to design, manage and maintain large-scale software systems. CDash documentation can be found at the ParaView CDash Wiki page.

## Overview

NERSC provides a CDash web server at cdash.nersc.gov, and it is recommended for continuous integration (CI) aggregation across all your CI platforms -- such as Travis, Jenkins, AppVeyor, etc. -- as it greatly simplifies diagnosing CI issues and provides a location to log performance history.

If the project uses CMake to generate a build system, see CDash submission for CMake projects. If the project does not use CMake, NERSC provides Python bindings to CMake/CTest in a project called pyctest.
These bindings allow projects to generate CTest tests and submit to the dashboard regardless of the build system (CMake, autotools, setup.py, etc.). This package is available via PyPI (source distribution) and Anaconda (pre-compiled distributions). For usage, see CDash submission for non-CMake projects and the pyctest documentation.

- Anaconda: `conda install -c conda-forge pyctest`
- PyPI: `pip install -vvv pyctest`

## Features

- Automated emails
    - Build warnings and failures
    - Tests failing
    - Low code coverage
    - Test timing changes
- Capture standard out and standard error from all the stages of building, testing, and deploying the package
- Code coverage reports (in a variety of formats) can be uploaded, visually displayed, and thresholds can be set to notify developers if coverage drops below the threshold
- Memory checking for leaks can be analyzed and reported
- Visual results can be uploaded along with logs
    - e.g. if a test generates a visual product, by echoing a message such as `<DartMeasurementFile name="ExampleImage" type="image/jpeg">./example_image.jpeg</DartMeasurementFile>` to stdout, the `example_image.jpeg` will be uploaded to the dashboard and displayed in the test log
- ASCII results can be attached as "Notes" to the "build"
- Three primary submission tracks
    - Continuous
    - Nightly
    - Experimental
- Ability to create subprojects
- Public and private dashboards
- Token-authenticated submission
- Information on the submission site's platform automatically recorded
    - Number of CPU cores
    - Processor speed
    - Operating system
    - etc.

## Configuration

The dashboard client configuration has 8 possible steps, many of which are optional:

- CTest Start Step
    - This step is required and typically involves obtaining the source code
    - The command executed in this step is set by the definition of `CTEST_CHECKOUT_COMMAND`
        - e.g. `set(CTEST_CHECKOUT_COMMAND "git clone https://github.com/jrmadsen/pyctest.git pyctest-src")`
    - This step is invoked via `ctest_start(...)`
      [see CTest documentation for `ctest_start`]
- CTest Update Step
    - This step is optional and typically involves changing the branch, applying a patch, etc., when used by setting the `CTEST_UPDATE_COMMAND` variable
    - This step is automatically invoked by `ctest_start` when `CTEST_UPDATE_COMMAND` is defined
- CTest Configure Step
    - This step is optional and typically involves running cmake or an autotools configure script when the package involves compiled code
    - The command executed in this step is set by the definition of `CTEST_CONFIGURE_COMMAND`
    - The log for this step, including the list of warnings and errors, can be found in the Configure subsection of the Build section of the dashboard
- CTest Build Step
    - This step is optional and typically involves invoking the build program and compiling your code
    - The command executed in this step is set by the definition of `CTEST_BUILD_COMMAND`
    - The log for this step, including the list of warnings and errors, can be found in the Build subsection of the Build section of the dashboard
- CTest Test Step
    - This step is optional and invokes running all the tests provided by `CTestTestfile.cmake` in the binary directory
    - The top-level `CTestTestfile.cmake` should add any subdirectories containing additional `CTestTestfile.cmake` files -- e.g. `subdirs("examples")` to add the tests defined in `examples/CTestTestfile.cmake`
- CTest Coverage Step
    - This step is optional and invokes a user command followed by a search for code coverage files -- such as Python trace `*.cover` files, Cobertura `coverage.xml` files, GCov results, etc.
    - The command executed in this step is set by the definition of `CTEST_COVERAGE_COMMAND`
- CTest MemCheck Step
    - This step is optional and typically involves memory checking with tools such as valgrind
- CTest Submit Step
    - This step is required and uploads the output of the previous steps to the CDash server

## Using CDash with CMake

CTest + CDash is easily integrated into an existing CMake build system.
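The configuration steps described above are usually tied together in a single CTest "dashboard script" executed with `ctest -S`. A minimal sketch follows; the site name, build name, and directory layout are placeholders, and the drop-site settings normally come from the project's `CTestConfig.cmake`:

```cmake
# dashboard.cmake -- run with: ctest -S dashboard.cmake
# (site, build name, and paths below are illustrative placeholders)
set(CTEST_SITE             "my-machine")
set(CTEST_BUILD_NAME       "linux-gcc-example")
set(CTEST_SOURCE_DIRECTORY "${CTEST_SCRIPT_DIRECTORY}/src")
set(CTEST_BINARY_DIRECTORY "${CTEST_SCRIPT_DIRECTORY}/build")
set(CTEST_CMAKE_GENERATOR  "Unix Makefiles")

ctest_start(Experimental)   # required: begins the submission, picks the track
ctest_configure()           # optional: runs CMake (or CTEST_CONFIGURE_COMMAND)
ctest_build()               # optional: compiles the project
ctest_test()                # optional: runs tests from CTestTestfile.cmake
ctest_submit()              # required: uploads the results to the CDash server
```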
For using CTest + CDash with a CMake build system, refer to the CTest/CDash with CMake documentation page.

## Using CDash without CMake

CTest + CDash can be utilized without a CMake build system. For using CTest + CDash without a CMake build system, refer to the CTest/CDash without CMake documentation page.

## Dashboard Registration

- Click on the Register button in the top-left corner
- Fill out the form
- Click Register
- Confirm the email account
- Subscribe to any existing projects as desired

### Create Project

- Click on Create new project once you have navigated to the My CDash page
- Fill out the project information pages and then click the Create Project button in the Miscellaneous tab

### Project Settings

Once the project has been created, these settings can be updated via Settings > Project. Here you can download the `CTestConfig.cmake` file for the project and also provide a CTest template script under the Clients tab.
### Geothermal's blog

By Geothermal, history, 11 days ago

# A — Calc

Just directly print the given sum.

Time Complexity: $O(1)$. Click here for my submission.

# B — Minor Change

The answer is simply the number of positions $i$ for which $S[i] \neq T[i]$. This is because we must make a move fixing each of those positions, and each move can only make $S[i] = T[i]$ for one new position. Thus, we can iterate through the positions and count those where $S$ and $T$ differ, printing the total as our answer.

Time Complexity: $O(|S|)$. Click here for my submission.

# C — Tsundoku

We apply two pointers. Iterate over the number of books to be read from desk A in decreasing order, starting with the greatest number of those books we could read without exceeding $K$ minutes. Maintain the sum of all books to be read from desk A. Then, while we can do so without exceeding $K$ minutes, add the next book from desk B to the list of books to be read. We can then consider the total number of books read between desk A and desk B as a possible answer, decrement the number of books from desk A, and continue.

The basic intuition is that as we decrease the number of books we read from desk A, the number of books we read from desk B monotonically increases. By processing the number of books to be read from desk A in order, we avoid having to recompute any sums, achieving a time complexity of $O(N+M)$.

Time Complexity: $O(N+M)$. Click here for my submission.

# D — Sum of Divisors

Let's reframe the problem by considering the total contribution each divisor makes to the answer. For any given $i$, $i$ contributes $K$ to the sum for each $K$ from $1$ to $N$ such that $K$ is a multiple of $i$. Notice that the list of these $K$ is of the form $i$, $2i$, $3i$, ..., $\lfloor \frac{N}{i} \rfloor i$, noting that the last term is the greatest multiple of $i$ not exceeding $N$.
For simplicity, let $M = \lfloor \frac{N}{i} \rfloor.$ Then, the sum of this sequence is equal to $i + 2i + 3i + \cdots + Mi = i (1 + 2 + \cdots + M) = \frac{iM(M+1)}{2}.$ Thus, by computing $M$ and the above product, we can compute each divisor's contribution to the answer in $O(1)$, giving us an $O(N)$ solution.

Time Complexity: $O(N)$. Click here for my submission.

# E — NEQ

First, let's count the number of ways to choose $A$, irrespective of $B$. There are $\dbinom{M}{N}$ ways to choose the $N$ integers to appear in $A$ and $N!$ ways to arrange them, so we have $\dbinom{M}{N} N!$ choices for $A$ in total.

Now, without loss of generality, assume the sequence $A$ is simply $1, 2, \cdots, N$, since the number of choices for $B$ doesn't depend on our sequence $A$. We want to count the number of sequences $B$ where there is no position $i$ such that $B_i = i$. We do this using the principle of inclusion and exclusion. By PIE, we know that the number of sequences $B$ where $B_i \neq i$ for all $i$ is equal to the sum over all valid $j$ of $(-1)^j$ times the number of ways to create a sequence $B$ where at least $j$ positions $i$ have $B_i = i$.

Thus, it remains to count the sequences $B$ with at least $j$ equal positions. There are $\dbinom{N}{j}$ ways to choose $j$ positions where $B_i = i$. Then, it doesn't matter what we put in the remaining positions, so there are $\dbinom{M-j}{N-j}$ ways to choose the numbers and $(N-j)!$ ways to arrange them. Thus, the total number of ways to ensure that at least $j$ positions have $B_i = i$ is $\dbinom{N}{j} \dbinom{M-j}{N-j} (N-j)!$.

By precomputing all relevant factorials and their modular inverses in $O(M)$, we can compute the above product in $O(1)$ for each $j$. We then sum these up using our PIE formula, giving us an $O(N+M)$ solution. Since $N \leq M$, this is simply $O(M)$.

Time Complexity: $O(M)$. Click here for my submission.
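As a concrete check of the PIE formula for E, here is a short Python sketch (my own illustration; the modulus is the problem's $998244353$) that computes $\binom{M}{N} N! \sum_{j=0}^{N} (-1)^j \binom{N}{j} \binom{M-j}{N-j} (N-j)!$ and compares it against a brute force over all pairs of injective sequences for small inputs:

```python
from itertools import permutations

MOD = 998244353  # the problem's modulus

def solve(n, m):
    # factorials and inverse factorials up to m, all mod MOD
    fact = [1] * (m + 1)
    for i in range(1, m + 1):
        fact[i] = fact[i - 1] * i % MOD
    inv_fact = [1] * (m + 1)
    inv_fact[m] = pow(fact[m], MOD - 2, MOD)
    for i in range(m, 0, -1):
        inv_fact[i - 1] = inv_fact[i] * i % MOD

    def comb(a, b):
        if b < 0 or b > a:
            return 0
        return fact[a] * inv_fact[b] % MOD * inv_fact[a - b] % MOD

    # PIE over the number j of positions forced to satisfy B_i = i
    ways_b = 0
    for j in range(n + 1):
        term = comb(n, j) * comb(m - j, n - j) % MOD * fact[n - j] % MOD
        ways_b = (ways_b + (-1) ** j * term) % MOD
    # multiply by the number of choices for A
    return comb(m, n) * fact[n] % MOD * ways_b % MOD

def brute(n, m):
    # all injective length-n sequences over {1, ..., m}
    seqs = list(permutations(range(1, m + 1), n))
    return sum(1 for a in seqs for b in seqs
               if all(x != y for x, y in zip(a, b)))

assert all(solve(n, m) == brute(n, m)
           for m in range(1, 6) for n in range(1, m + 1))
```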
# F — Unfair Nim

It is well known that the second player wins a game of Nim if and only if the bitwise XOR of the numbers of stones in each pile is equal to $0$. Thus, we want to make the equation $A_1 \wedge A_2 \wedge A_3 \wedge \cdots \wedge A_n = 0$ hold. By taking the XOR of this equation with $A_3 \wedge A_4 \wedge \cdots \wedge A_n$, we have $A_1 \wedge A_2 = A_3 \wedge A_4 \wedge \cdots \wedge A_n.$ We can immediately compute the right-hand side, since it is fixed before we make our moves. Now, we want to find the largest $x$ between $1$ and $A_1$, inclusive, such that $x \wedge (A_1 + A_2 - x) = A_3 \wedge A_4 \wedge \cdots \wedge A_n.$

We build $x$ via digit DP, determining the bits of $x$ from the most significant bit to the least significant bit. Let $a$ be the maximum possible value of $x$ thus far such that we have found a bit in which $x$ contains a $0$ while $A_1$ contains a $1$ (so, we know that no matter what bits we add to $a$ in the future, it will be less than $A_1$), or $-\infty$ if no such value exists thus far. Let $b$ be the value of $x$ such that thus far, all bits in $b$ are the same as those in $A_1$, or $-\infty$ if creating this value is impossible. Note that we cannot add a $1$ to $b$ if the corresponding bit in $A_1$ is a $0$, as this would make $b$ too large. Let $S = A_3 \wedge A_4 \wedge \cdots \wedge A_n.$

If $S$ contains a $0$ in any given bit, we know that $x$ and $A_1 + A_2 - x$ must either both contain that bit or both not contain it. We can tell which case applies by determining whether, after subtracting out all bits we've assigned so far, $A_1 + A_2$ is at least $2^{k+1}$, where $k$ is the index of the current bit. This is because the sum of all future bits will be at most $2^{k+1} - 2$, so $A_1 + A_2 \geq 2^{k+1}$ both enables us to add two $1$s at position $k$ and forces us to do so. If we add a bit to both numbers, we add $2^k$ to $a$ and $b$, but set $b$ to $-\infty$ if $A_1$ contains a $0$ in position $k$.
Otherwise, we don't add anything to either $a$ or $b$, but if $A_1$ contains a $1$ in position $k$, we can set $a$ to $\max(a, b)$, since now we've found a position in which $x < A_1$.

If $S$ contains a $1$ in any given bit, we know that exactly one of $x$ and $A_1 + A_2 - x$ must contain that bit. Thus, we can add $2^k$ to $a$, because this will not make $a$ exceed $A_1$. If $A_1$ contains a $1$ in position $k$, we can also add $2^k$ to $b$, or we can alternatively assign a $0$ in order to guarantee that $x < A_1$, allowing us to set $a$ to $\max(a, b)$.

At the end of this process, $a$ and $b$ are our possible values for $x$. Of course, if $a$ and $b$ are zero or $-\infty$, they can't work (the zero case fails because we cannot move every stone from the first pile). Moreover, we also must confirm that $a \wedge (A_1 + A_2 - a) = S$ in order to use $a$, and we must check the same equation for $b$. This is because the total bits we assign throughout our process may not equal $A_1 + A_2$, since the number of bits we assign is largely determined by $S$. (One counterexample that demonstrates this issue is $N=3, A = 7, 5, 6$: our solution assigns one bit in position $2$, one bit in position $1$, and two bits in position $0$, but the total of these bits is $4 + 2 + 2 = 8 < 7+5.$)

Then, we take the largest of these possible values for $x$ and subtract it from $A_1$ to get our answer.

Time Complexity: $O(N + \log A_i).$ Click here for my submission.

• +256

» 11 days ago, # |   0

Auto comment: topic has been updated by Geothermal (previous revision, new revision, compare).

» 11 days ago, # |   0

for D my running time was around 1300ms, yours 140ms. great, learned something new, thanks.

» 11 days ago, # |   +30

Writing the editorial would have taken more time than solving all the problems for you.

» 11 days ago, # |   0

For E: NEQ it should be $\binom{M}{N}(N!)$, which happens to be the same as $^{M}P_{N}$

• » » 11 days ago, # ^ |   0

Fixed.
» 11 days ago, # |   +1

"We do this using the principle of inclusion and exclusion. By PIE, we know that the number of sequences B where Bi≠i for all i is equal to the sum over all valid j of (−1)^j times the number of ways to create a sequence B where at least j positions i have Bi=i."

Can somebody explain this line, as to why we are multiplying by (−1)^j?

• » » 11 days ago, # ^ |   0

Geothermal Can you help? or provide any resource?

• » » » 10 days ago, # ^ |   +3

Refer to this great article.

• » » 11 days ago, # ^ |   0

We know the number of ways of building A. Now here we consider how many ways to build B for a single choice of A. Then the answer will be (number of ways to build A) * (number of ways to build B). So for building B: first we generate all the ways of filling array B using the M numbers. That means we calculate the number of ways where the count of positions with B(i) = i is (0, 1, 2, 3, ..., n). But we need only the ways where the number of positions with B(i) = i is 0. So now we have to remove the ways where the number of positions with B(i) = i is (1, 2, 3, ..., n), but in doing so we delete the ways where B(i) = i (2, 3, ..., n times) twice. So we have to add them back... For this reason (−1)^j is used here. Consider my poor English; for better understanding, generate all the sequences on paper for small n and m (2, 3).

• » » » 11 days ago, # ^ |   0

Thanks..!

• » » 10 days ago, # ^ |   0

First refer to this: https://brainly.in/question/9032911. If you fix the value of A, then by the above formula you need to subtract, say, a group of 5 same elements in B only once.

» 11 days ago, # |   0

thanks @geothermal. btw, where can I practice more problems like E?

• » » 11 days ago, # ^ |   0

Most ABCs have at least one combinatorics problem; they’re probably the best source if you want to practice lots of similar problems.

• » » 11 days ago, # ^ |   +2

I guess you might try solving the combinatorics and inclusion-exclusion problems from this blog. Not necessarily similar to E but high-quality problems.
» 11 days ago, # |   0 since the number of choices for B doesn't depend on our sequence A.Geothermal Can you elaborate it more please ? • » » 11 days ago, # ^ |   0 Choices for B is same irrespective of A. say A is [1,2,3] or [4,3,1] , then no. of choices for B for both of them stays the same. • » » 11 days ago, # ^ | ← Rev. 2 →   0 In the second step, $B_i=i$ means $B_i=A_i$. So no matter what $A$ looks like, we can always satisfy $B_i=A_i$. » 11 days ago, # |   0 Can somebody explain the solution for problem E, I am not able to understand it. • » » 16 hours ago, # ^ |   0 https://www.youtube.com/watch?v=qYAWjIVY7Zw&t=102s watch this for intuition... » 11 days ago, # |   +1 @Geothermal why #pragma GCC optimize ("O3") #pragma GCC target ("sse4") are used? » 11 days ago, # |   0 for problem c,if we iterste over two arrays by considering the min value of two pointers ,y is it giving wrong answer? • » » 11 days ago, # ^ |   0 consider 2 arrays and k=120 arr1 = [60,120], arr2=[60,20,30] answer is 3 (60+20+30). then find your answer.we don't know where to go actually when both the values are equal. » 11 days ago, # |   +6 F could be easily solved by using https://www.geeksforgeeks.org/find-two-numbers-sum-xor/ . we just make a number by using only fixed bits and mark the optional bits in a boolean array .. and then we can greedily put those optional values to achieve the answer. • » » 11 days ago, # ^ |   0 But Dp solution is very cool • » » » 11 days ago, # ^ |   0 Can you please explain dp solution. I am finding it hard to understand, especially 3rd and 4th paragraph. Thanks! • » » 11 days ago, # ^ |   0 Thanks. • » » 10 days ago, # ^ | ← Rev. 2 →   0 In your submission to this problem, you used one condition, could you please tell me why did you check for this condition.. ~~~~~ S=a[0]+a[1]; x=totalxor^a[0]^a[1]; if(s%2!=x%2) answer=-1; ~~~~~ what role does parity of sum and XOR plays in this question? » 11 days ago, # | ← Rev. 
2 →   -19 Can someone tell me why my solution is wrong? #include using namespace std; #define ll long long #define pb push_back #define speed ios_base::sync_with_stdio(0); cin.tie(0); #define FL(i,a,n) for(ll i=a;i>n>>m; ll k;cin>>k; vector st,stt; for(ll i=0;i>xx; st.pb(xx); } for(ll i=0;i>xx; stt.pb(xx); } st.pb(100000000000000000); stt.pb(100000000000000000); ll cnt=0; ll i=0,j=0; while(k>0) { cnt++; //cout< • » » 11 days ago, # ^ |   0 You misused the code block feature. Your code has to be inside the ~~~~ delimiters. • » » 11 days ago, # ^ |   0 I think this code is for task C? » 11 days ago, # |   0 I didn't understand the explanation of problem D. My approach was finding primes and then the number of divisors of that prime: I iterated over all powers of each prime and incremented the stored divisor count for each multiple. I did this up to 10^7, and for the remaining values I used prime factorization to find the number of divisors, but the solution gives TLE. How can I reduce the running time with some changes to the code? • » » 10 days ago, # ^ |   +8 "...and then the number of divisors of that prime." Can't be a lot ;) » 11 days ago, # | ← Rev. 2 →   0 For E, why is it assumed that A and B both have the same set of elements? Say m > 2n; we could choose completely disjoint sets of elements for the two sequences, with N!·N! possible solutions. I am sure I have misinterpreted something but cannot figure out what. :-( • » » 11 days ago, # ^ |   +1 Where did you see that it is assumed? • » » 11 days ago, # ^ |   +11 That's not assumed. Do you understand what we're doing PIE over? I can phrase it in a different way: take WLOG $A_i = i$ for $1 \le i \le n$ as in the editorial above. Then, let $S_i$ be the set of sequences for $B$ satisfying the second condition ($B_i \neq B_j$ when $i \neq j$) and with $B_i=i$. Then clearly we want to work out ${M \choose N} N!
- |S_1 \cup S_2 \cup \cdots \cup S_N|$ as this is the number of valid $B$ sequences (assuming $A_i=i$ for all $i$). We can work this out via PIE (since the sizes of the intersections of the various $S_i$ sets are easy to work out), and then we just multiply by ${M \choose N} N!$ as that's the number of possible $A$ sequences. » 11 days ago, # |   0 I feel C wasn't quite clear. • » » 10 days ago, # ^ | ← Rev. 2 →   0 I solved it using a different method. First, calculate the prefix sums of both arrays. Now assume that we can take $X$ books in total from desks A and B. How do we check if that's possible? We know that we take the books as some prefix of A and some prefix of B. So the possibilities are: take ($0$ books from A, $X$ books from B), ($1$ book from A, $X-1$ books from B), ($2$ books from A, $X-2$ books from B), ..., ($X$ books from A, $0$ books from B). If the sum of any of those combinations is $\leq K$, then we know it's possible to take $X$ books. Now we simply find the maximum $X$ by binary search. We can binary search on $X$ because if we can take $X$ books, we can also take $X-1$ books. Time complexity is $O((N+M)\log(N+M))$. My submission: https://atcoder.jp/contests/abc172/submissions/14805133 » 11 days ago, # |   +3 E) Anyone having a problem understanding the inclusion/exclusion principle for derangements can have a look at the linked article. The last section is "Derangement for N elements", which you can further generalize to M objects and N places (M >= N), such that none of the N objects is at its respective position, after which you might be able to easily understand the given editorial. Thanks! » 11 days ago, # | ← Rev. 2 →   0 In problem E, how is the precomputation of inverse factorials done in O(M)? Isn't it O(M log M)? • » » 11 days ago, # ^ | ← Rev.
2 →   0 I did: // inv_fac(maxn) = (fac(maxn)) ^ (-1) inv_fac[maxn] = modinverse(fac[maxn], MOD); // inv_fac(i) = inv_fac(i + 1) * (i + 1), since inv_fac(i) / (i + 1) = inv_fac(i + 1) for(int i = maxn - 1; i >= 0; --i) inv_fac[i] = mul(inv_fac[i + 1], i + 1); This is O(log(maxn) + maxn) = O(maxn): one modular inverse, then a linear backward sweep. • » » » 11 days ago, # ^ | ← Rev. 3 →   0 Cool idea, thank you. But I think Geothermal's code precomputes the inverse factorials in O(M log M). • » » » » 11 days ago, # ^ |   0 We can do that in O(maxn) without the log factor too. • » » 10 days ago, # ^ |   +4 The complexity of my code is $O(M \log MOD)$, since the complexity of evaluating modular inverses is logarithmic with respect to the modulus; since $MOD$ is a constant, $\log MOD$ does not contribute to the asymptotic complexity of the algorithm. (It adds to the constant factor, of course, but not nearly enough to cause TLE.) » 11 days ago, # |   0 It's kind of sad that AtCoder doesn't have a social platform even to post editorials like this, so that they have to do it on Codeforces. • » » 11 days ago, # ^ |   +11 Maybe I am misunderstanding something, but AtCoder does provide editorials. It's in Japanese right now, and will be translated and uploaded in 2-3 days. If you meant discussion, then I think it's a good decision: most of the CP community already communicates on CF, and putting effort into a separate blog (which would not even see huge usage) is frivolous. » 11 days ago, # |   0 Love your editorials !! » 11 days ago, # |   0 I am curious how one would solve problem C if we had 3 desks of books? » 10 days ago, # |   0 In F — Unfair Nim, how do we know that for the second player to win, the bitwise XOR of all A[i] must be zero? • » » 9 days ago, # ^ |   0 Quick definition: xor-sum means the xor of all $A_i$. Whatever move the first player makes, the xor-sum will become nonzero, and the second player can always make the xor-sum 0 again.
Finally both the xor-sum and the sum will become zero, so the second player wins. • » » » 9 days ago, # ^ |   0 I get this partially, but how can one prove that if the xor-sum is not zero initially, then the second player cannot win? And what are the optimal moves for each player? • » » » » 8 days ago, # ^ |   0 If the xor-sum is not zero, the first player can make it zero. The optimal move is to find a pile whose size has a 1 in the highest set bit of the xor-sum, and make the corresponding change to it. » 10 days ago, # |   0 Geothermal Your approach to E is absolutely clear. In AtCoder's editorial, they have done this question using a different approach which isn't quite clear. Could you please explain that? Thanks • » » 10 days ago, # ^ |   0 What is the point you don't understand? • » » » 9 days ago, # ^ |   0 What does |s| stand for, and what is s? Also, how did we arrive at ${}^{M}P_{s} \cdot ({}^{M-s}P_{N-s})^2$? » 9 days ago, # |   0 Can someone please explain question D to me? » 9 days ago, # |   0 In that part of problem E, $\dbinom{N}{j} \dbinom{M-j}{N-j} (N-j)!$ is actually not exactly the number of ways to make at least $j$ positions equal, because it overcounts. But I can't find a better way to interpret it. I think someone can provide a better interpretation so it will be easier to understand. • » » 8 days ago, # ^ |   +2 He tries to find all permutations of array A, then all permutations of B, then counts the derangements using inclusion-exclusion. You can see this for more explanation. » 8 days ago, # |   0 I am still not clear on why (-1)^j is used » 8 days ago, # |   +2 Can anyone explain the idea in the case n = m = 3 • » » 8 days ago, # ^ |   +2 I traced it; here is an explanation for it. Array A = all permutations of 1 2 3. Array B = [2 3 1, 3 1 2] » 3 days ago, # |   0 Please give an English editorial for the 173 contest!
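The xor-sum claims in the Unfair Nim discussion above are the standard Nim theorem: the second player wins under optimal play exactly when the xor-sum of the pile sizes is 0. A sketch (function names are illustrative) verifying this by brute-force game search on small piles:

```python
from functools import lru_cache

def second_player_wins(piles):
    # Nim theorem: the player to move is losing exactly when
    # the xor-sum of the pile sizes is 0.
    x = 0
    for p in piles:
        x ^= p
    return x == 0

@lru_cache(maxsize=None)
def first_player_wins(piles):
    # Brute force: a position is winning if some move (taking >= 1
    # stones from one pile) leaves the opponent in a losing position.
    for i, p in enumerate(piles):
        for take in range(1, p + 1):
            rest = tuple(sorted(piles[:i] + (p - take,) + piles[i + 1:]))
            if not first_player_wins(rest):
                return True
    return False
```

Checking every position with three piles of at most 4 stones confirms that the game-tree result and the xor-sum test always agree.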
Q13P Found in: Page 896 ### Fundamentals Of Physics Book edition 10th Edition Author(s) David Halliday Pages 1328 pages ISBN 9781118230718 # One hundred turns of (insulated) copper wire are wrapped around a wooden cylindrical core of cross-sectional area $1.20 \times 10^{-3}\ \mathrm{m}^2$. The two ends of the wire are connected to a resistor. The total resistance in the circuit is $13.0\ \Omega$. If an externally applied uniform longitudinal magnetic field in the core changes from 1.60 T in one direction to 1.60 T in the opposite direction, how much charge flows through a point in the circuit during the change? The charge flowing through a point in the circuit during the change in magnetic field is $q(t) = 2.95 \times 10^{-2}\ \mathrm{C}$. ## Step 1: Given 1. Cross-sectional area, $A = 1.20 \times 10^{-3}\ \mathrm{m}^2$ 2. Resistance, $R = 13.0\ \Omega$ 3. Magnetic field at the start, $B(0) = 1.60\ \mathrm{T}$ 4. Magnetic field at the end, in the opposite direction, $B(t) = -1.60\ \mathrm{T}$ ## Step 2: Determining the concept The magnetic flux $\phi_B$ through area A in a magnetic field is related to the induced emf by Faraday’s law of induction. If the magnetic field $\vec{B}$ is perpendicular to the area A, substitute the given values into the formula to find the charge. Faraday's law of electromagnetic induction states that whenever a conductor is placed in a varying magnetic field, an electromotive force is induced in it. The formula is: $\text{emf} = -N\frac{d\phi}{dt}$ where $d\phi$ is the change in magnetic flux, N is the number of turns, and dt is the time interval.
## Step 3: Determining the charge flowing through a point in the circuit during the change in magnetic field $\text{emf} = -N\frac{d\phi}{dt}$ From Ohm’s law, $\text{emf} = iR$ and $i = dq/dt$, so $iR = -N\frac{d\phi}{dt}$ $\frac{dq}{dt} = -\frac{N}{R}\frac{d\phi}{dt}$ Integrating with respect to time from 0 to t, $q(t) = \frac{N}{R}\left[\phi_B(0) - \phi_B(t)\right]$ According to Faraday’s law, if the magnetic field $\vec{B}$ is perpendicular to the area A, then $\phi_B = BA$ Therefore, $q(t) = \frac{NA}{R}\left[B(0) - B(t)\right]$ $q(t) = \frac{100 \times (1.20 \times 10^{-3}\ \mathrm{m}^2)}{13.0\ \Omega}\left[1.60\ \mathrm{T} - (-1.60\ \mathrm{T})\right]$ $q(t) = 100 \times (0.0923 \times 10^{-3}) \times 3.20$ $q(t) = 2.95 \times 10^{-2}\ \mathrm{C}$ Hence, the charge flowing through a point in the circuit during the change in magnetic field is $q(t) = 2.95 \times 10^{-2}\ \mathrm{C}$. Therefore, the charge flow through a point in the circuit during the change in the magnetic field can be found by using Faraday’s law.
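As a numerical check of the result above, the final formula $q = (NA/R)\,[B(0) - B(t)]$ can be evaluated directly (a short sketch; variable names are mine):

```python
# Charge pushed through the circuit by the field reversal,
# using the values given in the problem.
N = 100                        # number of turns
A = 1.20e-3                    # cross-sectional area, m^2
R = 13.0                       # total circuit resistance, ohm
B_start, B_end = 1.60, -1.60   # longitudinal field before/after reversal, T

q = (N * A / R) * (B_start - B_end)   # charge in coulombs
print(q)  # ~0.0295 C
```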
# Two speakers 1. Apr 7, 2008 ### amatol 1. The problem statement, all variables and given/known data https://tycho-s.phys.washington.edu/cgi/courses/shell/common/showme.pl?courses/phys123/spring08/homework/01b/twospeaker_INT/02.05.gif [Broken] (if the image doesn't show up: it is a right-angle triangle with S1 at the right angle, 4 m between it and O, and 3 m between S1 and S2) The two speakers at S1 and S2 are adjusted so that the observer at O hears an intensity of 4 W/m2 when either S1 or S2 is sounded alone. They are driven in phase (at the speakers) with various frequencies of sound. Assume that the speed of sound is 325 m/s. a) Find the three lowest frequencies, f1 < f2 < f3, for which the observer at O will hear an intensity of 16 W/m2 when both speakers are on. b) Find the three lowest frequencies, f1 < f2 < f3, for which the observer at O will hear no sound when both speakers are on. c) Find the lowest frequency for which the observer at O will hear an intensity of 8 W/m2 when both speakers are on. d) Find the lowest frequency for which the observer at O will hear an intensity of 4 W/m2 when both speakers are on. 2. Relevant equations I think these: $f = v/\lambda$, Intensity = energy/(time × area) I also need to figure out the phase difference, but I'm not sure of the formula. 3. The attempt at a solution So far I know that I need to find the path length difference between the two speakers and the observer, which is 1 m. From that I need to find the phase difference, which will give me the intensity. How to do this I am clueless on. Then I need to find the wavelength from the difference between the two, and from that I can find the frequency. (At least this is what I am thinking... but it's not working! And I can't figure out the phase difference.) Any help appreciated! Last edited by a moderator: May 3, 2017
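Not an official answer, but the standard relations for two equal, in-phase sources — phase difference φ = 2πf·Δx/v and combined intensity I = 4·I₁·cos²(φ/2), with path difference Δx = 5 m − 4 m = 1 m from the 3-4-5 triangle — can be sketched as:

```python
import math

v = 325.0        # speed of sound, m/s
dx = 5.0 - 4.0   # path difference: hypotenuse (5 m) minus direct leg (4 m)
I1 = 4.0         # intensity from one speaker alone, W/m^2

def intensity(f):
    phi = 2 * math.pi * f * dx / v      # phase difference at O
    return 4 * I1 * math.cos(phi / 2) ** 2

# (a) fully constructive (16 W/m^2): dx = n*lambda  ->  f = n*v/dx
constructive = [n * v / dx for n in (1, 2, 3)]
# (b) fully destructive (no sound): dx = (n + 1/2)*lambda
destructive = [(n + 0.5) * v / dx for n in (0, 1, 2)]
# (c) 8 W/m^2: cos^2(phi/2) = 1/2  ->  phi = pi/2   ->  f = v/(4*dx)
f_c = v / (4 * dx)
# (d) 4 W/m^2: cos^2(phi/2) = 1/4  ->  phi = 2*pi/3 ->  f = v/(3*dx)
f_d = v / (3 * dx)
```

With these assumptions the lowest constructive frequency is 325 Hz and the lowest destructive one is 162.5 Hz.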
## Calculate the amount that has to be returned. principal = 10000, t = 3 years, S.I. = 2% on every 100 rupees, monthly Question Calculate the amount that has to be returned. principal = 10000, t = 3 years, S.I. = 2% on every 100 rupees, monthly 2021-08-02T04:38:17+00:00 · 2 Answers 1. S.I. = (p × t × r)/100 Step-by-step explanation: try this formula; you don't need to do it monthly 2. Step-by-step explanation: First, converting R percent to a decimal r: r = R/100 = 2%/100 = 0.02 per year. Solving our equation: A = 10000(1 + (0.02 × 3)) = 10600 A = 10,600.00 The total amount accrued, principal plus interest, from simple interest on a principal of 10,000.00 at a rate of 2% per year for 3 years is 10,600.00. Hope my answer helps you :))
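The two possible readings of the rate can be compared with a small sketch (the question says "monthly", while answer 2 treats the 2% as per year; names below are mine):

```python
def simple_interest_amount(principal, rate_percent_per_period, periods):
    # A = P * (1 + r * t), simple interest with rate r per period
    return principal * (1 + rate_percent_per_period / 100 * periods)

# 2% per year for 3 years (answer 2's reading):
a_yearly = simple_interest_amount(10000, 2, 3)
# 2% per month for 3 years = 36 months (the literal "monthly" reading):
a_monthly = simple_interest_amount(10000, 2, 36)
```

The yearly reading gives 10,600; taken literally as 2% per month, the amount would instead be 17,200.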
Hey Michael, I think there might be some errors in your proof: > We are given: > $\Phi : Ob(\mathcal{X^{op}} \times \mathcal{Y}) \rightarrow Ob(\mathcal{V})$ > $\Psi : Ob(\mathcal{Y^{op}} \times \mathcal{Z}) \rightarrow Ob(\mathcal{V})$ > > First we compose the two: > > $\Psi\Phi : Ob(\mathcal{Y^{op}} \times \mathcal{Z}) \rightarrow Ob(\mathcal{X^{op}} \times \mathcal{Y}) \rightarrow Ob(\mathcal{V})$ I don't think composition works this way. We can [flip](https://hackage.haskell.org/package/base-4.11.1.0/docs/Prelude.html#v:flip) the order of arguments in a function and [curry](https://hackage.haskell.org/package/base-4.11.1.0/docs/Prelude.html#v:curry) them as you say. But I don't see how we can apply transitivity in the following step: > $\Psi\Phi : \mathcal{Z^{op}} \rightarrow \mathcal{Y} \rightarrow \mathcal{Y^{op}} \rightarrow \mathcal{X} \rightarrow Ob(\mathcal{V})$ > > By transitivity: > > $\Psi\Phi : \mathcal{Z^{op}} \rightarrow \mathcal{X} \rightarrow Ob(\mathcal{V})$ Others may correct me, but I don't think we have to work very hard to prove \$$\Psi\Phi: Ob(\mathcal{X^{op}} \times \mathcal{Z}) \rightarrow Ob(\mathcal{V})\$$. We already know the action of \$$\Psi\Phi\$$ on objects: \$$(x,z) \mapsto \bigvee_{y \in Y} \Phi(x,y) \otimes \Psi(y,z)\$$. If we were in a programming language, like Coq, then the computer could infer this mapping has the type \$$\Psi\Phi \colon \mathcal{X}^{\text{op}} \times \mathcal{Z} \to \mathcal{V} \$$ for us.
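For what it's worth, the object-level formula above can be sketched in code (Python rather than Haskell; all names here are mine): the composite profunctor is a join of tensors over the middle objects, and with the Bool quantale (join = or, tensor = and) it reduces to ordinary relation composition.

```python
def compose_prof(phi, psi, ys, join, tensor, bottom):
    """(Psi Phi)(x, z) = join over y in ys of tensor(phi(x, y), psi(y, z))."""
    def composite(x, z):
        acc = bottom
        for y in ys:
            acc = join(acc, tensor(phi(x, y), psi(y, z)))
        return acc
    return composite

# Bool quantale: profunctors are relations, composition is relation composition.
r = {(1, 'a'), (2, 'b')}        # Phi, a subset of X x Y
s = {('a', 'u'), ('b', 'v')}    # Psi, a subset of Y x Z
phi = lambda x, y: (x, y) in r
psi = lambda y, z: (y, z) in s
rs = compose_prof(phi, psi, ys=['a', 'b'],
                  join=lambda p, q: p or q,
                  tensor=lambda p, q: p and q,
                  bottom=False)
```

Here `rs(1, 'u')` holds because 1 relates to 'a' and 'a' relates to 'u', exactly the \(\bigvee_y\) formula specialized to Bool.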
Interventional catheter ablation treatment is a minimally invasive approach for normalizing heart rhythm in patients with arrhythmia. Catheter ablation can be assisted with magnetic resonance imaging (MRI) to provide high-contrast images of the heart vasculature for diagnostic and intraprocedural purposes. Typical MRI images are captured using surface imaging coils that are external to the tissue being imaged. The image quality and the scanning time required for producing an image are directly correlated to the distance between the tissue being imaged and the imaging coil. The objective of this work is to minimize the spatial distance between the target tissue and the imaging coil by placing the imaging coil directly inside the heart using an expandable origami catheter structure. In this study, geometrical analysis is utilized to optimize the size and shape of the origami structure, and MRI scans are taken to confirm the MRI compatibility of the structure. The origami expandable mechanism could also be applied to other medical device designs that require expandable structures. ## Introduction Atrial fibrillation (AF) is a heart rhythm disorder characterized by rapid or irregular electrical activity in the atria. The atrial electrical signals bombard the atrioventricular (AV) node, allowing some signals to pass through the AV node to the ventricles, producing a rapid, irregular heart rate, and often causing symptoms of palpitations, shortness of breath, or fatigue [1]. Over 750,000 people are hospitalized each year as a result of AF, and the condition contributes to an estimated 130,000 deaths each year [2]. These numbers are expected to rise as the average age of the U.S. population increases [3,4]. Electrophysiology (EP) therapy is rapidly growing as a means for diagnosis and treatment of cardiac rhythm disorders.
During treatment of AF, a specialized mapping/ablation catheter is inserted through the femoral artery (FA) and guided to the source of arrhythmia in the heart [5]. The catheter is then used to measure electrical voltages at various locations throughout the chambers of the heart. The collected information is used to create a spatial voltage distribution for locating the abnormal tissue causing the arrhythmia [6]. The abnormal tissue is then electrically inactivated through the transmission of radiofrequency energy [7]. This method, known as radiofrequency ablation (RFA), can restore the patient’s regular heart rhythm. Both the RFA treatment phase and the mapping diagnostic phase of EP procedures are heavily dependent on the electrophysiologist’s ability to navigate the catheter to the intended location. This work presents a novel expandable structure allowing for potential imaging and treatment electronics to be mounted to a catheter-based device. The origami folding method and a mathematical model are presented to demonstrate the ability to manufacture an expandable origami structure to suit specific patient anatomy and clinical instruments. A prototype of the expandable origami structure is scanned inside an MRI scanner to demonstrate the MRI compatibility of the design. ## Materials and Methods ### Geometry/Mathematical Model. An optimal shape design for the origami structure was determined by evaluating the geometry and the available space for storing electronics. The symbols given in the following equations are shown in the Nomenclature section. The structure is a square grid composed of n × n smaller individual squares (8 × 8 in the prototype displayed in Fig. 1). The height, h, of the fully stowed configuration (Fig. 1(a)) is given by $h = \frac{d}{n}$ (1) where d is the diameter of the fully expanded configuration (Fig. 1(d)) and n is the number of folds across the horizontal or vertical direction.
The maximum allowable height of the stowed configuration is considered because the structure could get stuck in the aortic arch if the height is too large. Figure 2(a) shows an MR image of the aortic arch with an overlaid image of the stowed origami structure as it passes through the aortic arch. The area, As, of the fully stowed configuration (perpendicular to the axis of the catheter) can be determined by $A_s = (h + 4nt)^2$ (2) where t is the material thickness. The diameter of the FA limits the maximum allowable stowed area. The surface area, Ae, of the fully expanded configuration is $A_e = \frac{\pi}{4} d^2$ (3) The diameter of the left atrium (LA) limits the maximum allowable expanded surface area. Figure 2(b) highlights the areas in the FA and LA where the origami structure will be stowed and expanded. The expanded surface area to stowed area ratio, Ra, is defined by $R_a = \frac{A_e}{A_s}$ (4) It is necessary for this ratio to be as large as possible in order to optimize the shape of the flasher and obtain the maximum achievable signal-to-noise ratio (SNR). ### Mechanical Fabrication. The expandable origami structure was fabricated by folding a 40 mm × 40 mm sheet of biocompatible polycaprolactone into an iso-area flasher origami pattern [8]. The material properties of the expandable structure allow it to be flexible enough to fold down to fit inside the body vasculature, yet stiff enough to expand once inside the heart chamber. After the structure has been deployed inside the heart and has captured the images, it is pulled out of the heart while still in the expanded configuration. The structure collapses on itself in the process, rendering it unusable for future imaging. Once the base of the structure was assembled, a coil of 5 mm width copper was applied near the edge of the apparatus in order to form a receiver coil (Fig. 3(a)).
Figures 3(b)–3(d) show prototypes of expandable structures containing 2, 4, and 8 imaging coils, respectively, in order to illustrate the potential for parallel imaging using this structure. This simple fabrication method allows for ease of prototyping as well as a straightforward proof of concept. ### Electronics. The imaging coil was connected to a tuning–matching circuit (Fig. 4(a)) sealed in a box at the proximal end of the stylet through a microcoaxial cable [9,10]. The circuit was tuned and matched based on a single-coil topology to prove the concept of the origami structure incorporated with imaging electronics. A network analyzer was then used to tune the embedded circuit to 128 MHz (3T Larmor frequency) and to match the circuit to the universal standard of 50-Ω resistance (Fig. 4(b)). The second-order RLC circuit consists of a resistor that consumes energy and induces a damping effect during the resonance. The energy is stored in the capacitor and inductor, both of which determine the resonance frequency of the circuit. Referring to Ref. [11], the resonance frequency of the system can be written as $f = \frac{1}{2\pi\sqrt{LC}}$ (5) where L and C are the inductance and capacitance of the circuit, respectively. From Eq. (5) alone, it can be seen that there are infinitely many combinations of L and C that generate a specific resonance value. However, it is worth noting the quality factor, another important consideration in designing the RLC circuit, which can be written as $Q = \frac{1}{R}\sqrt{\frac{L}{C}}$ (6) A larger Q value correlates to a larger amount of magnetic energy that can be stored by the micro coil. Therefore, a trade-off between L and C needs to be made in the tuning–matching of the coil. The Q factor of the imaging coil circuit was calculated to be 8.533. ### MRI Compatibility Test. An MR image that has been negatively affected by the presence of a noncompatible device or object has a lower SNR.
Certain devices or objects may not cause artifact disturbances, either because they are constructed solely of compatible materials or by virtue of a large physical separation. Active devices (which contain electrically powered components or are otherwise capable of producing electromagnetic (EM) field emissions) can cause SNR reduction if their emitted fields are picked up by the scanner receive coil. Every active component of the system intended to be used within the scanner room was tested independently to evaluate its effect on MR image SNR and shielding before the system was tested as a whole. The implemented test method was adapted from the protocol put forward by Chinzei et al. [12] for electrical and electronic components. According to the standard defined by Chinzei et al., the acceptable level of SNR reduction is up to 10%. However, this value is intended as a guideline rather than a strict qualifier of compatibility. In practice, the acceptable level of SNR reduction depends on the intended method of application. A very low degree of reduction may be required for determining soft tissue boundaries from an image, but a much less stringent requirement may suffice for image-guided gross positioning of instruments. The implemented test method was conducted as follows: (1) A container was filled with CuSO4 solution (1.25 g/l concentration) and scanned with a spin echo and a gradient echo sequence; these images were used as the control images. The image sequence parameters must be kept constant throughout the duration of the subsequent testing. (2) The origami structure was placed next to the phantom without power connected, and the phantom was scanned with the same scan sequence combination. (3) In the same configuration, a further scan was taken with the origami structure closed and connected to the scanner. (4) In the same configuration, a further scan was taken with the origami structure expanded and connected to the scanner.
(5) DICOM format images were produced. The SNR of an image is calculated using Eq. (7), where Pcenter is the mean signal of a 40 × 40-pixel region at the center of the phantom image and SDcorner is the standard deviation of the signal of a 40 × 40-pixel region at the corner of the image. The variation of SNR is calculated by subtracting the SNR value of the corresponding control image. $SNR = \frac{P_{center}}{SD_{corner}}$ (7) The method requires the origami structure in the bore to be connected to the scanner with the flasher actuated at different stages (e.g., stowed and expanded). As such, the full system including the auxiliary tuning and matching electronic hardware must be included in the scanner room during the test. ## Results ### Geometry/Mathematical Model. The geometric evaluation of the expandable origami structure was used to create a mathematical model for determining the optimal dimensions of the structure. Figure 5 shows plots displaying the relationship of height (a), stowed area (b), and expanded surface area (c) to the expanded diameter and the number of folds. Using these plots, we can define the ideal shape of the expandable structure based on the curvature of the aortic arch, the inner diameter of the guide catheter, and the diameter of the LA. The average diameter of the FA at the hip is approximately 8.2±0.14 mm [13]. The average diameter of the LA is approximately 27–38 mm in women and 30–40 mm in men [14]. Using these data, we can find appropriate values for the height h, the stowed area $A_s$, and the expanded area $A_e$ using Eqs. (1)–(3). h is calculated from Eq. (1) to be 4.25 mm using an estimated diameter d matching that of the 34 mm average diameter of the LA in humans [14] and using n = 8 as in the prototype shown in this report. $A_s$ is calculated via Eq. (2) to be an estimated 196 mm2 using a material thickness t of 0.25 mm. $A_e$ is calculated via Eq. (3) to be approximately 1120 mm2 (Fig. 6).
The expanded surface area to stowed area ratio $R_a$ is therefore estimated to be 5.71 using Eq. (4) (Table 1). ### MRI Compatibility Test. The SNR reduction of the expandable origami structure was examined. The experiment was conducted in a 3T Siemens MRI scanner. The results are shown in Fig. 7. The maximum SNR reduction in the origami catheter was 0.54% with the turbo spin echo (TSE) sequence, and 0.46% with the true fast imaging with steady-state free precession (True FISP) sequence. Even though the SNR reduction varies across scan conditions, all values are within the acceptable level of 10% proposed by Chinzei et al. [12], showing good compatibility of the individual components. This experiment demonstrates the usability of the expandable origami structure with its individual electronic components in the MR environment. ## Discussion Treatment of cardiac arrhythmia through catheter ablation can permanently resolve the condition and significantly improve patients’ quality of life. The expandable origami structure for storing MRI imaging coils on the tip of a catheter offers an improved method for treatment of cardiac arrhythmia, but further studies are required to validate the safety and the capabilities of the structure. The expandable structure must be able to integrate with the catheter through either some effective means of attachment or through direct fabrication of the two parts as one. In vivo animal trials and human pilot studies must be conducted to confirm the device's ability to safely deploy inside the LA and safely retract from the body. The shape of the structure will be compared to other potential shapes to determine the safest structure. For example, a voluminous structure could potentially offer more safety than a flat structure, because it would not contain sharp edges. Further MRI scans must be performed to validate the device's ability to perform real-time cardiac imaging.
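As a numeric sanity check of the geometric model, Eqs. (1)–(4) can be evaluated with the anatomical values quoted in the Results. This is a sketch only; the area ratio below uses the paper's reported estimates for the expanded and stowed areas.

```python
import math

d = 34.0   # expanded diameter, mm (average LA diameter)
n = 8      # folds per side, as in the prototype
t = 0.25   # material thickness, mm

h = d / n                     # Eq. (1): stowed height -> 4.25 mm
A_s = (h + 4 * n * t) ** 2    # Eq. (2): stowed cross-sectional area, mm^2
A_e = math.pi / 4 * d ** 2    # Eq. (3): expanded surface area, mm^2

# Eq. (4) using the paper's reported estimates (A_e ~ 1120 mm^2, A_s ~ 196 mm^2):
R_a = 1120 / 196              # -> ~5.71
```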
## Conclusion This work presents the design of a novel expandable origami mechanism that aims to assist in the diagnostic and therapeutic stages of EP treatments. The expandable structure was developed by folding a thin sheet of biocompatible polycaprolactone into an iso-area flasher, and the embedded electrical circuit was constructed by applying imaging coils around the structure. The circuit was tuned and matched via network analysis to the Larmor frequency of a 3T MRI at 128 MHz. Equations relating expanded surface area and stowed area are derived from basic geometry in order to optimize the shape of the flasher based on patient anatomy. SNR reduction results demonstrate that the imaging coil is MRI compatible. Future experiments performing actual cardiac imaging are required to validate the efficacy of the device. ## Acknowledgment This material is based upon work supported by the National Science Foundation (NSF) REU site program 1359095 and the University of Georgia–Augusta University seed grants. ## Nomenclature • Ae = surface area of the fully expanded configuration • As = area of the fully stowed configuration (perpendicular to catheter axis) • C = capacitance • d = diameter of the fully expanded configuration • f = frequency • h = height of the fully stowed configuration • L = inductance • n = number of folds across the horizontal or vertical direction • Pcenter = mean signal of a 40 × 40 pixel region at the center of the phantom image • Q = quality factor • R = resistance • Ra = expanded surface area to stowed area ratio • SNR = signal-to-noise ratio • SDcorner = standard deviation of the signal of a 40 × 40 pixel region at the corner of the image • t = thickness of material ## References 1. Ames, A., and Stevenson, W. G., 2006, “Catheter Ablation of Atrial Fibrillation,” Circulation, 113(13), pp. e666–e668. 2. Center for Disease Control, 2015, “Atrial Fibrillation,” U.S.
Department of Health & Human Services, Atlanta, GA, accessed May 5, 2017, http://www.cdc.gov/dhdsp/data_statistics/fact_sheets/fs_atrial_fibrillation.htm 3. Miyasaka, Y., Barnes, M. E., Gersh, B. J., Cha, S. S., Bailey, K. R., Abhayaratna, W. P., Seward, J. B., and Tsang, T. S. M., 2006, “Secular Trends in Incidence of Atrial Fibrillation in Olmsted County, Minnesota, 1980 to 2000, and Implications on the Projections for Future Prevalence,” Circulation, 114(2), pp. 119–125. 4. Wolf, P. A., Benjamin, E. J., Belanger, A. J., Kannel, W. B., Levy, D., and D’Agostino, R. B., 1996, “Secular Trends in the Prevalence of Atrial Fibrillation: The Framingham Study,” Am. Heart J., 131(4), pp. 790–795. 5. Glowny, M. G., and Resnic, F. S., 2012, “What to Expect During Cardiac Catheterization,” Circulation, 125(7), pp. e363–e364. 6. Knackstedt, C., Schauerte, P., and Kirchhof, P., 2008, “Electro-Anatomic Mapping Systems in Arrhythmias,” Europace, 10(Suppl. 3), pp. iii28–iii34. 7. Wood, A. J., and , F., 1999, “Radio-Frequency Ablation as Treatment for Cardiac Arrhythmias,” N. Engl. J. Med., 340(7), pp. 534–544. 8. Shafer, J., 2001, Origami to Astonish and Amuse, St. Martin’s Press, New York. 9. Chen, Y., Wang, W., Schmidt, E. J., Kwok, K.-W., Viswanathan, A. N., Cormack, R., and Tse, Z. T. H., 2016, “Design and Fabrication of MR-Tracked Metallic Stylet for Gynecologic Brachytherapy,” IEEE/ASME Trans. Mechatronics, 21(2), pp. 956–962. 10. Wang, W., Viswanathan, A. N., Damato, A. L., Chen, Y., Tse, Z., Pan, L., Tokuda, J., Seethamraju, R. T., Dumoulin, C. L., Schmidt, E. J., and Cormack, R. A., 2015, “Evaluation of an Active Magnetic Resonance Tracking System for Interstitial Brachytherapy,” Med. Phys., 42(12), pp. 7114–7121. 11. Feynman, R. P., Leighton, R. B., and Sands, M., 1964, The Feynman Lectures on Physics. 12. Chinzei, K.
, Kikinis , R. , and Jolesz , F. A. , 1999 , “ MR Compatibility of Mechatronic Devices: Design Criteria ,” Medical Image Computing and Computer-Assisted Intervention (MICCAI), Cambridge, UK, Sept. 19–22, pp. 1020 1030 . 13. Crisan , S. , 2012 , “ Ultrasound Examination of the Femoral and Popliteal Arteries ,” Med. Ultrasonogr. , 14 , p. 74 .http://medultrason.ro/assets/Medultrason-2012-vol14-no1/15Crisan.pdf 14. Lang , R. M. , Bierig , M. , Devereux , R. B. , Flachskampf , F. A. , Foster , E. , Pellikka , P. A. , Picard , M. H. , Roman , M. J. , Seward , J. , Shanewise , J. , Solomon , S. , Spencer , K. T. , Sutton , M. , St. , J. , and Stewart , W. , 2006 , “ Recommendations for Chamber Quantification ,” Eur. Heart J.: Cardiovasc. Imag. , 7 , pp. 79 108 .
{}
Copied to clipboard ## G = D4⋊4Q16order 128 = 27 ### 3rd semidirect product of D4 and Q16 acting via Q16/Q8=C2 p-group, metabelian, nilpotent (class 3), monomial Series: Derived Chief Lower central Upper central Jennings Derived series C1 — C42 — D4⋊4Q16 Chief series C1 — C2 — C22 — C2×C4 — C42 — C4×Q8 — D4×Q8 — D4⋊4Q16 Lower central C1 — C22 — C42 — D4⋊4Q16 Upper central C1 — C22 — C42 — D4⋊4Q16 Jennings C1 — C22 — C22 — C42 — D4⋊4Q16 Generators and relations for D44Q16 G = < a,b,c,d | a4=b2=c8=1, d2=c4, bab=cac-1=a-1, ad=da, cbc-1=ab, bd=db, dcd-1=c-1 > Subgroups: 264 in 117 conjugacy classes, 38 normal (32 characteristic) C1, C2 [×3], C2 [×2], C4 [×4], C4 [×8], C22, C22 [×4], C8 [×4], C2×C4 [×3], C2×C4 [×12], D4 [×2], D4, Q8 [×2], Q8 [×9], C23, C42, C42, C22⋊C4 [×3], C4⋊C4 [×2], C4⋊C4 [×2], C4⋊C4 [×6], C2×C8 [×3], Q16 [×2], C22×C4 [×3], C2×D4, C2×Q8, C2×Q8 [×8], C4×C8, D4⋊C4, Q8⋊C4, C4⋊C8 [×2], C2.D8 [×3], C4×D4, C4×D4, C4×Q8, C22⋊Q8 [×3], C4⋊Q8 [×2], C4⋊Q8, C2×Q16, C22×Q8, D4⋊C8, Q8⋊C8, C4.10D8, C42Q16, D4⋊Q8, C82Q8, D4×Q8, D44Q16 Quotients: C1, C2 [×7], C22 [×7], D4 [×6], C23, D8 [×2], Q16 [×2], C2×D4 [×3], C22≀C2, C2×D8, C2×Q16, C8⋊C22, C8.C22, C22⋊D8, C22⋊Q16, D4.10D4, D44Q16 Character table of D44Q16 class 1 2A 2B 2C 2D 2E 4A 4B 4C 4D 4E 4F 4G 4H 4I 4J 4K 4L 8A 8B 8C 8D 8E 8F 8G 8H size 1 1 1 1 4 4 2 2 2 2 4 4 4 8 8 8 8 16 4 4 4 4 8 8 8 8 ρ1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 trivial ρ2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 linear of order 2 ρ3 1 1 1 1 -1 -1 1 1 1 1 -1 -1 1 -1 -1 1 1 -1 -1 -1 -1 -1 1 1 1 1 linear of order 2 ρ4 1 1 1 1 -1 -1 1 1 1 1 -1 -1 1 -1 -1 1 1 1 1 1 1 1 -1 -1 -1 -1 linear of order 2 ρ5 1 1 1 1 1 1 1 1 1 1 -1 -1 1 1 -1 -1 -1 1 -1 -1 -1 -1 1 1 -1 -1 linear of order 2 ρ6 1 1 1 1 1 1 1 1 1 1 -1 -1 1 1 -1 -1 -1 -1 1 1 1 1 -1 -1 1 1 linear of order 2 ρ7 1 1 1 1 -1 -1 1 1 1 1 1 1 1 -1 1 -1 -1 -1 1 1 1 1 1 1 -1 -1 linear of order 2 ρ8 1 1 1 1 -1 -1 1 1 1 1 1 1 1 -1 1 -1 -1 1 -1 -1 -1 -1 -1 -1 1 1 linear 
of order 2 ρ9 2 2 2 2 0 0 -2 -2 2 2 -2 -2 -2 0 2 0 0 0 0 0 0 0 0 0 0 0 orthogonal lifted from D4 ρ10 2 2 2 2 0 0 -2 -2 2 2 2 2 -2 0 -2 0 0 0 0 0 0 0 0 0 0 0 orthogonal lifted from D4 ρ11 2 2 2 2 0 0 -2 -2 -2 -2 0 0 2 0 0 2 -2 0 0 0 0 0 0 0 0 0 orthogonal lifted from D4 ρ12 2 2 2 2 -2 -2 2 2 -2 -2 0 0 -2 2 0 0 0 0 0 0 0 0 0 0 0 0 orthogonal lifted from D4 ρ13 2 2 2 2 0 0 -2 -2 -2 -2 0 0 2 0 0 -2 2 0 0 0 0 0 0 0 0 0 orthogonal lifted from D4 ρ14 2 2 2 2 2 2 2 2 -2 -2 0 0 -2 -2 0 0 0 0 0 0 0 0 0 0 0 0 orthogonal lifted from D4 ρ15 2 -2 2 -2 0 0 0 0 -2 2 -2 2 0 0 0 0 0 0 -√2 √2 √2 -√2 √2 -√2 0 0 orthogonal lifted from D8 ρ16 2 -2 2 -2 0 0 0 0 -2 2 2 -2 0 0 0 0 0 0 -√2 √2 √2 -√2 -√2 √2 0 0 orthogonal lifted from D8 ρ17 2 -2 2 -2 0 0 0 0 -2 2 -2 2 0 0 0 0 0 0 √2 -√2 -√2 √2 -√2 √2 0 0 orthogonal lifted from D8 ρ18 2 -2 2 -2 0 0 0 0 -2 2 2 -2 0 0 0 0 0 0 √2 -√2 -√2 √2 √2 -√2 0 0 orthogonal lifted from D8 ρ19 2 -2 -2 2 2 -2 2 -2 0 0 0 0 0 0 0 0 0 0 -√2 √2 -√2 √2 0 0 √2 -√2 symplectic lifted from Q16, Schur index 2 ρ20 2 -2 -2 2 2 -2 2 -2 0 0 0 0 0 0 0 0 0 0 √2 -√2 √2 -√2 0 0 -√2 √2 symplectic lifted from Q16, Schur index 2 ρ21 2 -2 -2 2 -2 2 2 -2 0 0 0 0 0 0 0 0 0 0 -√2 √2 -√2 √2 0 0 -√2 √2 symplectic lifted from Q16, Schur index 2 ρ22 2 -2 -2 2 -2 2 2 -2 0 0 0 0 0 0 0 0 0 0 √2 -√2 √2 -√2 0 0 √2 -√2 symplectic lifted from Q16, Schur index 2 ρ23 4 -4 4 -4 0 0 0 0 4 -4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 orthogonal lifted from C8⋊C22 ρ24 4 -4 -4 4 0 0 -4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 symplectic lifted from C8.C22, Schur index 2 ρ25 4 4 -4 -4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 -2 -2 0 0 0 0 symplectic lifted from D4.10D4, Schur index 2 ρ26 4 4 -4 -4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -2 -2 2 2 0 0 0 0 symplectic lifted from D4.10D4, Schur index 2 Smallest permutation representation of D44Q16 On 64 points Generators in S64 (1 50 63 18)(2 19 64 51)(3 52 57 20)(4 21 58 53)(5 54 59 22)(6 23 60 55)(7 56 61 24)(8 17 62 49)(9 47 34 28)(10 29 35 48)(11 41 36 30)(12 31 37 42)(13 43 38 32)(14 
25 39 44)(15 45 40 26)(16 27 33 46) (1 22)(2 60)(3 24)(4 62)(5 18)(6 64)(7 20)(8 58)(9 43)(10 14)(11 45)(12 16)(13 47)(15 41)(17 21)(19 23)(25 48)(26 36)(27 42)(28 38)(29 44)(30 40)(31 46)(32 34)(33 37)(35 39)(49 53)(50 59)(51 55)(52 61)(54 63)(56 57) (1 2 3 4 5 6 7 8)(9 10 11 12 13 14 15 16)(17 18 19 20 21 22 23 24)(25 26 27 28 29 30 31 32)(33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56)(57 58 59 60 61 62 63 64) (1 32 5 28)(2 31 6 27)(3 30 7 26)(4 29 8 25)(9 50 13 54)(10 49 14 53)(11 56 15 52)(12 55 16 51)(17 39 21 35)(18 38 22 34)(19 37 23 33)(20 36 24 40)(41 61 45 57)(42 60 46 64)(43 59 47 63)(44 58 48 62) G:=sub<Sym(64)| (1,50,63,18)(2,19,64,51)(3,52,57,20)(4,21,58,53)(5,54,59,22)(6,23,60,55)(7,56,61,24)(8,17,62,49)(9,47,34,28)(10,29,35,48)(11,41,36,30)(12,31,37,42)(13,43,38,32)(14,25,39,44)(15,45,40,26)(16,27,33,46), (1,22)(2,60)(3,24)(4,62)(5,18)(6,64)(7,20)(8,58)(9,43)(10,14)(11,45)(12,16)(13,47)(15,41)(17,21)(19,23)(25,48)(26,36)(27,42)(28,38)(29,44)(30,40)(31,46)(32,34)(33,37)(35,39)(49,53)(50,59)(51,55)(52,61)(54,63)(56,57), (1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64), (1,32,5,28)(2,31,6,27)(3,30,7,26)(4,29,8,25)(9,50,13,54)(10,49,14,53)(11,56,15,52)(12,55,16,51)(17,39,21,35)(18,38,22,34)(19,37,23,33)(20,36,24,40)(41,61,45,57)(42,60,46,64)(43,59,47,63)(44,58,48,62)>; G:=Group( (1,50,63,18)(2,19,64,51)(3,52,57,20)(4,21,58,53)(5,54,59,22)(6,23,60,55)(7,56,61,24)(8,17,62,49)(9,47,34,28)(10,29,35,48)(11,41,36,30)(12,31,37,42)(13,43,38,32)(14,25,39,44)(15,45,40,26)(16,27,33,46), (1,22)(2,60)(3,24)(4,62)(5,18)(6,64)(7,20)(8,58)(9,43)(10,14)(11,45)(12,16)(13,47)(15,41)(17,21)(19,23)(25,48)(26,36)(27,42)(28,38)(29,44)(30,40)(31,46)(32,34)(33,37)(35,39)(49,53)(50,59)(51,55)(52,61)(54,63)(56,57), 
(1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64), (1,32,5,28)(2,31,6,27)(3,30,7,26)(4,29,8,25)(9,50,13,54)(10,49,14,53)(11,56,15,52)(12,55,16,51)(17,39,21,35)(18,38,22,34)(19,37,23,33)(20,36,24,40)(41,61,45,57)(42,60,46,64)(43,59,47,63)(44,58,48,62) ); G=PermutationGroup([(1,50,63,18),(2,19,64,51),(3,52,57,20),(4,21,58,53),(5,54,59,22),(6,23,60,55),(7,56,61,24),(8,17,62,49),(9,47,34,28),(10,29,35,48),(11,41,36,30),(12,31,37,42),(13,43,38,32),(14,25,39,44),(15,45,40,26),(16,27,33,46)], [(1,22),(2,60),(3,24),(4,62),(5,18),(6,64),(7,20),(8,58),(9,43),(10,14),(11,45),(12,16),(13,47),(15,41),(17,21),(19,23),(25,48),(26,36),(27,42),(28,38),(29,44),(30,40),(31,46),(32,34),(33,37),(35,39),(49,53),(50,59),(51,55),(52,61),(54,63),(56,57)], [(1,2,3,4,5,6,7,8),(9,10,11,12,13,14,15,16),(17,18,19,20,21,22,23,24),(25,26,27,28,29,30,31,32),(33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56),(57,58,59,60,61,62,63,64)], [(1,32,5,28),(2,31,6,27),(3,30,7,26),(4,29,8,25),(9,50,13,54),(10,49,14,53),(11,56,15,52),(12,55,16,51),(17,39,21,35),(18,38,22,34),(19,37,23,33),(20,36,24,40),(41,61,45,57),(42,60,46,64),(43,59,47,63),(44,58,48,62)]) Matrix representation of D44Q16 in GL4(𝔽17) generated by 1 0 0 0 0 1 0 0 0 0 1 2 0 0 16 16 , 16 0 0 0 0 16 0 0 0 0 1 2 0 0 0 16 , 3 14 0 0 3 3 0 0 0 0 0 6 0 0 3 0 , 7 16 0 0 16 10 0 0 0 0 16 0 0 0 0 16 G:=sub<GL(4,GF(17))| [1,0,0,0,0,1,0,0,0,0,1,16,0,0,2,16],[16,0,0,0,0,16,0,0,0,0,1,0,0,0,2,16],[3,3,0,0,14,3,0,0,0,0,0,3,0,0,6,0],[7,16,0,0,16,10,0,0,0,0,16,0,0,0,0,16] >; D44Q16 in GAP, Magma, Sage, TeX D_4\rtimes_4Q_{16} % in TeX G:=Group("D4:4Q16"); // GroupNames label G:=SmallGroup(128,381); // by ID G=gap.SmallGroup(128,381); # by ID G:=PCGroup([7,-2,2,2,-2,2,-2,2,448,141,456,422,352,1123,570,521,136,2804,1411,718,172]); // Polycyclic 
G:=Group<a,b,c,d|a^4=b^2=c^8=1,d^2=c^4,b*a*b=c*a*c^-1=a^-1,a*d=d*a,c*b*c^-1=a*b,b*d=d*b,d*c*d^-1=c^-1>; // generators/relations
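As a sanity check (my own addition, not part of the page above), the defining relations can be verified directly on the 𝔽₁₇ matrix generators, read row by row from the `GL(4,GF(17))` subgroup listed above. Since the generators have orders 4, 2, 8 and 4, we may use `a^-1 = a^3`, `c^-1 = c^7`, `d^-1 = d^3`:

```python
P = 17  # the representation lives in GL(4, F_17)

def mmul(x, y):
    """4x4 matrix product over F_17."""
    return [[sum(x[i][k] * y[k][j] for k in range(4)) % P
             for j in range(4)] for i in range(4)]

def mpow(x, n):
    out = [[int(i == j) for j in range(4)] for i in range(4)]  # identity
    for _ in range(n):
        out = mmul(out, x)
    return out

# generator matrices, rows as in the GL(4,GF(17)) subgroup above
a = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 16], [0, 0, 2, 16]]
b = [[16, 0, 0, 0], [0, 16, 0, 0], [0, 0, 1, 0], [0, 0, 2, 16]]
c = [[3, 3, 0, 0], [14, 3, 0, 0], [0, 0, 0, 3], [0, 0, 6, 0]]
d = [[7, 16, 0, 0], [16, 10, 0, 0], [0, 0, 16, 0], [0, 0, 0, 16]]

I4 = mpow(a, 0)

# a^4 = b^2 = c^8 = 1 and d^2 = c^4
assert mpow(a, 4) == I4 and mpow(b, 2) == I4 and mpow(c, 8) == I4
assert mpow(d, 2) == mpow(c, 4)
# bab = cac^-1 = a^-1, ad = da, cbc^-1 = ab, bd = db, dcd^-1 = c^-1
assert mmul(mmul(b, a), b) == mpow(a, 3)
assert mmul(mmul(c, a), mpow(c, 7)) == mpow(a, 3)
assert mmul(a, d) == mmul(d, a)
assert mmul(mmul(c, b), mpow(c, 7)) == mmul(a, b)
assert mmul(b, d) == mmul(d, b)
assert mmul(mmul(d, c), mpow(d, 3)) == mpow(c, 7)
print("all defining relations hold")
```

This only confirms that the matrices satisfy the presentation, not that they generate a group of order exactly 128; for that, use the GAP/Magma commands given above.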
{}
# Dimensional analysis for integration

1. Jul 3, 2011

### HotMintea

1. The problem statement

Use dimensional analysis to find $\int\frac{ dx }{ x^2 + a^2}$. A useful result is $\int\frac{ dx} {x^2 + 1}\, \ = \, \ \arctan{x} + C$. p.11, prob. 1.11 http://mitpress.mit.edu/books/full_pdfs/Street-Fighting_Mathematics.pdf

2. The attempt at a solution

2.1. If I let $[x] = L$, then $[dx] = L$ and $[x^2+1] = L^2$. Thus, I expect $[\int\ \frac{dx}{x^2+1}]\, \ = \, \frac{1}{L}$. However, $\arctan{x}$ is dimensionless.

2.2. By the same reasoning as in 2.1., I expect $[\int\ \frac{dx}{x^2+a^2}]\, \ = \, \frac{1}{L}$. However, there seems to be a multitude of possibilities: $\int\ \frac{dx}{x^2+a^2}\, \ = \, \frac{dimensionless\ factor}{x}\, \ , \, \frac{dimensionless\ factor}{a}\ , \frac{dimensionless\ factor}{x\ +\ a}\, \ or\, \frac{a(dimensionless\ factor)}{x^2}$, etc.

Last edited by a moderator: May 5, 2017

2. Jul 3, 2011

### Dick

If you pick [x]=L, as you have to if [a]=L, then arctan(x) is not dimensionless. It has no particular dimension at all. You'd better pick the argument of arctan to be something other than x. What's a dimensionless argument for arctan?

3. Jul 3, 2011

### HotMintea

2.1. $\int\ \frac{dx}{x^2+1}\ = \frac{\arctan{\frac{x}{1}}}{1}\ + \ C$, thus $[\int\ \frac{dx}{x^2+1}]\ = [\frac{\arctan{\frac{x}{1}}}{1}] = \frac{1}{L}$.

2.2. $\int\ \frac{dx}{x^2+a^2}$ must cover the case a = 1, thus $\int\frac{dx}{x^2+a^2}\ \ = \frac{\arctan{\frac{x}{a}}}{a}\ \ + \ C$.
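The dimensional-analysis answer $\frac{\arctan(x/a)}{a} + C$ can be sanity-checked numerically against a midpoint-rule integral (a small standalone Python sketch; the particular values of $a$ and the integration limit are arbitrary choices):

```python
import math

def numeric_integral(a, upper, n=100_000):
    """Midpoint-rule approximation of the integral of 1/(x^2 + a^2) from 0 to upper."""
    h = upper / n
    return sum(1.0 / (((i + 0.5) * h) ** 2 + a * a) for i in range(n)) * h

a, upper = 2.0, 3.0
closed_form = math.atan(upper / a) / a  # arctan(x/a)/a evaluated from 0 to upper
assert abs(numeric_integral(a, upper) - closed_form) < 1e-6
print("closed form matches numerics")
```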
{}
Proof that final topology with a certain property is unique

Assume we are given a set of topological spaces $(X_i,\tau_i), \forall i \in I$, a set $Y$, a set of functions $f_i: X_i\rightarrow Y$, a topological space $(Z,\sigma)$ and a function $h : Y\rightarrow Z$. Then assume that $h$ is continuous $\iff$ $h \circ f_i$ is continuous $\forall i \in I$. Let $\tau$ be the final topology on $Y$, defined $\tau = \{U \subset Y \mid f^{-1}_i (U) \in \tau_i, \forall i \in I\}$. I must prove that this topology is unique, i.e. the only topology on $Y$ that fulfills the requirement that $h$ is continuous $\iff$ $h \circ f_i$ is continuous $\forall i \in I$.

Attempt: Assume that instead of $\tau$ we had $\tau'$. Then assume that $g \in \sigma$. Now $(h \circ f_i)^{-1} (g) \in \tau_i,\ \forall i \in I$, since for a continuous function the preimage of an open set is open. Also $f_i^{-1}(h^{-1}(g)) = f_i^{-1}(v), \ v \in \tau'$, for the same reason. Now $f_i^{-1}(v) \in \tau_i, \ \forall i \in I$, for if they weren't, then $\tau_j \not\owns U=f_j^{-1}(v)=f_j^{-1}(h^{-1}(g))=(h \circ f_j)^{-1} (g) = U \in \tau_j$ for some $j \in I$, which is a contradiction.

But what I cannot get out of my head are a few questions. Like, how can we know that there isn't some set $k \in \tau'$ where $h (k) \notin \sigma$? This image $h (k)$ doesn't have to be closed, or does it? If it needs to be, then this case is a violation of the continuity of $h$. Also, how can we know that there is not some $t \subset Y$ in $\tau'$ for which $f^{-1}_j(t) \notin \tau_j$ and it is not the preimage of any set in $\sigma$? This would be bigger than $\tau$ but we would have no way to get to these extra sets.

- Is the following what you want to show: $\tau$ is the unique topology on $Y$ such that for any topological space $Z$ a mapping $h : Y \to Z$ is continuous iff $f_i \circ h : X_i \to Z$ is continuous for all $i \in I$. –  Arthur Fischer Jan 28 '13 at 17:42

@ArthurFischer Z is given and it's $h \circ f_i$.
Otherwise I think you got it. –  Valtteri Jan 28 '13 at 17:48

Oops... Stupid mistake on the order of the composition on my part. But I don't think that what you want to show is true. If, for example, $\sigma$ was the trivial (anti-discrete) topology on $Z$ then every mapping into $Z$ is continuous, and thus the "iff" holds for every topology on $Y$. –  Arthur Fischer Jan 28 '13 at 18:08

@ArthurFischer Hmmmm, all the material comes from this book: math.ru.nl/~mueger/topology2012.pdf , page 35, definition 5.3.6, exercise 5.3.7 and proposition 5.3.8. The task is to show that the topology $\tau$ mentioned in the first sentence of 5.3.8 is the only one that fulfills the requirement in the second sentence of 5.3.8. –  Valtteri Jan 28 '13 at 18:17

@Arthur: It’s clearly supposed to be a statement of a universal property, so it should be for all $Z$, as in your original comment. –  Brian M. Scott Jan 28 '13 at 19:59

Like Arthur Fischer stated, you define $\tau$ as a topology on $Y$ that depends on the topological spaces $(X_i, \tau_i)$ and $f_i$. Then the unicity statement should be: $\tau$ is the unique topology that has the property $$\forall \mbox{ topological spaces } Z : \forall h: (Y,\tau) \rightarrow Z: ( h \mbox{ continuous } \iff \forall_{i \in I} \,(h \circ f_i) \mbox{ continuous. })$$ It is clear that $\tau$ satisfies this property, and you already know this judging from your question (it follows straight from the definition of $\tau$). Suppose that a topology $\tau'$ satisfies this property as well (we need to show that $\tau = \tau'$). Letting $h$ be the identity from $(Y,\tau')$ to $(Y, \tau)$, we see that for all $i$ and all $O \in \tau$: $(h \circ f_i)^{-1}[O] = f_i^{-1}[O] \in \tau_i$, by the definition of $\tau$, so for all $i$, $h \circ f_i$ is continuous, and as $\tau'$ satisfies our desired property, $h$ is continuous, which means that $\tau \subset \tau'$, by the definition of continuity (of the identity map).
On the other hand, all $f_i$ are continuous as maps from $(X_i,\tau_i)$ to $(Y,\tau')$, as this follows from the property as well, taking $h$ the identity on $(Y,\tau')$, which is always continuous (for any space), and $h \circ f_i = f_i$. But this means by definition that for any open set $O$ of $\tau'$ and any $i \in I$, $f_i^{-1}[O] \in \tau_i$, which just says that $O \in \tau$, and so $\tau' \subset \tau$, and we have equality and the unicity.
{}
## Algebra 1: Common Core (15th Edition)

$n = -33$ or $n = 13$

Let's rewrite the equation so that the variable is on the left side of the equation: $|n + 10| = 23$

To get rid of the absolute value sign, rewrite the equation as two separate equations: $n + 10 = -23$ or $n + 10 = 23$

Subtract $10$ from both sides of each equation: $n = -33$ or $n = 13$
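The two-case split can be checked by brute force (a tiny Python sketch; the search range is an arbitrary choice wide enough to contain both roots):

```python
# every integer n in a wide range satisfying |n + 10| = 23
solutions = [n for n in range(-100, 101) if abs(n + 10) == 23]
assert solutions == [-33, 13]
print(solutions)
```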
{}
## Rational SFT using only q variables IV It is now time to explain how to use the q-variable version of rational SFT to define symplectic capacities and symplectic embedding obstructions. Before getting into details, I should tell you some good news and bad news. The bad news is that the rational SFT capacities are not very powerful. The good news is that rational SFT contains stronger embedding obstructions than the rational SFT capacities, and you can access these stronger obstructions if you can compute cobordism maps. This is a very interesting area for further exploration. Spectral invariants Let $(Y^{2n-1},\lambda)$ be a nondegenerate closed contact manifold as usual. If $\sigma\in HQ(Y,\lambda,0)$ is a nonzero class in the q-variable rational SFT, define $c_\sigma(Y,\lambda)\in{\mathbb R}$ to be the infimum over $L$ such that $\sigma$ is in the image of the map $HQ^L(Y,\lambda)\to HQ(Y,\lambda)$ induced by the inclusion of chain complexes. If $(X^{2n},\omega)$ is a weakly exact symplectic cobordism from $(Y_+,\lambda_+)$ to $(Y_-,\lambda_-)$, recall that there is an induced map $\Phi(X,\omega): HQ(Y_+,\lambda_+,0) \to HQ(Y_-,\lambda_-,0)$ which is the direct limit of maps $\Phi^L(X,\omega): HQ^L(Y_+,\lambda_+,0)\to HQ^L(Y_-,\lambda_-,0)$ as $L\to\infty$. It follows as in ECH that if $\sigma_+\in HQ(Y_+,\lambda_+,0)$, if $\sigma_-=\Phi(X,\omega)(\sigma_+)\in HQ(Y_-,\lambda_-,0)$, and if $\sigma_-\neq 0$, then $c_{\sigma_-}(Y_-,\lambda_-) \le c_{\sigma_+}(Y_+,\lambda_+).$ This is the basic inequality which leads to obstructions to symplectically embedding one Liouville domain into another. As with the ECH spectrum, if $\lambda$ is degenerate, one can define $c_\sigma(Y,\lambda)$ as the limit of $c_\sigma(Y,\lambda_i)$ where $\{\lambda_i\}_{i=1,2,\ldots}$ is a sequence of nondegenerate contact forms converging in the $C^0$ topology to $\lambda$. The above inequality still holds for a weakly exact cobordism between possibly degenerate contact manifolds. 
Definition of capacities

Let $(X^{2n},\omega)$ be a Liouville domain. What I mean by this is that $\omega$ is exact, and there is a contact form $\lambda$ on $Y=\partial X$ such that $d\lambda = \omega|_Y$. Note that the unit in the symmetric algebra on the polynomial algebra on the $q$ variables, which I denote by $1$, is a cycle in the chain complex computing $HQ(Y,\lambda,0)$. The homology class $[1]$ is nonzero, because the cobordism map $\Phi(X,\omega)$ maps it to $1$. If $k$ is a nonnegative integer, we now define $c_k(X,\omega)\in[0,\infty]$ to be the minimum of $c_\sigma(Y,\lambda)$, where $\sigma$ ranges over classes in $HQ(Y,\lambda,0)$ such that $U^k\sigma=[1]$ whenever $U^k$ is a composition of $k$ of the $U$ maps associated to the components of $Y$ (possibly repeated). If no such class $\sigma$ exists, we define $c_k(X,\omega)=\infty$. The same arguments as in the definition of ECH capacities, or similar ones, show the following:

• $c_k(X,\omega)$ does not depend on the choice of $\lambda$.
• If $(X',\omega')$ is another Liouville domain of the same dimension which symplectically embeds into $(X,\omega)$, then $c_k(X',\omega')\le c_k(X,\omega)$ for all $k$.
• We have the disjoint union property $c_k((X_1,\omega_1)\sqcup (X_2,\omega_2)) = \max_{k_1+k_2=k}(c_{k_1}(X_1,\omega_1) + c_{k_2}(X_2,\omega_2)).$

Examples

I did some quick calculations to get an idea of how good these capacities are. I can explain the details of these calculations later, but for now here are the results (which you should regard as preliminary since I didn’t check every detail):

• The capacities of the four-dimensional ball $B^4(1)$, starting at $k=0$, are $0,1,1,2,2,2,3,3,3,\ldots$. That is, $c_k(B^4(1))=\lceil \frac{k+1}{3}\rceil$ for $k>0$.
• The capacities of the four-dimensional polydisk $P(1,a)$ for $a\ge 1$ are given by $c_k(P(1,a)) = \min\{k,\lceil \frac{k-1}{2}\rceil + a\}$.
• If $2n>4$ then $c_k(B^{2n}(1)) = \lceil \frac{k}{2} \rceil$.
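Taking the (preliminary) ball formula and the disjoint union property above at face value, the ball-packing obstructions they give can be computed mechanically; here is a short Python sketch (my own illustration, not from the post) that recovers the $a\le 1/2$ and $a\le 2/5$ bounds for two and five balls:

```python
from fractions import Fraction
from functools import lru_cache

def c_ball(k):
    """c_k(B^4(1)) from the formula above: ceil((k+1)/3) for k > 0, and c_0 = 0."""
    return 0 if k == 0 else (k + 3) // 3  # integer form of ceil((k+1)/3)

@lru_cache(maxsize=None)
def c_union(m, k):
    """c_k of a disjoint union of m unit balls, via the disjoint union property:
    max over k_1 + ... + k_m = k of the sum of the c_{k_i}."""
    if m == 1:
        return c_ball(k)
    return max(c_ball(j) + c_union(m - 1, k - j) for j in range(k + 1))

def best_bound(m, kmax=40):
    """If the disjoint union of m balls B^4(a) embeds in B^4(1), monotonicity
    forces a * c_union(m, k) <= c_ball(k) for every k; return the strongest bound on a."""
    return min(Fraction(c_ball(k), c_union(m, k)) for k in range(1, kmax + 1))

assert best_bound(2) == Fraction(1, 2)  # two balls: a <= 1/2
assert best_bound(5) == Fraction(2, 5)  # five balls: a <= 2/5
```

The strongest constraints come from $k=2$ and $k=5$ respectively, matching the degree-1 and degree-2 curves mentioned below; for seven or eight balls the same search never beats the volume bound, as the post explains.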
To test how powerful these capacities are, let us consider the question of when the disjoint union of $m$ four-dimensional balls of capacity $a$ can be symplectically embedded into the four-ball of capacity $1$. (Recall that ECH capacities are sharp for this embedding problem.) McDuff-Polterovich showed that if such an embedding exists, then $a\le 1/2$ when $m=2$, $a\le 2/5$ when $m=5$, $a\le 3/8$ when $m=7$, and $a\le 6/17$ when $m=8$, and these bounds are optimal. We can compute the rational SFT capacities of $\sqcup_mB(a)$ using the disjoint union property. We find that $c_2(\sqcup_2B(a))=2a$, while $c_2(B(1))=1$, so we recover the inequality $a\le 1/2$ when $m=2$. We also compute that $c_5(\sqcup_5B(a))=5a$ while $c_5(B(1))=2$, so we recover the inequality $a\le 2/5$ when $m=5$. Unfortunately, when $m=7$ or $m=8$ we do not even recover the volume constraint $a\le \sqrt{1/m}$. The ultimate reason for this is that the first inequality comes from curves of degree $1$ in ${\mathbb C}P^2$, the second inequality comes from curves of degree $2$, the third inequality comes from curves of degree $3$, and the fourth inequality comes from curves of degree $6$. Since the latter two curves have positive genus, rational SFT does not see them. One might hope that rational SFT capacities see some non-embedded rational curves that ECH does not see, e.g. for embedding a polydisk into an ellipsoid where the ECH capacities do not give sharp obstructions, but so far I have not found an example where rational SFT capacities say anything more than ECH capacities. When $2n>4$, rational SFT capacities say even less about packing a disjoint union of balls into a ball. All they tell us is that if $B^{2n}(a)\sqcup B^{2n}(b)$ symplectically embeds into $B^{2n}(1)$, then $a+b\le 1$, which was known to Gromov.

The good news

The good news is that rational SFT gives stronger obstructions to symplectic embeddings than rational SFT capacities, if you know something about the cobordism map.
In particular, rational SFT does recover the optimal obstructions to symplectically embedding the disjoint union of seven or eight four-balls of equal size into a four-ball! To see why, suppose there exists a symplectic embedding of $\sqcup_mB^4(a)$ into $B^4(1)$ where $m\in\{7,8\}$. Let $X$ be the symplectic cobordism obtained by removing the image of the embedding from $B^4(1)$. We can perturb this to obtain a cobordism between nearly-round ellipsoids with nondegenerate contact forms. Each ellipsoid has two embedded elliptic Reeb orbits which we denote by $\alpha$ and $\beta$, where $\alpha$ denotes the shorter orbit. The ECH generator differential vanishes because all generators have even grading. We know from general properties of ECH cobordism maps that when $m=7$ (resp. $m=8$), the map induced by the cobordism has a nonzero coefficient from the ECH generator $\beta^3$ (resp. $\beta^6$) to the ECH generator consisting of $\alpha^2$ in one of the balls and $\alpha$ in the other six balls (resp. $\alpha^3$ in one of the balls and $\alpha^2$ in the other seven balls). Since the cobordism map decreases symplectic action, we conclude that $3\ge (2+6)a$ when $m=7$ and $6\ge (3+7\cdot 2)a$ when $m=8$, which are the optimal inequalities. Now since the above coefficient of the ECH cobordism map is nonzero, there exists a (more precisely $1 \mod 2$) holomorphic curve between the corresponding ECH generators. In general one could get a broken and/or multiply covered curve, but (I think) one can rule that out in the present case because otherwise one would obtain a stronger embedding obstruction which cannot be true. Moreover, one can use the ECH partition conditions to see that each end of this holomorphic curve is at a singly covered Reeb orbit, so that it has $11$ ends when $m=7$ and $23$ ends when $m=8$. One can then use the relative adjunction formula to show that the holomorphic curve has $\chi=-9$ when $m=7$ and $\chi=-21$ when $m=8$. 
This implies that it is rational. So the cobordism map on rational SFT sees it! More precisely, in rational SFT, the differential vanishes here because all holomorphic curves have even index, so we can identify elements of the chain complex with homology classes. If $\sigma_+$ denotes $\otimes_3\beta$ when $m=7$ and $\otimes_6\beta$ when $m=8$, and if $\Phi$ denotes the cobordism map on rational SFT, then $\Phi(\sigma_+)$ includes a monomial with $8$ of the $\alpha$ variables when $m=7$ and $17$ of the $\alpha$ variables when $m=8$. It follows that $c_{\Phi(\sigma_+)}\ge 8a$ when $m=7$ and $c_{\Phi(\sigma_+)}\ge 17a$ when $m=8$, so the inequality at the beginning of this post gives the optimal symplectic embedding obstruction. In fact, as far as I know it is possible that all of the ECH obstructions to embedding a disjoint union of four-dimensional ellipsoids into an ellipsoid are seen by rational SFT cobordism maps.

The puzzle

In the above example of embedding seven or eight four-dimensional balls into a ball, it is a nontrivial exercise (which I haven’t really tried yet) to compute the relevant part of the rational SFT cobordism map without “cheating” and using information from ECH. One can get some information about the cobordism map by using the fact that it commutes with the U maps, compare Lemma 3.2 in “The asymptotics of ECH capacities”. However, in the present case one has to work harder because the U map is no longer injective on generators. In dimension $2n>4$, if one can somehow compute the rational SFT cobordism map coming from an embedding of a disjoint union of balls into a ball, then one might get stronger obstructions to ball packing than the Gromov obstruction recalled above. Likewise, for embedding one ellipsoid into another in higher dimensions, I suspect that rational SFT capacities say nothing more than Ekeland-Hofer capacities, but if one can compute the cobordism map one might be able to obtain stronger obstructions.
{}
Relativistic entanglement in single-particle quantum states using non-linear entanglement witnesses

Quantum Information Processing (Springer Journals), Volume 11(6), published Sep 11, 2011. ISSN 1570-0755, eISSN 1573-1332, DOI 10.1007/s11128-011-0289-z.

Abstract: In this study, the spin-momentum correlation of one massive spin- $${\frac{1}{2}}$$ and spin-1 particle states, which are made based on the projection of a relativistic spin operator into timelike direction, is investigated. It is shown that by using non-linear entanglement witnesses (NLEWs), the effect of Lorentz transformation would decrease both the amount and the region of entanglement.
{}
# Is cosine similarity identical to l2-normalized euclidean distance?

Identical meaning that it will produce identical results for a similarity ranking between a vector u and a set of vectors V.

I have a vector space model which has distance measure (euclidean distance, cosine similarity) and normalization technique (none, l1, l2) as parameters. From my understanding, the results from the settings [cosine, none] should be identical or at least really really similar to [euclidean, l2], but they aren't. There actually is a good chance the system is still buggy -- or do I have something critically wrong about vectors?

edit: I forgot to mention that the vectors are based on word counts from documents in a corpus. Given a query document (which I also transform into a word count vector), I want to find the document from my corpus which is most similar to it. Just calculating their euclidean distance is a straightforward measure, but in the kind of task I work at, the cosine similarity is often preferred as a similarity indicator, because vectors that only differ in length are still considered equal. The document with the smallest distance/highest cosine similarity is considered the most similar.

• It all depends on what your "vector space model" does with these distances. Could you be more specific about what the model does? – whuber Apr 13 '15 at 23:08
• Sorry, sometimes it's hard to get out of my own head. I added a specification. – Arne Apr 13 '15 at 23:24
• You still don't describe any model. In fact, the only clue you have left concerning the "kind of task (you) work at" is the nlp tag--but that's so broad it doesn't help much. What I'm hoping you can supply, so that people can understand the question and provide good answers, is sufficient information to be able to figure exactly how you are using your distance measure and how it determines what the "results" might be. – whuber Apr 14 '15 at 0:48
• stats.stackexchange.com/a/36158/3277.
Any angular aka sscp-type similarity is convertible to its corresponding euclidean distance. – ttnphns Sep 30 '16 at 12:18 For $\ell^2$-normalized vectors $\mathbf{x}, \mathbf{y}$, $$||\mathbf{x}||_2 = ||\mathbf{y}||_2 = 1,$$ we have that the squared Euclidean distance is proportional to the cosine distance, \begin{align} ||\mathbf{x} - \mathbf{y}||_2^2 &= (\mathbf{x} - \mathbf{y})^\top (\mathbf{x} - \mathbf{y}) \\ &= \mathbf{x}^\top \mathbf{x} - 2 \mathbf{x}^\top \mathbf{y} + \mathbf{y}^\top \mathbf{y} \\ &= 2 - 2\mathbf{x}^\top \mathbf{y} \\ &= 2 - 2 \cos\angle(\mathbf{x}, \mathbf{y}) \end{align} That is, even if you normalized your data and your algorithm was invariant to scaling of the distances, you would still expect differences because of the squaring. • You are right, if all you do is rank the vectors by their distance to $\mathbf{u}$, using cosine distance should give the same result as Euclidean distance (for normalized vectors). – Lucas Apr 14 '15 at 16:22 Standard cosine similarity is defined as follows in a Euclidian space, assuming column vectors $\mathbf{u}$ and $\mathbf{v}$: $$\cos(\mathbf{u}, \mathbf{v}) = \frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\|\mathbf{u}\| \cdot \|\mathbf{v}\|} = \frac{\mathbf{u}^T\mathbf{v}}{\|\mathbf{u}\| \cdot \|\mathbf{v}\|} \in [-1, 1].$$ This reduces to the standard inner product if your vectors are normalized to unit norm (in l2). In text mining this kind of normalization is not unheard of, but I wouldn't consider that the standard.
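The ranking equivalence can be demonstrated directly: since $\|x-y\|^2 = 2 - 2\cos\angle(x,y)$ for unit vectors, sorting by ascending Euclidean distance on l2-normalized vectors equals sorting by descending cosine similarity, as long as there are no ties. A self-contained Python sketch with random count-like vectors (the dimensions and seed are arbitrary choices):

```python
import math
import random

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

random.seed(0)
docs = [[random.random() for _ in range(5)] for _ in range(20)]
query = [random.random() for _ in range(5)]

# the identity ||x - y||^2 = 2 - 2 cos(x, y) for unit vectors
qn = l2_normalize(query)
dn = l2_normalize(docs[0])
assert abs(euclidean(qn, dn) ** 2 - (2 - 2 * cosine(query, docs[0]))) < 1e-9

# hence: ranking by cosine similarity (descending) on raw vectors equals
# ranking by Euclidean distance (ascending) on the l2-normalized vectors
by_cosine = sorted(range(len(docs)), key=lambda i: -cosine(query, docs[i]))
by_euclid = sorted(range(len(docs)),
                   key=lambda i: euclidean(qn, l2_normalize(docs[i])))
assert by_cosine == by_euclid
print("rankings agree")
```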
{}
## marasofia1616 2 years ago

How many moles of oxygen gas would be needed to react completely with 6 moles of lithium metal in the following reaction? 4 Li + O2 → 2 Li2O

$4Li + O _{2} \rightarrow 2Li _{2}O$
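The balanced equation fixes the mole ratio: 4 mol Li react with 1 mol O2, so 6 mol Li needs 6/4 = 1.5 mol O2. A tiny Python sketch of the stoichiometry:

```python
# 4 Li + O2 -> 2 Li2O: 4 mol Li react with 1 mol O2
moles_li = 6
moles_o2 = moles_li * (1 / 4)  # O2 needed = Li * (1 mol O2 / 4 mol Li)
assert moles_o2 == 1.5
print(f"{moles_o2} mol O2")
```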
{}
Class 10 Strength of an acid or base, Importance of pH in everyday life ### Topics to be covered => Strength of an acid or base => Importance of pH in everyday life ### STRENGTH OF AN ACID OR BASE: color{green}(★) Strength of acids and bases depends on the number of color{red}(H^+) ions and color{red}(OH^–) ions produced respectively. color{green}(★) With the help of a universal indicator we can find the strength of an acid or base. The universal indicator shows different colours at different concentrations of hydrogen ions in a solution. color{green}(★) A scale for measuring hydrogen ion concentration in a solution, called the pH scale, has been developed. The p in pH stands for ‘potenz’ in German, meaning power. On the pH scale we can measure pH generally from 0 (very acidic) to 14 (very alkaline). color{green}(★) pH should be thought of simply as a number which indicates the acidic or basic nature of a solution. The higher the hydronium ion concentration, the lower the pH value. color{green}(★) The pH of a neutral solution is 7. Values less than 7 on the pH scale represent an acidic solution. As the pH value increases from 7 to 14, it represents an increase in color{red}(OH^–) ion concentration in the solution, that is, an increase in the strength of alkali (Fig. 2.6). Generally paper impregnated with the universal indicator is used for measuring pH. Strong acids give rise to more color{red}(H^(+)) ions. color{red}(Eg. \ \ HCl , H_2SO_4 ) and color{red}(HNO_3). Weak acids give rise to fewer color{red}(H^(+)) ions. color{red}(Eg. \ \ CH_3COOH , H_2CO_3 ("Carbonic acid")) Strong bases give rise to more color{red}(OH^(-)) ions. color{red}(Eg.\ \ NaOH , KOH , Ca(OH)_2) Weak bases give rise to fewer color{red}(OH^(-)) ions. color{red}(Eg. \ \ NH_4OH) ### Importance of pH in Everyday Life color{green}(★) Our body functions within the pH range of 7.0 to 7.8; living organisms can survive only in a narrow range of pH change.
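The pH number itself comes from the hydrogen-ion concentration via pH = −log₁₀[H⁺] (the standard definition, stated here for illustration since the notes above describe the scale only qualitatively). A small Python sketch:

```python
import math

def pH(h_ion_conc):
    """pH from hydrogen-ion concentration in mol/L."""
    return -math.log10(h_ion_conc)

assert abs(pH(1e-7) - 7.0) < 1e-9   # neutral water
assert pH(1e-3) < 7                 # acidic: more H+ ions, lower pH
assert pH(1e-11) > 7                # alkaline: fewer H+ ions, higher pH
print(pH(1e-3), pH(1e-7), pH(1e-11))
```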
★ Importance of pH in our digestive system: the pH level of our body regulates our digestive system. In case of indigestion our stomach produces acid in a very large quantity, because of which we feel pain and irritation in the stomach. To get relief from this pain, antacids are used. These antacids neutralise the excess acid and we get relief. Magnesium hydroxide (milk of magnesia), a mild base, is often used for this purpose.

★ pH of acid rain: when the pH of rain water is less than 5.6, it is called acid rain. When this acidic rain flows into rivers, it lowers the pH of the river water, which is a threat to the survival of aquatic life.

★ pH of soil: plants require a specific range of pH for their healthy growth. If the pH of the soil at any particular place is less or more than normal, then farmers add suitable fertilizers to it.

★ Tooth decay and pH: tooth enamel, made up of calcium phosphate, is the hardest substance in the body. It does not dissolve in water, but is corroded when the pH in the mouth is below 5.5; tooth decay therefore starts when the pH of the mouth is lower than 5.5. Bacteria present in the mouth produce acids by degradation of sugar and food particles remaining in the mouth after eating. The best way to prevent this is to clean the mouth after eating food. Toothpastes, being basic in nature, neutralise the excess acid and prevent tooth decay.

★ A bee sting or nettle sting contains methanoic acid, which causes pain and irritation. Use of a weak base like baking soda provides relief.

JUST FOR THE CURIOUS

★ The atmosphere of Venus is made up of thick white and yellowish clouds of sulphuric acid. Do you think life can exist on this planet?

★ Nettle is a herbaceous plant which grows in the wild. Its leaves have stinging hairs, which cause painful stings when touched accidentally. This is due to the methanoic acid secreted by them.
A traditional remedy is rubbing the area with the leaf of the dock plant, which often grows beside the nettle in the wild. Can you guess the nature of the dock plant? So next time you know what to look out for if you accidentally touch a nettle plant while trekking. Are you aware of any other effective traditional remedies for such stings?
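The inverse relation between hydronium-ion concentration and pH described above is just the definition pH = −log₁₀[H⁺]. A quick sketch (the concentrations are illustrative values, not taken from the text):

```python
import math

def pH(h_concentration):
    """pH is the negative base-10 log of the hydrogen-ion concentration (mol/L)."""
    return -math.log10(h_concentration)

print(pH(1e-7))   # neutral, pH of 7
print(pH(1e-3))   # more H+ ions -> lower pH (acidic)
print(pH(1e-11))  # fewer H+ ions -> higher pH (alkaline)
```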
{}
# Ngô Quốc Anh

## September 1, 2010

### The inverse of the Laplace transform by contour integration

Filed under: Giải tích 7 (MA4247) — Ngô Quốc Anh @ 17:23 Usually, we can find the inverse of the Laplace transform $\mathcal L[\cdot](s)$ by looking it up in a table. In this entry, we show an alternative method that inverts Laplace transforms through the powerful method of contour integration. Consider the piecewise differentiable function $f(x)$ that vanishes for $x < 0$. We can express the function $e^{-cx}f(x)$ by the complex Fourier representation $\displaystyle f(x){e^{ - cx}} = \frac{1}{{2\pi }}\int_{ - \infty }^\infty {{e^{i\omega x}}\left[ {\int_0^\infty {{e^{ - ct}}f(t){e^{ - i\omega t}}dt} } \right]d\omega }$ for any value of the real constant $c$ for which the integral $\displaystyle I = \int_0^\infty {{e^{ - ct}}|f(t)|dt}$ exists. By multiplying both sides of the first equation by $e^{cx}$ and bringing it inside the first integral, $\displaystyle f(x) = \frac{1}{{2\pi }}\int_{ - \infty }^\infty {{e^{(c + i\omega )x}}\left[ {\int_0^\infty {f(t){e^{ - (c + i\omega )t}}dt} } \right]d\omega }$. With the substitution $z = c+\omega i$, where $z$ is a new, complex variable of integration and $dz = i\,d\omega$, $\displaystyle f(x) = \frac{1}{{2\pi i}}\int_{c - \infty i}^{c + \infty i} {{e^{zx}}\left[ {\int_0^\infty {f(t){e^{ - zt}}dt} } \right]dz}$. The quantity inside the square brackets is the Laplace transform $\mathcal L[f](z)$. Therefore, we can express $f(x)$ in terms of its transform by the complex contour integral $\displaystyle f(x) = \frac{1}{2\pi i}\int_{c - \infty i}^{c + \infty i} e^{zx}\,\mathcal L[f](z)\,dz$.
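The inversion formula can be sanity-checked numerically by truncating the contour to a finite piece of the vertical line through $c$. The sketch below is my own illustration, not from the post: it inverts $F(z) = 1/(z+1)^2$, whose inverse transform is known to be $f(t) = t e^{-t}$:

```python
import cmath, math

def bromwich(F, x, c=0.5, omega_max=500.0, n=100_000):
    """Approximate (1/2*pi*i) * integral of e^{zx} F(z) dz along Re z = c
    by the trapezoidal rule, truncating the contour to |Im z| <= omega_max."""
    d = 2 * omega_max / n
    total = 0.0
    for k in range(n + 1):
        z = complex(c, -omega_max + k * d)
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * (cmath.exp(z * x) * F(z)).real  # dz = i*dw cancels the i
    return total * d / (2 * math.pi)

F = lambda z: 1 / (z + 1) ** 2  # Laplace transform of t*exp(-t)
print(bromwich(F, 1.0), math.exp(-1.0))  # both close to 0.3679
```

The transform here decays like $1/\omega^2$ along the contour, so the truncation error is small; a transform decaying only like $1/\omega$ would need a much larger `omega_max`.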
{}
EDITORIAL # Brain, Drugs, and Society + See all authors and affiliations Science  24 Jan 1997: Vol. 275, Issue 5299, pp. 459 DOI: 10.1126/science.275.5299.459 ## Summary It was at a symposium on the cognitive neuroscience of drug abuse that the idea came to me: Scientists need to speak out about the total problem of drug use. Excitement was in the air at this meeting; the National Institute on Drug Abuse (NIDA) had finally acknowledged that cognition—a person's beliefs and goals—plays a role in drug abuse. NIDA announced that it would now fund research in this area. That was the good news; the bad news was that Uncle Sam was only offering $1.5 million for the enterprise. Until now, drug abuse research has been dominated by simple behavioral models that use outmoded theories of reinforcement to explain drug addiction, such as the mistaken notion that users stay addicted because drugs provide an intermittent schedule of reward. Neuroscience took up such ideas, and the field largely became locked into the view that if it could be determined exactly what neurotransmitter responds to what drug, the drug addiction problem would be solved. NIDA now seems to have recognized that these ideas have only limited utility and has invited cognitive neuroscientists to participate in the hunt for a deeper understanding of drug abuse. The importance of cognition is illustrated by the fact that the overall pattern of U.S. drug use has remained constant for years. Although many people experience drugs, only a small number become addicted. Specifically, about 70 percent of Americans have tried illicit drugs, but less than 20 percent have used an illicit drug in the past year and only a few percent have done so in the past month. It is also relevant that drug use drops dramatically with age; past age 35, the casual use of illegal drugs virtually ceases. All of these facts suggest that simple learning and reinforcement concepts do not explain the drug experience. 
Cognition is central to the pattern. Education, alternative choices, and competing temptations all play a role in determining whether the user is seeking occasional reinforcement from drugs or heading for chronic use. The mere taking of drugs, even on a casual basis, does not mean that the user is on the slippery slope to doom; most people eventually walk away from the hedonistic pleasures of illicit drugs. At the same time, it is a challenge for cognitive neuroscience to understand why a significant number of people can't walk away from drug use. It is now clear that some people cannot stop taking drugs even after their body chemistry has been normalized after abstinence. They are easily set off and go back to destructive drug use. Some recidivists are mentally disturbed and are medicating themselves, and some may suffer from a genetic predisposition to drug use, but most recidivists have built up theories about why they do what they do. Fixing body chemistry does not fix these cognitive patterns and beliefs. Better understanding of this crucial cognitive component will take serious money. Enter politics. When I was asked to walk the congressional halls for the Society for Neuroscience in an attempt to talk up more money for brain research, the staff people all said the same thing. In effect, “It's a closed system: If you want more research money, you tell us which program to take money from. Don't think the extra money comes from making one less gun; it comes from the domestic budget. What is it you want to cut?” There is an easy answer to this question in a rational world. Since 1982, the federal budget for drug control programs has gone from $650 million to over $13 billion. What has been the effect? Overall drug availability, purity, and cost have not changed, and the percentage of the population using drugs has remained largely constant, with variations here and there suggesting declines in use because of education.
These programs have thus produced no measurable effect on the drug supply line. Does the U.S. government confiscate some drugs? Sure, but the supply is infinite and new sources pop up like tulips. Yet the government thinks it is newsworthy that $1.5 million will be applied to what is probably the central issue in human drug addiction. This is a silly amount. I know where I would get the money for this research. Let NIDA have one of those drug control billions and you will see some real advances. It is time for scientists to talk back to the politicians. As they say, this is a no-brainer.
{}
# How do you find the slope and y-intercept for the line y=1/4x-3? $y$-intercept $= - 3$ Slope $= \frac{1}{4}$ The constant term is the $y$-intercept; the coefficient of $x$ is the slope.
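The same reading can be double-checked numerically: the intercept is the value at x = 0 and the slope is the rise over a unit run.

```python
def f(x):
    return (1 / 4) * x - 3  # the given line y = (1/4)x - 3

y_intercept = f(0)   # constant term
slope = f(1) - f(0)  # change in y per unit change in x
print(slope, y_intercept)  # 0.25 -3.0
```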
{}
• Yesterday, 12:25 Klaas van Aarsen replied to a thread f in Calculus It appears the answer is: dkm Acronym for "don't kill me". Often used when somebody says something that somebody else finds really funny,... 5 replies | 101 view(s) • April 1st, 2020, 06:35 Not quite, although you started correctly. The limit in this case is as $x\to\infty$, so you want to see what happens when $x$ gets large. This means... 3 replies | 101 view(s) • March 31st, 2020, 14:52 Hi Goody, and welcome to MHB! To prove that \lim_{x\to\infty}\frac{x-1}{x+2} = 1, you have to show that, given $\varepsilon > 0$, you can find $N$... 3 replies | 101 view(s) • March 30th, 2020, 05:31 Let's take a look at a couple of examples. If $f(x)=x$, then $f'(x)=1$ and $f''(x)=0$. So $\lim\limits_{x\to +\infty}f'(x)\ne 0$, isn't it?... 21 replies | 321 view(s) • March 30th, 2020, 04:35 mathmari replied to a thread Limits & properties in Analysis Ok!! As for the second question, why is the limit always zero? (Wondering) 21 replies | 321 view(s) • March 30th, 2020, 04:26 Yep. (Nod) And no, nothing more specific. 21 replies | 321 view(s) • March 30th, 2020, 03:57 mathmari replied to a thread Limits & properties in Analysis OK, so it's graph is a decreasing function, right? Or can we say something more specifically? (Wondering) 21 replies | 321 view(s) • March 30th, 2020, 03:32 I believe so yes. Consider $f(x)=\ell$. It satisfies all conditions, doesn't it? (Wondering) And it is monotone instead of strictly monotone. To... 21 replies | 321 view(s) • March 29th, 2020, 17:21 mathmari replied to a thread Limits & properties in Analysis I don't see how we get the strict inequality. Is maybe the result wrong and it should be monotone instead of strictly monotone? (Wondering) 21 replies | 321 view(s) • March 29th, 2020, 14:38 Indeed. (Thinking) 21 replies | 321 view(s) • March 29th, 2020, 14:31 mathmari replied to a thread Limits & properties in Analysis Ok! 
That means that $f$ is monotone and escpecially descreasing, right? To get that $f$ is strictly monotone, do we have to get $f'(x)<0$ instead of... 21 replies | 321 view(s) • March 29th, 2020, 14:26 Yep. (Nod) 21 replies | 321 view(s) • March 29th, 2020, 14:18 mathmari replied to a thread Limits & properties in Analysis Yes, because then from $\ell\geq f(y)+f'(y)(+\infty-y)$ we get $-\infty$ at the right side of the inequality, and so $\ell$ can be real. ... 21 replies | 321 view(s) • March 29th, 2020, 13:57 Then the inequality also holds yes. What if fill in, say, $f'(y)=-1$ in the inequality? Would it satisfy it? (Wondering) 21 replies | 321 view(s) • March 29th, 2020, 13:52 mathmari replied to a thread Limits & properties in Analysis Ah $(-\infty)\cdot (+\infty)$ is also an undefined form, isn't it? So $f'(y)$ is either $0$ or $-\infty$, right? (Wondering) 21 replies | 321 view(s) • March 29th, 2020, 13:19 That is a possibility yes. What happens if $f'(y)$ is negative? (Wondering) 21 replies | 321 view(s) • March 29th, 2020, 13:14 mathmari replied to a thread Limits & properties in Analysis So that we get that $\ell$ is not infinity we have to get an undefined form, one such form is when infinity is multiplied with zero, so $f'(y)$ must... 21 replies | 321 view(s) • March 29th, 2020, 13:08 We have an expression with $f(y)$, $f'(y)$, and $\ell$. And we already know that $\ell\in\mathbb R$, don't we? So it can't be $\pm\infty$ either. ... 21 replies | 321 view(s) • March 29th, 2020, 12:28 mathmari replied to a thread Limits & properties in Analysis So we have the following: \begin{align*}f(x)\geq f(y)+f'(y)(x-y)&\Rightarrow \lim_{x\rightarrow +\infty}f(x)\geq \lim_{x\rightarrow... 21 replies | 321 view(s) • March 29th, 2020, 10:27 Ah okay. But that is not the case now is it? (Wondering) Sounds like a plan. 
(Nod) 21 replies | 321 view(s) • March 29th, 2020, 09:06 mathmari replied to a thread Limits & properties in Analysis At the previous exercise $L$ belong to $\mathbb{R}\cup \{\pm \infty\}$. So since $f'(x)=2x$ and the limit is equal to $+\infty$ and so it exists.... 21 replies | 321 view(s) • March 29th, 2020, 08:35 Let's see, suppose we pick a convex function, say $f(x)=x^2$. It's convex isn't it? Does $\lim\limits_{x\to +\infty}f'(x)$ exist? (Wondering) ... 21 replies | 321 view(s) • March 29th, 2020, 08:09 mathmari replied to a thread Limits & properties in Analysis In a previous exercise I showed that $$\lim_{x\rightarrow +\infty}(f(x+1)-f(x))=L \Rightarrow \lim_{x\rightarrow +\infty}f'(x)=L$$ if we know that... 21 replies | 321 view(s) • March 29th, 2020, 07:38 Hey mathmari!! Let's start with: It follows from $\lim\limits_{x\rightarrow +\infty}f(x)=\ell$ that $\lim\limits_{x\to +\infty}f'(x)=0$... 21 replies | 321 view(s) • March 29th, 2020, 05:52 mathmari started a thread Limits & properties in Analysis Hey!! :o Could you give me a hint how to prove the following statements? (Wondering) Let $f:\mathbb{R}\rightarrow \mathbb{R}$ be... 21 replies | 321 view(s) • March 27th, 2020, 23:42 Here is this week's POTW: ----- Find the minimum value of $(u-v)^2+\left(\sqrt{2-u^2}-\dfrac{9}{v}\right)^2$ for $0<u<\sqrt{2}$ and $v>0$. ... 0 replies | 89 view(s) • March 27th, 2020, 23:38 Hi MHB! I have decided to extend the deadline by another week so that our members can give this problem another shot and I am looking forward to... 1 replies | 206 view(s)
{}
## Terminal singularities and log canonical pairs

Let $$D$$ be a divisor with at most terminal singularities in a smooth projective variety $$X$$. Is the pair $$(X, D)$$ log canonical?

## virtualenvwrapper message on each new terminal

I have never tried to uninstall `virtualenv` or `virtualenvwrapper`, nor touched the directory. Both were installed via `apt`. When I open the terminal this message appears at the top:

``````
bash: /home/name/.local/bin/virtualenvwrapper.sh: No such file or directory
``````

## How to hide files and directories in a terminal

I'd like to know how to hide my private files and folders on my storage behind a simple password, as I do in Windows with the command "attrib".

## How to add a Sanskrit keyboard with the help of the setxkbmap command in the terminal?

I'm trying to find an effective way to type the correct roman transliteration of Sanskrit characters. This is done using a special keyboard that allows you to type diacritic marks in unicode. Standard diacritic marks for Sanskrit are available in Unicode. I can type this very effectively on OSX on my Macintosh without any problem, using a keyboard called "Easy Unicode" created by Toshiya Unebe. Attempts have been made to emulate this for Linux, such as at https://garudam.info/sanskrit-transliteration-keyboard-on-linux/, but I found that this code did not work. A Unicode IAST keyboard is available for use with Ibus, but it is quite buggy and seems to cause conflicts. I've tried several solutions over the years, looking for a good option. It is important to me because I am responsible for the editing and translation of books in Sanskrit and Bengali. There are keyboard options available for native input of these languages, but I have always found Roman transliteration the best option for input, as shown above. I wish to be able to switch between an English keyboard and a romanized Sanskrit transliteration keyboard.
I've also tried fcitx and different m17n options, but they seem to crash and/or create conflicts with Ibus. Thank you very much for your suggestions.

## Opening a pdf over ssh in a Mac terminal

As the title says, I do this:

``````
> ssh user@host.de
> cd documents/
> okular example.pdf
QXcbConnection: Could not connect to display
Aborted (core dumped)
``````

I connect to an OpenSUSE machine via SSH from a Mac operating system.

## How to resize MP4 videos using the command line on a mac terminal?

I am trying to write scripts that automatically resize photos and videos to specific sizes / percentages of the original.

## Mac terminal "wbk-er-10-132-118-162: ~ johnjara \$" – What does "wbk-er-10-132-118-162:" mean?

The title of my mac terminal is "wbk-er-10-132-118-162: ~ johnjara \$". I do not understand what "wbk-er…" means. Is this the name of my mac? How can I change it? Why would this be the default, unless something I installed (recently homebrew) changed it?

## Terminal – Copy backup directories on Mac from one external drive to another

I decided to follow the safest approach to make copies of my backups. In other words, my Mac backs everything up to an external drive, and then I copy the entire backup directory to the second external drive. I did some research and found that this command allows moving and synchronizing all files from one source directory to another:

``````
rsync -av --delete /Volume/Drive1/MyPictures/ /Volume/Drive1/MyPictures
``````

This worked well for all the directories stored on the external hard drive, except the Mac backup directories. Running it causes an error:

``````
rsync -av --delete /Volume/Drive1/Backups.backupdb/ /Volume/Drive1/Backups.backupdb failed: Operation not permitted (1)
rsync error: some files could not be transferred (code 23)
``````

Any suggestions how to solve this problem? I have tried `rsync -rtb` and it gives the same error.
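For the MP4-resizing question, the tool usually reached for is ffmpeg's `scale` filter (assuming ffmpeg is installed; the question itself does not mention it). A sketch that only builds the command so the logic can be inspected; `-2` for the height keeps the aspect ratio while rounding to an even number:

```python
import shlex

def resize_cmd(src, dst, width=1280):
    """Build an ffmpeg command that scales a video to the given width."""
    return ["ffmpeg", "-i", src, "-vf", f"scale={width}:-2", dst]

cmd = resize_cmd("input.mp4", "small.mp4", width=640)
print(" ".join(shlex.quote(part) for part in cmd))
```

The list can then be passed to `subprocess.run(cmd, check=True)` to perform the actual conversion.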
Manual copying using the Finder also generates alerts with `The operation can’t be completed because backup items can’t be modified.`

## databases – MySQL terminal hack?

Well, is it possible to enter a MySQL terminal (tracking and sending queries) without a username, password and IP address? I have a game server and some guys just shared some details about our users on pastebin (like passwords – unhashed!). They send queries such as setting their own money in a database (money in the game). We use the MySQL sha256 hash for password hashing, general logs are enabled, and we have special mysql query logs on the server hard drive with all queries sent from the game. We thought that there might be an SQL injection problem, but there is nothing in the mysql logs of the game. Their queries are logged only in the general log, and the general log indicates that each request is sent exactly from the server's user and IP. Second, real user passwords (unhashed) are only visible in queries when users try to connect. In the database, there are only hashed passwords. Thus, it generally seems that they can track all queries and that they can send queries on behalf of the game server (as if they had been routinely sent from the game). We protected phpMyAdmin behind a VPN; the game server is on a dedicated machine, and mysql is on a VPS on the same dedicated server. They cannot connect to the database from phpMyAdmin. After the first pastebin leak, we did a lot of things. First, we got a new VPS, protected it, changed the username and password of the database, and checked everything in the script. Perhaps an hour later, they simply sent new queries, as if we had not done anything. I've read this: https://stackoverflow.com/questions/11854231/mysql-database-hacked-not-injections It is a very similar problem, but none of its answers work for us.
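Whatever the interception vector turns out to be, one mitigation suggested by the "passwords visible in queries" detail is to make sure plaintext never appears inside a SQL statement at all: hash with a per-user salt in application code and send only the digest as a bound parameter. A generic sketch using Python's hashlib (illustrative only; not the game server's actual API):

```python
import hashlib, os

def hash_password(password, salt=None):
    """Return (salt, digest) as hex strings; only these ever reach the database."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex(), digest.hex()

salt_hex, digest_hex = hash_password("correct horse battery staple")
# The SQL layer stores/compares digest_hex via a parameterized query,
# so the plaintext password never shows up in the general query log.
print(len(digest_hex))  # 64 hex chars: a 32-byte SHA-256-sized digest
```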
## linux – How to properly clear the terminal history in Node.js

I am creating my own CLI (for my own use only) and I'd like a "clear" method that works like the "reset" command on macOS (or Linux): mainly the fact that it erases all text from the terminal, even the previous text located outside the terminal display window (the scrollback). Ideas? I tried child_process's `exec()`, and I've tried methods that "erase" everything and move the cursor to 0,0. But neither does what I want.
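What `reset` does here can be reproduced with a handful of ANSI escape sequences, and those are language-agnostic: `ESC[2J` erases the visible screen, `ESC[H` homes the cursor, and `ESC[3J` (an xterm extension honored by most modern terminals) wipes the scrollback. A Python sketch; from Node the same string can be written with `process.stdout.write`:

```python
import sys

# ESC[2J: clear visible screen; ESC[H: cursor to top-left;
# ESC[3J: clear scrollback (xterm extension, widely supported).
CLEAR_ALL = "\x1b[2J\x1b[H\x1b[3J"

def hard_clear(stream=sys.stdout):
    stream.write(CLEAR_ALL)
    stream.flush()

hard_clear()
```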
{}
# Add third line of labels to axis in TikZ I have a TikZ figure, where one axis is a bit overpopulated. My solution is to add further lines to the labels using extra x ticks={...}, extra x tick label={...}, extra x style={tick label style={yshift=...}}, However, I can only add a second line with this method. I know an option is simply to add a node with some text in the desired position, but I would like to know if there is a solution using the axis options. My code which does not work: \documentclass[10pt,a4paper]{article} \usepackage{tikz,pgfplots} \usetikzlibrary{intersections} \pgfplotsset{compat=1.14} \begin{document} \begin{tikzpicture} % Define parameters \newcommand*{\DELTAb}{0.25}% \newcommand*{\DELTAa}{0.4}% \pgfmathsetmacro{\LOWB}{1-\DELTAb}% \pgfmathsetmacro{\HIGHB}{1+\DELTAb}% \pgfmathsetmacro{\LOWA}{1-\DELTAa}% \pgfmathsetmacro{\HIGHA}{1+\DELTAa}% \newcommand*{\ALPHA}{0.5}% \begin{axis}[ % Define axis properties axis x line=bottom, axis y line=left, enlargelimits=false, clip=false, ytick=\empty,%{1}, ymin=0, %ymax=1, xtick={0, \LOWB, 1, \HIGHB}, xticklabels={$0$, $w-\Delta_b$, $w$, $w+\Delta_b$}, extra x ticks={0.9}, extra x tick labels={$w - \Delta_b(\delta_{b,2} - \delta_{b,1})$}, extra x tick style={tick label style={yshift=-4mm}}, extra x ticks={\LOWA, \HIGHA}, extra x tick labels={$w-\delta_b$, $w+\delta_b$}, extra x tick style={tick label style={yshift=-8mm}}, xmin=0.43, xmax=1.8, samples=250, domain=0.55:1.6] % Draw functions \addplot[name path=A,smooth,thick,mark=none] {(1/x)^(1/(1-\ALPHA))} node[right,pos=1,yshift=-0.1cm] {$f(w_i)=C{w_i}^{\frac{-\rho}{1-\rho}}$}; % Find intersections \path[name path=C] (\pgfkeysvalueof{/pgfplots/xmin},0) -- (\pgfkeysvalueof{/pgfplots/xmax},0); % X axis named \path[name path=D] (0,\pgfkeysvalueof{/pgfplots/ymin}) -- (0,\pgfkeysvalueof{/pgfplots/ymax}); % Y axis named \path[name path=vline0] (1,0) -- (1,5); %vertical line1 in w \path[name path=vline1] (\LOWB,0) -- (\LOWB,5); %vertical line in -delta \path[name 
path=vline2] (\HIGHB,0) -- (\HIGHB,5); %vertical line in +delta \path[name path=vline3] (\LOWA,0) -- (\LOWA,5); %vertical line in -delta \path[name path=vline4] (\HIGHA,0) -- (\HIGHA,5); %vertical line in +delta \path[name intersections={of=vline0 and A,name=v0A}]; % Create intersection at average \path[name intersections={of=vline1 and A,name=v1A}]; % Create intersection-1 \path[name intersections={of=vline2 and A,name=v2A}]; % Create intersection-3 \path[name intersections={of=vline3 and A,name=v1B}]; % Create intersection-1 \path[name intersections={of=vline4 and A,name=v2B}]; % Create intersection-3 % Draw lines \draw[dashed,thick,gray] (v1B-1) -- (\LOWA,0); \draw[dashed,thick,gray] (v2B-1) -- (\HIGHA,0); \draw[dashed,thick,gray] (v1A-1) -- (\LOWB,0); \draw[dashed,thick,gray] (v2A-1) -- (\HIGHB,0); \draw[dashed,thick,gray] (1, 0) -- (v0A-1); % Draw averages \path[draw,solid,red,name path=P1] (v2A-1) -- (v1A-1); \path[name path=vline0b] (0.9,0) -- (0.9,5); %vertical line1 in w \path[name intersections={of=vline0b and P1,name=v0Ab}]; % Create intersection below average \draw[dashed,gray] (v0Ab-1-|{axis cs:0.43,0}) -- (v0Ab-1) node[left,black,pos=0] {$\lambda^{HET}_a$}; \draw[dashed,gray] (0.9,0) -- (v0Ab-1); \path[draw,solid,red,name path=P2] (v2B-1) -- (v1B-1); \end{axis} \end{tikzpicture} \end{document} The result: So basically, you cannot have two set of extra x ... commands, as the last one overrides any previous one. • is rotating xtick labels and than have it in line an option? Aug 7 '17 at 16:04 • @Zarko That is an option (which hasn't occur to me, thanks), and I know how to do that, but still I want to know how to add a third line. I have rather long entries and would like to try this mode. 
Aug 7 '17 at 16:06 You can have one set of extra labels, and have different shifts, as in Individual tick label style depending on position: extra x ticks={0.9,\LOWA, \HIGHA}, extra x tick labels={{$w - \Delta_b(\delta_{b,2} - \delta_{b,1})$},$w-\delta_b$, $w+\delta_b$}, extra x tick style={tick label style={yshift={\ticknum == 0 ? "-4mm" : "-8mm"}}}, The <comparison> ? <value1> : <value2> is a shorthand for ifthenelse; you could also use the more obvious ifthenelse(\ticknum==0,"-4mm","-8mm"). In the code below I also suggest a somewhat less verbose method for making that diagram, by defining a function and using a ycomb plot instead of calculating all those intersections. \documentclass[10pt,a4paper]{article} \usepackage{pgfplots} \usetikzlibrary{intersections} \pgfplotsset{compat=1.14} \begin{document} \begin{tikzpicture} % Define parameters \newcommand*{\DELTAb}{0.25}% \newcommand*{\DELTAa}{0.4}% \pgfmathsetmacro{\LOWB}{1-\DELTAb}% \pgfmathsetmacro{\HIGHB}{1+\DELTAb}% \pgfmathsetmacro{\LOWA}{1-\DELTAa}% \pgfmathsetmacro{\HIGHA}{1+\DELTAa}% \newcommand*{\ALPHA}{0.5}% \begin{axis}[ % Define axis properties axis x line=bottom, axis y line=left, enlargelimits=false, clip=false, ytick=\empty, ymin=0, xtick={0, \LOWB, 1, \HIGHB}, xticklabels={$0$, $w-\Delta_b$, $w$, $w+\Delta_b$}, extra x ticks={0.9,\LOWA, \HIGHA}, extra x tick labels={{$w - \Delta_b(\delta_{b,2} - \delta_{b,1})$},$w-\delta_b$, $w+\delta_b$}, extra x tick style={tick label style={yshift={ifthenelse(\ticknum==0,"-4mm","-8mm")}}}, xmin=0.43, xmax=1.8, samples=50, domain=0.55:1.6, declare function={f(\x)=(1/\x)^(1/(1-\ALPHA));} ] % Draw functions \addplot[smooth,thick,mark=none] {f(x)} node[right,pos=1,yshift=-0.1cm] {$f(w_i)=C{w_i}^{\frac{-\rho}{1-\rho}}$}; \draw [red] (\LOWA,{f(\LOWA)}) -- (\HIGHA,{f(\HIGHA)}); \draw [red,name path=A] (\LOWB,{f(\LOWB)}) -- (\HIGHB,{f(\HIGHB)}); \path [name path=B] (0.9,\pgfkeysvalueof{/pgfplots/ymin}) -- (0.9,\pgfkeysvalueof{/pgfplots/ymax}); \draw [ name intersections={of=A and B,name=I}, thick,dashed,gray ]
(0.9,\pgfkeysvalueof{/pgfplots/ymin}) -- (I-1) -- (\pgfkeysvalueof{/pgfplots/xmin},0 |- I-1) node[left,black] {$\lambda^{HET}_a$}; \end{axis} \end{tikzpicture} \end{document} • Oh, nice how logical expressions can be embedded in the commands! Regarding your simplifying suggestion, it looks nice too, although it uses pgfplots rather than tikz, which I am less familiar with (the reason why I did not add the pgfplots tag in the first place). It definitely looks neater though! Aug 7 '17 at 16:36 • @luchonacho Your question is about manipulating axis labels for a pgfplots plot, which, while it might involve TikZ commands, should warrant a pgfplots tag, even if you're less familiar with it. Aug 7 '17 at 16:42
{}
# KVPY-SX 2019 Maths Paper with Solutions

KVPY (Kishore Vaigyanik Protsahan Yojana) is a national level scholarship program conducted to identify students with talent and aptitude for research. KVPY-SX 2019 Maths paper solutions are given on this page. BYJU’S provides step by step solutions that are prepared by subject experts. These solutions will help students to understand the marking scheme and pattern of the exam. Students are recommended to revise these solutions so that they can improve their speed and accuracy.

### KVPY SX 2019 - Maths

Question 1: The number of four-letter words that can be formed with letters a, b, c such that all three letters occur is:

1. (a) 30 2. (b) 36 3. (c) 81 4. (d) 256

Solution: The 4-letter word will have a, b, c and a repeated letter from either a, b or c. The possible selections are {a, a, b, c}, {b, b, a, c} and {c, c, a, b}.
First selection {a, a, b, c}: (arrangements of 4 letters)/(repetition of a two times) = 4!/2! = 24/2 = 12
Second selection {b, b, a, c}: 4!/2! = 24/2 = 12
Third selection {c, c, a, b}: 4!/2! = 24/2 = 12
Number of four-letter words that can be formed with letters a, b, c such that all three letters occur = 12 + 12 + 12 = 36

Question 2: Let A = {θ ∈ R: {(1/3)sin θ + (2/3)cos θ}² = (1/3)sin²θ + (2/3)cos²θ}. Then

1. (a) A ⋂ [0, π] is an empty set 2. (b) A ⋂ [0, π] has exactly one point 3. (c) A ⋂ [0, π] has exactly two points 4.
(d) A ⋂ [0, π] has more than two points

Solution: Given A = {θ ∈ R: {(1/3)sin θ + (2/3)cos θ}² = (1/3)sin²θ + (2/3)cos²θ}
((1/3)sin θ + (2/3)cos θ)² = (1/3)sin²θ + (2/3)cos²θ {since (A+B)² = A² + 2AB + B²}
(1/9)sin²θ + (4/9)sin θ cos θ + (4/9)cos²θ = (1/3)sin²θ + (2/3)cos²θ
(2/9)sin 2θ = ((1/3) − (1/9))sin²θ + ((2/3) − (4/9))cos²θ {since 2 sin A cos A = sin 2A}
(2/9)sin 2θ = (2/9)sin²θ + (2/9)cos²θ
(2/9)sin 2θ = (2/9)[sin²θ + cos²θ]
sin 2θ = 1 = sin(π/2)
2θ = nπ + (−1)ⁿ(π/2), n ∈ I
θ = π/4 is the only such value in [0, π].
A ⋂ [0, π] has exactly one point.

Question 3: The area of the region bounded by the lines x = 1, x = 2 and the curves x(y – eˣ) = sin x and 2xy = 2 sin x + x³ is:

1. (a) e² – e – 1/6 2. (b) e² – e – 7/6 3. (c) e² – e + 1/6 4. (d) e² – e + 7/6

Solution: Given curve x(y – eˣ) = sin x, so y = (sin x)/x + eˣ ...(1)
Second curve 2xy = 2 sin x + x³ (given), so y = (sin x)/x + x²/2 …(2)
Area between the curves from x = 1 to x = 2:
Area = ∫₁² (eˣ − x²/2) dx = [eˣ − x³/6]₁² = (e² − 2³/6) − (e¹ − 1³/6) = e² − e − (8/6) + (1/6)
Area = e² − e − 7/6

Question 4: Let AB be a line segment with midpoint C, and D be the midpoint of AC. Let C1 be the circle with diameter AB, and C2 be the circle with diameter AC. Let E be a point on C1 such that EC is perpendicular to AB. Let F be a point on C2 such that DF is perpendicular to AB, and E and F lie on opposite sides of AB. Then the value of sin ∠FEC is

1. (a) 1/√10 2. (b) 2/√10 3. (c) 1/√13 4. (d) 2/√13

Solution: Given that ∠FEC = 90° – θ …(1)
We are going to find the slope of FE: tan θ = slope of FE = (r − (−r/2))/(r − r/2) = 3
tan θ = 3 = perpendicular/base ⇒ cos θ = base/hypotenuse = 1/√10
⇒ sin(90° – θ) = cos θ = 1/√10 {reduction formula}
Hence sin(∠FEC) = 1/√10 {from equation (1)}

Question 5: The number of integers x satisfying −3x⁴ + det $\begin{bmatrix} 1 & x &x^{2} \\ 1 & x^{2} &x^{4} \\ 1 & x^{3} & x^{6} \end{bmatrix}=0$ is equal to

1. (a) 1 2. (b) 2 3. (c) 5 4.
(d) 8 Solution: Given -3x4 + det $\begin{bmatrix} 1 & x &x^{2} \\ 1 & x^{2} &x^{4} \\ 1 & x^{3} & x^{6} \end{bmatrix}=0$ ⇒ x = 0 or x(1 – x)2(x2 – 1) = 3x ⇒ x = 0 or (1 – x)2(x2 – 1) = 3 ⇒ x = 0 or x4–2x3+ 2x – 4 = 0 ⇒ x = 0 or (x–2)(x3 + 2) = 0 Integer value are 0, 2. Question 6: Let P be a non-zero polynomial such that P(1 + x) = P(1 – x) for all real x, and P(1) = 0. Let m be the largest integer such that (x – 1)m divides P(x) for all such P(x). Then m equals 1. (a) 1 2. (b) 2 3. (c) 3 4. (d) 4 Solution: P(x) is non-zero polynomial and P(1 + x) = P(1 – x) for all x Differentiate with respect to x P’(1 + x) = (–1)P’(1 – x) ⇒ P’(1 + x) = –P’(1 – x) Put x = 0 ⇒ P’(1) = –P’(1) ⇒ P’(1) + P’(1) = 0 ⇒ 2P’(1) = 0 ⇒ P’(1) = 0 and P(1) = 0 ⇒ P(x) touches the x-axis at x = 1 ⇒ P(x) = (x – 1)2 Q(x) ⇒ m = 2 such that (x – 1)m divides P(x) for all such P(x). Question 7: Let Then A has 1. (a) exactly one element 2. (b) exactly two element 3. (c) exactly three element 4. (d) infinitely many elements Solution: Given A = {x ∈ R: f(x) = 1} f(x) = 1 for x = 0 for x ≠ 0, f(x) = 1 ⇒ x sin 1/x = 1 ⇒ sin 1/x = 1/x ⇒ sin θ = θ which is true only when θ = 0 As θ ≠ 0 so it is not possible. Question 8: Let S be a subset of the plane defined by S = {(x, y); |x| + 2|y| = 1}. Then the radius of the smallest circle with centre at the origin and having non-empty intersection with S is 1. (a) 1/5 2. (b) 1/√5 3. (c) 1/2 4. (d) 2/√5 Solution: Given S = {(x, y); |x| + 2|y| = 1} So |x| + 2|y| = 1 Equations are x + 2y = 1 ……………. (1) x – 2y = 1 ……………. (2) –x + 2y = 1 ……………. (3) –x – 2y = 1 ……………. (4) Find intersecting point equation (1) & (2) x + 2y = 1 x – 2y = 1 2x = 2 x = 1 So y = 0 {from equation (1)} Intersecting point of equations (1) & (2) is (1, 0). Similarly, Find intersecting point equation (1) & (3) x + 2y = 1 –x + 2y = 1 4y = 2 y = 1/2 x = 0 {from equation (1)} Intersecting point of equations (1) & (3) is (0, 1/2). 
Find intersecting point equation (2) & (4) x – 2y = 1 –x – 2y = 1 –4y = 2 y = –1/2 x = 0 {from equation (2)} Intersecting point of equations (2) & (4) is (0, –1/2). Find intersecting point equation (3) & (4) –x + 2y = 1 –x – 2y = 1 –2x = 2 x = –1 So y = 0 {from equation (3)} Intersecting point of equations (3) & (4) is (–1, 0) Length of the perpendicular from a point (x1, y1) to a line ax + by = c is $d=\left | \frac{ax_{1}+by_{1}-c}{\sqrt{a^{2}+b^{2}}} \right |$ According to question minimum radius is the distance between point (0, 0) and line x + 2y = 1. So, Minimum radius = |(0+2×0-1)/ √(12+22)| = |-1/ √(1+4)| = 1/√5 Question 9: The number of solutions of the equation sin (9x) + sin (3x) = 0 in the closed interval [0, 2 π] is 1. (a) 7 2. (b) 13 3. (c) 19 4. (d) 25 Solution: Given sin(9x) + sin(3x) = 0 Question 10: Among all the parallelograms whose diagonals are 10 and 4, the one having maximum area has its perimeter lying in the interval 1. (a) (19, 20] 2. (b) (20, 21] 3. (c) (21, 22] 4. (d) (22, 23] Solution: Area of parallelogram = (1/2)d1d2sin φ Where d1 and d2 is the diagonal of the parallelogram. Maximum area of parallelogram = ½ d1d2 {since where φ =π/2 } It is a rhombus ΔAOB is a right a triangle so apply Pythagoras theorem So, a2 = (d1/2)2 Side length (a) = √((d1/2)2+ (d2/2)2) ⇒ (a) = √((10/2)2+ (4/2)2) ⇒ (a) = √((5)2+ (2)2) ⇒ (a) = √(25+4) ⇒ (a) = √29 Perimeter = 4a = 4(√29 ) ∈ [21, 22) Question 11: The number of ordered pairs (a, b) of positive integers such that (2a-1)/b and (2b-1)/a are both integers is 1. (a) 1 2. (b) 2 3. (c) 3 4. (d) more than 3 Solution: Given (2a-1)/b and (2b-1)/a are both integers. From equation (1) ⇒ 4a – 2 = 2b λ ⇒ 4a – 2 = λ (µa + 1) {From equation (2)} ⇒ (4 – λ µ)a = λ + 2 Since multiplication of λ and µ lie between only 1 & 3 because a and b are positive integer. 
So 1 ≤µ ≤ 3 In equation (2) put λ = 1, µ = 1 (4 – 1) a = 1 + 2 ⇒ a = 1, b = 1 Similarly, λ = 1, µ = 3 ⇒ a = 3, b = 5 λ = 3, µ = 1 ⇒ a = 5, b = 3 So, total set = 3 (1, 1), (3, 5), (5, 3) Question 12: Let z = x + iy and w = u + iv be complex numbers on the unit circle such that z2 + w2 = 1. Then the number of ordered pairs (z, w) is 1. (a) 0 2. (b) 4 3. (c) 8 4. (d) infinite Solution: Let z = e = cos α + i sin α w = e = cos β + i sin β Since, z2 + w2 = 1 ⇒ (cos α + i sin α)2 + (cos β + i sin β)2 = 1 {(A + iB)2 = A2–B2 + 2iAB} ⇒ cos2α – sin2α + 2i sinαcosα + cos2β – sin2β + 2i sinβcosβ = 1 ⇒ (cos2 α + cos2 β) + i(sin2 α + sin2 β ) = 1{cos2A = cos2A – sin2A, sin2A = 2sinA cosA} Comparing real and imaginary part So cos2 α + cos2 β = 1 and sin2 α + sin2 β = 0 2cos(α + β)cos(α -β) = 1 And sin2 α = –sin2 β Squaring both side sin22 α = sin22 β 1 – cos22 α = 1 – cos22 β cos22 α = cos22 β Taking square root both side cos2 α = ±cos2 β cos2 α = –cos2 β or cos2 α + cos2 β = 0 (cancelled) If cos2 α = cos2 β So 2cos2 α = 1 cos 2α = ½ Question 13: Let E denote the set of letters of the English alphabet, V = {a, e, i, o, u}, and C be the complement of V in E. Then, the number of four-letter words (where repetitions of letters are allowed) having at least one letter from V and at least one letter from C is 1. (a) 261870 2. (b) 314160 3. (c) 425880 4. (d) 851760 Solution: ‘V’ denotes vowels and ‘C’ denotes consonants. Total alphabet words = 26 Total vowel in alphabet = 5 Total consonant in alphabet = 21 Total 4 letter words = (26)4 Number of 4 letter words which contains only vowels = (5)4 Number of 4 letter words which contains only consonant = (21)4 Number of 4 letter words which contains atleast one vowel and atleast one consonants: (26)4 – (21)4 – (5)4 = 261870 Question 14: Let σ1, σ2, σ3 be planes passing through the origin. Assume that σ1 is perpendicular to the vector (1, 1, 1), σ2 is perpendicular to a vector (a, b, c), and σ3 is perpendicular to the vector (a2, b2, c2). 
What are all the positive values of a, b, and c so that σ1⋂ σ2 ⋂σ3 is a single point? 1. (a) Any positive value of a, b, and c other than 1 2. (b) Any positive value of a, b, and c where either a ≠ b, b ≠ c or a ≠ c 3. (c) Any three distinct positive values of a, b, and c 4. (d) There exist no such positive real numbers a, b, and c Solution: σ1 is perpendicular to $\hat{i}+\hat{j}+\hat{k}$ σ2 is perpendicular to $a\hat{i}+b\hat{j}+c\hat{k}$ σ3 is perpendicular to $a^{2}\hat{i}+b^{2}\hat{j}+c^{2}\hat{k}$ since σ1 , σ2 and σ3 planes are passing through the origin Hence the planes are σ1: x + y + z = 0 σ2: ax + by + cz = 0 σ3: a2x + b2y + c2z = 0 Δ = (a – b)(b – c)(c – a) For unique solution, Δ ≠ 0 ⇒ (a – b)(b – c)(c – a) ≠ 0 Hence a ≠ b, b ≠ c, c ≠ a Question 15: Ravi and Rashmi are each holding 2 red cards and 2 black cards (all four red and all four black cards are identical). Ravi picks a card at random from Rashmi, and then Rashmi picks a card at random from Ravi. This process is repeated a second time. Let p be the probability that both have all 4 cards of the same colour. Then p satisfies 1. (a) p≤ 5% 2. (b) 5% < p ≤10% 3. (c) 10% < p ≤ 15% 4. (d) 15% < p Solution: If Ravi withdraw red cards from Rashmi, then Rashmi withdraw black card from Ravi and this process repeat again, vice-versa. If Ravi withdraw black card from Rashmi P = 2(2/4)(2/5)(1/4)(1/5) P = 2/100 P = 2% Question 16: Let A1, A2 and A3 be the regions on R2 defined by A1 = {(x, y); x ≥ 0, y ≥0, 2x + 2y – x2 – y2> 1 > x + y}, A2 = {(x, y); x ≥ 0, y ≥0, x + y > 1 > x2 + y2}, A3 = {(x, y); x ≥ 0, y ≥0, x + y > 1 > x3 + y3}, Denote by |A1|, |A2| and |A3| the areas of the regions A1, A2, and A3 respectively. Then 1. (a) |A1| > |A2| > |A3| 2. (b) |A1| > |A3| > |A2| 3. (c) |A1| = |A2| < |A3| 4. 
(d) |A1| = |A3| > |A2|

Solution:

Given
A1 = {(x, y); x ≥ 0, y ≥ 0, 2x + 2y – x² – y² > 1 > x + y}
A2 = {(x, y); x ≥ 0, y ≥ 0, x + y > 1 > x² + y²}
A3 = {(x, y); x ≥ 0, y ≥ 0, x + y > 1 > x³ + y³}

For A1: the condition 2x + 2y – x² – y² > 1 is (x – 1)² + (y – 1)² < 1. Find the intersecting points of this circle and the line:

⇒ (x – 1)² + (1 – x – 1)² = 1 {put the line x + y = 1 in the circle equation}
⇒ x² – 2x + 1 + x² = 1
⇒ 2x² – 2x = 0
⇒ x(x – 1) = 0
⇒ x = 0, 1, hence y = 1, 0: the points (0, 1) and (1, 0)

Area of A1 = (1/4 of area of circle) – (area of triangle) = π(1)²/4 – (1/2)(1)(1) = (π/4) – (1/2)

For A2: A2 = {(x, y); x ≥ 0, y ≥ 0, x + y > 1 > x² + y²}

⇒ 1 > x² + y², x + y > 1, x ≥ 0, y ≥ 0

Find the intersecting points of this circle and the line:

⇒ x² + (1 – x)² = 1 {put the line x + y = 1 in the circle equation}
⇒ x² + 1 – 2x + x² = 1
⇒ 2x² – 2x = 0
⇒ x(x – 1) = 0
⇒ x = 0, 1, so y = 1, 0: the points (0, 1) and (1, 0)

Area of A2 = (1/4 of area of circle) – (area of triangle) = π(1)²/4 – (1/2)(1)(1)

|A2| = π/4 – 1/2

For A3: A3 = {(x, y); x ≥ 0, y ≥ 0, x + y > 1 > x³ + y³}

⇒ x + y > 1, x³ + y³ < 1, x ≥ 0, y ≥ 0

For 0 ≤ x, y ≤ 1 we have x³ ≤ x² and y³ ≤ y², so every point of A2 lies in A3, while the curve x³ + y³ = 1 bulges beyond the circle x² + y² = 1, so A3 also contains points outside A2.

Hence |A3| > |A2| = |A1|

Question 17: Let f : R -> R be a continuous function such that f(x²) = f(x³) for all x ∈ R. Consider the following statements.

(I) f is an odd function
(II) f is an even function
(III) f is differentiable everywhere.

Then

1. (a) I is true and III is false
2. (b) II is true and III is false
3. (c) both I and III are true
4. (d) both II and III are true

Solution:

f: R -> R is a continuous function such that f(x²) = f(x³) …(1) for all x ∈ R

Put x = –x: f(x²) = f(–x³)

From equation (1) we have f(x³) = f(–x³)

Put x³ = t: f(t) = f(–t)

So f(x) is an even function.

(ii) Now take x³ = t. Then f(t^(2/3)) = f(t)

Replacing t by t^(2/3) repeatedly,

f(t) = f(t^(2/3)) = f(t^((2/3)²)) = f(t^((2/3)³)) = … = f(t^((2/3)ⁿ))

This is true for all t ∈ R and any n ∈ I.

Hence, taking n -> ∞, (2/3)ⁿ -> 0, and by continuity f(t) = f(t⁰) = f(1).

So f(x) is a constant function, hence it is differentiable everywhere. Both II and III are true.
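The limiting argument used in Question 17 can be illustrated numerically. The short sketch below is an added illustration, not part of the original solution: it shows that iterating t -> t^(2/3) drives any positive starting value to 1, which is exactly why continuity forces f to be constant.

```python
# Illustrating Question 17: iterating t -> t**(2/3) drives the exponent
# (2/3)**n to 0, so t**((2/3)**n) -> 1 for every t > 0. Continuity of f
# then forces f(t) = f(1), i.e. f is constant.
def iterate_to_one(t, n=200):
    """Apply t -> t**(2/3) a total of n times and return the result."""
    for _ in range(n):
        t = t ** (2.0 / 3.0)
    return t

for start in (0.01, 0.5, 7.0, 1e6):
    print(start, iterate_to_one(start))  # every result is numerically 1.0
```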
Question 18: Suppose a continuous function f : [0, ∞) -> R satisfies $f(x)=2\int_{0}^{x}tf(t)dt+1$ for all x $\geq$ 0. Then f(1) equals:

1. (a) e
2. (b) e²
3. (c) e⁴
4. (d) e⁶

Solution:

Given f: [0, ∞) -> R with f(x) = $2\int_{0}^{x}tf(t)dt+1$ for all x ≥ 0

Differentiate with respect to x:

f '(x) = 2x f(x)

f '(x)/f(x) = 2x

Integrate both sides:

$\int \frac{f'(x)}{f(x)}= \int 2x\: dx$

ln f(x) = x² + c

f(x) = $e^{x^{2}+c}$

Putting x = 0 in the given equation gives f(0) = 1, so e^c = 1 and c = 0

So f(x) = $e^{x^{2}}$

f(1) = e

Question 19: Let a > 0, a ≠ 1. Then the set S of all positive real numbers b satisfying (1 + a²)(1 + b²) = 4ab is

1. (a) an empty set
2. (b) a singleton set
3. (c) a finite set containing more than one element
4. (d) (0, ∞)

Solution:

a > 0, a ≠ 1

(1 + a²)(1 + b²) = 4ab

By AM–GM, 1 + a² ≥ 2a and 1 + b² ≥ 2b, so (1 + a²)(1 + b²) ≥ 4ab, with equality only when a = b = 1.

But it is given in the question that a ≠ 1. Hence, there are no values of b, and S is an empty set.

Question 20: Let f : R -> R be a function defined by Then, at x = 0, f is:

1. (a) Not continuous
2. (b) Continuous but not differentiable
3. (c) Differentiable and the derivative is not continuous
4. (d) Differentiable and the derivative is continuous

Solution:

f : R -> R

Check continuity: f(x) is continuous at x = 0

Check differentiability: R.H.D. = L.H.D., so f(x) is differentiable at x = 0

The derivative is continuous at x = 0.

Question 21: The points C and D on a semicircle with AB as diameter are such that AC = 1, CD = 2, and DB = 3. Then the length of AB lies in the interval

1. (a) [4, 4.1)
2. (b) [4.1, 4.2)
3. (c) [4.2, 4.3)
4.
(d) [4.3, ∞)

Solution:

Let AB = x.

In ΔACB, (AB)² = (AC)² + (BC)² {since the angle in a semicircle is a right angle}, so BC = √(x² – 1)

In ΔADB, (AB)² = (AD)² + (BD)², so AD = √(x² – 9)

Apply Ptolemy's theorem to the cyclic quadrilateral; we get

AB × CD + AC × BD = AD × BC

2x + 3 = √(x² – 1) √(x² – 9)

On squaring we get

(2x + 3)² = (x² – 1)(x² – 9) {(A + B)² = A² + B² + 2AB}

4x² + 12x + 9 = x⁴ – 10x² + 9

x⁴ – 14x² – 12x = 0

x³ – 14x – 12 = 0 {dividing by x, since x ≠ 0}

f(x) = x³ – 14x – 12

f(4.1) = (4.1)³ – 14(4.1) – 12 = –0.479

f(4.2) = (4.2)³ – 14(4.2) – 12 = 3.288

f(4.1)·f(4.2) < 0

As f(x) is a continuous function, one root of f(x) lies in [4.1, 4.2), i.e. the length of AB lies in this interval.

Question 22: Let ABC be a triangle and let D be the midpoint of BC. Suppose cot(∠CAD) : cot(∠BAD) = 2 : 1. If G is the centroid of triangle ABC, then the measure of ∠BGA is

1. (a) 90°
2. (b) 105°
3. (c) 120°
4. (d) 135°

Solution:

Using the formula for the length of a median, AD = (1/2)√(2b² + 2c² – a²)

(a²/4) + b² = 2c² + (1/4)(2b² + 2c² – a²)

a²/2 + b²/2 = 5c²/2

a² + b² = 5c²

Here c = AB. The centroid G divides each median in the ratio 2 : 1, so with BE the median from B,

AG² + BG² = (4/9)(AD² + BE²) = (4/9)·(1/4)[(2b² + 2c² – a²) + (2a² + 2c² – b²)] = (1/9)(a² + b² + 4c²) = (1/9)(5c² + 4c²) = c² = AB²

By the converse of the Pythagoras theorem, ∠BGA = 90°.

Question 23: Let f(x) = x⁶ – 2x⁵ + x³ + x² – x – 1 and g(x) = x⁴ – x³ – x² – 1 be two polynomials. Let a, b, c and d be the roots of g(x) = 0. Then the value of f(a) + f(b) + f(c) + f(d) is

1. (a) -5
2. (b) 0
3. (c) 4
4.
(d) 5 Solution: Given g(x) = x4 – x3 – x2 – 1 = 0 f(x) = x6 – 2x5 + x3 + x2 – x – 1 since a, b, c, d are roots of x4 – x3 – x2 – 1 = 0 Hence ∑a = 1 ∑ab = -1 Now, f(x) = x6 – 2x5 + x3 + x2 – x – 1 f(x) = x2(x4 – x3 – x2 – 1) – x(x4 – x3 – x2 – 1) + (2x2 – 2x – 1) f(x) = (x2 – x) (x4 – x3 – x2 – 1) + (2x2 – 2x – 1) f(x) = 2x2 – 2x – 1 {since x4 – x3 – x2 – 1 = 0} Now, ⇒f(a) + f(b) + f(c) + f(d) ⇒2a2 – 2a – 1 + 2b2 – 2b – 1 + 2c2 – 2c – 1 + 2d2 – 2d – 1 ⇒2[a2 + b2 + c2 + d2] – 2[a + b + c + d] – 4 ⇒2[(a + b + c + d)2 – 2(ab + bc + cd + ac + bd)] – 2[a + b + c + d] – 4 ⇒2[1 – 2(–1)] – 2(1) – 4 = 0 Question 24: Let $\vec{a}= \hat{i}+\hat{j}+\hat{k}$ , $\vec{b}= 2\hat{i}+2\hat{j}+\hat{k}$ and $\vec{c}= 5\hat{i}+\hat{j}-\hat{k}$ be three vectors. The area of the region formed by the set of points whose position vectors $\vec{r}$ satisfy the equations $\vec{r}.\vec{a}=5$ and $\left |\vec{r}-\vec{b} \right |+\left |\vec{r}-\vec{c} \right |=4$ is closest to the integer 1. (a) 4 2. (b) 9 3. (c) 14 4. (d) 19 Solution: Given $\vec{a}= \hat{i}+\hat{j}+\hat{k}$ $\vec{b}= 2\hat{i}+2\hat{j}+\hat{k}$ $\vec{c}= 5\hat{i}+\hat{j}-\hat{k}$ $\vec{r}.\vec{a}=5$ ⇒ x+y+z = 5 $\left |\vec{r}-\vec{b} \right |+\left |\vec{r}-\vec{c} \right |=4$ Sum of distances of a point $\vec{r}$ from two fixed points with position vector $\vec{b}\: and\: \vec{c}$ is constant. 
$\left | \vec{b}- \vec{c}\right |=\left | 2\hat{i}+2\hat{j}+\hat{k}-5\hat{i} -\hat{j}+\hat{k}\right |$ = $\left | -3\hat{i}+\hat{j}+2\hat{k}\right |$ = $\sqrt{(3^{2})+1^{2}+2^{2}}$ = $\sqrt{14}$ With $\vec{b}$and $\vec{c}$ lie on the plane x + y + z = 5 Area in the plane constitutes an ellipse Distance between $\vec{b}$ and $\vec{c}$ = 2 × (semi major axis) × e = $\sqrt{14}$ 2ae = $\sqrt{14}$ 2a = 4 So 4e = $\sqrt{14}$ e = $\frac{\sqrt{14}}{4}$ Eccentricity (e) = √(1-b2 /a2) (√14/4)2 = 1-b2/a2 b2 = ½ area of ellipse = πab = π(2)1/√2 = √2 π Question 25: The number of solutions to sin(π sin2(θ)) + sin(π cos2(θ)) = 2cos (π/2 cos θ ) satisfying 0 ≤ θ ≤ 2 π is 1. (a) 1 2. (b) 2 3. (c) 4 4. (d) 7 Solution: Given sin(π sin2(θ)) + sin(π cos2(θ)) = 2cos (π/2 cos θ ) {sin C+sin D = 2 sin ((C+D)/2 )cos((C-D)/2)} θ = 2n π, 2n π ±2n π/3 θ belongs to {0, 2 π, 2 π/3, 4 π/3} Case-II: {Take – ve} cos2 θ + cos θ = 4n Only possible cos2 θ + cos θ = 0 {cos2 θ = 2cos2 θ – 1} 2cos2 θ + cos θ – 1 = 0 (cos θ + 1)(2cos θ – 1) = 0 cos θ = –1, 1/2 θ = (2n + 1) π, 2n π ± π/3 θ belongs to { π, π/3, 5 π/3} Total solutions is 7. Question 26: Let J = $\int_{0}^{1}\frac{x}{1+x^{8}}dx$ Consider the following assertions: I. J > 1/4 II. J < π/8 Then 1. (a) Only I is true 2. (b) only II is true 3. (c) both I and II are true 4. (d) neither I nor II is true Solution: Given J = $\int_{0}^{1}\frac{x}{1+x^{8}}dx$ Now,1 + x8< 2 {Limit 0 to 1} 1/(1+x8)>1/2 ⇒ x/(1+x8)>x/2 Apply integration both sides $\int_{0}^{1}\frac{x}{1+x^{8}}dx>\int_{0}^{1}\frac{x}{2}$ $J>\left [ \frac{x^{2}}{4} \right ]^{1}_{0}$ J>1/4 Statement I is true. 
Now, for 0 < x < 1, 1 + x⁴ > 1 + x⁸

1/(1 + x⁴) < 1/(1 + x⁸)

x/(1 + x⁸) > x/(1 + x⁴)

Apply integration on both sides

$\int_{0}^{1}\frac{x}{1+x^{8}}dx> \int_{0}^{1}\frac{x}{1+x^{4}}dx$

$J> \int_{0}^{1}\frac{x}{1+x^{4}}dx$

Put x² = t, 2x dx = dt

$J> \frac{1}{2}\int_{0}^{1}\frac{dt}{1+t^{2}}$

$J> \frac{1}{2}\left [ \tan^{-1} t \right ]^{1}_{0}$

J > (1/2)(π/4)

J > π/8, so statement II (J < π/8) is false. Only statement I is true.

Question 27: Let f: (–1, 1) -> R be a differentiable function satisfying (f '(x))⁴ = 16(f(x))² for all x ∈ (–1, 1), f(0) = 0. The number of such functions is

1. (a) 2
2. (b) 3
3. (c) 4
4. (d) more than 4

Solution:

Given (f '(x))⁴ = 16(f(x))²

Taking the square root on both sides, (f '(x))² = ±4 f(x)

Case I: If f(x) > 0, (f '(x))² = 4 f(x)

Taking the square root on both sides, f '(x) = ±2√f(x)

For f '(x) = 2√f(x): f '(x)/√f(x) = 2; integrating both sides, with f(0) = 0, gives f(x) = x²

Case II: If f(x) < 0, (f '(x))² = –4 f(x)

f '(x) = ±2√(–f(x))

Similarly f(x) = –x², –1 < x < 0 or f(x) = –x², 1 > x > 0

Case III: Also, one singular solution of the given differential equation is f(x) = 0, –1 < x < 1

Hence, functions can be

f(x) = x², –1 < x < 1
f(x) = –x², –1 < x < 1

More functions are also possible (the branches above can be pieced together on subintervals), so there are more than 4.

Question 28: For x ∈ R, let f(x) = |sin x| and g(x) = $\int_{0}^{x}f(t)dt$ . Let p(x) = g(x) – (2/π)x. Then

1. (a) p(x + π) = p(x) for all x
2. (b) p(x + π) ≠ p(x) for at least one but finitely many x
3. (c) p(x + π) ≠ p(x) for infinitely many x
4. (d) p is a one-one function

Solution:

Given f(x) = |sin x| …(1)

g(x) = $\int_{0}^{x}f(t)dt$ …(2)

p(x) = g(x) – (2/π)x …(3)

From equation (2), put x -> x + π. Since |sin t| has period π and $\int_{0}^{\pi}\left | \sin t \right |dt=2$,

g(x + π) = g(x) + 2 …(4)

From equation (3), put x -> x + π:

p(x + π) = g(x + π) – (2/π)(x + π)

p(x + π) = g(x) + 2 – 2x/π – 2 {from equation (4)}

p(x + π) = g(x) – 2x/π …(5)

Subtracting equation (3) from equation (5):

p(x + π) – p(x) = 0, i.e. p(x + π) = p(x) for all x.

Question 29: Let A be the set of vectors $\vec{a}$ = (a1, a2, a3) satisfying a1² + (3a2²/4) + (7a3²/16) = a1a2 + a2a3/4 + a3a1/2. Then

1. (a) A is empty
2. (b) A contains exactly one element
3. (c) A has 6 elements
4.
(d) A has infinitely many elements

Solution:

Given a1² + (3a2²/4) + (7a3²/16) = a1a2 + a2a3/4 + a3a1/2

(16a1² + 12a2² + 7a3²)/16 = (4a1a2 + a2a3 + 2a3a1)/4

16a1² + 12a2² + 7a3² = 4(4a1a2 + a2a3 + 2a3a1)

16a1² + 12a2² + 7a3² – 16a1a2 – 4a2a3 – 8a3a1 = 0

(8a1² – 16a1a2 + 8a2²) + (8a1² – 8a1a3 + 2a3²) + (4a2² – 4a2a3 + a3²) + 4a3² = 0

(2√2a1 – 2√2a2)² + (2√2a1 – √2a3)² + (2a2 – a3)² + 4a3² = 0

This is only possible when a1 = a2 = a3 = 0, so there is only one element in the set.

Question 30: Let f : [0, 1] -> [0, 1] be a continuous function such that x² + (f(x))² ≤ 1 for all x ∈ [0, 1] and

1. (a) π/12
2. (b) π/15
3. (c) (√2 – 1)π/2
4. (d) π/10

Solution:

Given x² + (f(x))² ≤ 1

Let y = f(x); then x² + y² ≤ 1 for all x ∈ [0, 1]

y² ≤ 1 – x²

y = λ√(1 – x²)

Apply integration:

= $\left [ \sin ^{-1}x \right ]_{\frac{1}{2}}^{\frac{1}{\sqrt{2}}}$

= $\left [ \sin ^{-1}\frac{1}{\sqrt{2}} -\sin ^{-1}\frac{1}{2}\right ]$

= π/4 – π/6 = π/12
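The integral bounds derived in Question 26 can be sanity-checked numerically. The sketch below is an added verification using a simple midpoint rule, not part of the original solutions:

```python
import math

# Numerically estimate J = integral over [0, 1] of x/(1 + x^8) dx and
# compare it against the bounds discussed in Question 26.
def midpoint_integral(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

J = midpoint_integral(lambda x: x / (1.0 + x**8), 0.0, 1.0)
print(J)                 # approximately 0.4335
print(J > 0.25)          # True: statement I (J > 1/4) holds
print(J > math.pi / 8)   # True: J exceeds pi/8, so statement II fails
```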
# Linear Algebra/Properties of Determinants

← Exploration Properties of Determinants The Permutation Expansion →

As described above, we want a formula to determine whether an $n \! \times \! n$ matrix is nonsingular. We will not begin by stating such a formula. Instead, we will begin by considering the function that such a formula calculates. We will define the function by its properties, then prove that the function with these properties exists and is unique, and also describe formulas that compute this function. (Because we will show that the function exists and is unique, from the start we will say "$\det(T)$" instead of "if there is a determinant function then $\det(T)$" and "the determinant" instead of "any determinant".)

Definition 2.1

An $n \! \times \! n$ determinant is a function $\det:\mathcal{M}_{n \! \times \! n}\to \mathbb{R}$ such that

1. $\det (\vec{\rho}_1,\dots,k\cdot\vec{\rho}_i + \vec{\rho}_j,\dots,\vec{\rho}_n) =\det (\vec{\rho}_1,\dots,\vec{\rho}_j,\dots,\vec{\rho}_n)$ for $i\ne j$
2. $\det (\vec{\rho}_1,\ldots,\vec{\rho}_j, \dots,\vec{\rho}_i,\dots,\vec{\rho}_n) = -\det (\vec{\rho}_1,\dots,\vec{\rho}_i,\dots,\vec{\rho}_j, \dots,\vec{\rho}_n)$ for $i\ne j$
3. $\det (\vec{\rho}_1,\dots,k\vec{\rho}_i,\dots,\vec{\rho}_n) = k\cdot \det (\vec{\rho}_1,\dots,\vec{\rho}_i,\dots,\vec{\rho}_n)$ for $k\ne 0$
4. $\det(I)=1$ where $I$ is an identity matrix

(the $\vec{\rho}\,$'s are the rows of the matrix). We often write $\left|T\right|$ for $\det (T)$.

Remark 2.2

Property (2) is redundant since

$T\;\xrightarrow[]{\rho_i+\rho_j} \;\xrightarrow[]{-\rho_j+\rho_i} \;\xrightarrow[]{\rho_i+\rho_j} \;\xrightarrow[]{-\rho_i} \;\hat{T}$

swaps rows $i$ and $j$. It is listed only for convenience.

The first result shows that a function satisfying these conditions gives a criterion for nonsingularity.
(Its last sentence is that, in the context of the first three conditions, (4) is equivalent to the condition that the determinant of an echelon form matrix is the product down the diagonal.) Lemma 2.3 A matrix with two identical rows has a determinant of zero. A matrix with a zero row has a determinant of zero. A matrix is nonsingular if and only if its determinant is nonzero. The determinant of an echelon form matrix is the product down its diagonal. Proof To verify the first sentence, swap the two equal rows. The sign of the determinant changes, but the matrix is unchanged and so its determinant is unchanged. Thus the determinant is zero. For the second sentence, we multiply a zero row by −1 and apply property (3). Multiplying a zero row with a constant leaves the matrix unchanged, so property (3) implies that $\det(T) = -\det(T)$. The only way this can be is if $\det(T) = 0$. For the third sentence, where $T \rightarrow\cdots\rightarrow\hat{T}$ is the Gauss-Jordan reduction, by the definition the determinant of $T$ is zero if and only if the determinant of $\hat{T}$ is zero (although they could differ in sign or magnitude). A nonsingular $T$ Gauss-Jordan reduces to an identity matrix and so has a nonzero determinant. A singular $T$ reduces to a $\hat{T}$ with a zero row; by the second sentence of this lemma its determinant is zero. Finally, for the fourth sentence, if an echelon form matrix is singular then it has a zero on its diagonal, that is, the product down its diagonal is zero. The third sentence says that if a matrix is singular then its determinant is zero. So if the echelon form matrix is singular then its determinant equals the product down its diagonal. If an echelon form matrix is nonsingular then none of its diagonal entries is zero so we can use property (3) of the definition to factor them out (again, the vertical bars $\left|\cdots\right|$ indicate the determinant operation). 
$\begin{vmatrix} t_{1,1} &t_{1,2} & &t_{1,n} \\ 0 &t_{2,2} & &t_{2,n} \\ & &\ddots \\ 0 & & &t_{n,n} \end{vmatrix} = t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot \begin{vmatrix} 1 &t_{1,2}/t_{1,1} & &t_{1,n}/t_{1,1} \\ 0 &1 & &t_{2,n}/t_{2,2} \\ & &\ddots \\ 0 & & &1 \end{vmatrix}$ Next, the Jordan half of Gauss-Jordan elimination, using property (1) of the definition, leaves the identity matrix. $= t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot \begin{vmatrix} 1 &0 & &0 \\ 0 &1 & &0 \\ & &\ddots \\ 0 & & &1 \end{vmatrix} = t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot 1$ Therefore, if an echelon form matrix is nonsingular then its determinant is the product down its diagonal. That result gives us a way to compute the value of a determinant function on a matrix. Do Gaussian reduction, keeping track of any changes of sign caused by row swaps and any scalars that are factored out, and then finish by multiplying down the diagonal of the echelon form result. This procedure takes the same time as Gauss' method and so is sufficiently fast to be practical on the size matrices that we see in this book. Example 2.4 Doing $2 \! \times \! 2$ determinants $\begin{vmatrix} 2 &4 \\ -1 &3 \end{vmatrix} = \begin{vmatrix} 2 &4 \\ 0 &5 \end{vmatrix} =10$ with Gauss' method won't give a big savings because the $2 \! \times \! 2$ determinant formula is so easy. However, a $3 \! \times \! 3$ determinant is usually easier to calculate with Gauss' method than with the formula given earlier. $\begin{vmatrix} 2 &2 &6 \\ 4 &4 &3 \\ 0 &-3 &5 \end{vmatrix} = \begin{vmatrix} 2 &2 &6 \\ 0 &0 &-9 \\ 0 &-3 &5 \end{vmatrix} = -\begin{vmatrix} 2 &2 &6 \\ 0 &-3 &5 \\ 0 &0 &-9 \end{vmatrix} =-54$ Example 2.5 Determinants of matrices any bigger than $3 \! \times \! 3$ are almost always most quickly done with this Gauss' method procedure. 
$\begin{vmatrix} 1 &0 &1 &3 \\ 0 &1 &1 &4 \\ 0 &0 &0 &5 \\ 0 &1 &0 &1 \end{vmatrix} = \begin{vmatrix} 1 &0 &1 &3 \\ 0 &1 &1 &4 \\ 0 &0 &0 &5 \\ 0 &0 &-1 &-3 \end{vmatrix} = -\begin{vmatrix} 1 &0 &1 &3 \\ 0 &1 &1 &4 \\ 0 &0 &-1 &-3 \\ 0 &0 &0 &5 \end{vmatrix} =-(-5)=5$ The prior example illustrates an important point. Although we have not yet found a $4 \! \times \! 4$ determinant formula, if one exists then we know what value it gives to the matrix — if there is a function with properties (1)-(4) then on the above matrix the function must return $5$. Lemma 2.6 For each $n$, if there is an $n \! \times \! n$ determinant function then it is unique. Proof For any $n \! \times \! n$ matrix we can perform Gauss' method on the matrix, keeping track of how the sign alternates on row swaps, and then multiply down the diagonal of the echelon form result. By the definition and the lemma, all $n \! \times \! n$ determinant functions must return this value on this matrix. Thus all $n \! \times \! n$ determinant functions are equal, that is, there is only one input argument/output value relationship satisfying the four conditions. The "if there is an $n \! \times \! n$ determinant function" emphasizes that, although we can use Gauss' method to compute the only value that a determinant function could possibly return, we haven't yet shown that such a determinant function exists for all $n$. In the rest of the section we will produce determinant functions. ## Exercises For these, assume that an $n \! \times \! n$ determinant function exists for all $n$. This exercise is recommended for all readers. Problem 1 Use Gauss' method to find each determinant. 1. $\begin{vmatrix} 3 &1 &2 \\ 3 &1 &0 \\ 0 &1 &4 \end{vmatrix}$ 2. $\begin{vmatrix} 1 &0 &0 &1 \\ 2 &1 &1 &0 \\ -1 &0 &1 &0 \\ 1 &1 &1 &0 \end{vmatrix}$ Problem 2 Use Gauss' method to find each. 1. $\begin{vmatrix} 2 &-1 \\ -1 &-1 \end{vmatrix}$ 2. 
$\begin{vmatrix} 1 &1 &0 \\ 3 &0 &2 \\ 5 &2 &2 \end{vmatrix}$ Problem 3 For which values of $k$ does this system have a unique solution? $\begin{array}{*{4}{rc}r} x & & &+ &z &- &w &= &2 \\ & &y &- &2z & & &= &3 \\ x & & &+ &kz & & &= &4 \\ & & & &z &- &w &= &2 \end{array}$ This exercise is recommended for all readers. Problem 4 Express each of these in terms of $\left|H\right|$. 1. $\begin{vmatrix} h_{3,1} &h_{3,2} &h_{3,3} \\ h_{2,1} &h_{2,2} &h_{2,3} \\ h_{1,1} &h_{1,2} &h_{1,3} \end{vmatrix}$ 2. $\begin{vmatrix} -h_{1,1} &-h_{1,2} &-h_{1,3} \\ -2h_{2,1} &-2h_{2,2} &-2h_{2,3} \\ -3h_{3,1} &-3h_{3,2} &-3h_{3,3} \end{vmatrix}$ 3. $\begin{vmatrix} h_{1,1}+h_{3,1} &h_{1,2}+h_{3,2} &h_{1,3}+h_{3,3} \\ h_{2,1} &h_{2,2} &h_{2,3} \\ 5h_{3,1} &5h_{3,2} &5h_{3,3} \end{vmatrix}$ This exercise is recommended for all readers. Problem 5 Find the determinant of a diagonal matrix. Problem 6 Describe the solution set of a homogeneous linear system if the determinant of the matrix of coefficients is nonzero. This exercise is recommended for all readers. Problem 7 Show that this determinant is zero. $\begin{vmatrix} y+z &x+z &x+y \\ x &y &z \\ 1 &1 &1 \end{vmatrix}$ Problem 8 1. Find the $1 \! \times \! 1$, $2 \! \times \! 2$, and $3 \! \times \! 3$ matrices with $i,j$ entry given by $(-1)^{i+j}$. 2. Find the determinant of the square matrix with $i,j$ entry $(-1)^{i+j}$. Problem 9 1. Find the $1 \! \times \! 1$, $2 \! \times \! 2$, and $3 \! \times \! 3$ matrices with $i,j$ entry given by $i+j$. 2. Find the determinant of the square matrix with $i,j$ entry $i+j$. This exercise is recommended for all readers. Problem 10 Show that determinant functions are not linear by giving a case where $\left|A+B\right|\neq\left|A\right|+\left|B\right|$. Problem 11 The second condition in the definition, that row swaps change the sign of a determinant, is somewhat annoying. It means we have to keep track of the number of swaps, to compute how the sign alternates. Can we get rid of it? 
Can we replace it with the condition that row swaps leave the determinant unchanged? (If so then we would need new $1 \! \times \! 1$, $2 \! \times \! 2$, and $3 \! \times \! 3$ formulas, but that would be a minor matter.) Problem 12 Prove that the determinant of any triangular matrix, upper or lower, is the product down its diagonal. Problem 13 Refer to the definition of elementary matrices in the Mechanics of Matrix Multiplication subsection. 1. What is the determinant of each kind of elementary matrix? 2. Prove that if $E$ is any elementary matrix then $\left|ES\right|=\left|E\right|\left|S\right|$ for any appropriately sized $S$. 3. (This question doesn't involve determinants.) Prove that if $T$ is singular then a product $TS$ is also singular. 4. Show that $\left|TS\right|=\left|T\right|\left|S\right|$. 5. Show that if $T$ is nonsingular then $\left|T^{-1}\right|=\left|T\right|^{-1}$. Problem 14 Prove that the determinant of a product is the product of the determinants $\left|TS\right|=\left|T\right|\,\left|S\right|$ in this way. Fix the $n \! \times \! n$ matrix $S$ and consider the function $d:\mathcal{M}_{n \! \times \! n}\to \mathbb{R}$ given by $T\mapsto \left|TS\right|/\left|S\right|$. 1. Check that $d$ satisfies property (1) in the definition of a determinant function. 2. Check property (2). 3. Check property (3). 4. Check property (4). 5. Conclude the determinant of a product is the product of the determinants. Problem 15 A submatrix of a given matrix $A$ is one that can be obtained by deleting some of the rows and columns of $A$. Thus, the first matrix here is a submatrix of the second. $\begin{pmatrix} 3 &1 \\ 2 &5 \end{pmatrix} \qquad \begin{pmatrix} 3 &4 &1 \\ 0 &9 &-2 \\ 2 &-1 &5 \end{pmatrix}$ Prove that for any square matrix, the rank of the matrix is $r$ if and only if $r$ is the largest integer such that there is an $r \! \times \! r$ submatrix with a nonzero determinant. This exercise is recommended for all readers. 
Problem 16 Prove that a matrix with rational entries has a rational determinant.

? Problem 17 Find the element of likeness in (a) simplifying a fraction, (b) powdering the nose, (c) building new steps on the church, (d) keeping emeritus professors on campus, (e) putting $B$, $C$, $D$ in the determinant $\begin{vmatrix} 1 &a &a^2 &a^3 \\ a^3 &1 &a &a^2 \\ B &a^3 &1 &a \\ C &D &a^3 &1 \end{vmatrix}.$ (Anning & Trigg 1953)

## References

• Anning, Norman (proposer); Trigg, C. W. (solver) (Feb. 1953), "Elementary problem 1016", American Mathematical Monthly (American Mathematical Society) 60 (2): 115.
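The computation described in Lemma 2.3 — reduce to echelon form, track the sign changes caused by row swaps, then multiply down the diagonal — can be written directly as a program. The following is one possible sketch of that procedure (not from the text), checked against the matrices of Examples 2.4 and 2.5:

```python
# Determinant via the procedure of Lemma 2.3: Gaussian reduction with
# partial pivoting, tracking sign flips from row swaps (property (2)),
# using row combinations that leave the determinant unchanged
# (property (1)), then multiplying down the diagonal.
def det(matrix):
    """Determinant of a square matrix given as a list of row lists."""
    a = [row[:] for row in matrix]   # work on a copy
    n = len(a)
    sign = 1.0
    for col in range(n):
        # choose the largest pivot in this column for numerical safety
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[pivot][col] == 0:
            return 0.0               # singular: a zero column below the diagonal
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign             # each row swap flips the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    prod = sign
    for i in range(n):
        prod *= a[i][i]              # multiply down the diagonal
    return prod

# The matrices from Examples 2.4 and 2.5:
print(det([[2, 4], [-1, 3]]))                                   # 10.0
print(det([[2, 2, 6], [4, 4, 3], [0, -3, 5]]))                  # -54.0
print(det([[1, 0, 1, 3], [0, 1, 1, 4],
           [0, 0, 0, 5], [0, 1, 0, 1]]))                        # 5.0
```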
How Missile ballast stability works? 1. Apr 26, 2017 Zahid launching a missile from the water, how the ballast stability works? 2. Apr 26, 2017 Mech_Engineer Have you considered trying a simple internet search for "ship ballast stabilization"? 3. Apr 26, 2017 Zahid yes, i m working on that, but i feel like ship have big ballast system compare to missile which is 1m diameter, also i would need some exact work on, missile stability 4. Apr 26, 2017 Staff: Mentor Welcome to the PF. Your question does not make sense as you have asked it. Could you please provide a lot more detail and some sketches of what you are asking? Missiles do not carry ballast. And ships that launch missiles would generally not need to do anything with their ballast when launching missiles, since the reaction forces involved in the launch are a tiny fraction of the weight of the ship. https://s-media-cache-ak0.pinimg.com/originals/59/d9/29/59d9293c7f45df8e0c5a0951f2f8d0cb.jpg 5. Apr 26, 2017 Zahid yes, you are right but what i m working on is totally different thing, have you heard about sea dragon or seabee or if you could visit to www.rippleaerospace.com here you will see the missile launch. what i want to work is on the Missile control system and ballast stability thank you very much. Last edited: Apr 26, 2017 6. Apr 26, 2017 Staff: Mentor Are you designing a weapon to attack the USA? The reference you provided says that the missile is underwater only 2 seconds after launch. Inertia will be an important factor, perhaps the dominant factor. You need a dynamic model, including the spatial distribution of buoyancy, and including the hydrodynamics of the water passing over the outside shell at a pretty good speed. That is not simple. The data you need from real missiles is probably classified. Do you have access to the missile data and dynamic computer models? 7. Apr 26, 2017 Zahid Haha, No access to any data. just working on a university project. 8. 
Apr 26, 2017, Staff: Mentor:
I only read a little at the link. Why do they use that technique to launch the missile? It seems like it would be much easier to just launch it from a small boat that is towed by the ship. What advantage is there to having it in the water, held upright by ballast, before launch?

9. Apr 26, 2017, Staff: Mentor:
That's a question for a sailor. For many consecutive days, seas can be too rough to have a rocket free-standing vertically without toppling and/or spoiling the launch angle. If you bolt it to the deck, it could capsize the boat because the c.g. is too high.

10. Apr 26, 2017, Staff: Mentor:
If you have no data, then make a computer model and run a range of cases covering a range of assumptions about ballast distribution. With only 2 seconds underwater, even an unstable ballast distribution may not tip enough to ruin the launch. It takes time to tip.

11. Apr 27, 2017, Zahid:
It's not important that I do everything that is on that website; that is just an example.

12. Apr 27, 2017, Zahid:
My focus area is the dynamics of the missile ballast, not the whole system.

13. Apr 29, 2017, JBA:
It appears that they intend for the rocket to be buoyant for towing to the launch site, and if it is to be towed, then the longitudinal buoyancy cg must be in the center of the rocket length, so they require ballast to hold it vertical for launch. At the same time, I have rarely seen so much technobabble to promote a concept that is not very different from the method used for launching rockets and ICBMs from submarines, where the rockets are ejected vertically from the submarines before firing.

14. May 27, 2017, Zahid:
Can anybody help me? I'm in a critical condition.

15. May 27, 2017, Staff: Mentor

16. May 27, 2017, Staff: Mentor

17. May 28, 2017, Zahid:
The rocket is laid in the water, and we fill the ballast with water to stand the rocket upright.

Rocket specifications:
Length: 7 m
Weight: 602 kg
Diameter: 0.52 m
The length of the ballast is 2 m. The rocket is a two-stage rocket.
The ballast has 5 chambers for water to fill. How much water is needed in the ballast to make the rocket stand? What are the rocket stability calculations during standing and launch? How much pressure is released during the launch? And what are the other specifications related to the dynamics of the ballast system? Thank you in advance; any detail anybody can give is welcome.

18. May 28, 2017, Staff: Mentor:
You don't expect us to do the whole project for you, do you? Please show us what calculations you have done so far.

19. May 28, 2017, Zahid:
I have completed the theory part and now need to work on the calculations; I also did the design and simulations. But I don't yet know how to calculate the stability in the water, or the behaviour of the surroundings (wind and water waves) during tilting and standing (the angle of heel).

20. May 28, 2017, Staff: Mentor:
Show us your work on this schoolwork project or the thread will be closed. Do the work please and show us.
Last edited: May 30, 2017
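The standing-stability question in post 17 can be sketched with basic hydrostatics. This is a hedged back-of-envelope model, not from the thread: the water density, the dry rocket's CG at mid-length, the ballast water's CG height, and the 400 kg fill are all assumptions for illustration. A floating vertical cylinder is stable upright when the metacentric height GM = KB + BM - KG is positive, which for a slender cylinder essentially requires the combined CG to sit below the center of buoyancy:

```python
import math

# Back-of-envelope hydrostatics for the rocket described in the thread.
# The mass-distribution numbers below (dry CG height, ballast water CG,
# fill mass) are illustrative assumptions, not data from the posts.

RHO_SEA = 1025.0   # kg/m^3, seawater (assumption)
DRY_MASS = 602.0   # kg, from the thread
DIAMETER = 0.52    # m, from the thread
R = DIAMETER / 2.0

def draft(total_mass_kg):
    """Depth to which the vertical cylinder sinks (Archimedes' principle)."""
    waterplane_area = math.pi * R**2
    return total_mass_kg / (RHO_SEA * waterplane_area)

def metacentric_height(total_mass_kg, kg_height):
    """GM = KB + BM - KG; a positive GM means the float is stable upright."""
    T = draft(total_mass_kg)
    KB = T / 2.0                     # center of buoyancy of the immersed part
    I = math.pi * R**4 / 4.0         # second moment of the waterplane area
    V = total_mass_kg / RHO_SEA      # displaced volume
    BM = I / V                       # tiny for a slender cylinder
    return KB + BM - kg_height

# Assume the dry rocket's CG sits at mid-length (3.5 m above the base) and
# that 400 kg of ballast water has its CG 1.0 m above the base.
ballast_water = 400.0
total = DRY_MASS + ballast_water
kg_combined = (DRY_MASS * 3.5 + ballast_water * 1.0) / total

print(f"draft = {draft(total):.2f} m")
print(f"GM    = {metacentric_height(total, kg_combined):.2f} m")
```

With these guessed numbers GM comes out negative, i.e. this particular fill would not be enough; the point of the sketch is that the stability question reduces to driving the combined CG below KB + BM.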
# Collections

Rant's variable system has 4 collection types: list, map, string, and range.

## List

The list type represents a mutable, resizable, ordered collection of zero or more values. Lists are initialized using a pair of parentheses containing the list elements, separated by semicolons:

# A pair of parentheses is treated like an empty list
<$empty-list = ()>

# Initialize a list with one empty element
<$one-empty = (~)>

# Create a list of numbers
<$lucky-numbers = (1; 2; 3; 5; 7; 11; 13;)>

# Create a list of lists
<$important-lists = ((A; B; C); (D; E; F))>

# Trailing semicolons are allowed!
<$powers-of-two = (1; 2; 4; 8; 16;)>

### List indexing

List indices start at 0 and can be accessed by adding the index to the accessor path. You can also use negative indices to access items relative to the end of a list, starting from -1 and going down.

# Create a list
<$list = (1;2;3;4;5;6;7;8;9)>

# Get first item
<list/0>\n # 1

# Get second item
<list/1>\n # 2

# Get last item
<list/-1>\n # 9

# Get second to last item
<list/-2>\n # 8

# Change third item from 3 to -3
<list/2 = -3>
<list/2> # -3

## Map

The map type represents an unordered, mutable, resizable collection of zero or more key-value pairs, where each key is a unique string. Map keys are always coerced to strings; if you try to access a map using a non-string key, the key will be automatically coerced to a string before use.

Maps use similar syntax to lists, but with some extra requirements:

• The @ symbol must appear before the opening ( to identify the collection as a map.
• Each element must be a key-value pair, with the key and value separated by an = symbol.

Map keys come in two flavors:

• Static keys: Evaluated at compile-time. They must be identifiers or string literals.
• Dynamic keys: Evaluated at run-time. They must be blocks.
# Create an empty map
<$empty-map = @()>

# Create a map with various value types
<$secret-key = occupation>
<$person = @(
  name = John;
  age = 25;
  "favorite color" = {red|green|blue};
  {<secret-key>} = programmer
)>

## Range

The range type represents a linear series of integers parameterized by three values:

1. an inclusive start bound,
2. an exclusive end bound,
3. the interval between values in the range.

As such, very large ranges can be stored, indexed, and enumerated without needing to pre-allocate all possible values in memory. Ranges are created with the built-in [range] function:

# Print numbers 0 through 4
[cat: **[range: 0; 5]; \n]

## Output:
0
1
2
3
4
##

Backwards ranges are possible, too; just use a start bound larger than the end bound:

# Print numbers 4 through 0
[cat: **[range: 4; -1]; \n]

## Output:
4
3
2
1
0
##

By default, [range] uses an interval of 1, but you can change this by adding a third argument:

# Print every second number between 0 and 9
[cat: **[range: 0; 10; 2]; \n]

## Output:
0
2
4
6
8
##

## Retrieving values from collections

Variable accessors can access individual elements in lists and maps by using the access operator /.

<$person = @(
  name = Bob;
  age = 30;
  hobbies = (programming; hiking)
)>

name = <person/name>\n
age = <person/age>\n
hobbies = [join: <person/hobbies>; ,\s]

## Output:
name = Bob
age = 30
hobbies = programming, hiking
##

Additionally, a variable access path does not need to be made entirely of constants. You can use a block that resolves to a key or index.

Map example:

<$my-map = @(
  a = foo;
  b = bar;
  c = baz
)>
<my-map/{{a|b|c}}> # Outputs "foo", "bar", or "baz"

List example:

<$my-list = (foo;bar;baz)>
<my-list/0> # "foo"
<my-list/1> # "bar"
<my-list/2> # "baz"
<my-list/{[rand:0;2]}> # "foo", "bar", or "baz"
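As a cross-language illustration only (this is Python, not Rant), Python's built-in `range` has the same three parameters with the same semantics described above: inclusive start, exclusive end, and a step interval, stored lazily rather than materialized. One difference to note: unlike Rant's [range], Python needs an explicit negative step for a backwards range.

```python
r = range(0, 5)            # numbers 0 through 4
print(list(r))             # [0, 1, 2, 3, 4]

back = range(4, -1, -1)    # backwards range: explicit -1 step in Python
print(list(back))          # [4, 3, 2, 1, 0]

evens = range(0, 10, 2)    # every second number between 0 and 9
print(list(evens))         # [0, 2, 4, 6, 8]

# Huge ranges are cheap: indexing works without enumerating the values.
big = range(0, 10**18)
print(big[10**17])         # 100000000000000000
```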
Below is a breakdown of which operations each collection type supports:

| Type   | Index | Mutate | Slice | Splice |
|--------|-------|--------|-------|--------|
| list   | 🟢    | 🟢     | 🟢    | 🟢     |
| map    | 🟢    | 🟢     | 🔴    | 🔴     |
| string | 🟢    | 🔴     | 🟢    | 🔴     |
| range  | 🟢    | 🔴     | 🟢    | 🔴     |

Legend: 🟢 = supported; 🔴 = not supported

## Collection auto-concatenation

Multiple collections of the same type are automatically concatenated or merged when printed alone in the same sequence. This significantly reduces the amount of boilerplate required when generating collections with a varying number of elements, or with elements that are conditionally included.

### Examples

# List auto-concatenation
[assert-eq: (A)(B); (A; B)]

# Probabilistic list generation
<%strange-list =
  { (A; B) | (C; D) }
  { (E; F) | (G; H) }
>

##
<strange-list> will be one of:
* (A; B; E; F)
* (A; B; G; H)
* (C; D; E; F)
* (C; D; G; H)
##

# Automatically combine two maps
<%data =
  @( foo = 1 )
  @( bar = 2 )
>
[assert-eq: <data/foo?>; 1]
[assert-eq: <data/bar?>; 2]
In 1787 Fourier decided to train for the priesthood and entered the Benedictine abbey of St Benoit-sur-Loire.

When played, the sounds of the notes of a chord mix together and form a sound wave.

For a unit step, the Laplace transform is $1/s$, but the imaginary axis is not in the region of convergence, and therefore the Fourier transform is not $1/j\omega$; in fact, the integral $\int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt$ …

In this paper we present a simple open-source web application, which can help students to understand the basics of the FT applied to nuclear magnetic resonance (NMR) spectroscopy.

We'll be using the Fourier transform submodule in the SciPy package, scipy.fft. We'll be using the SciPy fast Fourier transform function (scipy.fft.fft) to compute the Fourier transform. If you're familiar with sorting algorithms, think of the fast Fourier transform (FFT) as the Quicksort of Fourier transforms.

The beam finally passes to the detector.

Fourier series: periodic signals can be represented as a sum of sines and cosines multiplied by certain weights. Further, such periodic signals can be broken down into component signals with the following properties.

It deals mostly with the work of Carl Friedrich Gauss, an eminent German mathematician.

Define the Fourier transform of a step function or a constant signal. Unit step: what is the Fourier transform of $f(t) = 0$ for $t < 0$, $f(t) = 1$ for $t \ge 0$?

The discrete-time Fourier transform didn't get rid of infinities 1 and 2, but it did do away with infinity number 3, as its name suggests.
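The scipy.fft usage mentioned above can be sketched concretely. NumPy's numpy.fft module exposes the same core interface and keeps the example self-contained; the signal, tone frequency, and sampling rate are illustrative choices, not values from the text:

```python
import numpy as np

# Sample a pure 50 Hz sine wave (illustrative signal) at 512 Hz for 1 second.
# Powers of two keep the frequency grid exact.
fs = 512
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t)

spectrum = np.fft.fft(signal)                 # complex spectrum
freqs = np.fft.fftfreq(len(signal), d=1/fs)   # matching frequency axis

# Since the tone fits an exact number of cycles in the window, the
# magnitude spectrum has a sharp peak at +/-50 Hz.
peak_hz = abs(freqs[np.argmax(np.abs(spectrum))])
print(peak_hz)   # 50.0
```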
Gauss and the History of the Fast Fourier Transform. Introduction: the fast Fourier transform (FFT) has become well known as a very efficient algorithm for calculating the discrete Fourier transform (DFT) of a sequence of N numbers. Date of publication: October 1984.

Fourier was unsure if he was making the right decision in training for the priesthood. His interest in mathematics continued, however, and he corresponded with C. L. Bonard, the professor of mathematics at Auxerre.

This is the reason why the Fourier spectrum is sometimes expressed as a function of …

A Fourier transform converts the time domain to the frequency domain, with absorption as a function of frequency.

In one popular text (Folland, Fourier Analysis and Its Applications), a factor of $1/2\pi$ is placed in front of the Fourier transform. When the variable $u$ is complex, the Fourier transform is equivalent to the Laplace transform.

The goals for the course are to gain a facility with using the Fourier transform, both specific techniques and general principles, and learning to recognize when, why, and how it is used.

A table of Fourier transform pairs, with proofs, is here.

The Fourier transform decomposes a function of time (a signal) into the frequencies that make it up, similarly to how a musical chord can be expressed as the amplitude (or loudness) of its constituent notes.

The fast Fourier transform, as proposed by Cooley and Tukey [7].

Denoted $\mathcal{L}\{f(t)\}$, it is a linear operator of a function $f(t)$ with a real argument $t$ ($t \ge 0$) that transforms it to …
The Fourier transform of a function is complex, with the magnitude representing the amount of a given frequency and the argument representing the phase shift from a sine wave of that frequency. Under the action of the Fourier transform, linear operators on the original space which are invariant with respect to a shift become (under certain conditions) multiplication operators in the image space.

This site is designed to present a comprehensive overview of the Fourier transform, from the theory to specific applications.

This works because the waves of the different notes interfere with each other, adding together or canceling out at different points in the wave.

The Fourier transform (FT) is named in honor of Joseph Fourier (1768-1830), one of the greatest names in the history of mathematics and physics.

Highlights in the history of the Fourier transform (Domínguez A., PMID: 27192746 [Indexed for MEDLINE]). The Fourier transform of $\sin(2 \pi f_0 t)$ can be found using only the Fourier transform of $\cos(2 \pi f_0 t)$.

So let's compare the equations for the Fourier transform and the discrete-time Fourier transform. A thorough tutorial of the Fourier transform, for both the layman and the practicing scientist. History of IR and FTIR spectroscopy.

This term can also be applied to both the frequency-domain representation and the mathematical function used.

The OFT is used in many disciplines to obtain the spectrum …

In mathematics, the graph Fourier transform is a mathematical transform which eigendecomposes the Laplacian matrix of a graph into eigenvalues and eigenvectors. Analogously to the classical Fourier transform, the eigenvalues represent frequencies and the eigenvectors form what is known as a graph Fourier basis.

The Fourier transform helps in extending the Fourier series to non-periodic functions, which allows viewing any function as a sum of simple sinusoids.
Overview of the Continuous Fourier Transform and Convolutions.

ENGR 383 Signals and Systems, Professor Paul M. Kump. Course description: introduction to continuous- and discrete-time signals and systems, with emphasis on Fourier analysis.

History of the Laplace transform: the Laplace transform is a widely used integral transform with many applications in physics and engineering.

A short note on the "invention" of the Fourier transform: in Plancherel's "Contribution à l'étude de la représentation d'une fonction arbitraire par les intégrales définies" (1910), Rendiconti del Circolo Matematico di Palermo, he wrote (beginning of Chapter 5, p. 328; translation mine): …

The graph Fourier transform is important in spectral graph theory. The Fourier transform is a math function that can be used to find the base frequencies that a wave is made of. See also: Fourier integral, spectral function.

Fourier Transform Spectroscopy (FTS), 14-17 November 2016, Kongresshalle am Zoo Leipzig, Leipzig, Germany. The FTS meeting focuses on the latest advances in instrumentation and applications of FTS to astronomy and astrophysics, atmospheric science and remote sensing, laboratory spectroscopy, analytical chemistry, bio-medicine, and more.

"Gauss and the history of the fast Fourier transform", published in IEEE ASSP Magazine (Volume 1, Issue 4, October 1984), pages 14-21.

Applications of Fourier analysis, Case 2: aperiodic continuous functions. A continuous-time unbounded aperiodic function $x(t)$ has a continuous unbounded frequency spectrum $X(j\omega)$, obtained via the continuous-time Fourier transform (CTFT). Conceptually, the CTFT may be thought of as the limit of (1.1) in the case where the period $T \to \infty$ [4].
An investigation into the history of the fast Fourier transform (FFT) algorithm is considered. Mathematically speaking, the Fourier transform is a linear operator that maps a functional space to another function space and decomposes a …

Convolution property of the Fourier transform.

The Fourier transformation (FT) is a mathematical process frequently encountered by chemistry students. However, it remains an automated background process perceived by many students as difficult to understand.

History of Quaternion and Clifford Fourier Transforms and Wavelets.

The Fourier series is a sum of sinusoids representing the given function to be analysed, whereas the discrete Fourier transform is the function we get when that summation is carried out discretely.

The radix-2 Cooley-Tukey FFT algorithm.

The inversion formula for the Fourier transform is very simple: $F^{-1}[g(x)] = F[g(-x)]$.

Figure 12: example of a spectrum converted by a Fourier transform.

"Historical Notes on the Fast Fourier Transform", James W. Cooley, Peter A. W. Lewis, and Peter D. Welch, Proceedings of the IEEE, Vol. 55, No. 10, October 1967, p. 1675. Abstract: the fast Fourier transform algorithm has a long and interesting history that has only recently been appreciated. In this paper, the contributions of many investigators are described.

(This is the "true" Fourier transform because of a connection between periodic functions and circles, and because the representation theory of the circle group says that these are the so-called irreducible representations.) He gave us the Fourier series and the Fourier transform to convert a signal into the frequency domain.
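The relationship between the DFT's definition and the FFT that the passage keeps circling around can be made concrete with a naive O(N²) transform written straight from the definition (an illustration; the impulse input is an arbitrary choice):

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform, straight from the definition:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N). An FFT computes the same values
    in O(N log N) by recursively splitting the sum."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# The DFT of a unit impulse is flat: every frequency bin has magnitude 1.
X = dft([1, 0, 0, 0])
print([round(abs(v), 6) for v in X])   # [1.0, 1.0, 1.0, 1.0]
```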
Together with a great variety, the subject also has a great coherence, and the hope is that students come to appreciate both.

A Fourier transform is a linear transformation that decomposes a function into the inputs from its constituent frequencies, or, informally, gives the amount of each frequency that composes a signal. Imagine playing a chord on a piano.

Topics include: the Fourier transform as a tool for solving physical problems.

The Fourier transform has the same uses as the Fourier series: for example, the integrand $F(u)\exp(iux)$ is a solution of a given linear equation, so that the integral sum of these solutions is the most general solution of the equation.

The Fourier transform is also called a generalization of the Fourier series.

I ask you: what is the value at $0$ of the Fourier transform of $\phi$?

The "true" Fourier transform is to write the function as an infinite sum of $e^{2\pi i n x}$ with complex coefficients and $n$ an integer.
### marcose18's blog

By marcose18, history, 4 years ago

Greetings Codeforces! I was stuck on this problem, and its editorial is also not clear enough. You can find the editorial here. I wanted to ask one question: if we have modular equations like in this problem, and the mod is prime, is it true that the values have to be equal? I mean, in this task C(i) = (sum of its C(j)'s) % mod, and since the mod is a prime number, does that mean they have to be equal (C(i) = sum of its C(j)'s)? For composite moduli it will definitely not hold: one simple failing case is to consider a factor of the mod; if n = mod/factor + 2, it easily fails, with 1 being a friend of all the others, all the others having 1 as a friend, and all of them having equal candies. Of course, there will be only a few n that fail this, depending on the factors the mod has, but the cases exist. So if there is such a property for prime numbers, I would like to know it (not taking into account the case of 1). And please also give some hints on how to solve this problem using Gaussian elimination or any other method you find suitable. Thanks in advance.

By marcose18, 4 years ago

Hello Codeforces community! Suppose I have declared a variable:

long cnt = 0;
// then I wish to do
cnt += Math.pow(2, 59);
// and again I do
cnt += Math.pow(2, 3);

Ideally the value should now be cnt + 8, but the value output is still cnt; i.e., the 8 is not added. But if I cast in that second statement, like

cnt += (long) Math.pow(2, 3);

I get the desired answer. Would someone mind explaining what the difference is here? Any help will be appreciated! Thanks!
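A likely explanation (my own, not from the post): Java's compound assignment `cnt += x` is defined as `cnt = (long)(cnt + x)`, so when `x` is a `double` the whole addition happens in double precision before the cast back to `long`. Near 2^59 the gap between adjacent doubles is 2^7 = 128, so adding 8 is rounded away. Casting first, as in `cnt += (long) Math.pow(2, 3)`, keeps the addition in exact 64-bit integer arithmetic. The same rounding is visible in any IEEE-754 double; a sketch in Python:

```python
import math

# Near 2**59 the gap between adjacent doubles (one "ulp") is 2**7 = 128,
# so adding 8 in floating point is lost to rounding.
# (math.ulp requires Python 3.9+.)
big = float(2**59)
print(math.ulp(big))        # 128.0
print(big + 8.0 == big)     # True: the 8 vanishes

# Exact integer arithmetic (like Java's long after an explicit cast) keeps it.
print(2**59 + 8 == 2**59)   # False
```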
By marcose18, 5 years ago

Can someone please explain how to solve the problem http://codeforces.com/problemset/problem/453/B? I am unable to understand even the editorial here: http://codeforces.com/blog/entry/13190. How does not taking the factors of the given array a[i] minimise the expression as in the editorial, and how is the operation done to remove the prime factors of a[i] using a bitmask? I also wasn't able to understand the dp part. A detailed explanation will be highly appreciated. Thanks in advance.

By marcose18, 5 years ago

I have been solving the problem http://codeforces.com/problemset/problem/26/D. I was unable to solve it on my own and looked at the editorial here: http://codeforces.com/blog/entry/610. One thing I am unable to figure out: suppose we have a favourable order, i.e. an order which will run smoothly; we can also permute the people having 10 euros and 20 euros amongst themselves in n! and m! ways, and this will account for a different pattern. I am unable to see why that is not taken into account and why only different combinations are considered.
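One way to see why those n! and m! permutations do not change the answer (my own illustration, not from the editorial): every pattern of 10-payers and 20-payers corresponds to exactly n!·m! orderings of distinct people, both among the favourable orders and among all orders, so the factor cancels in the probability ratio. A brute-force check for small, hypothetical sizes, assuming for simplicity that the cashier starts with no change:

```python
from itertools import permutations
from math import comb, factorial

def ok(pattern):
    """A pattern of +1 (pays with a 10) and -1 (pays with a 20) runs
    smoothly if the cashier never runs out of 10-euro notes."""
    tens = 0
    for p in pattern:
        tens += p
        if tens < 0:
            return False
    return True

n, m = 4, 2  # four 10-euro payers, two 20-euro payers (illustrative sizes)

# Count over orderings of distinct people (permutations, with repeats) ...
people = [+1] * n + [-1] * m
perm_good = sum(ok(p) for p in permutations(people))
perm_total = factorial(n + m)

# ... and over indistinct patterns (combinations).
pattern_good = perm_good // (factorial(n) * factorial(m))
pattern_total = comb(n + m, m)

# The two ratios agree: the n! * m! factors cancel.
print(perm_good / perm_total, pattern_good / pattern_total)   # 0.6 0.6
```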
# How do you write 2.5 as a fraction?

May 30, 2018

$\frac{5}{2}$

#### Explanation:

The number is read as "two and five tenths". The way it's read essentially gives the answer away. We have $2 \frac{5}{10}$, which can be reduced by dividing the top and bottom by $5$ to get $2 \frac{1}{2}$.

To change it to an improper fraction, multiply the denominator by the whole number and add it to the numerator. We get $\frac{5}{2}$.

Hope this helps!
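The conversion can be checked mechanically with Python's fractions module (an illustration, not part of the original answer):

```python
from fractions import Fraction

# Fraction parses the decimal string exactly and reduces automatically.
print(Fraction("2.5"))       # 5/2

# The mixed-number route from the explanation: 2 + 5/10 reduces the same way.
print(2 + Fraction(5, 10))   # 5/2
```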
A can of span

The papers that will be discussed at the next C++ committee meeting are out. The listing contains a number of interesting and controversial papers. Among them, Herbceptions, a number of concurrent concurrency proposals, a proposal calling for major design changes in the coroutines TS, and an easy-to-review, 200-page-long proposal to unify the Ranges TS in the std namespace. In total, there are about 140 papers, all rather interesting.

It's no wonder then that the hottest topic on the Cpp Slack these past few days is std::span.

Wait, what?

First off, if you are not on the Cpp Slack, you should be; it's a great community. Second, maybe you heard that std::span was already merged into the C++20 draft last meeting, so why talk about it, and why would a modest library addition make so much virtual ink flow? Or maybe you never heard of std::span and are wondering what std::span even is.

Trying not to break any eggs, I would say it can be described as a fixed-sized, non-owning wrapper over a contiguous sequence of objects, letting you iterate over and mutate the individual items in that sequence.

```cpp
#include <cctype>
#include <iostream>
#include <string>
#include <vector>
#include <gsl/span>

int main() {
    std::vector<std::string> greeting = {"hello", "world"};
    gsl::span<std::string> span(greeting);
    for (auto&& s : span) {
        s[0] = std::toupper(s[0]);
    }
    for (const auto& word : greeting) {
        std::cout << word << ' ';
    }
}
```

This simply prints Hello World and illustrates the mutability of span's content. A span can represent any contiguous sequence, including a std::array, a std::string, a T[], a T* + size, or a subset of an array or a vector. Of course, not all containers can back a span; for example, neither std::list nor std::deque is contiguous in memory.

Is span a view?

I'm not quite sure how to answer that. I wonder what the proposal says. So let's read the span proposal:

> The span type is an abstraction that provides a view over a contiguous sequence of objects, the storage of which is owned by some other object.
You might also have noticed that the paper is titled "span: bounds-safe views". (Emphasis mine.)

So a span is a view. Except it is called span. I asked around why view was called span, and the reason seems to be that the committee felt like calling it span that day. In fact, when the span paper was first presented in front of the committee, it was called array_view. An array in C++ is analogous to a sequence of contiguous elements in memory. At least the vocabulary Span exists in C# with basically the same semantics.

But now, we must talk about strings. By that, I mean that we must talk about std::string. For all intents and purposes, std::string is a std::vector<char>. But people feel like strings are special snowflakes that need their own special container with a bunch of special methods. So string gets to have a length() method because size() was probably not good enough for the princess, some find*() methods, and lexicographical comparators. And I mean, that's fair. A lot of applications handle text more than any other kind of data, so having a special class to do so makes total sense. But fundamentally, the only difference between a vector and a string is that which is conveyed by the programmer's intent.

It should be noted that std::string (or std::wstring and the other std::*strings) is completely unsuited to handling text that is not encoded as ASCII. If you are one of the 6 billion people on earth who do not speak English, you are gonna have a f***ing bad time if you think std::string can do anything for you. (Pardon my Chinese.) At best, you can hope that if you don't mutate it in any way or look at it funny, it might still look okay by the time you display it somewhere. That also goes for the lexicographical comparators and the find*() methods. Don't trust them with text. (Hang tight, the C++ committee is working hard on those issues!) For the time being, it's best to see std::*string as an opaque container of bytes. Like you would a vector.
Alas, string, being the favorite child, got to have its own non-owning wrapper 3 years before anyone else. And so in C++17 was introduced string_span. Nope, it is actually string_view. It's a string, it's a span, it's the API of both mixed together. But it's called a view. It has all the same special-snowflake methods that string has. I mean, those methods aren't that bad. The author of the string_view paper had some very nice things to say about them:

> Many people have asked why we aren't removing all of the find* methods, since they're widely considered a wart on std::string. First, we'd like to make it as easy as possible to convert code to use string_view, so it's useful to keep the interface as similar as reasonable to std::string.

There you have it: a backward-compatibility wart. So, maybe we could actually define std::string_view in terms of span?

```cpp
template <typename CharT>
class basic_string_view : public std::span<CharT> {
    std::size_t length() const { return this->size(); }
};
```

Simple and easy! Except this is completely wrong, because unlike span, std::string_view is a non-mutable view. So it is actually more like

```cpp
template <typename CharT>
class basic_string_view : public std::span<const CharT> { /**/ };
```

Going back to the string_view paper, the author explains:

> The constant case is enough more common than the mutable case that it needs to be the default. Making the mutable case the default would prevent passing string literals into string_view parameters, which would defeat a significant use case for string_view. In a somewhat analogous situation, LLVM defined an ArrayRef class in Feb 2011, and didn't find a need for the matching MutableArrayRef until Jan 2012. They still haven't needed a mutable version of StringRef. One possible reason for this is that most uses that need to modify a string also need to be able to change its length, and that's impossible through even a mutable version of string_view.
It is hard to argue with that, especially given what I just said about strings. So basic_string_view is non-mutable because that's a sensible default for strings.

> We could use typedef basic_string_view<const char> string_view to make the immutable case the default while still supporting the mutable case using the same template. I haven't gone this way because it would complicate the template's definition without significantly helping users.

However, C++ is mutable by default, and constness is opt-in. So having a type that is const by default, while more appealing to our modern, wiser sensibilities, may not be that great: there is no way to opt out of basic_string_view's constness. Since mutable is always the default, the language does not provide a way to construct a basic_string_view<mutable char>.

Special-snowflake methods aside, there is zero difference between typedef basic_string_view<const char> string_view and basic_string_view : public std::span<CharT>.

So, std::span is a view, std::string_view is a span; both classes are basically the same thing and have the same memory layout. So similar, in fact, that a brave soul suggested that they could be merged. That was back in 2015, when span was still called array_view.

Unfortunately, some people now think the term view somehow implies immutability. But the only reason one might think so boils down to string hijacking a vocabulary type all for itself. And guess what's the last thing you should do to a utfX-encoded string? Randomly slicing it into views at code unit/byte boundaries.

In the Ranges TS, nothing implies that views are immutable either:

> The View concept specifies the requirements of a Range type that has constant time copy, move and assignment operators; that is, the cost of these operations is not proportional to the number of elements in the View.

TL;DR: view and span: same thing; string_view: special confusing little snowflake. Moving on…

Is span a range?
In C++20, a range is, very simply, something with a begin() and an end(); therefore a span is a range. We can verify that this is indeed the case:

```cpp
static_assert(std::experimental::ranges::Range<std::vector<int>>);
static_assert(std::experimental::ranges::Range<gsl::span<int>>);
```

We can refine that further: span is a contiguous range, i.e. a range whose elements are contiguous in memory. While currently neither the notion of contiguous iterator nor the ContiguousRange concept is part of C++20, there is a proposal. Weirdly, I could not find a proposal for ContiguousRange¹. Fortunately, it is implemented in cmcstl2, so we can test for it.

```cpp
static_assert(std::experimental::ranges::ext::ContiguousRange<gsl::span<int>>);
```

So, given that we know that span is basically a wrapper over a contiguous range, maybe we can implement it ourselves? For example, we could add some sugar coating over a pair of iterators:

```cpp
#include <iterator>
#include <type_traits>
#include <utility>
#include <vector>
#include <stl2/detail/range/concepts.hpp>

template <std::experimental::ranges::/*Contiguous*/Iterator B,
          std::experimental::ranges::/*Contiguous*/Iterator E>
class span : private std::pair<B, E> {
public:
    using std::pair<B, E>::pair;

    template <std::experimental::ranges::ext::ContiguousRange CR>
    span(CR& c) : std::pair<B, E>(std::begin(c), std::end(c)) {}

    auto begin() { return this->first; }
    auto end() { return this->second; }
    auto size() const { return std::distance(this->first, this->second); }
};

template <std::experimental::ranges::ext::ContiguousRange CR>
explicit span(CR&)
    -> span<decltype(std::begin(std::declval<CR&>())),
            decltype(std::end(std::declval<CR&>()))>;

template <std::experimental::ranges::/*Contiguous*/Iterator B,
          std::experimental::ranges::/*Contiguous*/Iterator E>
explicit span(B b, E e) -> span<B, E>;

int main() {
    std::vector<int> v;
    span s(v);
    span s2(std::begin(v), std::end(v));
    for (auto&& e : s) {
    }
}
```

Isn't that nice and dandy? Well… except, of course, this is not a span<int> at all.
It's a freaking

```cpp
span<
    __gnu_cxx::__normal_iterator<int*, std::vector<int>>,
    __gnu_cxx::__normal_iterator<int*, std::vector<int>>
>
```

Pretty pointless, right?

See, we can think of views and spans and all those things as basically "template erasure" over ranges. Instead of representing a range with a pair of iterators whose type depends on the underlying container, you'd use a view/span.

However, a range is not a span. Given a ContiguousRange - or a pair of contiguous iterators - it is not possible to construct a span. This will not compile:

```cpp
int main() {
    constexpr int uniform_initialization_workaround = -1;
    std::vector<int> a = {0, 1, uniform_initialization_workaround};
    gsl::span<int> span(std::begin(a), std::end(a));
}
```

So on the one hand, span is a range; on the other, it does not play well with ranges. To be fair, span was voted into the draft before the great Contiguous Ranges paper could be presented. But then again, that paper hasn't been updated since, and Contiguous Ranges have been discussed since 2014, including by the string_view paper. Let's hope that this will be fixed before 2020!

In the meantime, using span with the std algorithms will have to be done like this, I guess:

```cpp
auto begin = std::begin(names);
auto end = std::find_if(begin, std::end(names), [](const std::string& n) {
    return std::toupper(n[0]) > 'A';
});
gsl::span<std::string> span{&(*begin), std::distance(begin, end)};
```

Which is nice, safe and obvious.

Because we are speaking about contiguous memory, there is an equivalence between a pair of (begin, end) pointers and a begin pointer plus a size.
Given that, we can rewrite our span class:

```cpp
template <typename T>
class span : private std::pair<T*, T*> {
public:
    using std::pair<T*, T*>::pair;

    template <std::experimental::ranges::ext::ContiguousRange CR>
    span(CR& c) : span(std::begin(c), std::end(c)) {}

    template <std::experimental::ranges::/*Contiguous*/Iterator B,
              std::experimental::ranges::/*Contiguous*/Iterator E>
    span(B b, E e)
        // note: &*e would dereference the past-the-end iterator, which is
        // UB, so the end pointer is computed from the begin pointer instead
        : std::pair<T*, T*>(&*b, &*b + std::distance(b, e)) {}

    auto begin() { return this->first; }
    auto end() { return this->second; }
    auto size() const { return this->second - this->first; }
};

template <std::experimental::ranges::ext::ContiguousRange CR>
explicit span(CR&) -> span<typename CR::value_type>;

template <std::experimental::ranges::/*Contiguous*/Iterator B,
          std::experimental::ranges::/*Contiguous*/Iterator E>
explicit span(B, E) -> span<typename std::iterator_traits<B>::value_type>;
```

This behaves conceptually like the standard std::span, and yet it is easier to understand and reason about. Wait, what are we talking about? I forgot…

```cpp
template <typename T>
struct {
    T* data;
    std::size_t size;
};
```

Oh, right, freaking span!

I guess my point is that contiguous ranges are the general solution to span: span can easily be described in terms of a contiguous range. Implementing or reasoning about span without contiguous ranges, however, is trickier. string_view being a further refinement of span, it's clear that the committee started with the more specialized solution and is progressing towards the general cases, leaving odd inconsistencies in its wake.

So far we have established that span is a view by any other name, and a cumbersome range. But what's the actual issue?

Something very, very wrong with span

I would go so far as to say that span (and view - same thing) breaks C++. The standard library is built on a taxonomy of types, and particularly on the concept of a Regular type.
I would not pretend to explain that half as well as Barry Revzin did, so go read his great blog post, which explains the issue in detail.

Basically, the standard generic algorithms make some assumptions about a type in order to guarantee that the algorithms are correct. These type properties are statically checked at compile time; however, if a type's definition does not match its behaviour, the algorithm will compile but may produce incorrect results.

On paper, span is the very definition of a Regular type: you can construct it, copy it around and compare it. So it can be fed to most standard algorithms. However, the comparison operators don't actually compare two spans; they compare the data the spans point to. And as Barry illustrated, that can easily lead to incorrect code.

Tony Van Eerd, having a knack for distilling fundamental truths, observed on Slack that while the definition of Regular is quite precise (but, as it turns out, not quite precise enough to handle struct { T* ptr; }), its intent is to guarantee that handling Regular objects has no effect on the rest of the program. Being a proxy object, span defies that expectation.

On the other side of the table, users of the STL can reasonably expect span to be a drop-in replacement for a const vector&. And that happens to be mostly the case: you can compare it to a vector, iterate over it… until, of course, you try to copy it or change its value; then it stops acting like a vector.

Unmet expectations

span is a Regular type. span is a pointer to a chunk of memory. span is a value. span is SemiRegular, not Regular. span looks like a beaver and bites like a snake, but is actually a duck-billed platypus, a monstrous hybrid that foils every attempt at classification.
span has a dual nature, an irreconcilable ambivalence that has half the committee scrambling hopelessly to find some form of solace in the teachings of Alexander Stepanov, while the other half has been caught whispering that maybe we should rewrite everything in Rust.

Can you freaking stop with the lyrical dramatisation?

Hum, right. Sorry. But really, span tries to please both library writers - by being well behaved in generic algorithms - and non-library writers - by offering a nice, easy-to-use API. Noble goals indeed. However, you can't have your cake and eat it too. And so span is bad at being a container proxy and bad at being a well-behaved, standard Regular type. By its dual nature, its API is easy to misuse, and its humble appearance makes it look like an innocent container-like thing rather than the deadly trap it is. It stands to reason that if an API is in any way easy to misuse, it will be. And so span is nothing but an unassuming foot-blowing nuclear warhead.

In short, it does not meet expectations, because some of its design goals are antithetical. Specifically:

• It's a pointer-like object whose comparison compares the content of the underlying data.
• It's a container-like object whose assignment does not actually change the underlying data.

Fixing span

Can such a monster even be tamed? I believe it can, and it actually would not require much. There is, in fact, nothing intrinsically wrong with span; we just need it to drop the mask and be upfront about its true nature. A lot can be said about the importance of naming things properly, and as far as span is concerned, more than a few names are wrong. Let's unpack.

span::operator==()

There are whole fields of mathematics dedicated to describing how things are "equal" or comparable. Careers were made, books were written, libraries filled; it was theorized, organized, researched, and ported to Haskell.
That's why, in its infinite wisdom, Perl 6 dedicated a few tokens to describing the equality of things:

== eq === eqv =:= =~= ~~

Meanwhile, std::span is collapsing the entirety of group theory into 2 characters. And of course, there is only so much meaning one can imbue a 2-byte token with.

A lot of the arguing between committee members has been about whether operator== should compare identity (whether two spans point to the same underlying data) or the elements. There are supporters of both meanings, and they are both ~~wrong~~ right. No, really, I believe they are wrong. (I'm gonna make so many friends with that article…) If both sides of the argument make as much sense as the other, it's because there isn't an answer. It comes down to made-up arguments backing up one's personal preference, which usually sits somewhere between these two extremes:

• We should abide by the type categories and the standard library's correctness, otherwise we will inevitably blow our own foot off.
• We should meet user expectations, otherwise they will blow their foot off and then have our heads.

Both of these are very right and sensible positions to hold, and respecting both points of view is necessary. The only way to avoid a bloodbath is therefore to remove all comparison operators completely. If you can't compare them, you can't compare them wrongly. Unfortunately, if a type isn't comparable, the STL kinda stops working: the type stops being Regular, and concretely, sort and search algorithms will not work. A solution may be to resort to some ADL trickery to make span comparable only in the context of the standard library.
That can be demonstrated:

```cpp
#include <algorithm>
#include <vector>

namespace std {
class span {};
}

namespace __gnu_cxx::__ops {
bool operator<(const std::span& a, const std::span& b);
}

void compiles() {
    std::vector<std::span> s;
    std::sort(s.begin(), s.end());
}

// void does_not_compile() {
//     std::span a, b;
//     a < b;
// }
```

That would make span truly Regular within the STL, and prevent people from comparing the wrong things. Element-wise comparison would be done through std::equal.

span::operator=()

Depending on whether span is seen as a pointer or a container, one could assume that assignment sets either the span's pointer or the underlying data. Unfortunately, we can't use the same ADL trick as for ==, and I don't see any other reasonable solution. There is another way we can fix operator=, though: by making it very clear that span behaves like a pointer…

Renaming span

span used to be called array_view. It's easy to see a view as a pointer (not in the context of the Ranges TS, though). view makes it extra clear that it's a view and therefore non-owning. array carries the fact that it points to a contiguous memory segment, because that's what arrays are in the C memory model. And yeah, that would mean that array_view is mutable while string_view is constant. It makes no sense. However, it makes a lot more sense than having a very confusing span type that the world's best experts are not quite sure what to make of.

It does not stop there… A couple of papers were published revealing more issues with span.

Changing people?

Some believe that we should teach people that platypuses are ducks, because that sure would be convenient. But while meeting expectations is hard and sometimes impossible, trying to make people change their expectations completely sounds a bit unreasonable to me. At best it takes decades, and by the time the collective knowledge and wisdom starts shifting, the experts on the front lines will need people to have a completely new set of expectations.
Sure, sometimes nothing can replace education, talks and books. However, teachers have bigger battles to focus on than span.

A simpler story for views and ranges

After having sorted the mammals into one group and the birds into another, I imagine biologists were pretty pissed to see a flying squirrel. However, the committee is not just classifying existing types; it is designing them. And I wonder if - as much fun as it may be to see them jumping over the canopy - we actually have a need for non-mutable flying squirrels.

• Ranges are… ranges, represented by a pair of iterators. Either owning (Containers) or non-owning (Views).
• Views are… non-owning views over ranges.
• array_view and string_view offer erasure of a view over a range represented by a pair of iterators that happen to be pointers.
• Containers own data.

Maybe that's not quite accurate. But we need a unifying theory of everything.

To conclude this short introduction to span, I will leave you with this photo of a giraffe.

1. I originally mentioned that ContiguousRange was not proposed for inclusion in the C++ Standard. This is incorrect. ↩︎
# tenths to inches calculator

Posted on December 19th, 2020

On a ruler, the next-longest markings after the quarter-inch markings are the eighth-inch markings, and the next-longest after those are the sixteenth-inch markings: the first such marking is 1/16 inch, the second is 3/16 inch, the third is 5/16 inch, etc.

A tenth of a foot is equal to about 1 3/16 inches, which permits a relatively simple conversion under field conditions. To convert from inches to feet, multiply your figure by 0.083333333333333 (or divide by 12). Variation of the U.S. survey foot from the common international foot of exactly 0.3048 meters is only considerable over large survey distances.

Decimal inches can be calculated to the nearest fractional inch in 1/4, 1/8, 1/16 or 1/32 increments (example: 7.329 = 7 1/4, or 7 3/8, or 7 5/16, or 7 11/32).

Converting hundredths of a foot to/from inches - examples:

• 2.03 feet = 2 feet 3/8 inch
• 5 feet 6 1/2 inches = 5.54 feet
• 98.27 feet = 98 feet 3 1/4 inches
• 14 feet 9 3/4 inches = 14.81 feet

Transforming a distance from its decimal form to fractional inches is almost the same as converting any decimal to a regular fraction. Almost. The precision/denominator option is commonly set at 16, but if you need it more precise you could change it to a different denominator like 64 or 128.

›› Quick conversion chart of thou to inches: 1 thou = 0.001 inches; 50 thou = 0.05 inches; 100 thou = 0.1 inches; 200 thou = 0.2 inches.

Finding measurements on a ruler or tape measure can be confusing at first, but once you understand how the marks are laid out, it is far simpler. A calculator can also handle adding, subtracting, multiplying and dividing feet and inches using whole numbers, mixed numbers and fractions.
Tenths to inches - by kbell on 08/05/05 at 15:53:21: "I need to find a way to convert tenths to inches."

1 foot is equal to 12 inches: 1 ft = 12″. The distance d in inches (″) is equal to the distance d in feet (ft) times 12:

d (″) = d (ft) × 12

Example: convert 2 feet to inches: d (″) = 2 ft × 12 = 24″.

For simplicity, a conversion table starts off with the 1-to-12-inch equivalents in decimals of a foot, and also includes the 1/8-inch conversions to decimals of a foot from 1 inch to 2 inches. To convert a decimal into a fraction, put the decimal over 1 in fraction format, then scale the numerator and denominator to clear the decimal places.

Tom, from the same thread: "If you want fractions of an inch: to convert feet and hundredths to feet + inches + nearest 1/16 of an inch:" …

A later reply: "I'm trying to figure out what place on the ruler in inches 4 tenths is, as well as 1 through 10." Note that a "tenth" on its own is ambiguous - you have to say which unit you are converting from (tenths of a foot, tenths of a yard, and so on), otherwise there is no way to tell how many inches it should be. And if you can remember the full inches, you can just count the 1/8" marks up or down from the nearest full inch.
To convert inches into thousandths (thou), take the given inches and multiply by one thousand. Surveying tenths can be converted to inches by multiplying them by a factor of 1 3/16.

Convert hundredths to inches (1 May '12): to change a decimal to a fraction, we principally have to find the ratio of two numbers, the numerator and the denominator:

1. Rewrite the decimal number as a fraction with 1 in the denominator: $1.625 = \frac{1.625}{1}$
2. Multiply top and bottom by 1000 to remove the 3 decimal places: $\frac{1625}{1000}$
3. Reduce: $\frac{1625}{1000} = \frac{13}{8}$

For the 100.2-feet example: 0.2 feet × 12 = 2.4 inches, and 0.4 inch is roughly 3/8 inch, so the final measurement of 100.2 feet is approximately 100 feet 2 3/8 inches.

The online fraction calculator calculates the fraction value of any decimal number; you can select 16ths, 32nds, 64ths, or 100ths precision, and it converts a decimal inch value to inch + fraction format.

Will Charpentier is a writer who specializes in boating and maritime subjects. He is also a certified marine technician and the author of a popular text on writing local history. A retired ship captain, Charpentier holds a doctorate in applied ocean science and engineering.
From the same conversion thread: 6 3/8" is .5' plus .03', which = .53'; 3 1/4" is .25' plus …

Unit descriptions: 1 foot (U.S. survey) is exactly 1200/3937 meters by definition. There are 2.54 centimeters in an inch, and 1 centimeter (cm) = 10 millimeters (mm). You can add and subtract feet, inches, fractions, centimeters, and millimeters with ease, and round numbers to thousands, hundreds, tens, tenths, hundredths, thousandths, or fractions using the popular rounding methods.

With a laser level, you have now locked into the rod elevation (2,525 feet 4 1/4 inches). If you have a measurement that is accurate to a tenth of a foot, like that produced by some laser measuring equipment, you can quickly convert it to inches with a quick calculation.

Converting tenths of an inch to fractions: find decimal equivalents in 1/64" increments, including 1/2", 1/4", 1/8", 1/16", and 1/32". For repeating decimals, enter how many decimal places in your number repeat. This calculator converts a decimal number to a fraction or to a mixed number, showing all the work in the solution so you can see each step.

The inch is the usual unit of measurement in the United States, and is widely used in the United Kingdom and Canada, despite the introduction of the metric system to the latter two in the 1960s and 1970s, respectively. In the past, many systems of measurement were defined at a local level, and could be based on factors as arbitrary…
To calculate how many inches are in a measurement given in decimal feet, multiply the decimal part by 12. For example, 0.657 × 12 = 7.884: the 7 is the whole-inch amount, and 0.884 is the fractional part of an inch. Once locked in, all future readings will be true elevations above or below 2,525 feet 4 1/4 inches.

How do you convert decimal inches to fraction inches? Converting a decimal to an inch fraction is not as simple as finding the nearest fraction: it is necessary to find the nearest fraction whose denominator is a power of 2, also known as a dyadic fraction or dyadic rational number. Typical inch fractions look like 1/64, 1/32, 1/16, 1/8, 1/4, or 1/2; after rounding, simplify the fraction by dividing the numerator and denominator by their common factor. If you have a measuring tape, "100 feet and 2.4 inches" is hardly helpful - tape measures are marked in fractions, and a chart can be used to find the correct fraction for your decimal measurement, or vice versa.

From the forum thread, "hi max": Tenth is not a unit of measurement; it is used as a fraction equal to 0.1, so 12 tenths (of a foot) is 1.2 (feet). There are 36 inches in a yard and 12 inches in a foot. Going the other way, convert inches to tenths of a foot by dividing by 1.2: if you measure your water heater's circumference as 144 inches, that is 144 / 1.2 = 120 tenths.
Do a quick conversion: 1 foot [survey] = 12.000024000048 inches using the online calculator for metric conversions. Going from inch to survey foot, 1 in = 0.083333166667 ft (survey); conversion base: 1 ft (survey) = 12.000024 in. In decimal terms, the survey foot is approximately 0.304800609601219 meters. This is both a calculator and a table of conversions from inches to decimal feet, useful for surveying.

For eighths, divide the remainder by 0.125: for 4.382 inches, 0.382 / 0.125 = 3.056, so the fraction is a bit over 3 eighths, and the answer ends up as approximately 4 3/8".

Re: sitework conversions, tenths/hundredths to inches - "I tried to post a link to a chart but the forum won't let me, so I hope this helps." Whenever you are converting decimals, it's important to understand that the conversion is almost always an approximation limited by the number of decimal places, as 0.1 feet is not as precise as 0.10 feet (e.g. 432.4 feet).

We know how many inches are in a mile - 63,360 - but distances aren't always exactly a mile, or a furlong, a yard or a foot; when they aren't an exact measurement, we may rely on a calculator to convert them from tenths of a foot, yard or furlong into inches. (The lesson is meant for 4th or 5th grade.)

The longest markings on a ruler will be the quarter-inch markings: the first marking is 1/4 inch, the second is 1/2 (2/4) inch, the third is 3/4 inch.
The calculator shows all the work in the solution so you can see each step. Tape-measure fraction denominators are powers of two: 2, 4, 8, 16, 32, and 64. The longest markings between whole-inch numbers are the quarter-inch markings: the first marking is 1/4 inch, the second is 1/2 (2/4) inch, the third is 3/4 inch. To convert a mixed value such as 4 1/16 inches into thousandths, convert the fraction to a decimal (1/16 = 0.0625) and multiply by 1,000, giving 4,062.5 thousandths. Generally, tenths of a foot are only accurate to about a half-inch. Unit description: 1 foot (U.S. survey) is exactly 1200/3937 meters by definition, approximately 0.304 800 609 601 219 meters in decimal terms. About the author: a retired ship captain, Charpentier holds a doctorate in applied ocean science and engineering; he is also a certified marine technician and the author of a popular text on writing local history.
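The decimal-inch-to-tape-fraction step described above can be sketched in a few lines of Python. This is a hypothetical helper, not code from any particular calculator; it follows the article's procedure of multiplying the fractional part by a power-of-two denominator and rounding.

```python
from fractions import Fraction

def to_inch_fraction(decimal_inches, denominator=16):
    """Round a decimal-inch value to the nearest tape-measure fraction.

    Sketch of the article's procedure: split off the whole inches,
    multiply the remainder by the denominator (16ths by default),
    round, and let Fraction simplify. Tape-measure denominators are
    powers of two (2, 4, 8, 16, 32, 64).
    """
    whole = int(decimal_inches)
    frac = Fraction(round((decimal_inches - whole) * denominator), denominator)
    return whole, frac

print(to_inch_fraction(4.382))  # approximately 4 3/8", matching the article's example
```

Note that `Fraction(6, 16)` reduces automatically to 3/8, which is why the result agrees with the eighths-based calculation worked by hand above.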
This calculates either way: feet/inches to decimal feet, or vice versa. On a tape measure, after the quarter-inch markings come the eighth-inch markings (the first is 1/8 inch, the second is 3/8 inch, the third is 5/8 inch, etc.), and the next-longest are the sixteenth-inch markings. Useful metric equivalents:

inches = mm / 25.4; mm = inches × 25.4
mils = microns / 25.4; microns = mils × 25.4
0.001 inch = 1 mil = 10 machinist's "tenths" = 1,000 microinches
0.001 mm = 1 micron = 1 micrometer = 1,000 nanometers = 10,000 angstroms

Converting tenths of a foot to inches: multiply the decimal by 12, keeping in mind that a tenth is one tenth of a foot, which is 12 inches divided by 10, or 1.2 inches.
Another convenient rule to remember is that 1 inch is equal to 0.0833 feet (about 0.833 tenths of a foot). A unit of measurement is a defined magnitude of a quantity used as a standard for measuring the same kind of quantity, such as length, weight, or volume; inches are a unit equal to 1/12 of a foot or 2.54 centimeters, so there are 12 inches in 1 foot and 0.083333333333333 feet in 1 inch. To convert from feet to inches, multiply your figure by 12 (or divide by 0.083333333333333). Be aware of the rounding involved: any number between 0.05 feet and 0.14 feet rounds to 0.1 feet, and most measuring tapes use fractions accurate to 1/16 of an inch. Converting a decimal such as 1.625 into a fraction works like any decimal-to-fraction conversion: multiply top and bottom by 10³ = 1000 to get 1625/1000, then find the greatest common factor of 1625 and 1000, if it exists, and reduce the fraction by dividing both numerator and denominator by it (1625/1000 = 13/8). Variation from the common international foot of exactly 0.3048 meters may only be considerable over large survey distances. In FF, elevations are considered above sea level.
Rounding to tenths: for 124.58, the second digit after the decimal point is 8, which is greater than 5, so add 1 to the tenths digit; the result is 124.6 (if the digit after the tenths place is less than 5, simply drop it). To convert the decimal part of a foot measurement to inches, multiply by 12 (there are 12 inches in one foot): 0.657 × 12 = 7.884. The 7 here is the whole-inch amount and 0.884 is the fraction of an inch, so 22.657 feet becomes 22'-7.884"; converting the 0.884 decimal to the fraction 884/1000 and simplifying to 221/250, you end up with 22'-7 221/250". On a tape measure you would round instead: to convert 2.4 inches into tape fractions, multiply the decimal by 16 (0.4 × 16 = 6.4, approximately 6/16 of an inch, which simplifies to 3/8). A laser survey instrument provides distances in feet and tenths (e.g. 432.4 feet); converting readings this way lets you take a common laser transit and shoot points to verify curb heights, grades of sidewalks, and other key elevations on a project without expensive GPS tools. Thou conversions: 1 thou = 0.001 inches; 10 thou = 0.01 inches.
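The full decimal-feet workflow described above can be sketched as a short Python helper. This is a hypothetical illustration of the article's steps (multiply the decimal part by 12 for inches, then by 16 for sixteenths); the carry handling when rounding lands on a full sixteen or a full twelve inches is my addition.

```python
def feet_to_ft_in_sixteenths(decimal_feet):
    """Convert decimal feet (e.g. a laser-transit reading) to
    (feet, whole inches, sixteenths of an inch)."""
    feet = int(decimal_feet)
    inches = (decimal_feet - feet) * 12          # e.g. 0.2 ft -> 2.4 in
    whole_in = int(inches)
    sixteenths = round((inches - whole_in) * 16)  # nearest 1/16"
    if sixteenths == 16:                          # carry a full inch
        whole_in, sixteenths = whole_in + 1, 0
    if whole_in == 12:                            # carry a full foot
        feet, whole_in = feet + 1, 0
    return feet, whole_in, sixteenths

print(feet_to_ft_in_sixteenths(100.2))   # -> (100, 2, 6): 100 ft 2 6/16 in
print(feet_to_ft_in_sixteenths(22.657))  # -> (22, 7, 14): close to 22'-7.884"
```

The first example reproduces the 100.2 ft case worked above (2.4 inches is about 6/16, i.e. 3/8); the second shows how 22.657 ft rounds to the nearest sixteenth rather than the exact 221/250 fraction.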
The accompanying charts show equivalent length measurements in fraction, decimal, and metric form up to one inch in 1/64″ increments (including the 1/8-inch conversions to decimals), plus 1-to-12-inch equivalents in decimals of a foot. A few more equivalences for reference: 1 centimeter (cm) = 10 millimeters (mm), and there are 2.54 centimeters in an inch; feet convert to inches as d(″) = d(ft) × 12, so for example 2 ft × 12 = 24″.
# 3.1.15. Sphinx

The tool is located in tools/waf-tools.

## 3.1.15.1. Tool Documentation

Implements a waf tool to use Sphinx.

f_sphinx_build.configure(conf)
    Check if the following programs are available:
    - sphinx-build
    - dot

f_sphinx_build.rst(self, node)
    Dummy function to be able to use bld(features="sphinx", source="abc*.rst", ...).

class f_sphinx_build.sphinx_task(*args: Any, **kwargs: Any)
    Bases: waflib.Task.Task

    Class to compile a conf.py file into documentation using Sphinx.

    Fig. 3.11 Input-output relation for conf.py

    always_run = True
        Sphinx handles the need for a re-run itself, so always run this task.
        Type: bool

    check_output_html(std_out, std_err)
        Check if the html task generates any real errors.

    check_output_linkcheck(std_out, std_err)
        Check if the linkcheck task generates any real errors.

    check_output_spelling(std_out, std_err)
        Check if the spelling task generates any real errors.

    color = 'BLUE'
        Color in which the command line is displayed in the terminal.
        Type: str

    keyword()
        Displayed keyword when the Sphinx configuration file is compiled.

    static removedinsphinx30warning(_str)
        The warning RemovedInSphinx30Warning is not a valid warning in our build, therefore it can be skipped so that it does not fail the build.

    run()
        Creates a command line processed by Utils.subprocess.Popen in order to build the Sphinx documentation. See Fig. 3.11 for a simplified representation.
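The check_output_* hooks and the removedinsphinx30warning helper suggest a common pattern: scan Sphinx's stderr and fail the build only on "real" errors, ignoring known-benign warnings. The sketch below illustrates that idea; the function name and the exact filtering rule are assumptions, not the tool's actual implementation.

```python
def filter_real_errors(std_err: str) -> list:
    """Return the stderr lines that should fail the build.

    Hypothetical sketch: lines mentioning RemovedInSphinx30Warning are
    treated as benign (mirroring removedinsphinx30warning above); any
    other non-blank stderr line counts as a real error.
    """
    real_errors = []
    for line in std_err.splitlines():
        if not line.strip():
            continue  # skip blank lines
        if "RemovedInSphinx30Warning" in line:
            continue  # known-benign deprecation warning; do not fail the build
        real_errors.append(line)
    return real_errors

err = ("foo.rst: RemovedInSphinx30Warning: deprecated\n"
       "bar.rst: ERROR: unknown target")
print(filter_real_errors(err))  # only the ERROR line remains
```

A check_output_* method would then raise or return a non-zero status whenever this filtered list is non-empty.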
Recognitions: Homework Help

## Charge Between Two Conducting, Connected Shells

You want the solution of the Laplace equation in the space between a spherical surface and infinity, with the boundary condition that the potential is constant on the spherical surface and zero at infinity. The solution of the Laplace equation with a given boundary condition is unique: if you find a solution, it is the solution.

The potential around a single conducting sphere is spherically symmetric. If the spherical surface encloses charge Q, the electric field is kQ/r² according to Gauss's law, and the potential is kQ/r at distance r from the centre if r > R. In your problem, the outer spherical surface encloses the charge Q placed into the void, so the potential is kQ/r for r > R. To find the electric field and potential inside the cavity would be more difficult, but that was not the question.

ehild

I found this thread rather helpful in addressing a similar difficulty I had. Can someone verify I understand this problem? Given a system of two concentric spherical shells, these are the areas I can divide it into:

1) Outside both shells.
2) The outer layer of the outer shell.
3) The inner layer of the outer shell (or equivalently, the thickness below the outer layer, if thinking microscopically).
4) The void between the shells.
5) The outer layer of the inner shell.
6) The inner layer of the inner shell.
7) The area below the inner layer of the inner shell.

There exists a point charge Q at 1.5R. So, this is what happens:

1) For r > 2R, the electric field behaves as a point charge Q at the center. This is because the electric field from the point charge, even though it's spherically asymmetric, is canceled out by the inner surface of the outer layer. Therefore, the Gaussian surface I construct has to worry only about the surface charge, and my E · dA integral reduces to a constant.

2) The outer surface of the outer layer has a charge Q distributed over it, spherically symmetric.
3) ...because the inner surface of the outer layer has charge -Q distributed so as to cancel out the electric field lines (within the conductor itself) from the point charge at 1.5R, as this is the behavior of any conductor.

4) In the void, I'm not sure exactly what's happening, because the point charge isn't exactly spherically symmetric, but I know there's some sort of electric field. In fact, if I were to just consider equipotential surfaces for the point charge without the influence of anything else, there are some surfaces that actually intersect the outside conductor at some points but don't at others. Seems really weird, but is my interpretation correct?

5) So at this point... does the outer surface of the inner shell also have a charge of -Q distributed over it to cancel out the electric field lines, such that the conductor's thickness has an electric field of zero (and the inner surface gains charge +Q)? It seems strange that there is -Q distributed in two places to cancel out the electric field from the point charge.

6) Applying Gauss's law now on the inside, the electric field is zero. However, I'm slightly confused as to the case in which a cavity DOES have an electric field of zero. So electrostatic shielding only applies when: 1) there are no point charges in conductor cavities, or 2) you're within the conductor itself when point charges are present in cavities. Also, given the spherical asymmetry of the point charge, why can I assume the charge Q is distributed uniformly? Again, is this because the electric field is cancelled out by the inner surface, so the outer surface distributes itself uniformly?

Recognitions: Homework Help

Quote by rbrayana123: I found this thread rather helpful in addressing a similar difficulty I had. Can someone verify I understand this problem? Given a system of two concentric spherical shells, these are the areas I can divide it into:

1) Outside both shells.
2) The outer layer of the outer shell.
3) The inner layer of the outer shell (or equivalently, the thickness below the outer layer, if thinking microscopically).
4) The void between the shells.
5) The outer layer of the inner shell.
6) The inner layer of the inner shell.
7) The area below the inner layer of the inner shell.

There exists a point charge Q at 1.5R. So, this is what happens:

1) For r > 2R, the electric field behaves as a point charge Q at the center. This is because the electric field from the point charge, even though it's spherically asymmetric, is canceled out by the inner surface of the outer layer. Therefore, the Gaussian surface I construct has to worry only about the surface charge, and my E · dA integral reduces to a constant.

2) The outer surface of the outer layer has a charge Q distributed over it, spherically symmetric.

Yes. There is a surface charge Q on the outer surface of the big shell. These charges feel no electric force other than that from each other, so they distribute themselves symmetrically. As the surface charge is constant on the sphere, the electric field is spherically symmetric, independent of the angles. Your Gaussian integral for a concentric sphere of radius r > 2R reduces to 4πr²E = Q/ε₀.

Quote by rbrayana123: 3) ...because the inner surface of the outer layer has charge -Q distributed so as to cancel out the electric field lines (within the conductor itself) from the point charge at 1.5R, as this is the behavior of any conductor.

In the problem, the shells are connected with a wire, so both shells are at the same potential. There are negative surface charges induced on the outer surface of the inner shell as well, and the sum of the induced charges on both surfaces is equal to Q(in) + Q(out) = -Q. If the shells were insulated, I think there would be -Q charge induced on the inner surface of the outer shell and no induced charge on the inner shell.
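Ehild's Gaussian-integral result for the region outside both shells, 4πr²E = Q/ε₀, can be checked numerically. The sketch below evaluates E = Q/(4πε₀r²) and the potential V = kQ/r and confirms they are consistent (E = V/r for a 1/r potential); the example charge and radius are illustrative values, not from the thread.

```python
import math

EPS0 = 8.8541878128e-12          # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPS0)     # Coulomb constant k, ~8.988e9 N*m^2/C^2

def field_outside(Q, r):
    """E at distance r from the centre (r > 2R), from 4*pi*r^2 * E = Q/eps0."""
    return Q / (4 * math.pi * EPS0 * r**2)

def potential_outside(Q, r):
    """V = kQ/r for r > 2R, taking V = 0 at infinity."""
    return K * Q / r

# Illustrative values: Q = 1 nC at r = 1 m from the centre
Q, r = 1e-9, 1.0
E = field_outside(Q, r)          # ~8.99 V/m
V = potential_outside(Q, r)      # ~8.99 V
print(E, V, abs(E - V / r))      # the difference should be ~0
```

This mirrors the key point of the thread: outside the outer shell, the off-centre point charge is indistinguishable from a charge Q at the centre, so the simple spherically symmetric formulas apply.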
Quote by rbrayana123: 4) In the void, I'm not sure exactly what's happening, because the point charge isn't exactly spherically symmetric, but I know there's some sort of electric field. In fact, if I were to just consider equipotential surfaces for the point charge without the influence of anything else, there are some surfaces that actually intersect the outside conductor at some points but don't at others. Seems really weird, but is my interpretation correct?

If there is a single point charge without anything else, the equipotential surfaces are spherical and concentric with the point charge. But here the point charge is between two equipotential surfaces, as a metal surface is always equipotential; in this problem, both surfaces are at the same potential. The field lines are normal to the equipotential surfaces, so they are normal to the surfaces of both shells very near to them. Electric field lines emerge from charges and end on charges, so the electric field in the cavity at a point P of the surface is σ(P)/ε₀, where σ is the surface charge density at that point.

Quote by rbrayana123: 5) So at this point... does the outer surface of the inner shell also have a charge of -Q distributed over it to cancel out the electric field lines, such that the conductor's thickness has an electric field of zero (and the inner surface gains charge +Q)? It seems strange that there is -Q distributed in two places to cancel out the electric field from the point charge.

No, the surface charges on the two cavity-facing surfaces of the shells add up to -Q. As the shells are connected, there can be a negative net charge on the inner shell, -q = Q(in), and a net charge q on the outer shell, so Q(out) = -Q + q, and the surface charge on the outer surface of the outer shell is q - Q(out) = Q.

Quote by rbrayana123: 6) Applying Gauss's law now on the inside, the electric field is zero.

That is true. No concentric spherical surface in the void inside the inner shell encloses any charge.
Quote by rbrayana123: However, I'm slightly confused as to the case in which a cavity DOES have an electric field of zero. So electrostatic shielding only applies when: 1) there are no point charges in conductor cavities, or 2) you're within the conductor itself when point charges are present in cavities. Also, given the spherical asymmetry of the point charge, why can I assume the charge Q is distributed uniformly? Again, is this because the electric field is cancelled out by the inner surface, so the outer surface distributes itself uniformly?

Electric shielding applies against outer electric fields: if you placed charges outside the shells at r > 2R, the electric field would not change in the void. You get the electric field by solving the Laplace equation and applying the boundary conditions, but sometimes you can calculate it by assuming charged surfaces substituting for the metal plates/shells. Do not forget that the electric field lines cannot really cross the metal layer: the electrons on the outer surface do not feel the inner charge.

ehild

Thank you very much, ehild

Tags: conductors, field