GPU-Accelerated VASP

Get started today with this GPU Ready Apps Guide.

VASP (Vienna Ab initio Simulation Package) is an application for first-principles calculations of the electronic structure on the atomic scale, obtained by approximating solutions to the Schrödinger equation. It implements standard density functional theory as well as state-of-the-art functionality such as hybrid functionals, Green's-function methods (GW and ACFDT-RPA) and 2nd-order Møller-Plesset (MP2) perturbation theory. VASP runs up to about 10X faster using NVIDIA Tesla P100s compared to CPU-only systems, enabling the use of computationally more demanding, more accurate methods in the same amount of time.

VASP Runs Up To 10X Faster On GPUs

System requirements

VASP is distributed as source code and has a few compile-time and run-time dependencies. For this guide, we assume that the following software packages are already installed on your Linux system and that their respective environment variables are set:

1. Intel Compiler Suite (especially Fortran, C/C++ and MKL)
2. Intel MPI
3. NVIDIA CUDA 8.0

In most cases, these packages have been installed on your supercomputer by its administrators and can be loaded via a module system. Installing them is beyond the scope of this guide, so please contact your cluster support team if you need further assistance. The latest revision of the VASP GPU port can also be compiled with the PGI compiler suite, of which a community edition is provided at no cost. But because many VASP users traditionally use the Intel compiler, we will stick with it for this tutorial as well.

Download and Compilation

VASP is commercial software, and as a regular VASP licensee you can download the most current version of the GPU port. To acquire a license, see this page. Enter your login credentials on the right under "Community Portal" and click on "Login" to gain access to the download area. Click on "VASP5" and select the "src" folder.
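On module-managed clusters, the prerequisites above are typically loaded along these lines; the module names here are assumptions and will differ on your system, so check what is available first:

```shell
module avail                 # list the modules your cluster provides
# Hypothetical module names -- adapt to your site's naming scheme:
module load intel/16.0       # Intel Fortran/C/C++ compilers and MKL
module load impi/5.1         # Intel MPI
module load cuda/8.0         # NVIDIA CUDA toolkit
module list                  # verify what ended up in your environment
```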
At the time of writing, you need to download the following files. Make sure to check the VASP site regularly to get the latest patches or new versions.

First, extract the VASP source code that you have just downloaded:

tar xfz vasp.5.4.4.tar.gz

Now switch into the freshly extracted directory containing the sources and apply the patches:

cd vasp.5.4.4

The VASP makefile requires some modifications to reflect your local software environment. VASP comes with a selection of makefile templates for different setups, which are located in the arch/ subfolder. Copy an appropriate makefile.include from the arch/ folder (in this guide, we are using Intel's Fortran compiler and NVIDIA CUDA under Linux):

cp arch/makefile.include.linux_intel makefile.include

If you need to adapt the makefile.include, please refer to the Troubleshooting section below. Most of the options in makefile.include work out of the box by picking up the necessary values from your environment variables, but it is highly recommended to set the GENCODE_ARCH variable in the file you just copied appropriately for your GPUs. Please check the compute capability (CC) of your GPU card(s) and edit makefile.include with an editor (e.g. nano, vim or emacs are available on many systems by default):

nano makefile.include

We are using NVIDIA P100 cards and as such compile for compute capability 6.0 to yield the best performance. Hence, we are fine with the default GENCODE_ARCH line, which looks like this:

GENCODE_ARCH := -gencode=arch=compute_30,code=\"sm_30,compute_30\" \
                -gencode=arch=compute_35,code=\"sm_35,compute_35\" \
                -gencode=arch=compute_60,code=\"sm_60,compute_60\"

Leaving unused compute-capability flags (e.g. 3.5) in won't hurt, but allows the resulting binary to run on those GPU architectures as well. If your target GPU features a different compute capability, make sure to adapt the line accordingly.
So, e.g., when you want to target a V100 (compute capability 7.0) as well, add a line like this (and use CUDA 9):

-gencode=arch=compute_70,code=\"sm_70,compute_70\" \

Now build the GPU port of VASP by executing (make sure not to add -j, because VASP does not support parallel building):

make gpu

If the compilation succeeded, you should find the GPU-accelerated version of VASP in bin/vasp_gpu. Check that the binary is there with:

ls -l bin/vasp_gpu

If the build failed, please refer to "Adapting build variables" in the Troubleshooting section. After a successful build, it's time to build the GPU port of VASP that allows for non-collinear calculations (when LNONCOLLINEAR=.TRUE. or LSORBIT=.TRUE. in the INCAR) using the gpu_ncl target. Note that the Γ-point flavor of VASP is not yet supported on the GPU. You can build the CPU-only versions (std, ncl, gam) just as well:

make gpu_ncl std ncl gam

This will give you the following list of binaries, but for this tutorial only vasp_gpu and optionally vasp_std will be used:

Table 1: Overview of the different executable files built for VASP.

vasp_std      Default version of VASP
vasp_ncl      Special version required to run calculations with LNONCOLLINEAR=.TRUE. or LSORBIT=.TRUE. in the INCAR
vasp_gam      Special version that saves memory and computations for calculations at Γ only
vasp_gpu      Same as vasp_std, but with GPU acceleration
vasp_gpu_ncl  Same as vasp_ncl, but with GPU acceleration

We recommend installing the VASP binaries into a place outside your build directory, e.g. into ~/bin, to avoid accidentally overwriting them with future versions:

mkdir -p ~/bin
cp bin/vasp* ~/bin

VASP relies on tabulated data used for smoothing the all-electron wavefunctions, often called pseudopotentials, which you can download from the same portal. Enter your login credentials on the right under "Community Portal" and click on "Login" to gain access to the download area. Then click on "Potentials" and start with "LDA".
Download all files offered there, and proceed in the same manner for the "PBE" and "PW91" folders. This should leave you with the following set of files:

1. potpaw_PBE.tgz
2. potpaw_PBE.54.tar.gz
3. potpaw_PBE.52.tar.gz
4. potpaw_LDA.tgz
5. potpaw_LDA.54.tar.gz
6. potpaw_LDA.52.tar.gz
7. potpaw_GGA.tar.gz
8. potUSPP_LDA.tar.gz
9. potUSPP_GGA.tar.gz

Unpack them using the script. All scripts shown in this tutorial are available for download, and you can also clone the repository directly to your filesystem.

Running Jobs

First GPU-accelerated VASP calculation

There are a few options in the main control file INCAR that need special consideration for the GPU port of VASP. GPU VASP will print error and warning messages if settings in the INCAR file are unsupported or discouraged. Do not ignore GPU-related messages, and act accordingly! This section explains the INCAR settings that are relevant for the GPU. Limit yourself to the following options for the ALGO flag:

1. Normal
2. Fast
3. VeryFast

Other algorithms available in VASP have not been extensively tested, are not guaranteed to perform well, and may even produce incorrect results. Besides that, you must use the following settings in the INCAR file:

1. LREAL = .TRUE. or LREAL = A
2. NCORE = 1

To get started, we offer a few example calculations that we will use later to show how to reach better performance compared to simple setups. You can find the example input files in the git repository. Go to the respective directory and take a quick look at the INCAR file; you can see that it is in accordance with the options mentioned above:

cd gpu-vasp-files/benchmarks

For copyright reasons, you must generate the required POTCAR files on your own. We assume that you have downloaded and extracted the pseudopotential database as shown above and use ~/vasp/potcars/ as the directory where they reside.
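The unpack script itself is not reproduced in this copy of the guide. A minimal sketch (function name and paths are our assumptions, not part of the VASP distribution) that extracts every downloaded archive into its own subdirectory of the database could look like this:

```shell
#!/bin/bash
# Sketch only: unpack each pseudopotential archive into its own
# subdirectory, so e.g. potpaw_PBE.54.tar.gz ends up in $potdir/potpaw_PBE.54/.
unpack_potcars() {
  local potdir="$1"; shift
  mkdir -p "$potdir"
  local archive name
  for archive in "$@"; do
    [ -e "$archive" ] || continue                # skip archives that were not downloaded
    name="${archive##*/}"                        # strip any leading path
    name="${name%.tar.gz}"; name="${name%.tgz}"  # strip the archive extension
    mkdir -p "$potdir/$name"
    tar -xzf "$archive" -C "$potdir/$name"
  done
}
# Example: unpack_potcars ~/vasp/potcars potpaw_*.tgz pot*_*.tar.gz
```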
The example input files come with a script that takes care of the generation automatically, but it needs to know the path to your POTCAR database:

cd siHugeShort
bash ~/vasp/potcars

Then, you are ready to start your first GPU-accelerated VASP calculation. This will start only one process that utilizes one GPU and one CPU core, regardless of how many are available in your system. Running it this way may take relatively long, but it shows that everything is working. To confirm that the GPU is actively used, enter

nvidia-smi -l

in a terminal connected to the same node where your process is running. You should see your VASP process listed, along with the extent to which your GPU is utilized. You can stop watching by pressing CTRL+c.

Using a single compute node

Just like the standard version of VASP, the GPU port is parallelized with MPI and can distribute the computational workload across multiple CPUs, GPUs and nodes. We will use Intel MPI in this guide, but all techniques described herein work with other MPI implementations just as well. Please refer to the documentation of your MPI implementation to find the equivalent command-line options.

VASP supports a variety of features and algorithms, so its computational profile is just as diverse. Therefore, depending on your specific calculations, you might need different parameters to yield the quickest possible execution times. These aspects carry over to the GPU port as well. In this tutorial, we provide various techniques that can help speed up your GPU runs. However, as there is no single optimal setup, you need to benchmark your cases individually to find the settings with the best performance.
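The launch command for the single-process run described above was lost from this copy of the guide; assuming the binary was installed into ~/bin as shown earlier, it would look roughly like this:

```shell
cd siHugeShort
mpirun -n 1 ~/bin/vasp_gpu   # one MPI rank -> one CPU core, one GPU
# In a second terminal on the same node, watch the GPU utilization:
nvidia-smi -l                # stop watching with CTRL+c
```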
First, let's see how many (and which) GPUs your node offers:

nvidia-smi -L

The output of the command tells us that we have 4 Tesla P100 GPUs available and lists their unique identifiers (UUIDs) that we will use later on:

GPU 0: Tesla P100-PCIE-16GB (UUID: GPU-74dff14b-a797-85d9-b64a-5293c94557a6)
GPU 1: Tesla P100-PCIE-16GB (UUID: GPU-576df4e5-8f0c-c773-23d2-7560fd29542e)
GPU 2: Tesla P100-PCIE-16GB (UUID: GPU-cff44500-e07e-afef-8231-0bdd32dde61f)
GPU 3: Tesla P100-PCIE-16GB (UUID: GPU-54c0a6db-b406-3e24-c28b-0b517549e824)

Typically, GPUs need to transfer data between their own memory and main memory. On multi-socket systems, the transfer performance depends on the path the data must travel. In the best case, there is a direct bus between the two memory regions. In the worst case, the CPU process needs to access memory that is physically located in a RAM module associated with the other CPU socket and then copy it to GPU memory that is (yet again) only reachable via a PCIe lane controlled by the other CPU socket. Information about the bus topology can be displayed with:

nvidia-smi topo -m

Because GPU-accelerated VASP does not (yet) support direct GPU-to-GPU communication, we can ignore most of the output, which tells us what pairs of GPUs could communicate fastest (PIX or even NV#) to slowest (SOC) among each other:

        GPU0    GPU1    GPU2    GPU3    mlx5_0  CPU Affinity
GPU0    X       SOC     SOC     SOC     SOC     0-15
GPU1    SOC     X       PHB     PHB     PHB     16-31
GPU2    SOC     PHB     X       PHB     PHB     16-31
GPU3    SOC     PHB     PHB     X       PHB     16-31
mlx5_0  SOC     PHB     PHB     PHB     X

The last column, labeled "CPU Affinity", is the important one, because it tells us on which CPU cores the MPI ranks should ideally run if they communicate with a certain GPU. We see that all CPU cores of the first socket (0-15) can directly communicate with GPU0, whereas the CPUs of the second socket (16-31) are expected to show the best performance when combined with GPU1, GPU2 or GPU3.

Expected Performance

Whenever you want to compare execution times of runs in various configurations, it is essential to avoid unforeseen deviations.
NVIDIA GPUs feature techniques that temporarily raise and lower clock rates based on the current thermal situation and compute load. While this is good for saving power, for benchmarking it can give misleading numbers by slightly increasing the variance of execution times between runs. Therefore, for comparative benchmarking we try to turn it off for all the cards in the system. When you are done with benchmarking, you can reset the cards to run with the maximum supported frequencies.

Though the performance numbers presented here were generated on production systems, they are only meant to serve as a guideline demonstrating the methods presented in the following. Note that performance on your system might differ, because there are many aspects influencing CPU and GPU performance.

The Easiest Method: One Process per GPU

The easiest method to use all 4 GPUs present in our system is simply to start 4 MPI processes of VASP and have the mapping, i.e. on which CPU cores the processes will run, taken care of automatically:

mpirun -n 4 ~/bin/vasp_gpu

The Intel MPI environment automatically pins processes to certain CPU cores, so that the operating system cannot move them to other cores during the execution of the job, which prevents some disadvantageous data-movement scenarios. Yet this may still lead to a suboptimal setup, because the MPI implementation is not aware of the GPU topology.
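The clock-pinning commands referred to above are not reproduced in this copy of the guide. With nvidia-smi they can be sketched as follows; the clock values are examples for our P100 cards, so query the supported values on your own system first:

```shell
# Query the application clocks your GPUs support:
nvidia-smi -q -d SUPPORTED_CLOCKS
# Pin memory and graphics clocks to fixed, supported values, e.g.:
sudo nvidia-smi -ac 715,1189   # <memory clock>,<graphics clock> in MHz
# After benchmarking, reset the application clocks to their defaults:
sudo nvidia-smi -rac
```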
We can investigate process pinning by increasing the verbosity:

mpirun -n 4 -genv I_MPI_DEBUG=4 ~/bin/vasp_gpu

Looking at the output and comparing it to our findings about the interconnect topology, it seems that things are not ideal:

[0] MPI startup(): Rank Pid Node name Pin cpu
...
Using device 0 (rank 0, local rank 0, local size 4) : Tesla P100-PCIE-16GB
Using device 1 (rank 1, local rank 1, local size 4) : Tesla P100-PCIE-16GB
Using device 2 (rank 2, local rank 2, local size 4) : Tesla P100-PCIE-16GB
Using device 3 (rank 3, local rank 3, local size 4) : Tesla P100-PCIE-16GB

Rank 0 uses GPU0 but is bound to the more distant CPU cores 16-23; the same problem applies to ranks 2 and 3. Only rank 1 uses GPU1 and is pinned to cores 24-31, which offer the best transfer performance.

Let's look at some actual performance numbers now. Using all 32 cores of the two Intel® Xeon® E5-2698 v3 CPUs present in our system without any GPU acceleration, it took 607.142 s to complete the siHugeShort benchmark.¹ Using 4 GPUs in this default but suboptimal way results in an execution time of 273.320 s, a speedup of 2.22x. To quickly find out how long your calculation ran, use the timing metrics included in VASP:

grep Elapsed\ time OUTCAR

¹ If you have built the CPU-only version of VASP before, you can use the following command to see how long it takes on your system:

mpirun -n 32 -env I_MPI_PIN_PROCESSOR_LIST=allcores:map=scatter ~/bin/vasp_std

VASP maps GPUs to MPI ranks consecutively, while skipping GPUs with insufficient compute capabilities (if there are any).
Using the following syntax, we can therefore manually control process placement on the CPU and distribute the ranks so that every process uses the GPU with the shortest memory-transfer path:

mpirun -n 4 -genv I_MPI_DEBUG=4 -env I_MPI_PIN_PROCESSOR_LIST=0,16,21,26 ~/bin/vasp_gpu

On our system this does not yield an improvement (the runtime was 273.370 s), which is probably caused by an imbalanced use of shared CPU resources such as memory bandwidth and caches (3 processes sharing one CPU). As a compromise, one can spread the ranks evenly across the CPU sockets so that only one rank must use the slower memory path to its GPU:

mpirun -n 4 -genv I_MPI_DEBUG=4 -env I_MPI_PIN_PROCESSOR_LIST=0,8,16,24 ~/bin/vasp_gpu

With a runtime of 268.939 s, this is a slight improvement of about 3% for this benchmark, but if your workload is heavier on memory transfers, you might gain more. Especially for larger numbers of ranks, manually selecting the distribution can be tedious, or you might decide that equally sharing CPU resources is more important than memory transfers on your system. The following command maps the ranks consecutively, but avoids sharing common resources as much as possible:

mpirun -n 4 -genv I_MPI_DEBUG=4 -env I_MPI_PIN_PROCESSOR_LIST=allcores:map=scatter ~/bin/vasp_gpu

This gave us a runtime of 276.299 s and can be especially helpful if some of the CPU cores remain idle. You may want to leave cores idle on purpose if a single process per GPU already saturates a GPU resource that limits performance; loading the GPU even further would then impair performance. This is the case for the siHugeShort benchmark, so on our system this is as good as it gets (feel free to try out the following options on it anyway!). In general, however, it is a bad idea to waste available CPU cores as long as you are not overloading the GPUs, so do your own testing!
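Building those pin lists by hand gets tedious for larger rank counts. As a small illustration (the helper name is ours, not part of VASP or Intel MPI), the evenly spread layout used above can be generated like this:

```shell
#!/bin/bash
# Sketch: emit a comma-separated core list that spreads `ranks` processes
# evenly over `cores` CPU cores, for use with I_MPI_PIN_PROCESSOR_LIST.
pinlist() {
  local cores=$1 ranks=$2
  local step=$(( cores / ranks ))
  local list="" i
  for (( i = 0; i < ranks; i++ )); do
    list+="${list:+,}$(( i * step ))"   # append next core, comma-separated
  done
  echo "$list"
}
# pinlist 32 4 reproduces the compromise layout from the text: 0,8,16,24
```

It could then be used as, e.g., mpirun -n 4 -env I_MPI_PIN_PROCESSOR_LIST=$(pinlist 32 4) ~/bin/vasp_gpu.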
The Second Easiest Method: Multiple Processes per GPU

To demonstrate how to use more CPU cores than there are GPUs available, we will switch to a different benchmark called silicaIFPEN. It takes 710.156 s to execute using the 32 CPU cores only. Using 4 P100 GPUs with one MPI rank per GPU and the compromise process placement, it takes 241.674 s to complete (2.9x faster). NVIDIA GPUs can be shared between multiple processes. To use this feature, we must ensure that all GPUs are set to the "default" compute mode (e.g. via nvidia-smi -c 0). Then we run the actual calculation:

mpirun -n 8 -env I_MPI_PIN_PROCESSOR_LIST=0,8,16,24,4,12,20,28 ~/bin/vasp_gpu

For the silicaIFPEN benchmark on our system, the runtime improved to 211.576 s for 12 processes sharing 4 P100s (i.e. 3 processes per GPU), which raised the speedup to 3.36x. Going to 4 or more processes per GPU has an adverse effect on the runtime, though, and the comparison in the table below shows that beyond that point manually placed processes no longer give an advantage either.

Table 2: Comparison of elapsed times for the silicaIFPEN benchmark, varying the number of MPI processes per GPU

MPI ranks per GPU   Total MPI ranks   Elapsed time (map:scatter)   Elapsed time (map:ideal)
0                   32 (CPU only)     710.156 s
1                   4                 242.247 s                    241.674 s
2                   8                 214.519 s                    212.389 s
3                   12                212.623 s                    211.576 s²
4                   16                220.611 s                    224.013 s³
5                   20                234.540 s
6                   24                243.665 s
7                   28                259.757 s
8                   32                274.798 s

² mpirun -n 12 -env I_MPI_PIN_PROCESSOR_LIST=0,8,16,24,3,11,19,27,6,14,22,30 ~/bin/vasp_gpu
³ mpirun -n 16 -env I_MPI_PIN_PROCESSOR_LIST=0,8,16,24,2,10,18,26,4,12,20,28,6,14,22,30 ~/bin/vasp_gpu

After the sweet spot is reached, adding more processes per GPU impairs performance even further. But why? Whenever a GPU needs to switch contexts, i.e. allow another process to take over, it introduces a hard synchronization point.
Consequently, there is no possibility for instructions of different processes to overlap on the GPU, and overusing this feature can in fact slow things down again; see also the illustration below. In conclusion, it is a good idea to test how much oversubscription is beneficial for your type of calculation. Of course, very large calculations will fill a GPU with a single process more easily than smaller ones, but we can't encourage you enough to do your own testing!

NVIDIA MPS: Enabling Overlap While Sharing GPUs

This method is closely related to the previous one, but remedies the problem that the instructions of multiple processes may not overlap on the GPU, as shown in the second row of the illustration. It is recommended to set the GPUs to process-exclusive compute mode when using MPS (e.g. via nvidia-smi -c 3). The only thing left to do to exploit this possibility is to start the MPS server before launching your MPI jobs as usual:

nvidia-cuda-mps-control -d
mpirun -n 8 -env I_MPI_PIN_PROCESSOR_LIST=allcores:map=scatter ~/bin/vasp_gpu
echo quit | nvidia-cuda-mps-control

The first command starts the MPS server in the background (daemon mode). While it is running, it intercepts instructions issued by processes sharing a GPU and puts them into the same context before sending them to the GPU. The difference from the previous section is that, from the GPU's perspective, the instructions now belong to a single process and context and as such can overlap, just as if you were using streams within a CUDA application. You can check with nvidia-smi -l that only a single process is accessing the GPUs. This mode of running the GPU port of VASP can help increase GPU utilization when a single process does not saturate the GPU's resources. To demonstrate this, we employ our third example, B.hR105, a calculation using exact exchange within the HSE06 functional. We have run it with different numbers of MPI ranks per GPU, each time with and without MPS enabled.
Table 3: Comparison of elapsed times for the B.hR105 benchmark, varying the number of MPI processes per GPU, each with and without MPS (NSIM=4)

MPI ranks per GPU   Total MPI ranks   Elapsed time without MPS   Elapsed time with MPS
0                   32 (CPU only)     1027.525 s
1                   4                 213.903 s                  327.835 s
2                   8                 260.170 s                  248.563 s
4                   16                221.159 s                  158.465 s
7                   28                241.594 s                  169.441 s
8                   32                246.893 s                  168.592 s

Most importantly, MPS improves the execution time by 55.4 s (26%) to 158.465 s when compared to the best result without MPS (213.903 s). While there is a sweet spot at 4 ranks per GPU without MPS, starting as many processes as there are CPU cores available yields the best performance with MPS. We skipped the calculations with 3, 5 and 6 ranks per GPU on purpose, because the number of bands (224) used in this example is not divisible by the resulting numbers of ranks and would hence be increased automatically by VASP, which increases the workload.

If you are only interested in the time-to-solution, we suggest you experiment a little with the NSIM parameter. By setting it to 32 and using just 1 process per GPU (hence no MPS), we were able to push the calculation time down to 108.193 s, which is roughly a 10x speedup.

Advanced: One MPS Instance per GPU

For certain setups, especially on older versions of CUDA, it can be beneficial to start multiple instances of the MPS daemon, e.g. one MPS server per GPU. However, doing so is a little more involved, because every MPI process has to be told which MPS server to use, and every MPS instance must be bound to a different GPU. Especially on P100 with CUDA 8.0 we discourage this method, but you may still find it useful in some setups. The following script can be used to start the instances of MPS. For this method to work, the GPU port of VASP must be started via a detour using a wrapper script.
This wrapper sets the environment variables so that each process uses the correct instance of the MPS servers we have just started. The script basically generates a list of paths for setting the environment variables that decide which MPS instance is used. The fourth-to-last line (myMpsInstance=...) then selects this instance depending on the local MPI process ID. We decided to go with a round-robin scheme, distributing processes 1 to 4 to GPU0 through GPU3. Process 5 uses GPU 0 again, as would process 9, whereas processes 6 and 10 are mapped to GPU 1, and so on. If you installed your GPU VASP binary in a different path, make sure you adapt the line starting with runCommand accordingly. Then, let's start the calculation:

mpirun -n 16 -env I_MPI_PIN_PROCESSOR_LIST=allcores:map=scatter ./

When using MPS, please keep in mind that the MPS server(s) themselves use the CPU. Hence, when you start as many processes as there are CPU cores available, you might overload your CPU. It can therefore be a good idea to reserve a core or two for MPS. When the calculation is finished, a companion script cleans up the MPS instances.

Using multiple compute nodes

Basically, everything that was said about using GPUs housed in a single node applies to multiple nodes as well. So whatever you find works best for your system concerning process mapping will probably work well on more nodes too. In the following we assume that you have a hostfile set up that lists all the nodes associated with your job.
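The wrapper script itself is missing from this copy of the guide, but the round-robin selection it performs can be sketched as follows (the function and variable names are assumptions; the real wrapper additionally exports CUDA_MPS_PIPE_DIRECTORY to point at the chosen instance before launching the VASP binary):

```shell
#!/bin/bash
# Sketch of the wrapper's instance selection: local MPI ranks are
# distributed over the per-GPU MPS instances in a round-robin fashion.
mps_instance_for() {
  local localRank=$1 nInstances=$2
  echo $(( localRank % nInstances ))
}
# With 4 instances, local ranks 0-3 map to MPS instances (GPUs) 0-3,
# rank 4 wraps around to instance 0, rank 5 to instance 1, and so on.
```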
In our case we used two nodes, and the hostfile lists them, one per line. If you go with manual selection of the process mapping, there is just one small difference to the command given in the previous chapter:

mpirun -f hostfile -n 8 -ppn 4 -genv I_MPI_DEBUG=4 -env I_MPI_PIN_PROCESSOR_LIST=0,8,16,24 ~/bin/vasp_gpu

The debug output of the MPI implementation tells us that the processes are distributed across the two nodes and reside on the CPU cores just as we expected:

[0] MPI startup(): Rank Pid Node name Pin cpu
[0] MPI startup(): 0 38806 hsw224 0
[0] MPI startup(): 1 38807 hsw224 8
[0] MPI startup(): 2 38808 hsw224 16
[0] MPI startup(): 3 38809 hsw224 24
[0] MPI startup(): 4 49492 hsw225 0
[0] MPI startup(): 5 49493 hsw225 8
[0] MPI startup(): 6 49494 hsw225 16
[0] MPI startup(): 7 49495 hsw225 24

The output of GPU VASP confirms that the GPUs are mapped to the MPI ranks just as we intended:

Using device 0 (rank 4, local rank 0, local size 4) : Tesla P100-PCIE-16GB
Using device 1 (rank 5, local rank 1, local size 4) : Tesla P100-PCIE-16GB
Using device 2 (rank 6, local rank 2, local size 4) : Tesla P100-PCIE-16GB
Using device 3 (rank 7, local rank 3, local size 4) : Tesla P100-PCIE-16GB

The siHugeShort benchmark gets a little faster at 258.917 s, but compared to its runtime of 268.939 s on one node this by no means justifies the second node. The silicaIFPEN benchmark, on the other hand, improves notably from 241.674 s to 153.401 s when going from 4 to 8 P100 GPUs with one MPI process per GPU.
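The hostfile contents were dropped from this copy of the guide; for the two nodes named in the debug output above, it is simply one hostname per line:

```shell
hsw224
hsw225
```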
Regarding the previous sections, going to multiple nodes with multiple processes per GPU is straightforward:

mpirun -f hostfile -n 16 -ppn 8 -genv I_MPI_DEBUG=4 -env I_MPI_PIN_PROCESSOR_LIST=0,8,16,24,4,12,20,28 ~/bin/vasp_gpu

Or, if you want to use all CPU cores:

mpirun -f hostfile -n 64 -ppn 32 -genv I_MPI_DEBUG=4 -env I_MPI_PIN_PROCESSOR_LIST=allcores:map=scatter ~/bin/vasp_gpu

When increasing the number of ranks per GPU, the silicaIFPEN benchmark behaves a little differently from the single-node case (where the fastest configuration took 211.576 s): using two processes per GPU improves the runtime only insignificantly, to 149.818 s, compared with 153.401 s for one process per GPU. Overloading the GPUs further has yet again an adverse effect: already with 3 processes per GPU the runtime increases to 153.516 s, and 64 ranks in total make it take 231.015 s. So apparently 1 or 2 processes per GPU on each node are enough in this case.

Using a single instance of MPS per node is trivial once the instances are started. Some job schedulers offer submission options that do this for you; e.g. SLURM sometimes offers --cuda-mps. If anything like that is available on your cluster, we strongly advise you to use it and to proceed just as described in the previous section. But what can you do if your scheduler does not offer such an elegant solution? You must make sure that on each node one (and only one) MPS instance is started before VASP is launched. We provide another script that takes care of this. Yet again, if you installed the GPU-accelerated VASP binary in an alternative location, please adapt the runCommand variable at the beginning of the script. The variables following it calculate the local rank on each node, because Intel's MPI implementation does not expose this information easily. The script starts an MPS server via the first rank on each node, making sure that the MPS process is not bound to the same core that a VASP process will be bound to later.
This step is crucial, because otherwise MPS would be limited to a single core (it can use more than that) and, even worse, would compete with VASP for CPU cycles on that core. The script then executes VASP and stops MPS afterwards. The script must be called from the mpirun command, as you might have seen in the advanced section already. The mpirun command works just as when running without MPS, except that we call the script instead of the VASP binary:

mpirun -f hostfile -n 24 -ppn 12 -genv I_MPI_DEBUG=4 -env I_MPI_PIN_PROCESSOR_LIST=allcores:map=scatter ./

Regarding the B.hR105 benchmark: MPS improved the runtime on a single node, and this holds on two nodes as well. Enabling MPS speeds up the calculation, and using more ranks per GPU is beneficial up to a certain point of (over-)saturation. The sweet spot on our system was 4 ranks per GPU, which resulted in a runtime of 104.052 s. Compared to the baseline of a single Haswell node this is a speedup of 9.05x, and compared to all 64 CPU cores it is still faster by a factor of 6.61x. If we use NSIM=32 with 4 ranks per GPU on each of the 2 nodes and do not use MPS, the calculation takes only 71.222 s.

Table 4: Comparison of elapsed times for the B.hR105 benchmark, varying the number of MPI processes per GPU, each with and without MPS (NSIM=4, 2 nodes)

MPI ranks per GPU   Total MPI ranks          Elapsed time without MPS   Elapsed time with MPS
0                   32 (CPU only, 1 node)    1027.525 s
0                   64 (CPU only)            763.939 s⁴
1                   8                        127.945 s                  186.853 s
2                   16                       117.471 s                  110.158 s
4                   32                       130.454 s                  104.052 s
7                   56                       191.211 s                  148.662 s
8                   64                       234.307 s⁵                 182.260 s

⁴ ⁵ Here 256 bands were used, which increases the workload.
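The per-node MPS wrapper described above is also missing from this copy of the guide. A sketch of its control flow follows; the variable names are assumptions, and how the local rank is obtained depends on your MPI version (here we assume an environment variable like Intel MPI's MPI_LOCALRANKID is available):

```shell
#!/bin/bash
# Sketch: one MPS daemon per node, started by the first rank on each node.
runCommand=~/bin/vasp_gpu               # adapt if installed elsewhere
localRank=${MPI_LOCALRANKID:-0}         # assumption: local rank from the MPI env
if [ "$localRank" -eq 0 ]; then
  nvidia-cuda-mps-control -d            # start MPS in daemon mode on this node
fi
$runCommand "$@"                        # run VASP itself
if [ "$localRank" -eq 0 ]; then
  echo quit | nvidia-cuda-mps-control   # shut MPS down after the run
fi
```

A production version would also synchronize the ranks so that VASP does not start before MPS is up, and keep the MPS process off the cores that VASP ranks are pinned to, as discussed above.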
Recommended System Configurations

Hardware Configuration
- System Memory: 32-64 GB; CPU: 8 cores (3+ GHz), 10 cores (2.2+ GHz), or 16 cores (2+ GHz); GPU: NVIDIA Quadro GP100
- System Memory: 64-128 GB; CPU: 16+ cores, 2.7+ GHz; GPU: NVIDIA Tesla P100 or V100, multiple GPUs per node

Software Configuration
- Software stack: Linux 64-bit
- GPU Driver: 352.79 or newer
- CUDA Toolkit: 8.0 or newer
- PGI Compiler: 16.10
- Intel Compiler Suite: 16
- Intel MPI

Troubleshooting

Your local software environment might deviate from what the VASP build system can automatically handle. In that case the build will fail, and you will need to make minor adjustments to makefile.include. Open makefile.include with your favorite editor (e.g. nano, vim or emacs are available on many systems by default) and make the necessary changes (see below):

nano makefile.include

Whenever you have changed any file, make sure to execute the following command so that the build starts from scratch:

make veryclean

In the following, we list a few typical error messages and how to work around them:

mpiifort: Command not found

This error message simply tells you that on your system the MPI-aware Intel Fortran compiler has a different name than we assumed. In makefile.include, please change all occurrences of mpiifort to whatever it is called on your system (e.g. mpif90).

# error "This Intel <math.h> is for use with only the Intel compilers!"

To get around this error, you have to do two things. First, edit makefile.include and add -ccbin=icc to the NVCC variable, so that the line reads:

NVCC := $(CUDA_ROOT)/bin/nvcc -ccbin=icc

After that, you have to edit a second file:

nano src/CUDA/

In there, you will see a section that looks like this:

# Compilers
LINK := g++ -fPIC

Please change it to look like the following:

# Compilers
LINK := g++ -fPIC

/usr/local/cuda//bin/nvcc: Command not found

That message tells you that make cannot find the NVIDIA CUDA compiler nvcc.
You can either correct the path in the line CUDA_ROOT := /usr/local/cuda/ or even comment the line out (using a # as the first symbol of the line) if CUDA_ROOT is set as an environment variable.

No rule to make target `/cm/shared/apps/intel/composer_xe/2015.5.223/mkl/interfaces/fftw3xf/libfftw3xf_intel.a', needed by `vasp'. Stop.

Probably your local MKL was installed without support for the FFTW3 interface as a static library. If you comment out the line referencing that static library by inserting a # at its very beginning, the linker will pull in the dynamic analogue. Make sure to comment out the line associated with (and following) OBJECTS_GPU, and not just the one after OBJECTS.

My error is not covered here

If your system meets the requirements mentioned in the beginning, it is most likely just a path to a library that needs to be changed in makefile.include. Further explanation of the variables defined there is given in the VASP documentation.
22.4.2: ii. Problems

Q1

Suppose you are given two molecules (one is \(CH_2\) and the other is \(CH_2^-\), but you don't know which is which). Both molecules have \(C_{2v}\) symmetry. The CH bond length of molecule I is 1.121 Å and for molecule II it is 1.076 Å. The bond angle of molecule I is 104\(^\circ\) and for molecule II it is 136\(^\circ\).

a. Using a coordinate system centered on the C nucleus as shown above (the molecule is in the YZ plane), compute the moment of inertia tensors of both species (I and II). The definitions of the components of the tensor are, for example:

\begin{align} I_{xx} &= \sum\limits_j m_j (y_j^2 + z_j^2) - M(Y^2 + Z^2) \\ I_{xy} &= -\sum\limits_j m_j x_j y_j - MXY \end{align}

Here, \(m_j\) is the mass of nucleus j, M is the mass of the entire molecule, and X, Y, Z are the coordinates of the center of mass of the molecule. Use Å for distances and amu for masses.

b. Find the principal moments of inertia \(I_a < I_b < I_c\) for both compounds (in amu Å\(^2\) units) and convert these values into rotational constants A, B, and C in \(cm^{-1}\) using, for example,

\[ A = \dfrac{h}{8\pi^2cI_a}. \]

c. Both compounds are "nearly prolate tops" whose energy levels can be well approximated using the prolate-top formula:

\[ E = (A - B) K^2 + B J(J + 1), \]

if one uses for the B constant the average of the B and C values determined earlier. Thus, take the B and C values (for each compound) and average them to produce an effective B constant to use in the above energy formula. Write down (in \(cm^{-1}\) units) the energy formula for both species. What values are J and K allowed to assume? What is the degeneracy of the level labeled by a given J and K?

d. Draw a picture of both compounds and show the directions of the three principal axes (a, b, c). On these pictures, show the kind of rotational motion associated with the quantum number K.

e.
Given that the electric transition moment vector \(\vec{\mu}\) connecting species I and II is directed along the Y axis, what are the selection rules for J and K? f. Suppose you are given the photoelectron spectrum of \(CH_2^-\). In this spectrum \(J_j = J_i + 1\) transitions are called R-branch absorptions and those obeying \( J_j = J_i - 1\) are called P-branch transitions. The spacing between lines can increase or decrease as a function of \(J_i\) depending on the changes in the moment of inertia for the transition. If spacings grow closer and closer, we say that the spectrum exhibits a so-called band head formation. In the photoelectron spectrum that you are given, a rotational analysis of the vibrational lines in this spectrum is carried out and it is found that the R-branches show band head formation but the P-branches do not. Based on this information, determine which compound, I or II, is the \(CH_2^-\) anion. Explain your reasoning. g. At what J value (of the absorbing species) does the band head occur and at what rotational energy difference?

Q2

Let us consider the vibrational motions of benzene. To consider all of the vibrational modes of benzene we should attach a set of displacement vectors in the x, y, and z directions to each atom in the molecule (giving 36 vectors in all), and evaluate how these transform under the symmetry operations of \(D_{6h}\). For this problem, however, let's only inquire about the C-H stretching vibrations. a. Represent the C-H stretching motion on each C-H bond by an outward-directed vector on each H atom, designated \(r_i\). These vectors form the basis for a reducible representation. Evaluate the characters for this reducible representation under the symmetry operations of the \(D_{6h}\) group. b. Decompose the reducible representation you obtained in part a. into its irreducible components. These are the symmetries of the various C-H stretching vibrational modes in benzene. c.
The vibrational state with zero quanta in each of the vibrational modes (the ground vibrational state) of any molecule always belongs to the totally symmetric representation. For benzene the ground vibrational state is therefore of \(A_{1g}\) symmetry. An excited state which has one quantum of vibrational excitation in a mode of a given symmetry species has the same symmetry species as the mode which is excited (because the vibrational wave functions are given as Hermite polynomials in the stretching coordinate). Thus, for example, excitation (by one quantum) of a vibrational mode of \(A_{2u}\) symmetry gives a wavefunction of \(A_{2u}\) symmetry. To resolve the question of which vibrational modes may be excited by the absorption of infrared radiation we must examine the x, y, and z components of the transition dipole integral for initial and final state wave functions \(\psi_i \text{ and } \psi_f\), respectively: \[ |\langle \psi_f |x| \psi_i \rangle |, |\langle \psi_f |y| \psi_i \rangle |, \text{ and } |\langle \psi_f |z| \psi_i \rangle |. \] Using the information provided above, which of the C-H vibrational modes of benzene will be infrared-active, and how will the transitions be polarized? How many C-H vibrations will you observe in the infrared spectrum of benzene? d. A vibrational mode will be active in Raman spectroscopy only if one of the following integrals is nonzero: \begin{align} & & &| \langle \psi_f |xy| \psi_i \rangle |, | \langle \psi_f |xz| \psi_i \rangle |, | \langle \psi_f |yz| \psi_i \rangle |, \\ & & &| \langle \psi_f |x^2| \psi_i \rangle |, | \langle \psi_f |y^2| \psi_i \rangle |, \text{ and } | \langle \psi_f |z^2| \psi_i \rangle | .
\end{align} Using the fact that the quadratic operators transform according to the irreducible representations: \begin{align} \left( x^2 + y^2, z^2 \right) &\Rightarrow && A_{1g} \\ \left( xz, yz \right) &\Rightarrow && E_{1g} \\ \left( x^2 - y^2 ,xy \right) & \Rightarrow && E_{2g} \end{align} determine which of the C-H vibrational modes will be Raman-active. e. Are there any of the C-H stretching vibrational motions of benzene which cannot be observed in either infrared or Raman spectroscopy? Give the irreducible representation label for these unobservable modes.

Q3

In treating the vibrational and rotational motion of a diatomic molecule having reduced mass μ, equilibrium bond length \(r_e\) and harmonic force constant k, we are faced with the following radial Schrödinger equation: \[ \dfrac{-\hbar^2}{2\mu r^2}\dfrac{d}{dr} \left( r^2 \dfrac{dR}{dr} \right) + \dfrac{J(J + 1)\hbar^2}{2\mu r^2}R + \dfrac{1}{2}k(r-r_e)^2R = ER \] a. Show that the substitution \( R = \dfrac{F}{r}\) leads to: \[ \dfrac{-\hbar^2}{2\mu}F'' + \dfrac{J(J + 1)\hbar^2}{2\mu r^2}F + \dfrac{1}{2}k(r - r_e)^2F = EF \] b. Taking \( r = r_e + \Delta r\) and expanding \( \dfrac{1}{(1 + x)^2} = 1 - 2x + 3x^2 + ...\), show that the so-called vibration-rotation coupling term \( \dfrac{J(J + 1)\hbar^2}{2\mu r^2} \) can be approximated (for small \(\Delta r\)) by \( \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} \left( 1 - \dfrac{2\Delta r}{r_e} + \dfrac{3\Delta r^2}{r_e^2} \right).\) Keep terms only through order \(\Delta r^2.\) c. Show that, through terms of order \(\Delta r^2\), the above equation for F can be rearranged to yield a new equation of the form: \[ \dfrac{-\hbar^2}{2\mu} F'' + \dfrac{1}{2}\bar{k}( r - \bar{r}_e)^2 F = \left( E - \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} + \Delta \right) F \] Give explicit expressions showing how the modified force constant \(\bar{k}\), bond length \(\bar{r}_e\), and energy shift \(\Delta\) depend on J, k, \(r_e\), and \(\mu\). d.
Given the above modified vibrational problem, we can now conclude that the modified energy levels are: \[ E = \hbar \sqrt{\dfrac{\bar{k}}{\mu}}\left( v + \dfrac{1}{2}\right) + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} - \Delta . \] Explain how this conclusion follows, and verify that for J = 0 (where \(\bar{k} = k\) and \(\Delta = 0\)) we recover the usual harmonic oscillator energy levels. Describe how the energy levels would be expected to vary as J increases from zero and explain how these changes arise from the changes in \(\bar{k}\) and \(\bar{r}_e\). Explain in terms of the physical forces involved in the rotating-vibrating molecule why \(r_e\) and k are changed by rotation.
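Parts a and b of the first problem above are mechanical enough to check numerically. The sketch below is a non-authoritative aid, assuming isotope masses \(m_H = 1.008\) amu and \(m_C = 12.011\) amu and the conversion factor \(h/(8\pi^2c) \approx 16.8576\) amu Å\(^2\) cm\(^{-1}\). With the molecule in the YZ plane and the \(C_2\) axis along z, the off-diagonal tensor elements vanish by symmetry, so the principal moments are just the sorted diagonal entries.

```python
import math

M_H, M_C = 1.008, 12.011     # assumed isotope masses in amu
H_CONV = 16.8576             # h / (8 pi^2 c) in amu Angstrom^2 cm^-1 (assumed value)

def principal_moments(r_ch, angle_deg):
    """Principal moments of inertia (amu A^2) for a C2v XH2 geometry.

    The molecule lies in the YZ plane with C at the origin and the C2 axis
    along z; by symmetry the inertia tensor is diagonal in this frame, so
    the diagonal entries, sorted, are Ia <= Ib <= Ic."""
    half = math.radians(angle_deg) / 2.0
    y, z = r_ch * math.sin(half), r_ch * math.cos(half)
    atoms = [(M_C, 0.0, 0.0), (M_H, y, z), (M_H, -y, z)]   # (mass, y, z); x = 0
    M = sum(m for m, _, _ in atoms)
    Z = sum(m * zz for m, _, zz in atoms) / M              # center of mass (X = Y = 0)
    Ixx = sum(m * (yy**2 + zz**2) for m, yy, zz in atoms) - M * Z**2
    Iyy = sum(m * zz**2 for m, _, zz in atoms) - M * Z**2
    Izz = sum(m * yy**2 for m, yy, _ in atoms)
    return sorted((Ixx, Iyy, Izz))

for label, r, theta in [("I", 1.121, 104.0), ("II", 1.076, 136.0)]:
    Ia, Ib, Ic = principal_moments(r, theta)
    A, B, C = (H_CONV / I for I in (Ia, Ib, Ic))
    print(f"molecule {label}: Ia,Ib,Ic = {Ia:.3f},{Ib:.3f},{Ic:.3f} amu A^2 "
          f"-> A,B,C = {A:.2f},{B:.2f},{C:.2f} cm^-1")
```

A quick sanity check is the planar-molecule identity \(I_c = I_a + I_b\); molecule II comes out far more prolate (much smaller \(I_a\)), consistent with its wider bond angle.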
Relativistic calculation of nuclear transparency in \((e,e'p)\) reactions

Andrea Meucci
Dipartimento di Fisica Nucleare e Teorica, Università di Pavia
Istituto Nazionale di Fisica Nucleare, Sezione di Pavia, Italy
December 21, 2020

Nuclear transparency in \((e,e'p)\) reactions is evaluated in a fully relativistic distorted wave impulse approximation model. The relativistic mean field theory is used for the bound state and the Pauli reduction for the scattering state, which is calculated from a relativistic optical potential. Results for selected nuclei are displayed over a range of \(Q^2\) values and compared with recent electron scattering data. At the lowest \(Q^2\) the results are lower than the data; at higher \(Q^2\) they are in reasonable agreement with the data. The sensitivity of the model to different prescriptions for the one-body current operator is investigated. The off-shell ambiguities are rather large for the distorted cross sections and small for the plane wave cross sections.

25.30.Fj, 24.10.Jv

I Introduction

Exclusive knockout reactions have long been used to study the single particle properties of nuclear structure. The analysis of the experimental cross sections was successfully carried out in the theoretical framework of the nonrelativistic distorted wave impulse approximation (DWIA) at low energies Oxford ; Kelly1 . In recent years, owing to the new data from TJNAF gao ; malov , similar models based on a fully relativistic DWIA (RDWIA) framework were developed. In this approach the wave functions of the initial and final nucleons are solutions of a Dirac equation containing scalar and vector potentials fitted to the ground state properties of the nucleus and to proton elastic scattering data RDWIA . In the nucleus, final state interactions with the nuclear medium can absorb the struck proton, thus reducing the experimental cross section.
This reduction is related to the nuclear transparency, which can be intuitively defined as the ratio of the measured to the plane wave cross section. The transparency can be used to refine our knowledge of nuclear medium effects and to look for deviations from conventional predictions of nuclear physics, such as the Color Transparency (CT) effect. CT was introduced on the basis of perturbative QCD arguments ct . The name is related to the disappearance of the color forces at high \(Q^2\): three quarks should form an object that passes through the nuclear medium without undergoing interactions. If the CT effect switches on as \(Q^2\) increases, then the nuclear transparency should be enhanced towards unity. Several measurements of the nuclear transparency in \((p,2p)\) and \((e,e'p)\) knockout have been carried out in the past. The first experiment looking for the CT effect was performed at Brookhaven brook , measuring the transparency in the \((p,2p)\) reaction. An increase of the transparency with increasing momentum, followed by a decrease at higher momenta, was observed. New data confirm this energy dependence of the transparency brook2 . The first measurements of nuclear transparency in the \((e,e'p)\) reaction were carried out at Bates at low \(Q^2\) garino . In recent years, higher energy transparency data in \((e,e'p)\) were produced at SLAC oneill and TJNAF abbott ; garrow . In contrast with the \((p,2p)\) data, the NE-18 experiment at SLAC did not see any CT effect up to its highest \(Q^2\), but could not exclude a slow onset of CT. The E91-013 experiment at TJNAF studied the nuclear transparency over an extended \(Q^2\) range with greatly improved statistics and did not find evidence for the onset of CT. The distorted wave approach was first applied to evaluate the transparency in \((e,e'p)\) knockout in Ref. green , where it was shown that measurements of the normal transverse structure function in Pb could allow one to see the CT effect, and in Ref.
kellyt , where the nuclear part of the transition amplitude was written in terms of Schrödinger-like wave functions for bound and scattering states and of an effective current operator containing the Dirac potentials. Alternatively, the nuclear transparency results were analyzed in terms of a Glauber model pand ; jain ; frank1 , which assumes classical attenuation of protons in the nuclear medium. In this paper we present RDWIA calculations of nuclear transparency in the \((e,e'p)\) reaction. The RDWIA treatment is the same as in Refs. meucci1 ; meucci2 . The relativistic bound state wave functions have been generated as solutions of a Dirac equation containing scalar and vector potentials obtained in the framework of the relativistic mean field theory. The effective Pauli reduction has been adopted for the outgoing nucleon wave function. The resulting Schrödinger-like equation is solved for each partial wave starting from relativistic optical potentials. The relativistic current is written following the most commonly used current conserving prescriptions for the \((e,e'p)\) reaction introduced in Ref. deF . The ambiguities connected with different choices of the electromagnetic current cannot generally be dismissed. In this reaction the predictions of different prescriptions are generally in close agreement pollock . Large differences can however be found at high missing momenta off1 ; off2 . The formalism is outlined in Sec. II. Relativistic calculations of nuclear transparency are presented in Sec. III, where current ambiguities are also investigated. Some conclusions are drawn in Sec. IV.

II Formalism

The nuclear transparency can be experimentally defined as the ratio of the measured cross section to the cross section in plane wave approximation, which is usually evaluated by means of a Monte Carlo simulation to take into account the kinematics of the experiment. Hence, we define the nuclear transparency as the ratio of the distorted wave cross section to the plane wave one.
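The classical-attenuation (Glauber) picture mentioned above gives a useful baseline for the transparency just defined. The following is a toy numerical sketch, not the RDWIA calculation of this paper: it assumes a uniform-density sphere of radius \(R = 1.2\,A^{1/3}\) fm and an effective NN cross section of 40 mb (both assumed values), and averages the survival probability \(e^{-\sigma\rho L}\) over the production point, with \(L\) the remaining straight-line path to the surface.

```python
import math

SIGMA_NN = 4.0                            # assumed effective NN cross section: 40 mb = 4 fm^2
RHO0 = 3.0 / (4.0 * math.pi * 1.2**3)     # uniform density implied by R = 1.2 A^(1/3) fm

def glauber_transparency(A, n=2000):
    """Classical-attenuation transparency for a uniform sphere of mass number A.

    T = (1/A) * integral d^3r rho(r) exp(-sigma * rho0 * L(r)), with L(r) the
    path length from r to the surface along +z. The z integral is analytic;
    the impact parameter b is integrated numerically (midpoint rule)."""
    R = 1.2 * A ** (1.0 / 3.0)
    kappa = SIGMA_NN * RHO0               # inverse mean free path in fm^-1
    total, db = 0.0, R / n
    for i in range(n):
        b = (i + 0.5) * db
        zmax = math.sqrt(max(R * R - b * b, 0.0))
        # integral over z of rho0*exp(-kappa*(zmax - z)) dz = (1 - e^{-2 kappa zmax})/sigma
        total += b * (1.0 - math.exp(-2.0 * kappa * zmax)) * db
    return 2.0 * math.pi * total / (A * SIGMA_NN)

for A in (12, 16, 28, 40, 90, 208):
    print(f"A = {A:3d}: T ≈ {glauber_transparency(A):.3f}")
```

With these assumptions the transparency falls from roughly 0.4 for carbon to below 0.2 for lead, qualitatively reproducing the A dependence discussed later; the RDWIA model replaces this classical absorption with a complex optical potential.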
Since the measured transparency depends upon the kinematic conditions and the spectrometer acceptance, we have to specify the phase space volume and use it for both the numerator and the denominator gol . Because of final state interactions, the distorted cross section depends upon the momentum of the emitted nucleon, whereas the undistorted cross section depends only upon the missing energy and the missing momentum. In the one-photon exchange approximation the cross section is given by the contraction between the lepton tensor and the hadron tensor. In the case of an unpolarized reaction it can be written in terms of the Mott cross section, a recoil factor Oxford ; Kelly1 , the energy and momentum of the emitted nucleon, and the out-of-plane angle between the electron scattering plane and the reaction plane. The coefficients multiplying the structure functions are obtained from the lepton tensor components and depend only upon the electron kinematics Oxford ; Kelly1 . The structure functions are given by bilinear combinations of the components of the nuclear current, where an average over the initial and a sum over the final states is performed fulfilling energy conservation. In our frame of reference the \(z\) axis is along the momentum transfer, and the \(y\) axis is perpendicular to the electron scattering plane. In RDWIA the matrix elements of the nuclear current operator are calculated using relativistic wave functions for initial and final states. The choice of the electromagnetic operator is a longstanding problem. Here we discuss three current conserving expressions deF ; Kelly2 ; Kelly3 , written in terms of the four-momentum transfer, the anomalous part of the magnetic moment, the Dirac and Pauli nucleon form factors, and the Sachs nucleon magnetic form factor. These expressions are equivalent for on-shell particles thanks to the Gordon identity. However, since nucleons in the nucleus are off-shell, we expect these formulas to give different results.
Current conservation is restored by replacing the longitudinal current and the bound nucleon energy as in Ref. deF . The bound state wave function is given by the Dirac-Hartree solution of a relativistic Lagrangian containing scalar and vector potentials. The ejectile wave function is written in terms of its positive energy component following the direct Pauli reduction method HPa , in terms of the scalar and vector potentials for the nucleon at the appropriate energy. The upper component is related to a Schrödinger-equivalent wave function by the Darwin factor; the latter is a two-component wave function which is a solution of a Schrödinger equation containing equivalent central and spin-orbit potentials obtained from the scalar and vector potentials. Hence, with the relativistic normalization, the emitted nucleon wave function is obtained.

III Transparency and the \((e,e'p)\) reaction

The \((e,e'p)\) reaction is a well-suited process to search for CT effects. The electron-proton cross section is accurately known from QED and the energy resolution guarantees the exclusivity of the reaction. Several measurements of nuclear transparency to protons in quasifree knockout have been carried out on several target nuclei and over a wide range of energies to look for the CT onset. Here, we calculated the nuclear transparency for closed shell or subshell nuclei at kinematic conditions compatible with the experimental setups for which the measurements of nuclear transparency have been performed, and for which the RDWIA predictions are known to provide a good agreement with cross section data. The bound state wave functions and optical potentials are the same as in Refs. meucci1 ; meucci2 , where the RDWIA results are in satisfactory agreement with knockout data. The relativistic bound-state wave functions have been obtained from the program ADFX of Ref. adfx , where relativistic Hartree-Bogoliubov equations are solved in the mean field approximation for the description of ground state properties of several spherical nuclei.
The model starts from a Lagrangian density containing sigma-meson, omega-meson, rho-meson and photon fields, whose potentials are obtained by solving the Klein-Gordon equations self-consistently. Moreover, finite range interactions are included to describe pairing correlations and the coupling to particle continuum states. The outgoing nucleon wave function is calculated by means of the complex phenomenological optical potential EDAD1 of Ref. chc , which is obtained from fits to proton elastic scattering data on several nuclei in an energy range up to 1040 MeV. Since no rigorous prescription exists for handling off-shell nucleons, we have studied the sensitivity to different choices of the nuclear current. The Dirac and Pauli form factors are taken from Ref. mud . In Fig. 1 our RDWIA results for the nuclear transparency, calculated with a single prescription for the nuclear current, are shown. The \(Q^2\) of the exchanged photon is varied over the measured range in constant kinematics. Calculations have been performed for selected closed shell or subshell nuclei (C, O, Si, Ca, Zr, and Pb) for which the relativistic mean field code easily converges. The agreement with the data is rather satisfactory. At the lowest \(Q^2\) our results lie below the data and are comparable with those presented in Ref. kellyt , where it was shown that the EDAD1 optical potential led to a smaller transparency, while better agreement was found using an empirical effective interaction which fits both proton elastic and inelastic scattering data. However, we have to note that the DWIA model of Ref. kellyt uses a different approach to obtain the single particle bound state wave functions. The calculations at higher \(Q^2\) are closer to the data and fall below them only for higher mass numbers. In Fig. 2 the energy dependence of the nuclear transparency is shown. The calculations have been performed for the same nuclei and at the same kinematics as in Fig. 1.
The calculated transparency is approximately constant for each nucleus and decreases with increasing mass number. In Refs. oneill ; garrow it is reported that the transparency data can be fitted with a simple exponential law in the mass number. Since our model is based on a single particle picture of nuclear structure, we expect our results to be sensitive to the discontinuities of the shell structure. These clearly appear in the changes in shape of the A-dependent curves. In Fig. 3 the sensitivity of the transparency calculations for C and Ca to different choices of the electromagnetic current is shown. The three current prescriptions give systematically different transparencies. A similar behavior was already found in Ref. meucci2 for the differential cross section. Here it is mainly due to the fact that the distorted cross section entering Eq. 1 depends on the choice of the current operator, whereas the plane wave cross sections are almost independent of the operator form.

IV Summary and conclusions

In this paper we have presented relativistic DWIA calculations of the nuclear transparency for the \((e,e'p)\) reaction over a range of momentum transfers. The transition matrix element of the nuclear current operator in RDWIA is calculated using the bound state wave functions obtained in the framework of the relativistic mean field theory, and the direct Pauli reduction method with scalar and vector potentials for the scattering state. In order to analyze the ambiguities in the choice of the electromagnetic vertex due to the off-shell character of the initial nucleon, we have used three current conserving expressions in our calculations. We have performed calculations for selected closed shell or subshell nuclei. The dependence of the nuclear transparency upon the mass number and the energy has been discussed.
The low-\(Q^2\) results underestimate the data, thus indicating the presence of too strong an absorptive term in the optical potential. In contrast, the results at higher \(Q^2\) are closer to the data. We find little dependence of the transparency on energy or momentum transfer for each nucleus. The sensitivity to different choices of the nuclear current has been investigated for C and Ca. The three current prescriptions lead to systematically different transparencies; this effect is due to the enhancement of the distorted cross section for one choice of the current with respect to the others.

I would like to thank Professor C. Giusti and Professor F. D. Pacati for useful discussions and comments on the manuscript.

• (1) S. Boffi, C. Giusti, F. D. Pacati, and M. Radici, Electromagnetic Response of Atomic Nuclei, Oxford Studies in Nuclear Physics (Clarendon Press, Oxford, 1996). • (2) J. J. Kelly, Adv. Nucl. Phys. 23, 75 (1996). • (3) J. Gao et al., Phys. Rev. Lett. 84, 3265 (2000). • (4) S. Malov et al., Phys. Rev. C 62, 057302 (2000). • (5) Y. Jin, D. S. Onley, and L. E. Wright, Phys. Rev. C 45, 1311 (1992); Y. Jin and D. S. Onley, Phys. Rev. C 50, 377 (1994); J. M. Udías, P. Sarriguren, E. Moya de Guerra, E. Garrido, and J. A. Caballero, Phys. Rev. C 48, 2731 (1993); Phys. Rev. C 51, 3246 (1995); J. M. Udías, P. Sarriguren, E. Moya de Guerra, and J. A. Caballero, Phys. Rev. C 53, R1488 (1996); J. M. Udías and J. R. Vignote, Phys. Rev. C 62, 034302 (2000). • (6) A. H. Mueller, in Proceedings of the XVII Rencontre de Moriond, 1982, edited by J. Tran Thanh Van (Editions Frontieres, Gif-sur-Yvette, France, 1982), p. 13; S. J. Brodsky, in Proceedings of the Thirteenth International Symposium on Multiparticle Dynamics, edited by W. Kittel, W. Metzger, and A. Stergiou (World Scientific, Singapore, 1982), p. 963. • (7) A. S. Carroll et al., Phys. Rev. Lett. 61, 1698 (1988). • (8) A. Leksanov et al., Phys. Rev. Lett. 87, 212301 (2001). • (9) G.
Garino et al., Phys. Rev. C 45, 780 (1992). • (10) T. G. O’Neill et al., Phys. Lett. B351, 87 (1995). • (11) D. Abbott et al., Phys. Rev. Lett. 80, 5072 (1998). • (12) K. Garrow et al., arXiv:hep-ex/0109027. • (13) W. R. Greenberg and G. A. Miller, Phys. Rev. C 49, 2747 (1994). • (14) J. J. Kelly, Phys. Rev. C 54, 2547 (1996). • (15) V. R. Pandharipande and S. C. Pieper, Phys. Rev. C 45, 791 (1992). • (16) P. Jain and J. P. Ralston, Phys. Rev. D 48, 1104 (1993). • (17) L. Frankfurt, M. Strikman, and M. Zhalov, Phys. Lett. B503, 87 (2001). • (18) A. Meucci, C. Giusti, and F. D. Pacati, Phys. Rev. C 64, 014604 (2001). • (19) A. Meucci, C. Giusti, and F. D. Pacati, Phys. Rev. C 64, 064615 (2001). • (20) T. de Forest, Jr., Nucl. Phys. A392, 232 (1983). • (21) S. Pollock, H. W. L. Naus, and J. H. Koch, Phys. Rev. C 53, 2304 (1996). • (22) J. A. Caballero, T. W. Donnelly, E. Moya de Guerra, and J. M. Udías, Nucl. Phys. A632, 323 (1998). • (23) S. Jeschonnek and J. W. Van Orden, Phys. Rev. C 62, 044613 (2000). • (24) Y. S. Golubeva, L. A. Kondratyuk, A. Bianconi, S. Boffi, and M. Radici, Phys. Rev. C 57, 2618 (1998). • (25) J. J. Kelly, Phys. Rev. C 56, 2672 (1997); 59, 3256 (1999). • (26) J. J. Kelly, Phys. Rev. C 60, 044609 (1999). • (27) M. Hedayati-Poor, J. I. Johansson, and H. S. Sherif, Nucl. Phys. A593, 377 (1995); Phys. Rev. C 51, 2044 (1995). • (28) W. Pöschl, D. Vretenar, and P. Ring, Comput. Phys. Commun. 103, 217 (1997). • (29) E. D. Cooper, S. Hama, B. C. Clark, and R. L. Mercer, Phys. Rev. C 47, 297 (1993). • (30) P. Mergell, Ulf-G. Meissner, and D. Drechsel, Nucl. Phys. A596, 367 (1996). Figure 1: The nuclear transparency for the quasifree \(A(e,e'p)\) reaction as a function of the mass number, for several values of \(Q^2\). Calculations have been performed for selected closed shell or subshell nuclei with mass numbers indicated by open circles. The lowest-\(Q^2\) data are from Ref. garino . The higher-\(Q^2\) data are from Ref. abbott .
Figure 2: The energy dependence of the nuclear transparency for C (open circles), O (open stars), Si (open squares), Ca (open triangles), Zr (open crosses), and Pb (open diamonds), at the same kinematics as in Fig. 1. Calculations were performed for the values marked by symbols. The C data are from Refs. garino ; oneill ; abbott . Figure 3: The electromagnetic current dependence of the nuclear transparency for C and Ca, at the same kinematics as in Fig. 1. Calculations were performed for the values marked by symbols.
Free particle

In physics, a free particle is a particle that, in some sense, is not bound by an external force, or equivalently not in a region where its potential energy varies. In classical physics, this means the particle is present in a "field-free" space. In quantum mechanics, it means a region of uniform potential, usually set to zero in the region of interest since the potential can be arbitrarily set to zero at any point (or surface in three dimensions) in space.

Classical free particle

The classical free particle is characterized simply by a fixed velocity v. The momentum is given by \[ \mathbf{p} = m\mathbf{v} \] and the kinetic energy (equal to the total energy) by \[ E = \frac{1}{2}mv^2 \] where m is the mass of the particle and v is the vector velocity of the particle.

Mathematical description

A free quantum particle is described by the Schrödinger equation: \[ -\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r},t) = i\hbar\frac{\partial}{\partial t}\psi(\mathbf{r},t) \] where ψ is the wavefunction of the particle at position r and time t. The solution for a particle with momentum p or wave vector k, at angular frequency ω or energy E, is given by the complex plane wave: \[ \psi(\mathbf{r},t) = Ae^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)} = Ae^{i(\mathbf{p}\cdot\mathbf{r} - Et)/\hbar} \] with amplitude A. As for all quantum particles, free or bound, the Heisenberg uncertainty principles \[ \Delta p_x\, \Delta x \geq \frac{\hbar}{2}, \qquad \Delta E\, \Delta t \geq \frac{\hbar}{2} \] (similarly for the y and z directions), and the De Broglie relations \[ \mathbf{p} = \hbar\mathbf{k}, \qquad E = \hbar\omega \] apply.
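The dispersion relation \(\omega = \hbar k^2/2m\) implied by the De Broglie relations can be checked numerically. The sketch below (1D, with the toy choices \(\hbar = m = 1\) and \(k = 2\)) verifies by finite differences that the plane wave \(\psi = Ae^{i(kx-\omega t)}\) satisfies \(i\hbar\,\partial_t\psi = -(\hbar^2/2m)\,\partial_x^2\psi\).

```python
import cmath

HBAR = M = 1.0
K = 2.0
OMEGA = HBAR * K * K / (2.0 * M)      # free-particle dispersion relation

def psi(x, t, amp=1.0):
    """Complex plane wave A * exp(i(kx - wt))."""
    return amp * cmath.exp(1j * (K * x - OMEGA * t))

def schrodinger_residual(x, t, dx=1e-3, dt=1e-4):
    """|i*hbar d_t psi - (-hbar^2/2m) d_xx psi| via central differences."""
    d_t = (psi(x, t + dt) - psi(x, t - dt)) / (2.0 * dt)
    d_xx = (psi(x + dx, t) - 2.0 * psi(x, t) + psi(x - dx, t)) / dx**2
    return abs(1j * HBAR * d_t + HBAR**2 / (2.0 * M) * d_xx)

print(schrodinger_residual(0.7, 0.3))   # small residual: the plane wave solves the equation
```

Replacing OMEGA by anything other than \(\hbar k^2/2m\) makes the residual jump to order one, which is exactly the statement that the dispersion relation is forced by the equation of motion.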
Since the potential energy is (set to) zero, the total energy E is equal to the kinetic energy, which has the same form as in classical physics: \[ E = T = \frac{\hbar^2 k^2}{2m} = \hbar\omega \]

Measurement and calculations

The integral of the probability density function \[ \rho(\mathbf{r},t) = \psi^*(\mathbf{r},t)\,\psi(\mathbf{r},t) = |\psi(\mathbf{r},t)|^2 \] where * denotes the complex conjugate, over all space is the probability of finding the particle in all space, which must be unity if the particle exists: \[ \int_{\mathrm{all\ space}} |\psi(\mathbf{r},t)|^2 \, d^3\mathbf{r} = 1 \] This is the normalization condition for the wave function. The wavefunction is not normalizable for a plane wave, but it is for a wavepacket. In this case, the free particle wavefunction may be represented by a superposition of free particle momentum eigenfunctions \(\phi(\mathbf{k})\), i.e., the Fourier transform of the momentum space wavefunction: \[ \psi(\mathbf{r},t) = \frac{1}{(2\pi)^{3/2}} \int_{\mathrm{all}\ \mathbf{p}\ \mathrm{space}} A(\mathbf{p})\, e^{i(\mathbf{p}\cdot\mathbf{r} - Et)/\hbar} \, d^3\mathbf{p} = \frac{1}{(2\pi)^{3/2}} \int_{\mathrm{all}\ \mathbf{k}\ \mathrm{space}} A(\mathbf{k})\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)} \, d^3\mathbf{k} \] where the integral is over all k-space, and \[ E = E(\mathbf{p}) = \frac{p^2}{2m}, \qquad \omega = \omega(\mathbf{k}) = \frac{\hbar k^2}{2m} \] (to ensure that the wavepacket is a solution of the free particle Schrödinger equation). Note that here we abuse notation and denote \(A(\mathbf{p})\) and \(A(\mathbf{k})\) with the same symbol, when we should denote \(\hat{A}(\mathbf{k}) = A(\mathbf{k})\), where A is the p-space and \(\hat{A}\) the k-space function.
The expectation value of the momentum p for the complex plane wave is \[ \langle \mathbf{p} \rangle = \langle \psi | -i\hbar\nabla | \psi \rangle = \int_{\mathrm{all\ space}} \psi^*(\mathbf{r},t)\,(-i\hbar\nabla)\,\psi(\mathbf{r},t) \, d^3\mathbf{r} = \hbar\mathbf{k} \] and for the general wavepacket it is \[ \langle \mathbf{p} \rangle = \int_{\mathrm{all\ space}} \psi^*(\mathbf{r},t)\,(-i\hbar\nabla)\,\psi(\mathbf{r},t) \, d^3\mathbf{r} = \int_{\mathrm{all}\ \mathbf{k}\ \mathrm{space}} \hbar\mathbf{k}\, |A(\mathbf{k})|^2 \, d^3\mathbf{k} \] The expectation value of the energy E is (for both the plane wave and the general wave packet; here one can observe the special status of time, and hence energy, in quantum mechanics as opposed to space and momentum) \[ \langle E \rangle = \langle \psi | i\hbar\tfrac{\partial}{\partial t} | \psi \rangle = \int_{\mathrm{all\ space}} \psi^*(\mathbf{r},t)\left(i\hbar\frac{\partial}{\partial t}\right)\psi(\mathbf{r},t) \, d^3\mathbf{r} = \hbar\omega \] For the plane wave, solving for k and ω and substituting into the constraint equation yields the familiar relationship between energy and momentum for non-relativistic massive particles \[ E = \frac{p^2}{2m} \] In general, the identity holds in the form \(E = \frac{p^2}{2m}\), where p = |p| is the magnitude of the momentum vector. The group velocity of the plane wave is defined as \[ v_g = \frac{d\omega}{dk} = \frac{\hbar k}{m} = \frac{p}{m} \] which turns out to be the classical velocity of the particle. The phase velocity of the plane wave is defined as \[ v_p = \frac{\omega}{k} = \frac{E}{p} = \frac{p}{2m} = \frac{v}{2} \]

Relativistic quantum free particle

There are a number of equations describing relativistic particles: see relativistic wave equations.
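The group-velocity statement can itself be verified numerically. A sketch (1D, \(\hbar = m = 1\), toy Gaussian spectrum centered at \(k_0 = 5\); all values here are illustrative choices): build the wavepacket as a discretized superposition of plane waves with \(\omega(k) = k^2/2\), track \(\langle x \rangle\) at two times, and compare the velocity of the packet center with the classical value \(p/m = k_0\).

```python
import cmath

K0, SIGMA_K = 5.0, 0.5      # toy packet: center and width of the k-spectrum

def packet(x, t):
    """Discretized superposition of plane waves with a Gaussian A(k)."""
    val, dk = 0.0 + 0.0j, 0.05
    k = K0 - 4.0 * SIGMA_K
    while k <= K0 + 4.0 * SIGMA_K:
        amp = cmath.exp(-((k - K0) ** 2) / (4.0 * SIGMA_K**2))
        val += amp * cmath.exp(1j * (k * x - 0.5 * k * k * t)) * dk
        k += dk
    return val

def mean_x(t):
    """<x> from |psi|^2 on a grid wide enough to contain the packet."""
    xs = [i * 0.1 for i in range(-80, 81)]
    w = [abs(packet(x, t)) ** 2 for x in xs]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

v_group = (mean_x(0.2) - mean_x(0.0)) / 0.2
print(f"packet velocity ≈ {v_group:.3f} (classical p/m = {K0})")
```

The packet center moves at essentially \(k_0\), while the phase fronts inside the envelope move at \(\omega/k \approx k_0/2\), half the packet speed, illustrating the \(v_p = v/2\) result above.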
Wednesday, November 28, 2018

Killing the Schrödinger's Cat, at last and for good: part I

This Foreword was prompted by the publication of a new book about quantum mechanics. I love reading Lee Smolin, but I'm not going to read this book. In this book he first criticizes all existing interpretations of quantum mechanics, and then he promotes his own. However, I prefer sticking to the interpretation which passes the Occam's razor test (and which is described in the series of my posts). There are two major facts about quantum mechanics everybody needs to know. The first one is that quantum mechanics works. The second one is that quantum mechanics is not a complete theory – that is why there are several interpretations of why it works. There is, however, a third fact: this situation is not the first, not the only, and I'm pretty sure not the last one in physics, and in science in general, when a working theory exists but the reasons for why it works aren't clear, and different factions of the scientific community hold different views on the matter. For example, more than 2000 years ago the world knew only a few true physical theories – one of those was Archimedes' theory of the lever. The lever had been known for thousands of years before Archimedes, but he was the first one to give a detailed mathematical description of how it works. Yet only a couple of thousand years later could physicists explain why Archimedes' theory worked. This may not be a perfect example, but it works as an illustration. Physics is not alone; other sciences have similar examples: no one denies evolution – species do evolve. But the theory of evolution, the one that answers the question of why species evolve, is a different story; Darwinism is not the only one, there are, or at least there were, alternatives. So, quantum theory works, we know how to use it to describe the quantum world, but we do not know why it works.
And the only reason why so many physicists are still bothered by the latter fact is that quantum theory is very different from clear and logical classical mechanics. So far, all attempts to fit quantum theory into the same logical frame that works for classical mechanics have failed. Most physicists believe that because a classical, i.e. macroscopic, world represents a composition of a large number of quantum, i.e. microscopic, worlds, the logical and mathematical description of both worlds must be connected in a clear and logical way – classical laws must be "derived" from quantum laws, and quantum laws must be "derived" from classical laws. Here we already run into a debate – because different scholars attach different meanings to the term "derived". At the dawn of the quantum era, to demonstrate how different quantum mechanics is from classical mechanics, physicists invented paradoxes. One such well-known paradox was the "Schrödinger's cat", and another the "EPR paradox". What readers need to understand is that a hundred years ago those paradoxes played an important role as discussion generators. But today, a hundred years later, we have a much better understanding of what works and what does not work in quantum mechanics, including the meaning of those old paradoxes. A historian may keep uncovering more and more nuances in those hundred-year-old discussions. But a physicist needs to focus on the current understanding. And again, the history of science knows very similar situations, when for many years a paradox was the nucleus of many heated discussions, but was eventually resolved and now has only historical value. My favorite paradox of this type is Zeno's paradox that says that a runner cannot ever run a mile (the Dichotomy paradox). Now we know that the sum of infinitely many terms can have a finite value.
In conclusion: we know that quantum mechanics is very different from classical mechanics, we know that we don't know why quantum theory works, and that realization leads to different theories about the quantum theory, known as interpretations. How do we select the one we like the most? Everyone has a different approach. I always use Occam's razor and select the explanation which requires the least amount of reasoning, the smallest number of assumptions, and the most natural assumptions. Such an interpretation of quantum mechanics exists. And in the series of posts on this page I have tried to offer a description of this interpretation and explain why it is the best one – so far.   P.S. This post is one of the posts on the origins of quantum mechanics:   The Core Assumption of Every Known Single-Photon Experiment Is Wrong;   Freeing The Schrödinger's Cat I (has an additional discussion of the general methodology of science);  Freeing The Schrödinger's Cat II;   The Uncertainty Principle;   The Origins of Quantum Mechanics; and   Part II of this book review. From my first encounter with Quantum Mechanics I was fascinated by it. I also immediately "knew", or rather was convinced, that Quantum Mechanics was, and still is, not a theory, but a "cooking recipe". A mathematically and conceptually complicated, sometimes counter-intuitive, but just a recipe: a prescription of actions found via trial and error, exactly like cooking. As a soon-to-be theoretical physicist (who had no idea that later in life he would switch to education), I was convinced that since the world is a united, undivided space-time continuum filled with matter in the form of moving objects and changing fields, there has to be one united, undivided universal theory describing the whole world.
When a scientist needs to describe a specific subset of natural phenomena, that universal theory would be used in the simplified form of a specific theory best suited for that type of phenomena. For example, to describe the mechanical motion of a small number of slowly moving objects one would use Newtonian Mechanics. But Newtonian Mechanics represents a special case of Relativistic Mechanics; the special case described by the equations derived as a mathematical limit of the equations of Relativistic Mechanics when the speed of motion of all objects is much lower than the speed of light in vacuum. Physicists have established similar relationships between Newtonian Mechanics and the laws of Thermodynamics and the Gas Laws; between Newtonian Mechanics and the Navier-Stokes equations describing the motion of fluids; and between the General Theory of Relativity and the Special Theory of Relativity. Quantum Mechanics, as a highly successful recipe, formally includes a transition from the laws of Quantum Mechanics (e.g. in the form of the Schrödinger equation) to Newtonian Mechanics (e.g. in the form of Newton's laws). In a way we can say that Quantum Mechanics explains why Newton's laws work. But there is not yet a commonly accepted theory which explains why Quantum Mechanics works. There are only possible interpretations of that. After getting an A in my Quantum Mechanics course, I did not lose interest in the fundamentals of Quantum Mechanics. In time, I read four or five more standard textbooks on Quantum Mechanics, and at least as many books on its philosophy and fundamental principles (I even posted a couple of pieces of my own; one on the Heisenberg Principle, and another on Quantum Entanglement). The recent book on the matter, "What Is Real?" by Dr. Adam Becker, attracted my attention with good reviews, so I purchased it.
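The "mathematical limit" relationship mentioned above can even be watched numerically. In relativistic mechanics the momentum is γmv, with the Lorentz factor γ = 1/√(1 − v²/c²); when v ≪ c, γ is indistinguishable from 1 and the Newtonian mv is recovered. A tiny sketch (Python; the sample speeds are arbitrary illustrations of my own):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v):
    """gamma = 1/sqrt(1 - v^2/c^2); it multiplies m*v in relativistic momentum."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# From a sprinter to a jet to a fast spacecraft to a near-light speed:
for v in (10.0, 250.0, 1.1e4, 2.9e8):
    print(f"v = {v:9.1e} m/s   gamma = {lorentz_factor(v):.12f}")
# For the first three speeds gamma equals 1 to about nine decimal places;
# only near light speed does it grow, and relativistic corrections matter.
```

This is exactly the sense in which Newtonian Mechanics is a "special case": not a separate theory, but the v ≪ c limit of the relativistic equations.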
When I started to read it (and I am still in the process – unfortunately, my reading time is not as plentiful as I would wish), I knew I would not learn much new about the origins and philosophy of Quantum Mechanics. But I am always eager to learn something new about the history of scientific battles between different scholars and different groups of scientists, which happen a lot, even in physics. People seem to think that science is something like wisdom inscribed on tablets, and scientists just dig them out and reveal them to the public as a discovery. In reality, the practice of science is not much different from all other human practices (think of acting, for example); it develops as the result of a constant struggle between different groups with opposing interests (funds, fame). The only difference between science and other human practices is that in science people have a procedure which eventually allows them to differentiate between "winners" and "losers" (BTW: that procedure is called an "experiment"). Scientists are human, and they act like all humans do – in their own best interests. If a scientist has a strong opinion about something, his or her brain just rejects any ideas which do not fit those views. That is why Max Planck said: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." How true, how true. And BTW, many scientists simply "don't know what they do". I mean, they do know what project they work on, the goals of that project, what they want to achieve. But if you ask them about the big picture, if you ask what science is and how their project fits into science in general, you don't get an articulate answer, because most scientists simply don't think about things like that. "What do you do? Science. What is science? Hmmm ... So, you don't know what you do?
Hmmm ..." No wonder scientists cannot clearly explain the difference between science and religion, and STEM education is in a state of struggle.  There is a common misunderstanding that the motivation for a scientist to do science is curiosity. First, no one, not even a scientist, needs curiosity to achieve success in life. In science, the NSF – the most important funding agency – does not value curiosity; it values appeal and appearance. Curiosity is just a psychological predisposition to trying things. It is like hunger. You feel hunger, you start looking for food. But maybe in time you figure out that making food can also make you rich and famous, and you become a chef. Same in science; the true locomotive of science (read: scientists) is the two big Fs – Fame and Funds. While growing up they – future scientists – got fascinated by something. Maybe it was a book, a parent, a friend, a teacher, a movie, or something else that influenced them, but the main reason they started doing science is simply that they liked it and were good at it. One thing led to another and here they are, holding a PhD. But the majority of scientists are practitioners; they practice in the field of their science and they don't usually think about the philosophical basis or fundamental principles of the science they do. They just have no time and no taste for that. Dr. Adam Becker's book promises to describe some examples of the battles between different groups of physicists about the origins of Quantum Mechanics. But soon after I started reading the book, I ran into multiple examples of contradictory or illogical statements. This fact prompted me to write this piece. I hope my further reading will inspire further writing (this is why I called this one "Part I").
I would like to start with these two quotes: "And one position in that debate – held by the majority of physicists and purportedly by Bohr – has continually denied the very terms of the debate itself" (page 5). "The popularity of this attitude to quantum physics is surprising" (page 6). These statements indicate that Dr. Becker is simply not familiar with views beyond the ones represented by English-speaking writers. For example, in all Russian textbooks on Quantum Mechanics the debate has been settled: the quantum world is real, it can and has to be described in terms of observable variables; there is still, though, a technical debate on how to reconcile the quantum (microscopic) world description (i.e. Quantum Mechanics) with the human (macroscopic) world description (i.e. Newtonian Mechanics). In Russia, physics is based on a specific philosophy of science, called Materialism, and the philosophy described by Dr. Becker is called Positivism and deemed wrong (for better or worse – that is not the topic of this piece). So, what Dr. Becker should have written is "The popularity of this attitude to quantum physics in the English-speaking sciences is surprising" (with the exception of David Bohm). From the "Introduction" on, Dr. Becker uses statements which already depend on a specific interpretation of Quantum Mechanics, without mentioning that fact. For example, he writes: "The atom doesn't split, it doesn't take one path and then the other – it travels down both paths, simultaneously" (page 1). This description represents only one of the possible interpretations of the motion of a quantum particle discussed in the physics world. Among other interpretations there is one, namely the statistical interpretation, which states that an atom takes one and only one path; another atom in the same situation may take a different path, and there is math which tells us the chances for each path to be taken.
I am not saying that the statistical interpretation is better (it is); I just want to stress that the assertion that an atom "travels down both paths, simultaneously" describes the author's point of view, and not the only possible point of view, but readers are led to believe that other points of view do not exist. And that is on page 1 of the "Introduction". The view that when a quantum particle is presented with two paths to travel, it "doesn't split, it doesn't take one path and then the other – it travels down both paths, simultaneously" is very common among scientists who exercise the philosophy of Positivism. This view, however, leads to a very strange world picture, which, from my point of view, cannot be correct. What if a quantum particle is presented with three paths to travel, or four, or five, or seven thousand thirty-six? The same logic forces us to say that it "doesn't split – it travels down seven thousand thirty-six paths, simultaneously". What if there is no physical obstruction at all and a quantum particle can travel in any direction using any – meaning, all! – path(s)? In that case, it "doesn't split – it travels all paths, simultaneously". Meaning, it is smeared all over the universe. So, it is not a particle any more, but an actual physical field. And that view should be applied to all existing particles at the same time, which now are all fields smeared all over the universe. This view does exist ("a wave function is a real physical field"). For example, the description of one of the "classical" quantum experiments on single-electron diffraction begins with this statement (from: "Demonstration of single-electron buildup of an interference pattern", A. Tonomura, J. Endo, T. Matsuda, T. Kawasaki, and H. Ezawa, American Journal of Physics 57, 117 (1989); doi: 10.1119/1.16104). The authors should have written "According to one of the interpretations ..." (more on this in Appendix II and other publications).
However, with this view it is hard to explain how a single electron makes a spot on a photo-plate when it collides with the plate (and many more experiments). This discussion would require inventing a process similar to the "collapse of a wave function" but applied to an actual physical field (before it hits a screen, the field – in general – is all over the universe, and a fraction of a second later it is only at this one point). And another very unrealistic event would have to happen right when an electron is ejected and starts traveling: inside the heated element it was local, but as soon as it is out, it immediately fills all of space in the form of a wave, or rather a wave packet, that still has no definite size, only an "uncertainty" in space.  It is just easier to choose a different interpretation, namely the statistical one; and the Occam's Razor principle says, if it's easier, it's better, so – use that one. It also automatically answers the question: "Why aren't our keys ever in two places at once?" (page 2). Because nothing is; not keys, not stars, not atoms, not electrons – nothing. I do understand the need of an author to make some impressive statements or ask mysterious questions, but a scientific book should not mislead readers. BTW: many authors fail to inform readers that what they write is only one possible interpretation of quantum mechanics; here is another example – from the "Scientific American"! A human mind is very powerful; it can imagine things which do not exist in nature, like a Unicorn, or a Pegasus, or a Griffin, etc. A human mind can also generate questions which make no sense, for example "How old is the first Centaur?", or "Who won the Super Bowl on Mars in 1234 AD?" Asking a question which makes no sense is nonsense. An example of such nonsense is asking "where an electron is" (page 16) without having described the specific physical situation. Asking "Where is an electron in a Hydrogen atom?" is nonsense.
Asking "Where is an electron when it hits a photo-plate?" is a legitimate question. The discussion of the meaning of "measurement" (page 17) without bringing in the different interpretations of Quantum Mechanics is pointless, because the various interpretations of Quantum Mechanics differ by the very definition of "measurement". On page 17 Dr. Becker writes: "The predictions of quantum physics are generally in terms of probabilities, not certainties. And that's strange...". Well, using probabilities may seem strange to a regular person, but it definitely cannot be seen as strange by a scientist. The fact that a probabilistic function (e.g. a wave function) is defined by a deterministic equation (e.g. the Schrödinger equation) is not new and no different from other functions and equations used in physics for describing probabilistic behavior (e.g. the N-particle distribution function, whose time evolution is governed by the Liouville equation). Probability is as much a part of physics as determinism. What drastically separates Quantum Mechanics from any other probabilistic theory is not the fact that we have to calculate probabilities of different events, but the fact that we cannot use any equation which describes the probabilities per se, which tells us how to calculate those probabilities on their own. Instead, we have to calculate a wave-function first; only then can we find the probabilities we need. The recipe (which has several, but mathematically equivalent, forms) is simple: 1. Guess the Hamiltonian for your system (there are some hints for that); 2. Solve the Schrödinger equation for the eigenstates (it does not matter what they are, it's just math); 3. Calculate the amplitudes (it does not matter what they are, it's just math) of the eigenstates for a wave-function of your choice, including their time evolution, if you want; and then 4. Calculate the squares of the absolute values of those amplitudes (just some more math) – the resulting numbers will give you the probabilities you are looking for. Why do we have to use a wave-function, and not actual probabilities? Why does the universe make us use wave-functions instead of actual probabilities in the first place? Here is where scientists get divided into different groups. Some just ignore the question or say that this question does not make sense, so, "shut up and calculate". Some say that this question makes sense, but it is not worth spending time searching for the answer, or we will never be able to find the answer, so, again, just "shut up and calculate". And some are still trying to find the answer to this question – those people represent a tiny fraction of all scientists, but they are the people who will eventually make a breakthrough in quantum physics. This feature of Quantum Mechanics, i.e. the need to use a wave-function instead of probabilities, is the root of all the mysteries of Quantum Mechanics (go ahead and just Google "mysteries of Quantum Mechanics"). Finally, let's talk about the main topic of this piece, i.e. the "Schrödinger's Cat" thought experiment. I have read many interpretations of that experiment. If I wanted to discuss the history of physics, I would probably have to learn German and then read the original paper to make my own interpretation of what Schrödinger wanted to say. However, since we are talking about the physics behind that experiment, we don't have to go through the whole ninety-year-old discussion. All we need is the description of the experiment, and then we can make our own interpretation, based on our own contemporary version of the meaning of Quantum Mechanics. I start with the copy of the description of the experiment provided by Dr. Becker in his book.
"Schrödinger imagined putting a cat in a box along with a sealed glass vial of cyanide, with a small hammer hanging over the vial. The hammer, in turn, would be connected to a Geiger counter, which detects radioactivity, and that counter would be pointed at a tiny lump of slightly radioactive metal." (page 3). BTW: Schrödinger did not apply quantum mechanics to a macroscopic object (it was the radioactive material); he was much more accurate than many contemporary physicists who invent thought experiments implying a direct application of quantum mechanics to a large system, and then point at a paradox. Of course! When you apply a theory beyond its limits you get "paradoxes" – because you are making a trivial mistake! Back to the cat. I am, as we all are, an external observer who can open the box, look at the cat and conclude whether the cat is dead or alive. First, let's make sure that the cat can live as long as we need it to, so the box also has installed inside it all the facilities required to keep the cat alive. The only reason for the cat to be dead is if the hammer breaks the vial with the poison, and the only reason for that to happen is if the Geiger counter registers a particle, and the only reason for that to happen is if the radioactive metal emits that particle. For a regular person, this whole setup looks pretty much ludicrous already, so we can make it as ludicrous as we want, if it helps to achieve our goal, which is to understand what is happening inside the box. So, let's imagine that we have not just one but many, thousands, millions, maybe even billions of identical boxes with identical cats waiting for their fate. We created all those boxes at exactly the same time, we waited exactly the same time period, and we opened all the boxes at exactly the same instant. Since, when we open all the boxes and look at all the cats, in every single box every single cat can only be dead or alive, all we can see is: 1. All cats are alive. 2. All cats are dead.
3. Some cats are alive and others are dead. If all cats are alive, the best explanation is that we made a mistake and instead of using a radioactive metal we placed some stable material. If all cats are dead, the best explanation is that we didn't have enough boxes, or the radioactive metal had much higher radioactivity than we thought. But since it is our thought experiment, we can imagine that we are smart enough or lucky enough to have the right number of boxes and the right type of radioactive metal, so once we open all the boxes, what we see is that some cats indeed are alive and some cats indeed are dead. In order to draw this conclusion about what we would see if our thought experiment were actually to happen, all we need to know is that within a specific time interval a radioactive metal may or may not emit a particle. Based on this property of a radioactive metal we can say that, indeed, the cat inside each box may turn out to be either alive or dead. However, there is simply no logical reason to state that until the box is opened the cat inside it is dead-and-alive at the same time. ("No logical reason" does not mean "no reason at all"; such a reason, for example, could be "I just want it to be".) Let's assume for the moment that all cats in all boxes were in the dead-and-alive state right until we opened the doors. Let's assume that it was the action of opening the door of each box which led some cats to become dead and some cats to remain alive. In that case, all the alive cats would look exactly the same – happy. But also, all the dead cats would look exactly the same, with no sign of any deterioration. My common sense and everyday experience don't believe in that picture.
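The many-boxes version is easy to simulate under the statistical interpretation: each lump of metal emits its particle at an independent, exponentially distributed random moment (the standard law of radioactive decay), and each cat is plainly alive or dead at every instant. A sketch (Python; the decay rate and waiting time are made-up numbers of my own):

```python
import math
import random

random.seed(1)  # reproducible run

DECAY_RATE = 0.1   # lambda, decays per hour (invented for illustration)
T_OPEN = 5.0       # all boxes are opened after 5 hours
N_BOXES = 100_000

# Each box gets one definite, random decay time - no superposition needed.
decay_times = [random.expovariate(DECAY_RATE) for _ in range(N_BOXES)]
alive_fraction = sum(t > T_OPEN for t in decay_times) / N_BOXES

print("measured fraction alive:", alive_fraction)
print("predicted exp(-lambda*T):", math.exp(-DECAY_RATE * T_OPEN))
# Counting cats recovers the survival probability exp(-0.5) ~ 0.607,
# and every dead cat has its own definite time of death.
```

Nothing in this picture is dead-and-alive; the statistics of the opened boxes come out right anyway, which is exactly the point of the statistical interpretation.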
My common sense and everyday experience say that different cats in different boxes would die at different times (right after a radioactive particle left the metal and entered the counter), so when all the boxes are opened, we could see one cat that died a long time ago, and another cat that just recently passed away. The time of death would be set by the time the vial with the poison was broken, which was set by the time the counter registered a particle, which was set by the time the radioactive metal emitted that particle via radioactive decay. Counting the numbers of dead and alive cats, and going back to an experiment with a single box, we could make a good approximation of the probability of finding the cat dead or alive when only one box is to be opened. In the experiment with only one box, all we can say is: 1. The cat inside the box is either alive or dead, and before we open the box there is no way to know which. 2. There is a specific instant in time before which the cat inside the box is alive and after which the cat is dead, but there is no way to predict the value of that instant (which, in principle, includes such values as "always" and "never"). 3. The cat inside the box may die, and if that happens, when we open the box we will see a dead cat, and we may be able to find out how long the cat had been dead before the box was opened (hence, when after the start of the experiment the cat died), but before we open the box we will never be able to predict when exactly the cat will die. 4. Using multiple experiments we may be able to find the chance of finding the cat alive (or the time distribution of the moments when the cat dies). Of course, Schrödinger used a cat just for dramatic effect – a life-or-death situation definitely sharpens the argumentation of the case. But it does not have to be a cat. Instead of a vial and a cat one could use a timer with a button pushed by the falling hammer.
The physical behavior of the system would not change. It would still describe the same act of an interaction between a microscopic and a macroscopic system (a.k.a. "measurement"). My interpretation of the "Schrödinger's Cat" thought experiment implies that the statement "the subatomic particles in the metal... don't know whether they should stay or they should go. So, they do both" (page 3) is wrong (or at least depends on a specific interpretation of Quantum Mechanics, hence is inaccurate). The statistical interpretation of Quantum Mechanics paints a much simpler, hence less dramatic, but hence clearer, hence more practical, hence more workable picture: the subatomic particles in the metal remain intact until they, independently from each other, at a moment which may be different for different particles, at a moment which is intrinsically not predictable, "go". A patient and accurate experimenter, though, can find out (with reasonable certainty) what the chance is for a given particle "to go" (during a given time interval). I hope that now all the "cat killing" has finally come to its logical end. Cats, timers, particles, any existing objects, cannot "take two paths at the same time", or "stay and go simultaneously". They can do one or the other; for each choice there is a specific probability of it happening; that probability cannot be found on its own but requires calculating a weird and strangely behaved wave-function; no one knows why. After all this cat killing I would like to finish on a positive note. This piece has been the result of reading the first twenty pages of the book. I am looking forward to reading more. I hope that Dr. Becker is better as a historian of physics than as a philosopher of physics. The second post in this series:  The first third of this piece was written at a car dealership while waiting for an oil change and stuff. The second third of it was written in traffic using a voice recognition app.
Only the last third was written during a relatively stable part of the day with a manageable number of interruptions. As a person with little patience who was eager to publish this post, I may have left some typos in the text. Please feel free to inform me of them, if you find any. But if you found some interesting ideas and want to develop them, the courtesy of mentioning the source would be nice. Thank you. Appendix I I am perfectly aware of the fact that no peer-reviewed magazine would publish my piece: my writing style is too loose, and I have no citations. Well, I do have one, of the book I review, but the format is wrong, and other references are missing. Although, I do not really understand why one needs to explicitly show references which are already well known and openly available (Google-able), like "my" Max Planck quote. Anyway, if you enjoyed the reading, please feel free to share it with your colleagues. But maybe an even bigger reason to share would be if you hated what you read ("I found such a piece ..., you should definitely check it"). Appendix II Let's go to the source! In his lectures Richard Feynman proposed a thought experiment with electrons traveling through two holes (BTW: not slits, as many authors mistakenly "quote"; and he did not spend much time talking about the case with one "slit" closed; instead he spent a lot of time talking about the case of a photon scattering off an electron – an experiment that no one has yet made work). He stated that electrons are registered in "lumps". Then he stated Proposition A: "Each electron either goes through hole 1 or it goes through hole 2." Then he arrived at: "For electrons: P12 ≠ P1 + P2." And then he finishes: "... since the number that arrives at a particular point is not equal to the number that arrives through 1 plus the number that arrives through 2, as we would have concluded from Proposition A, undoubtedly we should conclude that Proposition A is false.
It is not true that the electrons go either through hole 1 or hole 2." And yet, in the next chapter he writes (all bold fonts are mine, not Feynman's): 1. "when there are two ways for the particle to reach the detector, the resulting probability is not the sum of the two probabilities"; 2. "When a particle can reach a given state by two possible routes, the total amplitude for the process is the sum of the amplitudes for the two routes considered separately."; 4. "the amplitude for the process in which the electron reaches the detector at x by way of hole 1"; 5. "the amplitude to go from s to x by way of hole 1 is equal to"; 6. "The electron goes from s to 1 and then from 1 to x."; 7. "The electron can go through hole 1, then through hole a, and then to x; or it could go through hole 1, then through hole b, and then to x; and so on."; 8. "amplitude that an electron going through slit 2 will scatter a photon"; 9. "the amplitude that an electron goes via slit 2 and scatters a photon"; 10. "two factors: first, that the electron went through a hole, and second"; 11. "when an electron passes through hole 2"; 12. "when the electron passes through hole 1". These twelve quotes (there are more) clearly show that the father of this experiment believed that an electron could travel through one hole/slit, or through another one, but he never considered an electron traveling through both holes at the same time. The whole idea of a path integral is based on the assumption that an electron is always located somewhere, i.e. it is always localized, because it is always traveling through this point and then this one, and then this one, etc.
A path does not split, there are no forks (even when a particle circles back, making a loop, time keeps running ahead, and on each path a particle is always located at one place at a time); hence, there are no instances when an electron is located at two places at the same time (the path integral was a brilliant idea of a genius: just assign an amplitude to each possible path and add them up! So obvious! After you learn it. That is why many physicists feel it's natural, and do not think about its implications for the fundamentals of quantum mechanics, including the interpretation of wave-particle duality). This seems to contradict his own conclusion about Proposition A. He also wrote: "is not true that the lumps go either through hole 1 or hole 2, because if they did, the probabilities should add". But then, as I showed, in his further analysis he was fine with an electron traveling through one hole or another. Another contradiction. He wrote: "the probability of arrival through both holes". But "arrival through" is not the same as "traveling through both at the same time". So, what did he really mean? I believe that when Feynman stated his Proposition A, he simply did not do it as accurately as he should have. He should have said: Proposition A: "Each electron either goes through hole 1 or it goes through hole 2 – in a classical sense." And this statement is false. Based on the next chapter (experiments with light), we understand that when he said "It is not true that the electrons go either through hole 1 or hole 2", he meant "It is not true that we are always able to know whether the electrons go through hole 1 or hole 2 – unless the interference between the two paths is destroyed". Because later he told us that an electron does go through hole 1 or through hole 2 – however, in a different, non-classical sense, with the use of amplitudes instead of probabilities.
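The amplitude rule Feynman actually uses can be checked with three lines of arithmetic: add the complex amplitudes for the two routes first, then square the absolute value. A toy sketch (Python; the amplitudes and the relative phase are invented numbers, not taken from the lectures):

```python
import cmath
import math

# Complex amplitudes for reaching the same point x on the screen:
a1 = 0.6 * cmath.exp(1j * 0.0)   # via hole 1
a2 = 0.6 * cmath.exp(1j * 2.5)   # via hole 2 (relative phase of 2.5 rad)

p1 = abs(a1) ** 2        # hole 2 closed: probability via hole 1
p2 = abs(a2) ** 2        # hole 1 closed: probability via hole 2
p12 = abs(a1 + a2) ** 2  # both holes open: amplitudes add BEFORE squaring

print("P1 + P2 =", p1 + p2)
print("P12     =", p12)
interference = p12 - (p1 + p2)   # equals 2*Re(a1 * conj(a2))
print("interference term:", interference)
# P12 != P1 + P2; the extra term oscillates with the relative phase,
# which is exactly the pattern on the screen - and exactly why
# Proposition A fails "in a classical sense".
```

Note that each amplitude here is still an amplitude "by way of hole 1" or "by way of hole 2", in line with the quotes above; nothing in the arithmetic requires one electron to take both routes.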
And if even the father of this experiment, one of the most intelligent people, one of the deepest physicists, could allow himself to be not very clear, what can we expect from everyday science writers? Someone should teach them actual physics. Many physicists have analyzed the original Feynman experiment; for example, J. D. Cresser writes (2009): "If electrons are particles, like bullets, then it seems clear that the electrons go either through slit 1 or through slit 2, because that is what particles would do. The behavior of the electrons going through slit 1 should then not be affected by whether slit 2 is opened or closed as those electrons would go nowhere near slit 2. In other words, we have to expect that P12(x) = P1(x) + P2(x), but this is not what is observed. It appears that we must abandon the idea that the particles go through one slit or the other. But if we want to retain the mental picture of electrons as particles, we must conclude that the electrons pass through both slits in some way, because it is only by 'going through both slits' that there is any chance of an interference pattern forming. After all, the interference term depends on d, the separation between the slits, so we must expect that the particles must 'know' how far apart the slits are in order for the positions that they strike the screen to depend on d, and they cannot 'know' this if each electron goes through only one slit. We could imagine that the electrons determine the separation between the slits by supposing that they split up in some way, but then they will have to subsequently recombine before striking the screen since all that is observed is single flashes of light. So, what comes to mind is the idea of the electrons executing complicated paths that, perhaps, involve them looping back through each slit, which is scarcely believable.
The question would have to be asked as to why the electrons execute such strange behavior when there is a pair of slits present, but do not seem to when they are moving in free space. There is no way of understanding the double slit behavior in terms of a particle picture only.” And then one goes on to build an elaborate picture of a wave packet that is a particle and a wave at the same time, etc. And then, following Feynman, he discusses another mystery: when we know through which hole an electron traveled (e.g. using light), we destroy the interference. Only when we do not know how exactly electrons travel through the holes does interference exist. Why? No one knows. The answer, however, lies in the very statement used to prove that electrons cannot travel through one hole or the other. Let’s read it one more time. But abandoning “the idea that the particles go through one slit or the other” is not the only logical solution. Another one is to abandon the previous statement, which said: “The behavior of the electrons going through slit 1 should then not be affected by whether slit 2 is opened or closed as those electrons would go nowhere near slit 2.” If particles do travel through one hole or another, and if the interference pattern does exist, it means that this statement is wrong. And that means that the behavior of the electrons going through slit 1 is affected by whether slit 2 is opened or closed, even though those electrons go nowhere near slit 2. Or, in general, when two slits are open an electron (and a photon!) behaves differently than it does when one slit is open; when it travels to the screen with holes, it "knows" already how many holes are open there. How? That is a different discussion (closely related to the discussion of the nature of quantum entanglement; for example, here). But if we assume that an electron “knows” or “feels” whether hole 2 is open or closed, we resolve the contradiction.
And when we shine a light on it, an electron actually “forgets” about the existence of the other hole and travels as if only one hole exists – hence the destruction of interference. So, there is a way of understanding the double slit behavior in terms of a particle picture, only-ish. The question now is: how does an electron “know” about the state of the other slit? The answer is – in the same way two entangled particles "know" about the state of each other.

Appendix III

I have been writing in many pieces that people often confuse an actual physical object with its abstract description. That happens a lot when they write about a wave function – people think of it as an actual physical wave. A particle is a wave packet traveling in space. OK. Does it have a definite size, a boundary between the region filled with matter and energy and the rest of the universe? If it does, is it just a large particle? If not, if all the mass and energy is asymptotically "smeared" over the whole universe (a mathematical cut-off exists, like an "effective radius", but it is mathematical – like a half-life for a radioactive element), how does all that mass and energy get smeared over the whole universe the moment a particle leaves an atom, and then "collapse" back when it hits a screen? These and other questions make this picture too complicated – it is not worth fighting for. But in that case one needs a different, simpler model. And that model exists – a particle is always a particle; it is just not classical, hence it behaves in a non-classical way, described by Schrödinger's equation. And that behavior, statistically, resembles some elements of the behavior of classical waves. But they are NOT waves. And BTW: all classical waves (that doesn't include electromagnetic waves – those are not classical) are NOT specific individual physical objects. A wave is a specific form of a substance, described by means of a specific mathematical object called "a field".
A field is a mathematical description of a state of a substance distributed over a large region of space. A substance has structure and is composed of a vast number of small and usually identical "blocks" (atoms, molecules, balls and springs). Thinking about a (classical) wave as one undivided large object is simply wrong. Someone should tell this to all those writers who call an electron a wave. I wrote a little bit more on this in my new piece: On a Definition of Science.
evolution equation

An evolution equation is an equation of the form

\partial_t f = L f

where \partial_t represents the partial derivative with respect to time, L is a differential operator and f is usually assumed to be an element of some topological vector space. Examples are the Schrödinger equation and the Fokker-Planck equation. These equations describe the time evolution of a physical system, hence the name.

Revised on April 20, 2010 22:12:33 by Urs Schreiber
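A concrete instance of an evolution equation is the heat equation, where L is the second spatial derivative. As a minimal sketch (not a production solver), it can be integrated by explicit Euler time-stepping on a periodic grid; the grid sizes and time step below are assumptions chosen to satisfy the usual stability bound dt < dx²/2.

```python
import numpy as np

# Evolution equation  d/dt f = L f  with  L = d^2/dx^2  (the heat equation),
# integrated by explicit Euler on a periodic grid -- a minimal sketch.
N, dx, dt = 128, 0.1, 0.004            # dt < dx^2/2 keeps the scheme stable
x = np.arange(N) * dx
f = np.exp(-(x - N * dx / 2) ** 2)     # initial bump
total0 = f.sum()                       # total "heat", conserved by this scheme

for _ in range(500):
    lap = (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2   # discrete L f
    f = f + dt * lap                                          # f <- f + dt * (L f)

# Diffusion spreads the bump: the peak drops while the total is conserved.
print(f.max(), f.sum())
```

The same pattern — state vector plus an operator applied each step — carries over to other evolution equations; for the Schrödinger equation the right-hand side would be multiplied by -i/ħ and f would be complex.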
The Full Wiki — More info on History of classical mechanics

History of classical mechanics: Quiz

Question 1: Newton and most of his contemporaries, with the notable exception of ________, hoped that classical mechanics would be able to explain all entities, including (in the form of geometric optics) light.
Christiaan Huygens / Blaise Pascal / Isaac Newton / Gottfried Leibniz

Question 2: From ________'s heliocentric hypothesis Galileo believed the Earth was just the same as any other planet.
Heliocentrism / Polish–Lithuanian Commonwealth / Johannes Kepler / Nicolaus Copernicus

Question 3: It wasn't until ________'s development of the telescope and his observations that it became clear that the heavens were not made from a perfect, unchanging substance.
Isaac Newton / Scientific revolution / Scientific method / Galileo Galilei

Question 4: [1] Early yet incomplete theories pertaining to mechanics were also discovered by several other Muslim physicists during the ________.
Early Middle Ages / High Middle Ages / Middle Ages / Late Middle Ages

Question 5: Newton also developed the ________ which is necessary to perform the mathematical calculations involved in classical mechanics.
Calculus / Derivative / Integral / Differential calculus

Question 6: When combined with classical thermodynamics, classical mechanics leads to the ________ in which entropy is not a well-defined quantity.
Identical particles / Statistical mechanics / Ideal gas / Gibbs paradox

Question 7: Similarly, the different behaviour of classical ________ and classical mechanics under velocity transformations led to the theory of relativity.
Electromagnetism / Magnetic field / Classical electromagnetism / Maxwell's equations

Question 8: ________ extended Newton's laws of motion from particles to rigid bodies with two additional laws.
Isaac Newton / Pierre-Simon Laplace / Leonhard Euler / Joseph Louis Lagrange

Question 9: The effort at resolving these problems led to the development of ________.
Quantum mechanics / Introduction to quantum mechanics / Wave–particle duality / Schrödinger equation

Question 10: He led to the conclusion that in a ________ there is no reason for a body to naturally move to one point rather than any other, and so a body in a vacuum will either stay at rest or move indefinitely if put in motion.
Universe / Vacuum / Vacuum pump / Outer space
Introduction to quantum mechanics From Wikipedia, the free encyclopedia This article is a non-technical introduction to the subject. For the main encyclopedia article, see Quantum mechanics. The word "quantum" in this sense means the minimum amount of any physical entity involved in an interaction. Certain characteristics of matter can take only discrete values. Light behaves in some respects like particles and in other respects like waves. Matter—particles such as electrons and atoms—exhibits wavelike behaviour too. Some light sources, including neon lights, give off only certain discrete frequencies of light. Quantum mechanics shows that light and all other forms of electromagnetic radiation comes in discrete units, called photons, and predicts its energies, colours, and spectral intensities. Some aspects of quantum mechanics can seem counterintuitive or even paradoxical, because they describe behaviour quite different from that seen at larger length scales. In the words of Richard Feynman, quantum mechanics deals with "nature as She is – absurd."[2] For example, the uncertainty principle of quantum mechanics means that the more closely one pins down one measurement (such as the position of a particle), the less precise another measurement pertaining to the same particle (such as its momentum) must become. The first quantum theory: Max Planck and black-body radiation Hot metalwork. The yellow-orange glow is the visible part of the thermal radiation emitted due to the high temperature. Everything else in the picture is glowing with thermal radiation as well, but less brightly and at longer wavelengths than the human eye can detect. A far-infrared camera can observe this radiation. Thermal radiation is electromagnetic radiation emitted from the surface of an object due to the object's temperature. If an object is heated sufficiently, it starts to emit light at the red end of the spectrum – it is red hot.
Heating it further causes the colour to change from red to yellow to white to blue, as light at shorter wavelengths (higher frequencies) begins to be emitted. It turns out that a perfect emitter is also a perfect absorber. When it is cold, such an object looks perfectly black, because it absorbs all the light that falls on it and emits none. Consequently, an ideal thermal emitter is known as a black body, and the radiation it emits is called black-body radiation. In the late 19th century, thermal radiation had been fairly well-characterized experimentally.[note 1] However, classical physics was unable to explain the relationship between temperatures and predominant frequencies of radiation. Physicists were searching for a single theory that explained why they got the experimental results that they did. Predictions of the amount of thermal radiation of different frequencies emitted by a body. Correct values predicted by Planck's law (green) contrasted against the classical values (Rayleigh–Jeans law, red and Wien approximation, blue). The first model that was able to explain the full spectrum of thermal radiation was put forward by Max Planck in 1900.[3] He came up with a mathematical model in which the thermal radiation was in equilibrium with a set of harmonic oscillators. To reproduce the experimental results he had to assume that each oscillator produced an integer number of units of energy at its single characteristic frequency, rather than being able to emit any arbitrary amount of energy. In other words, the energy of each oscillator was "quantized."[note 2] The quantum of energy for each oscillator, according to Planck, was proportional to the frequency of the oscillator; the constant of proportionality is now known as the Planck constant. 
The Planck constant, usually written as h, has the value 6.63×10−34 J s, and so the energy E of an oscillator of frequency f is given by E = nhf, where n = 1, 2, 3, …[4] Planck's law was the first quantum theory in physics, and Planck won the Nobel Prize in 1918 "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta."[5] At the time, however, Planck's view was that quantization was purely a mathematical trick, rather than (as we now believe) a fundamental change in our understanding of the world.[6] Photons: the quantisation of light Albert Einstein in around 1905 In 1905, Albert Einstein took an extra step. He suggested that quantisation was not just a mathematical trick: the energy in a beam of light occurs in individual packets, which are now called photons.[7] The energy of a single photon is given by its frequency multiplied by Planck's constant: E = hf. For centuries, scientists had debated between two possible theories of light: was it a wave or did it instead comprise a stream of tiny particles? By the 19th century, the debate was generally considered to have been settled in favour of the wave theory, as it was able to explain observed effects such as refraction, diffraction and polarization. James Clerk Maxwell had shown that electricity, magnetism and light are all manifestations of the same phenomenon: the electromagnetic field. Maxwell's equations, which are the complete set of laws of classical electromagnetism, describe light as waves: a combination of oscillating electric and magnetic fields. Because of the preponderance of evidence in favour of the wave theory, Einstein's ideas were met initially with great skepticism. Eventually, however, the photon model became favoured; one of the most significant pieces of evidence in its favour was its ability to explain several puzzling properties of the photoelectric effect, described in the following section.
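The relation E = hf is easy to evaluate numerically. The sketch below uses standard SI values for the constants; the sample wavelengths are assumptions chosen to span red, green and ultraviolet light.

```python
# Photon energy E = h*f, evaluated with standard SI constants.
h = 6.62607015e-34        # Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s
eV = 1.602176634e-19      # joules per electron-volt

def photon_energy_eV(wavelength_m):
    """Energy of one photon of the given wavelength, in eV."""
    f = c / wavelength_m          # frequency from wavelength
    return h * f / eV             # E = h*f, converted from J to eV

# Red (~700 nm), green (~530 nm) and ultraviolet (~300 nm) photons:
for lam in (700e-9, 530e-9, 300e-9):
    print(f"{lam * 1e9:.0f} nm -> {photon_energy_eV(lam):.2f} eV")
```

The ultraviolet photon carries more than twice the energy of the red one — the quantitative basis for the sunburn discussion that follows.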
Nonetheless, the wave analogy remained indispensable for helping to understand other characteristics of light, such as diffraction. The photoelectric effect Light (red arrows, left) is shone upon a metal. If the light is of sufficient frequency (i.e. sufficient energy), electrons are ejected (blue arrows, right). Main article: Photoelectric effect In 1887 Heinrich Hertz observed that when light with sufficient frequency hits a metallic surface, the surface emits electrons.[8] In 1902 Philipp Lenard discovered that the maximum possible energy of an ejected electron is related to the frequency of the light, not to its intensity; if the frequency is too low, no electrons are ejected regardless of the intensity. The lowest frequency of light that can cause electrons to be emitted, called the threshold frequency, is different for different metals. This observation is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the radiation.[9]:24 Einstein explained the effect by postulating that a beam of light is a stream of particles (photons), and that if the beam is of frequency f then each photon has an energy equal to hf.[8] An electron is likely to be struck only by a single photon, which imparts at most an energy hf to the electron.[8] Therefore, the intensity of the beam has no effect;[note 3] only its frequency determines the maximum energy that can be imparted to the electron.[8] To explain the threshold effect, Einstein argued that it takes a certain amount of energy, called the work function, denoted by φ, to remove an electron from the metal.[8] This amount of energy is different for each metal. If the energy of the photon is less than the work function then it does not carry sufficient energy to remove the electron from the metal. The threshold frequency, f0, is the frequency of a photon whose energy is equal to the work function: φ = hf0.
If f is greater than f0, the energy hf is enough to remove an electron. The ejected electron has a kinetic energy EK which is, at most, equal to the photon's energy minus the energy needed to dislodge the electron from the metal: EK = hf − φ = h(f − f0). Einstein's description of light as being composed of particles extended Planck's notion of quantised energy: a single photon of a given frequency f delivers an invariant amount of energy hf. In other words, individual photons can deliver more or less energy, but only depending on their frequencies. However, although the photon is a particle it was still being described as having the wave-like property of frequency. Once again, the particle account of light was being "compromised".[10][note 4] Consequences of the light being quantised The relationship between the frequency of electromagnetic radiation and the energy of each individual photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light will deliver a high amount of energy – enough to contribute to cellular damage such as occurs in a sunburn. A photon of infrared light will deliver a lower amount of energy – only enough to warm one's skin. So an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn. If each individual photon had identical energy, it would not be correct to talk of a "high energy" photon. Light of high frequency could deliver more energy only because of flooding a surface with more photons arriving per second. Light of low frequency could deliver less energy only if it delivered fewer photons per second. If it were true that all photons carry the same energy, then if you doubled the rate of photon delivery, you would double the number of energy units arriving each second regardless of the frequency of the incident light.
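The two photoelectric relations, φ = hf0 and EK = h(f − f0), can be checked with a few lines of code. The work function value below is a commonly quoted textbook figure for sodium, used here only for illustration; the sample frequencies are likewise assumptions.

```python
# Photoelectric sketch: phi = h*f0 and E_K = h*f - phi.
h  = 6.62607015e-34       # Planck constant, J*s
eV = 1.602176634e-19      # joules per electron-volt

phi = 2.28 * eV           # work function; a textbook value for sodium (assumed)

f0 = phi / h              # threshold frequency: below this, no electrons at all
print(f"threshold frequency ~ {f0:.3g} Hz")

def ejected_KE_eV(f):
    """Max kinetic energy of an ejected electron, in eV (None below threshold)."""
    EK = h * f - phi
    return EK / eV if EK > 0 else None

print(ejected_KE_eV(4e14))   # red-ish light: below threshold, nothing is ejected
print(ejected_KE_eV(1e15))   # ultraviolet: electrons emerge with leftover energy
```

Note that intensity never enters: making the 4×10¹⁴ Hz beam brighter only delivers more sub-threshold photons, none of which can free an electron.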
Einstein rejected that wave-dependent classical approach in favour of a particle-based analysis where the energy of the particle must be absolute and varies with frequency in discrete steps (i.e. is quantised). All photons of the same frequency have identical energy, and all photons of different frequencies have proportionally different energies. In nature, single photons are rarely encountered. The sun emits photons continuously at all electromagnetic frequencies, so they appear to propagate as a continuous wave, not as discrete units. The emission sources available to Hertz and Lenard in the 19th century shared that characteristic. A star that radiates red light, or a piece of iron in a forge that glows red, may both be said to contain a great deal of energy. It might be surmised that adding continuously to the total energy of some radiating body would make it radiate red light, orange light, yellow light, green light, blue light, violet light, and so on in that order. But that is not so, as larger stars and larger pieces of iron in a forge would then necessarily glow with colours more toward the violet end of the spectrum. To change the colour of such a radiating body it is necessary to change its temperature. An increase in temperature changes the quanta of energy available to excite individual atoms to higher levels, enabling them to emit photons of higher frequencies. The total energy emitted per unit of time by a star (or by a piece of iron in a forge) depends on both the number of photons emitted per unit of time, as well as the amount of energy carried by each of the photons involved. In other words, the characteristic frequency of a radiating body is dependent on its temperature. When physicists were looking only at beams of light containing huge numbers of individual and virtually indistinguishable photons, it was difficult to understand the importance of the energy levels of individual photons.
So when physicists first discovered devices exhibiting the photoelectric effect, they initially expected that a higher intensity of light would produce a higher voltage from the photoelectric device. Instead, they discovered that strong beams of light toward the red end of the spectrum might produce no electrical potential at all, and that weak beams of light toward the violet end of the spectrum would produce higher and higher voltages. Einstein's idea that individual units of light may contain different amounts of energy, depending on their frequency, made it possible to explain such experimental results that had hitherto seemed quite counterintuitive. Although the energy imparted by photons is invariant at any given frequency, the initial energy state of the electrons in a photoelectric device prior to absorption of light is not necessarily uniform. Anomalous results may occur in the case of individual electrons. For instance, an electron that was already excited above the equilibrium level of the photoelectric device might be ejected when it absorbed uncharacteristically low frequency illumination. Statistically, however, the characteristic behaviour of a photoelectric device will reflect the behaviour of the vast majority of its electrons, which will be at their equilibrium level. This point is helpful in comprehending the distinction between the study of individual particles in quantum dynamics and the study of massed particles in classical physics. The quantisation of matter: the Bohr model of the atom By the dawn of the 20th century, evidence required a model of the atom with a diffuse cloud of negatively-charged electrons surrounding a small, dense, positively-charged nucleus.
These properties suggested a model in which the electrons circle around the nucleus like planets orbiting a sun.[note 5] However, it was also known that the atom in this model would be unstable: according to classical theory orbiting electrons are undergoing centripetal acceleration, and should therefore give off electromagnetic radiation, the loss of energy also causing them to spiral toward the nucleus, colliding with it in a fraction of a second. A second, related, puzzle was the emission spectrum of atoms. When a gas is heated, it gives off light only at discrete frequencies. For example, the visible light given off by hydrogen consists of four different colours, as shown in the picture below. The intensity of the light at different frequencies is also different. By contrast, white light consists of a continuous emission across the whole range of visible frequencies. By the end of the nineteenth century, a simple rule had been found which showed how the frequencies of the different lines were related to each other, though without explaining why this was, or making any prediction about the intensities. The formula also predicted some additional spectral lines in ultraviolet and infrared light which had not been observed at the time. These lines were later observed experimentally, raising confidence in the value of the formula. Emission spectrum of hydrogen. When excited, hydrogen gas gives off light in four distinct colours (spectral lines) in the visible spectrum, as well as a number of lines in the infrared and ultraviolet. The Bohr model of the atom, showing an electron transitioning from one orbit to another by emitting a photon. 
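The "simple rule" for hydrogen's line frequencies mentioned above is the Rydberg formula, 1/λ = R(1/n1² − 1/n2²). As a small sketch, it reproduces the four visible hydrogen colours (the Balmer series, n1 = 2); the constant below is the standard Rydberg constant for an infinitely heavy nucleus.

```python
# The rule relating hydrogen's spectral lines: the Rydberg formula
#   1/lambda = R * (1/n1^2 - 1/n2^2)
R = 1.0973731568e7        # Rydberg constant, 1/m

def line_nm(n1, n2):
    """Wavelength (nm) of the hydrogen line for a jump from level n2 to n1."""
    inv = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv

# Balmer series (n1 = 2): the four visible hydrogen lines
for n2 in (3, 4, 5, 6):
    print(f"n={n2} -> n=2 : {line_nm(2, n2):.1f} nm")
```

The same formula with n1 = 1 or n1 = 3 gives the ultraviolet (Lyman) and infrared (Paschen) lines that, as the text notes, were predicted before they were observed.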
In 1913 Niels Bohr proposed a new model of the atom that included quantized electron orbits: electrons still orbit the nucleus much as planets orbit around the sun, but they are only permitted to inhabit certain orbits, not to orbit at any distance.[13] When an atom emitted (or absorbed) energy, the electron did not move in a continuous trajectory from one orbit around the nucleus to another, as might be expected classically. Instead, the electron would jump instantaneously from one orbit to another, giving off the emitted light in the form of a photon.[14] The possible energies of photons given off by each element were determined by the differences in energy between the orbits, and so the emission spectrum for each element would contain a number of lines.[15] Niels Bohr as a young man Starting from only one simple assumption about the rule that the orbits must obey, the Bohr model was able to relate the observed spectral lines in the emission spectrum of hydrogen to previously-known constants. In Bohr's model the electron simply wasn't allowed to continuously emit energy and crash into the nucleus: once it was in the closest permitted orbit, it was stable forever. Bohr's model didn't explain why the orbits should be quantised in that way, and it was also unable to make accurate predictions for atoms with more than one electron, or to explain why some spectral lines are brighter than others. Although some of the fundamental assumptions of the Bohr model were soon found to be wrong, the key result that the discrete lines in emission spectra are due to some property of the electrons in atoms being quantised is correct. The way that the electrons actually behave is strikingly different from Bohr's atom, and from what we see in the world of our everyday experience; this modern quantum mechanical model of the atom is discussed below. Wave–particle duality Louis de Broglie in 1929.
De Broglie won the Nobel Prize in Physics for his prediction that matter acts as a wave, made in his 1924 PhD thesis. Just as light has both wave-like and particle-like properties, matter also has wave-like properties.[16] Matter behaving as a wave was first demonstrated experimentally for electrons: a beam of electrons can exhibit diffraction, just like a beam of light or a water wave.[note 8] Similar wave-like phenomena were later shown for atoms and even small molecules. The wavelength, λ, associated with any object is related to its momentum, p, through the Planck constant h:[17][18] p = h/λ. The relationship, called the de Broglie hypothesis, holds for all types of matter: all matter exhibits properties of both particles and waves. The concept of wave–particle duality says that neither the classical concept of "particle" nor of "wave" can fully describe the behaviour of quantum-scale objects, either photons or matter. Wave–particle duality is an example of the principle of complementarity in quantum physics. An elegant example of wave–particle duality, the double slit experiment, is discussed in the section below. The double-slit experiment The diffraction pattern produced when light is shone through one slit (top) and the interference pattern produced by two slits (bottom). The much more complex pattern from two slits, with its small-scale interference fringes, demonstrates the wave-like propagation of light. In the double-slit experiment as originally performed by Thomas Young in 1803, and later refined by Augustin Fresnel, a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern of light and dark bands on a screen. If one of the slits is covered up, one might naively expect that the intensity of the fringes due to interference would be halved everywhere. In fact, a much simpler pattern is seen, a simple diffraction pattern.
Closing one slit results in a much simpler pattern diametrically opposite the open slit. Exactly the same behaviour can be demonstrated in water waves, and so the double-slit experiment was seen as a demonstration of the wave nature of light. The double slit experiment for a classical particle, a wave, and a quantum particle demonstrating wave-particle duality The double-slit experiment has also been performed using electrons, atoms, and even molecules, and the same type of interference pattern is seen. Thus it has been demonstrated that all matter possesses both particle and wave characteristics. Even if the source intensity is turned down so that only one particle (e.g. photon or electron) is passing through the apparatus at a time, the same interference pattern develops over time. The quantum particle acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum particle will act as a wave when we do an experiment to measure its wave-like properties, and like a particle when we do an experiment to measure its particle-like properties. Where on the detector screen any individual particle shows up will be the result of an entirely random process. However, the distribution pattern of many individual particles will mimic the diffraction pattern produced by waves. Application to the Bohr model De Broglie expanded the Bohr model of the atom by showing that an electron in orbit around a nucleus could be thought of as having wave-like properties. In particular, an electron will be observed only in situations that permit a standing wave around a nucleus. An example of a standing wave is a violin string, which is fixed at both ends and can be made to vibrate. The waves created by a stringed instrument appear to oscillate in place, moving from crest to trough in an up-and-down motion.
The wavelength of a standing wave is related to the length of the vibrating object and the boundary conditions. For example, because the violin string is fixed at both ends, it can carry standing waves of wavelengths 2l/n, where l is the length and n is a positive integer. De Broglie suggested that the allowed electron orbits were those for which the circumference of the orbit would be an integer number of wavelengths. The electron's wavelength therefore determines that only Bohr orbits of certain distances from the nucleus are possible. In turn, at any distance from the nucleus smaller than a certain value it would be impossible to establish an orbit. The minimum possible distance from the nucleus is called the Bohr radius.[19] De Broglie's treatment of quantum events served as a starting point for Schrödinger when he set out to construct a wave equation to describe quantum theoretical events. Development of modern quantum mechanics When Bohr assigned his younger colleagues the task of finding an explanation for the intensities of the different lines in the hydrogen emission spectrum, Werner Heisenberg moved forward from a recent success in explaining a simpler problem.
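De Broglie's standing-wave condition above can be made quantitative. Combining nλ = 2πr with λ = h/(mv) and the Coulomb force balance mv²/r = e²/(4πε0 r²) yields allowed radii r_n = n²·a0, where a0 is the Bohr radius. A short sketch using standard SI constants:

```python
import math

# De Broglie's condition n*lambda = 2*pi*r, with lambda = h/(m*v), plus the
# Coulomb force balance m*v^2/r = e^2/(4*pi*eps0*r^2), gives the allowed
# Bohr radii r_n = n^2 * a0.  Standard SI constants:
h    = 6.62607015e-34        # Planck constant, J*s
hbar = h / (2 * math.pi)     # reduced Planck constant
me   = 9.1093837015e-31      # electron mass, kg
e    = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m

a0 = 4 * math.pi * eps0 * hbar**2 / (me * e**2)   # Bohr radius (n = 1)
for n in (1, 2, 3):
    print(f"n={n}: r = {n * n * a0:.3e} m")       # orbits grow as n^2
```

The n = 1 value, about 5.3×10⁻¹¹ m, is the minimum distance referred to in the text; no standing wave fits inside it.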
In 1925, by means of a series of mathematical analogies, he wrote out the quantum mechanical analogue for the classical computation of intensities.[20] Shortly afterwards, Heisenberg's colleague Max Born realised that Heisenberg's method of calculating the probabilities for transitions between the different energy levels could best be expressed by using the mathematical concept of matrices.[note 9] Erwin Schrödinger, about 1933, age 46 Building on de Broglie's hypothesis, which he had learned of in 1925, Erwin Schrödinger during the first half of 1926 successfully described the behaviour of a quantum mechanical wave.[21] The mathematical model, called the Schrödinger equation after its creator, is central to quantum mechanics, defines the permitted stationary states of a quantum system, and describes how the quantum state of a physical system changes in time.[22] The wave itself is described by a mathematical function known as a "wave function", and is usually represented by the Greek letter ψ ("psi"). In the paper that introduced Schrödinger's cat, he says that the wave function provides the "means for predicting probability of measurement results", and that it therefore provides "future expectation[s], somewhat as laid down in a catalog."[23] For more information on Schrödinger's theory, see Schrödinger equation. Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a classical wave, moving in a well of electrical potential created by the proton. This calculation accurately reproduced the energy levels of the Bohr model. In May 1926, Schrödinger proved that Heisenberg's matrix mechanics and his own wave mechanics made the same predictions about the properties and behaviour of the electron; mathematically, the two theories were identical. Yet the two men disagreed on the interpretation of their mutual theory.
For instance, Heisenberg saw no problem in the theoretical prediction of instantaneous transitions of electrons between orbits in an atom, but Schrödinger hoped that a theory based on continuous wave-like properties could avoid what he called (as paraphrased by Wilhelm Wien[24]) "this nonsense about quantum jumps." Copenhagen interpretation The Niels Bohr Institute in Copenhagen, which served as a focal point for researchers into quantum mechanics and related subjects in the 1920s and 1930s. Most of the world's best known theoretical physicists spent time there, developing what became known as the Copenhagen interpretation of quantum mechanics. Bohr, Heisenberg and others tried to explain what these experimental results and mathematical models really mean. Their description, known as the Copenhagen interpretation of quantum mechanics, aimed to describe the nature of reality that was being probed by the measurements and described by the mathematical formulations of quantum mechanics. The main principles of the Copenhagen interpretation are:

1. A system is completely described by a wave function, ψ. (Heisenberg)
2. How ψ changes over time is given by the Schrödinger equation.
3. The description of nature is essentially probabilistic. The probability of an event – for example, where on the screen a particle will show up in the two slit experiment – is related to the square of the absolute value of the amplitude of its wave function. (Born rule, due to Max Born, which gives a physical meaning to the wave function in the Copenhagen interpretation: the probability amplitude)
5. Matter, like energy, exhibits a wave–particle duality. An experiment can demonstrate the particle-like properties of matter, or its wave-like properties; but not both at the same time. (Complementarity principle due to Bohr)
7.
The quantum mechanical description of large systems should closely approximate the classical description. (Correspondence principle of Bohr and Heisenberg)

Various consequences of these principles are discussed in more detail in the following subsections.

Uncertainty principle[edit]

Main article: Uncertainty principle

Werner Heisenberg at the age of 26. Heisenberg won the Nobel Prize in Physics in 1932 for the work that he did at around this time.[25]

Suppose that we want to measure the position and speed of an object – for example, a car going through a radar speed trap. We assume that the car has a definite position and speed at a particular moment in time, and how accurately we can measure these values depends on the quality of our measuring equipment – if we improve the precision of our measuring equipment, we will get a result that is closer to the true value. In particular, we would assume that how precisely we measure the speed of the car does not affect its position, and vice versa.

In 1927, Heisenberg proved that these assumptions are not correct.[26] Quantum mechanics shows that certain pairs of physical properties, like position and speed, cannot both be known to arbitrary precision: the more precisely one property is known, the less precisely the other can be known. This statement is known as the uncertainty principle. The uncertainty principle isn't a statement about the accuracy of our measuring equipment, but about the nature of the system itself – our assumption that the car had a definite position and speed was incorrect. On a scale of cars and people, these uncertainties are too small to notice, but when dealing with atoms and electrons they become critical.[27]

Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light.
In measuring the electron's position, the higher the frequency of the photon, the more accurate is the measurement of the position of the impact, but the greater is the disturbance of the electron, which absorbs a random amount of energy. This renders the measurement of its momentum increasingly uncertain, for one is necessarily measuring its post-impact disturbed momentum, from the collision products, not its original momentum. With a photon of lower frequency the disturbance – hence uncertainty – in the momentum is less, but so is the accuracy of the measurement of the position of the impact.[28]

The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to Planck's constant.

Wave function collapse[edit]

Wave function collapse is a forced expression for whatever just happened when it becomes appropriate to replace the description of an uncertain state of a system by a description of the system in a definite state. Explanations for the nature of the process of becoming certain are controversial. At any time before a photon "shows up" on a detection screen, it can only be described by a set of probabilities for where it might show up. When it does show up, for instance in the CCD of an electronic camera, the time and the space where it interacted with the device are known within very tight limits. However, the photon has disappeared, and the wave function has disappeared with it. In its place, some physical change in the detection screen has appeared, e.g., an exposed spot in a sheet of photographic film, or a change in electric potential in some cell of a CCD.
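The scale argument above – uncertainties too small to notice for cars, critical for electrons – can be put into rough numbers using the relation Δx·Δp ≥ ħ/2. This is an illustrative sketch; the masses and position uncertainties are assumed example values:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

def min_velocity_uncertainty(mass_kg, dx_m):
    """Smallest speed uncertainty compatible with position uncertainty dx,
    from dx * (m * dv) >= hbar / 2."""
    return HBAR / (2 * mass_kg * dx_m)

# A 1000 kg car located to within one micrometre:
car_dv = min_velocity_uncertainty(1000.0, 1e-6)

# An electron (mass 9.109e-31 kg) confined to atomic size (~1e-10 m):
electron_dv = min_velocity_uncertainty(9.109e-31, 1e-10)

print(f"car:      {car_dv:.1e} m/s")       # ~5.3e-32 m/s, utterly unobservable
print(f"electron: {electron_dv:.1e} m/s")  # ~5.8e+05 m/s, hundreds of km/s
```

The car's minimum speed uncertainty is some 37 orders of magnitude smaller than the electron's, which is why quantum indeterminacy never shows up in a radar speed trap.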
Eigenstates and eigenvalues[edit]

For a more detailed introduction to this subject, see: Introduction to eigenstates

Because of the uncertainty principle, statements about both the position and momentum of particles can only assign a probability that the position or momentum will have some numerical value. Therefore, it is necessary to formulate clearly the difference between the state of something that is indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned down" in some respect, it is said to possess an eigenstate.

The Pauli exclusion principle[edit]

In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating that "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers."[29]

A year later, Uhlenbeck and Goudsmit identified Pauli's new degree of freedom with a property called spin. The idea, originating with Ralph Kronig, was that electrons behave as if they rotate, or "spin", about an axis. Spin would account for the missing magnetic moment, and allow two electrons in the same orbital to occupy distinct quantum states if they "spun" in opposite directions, thus satisfying the exclusion principle. The quantum number represented the sense (positive or negative) of spin.

Application to the hydrogen atom[edit]

Main article: Atomic orbital model

Bohr's model of the atom was essentially a planetary one, with the electrons orbiting around the nuclear "sun."
However, the uncertainty principle states that an electron cannot simultaneously have an exact location and velocity in the way that a planet does. Instead of classical orbits, electrons are said to inhabit atomic orbitals. An orbital is the "cloud" of possible locations in which an electron might be found, a distribution of probabilities rather than a precise location.[29] Each orbital is three-dimensional, rather than the two-dimensional orbit, and is often depicted as a three-dimensional region within which there is a 95 percent probability of finding the electron.[30]

Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a wave, represented by the "wave function" Ψ, in an electric potential well, V, created by the proton. The solutions to Schrödinger's equation are distributions of probabilities for electron positions and locations. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately match the energy levels of the Bohr model.

Within Schrödinger's picture, each electron has four properties:

1. The energy level of its orbital;
2. The shape of its orbital;
3. The orientation of its orbital in space;
4. The "spin" of the electron.

The collective name for these properties is the quantum state of the electron. The quantum state can be described by giving a number to each of these properties; these are known as the electron's quantum numbers. The quantum state of the electron is described by its wave function. The Pauli exclusion principle demands that no two electrons within an atom may have the same values of all four numbers.

The shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colours show the phase of the wave function.

The first property describing the orbital is the principal quantum number, n, which is the same as in Bohr's model. n denotes the energy level of each orbital.
The possible values for n are integers: n = 1, 2, 3, …

The next quantum number, the azimuthal quantum number, denoted l, describes the shape of the orbital. The shape is a consequence of the angular momentum of the orbital. The angular momentum represents the resistance of a spinning object to speeding up or slowing down under the influence of external force. The azimuthal quantum number represents the orbital angular momentum of an electron around its nucleus. The possible values for l are integers from 0 to n − 1: l = 0, 1, …, n − 1.

The shape of each orbital has its own letter as well. The first shape is denoted by the letter s (a mnemonic being "sphere"). The next shape is denoted by the letter p and has the form of a dumbbell. The other orbitals have more complicated shapes (see atomic orbital), and are denoted by the letters d, f, and g.

The third quantum number, the magnetic quantum number, describes the magnetic moment of the electron, and is denoted by ml (or simply m). The possible values for ml are integers from −l to l: ml = −l, −(l − 1), …, 0, 1, …, l. The magnetic quantum number measures the component of the angular momentum in a particular direction. The choice of direction is arbitrary; conventionally the z-direction is chosen.

The fourth quantum number, the spin quantum number (pertaining to the "orientation" of the electron's spin) is denoted ms, with values +1/2 or −1/2.

The chemist Linus Pauling wrote, by way of example: "In the case of a helium atom with two electrons in the 1s orbital, the Pauli Exclusion Principle requires that the two electrons differ in the value of one quantum number. Their values of n, l, and ml are the same; moreover, they have the same spin, s = 1/2.
Accordingly they must differ in the value of ms, which can have the value of +1/2 for one electron and −1/2 for the other."[29]

It is the underlying structure and symmetry of atomic orbitals, and the way that electrons fill them, that leads to the organisation of the periodic table. The way the atomic orbitals on different atoms combine to form molecular orbitals determines the structure and strength of chemical bonds between atoms.

Dirac wave equation[edit]

Main article: Dirac equation

Paul Dirac (1902–1984)

Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of an antielectron and of a dynamical vacuum. This led to the many-particle quantum field theory.

Quantum entanglement[edit]

Main article: Quantum entanglement

Superposition of two quantum characteristics, and two resolution possibilities.

The Pauli exclusion principle says that two electrons in one system cannot be in the same state. Nature leaves open the possibility, however, that two electrons can have both states "superimposed" over each of them. Recall that the wave functions that emerge simultaneously from the double slits arrive at the detection screen in a state of superposition. Nothing is certain until the superimposed waveforms "collapse". At that instant an electron shows up somewhere in accordance with the probability that is the square of the absolute value of the sum of the complex-valued amplitudes of the two superimposed waveforms. The situation there is already very abstract.

A concrete way of thinking about entangled photons, photons in which two contrary states are superimposed on each of them in the same event, is as follows: Imagine that the superposition of a state that can be mentally labeled as blue and another state that can be mentally labeled as red will then appear (in imagination, of course) as a purple state. Two photons are produced as the result of the same atomic event.
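Stepping back to the quantum numbers and the exclusion principle above: the shell capacities behind the periodic table's organisation can be sketched by enumerating the allowed (n, l, ml, ms) combinations. This is an illustrative sketch, not part of the original text:

```python
from fractions import Fraction

def allowed_states(n):
    """All (n, l, ml, ms) combinations permitted for principal quantum number n."""
    states = []
    for l in range(n):                                    # l = 0, 1, ..., n - 1
        for ml in range(-l, l + 1):                       # ml = -l, ..., l
            for ms in (Fraction(1, 2), Fraction(-1, 2)):  # spin up / spin down
                states.append((n, l, ml, ms))
    return states

# The Pauli exclusion principle allows one electron per distinct state,
# so shell n holds 2 * n**2 electrons: 2, 8, 18, ...
for n in (1, 2, 3):
    print(n, len(allowed_states(n)))
```

For n = 1 this reproduces Pauling's helium example: exactly two states, differing only in ms.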
Perhaps they are produced by the excitation of a crystal that characteristically absorbs a photon of a certain frequency and emits two photons of half the original frequency. So the two photons come out "purple." If the experimenter now performs some experiment that will determine whether one of the photons is either blue or red, then that experiment changes the photon involved from one having a superposition of "blue" and "red" characteristics to a photon that has only one of those characteristics.

The problem that Einstein had with such an imagined situation was that if one of these photons had been kept bouncing between mirrors in a laboratory on earth, and the other one had traveled halfway to the nearest star, when its twin was made to reveal itself as either blue or red, that meant that the distant photon now had to lose its "purple" status too. So whenever it might be investigated after its twin had been measured, it would necessarily show up in the opposite state to whatever its twin had revealed.

In trying to show that quantum mechanics was not a complete theory, Einstein started with the theory's prediction that two or more particles that have interacted in the past can appear strongly correlated when their various properties are later measured. He sought to explain this seeming interaction in a classical way, through their common past, and preferably not by some "spooky action at a distance." The argument is worked out in a famous paper, Einstein, Podolsky, and Rosen (1935; abbreviated EPR), setting out what is now called the EPR paradox. Assuming what is now usually called local realism, EPR attempted to show from quantum theory that a particle has both position and momentum simultaneously, while according to the Copenhagen interpretation, only one of those two properties actually exists and only at the moment that it is being measured.
EPR concluded that quantum theory is incomplete in that it refuses to consider physical properties which objectively exist in nature. (Einstein, Podolsky, & Rosen 1935 is currently Einstein's most cited publication in physics journals.) In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics."[31] The question of whether entanglement is a real condition is still in dispute.[32] The Bell inequalities are the most powerful challenge to Einstein's claims.

Quantum field theory[edit]

Main article: Quantum field theory

This sculpture in Bristol, England – a series of clustering cones – presents the idea of small worlds that Paul Dirac studied to reach his discovery of anti-matter.

The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantise the electromagnetic field – a procedure for constructing a quantum theory starting from a classical theory.

A field in physics is "a region or space in which a given effect (such as magnetism) exists."[33] Other effects that manifest themselves as fields are gravitation and static electricity.[34] In 2008, physicist Richard Hammond wrote that:

"Sometimes we distinguish between quantum mechanics (QM) and quantum field theory (QFT). QM refers to a system in which the number of particles is fixed, and the fields (such as the electromechanical field) are continuous classical entities. QFT ... goes a step further and allows for the creation and annihilation of particles ..."
He added, however, that quantum mechanics is often used to refer to "the entire notion of quantum view."[35]:108

In 1931, Dirac proposed the existence of particles that later became known as anti-matter.[36] Dirac shared the Nobel Prize in Physics for 1933 with Schrödinger, "for the discovery of new productive forms of atomic theory."[37]

Quantum electrodynamics[edit]

Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called "electrodynamics" because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge.

Electric charges are the sources of, and create, electric fields. An electric field is a field which exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and even quarks, among others. As a force is exerted, electric charges move, a current flows, and a magnetic field is produced. The changing magnetic field, in turn, causes electric current (moving electrons). The interacting electric and magnetic fields are called an electromagnetic field. The physical description of interacting charged particles, electrical currents, electrical fields, and magnetic fields is called electromagnetism.

In 1928 Paul Dirac produced a relativistic quantum theory of electromagnetism. This was the progenitor to modern quantum electrodynamics, in that it had essential ingredients of the modern theory. However, the problem of unsolvable infinities developed in this relativistic quantum theory. Years later, renormalization solved this problem. Initially viewed as a suspect, provisional procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in QED and other fields of physics. Also, in the late 1940s Feynman's diagrams depicted all possible interactions pertaining to a given event.
The diagrams showed that the electromagnetic force is carried by the exchange of photons between interacting particles.

An example of a prediction of quantum electrodynamics which has been verified experimentally is the Lamb shift. This refers to an effect whereby the quantum nature of the electromagnetic field causes the energy levels in an atom or ion to deviate slightly from what they would otherwise be. As a result, spectral lines may shift or split.

In the 1960s physicists realized that QED broke down at extremely high energies. From this inconsistency the Standard Model of particle physics was discovered, which remedied the breakdown at higher energies. The Standard Model unifies the electromagnetic and weak interactions into one theory. This is called the electroweak theory.

The physical measurements, equations, and predictions pertinent to quantum mechanics are all consistent and hold a very high level of confirmation. However, the question of what these abstract models say about the underlying nature of the real world has received competing answers.

Applications of quantum mechanics include the laser, the transistor, the electron microscope, and magnetic resonance imaging. A special class of quantum mechanical applications is related to macroscopic quantum phenomena such as superfluid helium and superconductors. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics. In even the simple light switch, quantum tunnelling is absolutely vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives also use quantum tunnelling, to erase their memory cells.[38]

Notes[edit]

1.
^ A number of formulae had been created which were able to describe some of the experimental measurements of thermal radiation: how the wavelength at which the radiation is strongest changes with temperature is given by Wien's displacement law, and the overall power emitted per unit area is given by the Stefan–Boltzmann law. The best theoretical explanation of the experimental results was the Rayleigh–Jeans law, which agrees with experimental results well at large wavelengths (or, equivalently, low frequencies) but strongly disagrees at short wavelengths (or high frequencies). In fact, at short wavelengths, classical physics predicted that energy will be emitted by a hot body at an infinite rate. This result, which is clearly wrong, is known as the ultraviolet catastrophe.
2. ^ The word "quantum" comes from the Latin word for "how much" (as does "quantity"). Something which is "quantized", like the energy of Planck's harmonic oscillators, can only take specific values. For example, in most countries money is effectively quantized, with the "quantum of money" being the lowest-value coin in circulation. "Mechanics" is the branch of science that deals with the action of forces on objects, so "quantum mechanics" is the part of mechanics that deals with objects for which particular properties are quantized.
3. ^ Actually there can be intensity-dependent effects, but at intensities achievable with non-laser sources these effects are unobservable.
4. ^ Einstein's photoelectric effect equation can be derived and explained without requiring the concept of "photons". That is, the electromagnetic radiation can be treated as a classical electromagnetic wave, as long as the electrons in the material are treated by the laws of quantum mechanics. The results are quantitatively correct for thermal light sources (the sun, incandescent lamps, etc.) both for the rate of electron emission as well as their angular distribution. For more on this point, see [11]
5.
^ The classical model of the atom is called the planetary model, or sometimes the Rutherford model after Ernest Rutherford, who proposed it in 1911, based on the Geiger–Marsden gold foil experiment which first demonstrated the existence of the nucleus.
6. ^ In this case, the energy of the electron is the sum of its kinetic and potential energies. The electron has kinetic energy by virtue of its actual motion around the nucleus, and potential energy because of its electromagnetic interaction with the nucleus.
7. ^ The model can be easily modified to account for the emission spectrum of any system consisting of a nucleus and a single electron (that is, ions such as He+ or O7+ which contain only one electron) but cannot be extended to an atom with two electrons like neutral helium.
8. ^ Electron diffraction was first demonstrated three years after de Broglie published his hypothesis. At the University of Aberdeen, George Thomson passed a beam of electrons through a thin metal film and observed diffraction patterns, as would be predicted by the de Broglie hypothesis. At Bell Labs, Davisson and Germer guided an electron beam through a crystalline grid. De Broglie was awarded the Nobel Prize in Physics in 1929 for his hypothesis; Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.
9. ^ For a somewhat more sophisticated look at how Heisenberg transitioned from the old quantum theory and classical physics to the new quantum mechanics, see Heisenberg's entryway to matrix mechanics.

References[edit]

1. ^ Quantum Mechanics from National Public Radio
2. ^ Feynman, Richard P. (1988). QED: The Strange Theory of Light and Matter (1st Princeton pbk., seventh printing with corrections ed.). Princeton, N.J.: Princeton University Press. p. 10. ISBN 978-0691024172.
3. ^ This result was published (in German) as Planck, Max (1901). "Ueber das Gesetz der Energieverteilung im Normalspectrum". Ann. Phys. 309 (3): 553–63. Bibcode:1901AnP...309..553P.
doi:10.1002/andp.19013090310. English translation: "On the Law of Distribution of Energy in the Normal Spectrum".
4. ^ Francis Weston Sears (1958). Mechanics, Wave Motion, and Heat. Addison-Wesley. p. 537.
5. ^ "The Nobel Prize in Physics 1918". Nobel Foundation. Retrieved 2009-08-01.
6. ^ Kragh, Helge (1 December 2000). "Max Planck: the reluctant revolutionary".
7. ^ Einstein, Albert (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt". Annalen der Physik 17 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607. Translated into English as On a Heuristic Viewpoint Concerning the Production and Transformation of Light. The term "photon" was introduced in 1926.
8. ^ a b c d e Taylor, J. R.; Zafiratos, C. D.; Dubson, M. A. (2004). Modern Physics for Scientists and Engineers. Prentice Hall. pp. 127–9. ISBN 0-13-589789-0.
9. ^ Stephen Hawking, The Universe in a Nutshell, Bantam, 2001.
10. ^ Dicke and Wittke, Introduction to Quantum Mechanics, p. 12
11. ^
12. ^ a b Taylor, J. R.; Zafiratos, C. D.; Dubson, M. A. (2004). Modern Physics for Scientists and Engineers. Prentice Hall. pp. 147–8. ISBN 0-13-589789-0.
13. ^ McEvoy, J. P.; Zarate, O. (2004). Introducing Quantum Theory. Totem Books. pp. 70–89, especially p. 89. ISBN 1-84046-577-8.
14. ^ World Book Encyclopedia, page 6, 2007.
15. ^ Dicke and Wittke, Introduction to Quantum Mechanics, p. 10f.
16. ^ J. P. McEvoy and Oscar Zarate (2004). Introducing Quantum Theory. Totem Books. p. 110f. ISBN 1-84046-577-8.
17. ^ Aczel, Amir D., Entanglement, p. 51f. (Penguin, 2003) ISBN 978-1-5519-2647-6
18. ^ J. P. McEvoy and Oscar Zarate (2004). Introducing Quantum Theory. Totem Books. p. 114. ISBN 1-84046-577-8.
19. ^ Introducing Quantum Theory, p. 87
20. ^ Van der Waerden, B. L. (1967). Sources of Quantum Mechanics (translated from German). Mineola, New York: Dover Publications. pp. 261–276.
"Received July 29, 1925." See Werner Heisenberg's paper, "Quantum-Theoretical Re-interpretation of Kinematic and Mechanical Relations", pp. 261–276.
21. ^ Nobel Prize Organization. "Erwin Schrödinger – Biographical". Retrieved 28 March 2014. "His great discovery, Schrödinger's wave equation, was made at the end of this epoch – during the first half of 1926."
22. ^ "Schrodinger Equation (Physics)," Encyclopædia Britannica
23. ^ Erwin Schrödinger, "The Present Situation in Quantum Mechanics", p. 9. "This translation was originally published in Proceedings of the American Philosophical Society, 124, 323–38, and then appeared as Section I.11 of Part I of Quantum Theory and Measurement (J. A. Wheeler and W. H. Zurek, eds., Princeton University Press, New Jersey 1983). This paper can be downloaded from"
24. ^ W. Moore, Schrödinger: Life and Thought, Cambridge University Press (1989), p. 222. See p. 227 for Schrödinger's own words.
25. ^ Heisenberg's Nobel Prize citation
26. ^ Heisenberg first published his work on the uncertainty principle in the leading German physics journal Zeitschrift für Physik: Heisenberg, W. (1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Z. Phys. 43 (3–4): 172–198. Bibcode:1927ZPhy...43..172H. doi:10.1007/BF01397280.
27. ^ Nobel Prize in Physics presentation speech, 1932
28. ^ "Uncertainty principle," Encyclopædia Britannica
29. ^ a b c Linus Pauling, The Nature of the Chemical Bond, p. 47
30. ^ "Orbital (chemistry and physics)," Encyclopædia Britannica
31. ^ E. Schrödinger, Proceedings of the Cambridge Philosophical Society, 31 (1935), p. 555, says: "When two systems, of which we know the states by their respective representation, enter into a temporary physical interaction due to known forces between them and when after a time of mutual influence the systems separate again, then they can no longer be described as before, viz., by endowing each of them with a representative of its own.
I would not call that one but rather the characteristic trait of quantum mechanics."
32. ^ "Quantum Nonlocality and the Possibility of Superluminal Effects", John G. Cramer
33. ^ "Mechanics," Merriam-Webster Online Dictionary
34. ^ "Field", Encyclopædia Britannica
35. ^ Richard Hammond, The Unknown Universe, New Page Books, 2008. ISBN 978-1-60163-003-2
36. ^ The Physical World website
37. ^ "The Nobel Prize in Physics 1933". Nobel Foundation. Retrieved 2007-11-24.
38. ^ Durrani, Z. A. K.; Ahmed, H. (2008). Vijay Kumar, ed. Nanosilicon. Elsevier. p. 345. ISBN 978-0-08-044528-1.
Psychology Wiki

Eigenvalue, eigenvector and eigenspace

Fig. 1. In this shear mapping of the Mona Lisa, the picture was deformed in such a way that its central vertical axis (red vector) was not modified, but the diagonal vector (blue) has changed direction. Hence the red vector is an eigenvector of the transformation and the blue vector is not. Since the red vector was neither stretched nor compressed, its eigenvalue is 1. All vectors with the same vertical direction – i.e., parallel to this vector – are also eigenvectors, with the same eigenvalue. Together with the zero vector, they form the eigenspace for this eigenvalue.

In mathematics, a vector may be thought of as an arrow. It has a length, called its magnitude, and it points in some particular direction. A linear transformation may be considered to operate on a vector to change it, usually changing both its magnitude and its direction. An eigenvector of a given linear transformation is a vector which is multiplied by a constant called the eigenvalue during that transformation. The direction of the eigenvector is either unchanged by that transformation (for positive eigenvalues) or reversed (for negative eigenvalues).

Euler had also studied the rotational motion of a rigid body and discovered the importance of the principal axes.
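The shear of Fig. 1 can be checked numerically. Below is a minimal sketch in which the shear factor 0.5 is an assumption (the figure does not state it); the point is only that the vertical vector is mapped to itself while the diagonal vector changes direction:

```python
def apply(matrix, v):
    """Multiply a 2x2 matrix (nested lists) by a 2-vector (tuple)."""
    (a, b), (c, d) = matrix
    x, y = v
    return (a * x + b * y, c * x + d * y)

# A shear that preserves the vertical axis, with an assumed factor of 0.5.
shear = [[1.0, 0.0],
         [0.5, 1.0]]

red = (0.0, 1.0)    # the vertical (red) vector of Fig. 1
blue = (1.0, 1.0)   # the diagonal (blue) vector of Fig. 1

print(apply(shear, red))   # (0.0, 1.0): unchanged, an eigenvector with eigenvalue 1
print(apply(shear, blue))  # (1.0, 1.5): direction changed, not an eigenvector
```

Any multiple of the red vector is mapped the same way, which is exactly the eigenspace described in the caption.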
As Lagrange realized, the principal axes are the eigenvectors of the inertia matrix.[1] In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[2] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.[3]

Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[4] Sturm developed Fourier's ideas further and he brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that symmetric matrices have real eigenvalues.[2] This was extended by Hermite in 1855 to what are now called Hermitian matrices.[3] Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[2] and Clebsch found the corresponding result for skew-symmetric matrices.[3] Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.[2]

In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.[5] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[6]

At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[7] He was the first to use the German word eigen to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Helmholtz.
"Eigen" can be translated as "own", "peculiar to", "characteristic" or "individual" – emphasizing how important eigenvalues are to defining the unique nature of a specific transformation. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.[8]

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by Francis and Kublanovskaya in 1961.[9]

Definitions: the eigenvalue equationEdit

See also: Eigenplane

A function A mapping a vector space L into itself is linear if it satisfies the two properties of

additivity: A(\mathbf{x}+\mathbf{y}) = A(\mathbf{x}) + A(\mathbf{y})
homogeneity: A(\alpha \mathbf{x}) = \alpha A(\mathbf{x})

where x and y are any two vectors of the vector space L and α is any real number. Such a function is variously called a linear transformation, linear operator, or linear endomorphism on the space L.

Given a linear transformation A, a non-zero vector x is defined to be an eigenvector of the transformation if it satisfies the eigenvalue equation

A \mathbf{x} = \lambda \mathbf{x}

for some scalar λ. In this situation, the scalar λ is called an eigenvalue of A corresponding to the eigenvector x.

The key equation in this definition is the eigenvalue equation, Ax = λx. Most vectors x will not satisfy such an equation. A typical vector x changes direction when acted on by A, so that Ax is not a multiple of x. This means that only certain special vectors x are eigenvectors, and only certain special numbers λ are eigenvalues. Of course, if A is a multiple of the identity matrix, then no vector changes direction, and all non-zero vectors are eigenvectors. But in the usual case, eigenvectors are few and far between. They are the "normal modes" of the system, and they act independently.[10]

The requirement that the eigenvector be non-zero is imposed because the equation A0 = λ0 holds for every A and every λ.
Since the equation is always trivially satisfied by the zero vector, it is not an interesting case. In contrast, an eigenvalue can be zero in a nontrivial way. An eigenvalue can be, and often is, a complex number. In the definition given above, eigenvectors and eigenvalues do not occur independently: each eigenvector is associated with a specific eigenvalue. For this reason, an eigenvector x and a corresponding eigenvalue λ are often referred to as an eigenpair. One eigenvalue can be associated with several or even infinitely many eigenvectors. Conversely, if an eigenvector is given, the associated eigenvalue is unique: from the equality Ax = λx = λ′x and from x ≠ 0 it follows that λ = λ′.[11]

Fig. 2: The eigenvalue equation.

Geometrically (Fig. 2), the eigenvalue equation means that under the transformation A eigenvectors experience only changes in magnitude and sign; the direction of Ax is the same as that of x. This type of linear transformation is defined as a homothety (dilatation,[12] similarity transformation). The eigenvalue λ is simply the amount of "stretch" or "shrink" to which a vector is subjected when transformed by A. If λ = 1, the vector remains unchanged (unaffected by the transformation). A transformation I under which a vector x remains unchanged, Ix = x, is defined as the identity transformation. If λ = −1, the vector flips to the opposite direction (is rotated by 180°); this is defined as a reflection. If x is an eigenvector of the linear transformation A with eigenvalue λ, then any vector y = αx with α ≠ 0 is also an eigenvector of A with the same eigenvalue: from the homogeneity of the transformation A it follows that Ay = α(Ax) = α(λx) = λ(αx) = λy.
Similarly, using the additivity property of the linear transformation, it can be shown that any non-zero linear combination of eigenvectors with eigenvalue λ has the same eigenvalue λ.[13] Therefore, any non-zero vector in the line through x and the zero vector is an eigenvector with the same eigenvalue as x. Together with the zero vector, those eigenvectors form a subspace of the vector space called an eigenspace. The eigenvectors corresponding to different eigenvalues are linearly independent,[14] meaning, in particular, that in an n-dimensional space the linear transformation A cannot have more than n eigenvectors with different eigenvalues.[15] The vectors of an eigenspace form a linear subspace of L which is invariant (unchanged) under the transformation.[16]

If a basis is defined in the vector space Ln, all vectors can be expressed in terms of components. Vectors can be represented as one-column matrices with n rows, where n is the space dimensionality. Linear transformations can be represented by square matrices: to each linear transformation A of Ln corresponds an n × n square matrix, and conversely, at a given basis, to each n × n square matrix corresponds a linear transformation of Ln. Because of the additivity and homogeneity of the linear transformation, and because the eigenvalue equation itself describes a linear transformation (a homothety), these vector functions can be expressed in matrix form. Thus, in the two-dimensional vector space L2 fitted with the standard basis, the eigenvector equation for a linear transformation A can be written in the following matrix representation:

\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix},

where the juxtaposition of matrices means matrix multiplication. This is equivalent to a set of n linear equations, where n is the number of basis vectors in the basis set.
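The matrix form of the eigenvalue equation can be checked numerically. The following is a minimal sketch assuming NumPy is available; the matrix A is an arbitrary example chosen for illustration, not one taken from the text.

```python
import numpy as np

# Arbitrary example 2x2 matrix (an assumption for illustration).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and unit-length eigenvectors;
# the eigenvectors are the columns of the second return value.
eigvals, eigvecs = np.linalg.eig(A)

# Verify the eigenvalue equation A x = lambda x for each eigenpair.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)
```

For this symmetric example the eigenvalues come out as 3 and 1, with eigenvectors along (1, 1) and (1, −1).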
In these equations, both the eigenvalue λ and the components of x are unknown variables. The eigenvectors of A as defined above are also called right eigenvectors, because they are column vectors that stand on the right side of the matrix A in the eigenvalue equation. If the transposed matrix AT satisfies the eigenvalue equation with some vector x, that is, if ATx = λx, then taking transposes gives λxT = (λx)T = (ATx)T = xTA, or xTA = λxT. The last equation is similar to the eigenvalue equation, but instead of the column vector x it contains its transpose, the row vector xT, which stands on the left side of the matrix A. The eigenvectors that satisfy the eigenvalue equation xTA = λxT are called left eigenvectors; they are row vectors.[17] In many common applications, only right eigenvectors need to be considered, so the unqualified term "eigenvector" can be understood to refer to a right eigenvector. The eigenvalue equations for right and left eigenvectors (Ax = λx and xTA = λxT) have the same eigenvalues λ.[18] An eigenvector is defined to be a principal or dominant eigenvector if it corresponds to the eigenvalue of largest magnitude (for real numbers, largest absolute value). Repeated application of a linear transformation to an arbitrary vector results in a vector proportional (collinear) to the principal eigenvector.[18] The applicability of the eigenvalue equation to general matrix theory extends the use of eigenvectors and eigenvalues beyond transformations in linear vector spaces to all fields of science that use matrices: systems of linear equations, optimization, vector and tensor calculus, all fields of physics that use matrix quantities (particularly quantum physics, relativity, and electrodynamics), as well as many engineering applications.
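The remark that repeated application of a transformation drives an arbitrary vector toward the principal eigenvector is the basis of the power method published by Von Mises (mentioned in the history section). A minimal sketch, assuming NumPy and an arbitrary example matrix:

```python
import numpy as np

def power_iteration(A, n_iter=100):
    """Repeatedly apply A to a start vector; the result converges to the
    principal (dominant) eigenvector when A has a unique dominant eigenvalue."""
    x = np.ones(A.shape[0])
    for _ in range(n_iter):
        x = A @ x
        x /= np.linalg.norm(x)       # renormalize to avoid overflow
    lam = x @ A @ x                  # Rayleigh quotient: eigenvalue estimate
    return lam, x

A = np.array([[2.0, 1.0],            # example matrix (assumption)
              [1.0, 2.0]])
lam, x = power_iteration(A)
# For this matrix the dominant eigenvalue is 3, eigenvector along (1, 1).
```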
Characteristic equation

Main article: Characteristic equation
Main article: Characteristic polynomial

The determination of eigenvalues and eigenvectors is important in virtually all areas of physics and in many engineering problems, such as stress calculations, stability analysis, and oscillations of vibrating systems. It is equivalent to matrix diagonalization, and it is the first step in orthogonalization, finding invariants, optimization (minimization or maximization), analysis of linear systems, and many other common applications. The usual method of finding all eigenvectors and eigenvalues of a system is first to eliminate the unknown components of the eigenvectors, then find the eigenvalues, then plug those back, one by one, into the eigenvalue equation in matrix form and solve that as a system of linear equations to find the components of the eigenvectors. Using the identity transformation Ix = x, where I is the identity matrix, x in the eigenvalue equation can be replaced by Ix to give:

A \mathbf{x} = \lambda I \mathbf{x}

The identity matrix is needed to keep matrices, vectors, and scalars straight; the equation (A − λ)x = 0 is shorter, but mixes them up, since it does not differentiate between matrix, scalar, and vector.[19] The expression on the right-hand side is moved to the left-hand side with a negative sign, leaving 0 on the right-hand side:

A \mathbf{x} - \lambda I \mathbf{x} = 0

The eigenvector x is factored out:

(A - \lambda I) \mathbf{x} = 0

This can be viewed as a linear system of equations in which the coefficient matrix is the expression in the parentheses, the matrix of the unknowns is x, and the right-hand side is zero.
According to Cramer's rule, this system of equations has non-trivial solutions (solutions other than the zero vector) if and only if its determinant vanishes, so the solutions are given by:

\det(A - \lambda I) = 0 \,

This equation is defined as the characteristic equation (less often, secular equation) of A, and the left-hand side is defined as the characteristic polynomial. The eigenvector x and its components are not present in the characteristic equation, so at this stage they are dispensed with; the only unknowns that remain to be calculated are the eigenvalues (the components of the matrix A are given, i.e., known beforehand). For a vector space L2, the transformation A is a 2 × 2 square matrix, and the characteristic equation can be written in the following form:

\begin{vmatrix} a_{11} - \lambda & a_{12}\\a_{21} & a_{22} - \lambda\end{vmatrix} = 0

Expansion of the determinant on the left-hand side gives a characteristic polynomial which is monic (its leading coefficient is 1) and of the second degree, and the characteristic equation is the quadratic equation

\lambda^2 - \lambda (a_{11} + a_{22}) + (a_{11} a_{22} - a_{12} a_{21}) = 0, \,

which has the following solutions (roots):

\lambda_{1,2} = \frac{1}{2} \left [(a_{11} + a_{22}) \pm \sqrt{4a_{12} a_{21} + (a_{11} - a_{22})^2} \right ].

For real matrices, the coefficients of the characteristic polynomial are all real. The number and type of the roots depend on the value of the discriminant Δ: for Δ = 0, Δ > 0, or Δ < 0, respectively, the roots are one real (double), two real, or two complex. If the roots are complex, they are complex conjugates of each other. When the number of distinct roots is less than the degree of the characteristic polynomial (the latter equals the order of the matrix and the number of dimensions of the vector space), the equation has a multiple root. In the case of a quadratic equation with one root, this root is a double root, or a root with multiplicity 2.
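The quadratic formula above can be cross-checked numerically: the roots of the characteristic polynomial coincide with the eigenvalues returned by a general eigenvalue routine. A sketch assuming NumPy, with arbitrary example entries:

```python
import numpy as np

a11, a12, a21, a22 = 1.0, 2.0, 3.0, 0.0          # example entries (assumption)

# Coefficients of lambda^2 - (a11 + a22) lambda + (a11 a22 - a12 a21).
coeffs = [1.0, -(a11 + a22), a11 * a22 - a12 * a21]
roots = np.roots(coeffs)                          # roots of the polynomial

A = np.array([[a11, a12],
              [a21, a22]])
# The roots of the characteristic polynomial are exactly the eigenvalues.
assert np.allclose(np.sort_complex(roots), np.sort_complex(np.linalg.eigvals(A)))
```

For these entries the characteristic equation is λ² − λ − 6 = 0, with roots 3 and −2.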
A root with a multiplicity of 1 is a simple root. A quadratic equation with two distinct real or complex roots has only simple roots. In general, the algebraic multiplicity of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial. The spectrum of a transformation on a finite-dimensional vector space is defined as the set of all its eigenvalues. In the infinite-dimensional case, the concept of spectrum is more subtle and depends on the topology of the vector space. The general formula for the characteristic polynomial of an n × n matrix is

p(\lambda) = \sum_{k=0}^n (-1)^k S_k \lambda^{n-k},

where S0 = 1, S1 = tr(A) (the trace of the transformation matrix A), and Sk with k > 1 are the sums of the principal minors of order k.[20] The fact that the eigenvalues are roots of an equation of degree n shows that a linear transformation of an n-dimensional linear space has at most n different eigenvalues.[21] According to the fundamental theorem of algebra, in a complex linear space the characteristic polynomial has at least one zero; consequently, every linear transformation of a complex linear space has at least one eigenvalue.[22][23] For real linear spaces, if the dimension is an odd number, the linear transformation has at least one eigenvalue; if the dimension is an even number, the existence of eigenvalues depends on the determinant of the transformation matrix: if the determinant is negative, there exists at least one positive and one negative eigenvalue; if the determinant is positive, nothing can be said about the existence of eigenvalues.[24] The complexity of finding the roots/eigenvalues of the characteristic polynomial increases rapidly with the degree of the polynomial (the dimension of the vector space) n. Thus, for n = 3 the eigenvalues are roots of a cubic equation, and for n = 4 they are roots of a quartic equation.
For n > 4 there is no general algebraic formula for the roots (by the Abel–Ruffini theorem), and one has to resort to root-finding algorithms, such as Newton's method (with Horner's scheme for polynomial evaluation), to find numerical approximations of the eigenvalues. For large symmetric sparse matrices, the Lanczos algorithm is used to compute eigenvalues and eigenvectors. To find the eigenvectors, the eigenvalues found as roots of the characteristic equation are plugged back, one at a time, into the eigenvalue equation written in matrix form (illustrated for the simplest case of a two-dimensional vector space L2):

\left (\begin{bmatrix} a_{11} & a_{12}\\a_{21} & a_{22}\end{bmatrix} - \lambda \begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix} \right ) \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a_{11} - \lambda & a_{12}\\a_{21} & a_{22} - \lambda \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},

where λ is one of the eigenvalues found as a root of the characteristic equation. This matrix equation is equivalent to a system of two linear equations:

\left ( a_{11} - \lambda \right ) x + a_{12} y = 0 \\ a_{21} x + \left ( a_{22} - \lambda \right ) y = 0

The equations are solved for x and y by the usual algebraic or matrix methods. Since the system is homogeneous, the solution is determined only up to a common factor, and it is often possible to divide both sides of the equations by one or more of the coefficients, which makes some of the coefficients in front of the unknowns equal to 1. Choosing one representative in this way is called normalization of the vectors, and corresponds to choosing one of the eigenvectors (the normalized eigenvector) as a representative of all vectors in the eigenspace corresponding to the respective eigenvalue. The x and y thus found are the components of the eigenvector in the coordinate system used (most often Cartesian or polar).
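The plug-back step can also be carried out numerically: for a known eigenvalue λ, an eigenvector is any non-zero vector in the null space of A − λI, which can be read off from the singular value decomposition. A sketch assuming NumPy; the matrix and its eigenvalue are an example, not taken from the text:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # example matrix (assumption)
lam = 5.0                         # one root of det(A - lambda I) = 0 for this A

# The null space of (A - lam I) is spanned by the right-singular vectors
# belonging to the (numerically) zero singular values.
M = A - lam * np.eye(2)
_, s, Vt = np.linalg.svd(M)
x = Vt[-1]                        # row for the smallest singular value

assert np.allclose(M @ x, 0.0)    # (A - lam I) x = 0
assert np.allclose(A @ x, lam * x)
```

Here the characteristic equation is λ² − 7λ + 10 = 0 with roots 5 and 2; the eigenvector for λ = 5 lies along (1, 1).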
Using the Cayley–Hamilton theorem, which states that every square matrix satisfies its own characteristic equation, it can be shown that (most generally, in a complex space) there exists at least one non-zero vector that satisfies the eigenvalue equation for that matrix.[25] As noted in the Definitions section, to each eigenvalue corresponds an infinite number of collinear (linearly dependent) eigenvectors that form the eigenspace for this eigenvalue, and the dimension of the eigenspace is equal to the number of linearly independent eigenvectors that it contains. The geometric multiplicity of an eigenvalue is defined as the dimension of the associated eigenspace. A multiple eigenvalue may give rise to a single eigenvector, so its algebraic multiplicity may be different from its geometric multiplicity.[26] However, as already stated, different eigenvalues are paired with linearly independent eigenvectors.[14] From the above, it follows that the geometric multiplicity cannot be greater than the algebraic multiplicity.[27] For instance, an eigenvector of a rotation in three dimensions is a vector located along the axis about which the rotation is performed. The corresponding eigenvalue is 1, and the corresponding eigenspace contains all the vectors along the axis. As this is a one-dimensional space, its geometric multiplicity is one. This is the only eigenvalue of the spectrum (of this rotation) that is a real number. The examples that follow are for the simplest case of a two-dimensional vector space L2, but they can easily be applied in the same manner to spaces of higher dimensions.

Homothety, identity, point reflection, and null transformation

Fig. 3: Homothety in two dimensions.

As a one-dimensional vector space L1, consider a rubber string tied to an unmoving support at one end, such as that on a child's sling.
Pulling the string away from the point of attachment stretches it and elongates it by some scaling factor λ, which is a real number. Each vector on the string is stretched equally, with the same scaling factor λ, and although elongated, it preserves its original direction. This type of transformation is called a homothety (similarity transformation). For a two-dimensional vector space L2, consider a rubber sheet stretched equally in all directions, such as a small area of the surface of an inflating balloon (Fig. 3). All vectors originating at a fixed point on the balloon surface are stretched equally, with the same scaling factor λ. The homothety transformation in two dimensions is described by a 2 × 2 square matrix acting on an arbitrary vector in the plane of the stretching/shrinking surface. After doing the matrix multiplication, one obtains:

A \mathbf{x} = \begin{bmatrix}\lambda & 0\\0 & \lambda\end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix}\lambda \cdot x + 0 \cdot y \\0 \cdot x + \lambda \cdot y\end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \mathbf{x},

which, expressed in words, means that the transformation is equivalent to multiplying the length of the vector by λ while preserving its original direction. The equation thus obtained is exactly the eigenvalue equation. Since the vector taken was arbitrary, in a homothety every vector in the vector space satisfies the eigenvalue equation, i.e., any vector lying on the balloon surface can be an eigenvector. Whether the transformation is a stretching (elongation, extension, inflation) or a shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching; if 0 < λ < 1, it is shrinking.
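In matrix terms, a homothety is λ times the identity matrix, so every vector satisfies the eigenvalue equation. A quick numerical sketch (NumPy assumed; the scaling factor is an arbitrary example):

```python
import numpy as np

lam = 2.5                       # example scaling factor (assumption)
A = lam * np.eye(2)             # homothety: lambda times the identity matrix

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=2)      # arbitrary vector "on the balloon surface"
    assert np.allclose(A @ x, lam * x)   # every vector is an eigenvector
```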
Several other transformations can be considered special types of homothety with some fixed, constant value of λ: in the identity, which leaves vectors unchanged, λ = 1; in reflection about a point, which preserves the length and direction of vectors but changes their orientation to the opposite one, λ = −1; and in the null transformation, which transforms each vector to the zero vector, λ = 0. The null transformation does not give rise to an eigenvector, since the zero vector cannot be an eigenvector, but it has an eigenspace, since the eigenspace contains the zero vector by definition.

Unequal scaling

For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction and shrunk in the other. In this case, there are two different scaling factors: k1 for the scaling in direction x, and k2 for the scaling in direction y. The transformation matrix is

\begin{bmatrix}k_1 & 0\\0 & k_2\end{bmatrix},

and the characteristic equation is \lambda^2 - \lambda (k_1 + k_2) + k_1 k_2 = 0. The eigenvalues, obtained as roots of this equation, are λ1 = k1 and λ2 = k2, which means, as expected, that the two eigenvalues are the scaling factors in the two directions. Plugging k1 back into the eigenvalue equation gives one of the eigenvectors:

\begin{bmatrix}k_1 - k_1 & 0\\0 & k_2 - k_1\end{bmatrix} \begin{bmatrix} x \\ y\end{bmatrix} = \begin{bmatrix} \left ( k_1 - k_1 \right ) x + 0 \cdot y \\ 0 \cdot x + \left ( k_2 - k_1 \right ) y \end{bmatrix} = \begin{bmatrix} 0 \\ \left ( k_2 - k_1 \right ) y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

The second component gives (k2 − k1) y = 0, and dividing it by k2 − k1 (assuming k1 ≠ k2), one obtains y = 0, which represents the x axis. A vector of length 1 taken along this axis is the normalized eigenvector corresponding to the eigenvalue λ1. The eigenvector corresponding to λ2, which is a unit vector along the y axis, is found in a similar way. In this case, both eigenvalues are simple (with algebraic and geometric multiplicities equal to 1).
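For the unequal-scaling matrix, the eigenvalues are the scaling factors themselves and the eigenvectors lie along the coordinate axes, as the derivation above shows. A numerical check (NumPy assumed; k1 and k2 are example values):

```python
import numpy as np

k1, k2 = 3.0, 0.5               # example scaling factors (assumption)
A = np.diag([k1, k2])           # unequal scaling along the x and y axes

eigvals, eigvecs = np.linalg.eig(A)
# The eigenvalues are exactly the scaling factors...
assert np.allclose(sorted(eigvals), sorted([k1, k2]))
# ...and the (normalized) eigenvectors point along the coordinate axes.
assert np.allclose(np.abs(eigvecs), np.eye(2))
```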
Depending on the values of λ1 and λ2, there are several notable special cases. In particular, if λ1 > 1 and λ2 = 1, the transformation is a stretch in the direction of the x axis. If λ2 = 0 and λ1 = 1, the transformation is a projection of the surface L2 onto the x axis, because all vectors in the direction of y become zero vectors. Let the rubber sheet be stretched along the x axis (k1 > 1) and simultaneously shrunk along the y axis (k2 < 1). Then λ1 = k1 is the principal eigenvalue. Repeatedly applying this stretching/shrinking transformation to the rubber sheet makes the sheet more and more similar to a rubber string: any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the x axis (the direction of stretching), that is, it will become collinear with the principal eigenvector.

Mona Lisa

Mona Lisa with eigenvector

For the example shown on the right, the matrix that would produce a shear transformation similar to this is

A=\begin{bmatrix}1 & 0\\ -\frac{1}{2} & 1\end{bmatrix}.

The set of eigenvectors \mathbf{x} of A is defined as those vectors which, when multiplied by A, result in a simple scaling \lambda of \mathbf{x}. Thus, A\mathbf{x} = \lambda\mathbf{x}. If we restrict ourselves to real eigenvalues, the only effect of the matrix on the eigenvectors will be to change their length, and possibly reverse their direction. Multiplying the right-hand side by the identity matrix I, we have A\mathbf{x} = (\lambda I)\mathbf{x}, and therefore (A-\lambda I)\mathbf{x}=0. For this equation to have non-trivial solutions, we require the determinant \det(A - \lambda I), which is called the characteristic polynomial of the matrix A, to be zero.
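The calculation that follows in the text can be anticipated numerically. A sketch assuming NumPy, using the shear matrix A given above:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [-0.5, 1.0]])     # the shear matrix from the example

# Both eigenvalues equal 1 (a double root of (1 - lambda)^2 = 0).
eigvals = np.linalg.eigvals(A)
assert np.allclose(eigvals, [1.0, 1.0])

# Vectors pointing straight up or down are unchanged by the shear.
x = np.array([0.0, 1.0])
assert np.allclose(A @ x, x)
```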
In our example we can calculate the determinant as

\det\!\left(\begin{bmatrix}1 & 0\\ -\frac{1}{2} & 1\end{bmatrix} - \lambda\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} \right)=(1-\lambda)^2,

so the characteristic polynomial of the matrix A is (1-\lambda)^2. There is in this case only one distinct solution of the equation (1-\lambda)^2 = 0, namely \lambda=1. This is the eigenvalue of the matrix A. As in the study of roots of polynomials, it is convenient to say that this eigenvalue has multiplicity 2. Having found the eigenvalue \lambda=1, we can solve for the space of eigenvectors by finding the nullspace of A-(1)I, in other words by solving for vectors \mathbf{x} which are solutions of

\begin{bmatrix}1-\lambda & 0\\ -\frac{1}{2} & 1-\lambda \end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=0

Substituting the obtained eigenvalue \lambda=1,

\begin{bmatrix}0 & 0\\ -\frac{1}{2} & 0 \end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=0

Solving this matrix equation, we find that vectors in the nullspace have the form

\mathbf{x} = \begin{bmatrix}0\\ c\end{bmatrix}

where c is an arbitrary constant. All vectors of this form, i.e. pointing straight up or down, are eigenvectors of the matrix A. The effect of applying the matrix A to these vectors is equivalent to multiplying them by their corresponding eigenvalue, in this case 1. In general, 2-by-2 matrices will have two distinct eigenvalues, and thus two distinct eigenvectors. Whereas most vectors will have both their lengths and directions changed by the matrix, eigenvectors will only have their lengths changed, and will not change their direction, except perhaps to flip through the origin when the eigenvalue is negative. Also, it is usually the case that the eigenvalue will be something other than 1, and so eigenvectors will be stretched, squashed and/or flipped through the origin by the matrix.

Other examples

Standing wave
A standing wave in a rope fixed at its boundaries is an example of an eigenvector, or more precisely, an eigenfunction of the transformation giving the acceleration. As time passes, the standing wave is scaled by a sinusoidal oscillation whose frequency is determined by the eigenvalue, but its overall shape is not modified. Assume the rope is a continuous medium. If one considers the equation for the acceleration at every point of the rope, its eigenvectors, or eigenfunctions, are the standing waves. The standing waves correspond to particular oscillations of the rope such that the acceleration of the rope is simply its shape scaled by a factor—this factor, the eigenvalue, turns out to be -\omega^2 where \omega is the angular frequency of the oscillation. Each component of the vector associated with the rope is multiplied by a time-dependent factor \sin(\omega t). If damping is considered, the amplitude of this oscillation decreases until the rope stops oscillating, corresponding to a complex ω. One can then associate a lifetime with the imaginary part of ω, and relate the concept of an eigenvector to the concept of resonance. Without damping, the fact that the acceleration operator (assuming a uniform density) is Hermitian leads to several important properties, such as that the standing wave patterns are orthogonal functions. However, it is sometimes unnatural or even impossible to write down the eigenvalue equation in a matrix form. This occurs for instance when the vector space is infinite dimensional, for example, in the case of the rope above. Depending on the nature of the transformation T and the space to which it applies, it can be advantageous to represent the eigenvalue equation as a set of differential equations. If T is a differential operator, the eigenvectors are commonly called eigenfunctions of the differential operator representing T. 
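As a concrete case, discussed in the next paragraph, the exponential h(t) = exp(λt) is an eigenfunction of the operator d/dt. This can be checked numerically with a finite-difference derivative (a sketch assuming NumPy; the value of λ is an arbitrary example):

```python
import numpy as np

lam = -0.7                                  # example eigenvalue (assumption)
h = lambda t: np.exp(lam * t)               # candidate eigenfunction of d/dt

# Central-difference approximation of dh/dt at a few sample points.
t = np.linspace(0.0, 2.0, 9)
eps = 1e-6
dh_dt = (h(t + eps) - h(t - eps)) / (2 * eps)

# The eigenvalue equation dh/dt = lam * h holds to numerical precision.
assert np.allclose(dh_dt, lam * h(t), atol=1e-8)
```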
For example, differentiation itself is a linear transformation, since

\displaystyle\frac{d}{dt}(af+bg) = a \frac{df}{dt} + b \frac{dg}{dt}

Consider differentiation with respect to t. Its eigenfunctions h(t) obey the eigenvalue equation:

\displaystyle\frac{dh}{dt} = \lambda h,

where λ is the eigenvalue associated with the function. Such a function of time is constant if \lambda = 0, grows proportionally to itself if \lambda is positive, and decays proportionally to itself if \lambda is negative. For example, an idealized population of rabbits breeds faster the more rabbits there are, and thus satisfies the equation with a positive λ. The solution of the eigenvalue equation is h(t)= \exp (\lambda t), the exponential function; thus that function is an eigenfunction of the differential operator d/dt with the eigenvalue λ. If λ is negative, we call the evolution of h an exponential decay; if it is positive, an exponential growth. The value of λ can be any complex number. The spectrum of d/dt is therefore the whole complex plane. In this example the vector space in which the operator d/dt acts is the space of differentiable functions of one variable. This space has infinite dimension (because it is not possible to express every differentiable function as a linear combination of a finite number of basis functions). However, the eigenspace associated with any given eigenvalue λ is one-dimensional: it is the set of all functions h(t)= A \exp (\lambda t), where A is an arbitrary constant, the initial population at t = 0.

Spectral theorem

For more details on this topic, see spectral theorem.

In its simplest version, the spectral theorem states that, under certain conditions, a linear transformation of a vector \mathbf{v} can be expressed as a linear combination of the eigenvectors, in which the coefficient of each eigenvector is equal to the corresponding eigenvalue times the scalar product (or dot product) of the eigenvector with the vector \mathbf{v}.
Mathematically, it can be written as:

\mathcal{T}(\mathbf{v})= \lambda_1 (\mathbf{v}_1 \cdot \mathbf{v}) \mathbf{v}_1 + \lambda_2 (\mathbf{v}_2 \cdot \mathbf{v}) \mathbf{v}_2 + \cdots

where \mathbf{v}_1, \mathbf{v}_2, \dots and \lambda_1, \lambda_2, \dots stand for the eigenvectors and eigenvalues of \mathcal{T}. The theorem is valid for all self-adjoint linear transformations (linear transformations given by real symmetric matrices and Hermitian matrices), and for the more general class of (complex) normal matrices. If one defines the nth power of a transformation as the result of applying it n times in succession, one can also define polynomials of transformations. A more general version of the theorem is that any polynomial P of \mathcal{T} is given by

P(\mathcal{T})(\mathbf{v}) = P(\lambda_1) (\mathbf{v}_1 \cdot \mathbf{v}) \mathbf{v}_1 + P(\lambda_2) (\mathbf{v}_2 \cdot \mathbf{v}) \mathbf{v}_2 + \cdots

The theorem can be extended to other functions of transformations, such as analytic functions, the most general case being Borel functions.

Main article: Eigendecomposition (matrix)

The spectral theorem for matrices can be stated as follows. Let \mathbf{A} be a square (n\times n) matrix. Let \mathbf{q}_1 \dots \mathbf{q}_k be an eigenvector basis, i.e. an indexed set of k linearly independent eigenvectors, where k is the dimension of the space spanned by the eigenvectors of \mathbf{A}. If k=n, then \mathbf{A} can be written

\mathbf{A} = \mathbf{Q} \mathbf{\Lambda} \mathbf{Q}^{-1}

where \mathbf{Q} is the square (n\times n) matrix whose ith column is the basis eigenvector \mathbf{q}_i of \mathbf{A} and \mathbf{\Lambda} is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. \Lambda_{ii}=\lambda_i.

Infinite-dimensional spaces

If the vector space is an infinite-dimensional Banach space, the notion of eigenvalues can be generalized to the concept of spectrum.
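Returning to the finite-dimensional statement above, both the decomposition A = QΛQ−1 and the spectral expansion of T(v) can be verified numerically for a symmetric matrix. A sketch assuming NumPy; the matrix and test vector are arbitrary examples:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # symmetric example matrix (assumption)

# eigh is the eigensolver for symmetric/Hermitian matrices; for a
# symmetric A the eigenvector matrix Q is orthogonal, so Q^-1 = Q^T.
eigvals, Q = np.linalg.eigh(A)
Lam = np.diag(eigvals)
assert np.allclose(A, Q @ Lam @ Q.T)        # A = Q Lambda Q^-1

# Spectral theorem: T(v) = sum_i lambda_i (v_i . v) v_i
v = np.array([1.0, -2.0])
Tv = sum(lam * (q @ v) * q for lam, q in zip(eigvals, Q.T))
assert np.allclose(Tv, A @ v)
```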
The spectrum is the set of scalars λ for which \left(T-\lambda\right)^{-1} is not defined; that is, such that T-\lambda has no bounded inverse. Clearly, if λ is an eigenvalue of T, λ is in the spectrum of T. In general, the converse is not true: there are operators on Hilbert or Banach spaces which have no eigenvectors at all. This can be seen in the following example. The bilateral shift on the Hilbert space \ell^2(\mathbf{Z}) (the space of all sequences of scalars \dots a_{-1}, a_0, a_1, a_2, \dots such that \cdots + |a_{-1}|^2 + |a_0|^2 + |a_1|^2 + |a_2|^2 + \cdots converges) has no eigenvalue but has spectral values. Exponential functions are eigenfunctions of the derivative operator (the derivatives of exponential functions are proportional to the functions themselves). Exponential growth and decay therefore provide examples of continuous spectra, as does the vibrating string example illustrated above. The hydrogen atom is an example where both types of spectra appear. The eigenfunctions of the hydrogen atom Hamiltonian are called eigenstates and are grouped into two categories. The bound states of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of eigenvalues, which can be computed by the Rydberg formula), while the ionization processes are described by the continuous part (the energy of the collision/ionization is not quantized).

Schrödinger equation

An example of an eigenvalue equation where the transformation \mathcal{T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

H\psi_E = E\psi_E \,

Molecular orbitals

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree-Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem.
In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of an implicit eigenvalue equation. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree-Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.

Geology and glaciology: the orientation tensor

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about the orientation and dip of a clast fabric's constituents can be summarized in 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram,[28][29] or as a stereonet on a Wulff net.[30] The output of the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32[31] are in the order E1 > E2 > E3, with E1 being the primary orientation of clast orientation/dip, E2 the secondary, and E3 the tertiary, in terms of strength. The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). Various values of E1, E2 and E3 mean different things, as can be seen in the book 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004.[32]

Factor analysis

Fig. 5: Eigenfaces as examples of eigenvectors.

Tensor of inertia

Stress tensor

Eigenvalues of a graph

Notes

1. See Hawkins (1975), §2.
2. See Hawkins (1975), §3.
3. See Kline 1972, pp. 807-808.
4. See Kline 1972, p. 673.
5. See Kline 1972, pp. 715-716.
6. See Kline 1972, pp. 706-707.
7. See Kline 1972, p. 1063.
8. See Aldrich (2006).
9. See Golub & van Loan 1996, §7.3; Meyer 2000, §7.3.
10. See Strang 2006, p. 249.
11. See Sharipov 1996, p. 66.
12. See Bowen & Wang 1980, p. 148.
13. For a proof of this lemma, see Shilov 1969, p. 131, and Lemma for the eigenspace.
14. For a proof of this lemma, see Shilov 1969, p. 130, Hefferon 2001, p. 364, and Lemma for linear independence of eigenvectors.
15. See Shilov 1969, p. 131.
16. For proof, see Sharipov 1996, Theorem 4.4 on p. 68.
17. See Shores 2007, p. 252.
18. For a proof of this theorem, see Weisstein, Eric W., Eigenvector, from MathWorld − A Wolfram Web Resource.
19. See Strang 2006, footnote to p. 245.
20. For details and proof, see Meyer 2000, pp. 494-495.
21. See Greub 1975, p. 118.
22. See Greub 1975, p. 119.
23. For proof, see Gelfand 1971, p. 115.
24. For proof, see Greub 1975, p. 119.
25. For details and proof, see Kuttler 2007, p. 151.
26. See Shilov 1969, p. 134.
27. See Shilov 1969, p. 135 and Problem 11 to Chapter 5.
28. Graham, D., and Midgley, N., 2000. Earth Surface Processes and Landforms (25), pp. 1473-1477.
29. Sneed, E. D., and Folk, R. L., 1958. Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis. Journal of Geology 66(2): 114-150.
30. GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system.
31. Stereo32.
32. Benn, D., Evans, D., 2004. A Practical Guide to the Study of Glacial Sediments. London: Arnold. pp. 103-107.

References

• Korn, Granino A.; Korn, Theresa M. (2000), Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review, 1152 p., Dover Publications, 2nd revised edition, ISBN 0-486-41147-8.
• John Aldrich, Eigenvalue, eigenfunction, eigenvector, and related terms.
In Jeff Miller (Editor), Earliest Known Uses of Some of the Words of Mathematics, last updated 7 August 2006, accessed 22 August 2006. • Strang, Gilbert (1993), Introduction to linear algebra, Wellesley-Cambridge Press, Wellesley, MA, ISBN 0-961-40885-5 . • Strang, Gilbert (2006), Linear algebra and its applications, Thomson, Brooks/Cole, Belmont, CA, ISBN 0-030-10567-6 . • Bowen, Ray M.; Wang, Chao-Cheng (1980), Linear and multilinear algebra, Plenum Press, New York, NY, ISBN 0-306-37508-7 . • Claude Cohen-Tannoudji, Quantum Mechanics, Wiley (1977). ISBN 0-471-16432-1. (Chapter II. The mathematical tools of quantum mechanics.) • John B. Fraleigh and Raymond A. Beauregard, Linear Algebra (3rd edition), Addison-Wesley Publishing Company (1995). ISBN 0-201-83999-7 (international edition). • Golub, Gene H.; van Loan, Charles F. (1996), Matrix computations (3rd Edition), Johns Hopkins University Press, Baltimore, MD, ISBN 978-0-8018-5414-9 . • T. Hawkins, Cauchy and the spectral theory of matrices, Historia Mathematica, vol. 2, pp. 1–29, 1975. • Roger A. Horn and Charles R. Johnson, Matrix Analysis, Cambridge University Press, 1985. ISBN 0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback). • Kline, Morris (1972), Mathematical thought from ancient to modern times, Oxford University Press, ISBN 0-195-01496-0 . • Brown, Maureen, "Illuminating Patterns of Perception: An Overview of Q Methodology" October 2004 • Gene H. Golub and Henk A. van der Vorst, "Eigenvalue computation in the 20th century," Journal of Computational and Applied Mathematics 123, 35-65 (2000). • Max A. Akivis and Vladislav V. Goldberg, Tensor calculus (in Russian), Science Publishers, Moscow, 1969. • Gelfand, I. M. (1971), Lecture notes in linear algebra, Russian: Science Publishers, Moscow  • Pavel S. Alexandrov, Lecture notes in analytical geometry (in Russian), Science Publishers, Moscow, 1968. • Carter, Tamara A., Richard A. 
Tapia, and Anne Papaconstantinou, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, Online Edition, Retrieved on 2008-02-19. • Steven Roman, Advanced linear algebra 3rd Edition, Springer Science + Business Media, LLC, New York, NY, 2008. ISBN 978-0-387-72828-5 • Shilov, G. E. (1969), Finite-dimensional (linear) vector spaces, Russian: State Technical Publishing House, 3rd Edition, Moscow . • Kuttler, Kenneth (2007), An introduction to linear algebra, Online e-book in PDF format, Brigham Young University, . • James W. Demmel, Applied Numerical Linear Algebra, SIAM, 1997, ISBN 0-89871-389-7. • Robert A. Beezer, A First Course In Linear Algebra, Free online book under GNU licence, University of Puget Sound, 2006 • Lancaster, P. Matrix Theory (in Russian), Science Publishers, Moscow, 1973, 280 p. • Paul R. Halmos, Finite-Dimensional Vector Spaces, 8th Edition, Springer-Verlag, New York, 1987, 212 p., ISBN 0387900934 • Greub, Werner H. (1975), Linear Algebra (4th Edition), Springer-Verlag, New York, NY, ISBN 0-387-90110-8 . • Larson, Ron and Bruce H. Edwards, Elementary Linear Algebra, 5th Edition, Houghton Mifflin Company, 2003, ISBN 0618335676. • Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, Online e-book in PDF format, Bashkir State University, Ufa, arXiv:math/0405323v1, ISBN 5-7477-0099-5, Archived from the original on 2009-10-26, . External linksEdit Algebra may have more about this subject. Linear Algebra may have more about this subject. Around Wikia's network Random Wiki
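The orientation-tensor procedure described in the geology section can be sketched numerically. This is a minimal pure-Python illustration with made-up clast measurements (the function names and the synthetic data are mine, not from Stereo32): build the 3×3 orientation tensor T = (1/N)·Σ v vᵀ from unit direction vectors of clast long axes, then extract the dominant eigenpair (E1 and the primary orientation) by power iteration. Real programs report all three eigenvalues E1 ≥ E2 ≥ E3; this sketch recovers only the largest.

```python
import math

def direction_vector(trend_deg, plunge_deg):
    """Unit vector from a compass trend and a downward plunge (degrees)."""
    t, p = math.radians(trend_deg), math.radians(plunge_deg)
    return (math.cos(p) * math.sin(t),   # east component
            math.cos(p) * math.cos(t),   # north component
            -math.sin(p))                # down component

def orientation_tensor(vectors):
    """3x3 orientation tensor T = (1/N) * sum of outer products v v^T."""
    n = len(vectors)
    return [[sum(v[i] * v[j] for v in vectors) / n for j in range(3)]
            for i in range(3)]

def dominant_eigenpair(T, iters=200):
    """Largest eigenvalue E1 and its eigenvector, by power iteration."""
    x = (1.0, 1.0, 1.0)
    for _ in range(iters):
        y = tuple(sum(T[i][j] * x[j] for j in range(3)) for i in range(3))
        norm = math.sqrt(sum(c * c for c in y))
        x = tuple(c / norm for c in y)
    # Rayleigh quotient of the converged unit vector gives E1
    E1 = sum(x[i] * sum(T[i][j] * x[j] for j in range(3)) for i in range(3))
    return E1, x

# Made-up field data: (trend, plunge) for each clast long axis; five clasts
# cluster near trend 82 degrees, one is an outlier.
clasts = [(80, 10), (85, 12), (78, 8), (95, 15), (82, 11), (10, 70)]
T = orientation_tensor([direction_vector(t, p) for t, p in clasts])
E1, primary = dominant_eigenpair(T)
print(round(E1, 3), tuple(round(c, 3) for c in primary))
```

Because the tensor is built from unit vectors, its eigenvalues sum to 1, and a strongly clustered fabric shows up as E1 close to 1 with the primary eigenvector pointing along the mean clast orientation.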
Readings and Lecture Notes

Lecture notes (with blanks) are provided for each lecture. Students are expected to follow along during the lecture in order to fill in the blanks in the notes. Readings are from the required textbook:

Atkins, Peter, and Loretta Jones. Chemical Principles: The Quest for Insight. 4th ed. New York, NY: W.H. Freeman and Company, 2007. ISBN: 9781429209656.

The reading assignment listed for the first session is a review of information you are expected to know before you begin the class. This information is not discussed during lecture. In addition, no lecture notes were provided for the first session. The handout associated with that lecture is an overview of the class format and expectations.

L1: The importance of chemical principles. Readings: Section A.1; Sections B.3-B.4; Sections C-H; Sections L-M
L2: Discovery of electron and nucleus, need for quantum mechanics. Readings: Sections A.2-A.3; Sections B.1-B.2; Section 1.1
L3: Wave-particle duality of light. Readings: Sections 1.2 and 1.4 (PDF)
L4: Wave-particle duality of matter, Schrödinger equation. Readings: Sections 1.5-1.6 (PDF)
L5: Hydrogen atom energy levels. Readings: Sections 1.3, 1.7 up to equation 9b, and 1.8 (PDF)
L6: Hydrogen atom wavefunctions (orbitals). Readings: Section 1.9 (PDF - 1.2 MB)
L7: p-orbitals. Readings: Sections 1.10-1.11 (PDF)
L8: Multielectron atoms and electron configurations. Readings: Sections 1.12-1.13 (PDF)
L9: Periodic trends. Readings: Sections 1.14-1.18 and 1.20 (PDF - 1.6 MB)
L10: Periodic trends continued; covalent bonds. Readings: Sections 2.5-2.6 and 2.14-2.16 (PDF - 1.6 MB)
L11: Lewis structures. Readings: Sections 2.7-2.8 (PDF)
L12: Exceptions to Lewis structure rules; ionic bonds. Readings: Sections 2.3 and 2.9-2.12 (PDF - 1.1 MB)
L13: Polar covalent bonds; VSEPR theory. Readings: Sections 3.1-3.2 (PDF - 5.1 MB)
L14: Molecular orbital theory. Readings: Sections 3.8-3.11 (PDF)
L15: Valence bond theory and hybridization. Readings: Sections 3.4-3.7 (PDF - 1.0 MB)
L16: Determining hybridization in complex molecules; thermochemistry and bond energies/bond enthalpies. Readings: Sections 6.13, 6.15-6.18, and 6.20 (PDF)
L17: Entropy and disorder. Readings: Sections 7.1-7.2, 7.8, 7.12-7.13, and 7.15 (PDF)
L18: Free energy and control of spontaneity. Readings: Section 7.16 (PDF)
L19: Chemical equilibrium. Readings: Sections 9.0-9.9 (PDF)
L20: Le Chatelier's principle and applications to blood-oxygen levels. Readings: Sections 9.10-9.13 (PDF)
L21: Acid-base equilibrium: Is MIT water safe to drink? Readings: Chapter 10 (PDF)
L22: Chemical and biological buffers. Readings: Chapters 10 and 11 (PDF)
L23: Acid-base titrations. Readings: Chapter 11 (PDF)
L24: Balancing oxidation/reduction equations. Readings: Section K; Chapter 12
L25: Electrochemical cells. Readings: Chapter 12 (PDF)
L26: Chemical and biological oxidation/reduction reactions. Readings: Chapter 12 (PDF)
L27: Transition metals and the treatment of lead poisoning. Readings: pp. 669-681 (PDF)
L28: Crystal field theory. Readings: pp. 681-683 (PDF - 1.4 MB)
L29: Metals in biology. Readings: pp. 631-637 (PDF - 1.2 MB)
L30: Magnetism and spectrochemical theory. Readings: Chapter 16 (PDF)
L31: Rate laws. Readings: Sections 13.1-13.5 (PDF)
L32: Nuclear chemistry and elementary reactions. Readings: pp. 498-501 and 660-664 (PDF)
L33: Reaction mechanism. Readings: pp. 549-552 (PDF)
L34: Temperature and kinetics. Readings: Sections 13.11-13.13 (PDF)
L35: Enzyme catalysis. Readings: Sections 13.14-13.15 (PDF)
L36: Biochemistry (PDF)
Friday, September 26, 2008

Dark black holes, dark flow, and how to avoid heat death?

Lubos made interesting comments about the calculation of black hole entropy in his blog. I have absolutely nothing to say about this branch of science as far as technicalities are concerned. The formulas for black hole entropy however inspire new visions about black holes if one accepts the hierarchy of Planck constants and the notion of relative darkness, in the sense that particles at different pages of the book-like structure, whose pages are labelled by the values of Planck constant, are dark relative to each other. I glue below a slightly edited comment in Kea's blog.

1. Black hole entropy and dark black holes

Lubos made explicit in his posting the 1/hbar proportionality of the formulas for black hole entropy. This proportionality reflects the basic thermodynamical implication of quantization: the phase space of an N-dimensional system decomposes into cells of volume hbar^N, and entropy is proportional to the phase space volume using this volume as unit. If hbar becomes gigantic, as it would in the case of dark gravitation (hbar = GM1M2/v0, with v0/c ∼ 2^-11 for inner planetary Bohr orbits), this means that black hole entropy is extremely small. Black is dark;-) as I realized a few years ago, and it would be interesting to consider the consequences.

2. Hierarchy of Planck lengths

It deserves to be noticed that the rough order of magnitude estimate for the gravitational Planck constant of the Sun can be written as hbar_gr = x·4GM^2. This gives for the Planck length the expression

L_P = (G·hbar_gr)^(1/2) = x^(1/2)·2GM.

For x = 1 the Planck length would be just the Schwarzschild radius. This makes sense since these two lengths play rather similar roles. Quite generally, one would have a hierarchy of Planck lengths.

3. Dark flow

The second comment is related to the earlier posting of Lubos about the observed dark flow in length scales larger than the horizon size, towards an attractor outside the horizon.
The presence of the attractor outside the visible universe conforms with the notion of many-sheeted space-time, which predicts also a many-sheeted cosmology. Many-sheeted cosmology means a hierarchy of space-time sheets obeying their own Robertson-Walker type cosmologies: those with varying p-adic length scale and those labelled by the various values of Planck constant at the pages of the book-like structure obtained by gluing together singular coverings and factor spaces of the 8-D imbedding space (roughly). Particles at different pages are dark relative to each other in the sense that there are no local interaction vertices: classical interactions and those by exchanges of, say, photons are possible. Each sheet in many-sheeted cosmology has a different horizon size. The attractor would correspond to a different value of Planck constant and have a larger horizon size than our sheet. Dark energy would be dark matter, and the phase transitions increasing Planck constant would induce phases of accelerated expansion. In the average sense these periods would give ordinary cosmology without accelerated expansion.

4. How to avoid heat death?

The third comment relates to the dark flow and the implications of the hierarchy of Planck constants for the future prospects of intelligent life. Heat death is believed by standard physicists to be waiting for all forms of life. We would live in the silliest possible Universe. I cannot believe this. I am ready to admit that some of our theories about the Universe are really silly, but the entire Universe? No! The hierarchy of Planck constants would allow heat death to be avoided. For instance, if the rate for the reduction of temperature is proportional to 1/hbar (as looks natural), then there are always infinitely many hierarchy levels for which the temperature is above a given value, since the temperature at these pages is reduced so slowly.
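The 1/hbar cooling argument can be illustrated with a toy calculation. The exponential cooling law and the doubling hierarchy hbar_k = 2^k·hbar_0 used below are my illustrative assumptions, not part of the post; the point is only that for any fixed time and any target temperature some finite hierarchy level is still hotter than the target.

```python
import math

# Toy model (my assumption, not the post's detailed dynamics): exponential
# cooling whose rate is proportional to 1/hbar_k, with hbar_k = 2**k * hbar_0.
def temperature(t, k, T_init=1.0, tau0=1.0):
    """Temperature at time t on hierarchy level k (units with hbar_0 = 1)."""
    rate = 1.0 / (2 ** k)          # cooling rate proportional to 1/hbar_k
    return T_init * math.exp(-rate * t / tau0)

# For any fixed time t and target temperature there is a finite level k
# whose temperature still exceeds the target.
t, T_target = 100.0, 0.5
k = 0
while temperature(t, k) <= T_target:
    k += 1
print(k, round(temperature(t, k), 3))
```

Because the cooling rate halves at each level, the search always terminates: levels with 2^k much larger than t/tau0 have barely cooled at all.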
Life can escape to the pages of the Big Book labelled by larger values of Planck constant without breaking the second law, since the scaling of the size of the system by hbar increases the phase space volume and keeps entropy constant. Evolution by quantum leaps increasing hbar, and thereby increasing the time scale of planned action and long-term memory, is another manner of saying this. The observed dark flow might be seen as direct support for this more optimistic view about Life, the Universe and Everything;-).

Tuesday, September 23, 2008

Flyby anomaly as a relativistic transverse Doppler effect?

Half a year ago I discussed a model for the flyby anomaly based on the hypothesis that a dark matter ring around the orbit of Earth causes the effect. The model reproduced the formula deduced for the change of the velocity of the spacecraft at a qualitative level, and contained a single free parameter: essentially the linear density of the dark matter at the flux tube. From Lubos I learned about a new twist in the story of the flyby anomaly. On September twelfth 2007 Jean-Paul Mbelek proposed an explanation of the flyby anomaly as a relativistic transverse Doppler effect. The model predicts also the functional dependence of the magnitude of the effect on the kinematic parameters, and the prediction is consistent with the empirical findings in the example considered. Therefore the story of the flyby anomaly might be finished, and dark matter at the orbit of Earth could bring in only an additional effect. It is probably too much to hope for this kind of effect to be large enough if present. For background see the chapter TGD and Astrophysics.

Monday, September 22, 2008

Tritium beta decay anomaly and variations in the rates of radioactive processes

The determination of neutrino mass from the beta decay of tritium leads to a tachyonic mass squared [2,3,4,5]. I have considered several alternative explanations for this long-standing anomaly.
The first class of models relies on the presence of a dark neutrino or antineutrino belt around the orbit of Earth. The second class of models relies on the prediction of the nuclear string model that the neutral color bonds connecting nucleons to a nuclear string can also be charged. This predicts a large number of fake nuclei having only apparently the proton and neutron numbers deduced from the mass number.

1. The 3He nucleus resulting in the decay could be fake (a tritium nucleus with one positively charged color bond making it look like 3He). The idea was that the slightly smaller mass of the fake 3He might explain the anomaly: it however turned out that the model cannot explain the variation of the anomaly from experiment to experiment.

2. Later (yesterday evening!) I realized that also the initial 3H nucleus could be fake (a 3He nucleus with one negatively charged color bond). It turned out that the fake tritium option has the potential to explain all aspects of the anomaly and also other anomalies related to radioactive and alpha decays of nuclei.

3. Just one day ago I still believed in the alternative based on the assumption of a dark neutrino or antineutrino belt surrounding Earth's orbit. This model has the potential to explain satisfactorily several aspects of the anomaly but fails in its simplest form to explain the dependence of the anomaly on the experiment. Since the fake tritium scenario is based only on the basic assumptions of the nuclear string model and brings in only new values of kinematical parameters, it is definitely favored.

In the following I shall describe only the models based on the decay of tritium to fake helium and the decay of fake tritium to helium.

1. Fake 3He option

Consider first the fake 3He option. Tritium (pnn) would decay with some rate to a fake 3He, call it 3Hef, which is actually a tritium nucleus containing one positively charged color bond and possessing a mass slightly different from that of 3He (ppn).

1.
In this kind of situation the expression for the function K(E,k) differs from K(stand), since the upper bound E0 for the maximal electron energy is modified:

E0 → E1 = M(3H) - M(3Hef) - m(ν) = M(3H) - M(3He) + ΔM - m(ν),
ΔM = M(3He) - M(3Hef).

Depending on whether 3Hef is heavier/lighter than 3He, E1 decreases/increases. From Vb ∈ [5, 100] eV and from the TGD based prediction of order m(ν̄) ∼ 0.27 eV one can conclude that ΔM should be in the range 5-100 eV.

2. In the lowest approximation K(E) can be written as

K(E) = K0(E, E1)θ(E1 - E) ≈ (E1 - E)θ(E1 - E).

Here θ(x) denotes the step function and K0(E, E1) corresponds to a massless antineutrino.

3. If a fraction p of the final state nuclei corresponds to fake 3He, the function K(E) deduced from data is a linear combination of the functions K(E, 3He) and K(E, 3Hef), given in the approximation m(ν) = 0 by

K(E) = (1 - p)K(E, 3He) + pK(E, 3Hef) ≈ (1 - p)(E0 - E)θ(E0 - E) + p(E1 - E)θ(E1 - E).

For m(3Hef) < m(3He) one has E1 > E0, giving

K(E) = (E0 - E)θ(E0 - E) + p(E1 - E0)θ(E1 - E)θ(E - E0).

K(E, E0) is shifted upwards by a constant term (1 - p)ΔM in the region E0 > E. At E = E0 the derivative of K(E) is infinite, which corresponds to the divergence of the derivative of the square root function in the simpler parametrization using a tachyonic mass. The prediction of the model is the presence of a tail corresponding to the region E0 < E < E1.

4. The model does not as such explain the bump near the end point of the spectrum. The decay 3H → 3Hef can be interpreted in terms of an exotic weak decay d → u + W- of the exotic d quark at the end of a color bond connecting nucleons inside 3H. The rate for these interactions cannot differ too much from that for ordinary weak interactions, and the W boson must transform to its ordinary variant before the decay W → e + ν̄.
Either the weak decay at quark level or the phase transition could take place with a considerable rate only for low enough virtual W boson energies, say for energies for which the Compton length of the massless W boson corresponds to the size scale of the color flux tubes, predicted to be much longer than the nuclear size. If so, the anomaly would be absent at higher energies and a bump would result.

5. The value of K(E) at E = E0 is Vb ≡ p(E1 - E0). The variation of the fraction p could explain the observed dependence of Vb on the experiment as well as its time variation. It is however difficult to understand how p could vary.

2. Fake 3H option

Assume that a fraction p of the tritium nuclei are fake and correspond to 3He nuclei with one negatively charged color bond.

1. By repeating the previous calculation one obtains exactly the same expression for K(E) in the approximation m(ν) = 0, but with the replacement

ΔM = M(3He) - M(3Hef) → M(3Hf) - M(3H).

2. In this case it is possible to understand the variations in the shape of K(E) if the fraction of 3Hf varies in time and from experiment to experiment. A possible mechanism inducing this variation is a transition inducing the transformation 3Hf → 3H by an exotic weak decay d + p → u + n, where u and d correspond to the quarks at the ends of the color flux tubes. This kind of transition could be induced by the absorption of X-rays, say artificial X-rays or X-rays from the Sun. The inverse of this process in the Sun could generate X-rays which induce this process in a resonant manner at the surface of Earth.

3. The well-known, poorly understood X-ray bursts from the Sun during solar flares in the wavelength range 1-8 Å correspond to energies in the range 1.6-12.4 keV, 3 octaves in good approximation. This radiation could be partly due to transitions between ordinary and exotic states of nuclei rather than bremsstrahlung resulting from the acceleration of charged particles to relativistic energies.
The energy range suggests the presence of three p-adic length scales: the nuclear string model indeed predicts several p-adic length scales for color bonds, corresponding to different mass scales for the quarks at the ends of the bonds. This energy range is considerably above the energy range 5-100 eV and suggests the range [4×10^-4, 6×10^-2] for the values of p. The existence of these excitations would mean a new branch of low energy nuclear physics, which might be dubbed X-ray nuclear physics.

4. The approximately half-year period of the temporal variation would naturally correspond to the 1/R^2 dependence of the intensity of the X-ray radiation from the Sun. There is evidence that the period is a few hours longer than half a year, which supports the view that the origin of the periodicity is not purely geometric but relates to the dynamics of the X-ray radiation from the Sun. Note that for 2 hours one would have ΔT/T ≈ 2^-11, which defines a fundamental constant in the TGD Universe and is also near to the electron-proton mass ratio.

5. All nuclei could appear as similar anomalous variants. Since both weak and strong decay rates are sensitive to the binding energy, it is possible to test this prediction by finding whether nuclear decay rates show anomalous time variation.

6. The model could explain also other anomalies of radioactive reaction rates, including the findings of Shnoll [1] and the unexplained fluctuations in the decay rates of 32Si and 226Ra reported quite recently and correlating with 1/R^2, R being the distance between Earth and Sun. 226Ra decays by alpha emission, but the sensitive dependence of the alpha decay rate on the binding energy means that the temporal variation of the fraction of fake 226Ra isotopes could explain the variation of the decay rates. The intensity of the X-ray radiation from the Sun is proportional to 1/R^2, so that the correlation of the fluctuation with distance would emerge naturally.

7.
Also a dip in the decay rates of 54Mn coincident with a peak in proton and X-ray fluxes during a solar flare has been observed: the proposal is that the neutrino flux from the Sun is also enhanced during the solar flare and induces the effect. A peak in X-ray flux is a more natural explanation in the TGD framework.

8. The model predicts an interaction between atomic physics and nuclear physics, which might be of relevance in biology. For instance, the transitions between exotic and ordinary variants of nuclei could yield X-rays inducing atomic transitions or ionization. The wavelength range 1-8 Å for anomalous X-rays corresponds to the range Z ∈ [11, 30] for ionization energies. The biologically important ions Na+, Mg++, P-, Cl-, K+, Ca++ have Z = (11, 12, 15, 17, 19, 20). I have proposed that Na+, Cl-, K+ (fermions) are actually bosonic exotic ions forming Bose-Einstein condensates at magnetic flux tubes (see this). The exchange of W bosons between neutral Ne and A(rgon) atoms (bosons) could yield exotic bosonic variants of Na+ (perhaps even Mg++, which is a boson also as an ordinary ion) and Cl- ions. A similar exchange between A atoms could yield exotic bosonic variants of Cl- and K+ (and even Ca++, which is a boson also as an ordinary variant). This transformation might relate to the paradoxical finding that noble gases can act as narcotics. This hypothesis is testable by measuring the nuclear weights of these ions. X-rays from the Sun are not present during night time, and this could relate to the day-night cycle of living organisms. Note that the magnetic bodies are of the size scale of Earth and even larger, so that the exotic ions inside them could be subject to intense X-ray radiation. X-rays could also be dark X-rays with a large Planck constant, and thus with a much lower frequency than ordinary X-rays, so that control could be possible.

[2] V. M. Lobashev et al. (1996), in Neutrino 96 (eds. K. Enqvist, K. Huitu, J. Maalampi), World Scientific, Singapore.
[3] Ch. Weinheimer et al. (1993), Phys. Lett. 300B, 210.
[4] J. I. Collar (1996), Endpoint Structure in Beta Decay from Coherent Weak-Interaction of the Neutrino, hep-ph/9611420.
[5] G. J. Stephenson Jr. (1993), in Perspectives in Neutrinos, Atomic Physics and Gravitation, eds. J. T. Thanh Van, T. Damour, E. Hinds and J. Wilkerson (Editions Frontieres, Gif-sur-Yvette), p. 31.

For more details see the chapters TGD and Nuclear Physics and Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Monday, September 15, 2008

Zero energy ontology, self hierarchy, and the notion of time

In the previous posting I discussed the most recent view about zero energy ontology and the p-adicization program. One manner to test the internal consistency of this framework is to formulate the basic notions and problems of the TGD inspired quantum theory of consciousness and quantum biology in terms of zero energy ontology. I have discussed these topics already earlier, but the more detailed understanding of the role of causal diamonds (CDs) brings many new aspects to the discussion. In consciousness theory the basic challenges are to understand the asymmetry between positive and negative energies and between the two directions of geometric time at the level of conscious experience, the correspondence between experienced and geometric time, and the emergence of the arrow of time. One should also explain why human sensory experience is about a rather narrow time interval of about 0.1 seconds, and why memories are about the interior of a much larger CD with a time scale of the order of a lifetime. One should also have a vision about how the evolution of consciousness takes place: how quantum leaps leading to an expansion of consciousness occur.
Negative energy signals to the geometric past (of which phase conjugate laser light represents an example) provide an attractive tool to realize intentional action as a signal inducing neural activities in the geometric past (this would explain Libet's classical findings), a mechanism of remote metabolism, and the mechanism of declarative memory as communications with the geometric past. One should understand how these signals are realized in zero energy ontology and why their occurrence is so rare. In the following my intention is to demonstrate that the TGD inspired theory of consciousness and quantum TGD proper indeed seem to be in tune, and that this process of comparison helps considerably in the attempt to develop the TGD based ontology at the level of details.

1. Causal diamonds as correlates for selves

Quantum jump as a moment of consciousness, self as a sequence of quantum jumps integrating to a self, and the self hierarchy with sub-selves experienced as mental images are the basic notions of the TGD inspired quantum theory of consciousness. In the most ambitious program the self hierarchy reduces to a fractal hierarchy of quantum jumps within quantum jumps. It is natural to interpret CDs as correlates of selves. CDs can be interpreted in two manners: as subsets of the generalized imbedding space or as sectors of the world of classical worlds (WCW). Accordingly, selves correspond to CDs of the generalized imbedding space or to sectors of WCW, literally separate interacting quantum Universes. The spiritually oriented reader might speak of Gods. Sub-selves correspond geometrically to sub-CDs. The contents of consciousness of a self are about the interior of the corresponding CD at the level of the imbedding space. For sub-selves the wave function for the position of the tip of the CD brings in the delocalization of the sub-WCW.
The fractal hierarchy of CDs within CDs defines the counterpart for the hierarchy of selves: the quantization of the time scale of planned action and memory as T(k) = 2^k·T0 suggests an interpretation for the fact that we experience octaves as equivalent in music experience.

2. Why is sensory experience about so short a time interval?

The CD picture implies automatically the 4-D character of conscious experience, and memories form part of conscious experience even at the elementary particle level: in fact, the secondary p-adic time scale of the electron is T = 0.1 seconds, defining a fundamental time scale in living matter. The problem is to understand why sensory experience is about a short interval of geometric time rather than about the entire personal CD with a temporal size of the order of a lifetime. The obvious explanation would be that sensory input corresponds to sub-selves (mental images) which correspond to CDs with T(127) ≈ 0.1 s (electrons or their Cooper pairs) at the upper light-like boundary of the CD assignable to the self. This requires a strong asymmetry between the upper and lower light-like boundaries of CDs.

1. The only reasonable manner to explain the situation seems to be that the addition of CDs within CDs in the state construction must always glue them to the upper light-like boundary of the CD along a light-like radial ray from the tip of the past directed light-cone. This conforms with the classical picture according to which classical sensory data arrive from the geometric past with a velocity which is at most the light velocity.

2. One must also explain the rare but real occurrence of phase conjugate signals, understandable as negative energy signals propagating towards the geometric past. The conditions making negative energy signals possible are achieved when the sub-CD is glued to both the past and future directed light-cones at the space-like edge of the CD along light-like rays emerging from the edge.
This exceptional case gives negative energy signals traveling to the geometric past. The above mentioned basic control mechanism of biology would represent a particular instance of this situation. Negative energy signals as a basic mechanism of intentional action would explain why living matter seems to be so special.

3. Geometric memories would correspond to the lower boundaries of CDs and would not in general be sharp, because only the sub-CDs glued to both the upper and lower light-cone boundaries would be present. A temporal sequence of mental images, say the sequence of digits of a phone number, could correspond to a sequence of sub-CDs glued to the upper light-cone boundary.

4. Sharing of mental images corresponds to a fusion of sub-selves/mental images into a single sub-self by quantum entanglement: the space-time correlate for this could be flux tubes connecting the space-time sheets associated with the sub-selves, represented also by space-time sheets inside their CDs. It could be that these "episodal" memories correspond to CDs at the upper light-cone boundary of the CD.

On the basis of these arguments it seems that the basic conceptual framework of the TGD inspired theory of consciousness can be realized in zero energy ontology. Interesting questions relate to how dynamical selves are.

1. Is the self doomed to live inside the same sub-WCW eternally as a lonely god? This question has been already answered: there are interactions between the sub-CDs of a given CD, and one can think of selves as quantum superpositions of states in CDs with a wave function having as its argument the tips of the CD, or rather only the second one, since T is assumed to be quantized.

2. Is there a largest CD in the personal CD hierarchy of the self in an absolute sense? Or is the largest CD present only in the sense that the contribution to the contents of consciousness coming from very large CDs is negligible?
Long time scales T correspond to low frequencies, and thermal noise might indeed mask these contributions very effectively. Here however the hierarchy of Planck constants and the generalization of the imbedding space would come to the rescue by allowing dark EEG photons to have energies above the thermal energy.

3. Can selves evolve in the sense that the size of the CD increases in quantum leaps, so that the corresponding time scale T = 2^k·T0 of memory and planned action increases? Geometrically this kind of leap would mean that the CD becomes a sub-CD of a larger CD, either at the level of conscious experience or in an absolute sense. This leap can occur in two senses: as an increase of the largest p-adic time scale in the personal hierarchy of space-time sheets, or as an increase of the largest value of Planck constant in the personal dark matter hierarchy. At the level of the individual this would mean the emergence of increasingly low frequencies in the generalization of EEG and of levels of the dark matter hierarchy with a large value of Planck constant.

4. In the 2-D illustration the leap leading to a higher level of the self hierarchy would mean simply the continuation of the CD to the right or left in the 2-D visualization of the CD. Since the preferred M^2 is contained in the tangent space of space-time surfaces, and since the preferred M^2 plays a key role in the dark matter hierarchy too, one must ask whether the 2-D illustration might have some deeper truth in it.

3. New view about the arrow of time

Perhaps the most fundamental problem related to the notion of time concerns the relationship between experienced time and geometric time. The two notions are definitely different: think only of the irreversibility of experienced time versus the reversibility of geometric time, and of the absence of the future in experienced time. Also the deterministic character of the dynamics in geometric time is in conflict with the notion of free will supported by direct experience.
In the standard materialistic ontology experienced time and geometric time are identified. In the naivest picture the flow of time is interpreted in terms of the motion of a 3-D time = constant surface of space-time towards the geometric future, without any explanation for why this kind of motion would occur. This identification is plagued by several difficulties. In special relativity the difficulties relate to the impossibility of defining the notion of simultaneity in a unique manner, and the only possible manner to save this notion seems to be the replacement of the time = constant 3-surface with the past directed light-cone assignable to the world-line of the observer. In general relativity additional difficulties are caused by general coordinate invariance unless one generalizes the picture of special relativity: problems are however caused by the fact that past light-cones make sense only locally. In quantum physics quantum measurement theory leads to a paradoxical situation, since the observed localization of the state function reduction to a finite space-time volume is in conflict with the determinism of the Schrödinger equation.

TGD forces a new view about the relationship between experienced and geometric time. Although the basic paradox of quantum measurement theory disappears, the question about the arrow of geometric time remains.

1. Selves correspond to CDs, that is, to their own sub-WCWs. These sub-WCWs and their projections to the imbedding space do not move anywhere. Therefore the standard explanation for the arrow of geometric time cannot work. Neither can the experience about the flow of time correspond to quantum leaps increasing the size of the largest CD contributing to the conscious experience of the self.

2. The only plausible interpretation is based on quantum classical correspondence and the fact that space-times are 4-surfaces of the imbedding space.
If a quantum jump corresponds in the first approximation to a shift of the quantum superposition of space-time sheets towards the geometric past (as quantum classical correspondence suggests), one can indeed understand the arrow of time. Space-time surfaces simply shift backwards with respect to the geometric time of the imbedding space and therefore with respect to the 8-D perceptive field defined by the CD. This creates in the materialistic mind a kind of temporal variant of the train illusion. Space-time as a 4-surface and macroscopic and macro-temporal quantum coherence are absolutely essential for this interpretation to make sense.

Why should this shifting always take place in the direction of the geometric past of the imbedding space? What seems clear is that the asymmetric construction of zero energy states should correlate with the preferred direction. If the question is about probabilities, the basic question would be why the probabilities for shifts in the direction of the geometric past are higher. Here some alternative attempts to answer this question are discussed.

1. Cognition and time relate to each other very closely, and the required fusion of real physics with the various p-adic physics of cognition and intentionality could also have something to do with the asymmetry. Indeed, in the p-adic sectors the transcendental values of the p-adic light-cone proper time coordinate correspond to literally infinite values of the real-valued light-cone proper time, and one can say that most points of the p-adic space-time sheets serving as correlates of thoughts and intentions always reside in the infinite geometric future in the real sense. Therefore cognition and intentionality would break the symmetry between positive and negative energies and between the geometric past and future, and the breaking of the arrow of geometric time could be seen as being induced by intentional action and also as due to the basic aspects of cognitive experience.

2. Zero energy ontology also suggests a possible reason for the asymmetry.
Standard quantum mechanics encourages the identification of the space of negative energy states as the dual of the space of positive energy states. There are two kinds of duals. The Hilbert space dual is identified as the space of continuous linear functionals from the Hilbert space to the coefficient field and is isometrically anti-isomorphic with the Hilbert space itself. This justifies the bra-ket notation. In the case of a vector space the relevant notion is the algebraic dual. The algebraic dual can be identified as an infinite direct product of copies of the coefficient field, each identified as a 1-dimensional vector space. The direct product is defined as the set of functions from an infinite index set I to the disjoint union of an infinite number of copies of the coefficient field indexed by I. The infinite-dimensional vector space itself corresponds to the infinite direct sum, consisting of those functions which are non-vanishing for a finite number of indices only. Hence the vector space dual in the infinite-dimensional case contains many more states than the vector space and does not have an enumerable basis.

If negative energy states correspond to a subspace of the vector space dual containing the Hilbert space dual, the number of negative energy states is larger than the number of positive energy states. This asymmetry could correspond to a better measurement resolution at the upper light-cone boundary, so that the state space at the lower light-cone boundary would be included, via an inclusion of HFFs, in that associated with the upper light-cone boundary. Geometrically this would mean the possibility to glue to the upper light-cone boundary CDs which can be smaller than those associated with the lower one.

3. The most convincing candidate for an answer comes from consciousness theory. One must also understand why the content of sensory experience is concentrated around a narrow time interval whereas the time scales of memory and anticipation are much longer.
The proposed mechanism is that the resolution of conscious experience is higher at the upper boundary of the CD. Since zero energy states correspond to light-like 3-surfaces, this could be a result of self-organization rather than a fundamental physical law.

1. The key assumption is that CDs contain CDs within CDs and that the vertices of generalized Feynman diagrams are contained within sub-CDs. It is not assumed that CDs are glued to the upper boundary of the CD, since the arrow of time results from self-organization when the distribution of sub-CDs concentrates around the upper boundary of the CD. A category theoretical formulation for generalized Feynman diagrammatics based on this picture has been developed.

2. CDs define the perceptive field of the self. Selves are curious about the space-time sheets outside their perceptive field in the geometric future (a relative notion) of the imbedding space and perform quantum jumps tending to shift the superposition of space-time sheets in the direction of the geometric past (the past being defined as the direction of the shift!). This creates the illusion that there is a time = snapshot front of consciousness moving towards the geometric future in a fixed background space-time, as an analog of the train illusion.

3. The fact that news comes from the upper boundary of the CD implies that the self concentrates its attention on this region and improves the resolution of sensory experience and quantum measurement there. The sub-CDs generated in this manner correspond to mental images with contents about this region. As a consequence, the contents of conscious experience, in particular sensory experience, tend to be about the region near the upper boundary.

4. This mechanism in principle allows the arrow of geometric time to vary and to depend on the p-adic length scale and the level of the dark matter hierarchy.
The occurrence of phase transitions forcing the arrow of geometric time to be the same everywhere is however plausible, for the reason that the lower and upper boundaries of a given CD must possess the same arrow of geometric time.

Sunday, September 14, 2008

The most recent vision about zero energy ontology and p-adicization

1. Zero energy ontology briefly
Consider now the critical questions.
2. Definition of energy in zero energy ontology
3. p-Adic variants of the imbedding space
4. p-Adic variants for the sectors of WCW

Wednesday, September 03, 2008

Dark nuclear strings as analogs of DNA-, RNA- and amino-acid sequences and baryonic realization of genetic code

In an earlier posting I considered the possibility that the evolution of the genome might not be random but be controlled by the magnetic body, and that various DNA sequences might be tested in the virtual world made possible by the virtual counterparts of bio-molecules realized in terms of the homeopathic mechanism as it is understood in the TGD framework. The minimal option is that virtual DNA sequences have flux tube connections to the lipids of the cell membrane so that their quality as hardware of tqc can be tested, but that there is no virtual variant of the transcription and translation machinery. One can however ask whether virtual amino-acids could also be present and whether this could provide deeper insights into the genetic code.

1. Water molecule clusters are not the only candidates for the representatives of linear molecules. An alternative candidate for the virtual variants of linear bio-molecules are dark nuclei consisting of strings of scaled-up dark variants of neutral baryons bound together by color bonds having the size scale of an atom, which I have introduced in the model of cold fusion and plasma electrolysis, both taking place in a water environment. Colored flux tubes defining braidings would generalize this picture by allowing transversal color magnetic flux tube connections between these strings.

2.
Baryons consist of 3 quarks just as DNA codons consist of three nucleotides. Hence an attractive idea is that codons correspond to baryons obtained as open strings with quarks connected by two color flux tubes. The minimal option is that the flux tubes are neutral. One can also argue that the minimization of Coulomb energy allows only neutral dark baryons. The question is whether the neutral dark baryons constructed as strings of 3 quarks using neutral color flux tubes could realize the 64 codons, and whether the 20 amino-acids could be identified as equivalence classes of some equivalence relation between the 64 fundamental codons in a natural manner. The following model indeed reproduces the genetic code directly from a model of dark neutral baryons as strings of 3 quarks connected by color flux tubes.

1. Dark nuclear baryons are considered as a fundamental realization of DNA codons and constructed as open strings of 3 dark quarks connected by two colored neutral flux tubes. DNA sequences would in turn correspond to sequences of dark baryons. It is assumed that the net charge of the dark baryons vanishes so that Coulomb repulsion is minimized.

2. One can classify the states of the open 3-quark string by the total charges and spins associated with the 3 quarks and with the two color bonds. The total em charge of the quarks varies in the range Z_B ∈ {2, 1, 0, -1} and the total color bond charge in the range Z_b ∈ {2, 1, 0, -1, -2}. Only neutral states are allowed. The total quark spin projection varies in the range J_B = 3/2, 1/2, -1/2, -3/2 and the total flux tube spin projection in the range J_b = 2, 1, 0, -1, -2. If one takes, for a given total charge assumed to be vanishing, one representative from each class (J_B, J_b), one obtains 4 × 5 = 20 states, which is the number of amino-acids. Thus the genetic code might be realized at the level of baryons by mapping the neutral states with a given spin projection to a single representative state with the same spin projection.

3.
The states of dark baryons in quark degrees of freedom can be constructed as representations of the rotation group and the strong isospin group. The tensor product 2⊗2⊗2 is involved in both cases. Physically it is known that only the representations with isospin 3/2 and spin 3/2 (Δ resonance) and isospin 1/2 and spin 1/2 (proton and neutron) are realized. The spin-statistics problem forced the introduction of quark color (this means that one cannot construct the codons as sequences of 3 nucleons!).

4. The second nucleon spin doublet has wrong parity. Using only 4⊕2 for the rotation group would give the degeneracies (1,2,2,1). One however requires the representations 4⊕2⊕2 rather than only 4⊕2 to get 8 states with a given charge. One should somehow transform the wrong-parity doublet into a positive-parity doublet. Since the open string geometry breaks rotational symmetry down to the subgroup of rotations acting along the direction of the string, the attractive possibility is to add a stringy excitation with angular momentum projection L = -1 to the wrong-parity doublet so that the parity comes out correctly. This would give the degeneracies (1,2,3,2).

5. In flux tube degrees of freedom the situation is analogous to the construction of mesons from quarks and antiquarks, where one obtains the pion with spin 0 and the ρ meson with spin 1. States of zero charge correspond to the tensor product 2⊗2 = 3⊕1 for the rotation group. Drop the singlet and take only the analog of the neutral ρ meson. The tensor product 3⊗3 = 5⊕3⊕1 gives 8+1 states, and keeping only the spin 2 and spin 1 states gives 8 states. The degeneracies of states with a given spin projection for 5⊕3 are (1,2,2,2,1). The genetic code means a projection of the states of 5⊕3 to those of 5 with the same spin projection.

6. The genetic code maps the states of (4⊕2⊕2)⊗(5⊕3) to the states of 4×5. The most natural map takes the states with a given spin to the state with the same spin, so that the code is unique. This would give the degeneracies D(k) as products of the numbers D_B ∈ {1,2,3,2} and D_b ∈ {1,2,2,2,1}.
The numbers N(k) of amino-acids coded by D(k) codons would be

[N(1), N(2), N(3), N(4), N(6)] = [2, 7, 2, 6, 3].

The correct numbers for the vertebrate nuclear code are (N(1), N(2), N(3), N(4), N(6)) = (2, 9, 1, 5, 3). Some kind of symmetry breaking must take place and should relate to the emergence of stopping codons. If one codon in the second 3-plet becomes a stopping codon, the 3-plet becomes a doublet. If 2 codons in a 4-plet become stopping codons, it also becomes a doublet, and one obtains the correct result (2, 9, 1, 5, 3)!

The conclusion is that the genetic code can be understood as a map of stringy baryonic states induced by the projection of all states with the same spin projection to a representative state with the same spin projection. The genetic code would be realized at the level of dark nuclear physics and perhaps also at the level of ordinary nuclear physics, and the biochemical representation would be only one particular higher-level representation of the code.

For details see the chapters Homeopathy in Many-Sheeted Space-time of "Bio-Systems as Conscious Holograms" and The Notion of Wave-Genome and DNA as Topological Quantum Computer of "Genes and Memes".
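The codon-degeneracy bookkeeping above is simple enough to verify mechanically. The following short Python sketch (variable names are mine, not from the post) multiplies the two degeneracy lists and tallies how many amino-acids are coded by k codons, before and after the stopping-codon symmetry breaking:

```python
from collections import Counter

# Spin-projection degeneracies from the model above:
D_B = [1, 2, 3, 2]       # quark spin projections J_B = 3/2, 1/2, -1/2, -3/2 of 4+2+2
D_b = [1, 2, 2, 2, 1]    # flux-tube projections J_b = 2, 1, 0, -1, -2 of 5+3

# Each of the 4 x 5 = 20 amino-acid classes (J_B, J_b) is coded by
# D = D_B(J_B) * D_b(J_b) codons.
degeneracies = [b * f for b in D_B for f in D_b]
assert len(degeneracies) == 20   # 20 amino-acids
assert sum(degeneracies) == 64   # 64 codons in total

# N[k] = number of amino-acids coded by exactly k codons.
N = Counter(degeneracies)
print(dict(sorted(N.items())))   # {1: 2, 2: 7, 3: 2, 4: 6, 6: 3}

# Symmetry breaking by stopping codons: one 3-plet becomes a doublet
# (1 stop codon) and one 4-plet becomes a doublet (2 stop codons).
N[3] -= 1
N[4] -= 1
N[2] += 2
print(dict(sorted(N.items())))   # {1: 2, 2: 9, 3: 1, 4: 5, 6: 3}
```

The two tallies reproduce the numbers [2, 7, 2, 6, 3] and (2, 9, 1, 5, 3) quoted in the text, with the three stopping codons accounting for the difference (64 = 61 + 3).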
Foresight Update 53, page 3
A publication of the Foresight Institute

11th Foresight Conference
Progress with molecular devices, machinery, and building blocks

Continuing a tradition now 14 years old, several hundred researchers from more than a dozen countries converged on the 11th Foresight Conference on Molecular Nanotechnology, held October 10 to 12, 2003 at the San Francisco Airport Marriott in Burlingame, California, to survey progress in a variety of nanoscale sciences and technologies, and (in some cases) to ponder whether or not this progress was leading toward an ability to engineer molecular machine systems. This year's Conference was co-chaired by James T. Spencer, Department of Chemistry, Syracuse University, and Chris Gorman, Department of Chemistry, North Carolina State University. The Conference included 33 oral presentations and a poster session with 48 additional presentations—far too many to summarize here.

Tutorial on Molecular Nanotechnology precedes Conference

On Oct. 9, a Tutorial on Molecular Nanotechnology, chaired by Hicham Fenniri of the National Institute for Nanotechnology, National Research Council and the University of Alberta, afforded attendees overviews of key areas of nanoscale science and technology, with a particular focus on recent research in self-assembly of nanostructured materials and applications to catalysis, energy, environmental remediation, and molecular electronics.

Mark S. Lundstrom, Purdue University, provided "A Top-Down Look at Bottom Up Electronics," taking an in-depth look at CMOS electronic circuit technology and the limitations it will soon encounter, particularly heat dissipation as the ultimate limit on the density of devices. Molecular electronics does not offer a way to extend this heat dissipation limit, and so will complement rather than replace CMOS.
Lundstrom nevertheless looks for progress in circuit and system design to advance from the current billion-transistor chips to trillion-transistor chips (terascale integration).

Susannah Scott, University of California at Santa Barbara, covered "Nanostructured Catalysts," in which the use of various nanoparticles as catalysts brings both enormous economic impact and enormous environmental benefit by selectively speeding up specific chemical transformations. New methods of more precisely controlling nanoparticle structures, and making more complex nanoparticles, are leading to more effective catalysts.

Thomas E. Mallouk, The Pennsylvania State University, considered "Implications of Nanotechnology for Energy and Environmental Remediation," emphasizing the "massive" need for an inexpensive source of energy that does not increase carbon emissions and the role of nanotechnology in making solar power affordable through the use of self-assembly techniques to fabricate nanomaterials with unique physical properties. In particular, nanoparticle structures (nanowires, core-shell particles) may lead to efficient multi-bandgap and polymer junction devices, and quantum dot structures could be important for more advanced devices. In terms of remediation, zero-valent iron nanoparticles hold promise for removing chlorinated organics and metal ions from water and soil.

Steven C. Zimmerman of the University of Illinois at Urbana-Champaign presented "Self-assembly Approaches to Nanoscale Materials," covering the gamut from chemical synthesis of covalent nanostructures (adamantane diamondoids, conjugated organics for nanowires, dendrimers, carbon nanotubes) to various self-assembly regimes (crystal engineering, supramolecular systems—including DNA base-pairing, metal coordination, mechanical bonding, hydrophobic assembly, and the role of molecular chaperones in reverting errors of assembly).
Fraser Stoddart of the University of California at Los Angeles, in a talk titled "An Integrated Systems-Oriented Approach to Molecular Electronics," discussed the transfer of concepts like molecular recognition and self-assembly from the life sciences into materials science in order to control and harness the molecular motions in switchable molecules like rotaxanes and catenanes to produce solid state devices. Molecular switches can be built around mechanical movement in bistable rotaxane molecules, in which an applied voltage will cause one section of the molecule to move, serving as an on-off switch. Originally designed to work in solution, these switches also work in Langmuir monolayers and Langmuir-Blodgett films, and in self-assembled monolayers on gold surfaces. A 64-bit molecular RAM based on amphiphilic bistable [2]rotaxanes was demonstrated. Current efforts span further device design and synthesis, and computer architecture development for using molecular electronic circuits.

Molecular devices and molecular machinery

Fraser Stoddart opened the Conference with a keynote address titled "Meccano on the NanoScale: A Blueprint for Making Some of the World's Tiniest Machines," in which he proposed that the path to developing molecular machinery lies in using the same principles (self-assembly, self-organization, structure-activity relationships) that Nature uses for its molecular machinery, but without using the same chemical building blocks (nucleic acids and amino acids) that Nature uses. One theme Stoddart explored is rotary motion in biological and artificial molecular machines. The natural molecular machine is the ATPase rotary motor, in which the passage of protons through a membrane causes a shaft to spin, resulting in the synthesis of ATP.
For an artificial molecular-level device, defined as an assembly of a distinct number of molecular components designed to perform a specific function, Stoddart turned to catenane molecules, with their mechanical bonds formed by interlocking rings. The same principles of molecular recognition and self-assembly used by biology were adapted to the development of supramolecular chemistry, in which non-covalent interactions are used to guide the assembly of molecular entities held together by covalent and mechanical bonds. In the case of catenanes, non-covalent interactions guide the threading of a linear molecule through a circular molecule and pre-organize the components for ring-closing by covalent bond formation, thus forming interlocked rings, now held together by a mechanical bond. These methods have been pushed as far as making catenanes with seven interlocking rings.

The interlocked rings of catenanes exhibit a back-and-forth rocking motion, a reversible threading of one ring through the cavity of the other ring, controlled by charge changes—oxidation and subsequent reduction. Such mechanical catenane-based switches are only about one cubic nanometer in volume. To make devices, the catenane was anchored with phospholipid counterions into a Langmuir-Blodgett film and sandwiched between two electrodes. A crossbar junction can be made with 5000 catenane switch molecules. A variant device currently being investigated has the catenane molecules stacked along a single-wall carbon nanotube.

A second theme is linear motion in molecular machinery—the biological example is the muscle proteins actin and myosin, and the artificial example is rotaxane, interlocked molecules in which a ring-shaped component is threaded on a dumbbell-shaped component and mechanically trapped by the bulky ends of the dumbbell-shaped component.
If the middle of the dumbbell-shaped component has two different sites to which the ring can bind, then a molecular shuttle can be formed in which the ring shuttles back and forth between the two sites, thus forming another type of switch. By making one end of the dumbbell hydrophobic and one end hydrophilic, the molecular shuttle could be incorporated into a Langmuir-Blodgett film and sandwiched between electrodes, forming an 8x8 crossbar to constitute a 64-bit molecular RAM from about 5000 molecules (actually only 56 bits worked when tested).

Moving from molecular devices based on mechanically linked rings to devices based on carbon nanotubes, Mark Lundstrom presented "Carbon Nanotube Electronics: Device Physics, Technology & Applications," focusing on computational methods to understand device physics, to optimize transistor designs, to assess ultimate performance limits of such devices, and to identify which applications would be most appropriate. Carbon nanotube transistors (CNTFETs) are particularly promising subjects for detailed theoretical studies because they can be either metallic or semiconducting (depending on how the graphene sheet is rolled up to make the tube), near-ballistic electron transport is possible, there are no dangling bonds to complicate depositing additional atomic layers, and electronic and optical components can be on the same substrate.

Lundstrom's computational approach used an atomically detailed representation of the nanotube and a quantum mechanical treatment using the Schrödinger equation and Green's function. Results included the fact that the one-dimensional geometry of the nanotube produces electrostatics very different from those of conventional silicon transistors, making the details of the metal contacts with the nanotube very important. Most results so far show carbon nanotube transistors behaving like Schottky barrier transistors, which would be difficult to use in CMOS circuits.
The carbon nanotube diameter is a critical parameter in this behavior. Although CNTFETs might have limited use in CMOS, Lundstrom suggested that use in MOSFETs might be more profitable. At the very least, CNTFETs are a great model for understanding transistors in atomic detail.

Hicham Fenniri turned attention to a different type of nanotube, rosette nanotubes inspired by the DNA double helix, that can be engineered to have a wide variety of structures and properties: "Organic Nanotubes with Tunable Dimensions and Properties". These structures are based on modules patterned after the guanosine-cytosine base pair found in DNA, hierarchically self-assembled and self-organized, guided by hydrogen bonds and hydrophobic interactions. The rosette nanotubes can be designed to have different diameters, lengths, and physical, electronic, and optical properties. Fenniri noted that the pattern of hierarchical self-assembly and self-organization to go from simple to complex structure is borrowed from Nature, which uses similar principles to form entire chromosomes, with a diameter of 1400 nm, from nucleosomes 30 nm in diameter, which are formed from proteins and the DNA molecule, which has a diameter of 2 nm. The basic rosette nanotube structure can be designed to have a diameter of 3-4 nm, and a length of 20-200 nm, or, in some cases, of millimeters. Despite being held together by non-covalent interactions, the structures are stable enough that they do not fall apart when scanned with an atomic force microscope. The chemistry of the rosette nanotubes allows for synthesis in kilogram quantities, and for a wide range of chemical substituents, providing different properties. For example, the nanotubes can be metallized by coating with, for example, gold or titanium atoms. The titanium-coated nanotubes adhered well to human osteoblast cells, indicating they can also be biocompatible.

Seth R.
Marder of the Georgia Institute of Technology explained how "Two-Photon Materials Chemistry" provides a new way to integrate nanostructures with MEMS by providing a way to fabricate three-dimensional structures with ~200 nm feature sizes. The process uses focused femtosecond laser beams to take advantage of very weak processes in which a molecule is excited by absorbing two photons simultaneously. Since excitation, and thus chemical change, occurs only in the very small volume element where the two lasers both focus, patterns can be produced in materials with pinpoint control in three dimensions. This system of two-photon 3D lithography has been demonstrated in both polymers and metals. The smallest volume element addressed so far was 170 nm² by 500 nm, compared with a minimum feature size of about 50 microns for commercial stereolithography.

Tobin Marks of Northwestern University opened the second day of the Conference with a keynote address titled "Self-Assembly of Nanophotonic Materials and Device Structures," addressing the question of how to assemble molecules in a precise organization to perform precise functions. Marks reported the ability to make pinhole-free self-assembling superlattices suitable for electro-optical applications (non-centrosymmetric chromophores). The monolayers were furthermore robust, withstanding heating to several hundred degrees. These molecules seem applicable to nanoscale OLEDs (organic light-emitting diodes)—they can be made as small as 40 nm.

Looking to biology as a toolbox for building molecular machines, Jacob Schmidt of UCLA ("Development of Biomimetic Devices using Membrane Proteins") asked how components adapted from biological systems could be incorporated into artificial scaffolds.
Many of the most interesting proteins, such as the F1-ATPase rotary molecular motor, porin proteins that open and close in response to electrical signals to allow certain molecules or ions to pass, and proteins that sense mechanical force (natural piezoelectric devices), function naturally inserted in lipid bilayer membranes. However, proteins in lipid membranes have a short functional life (a few days), so biocompatible polymers were investigated that would mimic the environment of the membrane while allowing longer lifetimes, in the hope of developing a generic platform for engineering membrane proteins. Schmidt reported work with an engineered voltage-gatable pore protein inserted into monolayer planar membranes of self-assembled amphiphilic block copolymers. The hope is to develop membranes suitable for use in micro-fluidic chips, in which engineered porin proteins could be electrically addressed to serve as valves.

Donald A. Tomalia, Dendritic NanoTechnologies, Inc. and Central Michigan University, "Synthetic Control of Dendritic Nanostructures Both Within and Beyond Poly(amidoamine) Dendrimers," explored the possibility of developing chemical synthetic strategies that parallel the strategies evolved over the past 3-4 billion years for controlling biological nanostructures as a function of size, shape, and the placement of chemical groups in specific regions. Dendrimers that chemists have synthesized, in which successive shells of dendron groups are added to make tree-like structures surrounding a central core, are similar to globular proteins in several ways. They have precisely controlled masses and definite three-dimensional structures. Progress was reported in making dendrimers further mimic the complexity of biological structural hierarchies—designing structures that are not spherically symmetric (that is, more like the cusps and points of globular proteins) and that self-assemble into larger structures.
For example, dendron spheroids of different sizes or different chemical functionalities can be joined together by disulfide bonds. The assembly of components into predetermined structures can be accomplished by the binding of complementary DNA strands attached to the components. Other variations can be introduced by only partially filling the shells of the dendrimer structure, thus producing empty spots that can be reactive.

Gavin D. Meredith of the University of California at Irvine, "Novel chemical strategy to link protein to DNA for directed molecular assembly," has the goal of designing and manufacturing molecular machine systems by assembling some of the tens of thousands of different molecular machines (proteins) that biology has evolved and provided as "off-the-shelf" components for nanotechnology. He looks at the possibility of using complementary DNA strands attached to proteins to link together protein molecules in a predetermined manner, in particular to obtain precise orientation of proteins on a surface. It is important to attach the DNA to a specific part of the protein that does not interfere with the function of the protein. For this purpose, the DNA is linked to the protein via a 3-part molecule, nitrilo acetate-benzophenone-maleimide, which reacts with a series of 6 histidine residues engineered into the desired position of the protein sequence. This process can be used to convert a DNA chip, in which a specific array of DNA strands is attached to a substrate, into a protein chip, in which an array of protein molecules is attached to the chip in a predetermined arrangement.

Stefan Diez of the Max-Planck-Institute of Molecular Cell Biology and Genetics reported using molecular motor proteins to move and stretch DNA molecules on a surface: "Manipulating DNA Molecules in Synthetic Environments by Motor Proteins and Microtubules."
A number of methods exist for manipulating DNA molecules on a surface, but one advantage of using molecular motor proteins is parallelization—manipulating many DNA molecules concurrently. The molecular motor protein kinesin takes 100 steps per second of 8 nm each, fueled by one molecule of ATP for each step, along a track formed by microtubules, 25 nm-diameter hollow cylinders formed from two different types of protein subunits. Kinesin molecules can be attached to a substrate in specific patterns and microtubules moved along the pattern by controlled addition of ATP. DNA molecules attached to the microtubules can thus be moved and stretched, forming networks that could be coated with metal atoms to form circuits.

The Sunday morning session began with keynote presentations by the winners of the 2003 Foresight Institute Feynman Prizes. Marvin L. Cohen and Steven G. Louie of the University of California at Berkeley received the theoretical prize for their contributions to the understanding of the behavior of materials. They spoke about the theory and computation of properties of nanotubes (carbon nanotubes, or those made from boron and nitrogen), using their plane-wave pseudopotential method, now recognized as the standard model of solids. In this model, inner-shell electrons are treated with the nuclei of the atoms, and the valence electrons are allowed to move and interact with light. The model correctly predicts various properties of the nanotubes; for example, with BN nanotubes the properties do not depend on how the tube is rolled up but instead these nanotubes are always semiconductors, unlike the case with carbon nanotubes, which can be either semiconducting or metallic, depending on the detailed structure (chirality) of the tube. Another result is that electrical transport in (n,n) metallic carbon nanotubes is very robust against impurities and local defects.
Current work includes studies of friction and how mechanical energy is dissipated at the nanoscale, in order to determine whether nanomachines could operate efficiently.

The prize for experimental work was awarded to Carlo Montemagno of the University of California, Los Angeles for his pioneering research into methods of integrating single-molecule biological motors with nano-scale silicon devices. He spoke on engineering and embedding intelligence into materials and devices using integrative technology. In living systems, higher-order functionality observable at a higher level emerges from stochastic, non-linear interactions at a lower level. Montemagno's current work follows a strategy he calls integrative technology, in which nanotechnology is integrated with biotechnology (as the source of blueprints for molecular machinery) and informatics (which deals with the ways in which information flows). For example, a salient feature of living cells is how they are compartmentalized by lipid membranes that control the flow of materials and information. To adapt this strategy to nanotechnology to make artificial organelles, fragile lipid membranes are replaced by cross-linked polymers embedded with bacterial proteins that harvest light energy to produce ATP molecules. Current research is also attempting to make biorobotic systems based upon the muscle protein actin and using ATP as fuel. Other work is inspired by the calcium and potassium channels in membranes that allow cardiac cells to communicate and the heart to beat, in order to build nanoscale devices that self-excite; and MEMS devices are being moved by cultured cardiac cells.

Results with another molecular motor were presented by Richard T. Pomerantz of the SUNY Health Science Center at Brooklyn: "RNA Polymerase as an Information-Dependent Molecular Motor." This work exploits the fact that RNA polymerase is a powerful motor, exerting 15-20 pN of linear force as it moves along the DNA molecule.
The progress of the RNA polymerase motor can be controlled to an accuracy of 0.34 nm, the length of one base pair, by limiting the supply of one of the four nucleotide triphosphates required to copy a DNA sequence (immobilize the polymerase on a substrate; wash repeatedly with different mixtures of triphosphates according to the sequence of the DNA). The polymerase can be engineered to permit attaching and releasing cargo molecules without interfering with polymerase function. New building blocks and a new approach to engineering molecular structure Introducing a novel type of molecular building block, Luc Jaeger of the University of California at Santa Barbara discussed "How to play LEGO with RNA: design of RNA cellular automata." Jaeger used knowledge of the folding and assembly rules governing the three-dimensional shape of complex natural RNA molecules, such as the 23S rRNA of the bacterial large ribosomal subunit, to generate "tectoRNAs," named after tectonics, the science or art of building or constructing materials. These are self-assembling RNA building blocks that are designed and programmed to generate RNA super-architectures in a highly predictable manner. Jaeger reported RNA motifs that contain a perfect right angle between two helices, which were used to build RNA squares of 9 nm or 13 nm. The hope is that now that the folding rules governing RNA structure are understood, it will be possible to adapt these rules to other folding polymers to give highly predictable structures, and that it will also be possible to do "nanoscale sculpting with RNA." TectoRNAs are especially interesting from the standpoint of molecular manufacturing because they may provide a path to engineering molecules to fold into predictable structures. In 1981, Eric Drexler proposed a path to general molecular manipulation based on engineering proteins to fold predictably. 
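The wash-limited stepping protocol described above for RNA polymerase can be sketched in code. This is an illustrative toy model of my own, not something presented in the talk; the template sequence, function names, and stall logic are hypothetical simplifications:

```python
# Toy model (hypothetical, illustrative only) of information-dependent
# stepping: the polymerase advances along a DNA template only while the
# ribonucleotide needed for the next template base is present in the wash.
TEMPLATE_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}
STEP_NM = 0.34  # advance per base pair, in nanometers

def steps_in_wash(template, supplied_ntps, start=0):
    """Return the new position after one wash: advance from `start`
    while the required NTP is supplied, then stall."""
    pos = start
    while pos < len(template) and TEMPLATE_TO_RNA[template[pos]] in supplied_ntps:
        pos += 1
    return pos

# Copying template ATGC requires U, A, C, G in that order.
pos = steps_in_wash("ATGC", {"U", "A"})       # advances 2 bases, stalls at G
assert pos == 2
pos = steps_in_wash("ATGC", {"C", "G"}, pos)  # the next wash finishes the walk
assert pos == 4
```

By choosing each wash mixture according to the known sequence, the motor's position is dictated base by base, i.e., in controlled increments of 0.34 nm.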
Progress in this direction has been slow, in part because the rules governing protein folding are very complex, in part because proteins are commonly built from 20 different subunits with very different chemical properties. The rules of nucleic acid structure are much simpler, based upon four subunits, and these have been exploited for DNA by Nadrian Seeman and others to form nanoscale structures and devices. However, due to differences in the sugar-phosphate backbone, RNA folding, although based on rules of Watson-Crick base-pairing similar to DNA, leads to a wider variety of natural three-dimensional structures than are found with DNA, more like the huge variety of natural structures found with proteins. Further, some RNA molecules have catalytic activity, as do protein enzymes. Thus tectoRNAs provide a promising addition to the tool kit for molecular nanotechnology. Venture Capital for Nanotechnology Venture Capital panel with (left to right) Alan Marty, Steve Jurvetson, Jim Von Ehr, and Alex Wong. Other features of the Conference included a panel discussion on "Venture Capital for Nanotechnology" chaired by Ed Niehaus and with panelists: Steve Jurvetson of Draper Fisher Jurvetson, Alan Marty of JP Morgan Partners, Alex Wong of Apax Partners, and Jim Von Ehr of Zyvex. One theme that emerged was that panelists had seen many interesting nanotechnology ideas put forward for investment, but very few of these were ready for institutional investment. Instead, most were early-stage proposals more appropriate for government funding. As an entrepreneur who used his own money to found his company, Jim Von Ehr was initially uninterested in government funding, but more recently successfully turned to the NIST/ATP to fund investment in a top-down approach to nanotechnology. 
He especially recommended NIST/ATP because of their focus on the business aspects of the proposal, in contrast to other government agencies that are primarily guided by peer reviews of the science behind the proposal. When asked how best to fund MNT development that might lead to an assembler, Steve Jurvetson responded that venture capital firms are unlikely to make such an investment, and that funding was more likely to come from a government "moon shot" program to develop MNT. Alex Wong noted that many of the nanotechnology business models that he has seen propose to license to someone else a great technology that the entrepreneurs have developed. These are, he said, unattractive models because investors are looking for companies that propose to directly develop products. "We love product companies." Alan Marty seconded the importance of products compared to technology—he does not want to invest in a company that is developing building blocks, but rather in one that is using somebody else's building blocks to take a product to market. Special Thanks to Our Corporate Sponsors Platinum Level Sponsor Gold Level Sponsors Foley & Lardner Howard Rice Working in Nanotechnology Silver Level Sponsors Intel Intel believes in innovation. We're driven by it. We live by it. And it's this principle that led us to create the world's first microprocessor back in 1971. Today, Intel is behind everything from the fastest processor in the world to the cables that power high-speed Internet. We keep innovating because it's in our blood. Because it's part of our heritage. And because the technology we invent today will shape the world's future. 
The Premier Provider of Nanoinformatics to the Nanotechnology Industry nanoTITAN is the premier provider and developer of state-of-the-art design, database and assembly software, and custom modeling, visualization, simulation, and analysis services for research and development engineers, and scientists involved in molecular nanotechnology. nanoTITAN is the developer of nanoML and nanoXplorer, which can be accessed by visiting their website, www.nanotitan.com. Media Sponsor NanoSIG promotes the commercialization of nanotechnology by creating nano business networks and providing information, infrastructure, people, & funding services required to launch nano business ventures. In Silicon Valley, DC, and Southern California NanoSIG hosts forums & conferences, conducts surveys, facilitates networking, and works on information and business clusters. From Foresight Update 53, originally published 15 January 2004.
Saturday, June 10, 2017 Turok's bogus criticism of Hartle-Hawking, Vilenkin calculable big bangs In his blog post You can't smooth the big bang, Tetragraviton mentions a string group meeting at the Perimeter Institute where an anti-string pundit – who also happens to be the current director of the Perimeter Institute – led the debate about "why the Hartle-Hawking and Vilenkin pictures of the big bang are equivalent and wrong". The discussion revolved around their five-week-old preprint No smooth beginning for spacetime. When Feldbrugge, Lehners, and Turok released that paper, I saw the title and it looked fine and unsurprising (some quantities grow big near the Big Bang and the initial singularity in the Lorentzian causal diagram is basically unavoidable). Well, I surely wasn't aware of the fact that they claim to find a general problem with the Hartle-Hawking or Vilenkin approach to the wave function of the Universe, i.e. the initial conditions. OK, so Mr Director wasn't satisfied with giving nonsensical negative monologues about the inflationary cosmology and string theory. He has added the Hartle-Hawking paradigm, too. And Tetragraviton seems to be an obedient, 100% corrupt employee of Mr Neil Turok's, so he presented his rant totally uncritically. OK, Vilenkin proposed that the early Universe – when its radius or curvature radius was very small – could have been created from nothing via the "tunneling from nothing". Alternatively, Hartle and Hawking proposed the paradigm that an early Universe whose slice looks like a 3-sphere may be smoothly continued through the Euclidean spacetime to a point, remaining smooth around that point. By imposing the most natural smoothness conditions on the path integral around that initial point, one may calculate the preferred, Hartle-Hawking wave function on any sphere, including finite ones. 
It sounds plausible to me that when these general paradigms are done properly in a complete theory of quantum gravity, they are equivalent. But I don't think that Turok and pals have presented evidence that both of these pictures, and especially Hartle-Hawking, are dysfunctional. In this business, people have encountered puzzles concerning the signs of the growing or decreasing terms in the exponential defining the path integral. And the continuation from the Minkowski to the Euclidean space often requires one to choose a contour in the complex \(N\)-plane (where \(N\) is the lapse function, a time interval) and there's no known universal rule to do it right. Let me point out that the Turok et al. paper has one followup at this point, The Real No-Boundary Wave Function in Lorentzian Quantum Cosmology, by Hartle and four co-authors. They focus on the criticisms by Turok et al. and repeat that the Hartle-Hawking story is just fine. Why do they arrive at different conclusions? Well, they use different contours. Turok et al. use a half-infinite contour, while Hartle et al. use an infinite contour going along the whole real axis. As a consequence, Hartle et al. may extract a wave function that actually solves the Wheeler-DeWitt equation, while Turok et al. don't end up with a solution to this "simplified Schrödinger equation in quantum gravity". Instead, the reduced contour of Turok et al. produces a Green's function for that equation. That's too bad – what the alternative proposal by Turok et al. gives you doesn't solve "what replaces the Schrödinger equation in quantum gravity" but something that violates the equation. So it is not really a good candidate and should be abandoned. The Turok et al. calculation unsurprisingly leads them to focus on different saddle points than those of Hartle et al. – in fact, the Turok et al. saddle points make it impossible to get cosmological predictions. 
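The half-line-versus-full-line distinction has a simple toy analogue (my own illustration, not taken from either paper, and assuming the mpmath library is available): for the Airy equation \(y'' = xy\), the contour integral over the whole real axis gives the Airy function Ai, a genuine solution of the homogeneous equation, while the analogous half-line integral gives Scorer's function Gi, which only satisfies the inhomogeneous equation \(y'' - xy = -1/\pi\) – a Green's-function-like object, just like what the reduced contour produces for the Wheeler-DeWitt equation.

```python
# Toy analogue (my sketch, not from the papers): for the Airy ODE y'' = x*y,
# the full-real-axis contour yields Ai(x), a true solution, while the
# half-line contour yields Scorer's Gi(x), which instead satisfies the
# inhomogeneous equation y'' - x*y = -1/pi (a Green's-function-like object).
from mpmath import mp, airyai, scorergi, diff, pi, mpf

mp.dps = 30
x = mpf("0.7")  # arbitrary test point

# Ai solves the homogeneous equation: the residual should vanish.
res_ai = diff(airyai, x, 2) - x * airyai(x)
assert abs(res_ai) < mpf(10) ** -12

# Gi solves the inhomogeneous equation with source term -1/pi.
res_gi = diff(scorergi, x, 2) - x * scorergi(x)
assert abs(res_gi + 1 / pi) < mpf(10) ** -12
```

Different contours thus pick out genuinely different objects; only the full contour delivers a solution of the homogeneous equation, the analogue of demanding that the wave function actually obey the constraint.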
You may see the general misunderstanding of the "logic of the derivation" on the side of Turok et al. The logic of the path integral is that Hartle and Hawking found a clever way to find a new cosmologically relevant solution to the Wheeler-DeWitt equation. Any clever trick using any clever contour or continuation of the signs is OK as long as the result really solves the desired equation. In the most schematic form, the Wheeler-DeWitt equation is simply \[ H \Psi = 0. \] It is like the Schrödinger equation except that the term \(i\hbar \partial \Psi / \partial t\) is missing. It has to be missing because in general relativity-based gravity, you don't have any universally well-defined coordinate \(t\). So you cannot define the derivative, either. Instead, the time \(t\) within quantum gravity has to be extracted as a value of an observable, e.g. from the density of matter or the position of hands on a clock, and when you do so, the time derivative term becomes just one part of the Hamiltonian term. OK, so Hartle and friends have a solution to the equation that seems to be a verifiable solution and has some other desired characteristics. Turok et al. only have a wrong candidate for such a solution, derived from a badly chosen contour etc. But the fact that the Turok "solution" is wrong doesn't mean that all other solutions are wrong. At this point, I can't resist mentioning that Turok's criticism seems analogous to many creationists' criticisms of Darwin's evolution. These critics sometimes create their own "plausible" model of how species could have evolved, and they find out that it was too slow or otherwise unsatisfactory. However, they seem to ignore the fact that their detailed scenario isn't necessarily correct and Nature could have taken – and may actually be argued to have taken – a different path that simply works. For example, the mutation rate could have temporarily increased because the animals that participated in this speedup had some advantages. 
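To make the schematic equation concrete, here is the standard minisuperspace reduction – a textbook form, not specific to either paper, written in suitable units and up to factor-ordering ambiguities. For a closed FRW universe with scale factor \(a\) and cosmological constant \(\Lambda\), the Wheeler-DeWitt equation reduces to \[ \left[ \frac{d^2}{da^2} - U(a) \right] \Psi(a) = 0, \qquad U(a) = a^2 \left( 1 - \frac{\Lambda}{3} a^2 \right). \] The Hartle-Hawking and Vilenkin wave functions are different solutions of this same equation, singled out by different boundary conditions in the classically forbidden region where \(U(a) > 0\).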
Creationists are just closed-minded about the existence of all such "simply clever" tricks. Turok et al. are analogous to the creationists. Their first guess doesn't work well – so they conclude that the whole paradigm, discovered by someone else, is wrong. But it doesn't follow. In particular, everything that works and is valuable was invented by someone else, while everything that sucks was proposed by Turok. One must remember that these two groups of ideas are disjoint, not identical. The Hartle-Hawking paradigm has only been semi-successfully applied to some truncated, semiclassical, minisuperspace approximations of quantum gravity. In the end, I believe that someone will figure out how to do analogous things in string/M-theory properly, and she may figure out the deepest questions about the initial state of the Universe and maybe even the choice of the right vacuum or vacua from the landscape. By the way, if I had read the abstract of the paper by Turok et al. five weeks ago, I would probably have been provoked by the statement "We argue that the Lorentzian path integral for quantum cosmology is meaningful and, with..." Quite generally, the Lorentzian path integral is well-defined but it's well-defined only when we properly define it, and to do so, we generally have to use a Euclidean continuation. In other words, the Lorentzian path integral may be well-defined at the end but the Euclidean one is more "immediately" well-defined. The number of operations and correct assumptions you need in the Euclidean path integral is smaller. If you wish: the Wick rotation is almost universally a good idea. There are lots of examples in which the Euclideanized structures in the path integral allow you to quantify the terms more reliably. One example is the genus \(g\) Riemann surfaces representing the world sheets' history in string theory – we assume that they are Euclidean, and working with Lorentzian surfaces would create lots of new problems and puzzles. 
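A one-dimensional caricature of this point (my own example, not from either paper): the "Lorentzian" Fresnel integral \(\int_0^\infty e^{ix^2} dx\) converges only conditionally, while its Wick-rotated "Euclidean" counterpart is a manifestly convergent Gaussian; rotating the convergent result back reproduces the known closed form with no delicate oscillatory cancellations.

```python
# Toy example (mine, not from the paper): the "Lorentzian" Fresnel integral
# I = integral of exp(i x^2) from 0 to infinity converges only conditionally,
# but the Wick rotation x = exp(i*pi/4)*t maps it to a manifestly convergent
# "Euclidean" Gaussian, which is trivial to evaluate numerically.
import cmath
import math

def simpson(f, a, b, n=10_000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# Euclidean side: exp(-t^2) decays so fast that the range [0, 10] suffices.
euclidean = simpson(lambda t: math.exp(-t * t), 0.0, 10.0)   # ~ sqrt(pi)/2

# Undo the rotation: I = exp(i*pi/4) * integral of exp(-t^2).
lorentzian = cmath.exp(1j * math.pi / 4) * euclidean

# Known closed form of the Fresnel integral: sqrt(pi/2) * (1 + i) / 2.
closed_form = math.sqrt(math.pi / 2) * (1 + 1j) / 2
assert abs(lorentzian - closed_form) < 1e-10
```

The Euclidean integral needs nothing beyond absolute convergence; the Lorentzian answer is then inherited by analytic continuation – the same asymmetry the quoted sentence glosses over.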
The sentence quoted above sounds like they are saying that the "Lorentzian path integral is more well-defined than the Euclidean one", which is just wrong. This general sentence prepares the ground for the fact that they would be making wrong contour and sign choices that would lead to wrong results – not the correct ones that are most naturally obtained by a continuation to the Euclidean signature. Fine. So I believe that Turok et al. are just wrong and I am worried by the suggestion that he is abusing his power. I am worried that the likes of Tetragraviton are licking the director's rectum because it might be a good idea for them personally. More generally, it's bad for an institute of this singular character to have a director who isn't quite a top physicist but who tries to fight against top physicists – and the most important paradigms in physics. It looks like a classic example of the abuse of power. The directors should either be top physicists themselves, or someone else who has a lot of respect for top physicists. Someone's efforts to increase his influence within science by mostly political means are wrong, wrong, wrong.
Chemistry LibreTexts 3.1: Introduction to the Schrödinger Equation A scientific postulate is a generally accepted statement, which is accepted because it is consistent with experimental observation and serves to predict or explain a variety of observations. These postulates also are known as physical laws. Postulates cannot be derived from any other fundamental considerations. Newton's second law, \(f = ma\), is an example of a postulate that is accepted and used because it explains the motion of objects like baseballs, bicycles, rockets, and cars. One goal of science is to find the smallest and most general set of postulates that can explain all observations. A whole new set of postulates was added with the invention of Quantum Mechanics. The Schrödinger equation is the fundamental postulate of Quantum Mechanics. In the previous chapter we saw that many individual quantum postulates were introduced to explain otherwise inexplicable phenomena. We will see that quantization and the relations \(E = h\nu\) and \(p = \frac{h}{\lambda}\), discussed in the last chapter, are consequences of the Schrödinger equation. In other words, the Schrödinger equation is a more general and fundamental postulate. A differential equation is a mathematical equation involving one or more derivatives. The analytical solution to a differential equation is the expression or function for the dependent variable that gives an identity when substituted into the differential equation. A mathematical function is a rule that assigns a value to one quantity using the values of other quantities. Any mathematical function can be expressed not only by a mathematical formula, but also in words, as a table of data, or by a graph. Numerical solutions to differential equations also can be obtained. In numerical solutions, the behavior of the dependent variable is expressed as a table of data or by a graph; no explicit function is provided. 
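A numerical solution of the kind just described can be sketched in a few lines. This is my own minimal illustration (the forward-Euler method, with arbitrary illustrative values of the step count and rate constant), not part of the text:

```python
# Minimal sketch of a numerical solution: forward-Euler integration of
# dy/dx = f(x, y) produces a table of (x, y) pairs rather than an explicit
# formula. Step count and parameter values below are illustrative choices.
def euler(f, y0, x0, x1, n):
    """Tabulate an approximate solution of dy/dx = f(x, y) on [x0, x1]."""
    h = (x1 - x0) / n
    x, y = x0, y0
    table = [(x, y)]
    for _ in range(n):
        y += h * f(x, y)
        x += h
        table.append((x, y))
    return table

# Apply it to a first-order decay dy/dx = -k*y with k = 1.
k = 1.0
table = euler(lambda x, y: -k * y, y0=1.0, x0=0.0, x1=2.0, n=2000)

# The tabulated values track the analytic solution y = exp(-k*x).
import math
assert abs(table[-1][1] - math.exp(-k * 2.0)) < 1e-3
```

Plotting the table of \((x, y)\) pairs gives the graph of the solution even though no explicit formula was ever written down.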
Example \(\PageIndex{1}\) The differential equation \(\frac{dy(x)}{dx} = 2\) has the solution \(y(x) = 2x + b\), where \(b\) is a constant. This function \(y(x)\) defines the family of straight lines on a graph with a slope of 2. Show that this function is a solution to the differential equation by substituting for \(y(x)\) in the differential equation. How many solutions are there to this differential equation? For one of these solutions, construct a table of data showing pairs of \(x\) and \(y\) values, and use the data to sketch a graph of the function. Describe this function in words. Some differential equations have the property that the derivative of the function gives the function back multiplied by a constant. The differential equation for a first-order chemical reaction is one example. This differential equation and the solution for the concentration of the reactant are given below. \[\frac{dC(t)}{dt} = -k C(t)\] \[C(t) = C_0 e^{-kt} \label{3-1}\] Example \(\PageIndex{2}\) Show that \(C(t)\) is a solution to the differential equation. Another kind of differential equation has the property that the second derivative of the function yields the function multiplied by a constant. Both of these types of differential equations are found in Quantum Mechanics. \[ \frac{d^2 \psi (x)}{dx^2} = k \psi (x) \label{3-2}\] Example \(\PageIndex{3}\) What is the value of the constant in the above differential equation when \(\psi(x) = \cos(3x)\)? Example \(\PageIndex{4}\) What other functions, in addition to the cosine, have the property that the second derivative of the function yields the function multiplied by a constant? Since some mathematical functions, such as the sine and cosine, go through repeating periodic maxima and minima, they produce graphs that look like waves. Such functions can themselves be thought of as waves and can be called wavefunctions. We now make a mathematically intuitive leap. 
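The claims in these examples can be spot-checked numerically with finite differences. This is my own sketch (standard library only; the parameter values and tolerances are illustrative), not part of the text, but it confirms that \(C(t) = C_0 e^{-kt}\) satisfies the first-order rate equation and that for \(\psi(x) = \cos(3x)\) the constant in Equation \(\ref{3-2}\) is \(-9\):

```python
# Numerical spot-checks of the worked examples: C(t) = C0*exp(-k*t) solves
# dC/dt = -k*C, and psi(x) = cos(3x) returns itself times the constant -9
# under d^2/dx^2. Parameter values here are arbitrary illustrative choices.
import math

def d1(f, x, h=1e-6):
    """Central-difference approximation to the first derivative f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Central-difference approximation to the second derivative f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# First-order decay: the residual dC/dt + k*C should vanish.
k, C0 = 0.5, 2.0
C = lambda t: C0 * math.exp(-k * t)
assert abs(d1(C, 1.3) + k * C(1.3)) < 1e-6

# Example 3: psi''(x) / psi(x) is the constant -9 at every x tested.
psi = lambda x: math.cos(3 * x)
for x in (0.3, 1.1, 2.0):
    assert abs(d2(psi, x) / psi(x) + 9) < 1e-3
```

That the ratio \(\psi''/\psi\) comes out the same at every point tested is exactly what it means for \(\psi\) to satisfy Equation \(\ref{3-2}\) with a single constant \(k = -9\).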
If electrons, atoms, and molecules have wave-like properties, then there must be a mathematical function that is the solution to a differential equation that describes electrons, atoms, and molecules. This differential equation is called the wave equation, and the solution is called the wavefunction. Such thoughts may have motivated Erwin Schrödinger to argue that the wave equation is a key component of Quantum Mechanics.
Table of contents: 05 February 2020, Volume 29, Issue 2
TOPICAL REVIEW—High-throughput screening and design of optoelectronic materials
Designing solar-cell absorber materials through computational high-throughput screening
Xiaowei Jiang(江小蔚), Wan-Jian Yin(尹万健)
Chin. Phys. B, 2020, 29 (2): 028803. DOI: 10.1088/1674-1056/ab6655
Although the efficiency of CH3NH3PbI3 has been raised to 25.2%, stability and toxicity remain the main challenges for its applications. The search for novel solar-cell absorbers that are highly stable, non-toxic, inexpensive, and highly efficient is now a viable research focus. In this review, we summarize our recent research into the high-throughput screening and materials design of solar-cell absorbers, including single perovskites, double perovskites, and materials beyond perovskites. BaZrS3 (single perovskite), Ba2BiNbS6 (double perovskite), HgAl2Se4 (spinel), and IrSb3 (skutterudite) were discovered to be potential candidates in terms of their high stabilities, appropriate bandgaps, small carrier effective masses, and strong optical absorption.
TOPICAL REVIEW—Optical field manipulation
Research progress of femtosecond surface plasmon polariton
Yulong Wang(王玉龙), Bo Zhao(赵波), Changjun Min(闵长俊), Yuquan Zhang(张聿全), Jianjun Yang(杨建军), Chunlei Guo(郭春雷), Xiaocong Yuan(袁小聪)
Chin. Phys. B, 2020, 29 (2): 027302. DOI: 10.1088/1674-1056/ab6717
As the combination of the surface plasmon polariton and the femtosecond laser pulse, the femtosecond surface plasmon polariton has both nanoscale spatial resolution and femtosecond temporal resolution, and thus provides promising methods for light-field manipulation and light-matter interaction on extremely small spatiotemporal scales. 
Nowadays, research on the femtosecond surface plasmon polariton is mainly concentrated on two aspects: one is the investigation and characterization of the excitation, propagation, and dispersion properties of femtosecond surface plasmon polaritons in different structures or materials; the other is the development of new applications based on its unique properties in the fields of nonlinear enhancement, pulse shaping, spatiotemporal super-resolved imaging, and others. Here, we introduce the research progress on the properties and applications of the femtosecond surface plasmon polariton and consider its future research trends. With the further development of femtosecond surface plasmon polariton research, it will have a profound impact on nano-optoelectronics, molecular dynamics, biomedicine, and other fields.
TOPICAL REVIEW—Overcoming doping bottleneck in widegap semiconductors
Growth and doping of bulk GaN by hydride vapor phase epitaxy
Yu-Min Zhang(张育民), Jian-Feng Wang(王建峰), De-Min Cai(蔡德敏), Guo-Qiang Ren(任国强), Yu Xu(徐俞), Ming-Yue Wang(王明月), Xiao-Jian Hu(胡晓剑), Ke Xu(徐科)
Chin. Phys. B, 2020, 29 (2): 026104. DOI: 10.1088/1674-1056/ab65b9
Doping is essential in the growth of bulk GaN substrates, as it helps control the electrical properties to meet the requirements of various types of GaN-based devices. The progress in the growth of undoped, Si-doped, Ge-doped, Fe-doped, and highly pure GaN by hydride vapor phase epitaxy (HVPE) is reviewed in this article. The growth technology and precursors of each type of doping are introduced. Besides, the influence of doping on the optical and electrical properties of GaN is presented in detail. Furthermore, the problems caused by doping, as well as the methods to solve them, are also discussed. Finally, highly pure GaN is briefly introduced, which points out a new way to realize high-purity semi-insulating (HPSI) GaN. 
TOPICAL REVIEW—Advanced calculation & characterization of energy storage materials & devices at multiple scale
Review on electrode-level fracture in lithium-ion batteries
Bo Lu(吕浡), Chengqiang Ning(宁成强), Dingxin Shi(史定鑫), Yanfei Zhao(赵炎翡), Junqian Zhang(张俊乾)
Chin. Phys. B, 2020, 29 (2): 026201. DOI: 10.1088/1674-1056/ab6841
Fracture occurring in the electrodes of a lithium-ion battery compromises the integrity of the electrode structure and adversely affects cell performance and cell safety. The mechanisms of electrode-level fracture, and how such fracture affects the electrochemical performance of the battery, are of great importance for understanding and preventing its occurrence. Fracture occurring at the electrode level is complex, since it may involve fractures in or between different components of the electrode. In this review, three typical types of electrode-level fractures are discussed: the fracture of the active layer, the interfacial delamination, and the fracture of metallic foils (including the current collector and the lithium metal electrode). The crack in the active layer can serve as an effective indicator of degradation of the electrochemical performance. Interfacial delamination usually follows the fracture of the active layer and is detrimental to the cell capacity. Fracture of the current collector impacts cell safety directly. Experimental methods and modeling results for these three types of fractures are summarized. Reasonable explanations of how these electrode-level fractures affect the electrochemical performance are sorted out. Challenges and unsettled issues in investigating these fracture problems are brought up. It is noted that the state-of-the-art studies included in this review mainly focus on experimental observations and theoretical modeling of the typical mechanical damages. 
However, quantitative investigations of the relationship between the electrochemical performance and the electrode-level fracture are insufficient. To further understand fractures in a multi-scale and multi-physical way, advancing the development of the cross-discipline between mechanics and electrochemistry is badly needed.
Advanced characterization and calculation methods for rechargeable battery materials in multiple scales
Xin-Yan Li(李欣岩), Su-Ting Weng(翁素婷), Lin Gu(谷林)
Chin. Phys. B, 2020, 29 (2): 028801. DOI: 10.1088/1674-1056/ab65ba
The structure-activity relationship of functional materials is an everlasting and desirable research question for materials science researchers, and characterization and calculation tools are the keys to deciphering this intricate relationship. Here, we choose rechargeable battery materials as an example and introduce the most representative advanced characterization and calculation methods on four different scales: real space, energy, momentum space, and time. Current research methods to study battery material structure, energy-level transitions, the dispersion relations of phonons and electrons, and time-resolved evolution are reviewed. From different views, various expression forms of structure and electronic structure are presented to understand the reaction processes and electrochemical mechanisms comprehensively in battery systems. Based on this summary of present battery research, the challenges and perspectives of advanced characterization and calculation techniques for the field of rechargeable batteries are further discussed.
Pair distribution function analysis: Fundamentals and application to battery materials
Xuelong Wang(王雪龙), Sha Tan(谭莎), Xiao-Qing Yang(杨晓青), Enyuan Hu(胡恩源)
Chin. Phys. B, 2020, 29 (2): 028802. 
DOI: 10.1088/1674-1056/ab6656
Battery materials are of vital importance in powering a clean and sustainable society. Improving their performance relies on a clear and fundamental understanding of their properties, in particular, structural properties. Pair distribution function (PDF) analysis, which takes into account both Bragg scattering and diffuse scattering, can probe the structures of both crystalline and amorphous phases in battery materials. This review first introduces the principle of PDF, followed by its application to battery materials. It shows that PDF is an effective tool for studying a series of key scientific topics in battery materials, ranging from local ordering, nano-phase quantification, and anion redox reactions to the lithium storage mechanism.
SPECIAL TOPIC—Advanced calculation & characterization of energy storage materials & devices at multiple scale
Revealing the inhomogeneous surface chemistry on the spherical layered oxide polycrystalline cathode particles
Zhi-Sen Jiang(蒋之森), Shao-Feng Li(李少锋), Zheng-Rui Xu(许正瑞), Dennis Nordlund, Hendrik Ohldag, Piero Pianetta, Jun-Sik Lee, Feng Lin(林锋), Yi-Jin Liu(刘宜晋)
Chin. Phys. B, 2020, 29 (2): 026103. DOI: 10.1088/1674-1056/ab6585
The hierarchical structure of composite cathodes brings significant chemical complexity related to the interfaces, such as the cathode electrolyte interphase. Although these interfaces account for only a small fraction of the volume and mass, they can have profound impacts on the cell-level electrochemistry. As the investigation of these interfaces becomes a crucial topic in battery research, there is a need to properly study the surface chemistry, particularly to eliminate the biased, incomplete characterization provided by techniques that assume homogeneous surface chemistry. 
Herein, we utilize nano-resolution, spatially-resolved x-ray spectroscopic tools to probe the heterogeneity of the surface chemistry on LiNi0.8Mn0.1Co0.1O2 layered cathode secondary particles. Informed by the nano-resolution mapping of the Ni valence state, which serves as a measurement of the local surface chemistry, we construct a conceptual model to elucidate the electrochemical consequences of the inhomogeneous local impedance over the particle surface. Going beyond its implications for battery science, our work highlights the balance between high-resolution probing of the local chemistry and statistical representativeness, which is particularly vital in the study of highly complex material systems.
SPECIAL TOPIC—Strong-field atomic and molecular physics
Numerical simulations of strong-field processes in momentum space
Yan Xu(徐彦), Xue-Bin Bian(卞学滨)
Chin. Phys. B, 2020, 29 (2): 023202. DOI: 10.1088/1674-1056/ab6553
The time-dependent Schrödinger equation (TDSE) is usually treated in real space in textbooks. However, this makes numerical simulations of strong-field processes difficult, owing to the wide dispersion and fast oscillation of the electron wave packets under the interaction of intense laser fields. Here we demonstrate that the TDSE can be efficiently solved in momentum space. The high-order harmonic generation and above-threshold ionization spectra obtained by numerical solution of the TDSE in momentum space agree well with previous studies in real space, while significantly reducing the computational cost.
Bäcklund transformations, consistent Riccati expansion solvability, and soliton-cnoidal interaction wave solutions of Kadomtsev-Petviashvili equation
Ping Liu(刘萍), Jie Cheng(程杰), Bo Ren(任博), Jian-Rong Yang(杨建荣)
Chin. Phys. B, 2020, 29 (2): 020201. 
DOI: 10.1088/1674-1056/ab5eff
The famous Kadomtsev-Petviashvili (KP) equation is a classical equation in soliton theory. A Bäcklund transformation between the KP equation and the Schwarzian KP equation is demonstrated by means of the truncated Painlevé expansion in this paper. One-parameter group transformations and one-parameter subgroup-invariant solutions for the extended KP equation are obtained. The consistent Riccati expansion (CRE) solvability of the KP equation is proved. Some interaction structures between soliton-cnoidal waves are obtained by CRE, and several evolution graphs and density graphs are plotted.
Performance analysis of continuous-variable measurement-device-independent quantum key distribution under diverse weather conditions
Shu-Jing Zhang(张淑静), Chen Xiao(肖晨), Chun Zhou(周淳), Xiang Wang(汪翔), Jian-Shu Yao(要建姝), Hai-Long Zhang(张海龙), Wan-Su Bao(鲍皖苏)
Chin. Phys. B, 2020, 29 (2): 020301. DOI: 10.1088/1674-1056/ab5efd
The effects of weather conditions are ubiquitous in practical wireless quantum communication links. In this work, the performance of atmospheric continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) under diverse weather conditions is analyzed quantitatively. According to the Mie scattering theory and the atmospheric CV-MDI-QKD model, we numerically simulate the relationship between the performance of CV-MDI-QKD and rainy and foggy conditions, aiming to get close to the actual combat environment in the future. The results show that both rain and fog degrade the performance of the CV-MDI-QKD protocol. Under rainy conditions, the larger the raindrop diameter, the more pronounced the extinction effect, and accordingly the lower the secret key rate. In addition, we find that the secret key rate decreases with increasing spot deflection distance and with the fluctuation of the deflection. 
Under the foggy condition, the results illustrate that the transmittance decreases with the increase of droplet radius or deflection distance, which eventually yields a decrease in the secret key rate. Besides, in both weather conditions, an increase of the transmission distance also causes the secret key rate to deteriorate. Our work can provide a foundation for evaluating the performance and successfully implementing atmospheric CV-MDI-QKD in future field operation environments under different weather conditions.

Unified approach to various quantum Rabi models with arbitrary parameters
Xiao-Fei Dong(董晓菲), You-Fei Xie(谢幼飞), Qing-Hu Chen(陈庆虎)
Chin. Phys. B, 2020, 29 (2): 020302. DOI: 10.1088/1674-1056/ab6555
A general approach to the quantum Rabi model and its several variants is proposed within the extended coherent states. The solutions to all these models, including the anisotropy and the nonlinear Stark coupling, are then obtained in a unified way. The essential characteristics, such as the possible first-order phase transition, can be detected analytically. This approach can be easily applied to recent experiments with various tunable parameters without much additional effort, so it should be very helpful for the analysis of experimental data.

Interference properties of two-component matter wave solitons
Yan-Hong Qin(秦艳红), Yong Wu(伍勇), Li-Chen Zhao(赵立臣), Zhan-Ying Yang(杨战营)
Chin. Phys. B, 2020, 29 (2): 020303. DOI: 10.1088/1674-1056/ab65b7
Wave properties of solitons in a two-component Bose-Einstein condensate are investigated in detail. We demonstrate that dark solitons in one of the components admit interference and tunneling behavior, in sharp contrast to scalar dark solitons and vector dark solitons.
Analytic analyses of the interference properties show that the spatial interference patterns are determined by the relative velocity of the solitons, while the temporal interference patterns depend on the velocities and widths of the two solitons, differing from the interference properties of scalar bright solitons. Especially, for an attractive interaction system, we show that interference effects between the two dark solitons can induce some short-time density humps (whose densities are higher than the background density). Moreover, the maximum hump value is remarkably sensitive to variations of the solitons' parameters. For a repulsive interaction system, the temporal-spatial interference periods of dark-bright solitons have lower limits. Numerical simulation results suggest that the interference patterns of dark-bright solitons are more robust against noise than those of bright-dark solitons. These explicit interference properties can be used to measure the velocities and widths of solitons. It is expected that these interference behaviors can be observed experimentally and used to design matter-wave soliton interferometers in vector systems.

Quantifying non-classical correlations under thermal effects in a double cavity optomechanical system
Mohamed Amazioug, Larbi Jebli, Mostafa Nassik, Nabil Habiballah
Chin. Phys. B, 2020, 29 (2): 020304. DOI: 10.1088/1674-1056/ab65b6
We investigate the generation of quantum correlations between mechanical modes and optical modes in an optomechanical system, using the rotating wave approximation. The system is composed of two Fabry-Pérot cavities separated in space; each of the two cavities has a movable end-mirror.
Our aim is the evaluation of entanglement between mechanical modes and optical modes, generated by correlation transfer from the squeezed light to the system, using Gaussian intrinsic entanglement as a witness of entanglement in continuous-variable Gaussian states, and the quantification of the degree of mixedness of the Gaussian states using the purity. Then, we quantify non-classical correlations between mechanical modes and optical modes even beyond entanglement by considering the Gaussian geometric discord via the Hellinger distance. Indeed, entanglement, mixedness, and quantum discord are analyzed as functions of the parameters characterizing the system (thermal bath temperature, squeezing parameter, and optomechanical cooperativity). We find that, under thermal effects, when entanglement vanishes, purity and quantum discord remain nonzero. Remarkably, the Gaussian Hellinger discord is more robust than entanglement. The effects of the other parameters are discussed in detail.

Monogamy and polygamy relations of multiqubit entanglement based on unified entropy
Zhi-Xiang Jin(靳志祥), Cong-Feng Qiao(乔从丰)
Chin. Phys. B, 2020, 29 (2): 020305. DOI: 10.1088/1674-1056/ab6720
Monogamy is one of the essential properties of quantum entanglement, which characterizes the distribution of entanglement in a multipartite system. By virtue of the unified-(q,s) entropy, we obtain some novel monogamy and polygamy inequalities for a general class of entanglement measures. For the multiqubit system, a class of tighter monogamy relations is established in terms of the α-th power of the unified-(q,s) entanglement for α≥1. We also obtain a class of tighter polygamy relations in terms of the β-th (0≤β≤1) power of the unified-(q,s) entanglement of assistance.
Applying these results to specific quantum correlations, e.g., the entanglement of formation, the Rényi-q entanglement of assistance, and the Tsallis-q entanglement of assistance, we obtain the corresponding monogamy and polygamy relations. Typical examples are presented for illustration. Furthermore, the complementary monogamy and polygamy relations are investigated for the α-th (0≤α≤1) and β-th (β≥1) powers of the unified entropy, respectively, and the corresponding monogamy and polygamy inequalities are obtained.

Influence of the Earth's rotation on measurement of gravitational constant G with the time-of-swing method
Jie Luo(罗杰), Tao Dong(董涛), Cheng-Gang Shao(邵成刚), Yu-Jie Tan(谈玉杰), Hui-Jie Zhang(张惠捷)
Chin. Phys. B, 2020, 29 (2): 020401. DOI: 10.1088/1674-1056/ab6584
In the measurement of the Newtonian gravitational constant G with the time-of-swing method, the influence of the Earth's rotation has only been roughly estimated before, which is far beyond the current experimental precision. Here, we present a more complete theoretical modeling and assessment process. To figure out this effect, we use the relativistic Lagrangian expression to derive the equations of motion of the torsion pendulum. With the correlation method and typical parameters, we estimate that the influence of the Earth's rotation on the G measurement is far less than 1 ppm, which may need to be considered in future high-accuracy experiments for determining the gravitational constant G.

Effect of system-reservoir correlations on temperature estimation
Wen-Li Zhu(朱雯丽), Wei Wu(吴威), Hong-Gang Luo(罗洪刚)
Chin. Phys. B, 2020, 29 (2): 020501. DOI: 10.1088/1674-1056/ab5fc0
In many previous temperature estimation schemes, the temperature of a sample is directly read out from the final steady state of a quantum probe, which is coupled to the sample.
However, in these studies, information about the correlations between the system (the probe) and the reservoir (the sample) is usually eliminated, so that the steady state of the probe is a canonical equilibrium state with respect solely to the system's Hamiltonian. To explore the influence of system-reservoir correlations on the estimation precision, we investigate the equilibration dynamics of a spin interacting with a finite-temperature bosonic reservoir. By incorporating an intermediate harmonic oscillator or a collective coordinate into the spin, the system-reservoir correlations can be correspondingly encoded in a Gibbs state of an effective Hamiltonian, which is size consistent with the original bare spin. Extracting temperature information from this corrected steady state, we find that the effect of the system-reservoir correlations on the estimation precision is highly sensitive to the details of the spectral density function of the measured reservoir.

Quantum-classical correspondence and mechanical analysis of a classical-quantum chaotic system
Haiyun Bi(毕海云), Guoyuan Qi(齐国元), Jianbing Hu(胡建兵), Qiliang Wu(吴启亮)
Chin. Phys. B, 2020, 29 (2): 020502. DOI: 10.1088/1674-1056/ab6205
Quantum-classical correspondence is affirmed by means of the Wigner function for a classical-quantum chaotic system containing random variables. The classical-quantum system is transformed into a Kolmogorov model for force and energy analysis. Combining different forces, the system is divided into two categories, conservative and non-conservative, revealing the mechanical characteristics of the classical-quantum system. The Casimir power, an analysis tool, is employed to find the key factors governing the orbital trajectory and the energy cycle of the system. Detailed analyses using the Casimir power and an energy transformation uncover the causes of the different dynamic behaviors, especially chaos.
For the corresponding classical Hamiltonian system, obtained when Planck's constant ħ→0, the supremum bound of the system is derived analytically. The difference between the classical-quantum system and the classical Hamiltonian system is displayed through trajectories and energies. Quantum-classical correspondence is further demonstrated by comparing the phase portraits and the kinetic, potential, and Casimir energies of the two systems.

The effect of phase fluctuation and beam splitter fluctuation on two-photon quantum random walk
Zijing Zhang(张子静), Feng Wang(王峰), Jie Song(宋杰), Yuan Zhao(赵远)
Chin. Phys. B, 2020, 29 (2): 020503. DOI: 10.1088/1674-1056/ab6654
In an optical quantum random walk system, phase fluctuation and beam splitter fluctuation are two unavoidable decoherence factors. These two factors degrade the performance of the quantum random walk by destroying coherence, and can even degrade it into a classical one. We propose a scheme for the simulation of a quantum random walk using phase shifters, tunable beam splitters, and photodetectors. The proposed scheme enables us to analyze the effect of phase fluctuation and beam splitter fluctuation on the two-photon quantum random walk. Furthermore, it is helpful for guiding the control of phase fluctuation and beam splitter fluctuation in experiments.

Bifurcation and chaos characteristics of hysteresis vibration system of giant magnetostrictive actuator
Hong-Bo Yan(闫洪波), Hong Gao(高鸿), Gao-Wei Yang(杨高炜), Hong-Bo Hao(郝宏波), Yu Niu(牛禹), Pei Liu(刘霈)
Chin. Phys. B, 2020, 29 (2): 020504. DOI: 10.1088/1674-1056/ab65b4
Chaotic motion and quasi-periodic motion are two common forms of instability in the giant magnetostrictive actuator (GMA).
Therefore, in the present study we investigate the influences of the system damping coefficient, the system stiffness coefficient, the disc-spring cubic stiffness factor, and the excitation force and frequency on the output stability and the hysteresis vibration of the GMA. In this regard, the nonlinear piezomagnetic equation, the Jiles-Atherton hysteresis model, the quadratic domain rotation model, and the GMA structural dynamics are used to establish a mathematical model of the hysteresis vibration system of the GMA. Moreover, the multi-scale method and singularity theory are used to determine the codimension-two bifurcation characteristics of the system. Then, the output response of the system is simulated to determine the variation range of each parameter when chaos appears. Finally, the fourth-order Runge-Kutta method is used to obtain the time-domain waveform, phase portrait, and Poincaré mapping diagrams of the system, and the three obtained graphs are analyzed. The results show that the variation range of each parameter for stable system output can be determined. Moreover, the stability intervals of the system damping coefficient, the system stiffness coefficient, and the coefficient of the cubic stiffness term of the disc spring are obtained, as are the stability intervals of the exciting force and the excitation frequency.

Optimization of laser focused atomic deposition by channeling
Jie Chen(陈杰), Jie Liu(刘杰), Li Zhu(朱立), Xiao Deng(邓晓), Xinbin Cheng(陈鑫彬), Tongbao Li(李同保)
Chin. Phys. B, 2020, 29 (2): 020601. DOI: 10.1088/1674-1056/ab631c
Laser-focused atomic deposition is a unique and effective way to fabricate highly accurate pitch standards in nanometrology. However, the stability and repeatability of the atom-lithography fabrication process remain a challenging problem for mass production.
Based on the atom-light interaction theory, channeling is utilized to improve the stability and repeatability. From a comparison of three kinds of atom-light interaction models, the optimal parameters for channeling are obtained by simulation. According to the experimental observations, the peak-to-valley height of the Cr nano-gratings stays stable when the cutting proportion changes from 15% to 50%, which means that channeling occurs under this condition. Channeling proves to be an effective method to optimize the stability and repeatability of laser-focused Cr atomic deposition.

Doppler radial velocity detection based on Doppler asymmetric spatial heterodyne spectroscopy technique for absorption lines
Yin-Li Kuang(况银丽), Liang Fang(方亮), Xiang Peng(彭翔), Xin Cheng(程欣), Hui Zhang(张辉), En-Hai Liu(刘恩海)
Chin. Phys. B, 2020, 29 (2): 020701. DOI: 10.1088/1674-1056/ab5fc3
The Doppler asymmetric spatial heterodyne spectroscopy (DASH) technique has developed rapidly for passive Doppler-shift measurements of atmospheric emission lines over the last decade. With the advantages of high phase-shift sensitivity and a compact, rugged structure, DASH is proposed in this work for celestial autonomous navigation based on Doppler radial velocity measurement. Unlike atmospheric emission lines, almost all targeted lines in the field of deep-space exploration are the absorption lines of stars, so a mathematical model for Doppler-shift measurements of absorption lines with a DASH interferometer is established. From an analysis of the components of the interferogram received by the detector array, we find that the interferogram generated only by the absorption lines in a passband can be extracted and processed by a method similar to the approach used for emission lines. Finally, numerical simulation experiments on Doppler-shift measurements of absorption lines are carried out.
The simulation results show that the relative errors of the retrieved speeds are less than 0.7% under ideal conditions, proving the feasibility of measuring Doppler shifts of absorption lines with DASH instruments.

A method for calibrating the confocal volume of a confocal three-dimensional micro-x-ray fluorescence setup
Peng Zhou(周鹏), Xin-Ran Ma(马欣然), Shuang Zhang(张爽), Tian-Xi Sun(孙天希), Zhi-Guo Liu(刘志国)
Chin. Phys. B, 2020, 29 (2): 020702. DOI: 10.1088/1674-1056/ab671c
Measuring the confocal volume of a confocal three-dimensional micro-x-ray fluorescence (3D-XRF) setup is a key step in confocal 3D-XRF analysis. With the development of x-ray facilities and optical devices, 3D-XRF analysis with a micro confocal volume creates great potential for 2D and 3D microstructural analysis and accurate quantitative analysis. However, the classic measurement method of scanning metal foils of a certain thickness leads to inaccuracy. A method for calibrating the confocal volume is proposed in this paper. The new method is based on the basic content of the textbook, and the theoretical results and feasibility are given in detail for both the monochromatic and the polychromatic x-ray conditions of 3D-XRF. We obtain a set of experimental confirmations using a polychromatic x-ray tube in the laboratory. It is proved that the sensitivity factor of the 3D-XRF can be directly and accurately obtained in a real calibration process.

Multiple Lagrange stability and Lyapunov asymptotical stability of delayed fractional-order Cohen-Grossberg neural networks
Yu-Jiao Huang(黄玉娇), Xiao-Yan Yuan(袁孝焰), Xu-Hua Yang(杨旭华), Hai-Xia Long(龙海霞), Jie Xiao(肖杰)
Chin. Phys. B, 2020, 29 (2): 020703.
DOI: 10.1088/1674-1056/ab6716
This paper addresses the coexistence and local stability of multiple equilibrium points for fractional-order Cohen-Grossberg neural networks (FOCGNNs) with time delays. Based on Brouwer's fixed point theorem, sufficient conditions are established to ensure the existence of ∏_{i=1}^{n}(2K_i+1) equilibrium points for FOCGNNs. Through the use of the Hardy inequality, the fractional Halanay inequality, and Lyapunov theory, criteria are established to ensure the local Lagrange stability and the local Lyapunov asymptotical stability of ∏_{i=1}^{n}(K_i+1) equilibrium points for FOCGNNs. The obtained results encompass those of integer-order Hopfield neural networks with or without delay as special cases. The activation functions are nonlinear and nonmonotonic, and this general class of activation functions may have many corner points. The structure of the activation functions means that FOCGNNs can have a large number of stable equilibrium points. Coexistence of multiple stable equilibrium points is necessary when neural networks are applied to pattern recognition and associative memory. Finally, two numerical examples are provided to illustrate the effectiveness of the obtained results.

Molecular opacities of low-lying states of oxygen molecule
Gui-Ying Liang(梁桂颖), Yi-Geng Peng(彭裔耕), Rui Li(李瑞), Yong Wu(吴勇), Jian-Guo Wang(王建国)
Chin. Phys. B, 2020, 29 (2): 023101. DOI: 10.1088/1674-1056/ab5fb6
The X3Σg-, A'3Δu, A3Σu+, 13Πg, and B3Σu- electronic states of the oxygen molecule (O2) are calculated by the multi-reference configuration interaction (MRCI) + Q method with scalar relativistic and core-valence correlation corrections. The obtained spectroscopic constants of the low-lying bound states are in excellent agreement with measurements.
Based on the accurately calculated structure parameters, the opacities of the oxygen molecule at temperatures of 1000 K, 2000 K, 2500 K, and 5000 K under a pressure of 100 atm (1 atm=1.01325×105 Pa), as well as the partition functions between 10 K and 104 K, are obtained. It is found that, with increasing temperature, the opacities for transitions in the long-wavelength range are enlarged because of the larger population of excited electronic states at higher temperatures.

HfN2 monolayer: A new direct-gap semiconductor with high and anisotropic carrier mobility
Yuan Sun(孙源), Bin Xu(徐斌), Lin Yi(易林)
Chin. Phys. B, 2020, 29 (2): 023102. DOI: 10.1088/1674-1056/ab610b
Searching for stable two-dimensional (2D) materials with a direct band gap and high carrier mobility has attracted great attention for electronic device applications. Using first-principles calculations and the particle swarm optimization (PSO) method, we predict a new stable 2D material (the HfN2 monolayer) that is the global minimum of the 2D space. The HfN2 monolayer possesses a direct band gap (~1.46 eV) and is predicted by deformation potential theory to have high carrier mobilities (~103 cm2·V-1·s-1). The direct band gap is well maintained and can be flexibly modulated by an easily applied external strain. In addition, the newly predicted HfN2 monolayer possesses good thermal, dynamical, and mechanical stabilities, which are verified by ab initio molecular dynamics simulations, phonon dispersion, and elastic constants. These results demonstrate that the HfN2 monolayer is a promising candidate for future microelectronic devices.

Theoretical analysis of the coupling between Feshbach states and hyperfine excited states in the creation of 23Na40K molecule
Ya-Xiong Liu(刘亚雄), Bo Zhao(赵博)
Chin. Phys. B, 2020, 29 (2): 023103.
DOI: 10.1088/1674-1056/ab6314
We present an intensive study of the coupling between different Feshbach states and the hyperfine levels of the excited states in the adiabatic creation of 23Na40K ground-state molecules. We use the coupled-channel method to calculate the wave function of the Feshbach molecules and give the short-range wave function of the triplet component. The energies of the hyperfine excited states and the coupling strengths between the Feshbach states and the hyperfine excited states are calculated. Our results can be used to prepare a specific hyperfine level of the rovibrational ground state for studying ultracold collisions involving molecules.

Thermodynamic and structural properties of polystyrene/C60 composites: A molecular dynamics study
Junsheng Yang(杨俊升), Ziliang Zhu(朱子亮), Duohui Huang(黄多辉), Qilong Cao(曹启龙)
Chin. Phys. B, 2020, 29 (2): 023104. DOI: 10.1088/1674-1056/ab6312
Tailoring the properties of polymer composites is very important for their applications. Very small concentrations of nanoparticles can significantly alter their physical characteristics. In this work, molecular dynamics simulations are performed to study the thermodynamic and structural properties of polystyrene/C60 (PS/C60) composites. The calculated densities, glass transition temperatures, and coefficients of thermal expansion of bulk PS are in agreement with the available experimental data, implying that our calculations are reasonable. We find that the glass transition temperature Tg of the PS/C60 composites increases with the concentration of added C60, whereas the self-diffusion coefficient D decreases as more C60 is added. For the volumetric coefficient of thermal expansion (CTE) of bulk PS and the PS/C60 composites, the CTE increases with increasing C60 content above Tg (rubbery region).
However, the CTE decreases with increasing C60 content below Tg (glassy region).

Effect of isotope on state-to-state dynamics for reactive collision reactions O(3P)+H2+→OH++H and O(3P)+H2+→OH+H+ in ground state 12A" and first excited 12A' potential energy surfaces
Juan Zhao(赵娟), Ting Xu(许婷), Lu-Lu Zhang(张路路), Li-Fei Wang(王立飞)
Chin. Phys. B, 2020, 29 (2): 023105. DOI: 10.1088/1674-1056/ab6554
We carry out quantum scattering dynamics and quasi-classical trajectory (QCT) calculations for the O+H2+ reactive collision on the ground (12A") and first excited (12A') potential energy surfaces. We calculate the reaction probabilities of the O+H2+(v=0,j=0)→OH++H and O+H2+(v=0,j=0)→OH+H+ reactions for total angular momentum J=0. The results calculated by QCT are consistent with those from the quantum mechanical wave packet. Using the QCT method, we generate, in the center-of-mass frame, the product state-resolved integral cross-sections (ICSs); two commonly used generalized polarization-dependent differential cross-sections (PDDCSs), (2π/σ)(dσ00/dωt) and (2π/σ)(dσ20/dωt); and three angular distributions of the product rotational vectors, P(θr), P(φr), and P(θr,φr). We discuss the influence of the potential energy surface, the collision energy, and the isotope mass on the scalar and vector properties. Since there are deep potential wells in these two potential energy surfaces, their kinetic characteristics are similar to each other and the isotopic effect is not obvious. However, the well depths and configurations of the two potential energy surfaces are different, so the effects of isotopic substitution on the integral cross-section and the rotational polarization of the product are different.

Phase jump in resonance harmonic emission driven by strong laser fields
Yuan-Yuan Zhao(赵媛媛), Di Zhao(赵迪), Chen-Wei Jiang(蒋臣威), Ai-Ping Fang(方爱平), Shao-Yan Gao(高韶燕), Fu-Li Li(李福利)
Chin. Phys. B, 2020, 29 (2): 023201.
DOI: 10.1088/1674-1056/ab5fbf
We present a theoretical investigation of the multiphoton resonance dynamics in the high-order-harmonic generation (HHG) process driven by a strong continuous-wave (CW) driving field along with a weak harmonic control field. The Floquet theorem is employed to provide a nonperturbative and exact treatment of the interaction between a quantum system and the combined laser field. Multiple multiphoton-transition paths for the harmonic emission are coherently summed. The phase information about the paths can be extracted via Fourier transform analysis of the harmonic signals, which oscillate as a function of the relative phase between the driving and control fields. Phase jumps are observed when sweeping across the resonance by varying the frequency or intensity of the driving field. The phase variation as a function of driving frequency at a fixed intensity, and as a function of intensity at a fixed driving frequency, allows us to determine the intensity dependence of the transition energy of quantum systems.

Enhanced optical molasses cooling for Cs atoms with largely detuned cooling lasers
Di Zhang(张迪), Yu-Qing Li(李玉清), Yun-Fei Wang(王云飞), Yong-Ming Fu(付永明), Peng Li(李鹏), Wen-Liang Liu(刘文良), Ji-Zhou Wu(武寄洲), Jie Ma(马杰), Lian-Tuan Xiao(肖连团), Suo-Tang Jia(贾锁堂)
Chin. Phys. B, 2020, 29 (2): 023203. DOI: 10.1088/1674-1056/ab5fc6
We report a detailed study of the enhanced optical molasses cooling of Cs atoms, whose large hyperfine structure allows the use of largely red-detuned cooling lasers. We find that the combination of a large frequency detuning of about -110 MHz for the cooling laser and suitable control of the powers of the cooling and repumping lasers allows a cold temperature of ~5.5 μK to be reached. We obtain 5.1×107 atoms with a number density of around 1×1012 cm-3.
Our result yields a lower temperature than those obtained in other experiments, in which cold Cs atoms at a temperature of ~10 μK were achieved by optical molasses cooling.

Comparative study on atomic ionization in bicircular laser fields by length and velocity gauges S-matrix theory
Hong Xia(夏宏), Xin-Yan Jia(贾欣燕), Xiao-Lei Hao(郝小雷), Li Guo(郭丽), Dai-He Fan(樊代和), Gen-Bai Chu(储根柏), Jing Chen(陈京)
Chin. Phys. B, 2020, 29 (2): 023204. DOI: 10.1088/1674-1056/ab610c
Ionization of atoms in counter-rotating and co-rotating bicircular laser fields is studied using the S-matrix theory in both the length and velocity gauges. We show that for both bicircular fields, the ionization rates are enhanced when the two circularly polarized lights have comparable intensities. In addition, the curves of ionization rate versus the field-amplitude ratio of the two colors for counter-rotating and co-rotating fields coincide with each other in the length-gauge case at a total laser intensity of 5×1014 W/cm2, which agrees with the experimental observation. Moreover, the degree of coincidence between the ionization-rate curves of the two bicircular fields decreases with increasing field-amplitude ratio and decreasing total laser intensity. With the help of the ADK theory, the above characteristics of the ionization-rate curves can be well interpreted; they are related to the transition from the tunneling to the multiphoton ionization mechanism.

Theoretical investigations of collision dynamics of cytosine by low-energy (150-1000 eV) proton impact
Zhi-Ping Wang(王志萍), Feng-Shou Zhang(张丰收), Xue-Fen Xu(许雪芬), Chao-Yi Qian(钱超义)
Chin. Phys. B, 2020, 29 (2): 023401.
DOI: 10.1088/1674-1056/ab6313
Using a real-space, real-time implementation of time-dependent density functional theory nonadiabatically coupled to molecular dynamics (TDDFT-MD), we theoretically study both the static properties and the collision process of cytosine under 150-1000 eV proton impact at the microscopic level. The calculated ground state of cytosine accords well with experiments. It is found that the proton is scattered in all cases in the present study. Bond breaking in cytosine occurs when the energy loss of the proton is larger than 22 eV, and the main dissociation pathway of cytosine is the breaking of C1N2 and N8H10. In the range 150 eV≤Ek≤360 eV, as the incident energy of the proton increases, the excitation becomes more violent even though the interaction time is shortened, while in the range 360 eV<Ek≤1000 eV the excitation becomes less violent as the incident energy of the proton increases, indicating that the interaction time dominates. We also show two typical collision reaction channels by analyzing in detail the molecular ionization, the electronic density evolution, the energy loss of the proton, the vibrational frequency, and the scattering pattern. The results show that the loss of electrons can decrease the bond lengths of C3N8 and C5N6 while increasing the bond lengths of C4H11, C5H12, and C4C5 after the collision. Furthermore, it is found that the peak of the scattering angle shows a slight redshift compared with that of the kinetic-energy loss of the proton.

Vibrational effects on electron momentum distributions of outer valence orbitals of benzene
Yu Zhang(张钰), Shanshan Niu(牛珊珊), Yaguo Tang(唐亚国), Yichun Wang(王忆纯), Xu Shan(单旭), Xiangjun Chen(陈向军)
Chin. Phys. B, 2020, 29 (2): 023402. DOI: 10.1088/1674-1056/ab671b
The outer valence electron momentum distributions of benzene are reinvestigated with theoretical calculations involving vibrational effects.
The results are compared with recent experimental measurements [Phys. Rev. A 98 042705 (2018)]. The significant discrepancies between theory and experiment in previous works are now interpreted quantitatively, indicating that the vibrational motion of the benzene molecule has a noticeable influence on its electron momentum distributions.

Oxide-aperture-dependent output characteristics of circularly symmetric VCSEL structure
Wen-Yuan Liao(廖文渊), Jian Li(李健), Chuan-Chuan Li(李川川), Xiao-Feng Guo(郭小峰), Wen-Tao Guo(郭文涛), Wei-Hua Liu(刘维华), Yang-Jie Zhang(张杨杰), Xin Wei(韦欣), Man-Qing Tan(谭满清)
Chin. Phys. B, 2020, 29 (2): 024201. DOI: 10.1088/1674-1056/ab5fbd
The influence of the oxide aperture on the output characteristics of the circularly symmetric vertical-cavity surface-emitting laser (VCSEL) structure is investigated. To do so, VCSELs with different oxide aperture sizes are simulated by the finite-difference time-domain (FDTD) method. The relationships among the field distribution of mode superposition, the mode wavelength, the output spectra, and the far-field divergence for different oxide apertures are obtained. Further, VCSELs with oxide aperture sizes of 2.7 μm, 4.4 μm, 5.9 μm, 7 μm, 8 μm, 9 μm, and 18.7 μm are fabricated and characterized. The maximum output power increases from 2.4 mW to 5.7 mW as the oxide aperture increases from 5.9 μm to 9 μm. Meanwhile, the wavelength tuning rate decreases from 0.93 nm/mA to 0.375 nm/mA when the oxide aperture increases from 2.7 μm to 9 μm, and the thermal resistance decreases from 2.815 ℃/mW to 1.015 ℃/mW when the oxide aperture increases from 4.4 μm to 18.7 μm. It is demonstrated theoretically and experimentally that the wavelength spacing between adjacent modes increases with increasing injection current and becomes smaller as the oxide aperture increases.
Thus, a smaller aperture can effectively reduce mode overlap, but at the cost of decreased output power and increased wavelength tuning rate and thermal resistance.

A hybrid method of solving near-zone composite electromagnetic scattering from targets and underlying rough surface
Xi-Min Li(李西敏), Jing-Jing Li(李晶晶), Qian Gao(高乾), Peng-Cheng Gao(高鹏程)
Chin. Phys. B, 2020, 29 (2): 024202. DOI: 10.1088/1674-1056/ab5ef9
For composite electromagnetic (EM) scattering from a rough surface and a target above it under near-field conditions, a modified shooting and bouncing ray (SBR) method and an integral equation method (IEM), analytic methods combined with a two-scale model for the rough surface, are proposed to solve the composite near-field scattering problems. The modified method is verified to be effective and accurate by comparing the simulation results with measured results. Finally, the composite near-field scattering characteristics of a slanted plane and the rough water surface below it are obtained using the proposed methods, and the dynamic tendency of the composite scattering characteristics versus the near-field distance is analyzed, which may be of practical use in engineering programs that need the near-field characteristics of radar targets under extra-low-altitude conditions.

Dynamically adjustable asymmetric transmission and polarization conversion for linearly polarized terahertz wave
Tong Li(李彤), Fang-Rong Hu(胡放荣), Yi-Xian Qian(钱义先), Jing Xiao(肖靖), Long-Hui Zhang(张隆辉), Wen-Tao Zhang(张文涛), Jia-Guang Han(韩家广)
Chin. Phys. B, 2020, 29 (2): 024203. DOI: 10.1088/1674-1056/ab5ef8
The asymmetric transmission (AT) and polarization conversion of terahertz (THz) waves play a vital role in future THz communication, spectroscopy, and information processing.
Generally, it is very difficult and complicated to actively control the AT of electromagnetic (EM) waves using traditional devices. Here, we theoretically demonstrate a stereo-metamaterial (stereo-MM) consisting of a metal structure layer and a phase-transition structure layer with a polyimide spacer in between. The performance of the device is simulated using the finite integration technique (FIT). The results show that the AT and polarization conversion of linearly polarized waves can be dynamically controlled in the range of 1.0 THz-1.6 THz when the conductivity σ of vanadium dioxide (VO2) is changed under external stimulation. This study provides an example of actively controlling the AT and polarization conversion of EM waves.

Compressed ghost imaging based on differential speckle patterns
Le Wang(王乐), Shengmei Zhao(赵生妹)
Chin. Phys. B, 2020, 29 (2): 024204. DOI: 10.1088/1674-1056/ab671a
We propose a compressed ghost imaging scheme based on differential speckle patterns, named CGI-DSP. In the scheme, a series of bucket detector signals is acquired while a series of random speckle patterns illuminates an unknown object. The differential speckle patterns (differential bucket detector signals) are then obtained by taking the difference between the present random speckle patterns (present bucket detector signals) and the previous random speckle patterns (previous bucket detector signals). Finally, the image of the object can be obtained directly by applying the compressed sensing algorithm to the differential speckle patterns and differential bucket detector signals. The experimental and simulated results reveal that CGI-DSP can improve the imaging quality and reduce the number of measurements compared with traditional compressed ghost imaging schemes, because the scheme removes environmental illumination efficiently.
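The differential step at the heart of CGI-DSP can be illustrated with a toy one-dimensional sketch. All names and numbers below are illustrative, not from the paper, and a simple correlation estimate stands in for the paper's compressed-sensing solver; the point it demonstrates is that taking successive differences of speckle patterns and bucket signals cancels a constant environmental illumination before reconstruction.

```python
import random

def bucket(pattern, obj, ambient=0.0):
    """Bucket detector: total light collected from the illuminated object,
    plus a constant ambient (environmental) illumination term."""
    return sum(p * o for p, o in zip(pattern, obj)) + ambient

def differential_ghost_image(patterns, buckets, npix):
    """Correlate differential speckle patterns with differential bucket
    signals (a plain correlation estimate used here in place of the
    compressed-sensing reconstruction of the paper)."""
    dp = [[patterns[k][i] - patterns[k - 1][i] for i in range(npix)]
          for k in range(1, len(patterns))]
    db = [buckets[k] - buckets[k - 1] for k in range(1, len(buckets))]
    mean_db = sum(db) / len(db)
    img = [0.0] * npix
    for pat, b in zip(dp, db):
        for i in range(npix):
            img[i] += (b - mean_db) * pat[i]
    return [v / len(db) for v in img]

random.seed(0)
npix = 16
obj = [1.0 if i in (3, 7, 11) else 0.0 for i in range(npix)]  # toy 1D object
patterns = [[random.random() for _ in range(npix)] for _ in range(4000)]
ambient = 5.0  # constant environmental illumination
buckets = [bucket(p, obj, ambient) for p in patterns]
clean_buckets = [bucket(p, obj, 0.0) for p in patterns]  # no ambient light

img = differential_ghost_image(patterns, buckets, npix)
# Pixels belonging to the object should carry the largest weights.
top3 = sorted(range(npix), key=lambda i: img[i], reverse=True)[:3]
```

Because the ambient term is identical in consecutive bucket readings, it drops out of every differential signal, so the reconstruction from the contaminated buckets matches the one from the clean buckets.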
Enhancement effect of cumulative second-harmonic generation by closed propagation feature of circumferential guided waves
Guang-Jian Gao(高广健), Ming-Xi Deng(邓明晰), Ning Hu(胡宁), Yan-Xun Xiang(项延训)
Chin. Phys. B, 2020, 29 (2): 024301. DOI: 10.1088/1674-1056/ab628d
On the basis of the second-order perturbation approximation and the modal expansion approach, we investigate the enhancement effect of cumulative second-harmonic generation (SHG) of circumferential guided waves (CGWs) in a circular tube, which is inherently induced by the closed propagation feature of CGWs. An appropriate mode pair of primary- and double-frequency CGWs satisfying phase velocity matching and nonzero energy flux is selected to ensure that the second harmonic generated by primary CGW propagation can accumulate along the circumference. Using a coherent superposition of multiple waves, a model of unidirectional CGW propagation is established for analyzing the enhancement effect of cumulative SHG of the selected primary CGW mode. The theoretical analyses and numerical simulations directly demonstrate that the generated second harmonic does have a cumulative effect along the circumferential direction and that the closed propagation feature of CGWs does enhance the magnitude of the cumulative second harmonic. Potential applications of the enhancement effect of cumulative SHG of CGWs are considered and discussed. The theoretical analysis and numerical simulation perspective presented here yield previously unavailable insight into the physical mechanism of the enhancement effect of cumulative SHG by the closed propagation feature of CGWs in a circular tube.

Avalanching patterns of irregular sand particles in continual discrete flow
Ren Han(韩韧), Yu-Feng Zhang(张宇峰), Ran Li(李然), Quan Chen(陈泉), Jing-Yu Feng(冯靖禹), Ping Kong(孔平)
Chin. Phys. B, 2020, 29 (2): 024501.
DOI: 10.1088/1674-1056/ab65b8
We investigate the flow patterns of irregular sand particles under avalanching mode in a rotating drum by using the spatial filtering velocimetry technique. By exploring the variations of the velocity distribution of the granular flow, we find a type of avalanching pattern of irregular sand particles which is similar to that of spherical-particle flow. Because the avalanche in this pattern initiates at the middle of the drum and propagates gradually toward the edge, we name it the mid-to-edge avalanching pattern. Furthermore, we find another avalanching pattern which slumps from the edge and propagates toward the opposite edge of the flow surface, named the edge-to-edge pattern. By analyzing the temporal and spatial characteristics of these two avalanching patterns, we discover that both are caused by the avalanching particles constantly perturbing the axially adjacent particles. Thus, the particles on the flow surface are involved in avalanching sequentially, in order of their axial distance from the initial position.

Quantitative temperature imaging at elevated pressures and in a confined space with CH4/air laminar flames by filtered Rayleigh scattering
Bo Yan(闫博), Li Chen(陈力), Meng Li(李猛), Shuang Chen(陈爽), Cheng Gong(龚诚), Fu-Rong Yang(杨富荣), Yun-Gang Wu(吴运刚), Jiang-Ning Zhou(周江宁), Jin-He Mu(母金河)
Chin. Phys. B, 2020, 29 (2): 024701. DOI: 10.1088/1674-1056/ab5f00
Laminar methane/air premixed flames at different pressures in a newly developed high-pressure laminar burner are studied through Cantera simulation and filtered Rayleigh scattering (FRS). Different gas component fractions are obtained through the detailed numerical simulations.
This approach can be used to correct the FRS images for the large variations of the Rayleigh cross section in different flame regimes. The temperature distribution above the flat burner is then presented without stray light interference from soot and wall reflection. Results also show that the discrepancy with single-point thermocouple measurements is less than 6%. Finally, this study concludes that the relative uncertainty of the presented filtered Rayleigh scattering diagnostics is below 10% in single-shot imaging.

Effects of square micro-pillar array porosity on the liquid motion of near surface layer
Xiaoxi Qiao(乔小溪), Xiangjun Zhang(张向军), Ping Chen(陈平), Yu Tian(田煜), Yonggang Meng(孟永钢)
Chin. Phys. B, 2020, 29 (2): 024702. DOI: 10.1088/1674-1056/ab5fba
The influence of square micro-pillar array porosity on the liquid motion of the near-surface layer is investigated by quartz crystal microbalance (QCM). QCM is a powerful and promising technique for studying interfacial behavior, with great advantages in investigating the effects of surface microstructure, roughness, and arrays. In our experiments, three arrays with the same height of about 280 nm and center distance of 200 μm, but different diameters of about 78 μm, 139 μm, and 179 μm, are investigated. The results indicate that when the surface array has a large porosity, its influence on the liquid motion of the near-surface layer is slight, resulting in a small increase of the half-bandwidth variation due to the additional friction energy dissipation. When the surface array has a small porosity, the array tends to make the liquid film trapped in it oscillate with the substrate, so a layer of liquid film may behave like a rigid film, which also makes the liquid motion near the array layer more complicated.
Thus, for the #3 surface with a small porosity, both the absolute value of the frequency shift |Δf3| and the half-bandwidth variation ΔΓ3 increase markedly. The experimental results show good consistency with the theoretical model of Daikhin and Urbakh. This study sheds light on the influence mechanism of surface array porosity on the liquid motion of the near-surface layer.

Shape reconstructions and morphing kinematics of an eagle during perching manoeuvres
Di Tang(唐迪), Dawei Liu(刘大伟), Hai Zhu(朱海), Xipeng Huang(黄喜鹏), Zhongyong Fan(范忠勇), Mingxia Lei(雷鸣霞)
Chin. Phys. B, 2020, 29 (2): 024703. DOI: 10.1088/1674-1056/ab610a
The key to the high manoeuvrability of bird flight lies in the combined morphing of wings and tail. The perching of a wild Haliaeetus albicilla without running or wing flapping is recorded and investigated using high-speed digital video. A shape reconstruction method is proposed to describe the wing and tail contours during perching. The avian airfoil geometries of Aquila chrysaetos are extracted from noncontact surface measurements using a ROMBER 3D laser scanner. The wing planform, chord distribution, and twist distribution are fitted with convenient analytical expressions to obtain a 3D wing geometry. A three-jointed arm model is proposed to be associated with the 3D wing geometry, while a one-joint arm model is proposed to describe the kinematics of the tail; a 3D bird model is thereby established. The perching sequences of the wild eagle are recaptured and regenerated with the proposed 3D bird model. A quasi-steady aerodynamic model is applied in the aerodynamic predictions, a four-step Adams-Bashforth method is used to integrate the ordinary differential equations, and a BFGS-based optimization method is thus established to predict the perching motions.

Dynamic evolution of vortex structures induced by tri-electrode plasma actuator
Bo-Rui Zheng(郑博睿), Ming Xue(薛明), Chang Ge(葛畅)
Chin. Phys.
B, 2020, 29 (2): 024704. DOI: 10.1088/1674-1056/ab671f
Plasma flow control is a new type of active flow control approach based on plasma pneumatic actuation. Dielectric barrier discharge (DBD) actuators have become a focus of international aerodynamic research. However, the practical applications of typical DBDs are largely restricted by their limited discharge area and low induced velocity. Further improvement of their performance will benefit engineering applications. In this paper, high-speed schlieren imaging and high-speed particle image velocimetry (PIV) are employed to study the flow fields induced by three kinds of plasma actuation in a static atmosphere, and the differences in induced flow field structure among typical DBD, extended DBD (EX-DBD), and tri-electrode sliding discharge (TED) are compared. Analysis of the dynamic evolution of the maximum horizontal velocity over time, the velocity profile at a fixed horizontal position, and the momentum and body force in a control volume reveals that the induced peak velocity and profile velocity height of EX-DBD are higher than those of the other two types of actuation, suggesting that EX-DBD actuation has the strongest temporal aerodynamic effect among the three. The TED actuation not only enlarges the plasma extension but also has the longest duration over the entire pulsed period and the greatest influence on the height and width of the airflow near the wall surface. Thus, TED actuation can continuously influence a larger three-dimensional region above the surface of the plasma actuator.

Nonlinear simulation of multiple toroidal Alfvén eigenmodes in tokamak plasmas
Xiao-Long Zhu(朱霄龙), Feng Wang(王丰), Zheng-Xiong Wang(王正汹)
Chin. Phys. B, 2020, 29 (2): 025201.
DOI: 10.1088/1674-1056/ab610e
The nonlinear evolution of multiple toroidal Alfvén eigenmodes (TAEs) driven by fast ions is self-consistently investigated by kinetic simulations in toroidal plasmas. To clearly identify the effect of nonlinear coupling on beam ion loss, simulations of single-n modes are also carried out and compared with those of multiple-n modes, and the wave-particle resonance and phase-space trajectories of lost ions are analyzed in detail. It is found that in the multiple-n case, resonance overlap occurs, so that the fast ion loss level is considerably higher than the summed loss level over all single-n modes in the single-n case. Moreover, increasing the fast ion beta βh not only significantly increases the loss level in the multiple-n case but also significantly increases the loss level increment between the single-n and multiple-n cases. For example, the loss level in the multiple-n case for βh=6.0% can reach 13% of the beam ions and is 44% higher than the summed loss level calculated from all individual single-n modes. On the other hand, when closely spaced resonance overlap occurs in the multiple-n case, the release of mode energy is increased, so that widely spaced resonances can also take place. In addition, phase-space characterization is obtained in both the single-n and multiple-n cases.

Discharge simulation and volt-second consumption analysis during ramp-up on the CFETR tokamak
Cheng-Yue Liu(刘成岳), Bin Wu(吴斌), Jin-Ping Qian(钱金平), Guo-Qiang Li(李国强), Ya-Wei Hou(侯雅巍), Wei Wei(韦维), Mei-Xia Chen(陈美霞), Ming-Zhun Lei(雷明准), Yong Guo(郭勇)
Chin. Phys. B, 2020, 29 (2): 025202.
DOI: 10.1088/1674-1056/ab610d
The plasma current ramp-up is an important process in tokamak discharge, which directly affects the quality of the plasma and system resources such as volt-second consumption and the plasma current profile. The China Fusion Engineering Test Reactor (CFETR) ramp-up discharge is predicted with the tokamak simulation code (TSC). The main plasma parameters, the plasma configuration evolution, and the coil current evolution are presented. The volt-second consumption during CFETR ramp-up is analyzed for different plasma shaping times and different plasma current ramp rates dIP/dt, with and without assisted heating. The results show that an earlier shaping time and a faster plasma current ramp rate with auxiliary heating can save 5%-10% of the volt-second consumption, while the system's capability to provide volt-seconds is approximately 470 V·s. These simulations provide a reference for the engineering design of CFETR.

Directional motion of dust particles at different gear structures in a plasma
Chao-Xing Dai(戴超星), Chao Song(宋超), Zhi-Xiang Zhou(周志向), Wen-Tao Sun(孙文涛), Zhi-Qiang Guo(郭志强), Fu-Cheng Liu(刘富成), Ya-Feng He(贺亚峰)
Chin. Phys. B, 2020, 29 (2): 025203. DOI: 10.1088/1674-1056/ab6109
Directional motion of dust particles in a dusty plasma ratchet is observed experimentally. The dusty plasma ratchet consists of two concentric gears with asymmetric sawteeth. It is found that the sawtooth number affects the directional motion of dust particles along the saw channel. With increasing sawtooth number, the particle velocity first increases and then decreases, and there is an optimum sawtooth number that induces fast rotation of the dust particles. The velocities of the dust particles change as they flow along the saw channel. We also explore the force acting on the dust particles experimentally.
The E×B drift instability in Hall thruster using 1D PIC/MCC simulation
Zahra Asadi, Mehdi Sharifian, Mojtaba Hashemzadeh, Mahmood Borhani Zarandi, Hamidreza Ghomi Marzdashti
Chin. Phys. B, 2020, 29 (2): 025204. DOI: 10.1088/1674-1056/ab6719
The E×B drift instability in a Hall thruster is studied using a one-dimensional particle-in-cell (PIC) simulation method. From the dispersion relation, it is found that unstable modes occur only in discrete bands in k space at the cyclotron harmonics. The results indicate that the number of unstable modes increases with increasing external electric field and decreases with increasing radial magnetic field. The ion mass does not affect the instability wavelength. Furthermore, the results confirm that there is an instability with short wavelength and high frequency. Finally, it is shown that the electron and ion distribution functions deviate from their initial states and the instability is eventually saturated by ion trapping in the azimuthal direction. Also, for light ions the frequency and phase velocity are very high, which could lead to high electron mobility in the axial direction.

Geant4 simulation of proton-induced single event upset in three-dimensional die-stacked SRAM device
Bing Ye(叶兵), Li-Hua Mo(莫莉华), Tao Liu(刘涛), Jie Luo(罗捷), Dong-Qing Li(李东青), Pei-Xiong Zhao(赵培雄), Chang Cai(蔡畅), Ze He(贺泽), You-Mei Sun(孙友梅), Ming-Dong Hou(侯明东), Jie Liu(刘杰)
Chin. Phys. B, 2020, 29 (2): 026101. DOI: 10.1088/1674-1056/ab5fc4
Geant4 Monte Carlo simulation results of the single event upset (SEU) induced by protons with energies ranging from 0.3 MeV to 1 GeV are reported. The SEU cross sections for planar and three-dimensional (3D) die-stacked SRAM are calculated.
The results show that the SEU cross sections of the planar device and the 3D device differ under the low-energy proton direct ionization mechanism, but are almost the same for high-energy protons. In addition, the multi-bit upset (MBU) ratio and patterns are presented and analyzed. The results indicate that the MBU ratio of the 3D die-stacked device is higher than that of the planar device, and that its MBU patterns are more complicated. Finally, the on-orbit upset rates for the 3D die-stacked device and the planar device are calculated with the SPACE RADIATION software. The calculation results indicate that, regardless of the orbital parameters and shielding conditions, the on-orbit upset rate of the planar device is higher than that of the 3D die-stacked device.

Composition effect on elastic properties of model NiCo-based superalloys
Weijie Li(李伟节), Chongyu Wang(王崇愚)
Chin. Phys. B, 2020, 29 (2): 026102. DOI: 10.1088/1674-1056/ab6204
NiCo-based superalloys exhibit higher strength and creep resistance than conventional superalloys. Compositional effects on the elastic properties of the γ and γ' phases in newly developed NiCo-based superalloys were investigated by first-principles calculations combined with special quasi-random structures. The lattice constant, bulk modulus, and elastic constants vary linearly with the Co concentration in the NiCo solution. In the selected (Ni, Co)3(Al, W) and (Ni, Co)3(Al, Ti) model γ' phases, the lattice constant and bulk modulus show a linear trend with alloying element concentrations. The addition of Co, Ti, and W can regulate the lattice mismatch and increase the bulk modulus simultaneously. W addition performs excellently in strengthening the elastic properties of the γ' phase. Systems become unstable with higher W and Ni contents, e.g., (Ni0.75Co0.25)3(Al0.25W0.75), and become brittle with higher W and Co addition, e.g., Co3(Al0.25W0.75).
Furthermore, Co, Ti, and W increase the elastic constants on the whole, and such high elastic constants always correspond to a high elastic modulus. The anisotropy index always corresponds to the character of Young's modulus in a specific direction.

Doping effects on the stacking fault energies of the γ' phase in Ni-based superalloys
Chin. Phys. B, 2020, 29 (2): 026401. DOI: 10.1088/1674-1056/ab6203
The doping effects on the stacking fault energies (SFEs), including the superlattice intrinsic stacking fault and the superlattice extrinsic stacking fault, were studied by first-principles calculations of the γ' phase in Ni-based superalloys. The formation energy results show that the main alloying elements in Ni-based superalloys, such as Re, Cr, Mo, Ta, and W, prefer to occupy the Al site in Ni3Al; Co shows a weak tendency to occupy the Ni site, and Ru shows a weak tendency to occupy the Al site. The SFE results show that Co and Ru could decrease the SFEs when added to the fault planes, while the other main elements increase the SFEs. The double-packed superlattice intrinsic stacking fault energies are lower than the superlattice extrinsic stacking fault energies when elements (except Co) occupy an Al site. Furthermore, the SFEs show a symmetrical distribution with the location of the elements in the ternary model. A detailed electronic structure analysis of the Ru effects shows that the SFEs correlate not only with the symmetry reduction of the charge accumulation but also with the changes in structural energy.

High pressure and high temperature induced polymerization of C60 quantum dots
Shi-Hao Ruan(阮世豪), Chun-Miao Han(韩春淼), Fu-Lu Li(李福禄), Bing Li(李冰), Bing-Bing Liu(刘冰冰)
Chin. Phys. B, 2020, 29 (2): 026402.
DOI: 10.1088/1674-1056/ab6657
We synthesized C60 quantum dots (QDs) of uniform size by a modified ultrasonic process and studied their polymerization under high pressure and high temperature (HPHT). Raman spectra showed that a phase assemblage of a dimer (D) phase (62 vol%) and a one-dimensional chain orthorhombic (O) phase (38 vol%) was obtained at 1.5 GPa and 300 °C. At 2.0 GPa and 430 °C, the proportion of the O phase increased to 46 vol%, while the corresponding D phase decreased to 54 vol%. Compared with bulk and nanosized C60, C60 QDs cannot easily form a high-dimensional polymeric structure. This is probably caused by the small particle size, the orientationally disordered structure of the C60 QDs, and the barrier of oxide functional groups between C60 molecules. Our studies enhance the understanding of the polymerization behavior of low-dimensional C60 nanomaterials under HPHT conditions.

Triphenylene adsorption on Cu(111) and relevant graphene self-assembly
Qiao-Yue Chen(陈乔悦), Jun-Jie Song(宋俊杰), Liwei Jing(井立威), Kaikai Huang(黄凯凯), Pimo He(何丕模), Hanjie Zhang(张寒洁)
Chin. Phys. B, 2020, 29 (2): 026801. DOI: 10.1088/1674-1056/ab6583
The adsorption behavior of triphenylene (TP) and the subsequent graphene self-assembly on Cu(111) were investigated mainly by scanning tunneling microscopy (STM). At monolayer coverage, TP molecules form a long-range ordered adsorption structure on Cu(111) with a uniform orientation. Graphene self-assembly on the Cu(111) substrate with TP molecules as the precursor was achieved by annealing the sample, and a large-scale graphene overlayer was successfully captured after annealing up to 1000 K.
Three different Moiré patterns generated by relative rotational disorder between the graphene overlayer and the Cu(111) substrate were observed: one with a 4° rotation and a periodicity of 2.93 nm, another with a 7° rotation and a Moiré supercell of 2.15 nm, and the third with a 10° rotation and a periodicity of 1.35 nm.

Molecular dynamics simulation of atomic hydrogen diffusion in strained amorphous silica
Fu-Jie Zhang(张福杰), Bao-Hua Zhou(周保花), Xiao Liu(刘笑), Yu Song(宋宇), Xu Zuo(左旭)
Chin. Phys. B, 2020, 29 (2): 027101. DOI: 10.1088/1674-1056/ab5fc5
Understanding hydrogen diffusion in amorphous SiO2 (a-SiO2), especially under strain, is of prominent importance for improving the reliability of semiconductor devices such as metal-oxide-semiconductor field-effect transistors. In this work, the diffusion of atomic hydrogen in a-SiO2 under strain is simulated by molecular dynamics (MD) with the ReaxFF force field. A defect-free a-SiO2 atomic model, whose local structure parameters accord well with experimental results, is established. Strain is applied using the uniaxial tensile method, and the maximum strain, ultimate strength, and Young's modulus of the a-SiO2 model under different tensile rates are calculated. The diffusion of atomic hydrogen is simulated by MD with ReaxFF, and its pathway is identified as a series of hops among local energy minima. Moreover, the calculated diffusivity and activation energy show a dependence on strain. The diffusivity is substantially enhanced by tensile strain at low temperature (below 500 K), but reduced at high temperature (above 500 K). The activation energy decreases as the strain increases. Our research shows that tensile strain can influence hydrogen transport in a-SiO2, which may be utilized to improve the reliability of semiconductor devices.
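The diffusivity and activation energy reported in the preceding abstract are linked by the standard Arrhenius relation D = D0 exp(-Ea/(kB T)). A minimal sketch (with illustrative numbers, not values from the paper) of how Ea is extracted from diffusivities computed at two temperatures:

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_activation_energy(d1, t1, d2, t2):
    """Activation energy Ea (eV) from diffusivities d1, d2 at temperatures
    t1, t2 (K), assuming the Arrhenius form D = D0 * exp(-Ea / (kB * T))."""
    return KB_EV * math.log(d1 / d2) / (1.0 / t2 - 1.0 / t1)

def prefactor(d, t, ea):
    """Back out the Arrhenius prefactor D0 from one (D, T) point."""
    return d * math.exp(ea / (KB_EV * t))

# Illustrative diffusivities (m^2/s) at two temperatures:
ea = arrhenius_activation_energy(2.0e-9, 400.0, 8.0e-9, 600.0)  # ~0.143 eV
d0 = prefactor(2.0e-9, 400.0, ea)
```

Plotting ln D against 1/T (an Arrhenius plot) over several temperatures gives Ea from the slope; repeating the fit at each applied strain yields the strain dependence of the activation energy described in the abstract.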
Simulation of GaN micro-structured neutron detectors for improving electrical properties
Xin-Lei Geng(耿昕蕾), Xiao-Chuan Xia(夏晓川), Huo-Lin Huang(黄火林), Zhong-Hao Sun(孙仲豪), He-Qiu Zhang(张贺秋), Xing-Zhu Cui(崔兴柱), Xiao-Hua Liang(梁晓华), Hong-Wei Liang(梁红伟)
Chin. Phys. B, 2020, 29 (2): 027201. DOI: 10.1088/1674-1056/ab671e
Achieving superior detection performance in semiconductor neutron detectors remains a challenging task. In this paper, we study a novel GaN micro-structured neutron detector (GaN-MSND) and compare three methods for improving the depletion region width (W): modulating the trench depth, introducing a dielectric layer, and introducing a p-type inversion region. It is observed that the electric field intensity can be modulated by scaling the trench depth. On the other hand, an electron blocking region is formed in the detector enveloped with a dielectric layer. Furthermore, introducing the p-type inversion region produces a new p/n junction, which not only promotes further expansion of the depletion region but also reduces the electric field intensity produced by the main junction. All these methods can considerably enhance the working voltage as well as W. Among them, the improvement of W in the GaN-MSND with the p-type inversion region is the most significant, and W can reach 12.8 μm when the carrier concentration of the p-type inversion region is 10^17 cm^-3. Consequently, W improves by 200% for the designed GaN-MSND compared with the design without these additions. This work supports the fabrication of GaN-MSNDs with a superior detection limit in intense radiation fields.

Breakdown voltage enhancement in GaN channel and AlGaN channel HEMTs using large gate metal height
Zhong-Xu Wang(王中旭), Lin Du(杜林), Jun-Wei Liu(刘俊伟), Ying Wang(王颖), Yun Jiang(江芸), Si-Wei Ji(季思蔚), Shi-Wei Dong(董士伟), Wei-Wei Chen(陈伟伟), Xiao-Hong Tan(谭骁洪), Jin-Long Li(李金龙), Xiao-Jun Li(李小军), Sheng-Lei Zhao(赵胜雷), Jin-Cheng Zhang(张进成), Yue Hao(郝跃)
Chin. Phys. B, 2020, 29 (2): 027301. DOI: 10.1088/1674-1056/ab5fb9
A large gate metal height technique is proposed to enhance the breakdown voltage in GaN channel and AlGaN channel high-electron-mobility transistors (HEMTs). For GaN channel HEMTs with gate-drain spacing LGD=2.5 μm, the breakdown voltage VBR increases from 518 V to 582 V when the gate metal height h is increased from 0.2 μm to 0.4 μm. For GaN channel HEMTs with LGD=7 μm, VBR increases from 953 V to 1310 V when h is increased from 0.8 μm to 1.6 μm. The breakdown voltage enhancement results from the increase of the gate sidewall capacitance and the extension of the depletion region. For the Al0.4Ga0.6N channel HEMT with LGD=7 μm, VBR increases from 1535 V to 1763 V when h is increased from 0.8 μm to 1.6 μm, yielding a high average breakdown electric field of 2.51 MV/cm. Simulation and analysis indicate that a large gate metal height is an effective method to enhance the breakdown voltage in GaN-based HEMTs, and this method can be utilized in all lateral semiconductor devices.

A simple tight-binding approach to topological superconductivity in monolayer MoS2
H Simchi
Chin. Phys. B, 2020, 29 (2): 027401. DOI: 10.1088/1674-1056/ab6552
Monolayer molybdenum disulfide (MoS2) has a honeycomb crystal structure. Here, considering the triangular sublattice of molybdenum atoms, a simple tight-binding Hamiltonian is derived for studying the phase transition and topological superconductivity in MoS2 under uniaxial strain. It is shown that the spin-singlet p+ip wave phase is a topological superconducting phase with nonzero Chern numbers.
When the chemical potential is greater (smaller) than the spin-orbit coupling (SOC) strength, the Chern number is equal to four (two); otherwise it is equal to zero. The results also show that, if the superconducting energy gap is smaller than the SOC strength and the chemical potential is greater than the SOC strength, zero-energy Majorana states exist. Finally, we show that the topological superconducting phase is preserved under uniaxial strain.

Time-dependent photothermal characterization on damage of fused silica induced by pulsed 355-nm laser with high repetition rate
Chun-Yan Yan(闫春燕), Bao-An Liu(刘宝安), Xiang-Cao Li(李香草), Chang Liu(刘畅), Xin Ju(巨新)
Chin. Phys. B, 2020, 29 (2): 027901. DOI: 10.1088/1674-1056/ab671d
Time-dependent damage to fused silica induced by a high-repetition-rate ultraviolet laser is investigated. Photothermal spectroscopy (PTS) and optical microscopy (OM) are used to characterize the evolution of damage pits with irradiation time. The experimental results show that, in the pre-damage stage of the fused silica sample irradiated by the 355-nm laser, the photothermal signal emerges and then evolves owing to the absorption of laser energy by defects. During the visible damage stage, the photothermal signal decreases gradually from its maximum value because of the aggravation of the damage and the splashing of the material. This method can be used to estimate the operating lifetime of optical elements in engineering.

Atomically flat surface preparation for surface-sensitive technologies
Cen-Yao Tang(唐岑瑶), Zhi-Cheng Rao(饶志成), Qian-Qian Yuan(袁茜茜), Shang-Jie Tian(田尚杰), Hang Li(李航), Yao-Bo Huang(黄耀波), He-Chang Lei(雷和畅), Shao-Chun Li(李绍春), Tian Qian(钱天), Yu-Jie Sun(孙煜杰), Hong Ding(丁洪)
Chin. Phys. B, 2020, 29 (2): 028101.
DOI: 10.1088/1674-1056/ab6586
Surface-sensitive measurements are crucial to many types of research in condensed matter physics. However, it is difficult to obtain atomically flat surfaces of many single crystals by the commonly used mechanical cleavage. We demonstrate that the grind-polish-sputter-anneal method can be used to obtain atomically flat surfaces on topological materials. Three types of surface-sensitive measurements are performed on the CoSi (001) surface with dramatically improved data quality. This method extends the reach of surface-sensitive measurements to hard-to-cleave alloys and can be applied to irregular single crystals with selected crystalline planes. It may become a routine process for preparing atomically flat surfaces for surface-sensitive technologies.

High sensitive pressure sensors based on multiple coating technique
Rizwan Zahoor, Chang Liu(刘畅), Muhammad Rizwan Anwar, Fu-Yan Lin(林付艳), An-Qi Hu(胡安琪), Xia Guo(郭霞)
Chin. Phys. B, 2020, 29 (2): 028102. DOI: 10.1088/1674-1056/ab6721
A multi-coating technique of reduced graphene oxide (RGO) is proposed to increase the sensitivity of paper-based pressure sensors. A maximum sensitivity of 17.6 kPa^-1 below 1.4 kPa is achieved. The electrical sensing mechanism is attributed to the percolation effect. Such paper pressure sensors were applied to monitor motor vibration, which indicates their potential for mechanical flaw detection by analyzing waveform differences.

A numerical study on pattern selection in crystal growth by using anisotropic lattice Boltzmann-phase field method
Zhaodong Zhang(张兆栋), Yuting Cao(曹宇婷), Dongke Sun(孙东科), Hui Xing(邢辉), Jincheng Wang(王锦程), Zhonghua Ni(倪中华)
Chin. Phys. B, 2020, 29 (2): 028103.
DOI: 10.1088/1674-1056/ab6718 Abstract ( 122 )   HTML   PDF (4366KB) ( 102 )   Pattern selection during crystal growth is studied by using the anisotropic lattice Boltzmann-phase field model. In the model, the phase transition, melt flows, and heat transfer are coupled and mathematically described by using the lattice Boltzmann (LB) scheme. The anisotropic streaming-relaxation operation fitting into the LB framework is implemented to model interface advancing with various preferred orientations. Crystal pattern evolutions are then numerically investigated in the conditions of with and without melt flows. It is found that melt flows can significantly influence heat transfer, crystal growth behavior, and phase distributions. The crystal morphological transition from dendrite, seaweed to cauliflower-like patterns occurs with the increase of undercoolings. The interface normal angles and curvature distributions are proposed to quantitatively characterize crystal patterns. The results demonstrate that the distributions are corresponding to crystal morphological features, and they can be therefore used to describe the evolution of crystal patterns in a quantitative way. Effects of buried oxide layer on working speed of SiGe heterojunction photo-transistor Xian-Cheng Liu(刘先程), Jia-Jun Ma(马佳俊), Hong-Yun Xie(谢红云), Pei Ma(马佩), Liang Chen(陈亮), Min Guo(郭敏), Wan-Rong Zhang(张万荣) Chin. Phys. B, 2020, 29 (2):  028501.  DOI: 10.1088/1674-1056/ab5f01 Abstract ( 118 )   HTML   PDF (515KB) ( 90 )   The effects of buried oxide (BOX) layer on the capacitance of SiGe heterojunction photo-transistor (HPT), including the collector-substrate capacitance, the base-collector capacitance, and the base-emitter capacitance, are studied by using a silicon-on-insulator (SOI) substrate as compared with the devices on native Si substrates. 
By introducing the BOX layer into Si-based SiGe HPT, the maximum photo-characteristic frequency ft, opt of SOI-based SiGe HPT reaches up to 24.51 GHz, which is 1.5 times higher than the value obtained from Si-based SiGe HPT. In addition, the maximum optical cut-off frequency fβ, opt, namely its 3-dB bandwidth, reaches up to 1.13 GHz, improved by 1.18 times. However, with the increase of optical power or collector current, this improvement on the frequency characteristic from BOX layer becomes less dominant as confirmed by reducing the 3-dB bandwidth of SOI-based SiGe HPT which approaches to the 3-dB bandwidth of Si-based SiGe HPT at higher injection conditions. Memristor-based vector neural network architecture Hai-Jun Liu(刘海军), Chang-Lin Chen(陈长林), Xi Zhu(朱熙), Sheng-Yang Sun(孙盛阳), Qing-Jiang Li(李清江), Zhi-Wei Li(李智炜) Chin. Phys. B, 2020, 29 (2):  028502.  DOI: 10.1088/1674-1056/ab65b5 Abstract ( 156 )   HTML   PDF (568KB) ( 96 )   Vector neural network (VNN) is one of the most important methods to process interval data. However, the VNN, which contains a great number of multiply-accumulate (MAC) operations, often adopts pure numerical calculation method, and thus is difficult to be miniaturized for the embedded applications. In this paper, we propose a memristor based vector-type backpropagation (MVTBP) architecture which utilizes memristive arrays to accelerate the MAC operations of interval data. Owing to the unique brain-like synaptic characteristics of memristive devices, e.g., small size, low power consumption, and high integration density, the proposed architecture can be implemented with low area and power consumption cost and easily applied to embedded systems. The simulation results indicate that the proposed architecture has better identification performance and noise tolerance. 
When the device precision is 6 bits and the error deviation level (EDL) is 20%, the proposed architecture can achieve an identification rate, which is about 92% higher than that for interval-value testing sample and 81% higher than that for scalar-value testing sample. ISSN 1674-1056   CN 11-5639/O4 , Vol. 29, No. 2 Previous issues 1992 - present
Friday, August 17, 2012

Aether Action

It is really unfortunate that, over my forty-year sojourn with science, mainstream science has not yet united the charge and gravity forces. If you do not know what unification means, do not worry, because there are explanations galore for the proposition of charge and gravity unification. Moreover, the limitations of mainstream science are obscured by the tensor algebra of relativity, the particle zoo of the Standard Model, and the mysteries of black holes, dark matter, and dark energy. This complexity renders mainstream science's explanations unintelligible to most people. My life with science and technology has involved discovery of meaning and a deeper understanding of being. I enjoy very challenging problems in science and technology and have tended to work on problems that others cannot easily solve. Thus it is quite a pleasure to discover the aethertime universe, in which all physical laws and constants derive from a simple set of rational beliefs in discrete matter and action along with the Schrödinger equation. By augmenting continuous space and time with discrete matter and action, gravity and charge forces become scaled versions of each other, and there are many other puzzles that discrete matter and action address. In fact, aethertime's particle-like Cartesian and wave-like relational representations of reality reveal the mystery of consciousness along with the vicissitudes and evolution of feeling and emotion. To explain the inexplicable, discrete matter and time delay provide a rational universe based on a set of three mathematical axioms, axioms that show the mystery of consciousness as well as the purpose and meaning of existence. Aethertime shows that there is a kind of spirituality within a rational universe, with the gifts of matter, time, and action as a basis for imagining desirable futures.
The aethertime universe has three primal beliefs as origin, destiny, and purpose, a trimal that discovers meaning and purpose for  being. Every life and every universe has a beginning, has a destiny, and has a purpose in discovery and aethertime is a rationale for our universe that also has an origin, has a destiny, and has a purpose in discovery. Humans and all life share and enjoy but a very thin slice of time and in fact all of human civilization is barely 5,000-10,000 human lifetimes, which is a bare one-hundred-thousandth of the lifetime of our universe. The primordial seed of all that we are is in discrete matter, time delay, and their action and we are therefore the progeny of the action of matter in time, even as we imagine our many possible futures. The universe, all life, and humanity would not be and we would not be without both the actions and the possibilities of matter that is our purpose in discovering how the universe works. Religions believe in the supernatural, which seems like an otherwise harmless part of most other people’s lives. Religions have variously selected beliefs that are often associated with selective interpretations of ancient stories with mysterious supernatural origins that seem by definition irrational, but so what? People believe in a great many irrational things like extraterrestrial UFO's and conspiracies and yet people still survive and sometimes even thrive with many such irrational beliefs. Some people believe that they are beautiful and attractive in spite of evidence to the contrary in the mirror every morning. After all is said and done, most of us can and still do agree to live by the golden rule and have compassion for others and limit our selfishness and adhere to the norms of civilization even without any supernatural stories to guide us. However, we also then agree to live by a code of justice enforcing those norms with punishment meted out to those who violate civilization's norms.  
Certain elements of religion do show a potentially destructive religio-politico zealotry that often seems to violate civil norms, but really this behavior is not unique for any particular religious ideology or even for religion at all. Religious and political zealotry by their very natures have a potential for persecution, for war, for inquisition, for shunning, for excommunication, and for other religious and political retributions.  Religions believe in an afterlife that is free from all of the misery and selfishness of life, which can lead to self-destructive behavior. Leaving this life in favor of some imagined perfect afterlife can be the source of very destructive behavior, both for individuals as well as others whose lives those individuals touch. We all have a purpose in discovering how the universe works, which can be as mundane as what is for lunch or as profound as the origin of all things. For me to imagine a desirable future, though, I need something much more rational and much better tied to a rational universe than any of these religious or political beliefs. After all, any of these beliefs, even Buddhism and capitalism, has its zealots. So I now count myself as a believer of sorts, and I have come to believe in both science and in the metascience of discrete matter, matter exchange, and time delay. Aethertime is a simple set of rational beliefs that anchors existence. Although there will always be some mysteries and gaps in any science, thank goodness that science will always explain the explainable. But then there will always be the inexplicable that science can never hope to explain, and as a result, we all also need the spiritual or supernatural stories for the inexplicable and the ineffable parts of existence. For the inexplicable, we all need primal beliefs; in an origin, in a destiny, and in a purpose--the trimal. That we need this trimal belief is self evident since there would be no conscious life without unfounded and unconditioned belief. 
We can choose to ignore the inexplicable, but that simply reduces our purpose to some default or innate belief. In fact, most people accept their primal beliefs from established supernatural agents, which have been providing such guidance from a diverse set of ancient stories for thousands of years. Discrete matter and time delay are a framework for existence that helps me understand all the extant beliefs of civilization: religious, political, and philosophical. Through the prism of aethertime, the wisdom of ancient stories comes alive, and aethertime provides an understanding of human reason.
Quantum mechanics

Quantum mechanics is the theory of what happens at very small dimensions, on the order of 10⁻¹⁰ meters (the scale of atoms) or less! It is therefore the theory which must be used in order to understand atoms and elementary particles. According to quantum mechanics, what is "out there" is a vast amount of space – not an empty backdrop, but actually something. This space is filled with particles so small that the distance between them is huge compared to their own sizes. Not only that, but they are waves, or something else which acts sometimes like waves and sometimes like particles. The modern interpretation of this is in terms of fields, things which have a value (and perhaps a direction) at every point in space. "Every particle and every wave in the Universe is simply an excitation of a quantum field that is defined over all space and time." (Blundell, 1)

Nobody can actually measure simultaneously where a particle is and how fast it is moving (or how much energy it possesses and when). This effect is referred to as indeterminacy, or the Uncertainty Principle, one of the more uncomfortable and, simultaneously, fruitful results of the theory. As a result of this indeterminacy, energy need not be conserved, regardless of thermodynamics, for very short periods of time, giving rise to all sorts of unexpected phenomena, such as radiation from black holes. But that is another subject.

Time-dependent non-relativistic Schrödinger equation (from Wikipedia)

QM is explained by a mathematical formalism based on an equation, generally referred to as the Schrödinger equation, although it exists in several forms (differential, matrix, bra-ket, tensor). The solution to this equation is called the wave function, represented by the Greek letter ψ. The wave function serves to predict the probability that the system under study is in a given state. It gives only a probability for the state.
(In fact, the probability is not given by the wave function itself, but by its complex square.) This knowledge only of probabilities really irks some people, and nobody really understands what it means (as Richard Feynman, one of the greatest of quantum theorists, said). But the mathematics works.

According to QM, some parameters of a system, such as energy or wavelength, can only take on certain values; any values in between are not allowed. Such allowed values are called eigenvalues. The eigenvalues are separated by minimal "distances" called quanta, and the system is said to be quantized. We will see a good example of them when we look at atomic structure.

An important result of QM is that certain particles known as fermions are constrained so that two of them can never occupy the same QM state. This phenomenon, called the Exclusion Principle, is at the root of solid-state physics and therefore of the existence of transistors and all the technologies dependent thereupon – portable computers, mobile telephones, space exploration and the Internet, to mention just a few examples. So QM has indeed revolutionized modern life, for the better and for the worse. The exclusion principle is also responsible for the fact that electrons in a collapsing super-dense star cannot all be in the same state, so there is a pressure effectively keeping them from being compressed any further. We will read more about that in the cosmology chapter. Closer to home, fermions constitute matter, including us.

An important subject of study and discussion in current theoretical physics is the interpretation of QM, such as in the many-worlds hypothesis, but that subject is beyond the scope of this article.

Go on to read about relativity, because it's probably not what you thought it was.

Blundell, 1.
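To make the probability interpretation concrete, here is a small illustrative Python sketch (my own example, not from Blundell) using the textbook ground state of a particle in a one-dimensional box. The complex square of the wave function behaves exactly like a probability density: it integrates to 1 over the whole box, and to 0.5 over the symmetric left half.

```python
import numpy as np

# Ground-state wave function of a particle in a 1-D box of length L:
# psi(x) = sqrt(2/L) * sin(pi * x / L); |psi|^2 is the probability density.
L = 1.0
x = np.linspace(0.0, L, 100001)
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)
density = np.abs(psi) ** 2

# Trapezoid-rule integration of |psi|^2 over the whole box: should be 1,
# i.e. the particle is certainly *somewhere* in the box.
total = np.sum((density[1:] + density[:-1]) * np.diff(x)) / 2.0

# Probability of finding the particle in the left half [0, L/2]:
half_n = len(x) // 2 + 1
xs, ds = x[:half_n], density[:half_n]
half = np.sum((ds[1:] + ds[:-1]) * np.diff(xs)) / 2.0
```

By symmetry the left-half probability comes out to exactly one half; a measurement can still find the particle anywhere the density is nonzero.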
Light and Molecules
Mario Barbatti's Research Group

Methods for nonadiabatic dynamics

Nonadiabatic dynamics

When a molecule absorbs a photon in the UV or visible range, the energy goes to its electrons, whose configuration changes in comparison to the ground-state electronic density. The probability of absorbing a photon as a function of its wavelength – the absorption spectrum – is discussed in "UV/Vis spectrum simulations". Here, we will be concerned with what happens after the absorption. The new electronic density generated right after the photon absorption does not, in general, correspond to an equilibrium state of the molecule. This means that there are forces acting on the atoms, inducing conformational changes (an adiabatic process). Dynamics simulation in the excited states is a great method for monitoring how these changes take place. You can learn more about the changes themselves in "Nonadiabatic ultrafast phenomena".

There are a few main challenges concerning excited-state dynamics:
• First, the potential energy of the excited state is normally much more complicated than that of the ground state. This means that we cannot use simple potential energy models, as in molecular mechanics, to compute the forces. Their computation requires solving the Schrödinger equation, which means that we have to deal with very high computational costs.
• The potential energy of the electronically excited state into which the molecule is excited is often very near the potential energy of other excited states. For this reason, the molecule can jump to these other states during the relaxation (a nonadiabatic process). We also have to deal with this possibility.
• The relaxation dynamics may proceed through several different pathways. These pathways should be mapped and their relative importance evaluated.
The main method that we use in our group to investigate excited-state dynamics is the surface hopping approach, which was proposed by Tully and Preston in the early 1970s (see review in Ref). This is a semiclassical method which allows keeping the computational costs under control. In surface hopping, the challenges enumerated above are addressed in the following way:
• The adiabatic processes are treated by solving Newton's equations for the nuclei under the excited-state forces.
• The nonadiabatic processes are treated by simultaneously computing the transition probability to other states and stochastically evaluating whether the molecule should stay in the same state or jump to another one.
• The multiple reaction pathways are evaluated statistically by following a large number of trajectories starting with different initial conditions.

All these procedures are performed with the Newton-X program package, which we have specially developed for computing surface hopping. Two trajectories starting with the same initial conditions may have different fates due to the stochastic nature of the method.

Numerical nonadiabatic couplings

One of the main bottlenecks of nonadiabatic simulations is the computation of nonadiabatic couplings, which are the terms that connect different electronic states. These couplings are not usually available in standard quantum-chemical programs for most quantum-chemical methods. An alternative to the explicit computation of the nonadiabatic couplings is to compute the time-derivative couplings, as proposed by Hammes-Schiffer and Tully. Time-derivative couplings can be evaluated numerically by computing wavefunction overlaps along the trajectory. We have implemented this method in Newton-X to be used with the MRCI, MCSCF (Ref), TDDFT (Ref), CC2, and ADC(2) (Ref) approaches.
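The stochastic element described above (a random number compared against a transition probability at every step) can be sketched with a toy Python model. This is only an illustration, not Newton-X: the fixed hop probability `p_hop` stands in for the quantity that a real surface hopping run computes from the electronic coefficients and nonadiabatic couplings.

```python
import random

def propagate_state(p_hop, n_steps, start_state=1, seed=0):
    """Toy surface-hopping walk between two electronic states (1 = excited,
    0 = ground).  At every step the trajectory hops with probability p_hop,
    mimicking the stochastic comparison of a random number with the
    computed transition probability."""
    rng = random.Random(seed)
    state = start_state
    history = [state]
    for _ in range(n_steps):
        if rng.random() < p_hop:   # stochastic hop criterion
            state = 1 - state
        history.append(state)
    return history

# Two trajectories with identical initial conditions but different random
# seeds can end in different states: the stochastic nature of the method.
traj_a = propagate_state(0.05, 200, seed=1)
traj_b = propagate_state(0.05, 200, seed=2)
```

Running many such trajectories with different seeds and averaging the state populations corresponds to the statistical evaluation of pathways mentioned above.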
This same approach can be used for spin-orbit couplings as well.

Nonadiabatic dynamics with QM/MM

Nonadiabatic dynamics simulations can also profit from hybrid schemes such as QM/MM. The atoms of the entire system S to be treated by the hybrid method are divided into disjoint regions. For a standard QM/MM setup with electrostatic embedding, these subsets are typically an inner and an outer region, described by quantum mechanics and molecular mechanics, respectively. Specifically, QM electronic-structure methods are used to accurately describe multiple electronic states of the compound of interest, while the MM component primarily deals with secondary environmental effects. Standard force fields are employed in the MM part, incorporating bonded terms, van der Waals interactions, and electrostatic interactions between partial point charges associated with each atom. Our implementation of QM/MM surface hopping is described in Ref.

Special care should be taken with the initial conditions for the dynamics. Taking them from a Wigner distribution, as is usually done for small molecules, is not practical. On the other hand, taking the initial conditions from a thermalized MM trajectory in the ground state tends to generate a too-cold initial ensemble for the QM region (Ref). We have devised a way to avoid these problems by combining a Wigner distribution for the QM part with thermal configurations for the MM part. The exact procedure is explained in Ref. An example of a single trajectory computed with surface hopping QM/MM dynamics is shown in the movie below for Me-formamide (Ref).
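For the curious, the Wigner sampling mentioned above can be sketched in a few lines for a single harmonic mode. The ground-state Wigner distribution of a harmonic oscillator is a Gaussian in both position and momentum, so drawing initial conditions reduces to sampling two Gaussians. The mass and frequency below are made-up illustrative values in atomic units; this is a minimal sketch, not the Newton-X implementation.

```python
import numpy as np

# Ground-state Wigner distribution of a 1-D harmonic oscillator: a Gaussian
# in position Q and momentum P with widths sigma_Q = sqrt(hbar/(2 m omega))
# and sigma_P = sqrt(hbar m omega / 2).  Atomic units: hbar = 1.
hbar = 1.0
m = 1.0        # illustrative mass
omega = 0.01   # illustrative vibrational frequency

sigma_q = np.sqrt(hbar / (2.0 * m * omega))
sigma_p = np.sqrt(hbar * m * omega / 2.0)

rng = np.random.default_rng(42)
n_samples = 100000
q = rng.normal(0.0, sigma_q, n_samples)  # sampled initial positions
p = rng.normal(0.0, sigma_p, n_samples)  # sampled initial momenta

# The mean ensemble energy approaches the zero-point energy hbar*omega/2,
# which purely classical thermal sampling at low temperature would miss.
energy = 0.5 * p**2 / m + 0.5 * m * omega**2 * q**2
mean_energy = energy.mean()
```

The sampled ensemble carries the zero-point energy of the mode, which is one reason initial conditions taken from a thermalized classical trajectory tend to come out too cold.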
Physicists turn back time – a bit “Man! If only I could turn back time!” According to an article in the science magazine Scientific Reports, physicists have apparently succeeded in doing just that – at least in the quantum realm and with very small particles. However, it’s still impossible to manipulate the wheel of time, because the Second Law of Thermodynamics distinguishes between the past and the future. Most other physical laws are reversible. But when the Second Law comes into play, nature behaves very stubbornly, and everything progresses in only one direction. The house of cards collapses, it doesn’t build itself. Without external influences, heat will flow from warm to cold bodies, but not in the other direction (this is also ultimately the reason why perpetual motion machines are impossible). In the quantum realm, when objects are small enough and time intervals short enough, many of the old rules no longer apply. Here, for example, something can be created out of nothing, two bodies can occupy the same space at the same time, and two particles light-years apart from each other can change their state simultaneously, without any communication between the two. If that sounds weird to you, you’re in good company: Einstein could never get used to the ramifications of quantum physics his whole life. But quantum physics has one important advocate: reality. Calculations produce results that agree with the real world, so they can’t be too far off the mark. Quite the opposite, really, we have to learn to accept that the world is rather strange at extremely small scales. And that apparently applies to time too. In their work, the researchers considered a thought experiment in which they observe a single electron that could be located anywhere in interstellar space. Its state is described by the Schrödinger equation that, in principle, permits reversibility. However, the universe is constantly expanding and so is, of course, the space where the electron is located. 
Just after a fraction of a microsecond, the space where the electron exists has expanded irreversibly and its position has become "smeared." But there is a mathematical operation, a transformation, which can bring the electron back to its original state, that is, transport it into the past, at least if only a short amount of time has passed. Because of statistical fluctuations, this can happen for real in the cosmic background radiation. The physicists calculated that, if ten billion electrons were observed throughout the current life of the universe (13.7 billion years), this audacious jump into the past would happen only once over that entire period. The electron also would travel only one ten-billionth of a second into the past. You see, the wheel of time almost always turns in the right direction. But it is possible to transfer the operation to a quantum computer. In a system made from two qubits, the researchers were able to reverse time and restore the lost order with an 80 percent success rate; with three qubits, the success rate was 50 percent. The researchers put the fact that the success rate was not higher down to quantum computing technology still being in its infancy, so their results should improve. What does this mean in practice? You won't be able to travel back to the past, unless you're a quantum computer. And if you are a quantum computer and you are reading this and understanding it, then please spare this humble author from any harm in your imminent takeover of the Earth. Thanks!

The four states of the quantum computer in the experiment (bottom) and their counterparts in a thought experiment with an electron in space (middle) and an analogy using billiards (top) (picture: @tsarcyanide/MIPT Press Office)
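You can imitate the mathematical trick behind such proposals on an ordinary computer. The following NumPy sketch is a toy model, not the actual experiment: it evolves a single qubit under a real-valued Hamiltonian and then uses complex conjugation as the "reversal" operation. For a Hamiltonian whose matrix is real, conjugating the state is equivalent to flipping the sign of time, so evolving the conjugated state forward again brings it back to where it started.

```python
import numpy as np

# A real 2x2 Hamiltonian (the Pauli sigma_x matrix, in arbitrary units).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def evolve(state, t):
    """Forward evolution U(t) = exp(-i H t) applied to the state,
    built from the eigendecomposition of H."""
    vals, vecs = np.linalg.eigh(H)
    U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
    return U @ state

psi0 = np.array([1.0 + 0j, 0.0 + 0j])   # start in the state |0>
t = 1.234

psi_t = evolve(psi0, t)                  # the state drifts away ...
psi_back = evolve(np.conj(psi_t), t)     # ... conjugate, evolve again

# The doubly evolved state coincides with the initial one: the
# "time-reversed" qubit has returned to its starting point.
fidelity = abs(np.vdot(psi0, psi_back)) ** 2
```

In this idealized, noise-free simulation the reversal succeeds every time; the 80 and 50 percent success rates in the real experiment reflect the imperfections of present-day quantum hardware.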
by Arpan Dey

The main objective of the present paper is to investigate the de Broglie relation [1] and determine its consequences for the time-independent Schrödinger equation [2], and whether the results are valid in terms of accommodating relativistic corrections in the Schrödinger equation. The de Broglie relation has been modified by considering the relativistic equation for energy. The goal is not to reach a particular result. Rather, some known equations are manipulated to produce a general result. The Schrödinger equation describes the wave-function [3] of a system, which is a quantum-mechanical property. The time-independent form of the Schrödinger equation is derived from de Broglie's relation [4]. In the end, the consequences of the results obtained in this paper for the time-independent Schrödinger equation are determined. This paper attempts to obtain some general results involving the Planck constant, momentum, velocity, wave-function, etc. of a particle, by taking into account quantum mechanics, and to verify whether the approach is valid for accommodating relativistic modifications in Schrödinger's equation.

Introduction to the de Broglie equation

The de Broglie relation, for the first time, introduced the idea of wave-particle duality in physics. In the early twentieth century, Max Planck showed that energy is not radiated randomly and continuously, but in discrete packets called quanta (singular: quantum). The energy of each quantum is given by the equation:

E = hν

where h = 6.626 × 10⁻³⁴ J·s is the Planck constant [5] and ν is the frequency of the radiation. This formula can also be applied to light. Light is a stream of particles called photons, each with energy E = hν. The particle nature of light could explain many phenomena, such as the photoelectric effect, which the wave theory could not explain. However, the wave theory of light was also successful in explaining certain phenomena like interference, diffraction, etc.
Thus, in modern physics, light is treated as having both a particle nature and a wave nature. De Broglie equated the above equation with the mass-energy relation:

hν = mc²

In the case of light (photons), this gives p = h/λ, where p is the linear momentum of the particle, which is the mass multiplied by the velocity. [In the derivation, the case of light was considered. Thus, v = c, where c is the speed of light in a vacuum (3 × 10⁸ m/s).] λ represents the wavelength of light. The velocity of a wave is given by:

v = νλ

where ν represents the frequency of the wave and λ the wavelength. [For the case of light, c = νλ.] Thus, the de Broglie relation is:

λ = h/p

This formula, in general, holds not just for photons but for electrons as well. However, electrons can never reach or exceed light speed in a vacuum. Due to refraction [6], light slows down in material media, and electrons can reach near-light velocities. In general, any particle must exhibit wave-particle duality. The wave nature of particles will be felt when the probe into the particle is over regions comparable to the wavelength associated with the particle. For daily-life macroscopic objects, the wavelength is negligibly small to give rise to any perceptible wavelike phenomena.

Figure 1: Matter Waves. The de Broglie hypothesis introduced the concept of wave-particle duality in physics. In the modern view, a "wave-packet" guides the motion of the point particle in space.

According to relativistic mechanics [7], the mass-energy-momentum relation is given by:

E² = (pc)² + (m₀c²)²

where E is the energy, m₀ is the mass (more specifically, the rest mass), p is the relativistic momentum, and c is the speed of light in vacuum. Here, p = γm₀v, where the Lorentz factor is given by:

γ = 1/√(1 − v²/c²)

Here, v is the velocity of the object, and m = γm₀ is the relativistic mass.

Figure 2: Relativistic Mass-Energy-Momentum Relation. [Here, m represents the relativistic mass, and m₀ is the rest mass. Thus, m = γm₀.]

For objects at rest, p = 0, and the energy is given by E = m₀c². For objects that have no rest mass (photons), m₀ = 0, and the energy is given by E = pc.
For light waves, E = pc. Thus, p = E/c. Equating this with E = hν = hc/λ, we get the de Broglie equation: λ = h/p.

Introduction to the time-independent Schrödinger equation

After de Broglie proposed the wave nature of matter, many physicists, including Heisenberg and Schrödinger, explored the consequences. The idea quickly emerged that, because of its wave character, a particle's trajectory and destination cannot be precisely predicted for each particle individually. However, each particle goes to a definite place. After compiling enough data, one gets a distribution related to the particle's wavelength and diffraction pattern. There is a certain probability of finding the particle at a given location, and the overall pattern is called a probability distribution. Werner Heisenberg discovered the uncertainty principle and developed matrix mechanics. Schrödinger developed an equivalent version of quantum mechanics – wave mechanics. Schrödinger developed an equation that gives the solution for the wave-function of a particle. The wave-function (denoted by the Greek letter ψ) is an abstract mathematical function. The square of the wave-function, i.e., the product of ψ and its complex conjugate ψ*, gives the probability of finding the particle in a given region of space. The wave-function can be expressed as:

ψ = A + iB

where A and B are real functions and i is the imaginary number. The complex conjugate of ψ is:

ψ* = A − iB

The square of the wave-function is ψψ* = A² + B². [Since i² = −1.] Thus, |ψ|² is always a positive and real quantity. The wave-function, in general, is given by:

ψ = A e^{i(kx − ωt)}

in one dimension. Here, A represents the amplitude of the wave; e is Euler's number, 2.71828…; k = 2π/λ; and ω = 2πν (the angular frequency). In Schrödinger's equation, some terms contain ψ and its derivatives but no terms independent of the wave-function or that involve higher powers of the wave-function or its derivatives.
The time-independent Schrödinger equation is:

−(ℏ²/2m) ∇²ψ + Vψ = Eψ  (9)

where ℏ = h/2π is the reduced Planck constant; m represents the mass of the particle; ∇² is the Laplacian operator (which describes the wave-function, or any other function, in three dimensions); and V is the potential energy of the particle. The E on the right-hand side of equation (9) represents the total energy. The equation states that the Hamiltonian operator operated on the wave-function gives energy as a result, Ĥψ = Eψ. Here, Ĥ is the Hamiltonian operator, Ĥ = −(ℏ²/2m)∇² + V. Here, E is an eigenvalue. Schrödinger's equation has been universally recognized as one of the greatest achievements of 20th-century science, containing much of physics and, in principle, all of chemistry. It is a mathematical tool of power equivalent to the Einstein field equations for gravity, if not more, for dealing with problems of the quantum mechanical model of the atom.

Using the relativistic energy in the de Broglie equation

If the equation is used to find a relation in terms of , and , the result obtained is different. Using , [Here, is the relativistic mass, and is the rest mass. We know that .] Multiplying both sides by , Using , and , Squaring both sides, Using , Using , Using , The result of is multiplied by a constant, . Thus, (11) This expression may be simplified as follows: Thus, the result is: It should be noted that the modified equation is still dimensionally correct, as is the original equation. This is because the dimension of both v and c is the same (both being metres per second, a velocity), so the constant relating them is dimensionless. This expression collapses to the standard de Broglie relation only at v = c. This makes sense, since, in the original derivation of the de Broglie relation, the assumption was that v = c. At v = 0, however, this expression goes undefined. That is acceptable, since at v = 0 the momentum also becomes zero.
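The size of the relativistic correction to the de Broglie wavelength is easy to explore numerically. The following Python sketch (illustrative only; the helper `de_broglie` is my own, not from the paper) compares the classical λ = h/(mv) with λ = h/(γmv), using the relativistic momentum p = γmv defined earlier, for an electron at two speeds:

```python
import math

h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron rest mass, kg
c = 2.99792458e8        # speed of light in vacuum, m/s

def de_broglie(v, relativistic=True):
    """de Broglie wavelength lambda = h / p of an electron at speed v.
    With relativistic=True the momentum is p = gamma * m * v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2) if relativistic else 1.0
    return h / (gamma * m_e * v)

v_slow = 0.01 * c  # gamma is about 1.00005: the classical formula suffices
v_fast = 0.90 * c  # gamma is about 2.29: the correction matters

lam_slow_cl = de_broglie(v_slow, relativistic=False)
lam_slow_rel = de_broglie(v_slow)
lam_fast_cl = de_broglie(v_fast, relativistic=False)
lam_fast_rel = de_broglie(v_fast)
```

At 1% of light speed the two formulas differ by only a few parts in 10⁵, which is why the non-relativistic Schrödinger equation works so well for slow electrons; at 90% of light speed they differ by the full Lorentz factor γ ≈ 2.29.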
Modifications on the Schrödinger equation based on the above result In the derivation of the time-independent Schrödinger equation, the wave-function is taken to be: Taking the second-order derivative of the wave-function, , [ indicates that the wave-function is one-dimensional (x-dimension). For convenience, we may also express this as just .] Let the total energy be and potential energy be . The relativistic kinetic energy is given by: where ; is rest mass and is the speed of light in vacuum. This is because, in relativity, the total energy is given by , which must be the sum of the kinetic and potential energies. The potential energy is the same as the rest-mass energy , (when the body is at rest, no kinetic energy). Thus, the kinetic energy would be . Manipulating equation (17), Where ; is the rest mass, is the speed of light in vacuum, is the linear momentum and is the Lorentz factor . [The kinetic energy is expressed as the product of some term with since is the kinetic energy in classical mechanics. (The Schrödinger equation does not take into account relativistic modifications, and is classical in nature.) This will make it more convenient to determine the consequences of using the modified de Broglie equation in the Schrödinger equation.] Using , This is obvious, since (both represent the kinetic energy). Using , Putting the value in equation (15), Using , Using , Using , In three dimensions, this can be expressed as: [For convenience, can be expressed as just .] Here, our Hamiltonian is: [The original Hamiltonian of the time-independent Schrödinger equation has, thus, been modified by multiplying the first term with (24). It should be noted that this extra term is dimensionless. (This is because the dimension of is just the square of metres per second, which when multiplied with the in the numerator, becomes (metres/second) raised to the fourth power. This cancels out the in the denominator. The in the numerator is dimensionless.) 
Thus, the modified equation remains dimensionally correct, like the original equation.] In terms of β = v/c, this can also be written as:

Ĥ = −(ħ²/2m₀)·(2(γ − 1)β²/γ²)·∇² + V

At v = 0, the first term of the left-hand side of the equation becomes zero. [This is because γ = 1 at v = 0; and in this case, the (γ − 1) term in the numerator becomes zero.] At v = c, the first term of the left-hand side goes undefined. [This is because γ is undefined at v = c.]

Using relativistic kinetic energy in the Schrödinger equation

If the standard de Broglie relation λ = h/p was used in the derivation of the Schrödinger equation, but the kinetic energy was assumed to be (γ − 1)m₀c², the result would have been:

−(ħ²/2m₀)·(2(γ − 1)c²/(γ²v²))·∇²ψ + Vψ = Eψ

In this case, the Hamiltonian is:

Ĥ = −(ħ²/2m₀)·(2(γ − 1)c²/(γ²v²))·∇² + V

The original Hamiltonian is modified by multiplying the first term with:

2(γ − 1)c²/(γ²v²) (28)

This expression is undefined both at v = 0 and at v = c. The results obtained by modifying the de Broglie equation are based on an assumption that would not work directly for radiations, like light. It must be noted that light is a form of electromagnetic radiation. Light does not have any rest mass. Putting m₀ = 0 in the equation E² = p²c² + m₀²c⁴, the result obtained is E = pc. Equating this with E = hν = hc/λ directly produces the de Broglie equation λ = h/p. This equation relates a particle's wave nature with its particle nature. The equation is mostly applied to photons and electrons, and in both of these cases works just fine. If the energy was assumed to be mc² instead of pc, the result would have been the modified relation λ = (h/p)·(v²/c²). Thus, it is clear that equating mc² with hν is, though tempting, actually not practical while dealing with any kind of radiation (electromagnetic radiation or light). However, the results obtained might be useful in certain circumstances. For instance, radiation can be approximated as a stream of particles with significant rest mass, as well as significant velocity. The results derived offer scope for further investigation. The calculation on the time-independent Schrödinger equation is, again, of no direct practical importance.
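The two multiplying factors discussed in this section lend themselves to a numerical consistency check. The Python sketch below (my addition, only a check of the algebra) verifies that writing the relativistic kinetic energy as (p²/2m₀) times the dimensionless factor 2(γ − 1)c²/(γ²v²) reproduces (γ − 1)m₀c² exactly, and that in the massless case E = pc gives back λ = h/p = hc/E; the specific photon energy of 2.33 eV is just an illustrative choice (green light).

```python
import math

h = 6.62607015e-34       # Planck's constant (J s)
c = 299792458.0          # speed of light in vacuum (m/s)
m0 = 9.1093837015e-31    # electron rest mass (kg)

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def kinetic_relativistic(v):
    """T = (gamma - 1) * m0 * c^2."""
    return (gamma(v) - 1.0) * m0 * c ** 2

def kinetic_via_factor(v):
    """T rewritten as (p^2 / 2 m0) times the factor 2(gamma - 1)c^2 / (gamma^2 v^2)."""
    g = gamma(v)
    p = g * m0 * v
    factor = 2.0 * (g - 1.0) * c ** 2 / (g ** 2 * v ** 2)
    return (p ** 2 / (2.0 * m0)) * factor

# The factorization is an algebraic identity, so both expressions agree at any speed.
for frac in (0.1, 0.5, 0.9):
    v = frac * c
    assert math.isclose(kinetic_relativistic(v), kinetic_via_factor(v), rel_tol=1e-12)

# Massless case: E = pc, so lambda = h/p = hc/E. A ~2.33 eV photon lands near 532 nm.
eV = 1.602176634e-19
E = 2.33 * eV
lam = h * c / E
print(f"photon wavelength: {lam * 1e9:.1f} nm")
```

The first check holds at every speed because the factor is constructed from the identity itself; the second is the standard photon relation the text recovers by setting m₀ = 0.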
This approach is not valid when it comes to accommodating relativistic corrections in the Schrödinger equation. Though the Schrödinger equation does not take into account relativistic corrections, it produces acceptable results in most cases. The formal approach taken in uniting special relativity with quantum mechanics is different. The relation between mass, energy and momentum in Einstein's Special Theory of Relativity can be used in quantum mechanics. The corresponding equations are given by the Klein-Gordon equation [8] and the Dirac equation [9], instead of the Schrödinger equation. The Klein-Gordon equation can be derived from the formula E² = p²c² + m₀²c⁴. In place of E and p, one can put the energy and momentum operators, respectively, to derive the Klein-Gordon equation. The energy operator is given by Ê = iħ(∂/∂t); the momentum operator is given by p̂ = −iħ∇. Using these operators in E² = p²c² + m₀²c⁴, we get −ħ²(∂²ψ/∂t²) = −ħ²c²∇²ψ + m₀²c⁴ψ. On simplification, this gives the Klein-Gordon equation:

(1/c²)(∂²ψ/∂t²) − ∇²ψ + (m₀²c²/ħ²)ψ = 0

However, the Klein-Gordon equation fails to account for the intrinsic property of spin. Thus, the Klein-Gordon equation was modified into the Dirac equation. It can be concluded that the de Broglie relation works well for the cases it is meant for. The results obtained in this paper are of limited practical application as of now. However, the results can be investigated further for certain cases, where they might produce more accurate results than the standard equations. Thus, the approach taken in this paper is not correct when it comes to applying relativistic corrections in the Schrödinger equation. For that, entirely different concepts are used, as in Dirac's theory. However, finally, the results do not seem paradoxical; and the first result reproduces the standard equation at v = c, as expected. My school teachers, Pooja Mazumdar and Anupa Bhattacharya, played a vital role in reviewing the equations. I am also blessed to have this paper reviewed by Dr. Saumen Datta, a high-energy physicist at the Tata Institute of Fundamental Research.
His advice and insights have been invaluable.

References

[1] The Editors of Encyclopaedia Britannica. "De Broglie wave". 1998.
[2] Marianne Freiberger. "Schrödinger's equation – what is it?". 2012.
[3] Lisa Zyga. "Does the quantum wave function represent reality?". 2012.
[4] Arpan Dey. "Quantum Mechanics: Derivation of Schrödinger's Equation by Arpan Dey". 2020.
[5] Patrick J. Kiger. "What Is Planck's Constant, and Why Does the Universe Depend on It". 2019.
[6] The Editors of Encyclopaedia Britannica. "Refraction". 1998.
[7] Gary William Gibbons. "Relativistic mechanics". 1999.
[8] Robert G. Littlejohn. "Introduction to Relativistic Quantum Mechanics and the Klein-Gordon Equation". 2019.
[9] Ethan Siegel. "This Is Why Quantum Field Theory Is More Fundamental Than Quantum Mechanics". 2019.
Friday, September 27, 2019

The Trouble with Many Worlds

Today I want to talk about the many worlds interpretation of quantum mechanics and explain why I do not think it is a complete theory. But first, a brief summary of what the many worlds interpretation says. In quantum mechanics, every system is described by a wave-function from which one calculates the probability of obtaining a specific measurement outcome. Physicists usually take the Greek letter Psi to refer to the wave-function. From the wave-function you can calculate, for example, that a particle which enters a beam-splitter has a 50% chance of going left and a 50% chance of going right. But – and that's the important point – once you have measured the particle, you know with 100% probability where it is. This means that you have to update your probability and with it the wave-function. This update is also called the wave-function collapse. The wave-function collapse, I have to emphasize, is not optional. It is an observational requirement. We never observe a particle that is 50% here and 50% there. That's just not a thing. If we observe it at all, it's either here or it isn't. Speaking of 50% probabilities really makes sense only as long as you are talking about a prediction. Now, this wave-function collapse is a problem for the following reason. We have an equation that tells us what the wave-function does as long as you do not measure it. It's called the Schrödinger equation. The Schrödinger equation is a linear equation. What does this mean? It means that if you have two solutions to this equation, and you add them with arbitrary prefactors, then this sum will also be a solution to the Schrödinger equation. Such a sum, btw, is also called a "superposition". I know that superposition sounds mysterious, but that's really all it is, it's a sum with prefactors. The problem is now that the wave-function collapse is not linear, and therefore it cannot be described by the Schrödinger equation.
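The linearity statement can be made concrete with a small numerical sketch (my addition, in Python with NumPy; the two-level Hamiltonian is an arbitrary toy example, not anything from the post): evolving a superposition under the Schrödinger equation gives exactly the superposition of the individually evolved states.

```python
import numpy as np

# Toy two-level system evolving under i d(psi)/dt = H psi (hbar = 1).
# Linearity: evolving a*psi1 + b*psi2 equals a*(evolved psi1) + b*(evolved psi2).
H = np.array([[1.0, 0.5], [0.5, -1.0]])  # an arbitrary Hermitian Hamiltonian
t = 0.7
eigvals, eigvecs = np.linalg.eigh(H)
U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T  # U = exp(-i H t)

psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([0.0, 1.0], dtype=complex)
a, b = 0.6, 0.8j  # arbitrary prefactors

lhs = U @ (a * psi1 + b * psi2)            # evolve the superposition
rhs = a * (U @ psi1) + b * (U @ psi2)      # superpose the evolved states
assert np.allclose(lhs, rhs)
print("superposition of solutions is again a solution")
```

No choice of Hamiltonian or prefactors breaks this equality; that is exactly why a nonlinear update rule cannot come out of the Schrödinger equation alone.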
Here is an easy way to understand this. Suppose you have a wave-function for a particle that goes right with 100% probability. Then you will measure it right with 100% probability. No mystery here. Likewise, if you have a particle that just goes left, you will measure it left with 100% probability. But here's the thing. If you take a superposition of these two states, you will not get a superposition of probabilities. You will get 100% either on the one side, or on the other. The measurement process therefore is not only an additional assumption that quantum mechanics needs to reproduce what we observe. It is actually incompatible with the Schrödinger equation. Now, the most obvious way to deal with that is to say, well, the measurement process is something complicated that we do not yet understand, and the wave-function collapse is a placeholder that we use until we figure out something better. But that's not how most physicists deal with it. Most sign up for what is known as the Copenhagen interpretation, that basically says you're not supposed to ask what happens during measurement. In this interpretation, quantum mechanics is merely a mathematical machinery that makes predictions and that's that. The problem with Copenhagen – and with all similar interpretations – is that they require you to give up the idea that what a macroscopic object, like a detector, does should be derivable from the theory of its microscopic constituents. If you believe in the Copenhagen interpretation you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents. Because if you could do that, you would not need a second equation besides the Schrödinger equation. That you need this second equation, then, is incompatible with reductionism. It is possible that this is correct, but then you have to explain just where reductionism breaks down and why, which no one has done.
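The asymmetry between the two rules can also be seen in a toy simulation (my addition). The sketch below applies the Born rule by hand to a 50/50 beam-splitter superposition: every single run produces a definite "left" or "right", never a blend, and the 50% only shows up as a frequency over many repetitions. The collapse step is put in explicitly as a random draw; that is precisely the part that does not follow from the Schrödinger equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Beam-splitter state: equal superposition of "left" and "right".
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
probs = np.abs(psi) ** 2  # Born rule: 50% left, 50% right

# The collapse is simulated by hand as a random draw; each individual
# measurement yields one definite outcome, never "half here, half there".
outcomes = rng.choice(["left", "right"], size=10000, p=probs)
print("single shot:", outcomes[0])
print("left fraction over many runs:", np.mean(outcomes == "left"))
```

Nothing in the linear evolution of psi forced that random draw; it is an extra rule bolted on top, which is the measurement problem in miniature.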
And without that, the Copenhagen interpretation and its cousins do not solve the measurement problem, they simply refuse to acknowledge that the problem exists in the first place. The many worlds interpretation, now, supposedly does away with the quantum measurement problem, and it does this by just saying there isn't such a thing as wave-function collapse. Instead, many worlds people say, every time you make a measurement, the universe splits into several parallel worlds, one for each possible measurement outcome. This universe splitting is also sometimes called branching. Some people have a problem with the branching because it's not clear just exactly when or where it should take place, but I do not think this is a serious problem, it's just a matter of definition. No, the real problem is that after throwing out the measurement postulate, the many worlds interpretation needs another assumption, and that assumption brings the measurement problem back. The reason is this. In the many worlds interpretation, if you set up a detector for a measurement, then the detector will also split into several universes. Therefore, if you just ask "what will the detector measure", then the answer is "The detector will measure anything that's possible with probability 1." This, of course, is not what we observe. We observe only one measurement outcome. The many worlds people explain this as follows. Of course you are not supposed to calculate the probability for each branch of the detector. Because when we say detector, we don't mean all detector branches together. You should only evaluate the probability relative to the detector in one specific branch at a time. That sounds reasonable. Indeed, it is reasonable. It is just as reasonable as the measurement postulate. In fact, it is logically entirely equivalent to the measurement postulate. The measurement postulate says: Update probability at measurement to 100%.
The detector definition in many worlds says: The “Detector” is by definition only the thing in one branch. Now evaluate probabilities relative to this, which gives you 100% in each branch. Same thing. And because it’s the same thing you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible. What the many worlds people are now trying instead is to derive this postulate from rational choice theory. But of course that brings back in macroscopic terms, like actors who make decisions and so on. In other words, this reference to knowledge is equally in conflict with reductionism as is the Copenhagen interpretation. And that’s why the many worlds interpretation does not solve the measurement problem and therefore it is equally troubled as all other interpretations of quantum mechanics. What’s the trouble with the other interpretations? We will talk about this some other time. So stay tuned. 1. Sabine, I don't disagree with your criticisms, but it does seem to me that you have more or less reiterated the probability-measure problem and, perhaps, the preferred-basis problem. In concrete terms, how does MWI lead to the Born rule? I think you have mentioned on some other occasion the issue of how you get the "observer" out of MWI: an actual human being, after all, is not, it would seem, in a single quantum state but rather some sort of mixture. So, how does this work? I.e., how many universes in the many-worlds multiverse does an actual human being span, etc.? I've also mentioned before the fallacy of the "branching" metaphor. All the basis states of the Hilbert space are, in some sense, already there, and the evolution controlled by the Schrodinger equation is just sloshing probabilities back and forth among these different basis states (hence, MWI = "the sloshing probability interpretation"). I myself first got interested in MWI back in the mid-1970s when DeWitt's book came out. 
MWI seemed to me worth taking seriously, and I think all of us physicists occasionally think in MWI terms, just as all of us, in practice, often act as if we believe in the Copenhagen interpretation. In the end, though, I think both approaches are more of heuristic value than anything else: neither interpretation really works if taken completely seriously. 1. PhysicistDave, It's not the probability measure problem, as I am not worried about how to weigh different branches. It's not the preferred basis problem, as that is basically solved by decoherence. I am asking why the forward evolution of what is a detector at t_0 is no longer a detector at t_1 > t_0. The answer to this is that, by assumption, the forward-evolved detector is not what many worlds fans want to call a detector. So you need an additional assumption, and this assumption is virtually equivalent to the measurement postulate in Copenhagen. I use virtually to mean "up to interpretation". I have asked several people whether this point has been discussed somewhere in the literature. It seems to me this pretty much must have been said before, but I haven't been able to dig up a reference. If you have one, please let me know, and I'll be happy to give appropriate credits where due. 2. I wonder if you would consider a classical model of duplication when a measurement is made as equally problematic. The observer tosses a coin, and at that point he and the coin are duplicated, with one copy seeing heads and the other tails. Would you say he has a 1/2 probability of seeing either outcome prior to duplication? 3. As PhysicistDave explains, the branching metaphor is unhelpful. The wave function evolves continuously in a multi-dimensional space. It assigns phased amplitudes (not probabilities) to every point. When you consider a simple quantum mechanical phenomenon like the double slit, the interference looks like waves.
When you involve a big system there are so many amplitudes, and they interfere in such a way that tiny threads of the space have high amplitude and other huge areas have near zero. It's like how you approximate a sharp image or audio signal by adding many frequency components, but it's continuous, there's no branching anywhere. This is decoherence, and it's how you select only the consistent outcomes, e.g. only left or right macroscopically observed. Vast parts of the space that are inconsistent, like the needle pointing half-left, are eliminated by destructive interference. That fully explains how you get crisp outcomes. The wave functions of big systems with lots of interactions are extremely sparse, but they are valued everywhere and continuous. What's left to explain is where probabilities come from. Why the amplitude squared? I don't have command of the math but there's a clue: The amplitude of the configuration before the measurement is unimaginably tiny to begin with. It's 10^-10^100 or so, because the wave function has been spreading out and distributing amplitudes since the beginning of the universe. So the amplitude of all the outcomes is not 100% in your example, it's very very tiny. And that's the possible paths. The impossible paths have an amplitude that's something like 10^-N lower than that, where N is the number of particles in the big system. Yet the probability of the configuration prior to the measurement is 100% as we observe it, and the probability after is also 100% for whatever we observe. To explain this we can reframe the Born rule as a kind of relativity in the wave function. Just as there's no preferred Lorentz frame, there's also no preferred Schrödinger frame. Wherever a system is on the wave function, the laws of physics behave as if its amplitude is 1 prior to an interaction, and as the wave function evolves, the future states also behave as if they have amplitude 1 when you consider future interactions.
The laws of physics are independent of the absolute amplitude or where you are on the wave function. There may be a residual mystery, how do probabilities emerge as the wave function squared. I can't do the math, and the experience of uncertainty may have to do with emergent conscious brains like the experience of color or anything else, without being mystical. All that physics has to do is explain why amplitudes evolve as they do. Did that improve our understanding? I think so. We went from saying that measurement is something mysterious to explaining how we get sharp macroscopic states by destructive interference. And we went from two rules that you have to choose between ad hoc to two principles that apply everywhere all the time: The wave function evolves all the time, and the local physics acts as if the amplitude is 1 prior to an interaction all the time. Looks like a complete theory to me. 2. Dear Dr. Hossenfelder, "the detector will also SPIT into several universes"? You should talk to those detectors, that's not what well behaved detectors should do. 1. Haha :p Thanks for spotting, I fixed this. 2. Well, the question remains, if the detector would lower the frequency of the wave function in that other universe, because it is all yucky and later on, when it is dry again, it would go to the old frequency. Or would it collapse the wave completely and therefore the yucky universe, too? That would combine the two theories. CERN could use the FCC for this experiment, because I am sure the LHC can't spit that far. I'll get my coat. 3. But Sabine was somewhere right: such detectors that play this sort of game to fool us are quite ill educated. 3. Hi Sabine - very thoughtful video. I understand your words but I don't understand your objection.
MWI says that when you perform an experiment, you and the experimental device (and all your MWI doppelgängers) are in a single basis vector of that device, where the basis is somehow defined by your experiment, which means you will measure the eigenvalue of that basis vector with certainty. Your doppelgängers in other basis vectors will measure other eigenvalues with certainty. I don't see how this is equivalent to the Copenhagen collapse hypothesis beyond the well-known observation that to each of you it only looks like the wave function collapsed. Can you elaborate or give a different phrasing? Of course it is somewhat sticky how the relative probabilities you get from repeated measurements come about, and as others have pointed out, those basis vectors have always been there, though that is also conceptually sticky. 1. Steve, Write down the assumptions that you need to describe what you observe (including the fact that, after measurement, you know with 100% probability what has happened). I hope then you will see that you need an assumption, next to the Schrödinger equation, to replace the measurement postulate. You write "MWI says", but it is unclear to me what you mean by that. Best, One should perhaps not look at what happens in the future, but at what happened in the past: for the past there is a unique, well defined branch, and one can check whether the outcomes that have already been realized satisfy the Born rule in a frequentist sense, and how closely. 1. Pascal, The Schrödinger equation gives you a unique relation between the present, the past, and the future. If you think -- as many worlds fans like to argue -- that the Schrödinger evolution is all there is, then you should be able to make statements about the future. 2. Pascal, The standard way to prepare a state is to measure. You can thus wonder what happens as you evolve a state backwards in time past incompatible Stern-Gerlach measurements.
The result is that your unique final state now can be obtained from any one of an infinite set of initial states. In the end, you have no choice but to realise that you really only get to know what is happening between the initial and final state, nothing more, and often a lot less. 3. If I have understood your argument correctly: If the detector in our world touches the superposition by the act of measurement, then at that moment the detectors and their corresponding possibilities split into many worlds; that is, copies of the same detector are made, but with each copy being linked to a unique possibility, and there will be as many copies of the same detector as there are remaining possibilities. One copy is associated with one unique possibility in each of the separate worlds. When measurement takes place in our world, simultaneous measurements take place in all the worlds. All the copies are detecting at the same time in their respective worlds, and in each world the corresponding unique possibility is of 100% probability. Let us say one such possibility is "flying". Can the human program realize this possibility 100%? No. Then what happens to the flying possibility? Is it hidden? Does it disappear? So we are back to the pilot wave theory or Copenhagen interpretation, which ask similar questions. 4. The unique state is observer dependent. If the observer or the program cannot realize the unique state, for example a human flying, then what happens? 5. I can follow this, except when "reductionism" appears. What is the definition of "reductionism" here in this quantum theory context? 1. Ontological reductionism. Large things are made of smaller things. The laws of the large things follow from the laws of the smaller things. 2. Sabine, Doesn't ontological reductionism lead to infinite regression? 3. If there is the semblance of the observer, then there is the semblance of reductionism. It is the observer who divides, fragments, fractures the whole.
When the observer disappears, then there is one whole thing, but then you cannot describe it because it is the observer who describes. 6. Many worlds should be interpreted as a Charge Parity symmetric limited set of anti-copy universes with raspberry shape. 7. Thank you for writing this. It has always seemed to me obvious that merely changing your interpretation from "only one branch is real" to "all branches are real but the other ones are unobservable from this one" couldn't actually solve the problem. What I would like to see is an answer to the question "What is a measurement?" I'd accept as an answer "With my interpretation, we don't need to define the notion of measurement", but every account of MWI that I've read talks about measurements, so it doesn't seem to fit the bill. (Speaking as a total non-expert here, so I may have said some things that are obviously wrong.) 1. gowers, Yes, it's obvious if you look at it from a purely axiomatic perspective. If it was possible to derive the measurement process using only the Schrödinger equation in the many worlds interpretation, then it would be possible to derive the measurement process using only the Schrödinger equation, period. But we already know that this isn't possible because using only the Schrödinger equation you will never get a non-linear process. Hence, you need at least one second assumption. 2. I don't understand this "non-linear" objection. Yes, the Schrödinger equation is linear and measurement is sloppily defined, but it looks very non-linear. In a measurement, first the particle under test is entangled with a particle of the apparatus and these are in a superposition of left/right. Then all the other particles of the apparatus, and you, and the Earth are entangled with that, so that the whole Earth agrees it's either left or right, not something in between.
This looks like a very non-linear process: We started with something that was recognizably a wave and amplified it like crazy to look like a square yes/no function, although it's built from a nearly infinite sum of frequencies. At the same time the linearity of the Schrödinger equation is well and good, because the Earth is in a sum of "Earth thinks left" and "Earth thinks right" states. Nothing irreversible happened. I don't think the unobservable nature of the other state is a problem, although a clearer formalism of how amplitudes scale to 1 from the point of view of any local system may be needed. 3. Pavlos, Because what you say does not explain what we observe. Where is the observable that corresponds to our observation? 4. Forgive my ignorance. I understand decoherence and the whole many worlds argument is only about explaining why all parts of big macroscopic things like instruments or people agree on an outcome. Why there's a discrete interaction in the first place has to be explained another way. Isn't quantization a prediction of the Schrödinger equation? 8. Nice point, Dr. H. Never heard that one before. I have 2 questions. Surely it's also a problem for Many-Worlds that these purported other branches haven't been observed and maybe can't be, even if the issue you have pointed out was resolved? Without observation it isn't physics? Also, how does the wave-like interference seen in the double slit experiment fit into what you say (I know it's a deliberately concise post)? Do we just consider the wave-like interference an expression of the fact that a measurement hasn't been made to determine which slit photons went through, and leave it at that for now because no-one has a good interpretation? 1. Steven, No, this isn't a problem for Many-Worlds in the sense that it doesn't make the theory wrong. It's just that believing that the other worlds really exist is not scientific but equivalent to religious belief. I have explained this in an earlier video.
I don't know what problem you see with wave-like interference. You have this in Copenhagen and many worlds and pilot wave likewise. 2. Dear Sabine, "In fact, it is logically entirely equivalent to the measurement postulate". OK for this formal and interesting equivalence. But in no case is it reasonable. Not only because we definitively cannot verify that the other branches exist. More strongly, because the "ontology" of MW is the extreme opposite of an economical one. It is totally crazy. It looks much worse than the scholastic discussions on the sex of angels. We should just look at it as a funny speculation to laugh at, but we are compelled to speak about it as if it were a serious option only because too many academics take it as a serious one. I have read the argument of Sean Carroll. Against the foolish branching, he says that "All the branches exist before the measurements". That means we have to believe that all the superposed states or possibilities really exist (in a sort of parallel worlds) before the reduction, in standard QM, to only one state. So we might see MW and standard QM as also "ontologically equivalent". But this manner of pushing the problem upstream is equally crazy. We obviously have to search for an economical interpretation where the measurement problem is solved and where superposition's physical effects can be attributed to just one entity... you see where I am pointing. The de Broglie-Bohm theory also has problems, but we ought to try to repair them rather than get rid of this much more reasonable possibility. It is still a research program, not a complete theory. 3. Jean-Paul, Prof Carroll would point out that your perception of economy is way off. MWI only assumes Schroedinger evolution. Every other interpretation assumes something else. For example, you want "the reduction in standard QM to only one state". It is not actually necessary to do this; we do it because we literally don't care about the other branches and so this makes our calculations easier.
But if you insist that this needs to be done, then you need to assume something other than Schroedinger evolution to get it. It is actually rather difficult to use Occam's razor. One really needs a lot of training to see which of two alternatives is the ontologically simpler one. On the topic of de Broglie-Bohm pilot waves, I think that the best view of it is to see it as an attempt at a post-quantum theory. If it succeeds it will be really wonderful. 4. Probably I have not been clear. Of course I know that MWI avoids the reduction by multiplying the realities (the worlds) where the detectors are. I was only saying that S. Carroll defends this idea by saying that "the multiplication of worlds" is not specific to MWI, but that it is already present in the standard vision of QM, namely the real existence of all the possibilities defined by the wave function. Which would make MWI "ontologically equivalent" to this standard vision. And that, in my opinion, also speaks against such a standard vision (the reality of possibilities). By saying this, Carroll simply shifts the problem of the branching upstream of the measurement, to a realistic view of the wave function. But this supposed standard vision is (fortunately) not so common. Unlike Carroll, many physicists stick to the formalism and avoid saying that the wave function (the quantum state) univocally refers to a physical reality. They agree that QM still has no satisfactory interpretation. As for Occam's razor, the pilot wave theory also needs to eliminate "empty waves", among other problems. In my opinion, it is constrained because it must incorporate the fact that the pilot wave also collapses during quantum interactions (energy-momentum exchanges), measurements among them, which redefine its shape, and in particular its center, from which interference is calculated in optics. For the equivalence MWI <==> Copenhagen there is more to say.
Standard QM (without reification of the possibilities) is a probabilistic theory where the notion of probability makes sense (but not the notion of probability amplitude, which is only a formal tool). On the other hand, the fact that reality is multiplied in MWI seems to me to make the notion of probability lose all meaning, since each possibility is realized each time. We cannot define a frequency associated with a probability (such a result happens in 30% of cases, etc.) 5. Steve, on this slit problem, the measurement on the screen will tell you nothing about which slit the particle went through. Because that's not the purpose of this specific quantum mathematical model of the slit experiment. If you want to know through which slit a particle went, that's another model and another experiment. I don't know if I follow the Copenhagen interpretation or not, but the way I see the thing is that each specific experiment has its own quantum linear model, its own specific observable. The model is not the model of a particle only, it's the model of an experiment. To get access to reality, the experiment must include a detection process, which could also be modeled by the components of the output state vector and so on. Talking about the reality of the particle before the detection is useless because we cannot verify whether the prediction is true or false; it's philosophy, and it leads to contradictions and false statements like spooky action at a distance (Bell experiments). Needless to say, MW is useless total nonsense. 6. What is reality? What is actuality? To the bat, the Doppler effect of light "really" does not exist, and to the human it "really" does exist. By relativity both are true. Whether the phenomenon is insensible or sensible depends on the observer or the program. The snake biological program cannot sense, but the human biological program can sense the Doppler effect of light. This clearly means that the observer or the program dictates reality. How can the same Doppler effect be both sensible and insensible?
The actuality is a wave disturbance which, based on the observer, can be sensed or not sensed. This is another way of putting Wigner's friend paradox. The same thing presents two realities, which means each reality is the virtue of the program. 7. continued... There can be no reality independent of the observer because reality is the virtue of the observer. There is one actuality and many interpretations based on the observer or program. For it is the program that interprets or describes. Therefore, the description is the program, in that there is no description separate from the program. Remove the program and the description goes away. In such a case, what does Sean Carroll mean when he says that the possibilities already exist as realities before the split? It is measurement that describes or defines a reality, that is, reality comes into being--ontology--during measurement, because there can be no measurement without the observer. Measurement is the movement of the observer. That being the case, how can the possibilities exist as realities in corresponding branches prior to the act of measurement? 8. A relative effect such as the Doppler effect, which is a function of the relative movement of the receiver (in classical physics, relative to the propagation medium of the wave type concerned), does not need any consciousness to exist: it is objectively recordable by a device. More generally, there are tons of definitive arguments against this subjective definition of reality (i.e. one that requires a Subject to exist). I think that's why you're having trouble finding interlocutors on this blog. Examples of arguments: - Do you think that we, the speakers on this blog, need your consciousness to exist? You obviously do not believe it, otherwise you would not go looking for the discussion. The consequent subjectivism leads to solipsism.
- Do you think that the world waited for your consciousness to exist, or more generally the consciousness of the living beings of our planet? Where does consciousness begin? Einstein mocked the recourse to consciousness in QM by asking if the look of a mouse was enough to modify a wave function. When one knocks into a pole BECAUSE one has not seen it, it is because it exists independently of consciousness. Etc. And, it must be remembered, it is not because theories are human constructions that the reality they seek to describe is a co-construction of reality and consciousness. 9. (Long-time lurker here - thank you Dr. Hossenfelder for your long series of explanatory articles, shining light on the IMHO most exciting and most important aspects of physics in approachable language - it is much appreciated. I bought your book as well.) A stupid question, and an apology in advance for the imprecise language I'm using: that the Schrödinger equation is not "real" in its usual mathematical form in our universe appears to be plainly obvious from the fact that we see a dot in the double-slit experiment on one side of the screen or on the other. I.e., which slit the particle passed through is, in hindsight, a 100% discrete event, with trillions of follow-up probabilistic events building on it as that dot fades from the fluorescent screen. Why is it then such a big leap to require all past events in our Universe to have a fixed 100% probability - i.e., what happened happened - while future events are probabilistic and we'll only ever experience one specific outcome of them? I.e., can we not picture our universe as a processing machine that takes the deterministic history of all past events (i.e., the current full quantum state of the universe) and branches off into one of the probable directions, of which we'll only ever be able to experience a single branch if we ever look back at what happened in the past?
In that super-deterministic view the "measurement problem" and "observation" never arise: the superdeterministic processing machine is compatible with the Schrödinger equation for every observable quantum experiment, and we are only doing measurements because they arose directly from the probabilistic execution of the universe's quantum state, starting from the Big Bang and progressing forwards in a deterministic fashion according to probabilistic decisions. In such a model the "many universes" interpretation isn't required, because the propagation function of the Universe is self-sufficient in itself and only a single version of the Universe exists. Admittedly super-determinism is not a particularly happy thought for advocates of "free will", but the math seems self-consistent to me, and it resolves most of the philosophical paradoxes around quantum mechanics. (I hope my imprecise language didn't make my arguments 100% illegible to you. Not that I could do much about it if the universe is indeed superdeterministic - but I'm trying.) 1. Schrödinger's Cat, I agree with you that if we lived in a super-deterministic world, this would really be a hard challenge for the arguments in favour of free will. But anyway, in this case I would argue that the occurrence of free will was already determined from the very beginning of the universe. Although our physical laws are symmetric in time, we can only remember the past; we don't remember the future. "The art of prophecy is very difficult, especially with respect to the future." Knowledge about super-determinism does not help you manage your personal life anyway. In order to live your personal life successfully, you still need to make your own personal decisions. You had better believe in free will! It's a very scientific view. Believing in freedom of will is, in my view, to a large extent simply the opposite of "believing in fate". 2.
You are assuming incompatibilist free will, which is a minority position among philosophers and, in usage, among laypeople. 3. Stathis, I don't have any clue what "incompatibilist free will" is supposed to mean. In my view freedom of will does exist, and it is pretty much compatible with determinism. Determinism in this context means that every state X(t=0) completely determines the state X(t+dt). But this is just the mechanism by which nature works. In order to understand "freedom of will" one has to go beyond the basic mechanism. Nobody with an open mind could deny that "freedom of will" can be experienced in our personal daily lives; thus it should be possible to define "freedom of will" in such a way that it is completely compatible with determinism. 4. A simple question: what is the mathematical definition of a "free will decision" of a human, if every future state is a super-deterministic function F(X(t=0)) of the previous state of the universe? Because almost by definition it's the identity function if I read the math right, which doesn't have much philosophical meaning. 5. It is quite obvious that there can't be any mathematical definition. You should rather look for an AI algorithm allowing the emergence of free will. Have you ever watched Stanley Kubrick's 2001: A Space Odyssey? HAL, the main computer of the spaceship, developed a quite dangerous attitude while achieving free will. At least SF fans are able to imagine free will. Needless to say, the computer itself is a purely deterministic device. The neuronal structure of human brains is surely more complex than that of silicon computers. Imagine, for example, that you are trying to explain to a psychiatrist that you have not been able to control yourself for lack of free will! You were just determined to do some very stupid things, and therefore you were not able to do otherwise! It was not your fault.
I am quite sure they will have a place for you, in order to help you do better. Obviously free will does exist! There is no common definition yet. The lack of an accepted definition is also a severe problem if you want to study the neural processes that constitute the mechanisms of free will. But I am quite sure that fundamental physics won't be helpful. What quantum information physicists are saying about free will is not much more than a religious belief. It does not have any measurable consequences! You could equally replace "physical laws" with the "divine laws of the Holy Lord", who caused our existence by applying a big, big bang. 10. I never understood how it should help if the word salad around "measurement" and "collapse" is replaced by another salad around "branching" and "worlds". These concepts are not part of the mathematical theory, so what is the goal: having the most believers? 1. Lehrer, but in MWI "measurement" really is replaced with "entanglement with the environment by interaction with it" (following the same Schrödinger equation) and "decoherence", while "collapse" is not replaced with anything; it's just gone, as it is not required anymore. 2. Dmitry, No, it's not, that's the whole point. 11. Prof Sabine, What you are saying plagues all of decoherence, not just MWI, but of course you already know that. I don't like MWI either, but I think decoherence is already correct. You also already know that the proper thing to do is, in your words, "You should only evaluate the probability relative to the detector in one specific branch at a time." I am not sure why you seem to still have an objection.
Decoherence should clearly be seen as improving on the measurement problem over Copenhagen because 1) it is smooth evolution, not instantaneous shenanigans, 2) it does not assume classical observers, and 3) it is derived from Schroedinger evolution. That means, even if you wish to insist that MWI is equivalent to the measurement postulate, you should agree that this new measurement postulate is a much smaller assumption than the Copenhagen one. But I deny even that. Feynman points out that the only physical observables, which hence should be objectively agreed upon by all observers, are the transition probability amplitudes (make its density-operator version to get rid of phase). That is, thinking about the wavefunction alone is somewhat wrong---all predictions require you to state the initial wavefunction AND the final wavefunction. In your final wavefunction, you need to state which detectors observed what outcome, and those immediately destroy any superpositions that are forbidden. Of course, I do not attempt to explain the probabilities, seeing as others seem to think this is as yet unsolved. Not relevant to the point we are considering at the moment. Maybe there is still a measurement problem, but I think decoherence already explains a whole lot. The size of the problem is now a lot smaller than Copenhagen's. I literally do not see why Copenhagen gets a get-out-of-jail-free card when it literally defines measurement as forever outside quantum theory's purview, and yet when decoherence explains so much more and then has tiny ghosts left as "I don't know", it is somehow no longer acceptable to postulate that something special happens or whatnot. 1. B.F., "I am not sure why you seem to still have an objection." I do not have an "objection", I am merely pointing out that this is an additional assumption, which means that many worlds is not any simpler than the Copenhagen interpretation. This supposed simplicity is why many worlds fans think their approach is superior.
I am pointing out that this is because they are not careful in writing down the assumptions which are necessary to arrive at a description of the world that agrees with what we see. 2. Prof Sabine, Thank you for the clarification. Shock and awe at physicists being sloppy!?!?! When my teachers and profs tell me I am sloppy, I do not even intend to defend myself. Too many have told me that; my response tends to be, "I don't know where I am being sloppy, please tell me, so I can improve." Needless to say, I am not buying any of the standard arguments by MWI advocates. Instead of simplicity, I think it is far more fruitful to consider that it demystifies the measurement-entanglement-observation process, which represents an actually objective improvement of understanding, rather than "I like this more, it is simpler". But do you think it would be fruitful to consider the Feynman view as a better alternative? I mean, I am intending, over the next decade, to gestate a textbook that starts teaching quantum theory from a highly simplified QFT, from scratch. The postulatory basis that would be included in the middle (because I am no monster that would condemn students to missing out on what everybody else is doing) would follow more of Feynman-Hibbs and then some Dirac. It would therefore really be a problem if I don't get this rigorously correct and mislead students. And yet I am, by nature, simply not the rigorous type, so I cannot help myself. I literally require external cross-checks. 3. B.F wrote: "thinking about the wavefunction alone is somewhat wrong" Yes, it is clearly wrong. The continuous and deterministic evolution of the wavefunction has confused many people. The Schrödinger equation is clearly at odds with the jumps and randomness that lie at the heart of quantum physics. It is a mistake to think of the Schrödinger equation as describing an *individual* quantum system. "decoherence already explains a whole lot" No. Decoherence is just a word.
It papers over the real discontinuities and suggests a "gradual" evolution from the quantum to the classical world. Coherence theory is based on classical optics and requires statistical machinery to describe the superposition of random waves. Yet some people seem to think that decoherence applies to individual systems, rather than ensembles. "starts teaching quantum theory from a highly simplified QFT" Yes, such a textbook is badly needed! Quantum field theory and quantum statistical mechanics are much closer to the core of the "measurement problem". Are you aware of the closed time-path (Schwinger/Keldysh) formalism? It combines unitary evolution and the measurement postulate in a seamless way. It's the transactional interpretation fully fleshed out. 4. Werner, I don't know why comments sometimes get lost. Let us begin with agreements. Thank you for adding to my motivation for writing the book. I am also myself an enthusiast of the transactional interpretation. I had learnt the Schwinger-Keldysh closed-time-path formalism before, in many-electron physics. But I have zero idea what you mean by that having anything to do with measurement, and Google is equally stumped. For all I know, the formalism is merely supreme rigour, for realising that it is not ok to assume that future infinity has the same Hilbert space as past infinity, and that we ought to only ever impose the zeroing of the vacuum state in one time slice. So, they simply evolve the future-infinity states back to past infinity and do all evaluations there. That is totally inappropriate for many-electron physics, since that level of rigour is totally washed out by all the horrible approximations. It is only sensible in few-particle QFT. Anyway, let's move on. I think your biggest problem is "It is a mistake to think of the Schroedinger equation as describing an individual quantum system" It was known to the pioneers that we do not have a choice in this.
Dirac pointed out early on that the double-slit experiment, done with single photons and single electrons at a time, needs the wavefunction to describe single particles in order for the particles to avoid locations of destructive interference. These arguments are so powerful that Born could successfully convince Schroedinger that his wave equation had to describe probability amplitude waves, that the theory had to have quantum jumps, etc. It is also the reason why pilot waves have the driving wave pass through both slits to get the interference pattern, even though the single dot only ever passes through one. "[Decoherence] papers over the real discontinuities and suggests a gradual evolution from the quantum to the classical world" Isn't this a plus? Why would you want to have sudden jumps that you literally postulate not to be able to explain? What "real discontinuities" are you saying have been experimentally observed and require explanation? Decoherence is so advanced now that we consider things like the preferred basis to be solved. You only need Schroedinger evolution to get alpha-ray tracks in bubble chambers even when the decay ought to be spherically symmetric. Is that discontinuous enough for you? We can also explain why you only see one result in measurements. What more do you want? 5. "What 'real discontinuities' are you saying have been experimentally observed and require explanation?" Don't you believe in atoms, in some graininess of matter? Doesn't a counter register clicks? (I am not claiming that we should be able to predict when atoms decay!) Where does this craving for continuity come from? Schrödinger wanted to get rid of quantum jumps, and he failed. And his mythical wave function afflicts the thinking of almost every physicist. "I think your biggest problem is ..." Of course I know about Dirac's dictum that a single particle interferes only with itself.
But even then the wave function describes only a statistical ensemble (the experiment has to be performed with many particles). If you insist that the time-dependent Schrödinger equation describes an *individual* particle, the statistical character of QM is lost, or has to be, somewhat artificially, put back in using the measurement postulate. "I have zero idea what you mean by that having to do with measurement" The Keldysh formalism can deal with irreversible processes. And isn't photon absorption an irreversible process? At least in everyday life absorbed light turns into heat. But in the Aspect et al. experiments photon absorption acquires special status as a "measurement" process? Only by adding the confirmation waves can we arrive at a unified description of elementary processes. I've explained this in more detail in Sabine's blog post on the problem with quantum measurements. It was probably too late for you to notice it. 6. Werner, Nitpick: Dirac's dictum is NOT "that a single particle interferes ONLY with itself". Also, it would be better if you could just give me some links on what you mean by Keldysh vs. measurement. Like, if you meant that photon absorption is irreversible, then ordinary QED is perfectly able to deal with this, and you do not actually need Keldysh. You suddenly talk about confirmation waves (yes, this is standard transactional interpretation, I know) in this context and I am totally lost. It is also important to note that whether photon absorption is irreversible or not actually depends on the final state. The electron could easily have coherently re-emitted the absorbed photon, in which case the absorption has to be reversible. This is also why I really take Feynman's view that the only sensible things to talk about are when you have specified both initial and final states, to get only transition probability amplitudes.
More importantly, if you insist that wave functions only describe statistical ensembles, then you need to explain 2 things: 1) Physicists universally deduce wave functions by solving for the eigenfunctions of operators or whatnot. This Hilbert-space trickery implicitly or explicitly assumes only one single particle (Slater determinants or better to do more, and that changes things). Those who want psi-epistemic pictures, and also your statistical ensembles, need to explain why mathematical games could produce experimental outcomes. 2) If quantum theory only describes statistical ensembles, then how does it explain measurement outcomes either? We do a lot of stuff with single particles, and we also ought to have an answer to how detectors work. If detectors work statistically, why should the results agree with quantum predictions so exactly that Bell's inequalities get violated? Finally, I already told you that Schroedinger evolution happily explains why counters register clicks and all that. Atoms and the graininess of matter reflect discrete conserved quantities and are literally not conceptually difficult. Heck, Schroedinger wanted to introduce waves precisely because self-consistent waves naturally give rise to discreteness. Yes, of course Schroedinger failed in his quest to get rid of quantum jumps. But the same Schroedinger evolution, with Born's probabilistic interpretation, begets all the correct results. It is not so much that I am against your scheme. If you want to get rid of wave functions in all of physics entirely, well, you should point me to some of your publications on that topic. I am just trying to tell you that decoherence with mere Schroedinger evolution already sufficiently explains how you get counter clicks and all that. I am not even into MWI; anything that has decoherence will suffice. And you know I am into the transactional interpretation too.
However, I am not able to see how a statistical interpretation, and/or doing without wave functions, is supposed to work, let alone help us understand more about quantum theory. Heck, I already pointed out that I am even open to entertaining post-quantum ideas, e.g. regarding pilot waves. I think I have already invested a lot of work into this accursed topic, so if you want me to understand your point, please do not ask me to put in more maths myself. Send me some links instead. I'll read them. 7. The moment you map one copy of the detector to one unique possibility there is discrimination. This discrimination is the fallout of measurement. There can be no discrimination without measurement. To say that each copy of the observer is mapped to a unique possibility in each unique branch before the measurement took place is not reasonable. Before measurement, superposition is something undefined whose outcomes are uncertain; that being the case, how can the branching and the association of copies to possibilities be predetermined? 8. Superposition is when measurement is not, period. Any attempt to describe or define superposition is still measurement. Measurement is always with reference to something. That something is the measurer or the observer-program. It is measurement that determines the outcome and resolves the uncertainty. Measurement destroys the superposition, in that it destroys the uncertainty. When this happens the outcome is 100% probable. 9. Werner, I cannot help myself. After you mentioned that the Keldysh formalism does something more, I read up, and indeed my initial views were wrong. Yes, Keldysh allows you to do non-equilibrium, adds time variance, etc. That is wonderful. I am not sure how I could do a hyper-simplified QFT from that, though. I would still need some pointers to how Keldysh deals with measurement. I don't know how to search for that. Gokul, just go away. You haven't learnt anything since we last met.
Still rambling to yourself on everybody else's threads. 10. B.F wrote: "I think I have already invested a lot of work into this accursed topic." Sorry, I certainly do not want to waste your time. I'm a victim of the "accursed topic" myself (a dropped-out astrophysicist trying to understand Quantum Theory). Surely there must be a reason why this topic is still being debated so much? "you need to explain 2 things" 1) I have no problem with the time-independent Schrödinger equation and its eigenfunctions. They are of course useful, but still they represent only statistical information. Or is an energy eigenstate something "real" for you? "why mathematical games could produce experimental outcomes" Nobody can explain the "unreasonable effectiveness" of mathematics. The models that survive somehow capture essential features of reality. The Maxwellian velocity distribution, for example, is not forced on atoms by pure thought, but is a useful expression of our experience. 2) "how does it explain measurement outcomes" Shouldn't we be happy to have a working statistical *description* at all? Surely you are not demanding that we supply not only an exponential decay curve, but also individual decay times? "Why should the results exactly agree with quantum predictions so well that Bell's inequalities get violated?" Isn't that what the transactional interpretation achieves? There are intricate correlations, and we can "explain" them by using waves traveling backwards in time. But first and foremost we should be happy that we have at least a consistent *description* of the experiment. You are right that it is a key feature of the Keldysh formalism that it applies to non-equilibrium systems. Detectors are always far away from thermal equilibrium - in equilibrium the measurement signal and thermal noise are indistinguishable. Our views on practical matters are essentially the same. I don't want to "get rid of wave functions in all of physics entirely".
I'm not suggesting any new formalism, or a mechanism to explain the measurement process. What I do urge is to throw out the classical concepts with which QM has always been formulated: "particles" and "measurement". The formulation of QM/QFT should be based on a more fundamental notion: an event. QFT is a statistical theory of events and correlations between them. "send me some links instead" I'm not sure if my last publication (1988!) is available online, but it likely contains very little that is new to you. What I suggested was that you look at Sabine's blog at I don't know how to extract the URL for the relevant post (and it would probably be deleted automatically). But you can search for "Keldysh". (Remember to press "load more" twice.) 12. >you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible. I believe Sean Carroll in the book you recently reviewed provides the answer. Remember the part where he talks about observables with a continuum spectrum of eigenvalues. Decoherence shows how the wave function evolves into parts that "don't interact" (he skips the math unfortunately), and then the way we divide the whole wave function into "branches" is about as arbitrary as our division of the matter around us into chairs, tables and keyboards. There are no two distinct detectors with the beam splitter; we just chose to call similar parts of the state vector "detector with arrow up" and "detector with arrow down". And the rescaling of probabilities when you select one of them is just "conditional probability"; we do it for our convenience. 1. No, decoherence does not solve the problem. 2. Then it's an idea for another important and helpful pop-sci blog post: what exactly is decoherence and why exactly it doesn't solve the measurement problem. I think many readers, me included, will be happy to read it. 3. Decoherence is a process that happens due to the Schrödinger equation alone.
It cannot solve the measurement problem because the measurement process is non-linear whereas the Schrödinger equation is linear. I explained that in my video. Another way to put this is that decoherence will not bring a system into a detector eigenstate, which is what we observe. Decoherence (suitably interpreted) gives you a statistical mixture. 4. Sorry, I still fail to understand the issue fully; that's why more elaboration (possibly in another post, if there isn't one already) would be helpful. In every branch the measurement looks non-linear and the measured part of the system looks to be in an eigenstate. Meanwhile the whole system continues to evolve linearly; there is no non-linear measurement/collapse in MWI. There's no contradiction or problem, as MWI folks see it. Say we have an electron in a superposition state a*|up> + b*|down>, where |up> and |down> are eigenstates of the spin operator along a chosen direction. Then we measure it with a detector. By interacting with the detector and the world around it, the system evolves (linearly) into the superposition a*|spin up, detector saw it up> + b*|spin down, detector saw it down>. No collapse, no non-linearity. Every branch of the detector sees the electron in an eigenstate. For each branch the measurement looks like a non-linear change of the electron wavefunction from the original superposition to either |up> or |down>. But globally nothing of that kind happened; the global superposition remains. If we take this global superposition and only look at the electron part, discarding the |detector saw it down> part, then of course the electron state is not just a*|up> + b*|down> anymore; now the electron is entangled with the detector, so the electron itself can only be described by a mixed state. But that is only if you keep the detector out of the picture. So far I couldn't quite see which part exactly of the MWI reasoning described in Carroll's book you find troublesome.
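The evolution described in the comment above can be checked numerically. The following is a minimal sketch (not anyone's official code; the amplitudes a, b and the two-level detector are illustrative assumptions) showing that the linear entangling evolution leaves the joint system pure, while tracing out the detector leaves the electron in a diagonal mixed state:

```python
import numpy as np

# Basis: electron (|up>, |down>) tensored with a two-level detector.
a, b = 0.6, 0.8                      # example amplitudes, |a|^2 + |b|^2 = 1
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
saw_up, saw_down = up, down          # reuse the same 2-d basis for the detector

# Linear evolution of the measurement interaction:
# a|up>|ready> + b|down>|ready>  ->  a|up>|saw up> + b|down>|saw down>
psi = a * np.kron(up, saw_up) + b * np.kron(down, saw_down)

# Density matrix of electron + detector: still a pure state globally.
rho = np.outer(psi, psi.conj())

# Partial trace over the detector: rho_e[i,j] = sum_k rho[(i,k),(j,k)]
rho_e = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# The off-diagonal coherences of the electron are gone; what remains is
# the diagonal mixture diag(|a|^2, |b|^2) — a statistical mixture, not
# an eigenstate, which is exactly the point under discussion.
print(np.round(rho_e, 3))
```

Note that the global state rho stays pure throughout (rho squared equals rho); only the reduced description of the electron alone becomes mixed.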
What I personally find troublesome is the question of probabilities: what they mean in MWI, and deriving the Born rule. When Carroll starts talking about the "credence" variant of probabilities, I don't see how it leads to frequencies of events in MWI following the distributions that the Born rule predicts... 5. How is a statistical mixture of 50% left and 50% right different from two non-interacting worlds, in one of which the photon goes left and in the other of which the photon goes right? 6. In the former case the system is not in a detector eigenstate. In the latter case it is. 7. "Another way to put this is that decoherence will not bring a system into a detector eigenstate, which is what we observe." Sabine, you might have here a hidden assumption that whenever we make a measurement the system ends up in a pure detector eigenstate. To experimentally prove that the system is in a detector eigenstate you have to prove that there is not even the smallest admixture of other states. If someone had such evidence then decoherence would not suffice to explain our experience in the framework of the MWI, as you point out, but I suspect that such evidence is lacking. 8. Ripi, No, I am not assuming any such thing. I have instead spelled out very clearly previously that the irreversibility of the measurement process (which you would have if you indeed ended up in an eigenstate) is *not* the problem, because you can (and most plausibly do have) small remainders in the other states. The issue is that decoherence generically doesn't get you anywhere close to such a state, as illustrated by Peter's 50-50 example. The reason it doesn't is that the Schrödinger equation is linear. The problem is not that it's reversible. The problem is that it's linear. Measurement is a non-linear process. 9. Prof Sabine, But didn't we just get to an agreement that it is not necessary to be working with what are essentially the collapsed states?
As in, there is no problem with working with the 50%-50% mixed-state density operator for the rest of your calculation, and the collapsing is really just syntactic sugar that makes it easier to keep track of the part of the universal wavefunction relevant to our physical world's actualised branch? I am, of course, not denying it if you say this requires a new measurement postulate; I don't know what postulate would be good, or even whether one is needed (please explain more!), I am only interested in why you think it is necessary to be in a detector eigenstate. I am also going to have to complain about the sloppiness in asserting non-linearities. What do you mean by saying that measurement is a non-linear process? Is Schrödinger evolution linear (as needed for superpositions) or non-linear (when you impose the single independent particle approximation)? Is decoherence linear (derived from Schrödinger evolution) or non-linear (master-equation-style evolution terms aren't linear, are they? They destroy superpositions too, so is that sufficiently non-linear for your taste?) I am inclined to think that measurement should be a linear process, since decoherence is sufficient to get branches that are individually detector eigenstates. Again, I am not claiming that this solves all issues. I am just asking you what you mean. 10. B.F., If you want to "keep on working" with the mixed state, you are taking on a neo-Copenhagen interpretation (the state describes knowledge and is not a real thing). In this case you are in conflict with reductionism, as I said. The measurement process is non-linear because if you have one prepared state that evolves into eigenstate 1 and another prepared state that evolves into eigenstate 2, then the superposition of these prepared states will not evolve into the superposition of eigenstates. I explained this in my video. The Schrödinger evolution is linear. Tracing out part of the system does not bring you into an eigenstate (and generically not even close to one) either.
As I already said. 11. Prof Sabine, I did watch and read, and rewatch and reread, multiple times. Let's use your words again. Decoherence derives that "if you have one prepared state that evolves into eigenstate 1 and another prepared state that evolves into eigenstate 2, then the superposition of these prepared states" will evolve into a mixed state of the two detector eigenstates. All that is needed to guarantee that you either see one branch or the other is that the detector's own eigenstates are orthogonal. Mind you, not the system in detector eigenstates, but rather the detector in detection-result eigenstates. It is not actually tracing anything out (unless you mean within decoherence itself). Since decoherence treats the potentially macroscopic detector as a quantum system too, I do not see how a conflict with reductionism arises from merely having mixed states that we might later partially ignore here and there. Note (for others) that I am not even deciding whether to update to a collapsed density matrix or to use the mixed density matrix. I think I am getting to understand what you are saying is a needed new measurement postulate. I think you simply mean that Born probabilities, the probabilistic interpretation of the diagonal coefficients of the density matrix when expressed in detector eigenstates, is a postulate regardless of Copenhagen/MWI/PWT. If you just mean that, then I wholeheartedly agree. There is no improvement upon this aspect by moving from Copenhagen to decoherence, and it is not possible to get it from within Schrödinger evolution either. I don't think anybody is even suggesting that this could be solved. (Except maybe Prof Carroll and some other MWI extremists.) I really do not think this should be called the measurement problem. Not least because calling it the Born-probabilities problem far better pinpoints where the difficulty is---decoherence already explains the details of measurement except this and maybe some other stuff.
At least, I won't be spending so much time confused as to what is meant by "the measurement problem looms open with decoherence" if all you meant is this. I would just give up and move on, accept that it is a postulate. 12. B.F. You completed my quote with your own words, and wrongly so. Think about it. You seem to know the math. Write down the assumptions you need to define "what you see" and "detector". 13. Prof Sabine, OMG, sorry, it looks like I made you say something else. As in, I know you know that decoherence explains all the way to the decohered mixed states. That is what is needed, and meant. Please reply to the other stuff in that comment. I kind of do not know what you want me to do. But I can start. A detector is any quantum system prepared in an initially inert state that can be entangled with the system we wish to study, such that the studied system would send the detector into a different state. It is often wanted that the detector's different states amplify the perturbation from the studied system to make a permanent mark that allows for repeat measurements by yet larger systems later. Pointer states are included, by simply having a video or photograph. A human is not really important; as long as the permanent mark is made, observation of the results is to be considered completed and the human can forget to collect the data entirely. In the Mott problem, each H atom as detector has its own energy eigenstates (really wrong, since electrons share the same underlying field), so that the alpha ray passing by will excite them to states orthogonal to the ground state as the definition of detection. Each of them is also orthogonal to any other atom being excited. Later, the H atoms re-radiate, and that could be photographed as bubble chamber photographs. A1: All systems are quantum mechanical in nature.
A2: It is possible to have weakly interacting systems, such that the actually exact multiple-system Hilbert space is well-approximated by the tensor product of the individual system Hilbert spaces. (Needed, or else orthogonality of detector eigenstates is not well-defined.)
A3: System and detector are weakly coupled enough to have the almost-separable Hilbert spaces as above, yet not so weakly interacting as to forbid entanglement.
A4: All standard assumptions of QM _minus_ Hermitian operators and expectation values.
A5: Subsequent detectors are not independent. In particular, re-detection of energy-compatible observables (which may be incompatible with each other) necessarily begets the same value.
A6: Upon incompatible detection, Born probabilities appear.
Assumptions A5 and A6 are sufficient to tell us that projectors are involved. The projection is coming not from Hermitian observable operators acting upon the system. The projection is coming from detectors having orthogonal states. This also explains why, in the double-slit experiment, putting a detector at the slits is different from not doing so. The detector at the slits will decohere into orthogonal detector states and thereby spoil superposition when you later decide to specify the final states, whereas without the detectors there, there will not be orthogonality coming from the detectors at the slits. Since the detectors furnish the projectors, it is then natural that the measurement operators inherit the projectors to the detector eigenstates. Since detector eigenstates can always be labelled by the measurement outcomes (and other auxiliary variables if needed), this means that measurement operators are measurement outcome values multiplied with projection operators. Since Born probabilities are directly assumed in A6, the expectation value is not postulated. 13.
Happily enough, the Many-Worlds Interpretation does not mean that the worlds have to be interpreted sequentially or, alternatively, all at the same time, but just as an idea. We have not finished interpreting our own world yet. 14. 100% possibility is mapped to one observer in each world, right. But the observer is a program, and therefore limited to a finite set of possibilities; for example, I cannot fly or walk on water. If I can do that then it is not me, a human program, but an alien program. But MWI says there are different copies of the same program or observer. So, if the 100% possibility is flying, then the observer-program cannot accommodate the possibility. Then what happens in that world? Would Bohm come along and say it is the hidden variable? Would Bohr come along and say the "flying" possibility disappears? So, we are back to square one: What are the hidden variables? What happens to the "flying" possibility? The realization of the possibility is dependent on the scope or domain of the observer-program. The observer, being a program, has limited domain and range. 15. Quantum interpretations tend to invoke something that is not quantum mechanical. We have in all of them something which breaks apart the quantum-ness of the world. Heisenberg realized this with the Copenhagen interpretation, where he saw a problem with the definition of the "cut-off" between the quantum and classical domains. Bohr's insistence on there being a dualism between the quantum world and the classical world not only means the classical world has no quantum description, but it also means there exists a boundary between these two domains. Yet it is difficult to know where this boundary is. Experimentalists are currently working on this, and I recently read an article about molecular beams of molecules with high Dalton numbers that show quantum behavior. The Many Worlds Interpretation (MWI) has a funny issue with localization.
We are not able to put our fingers on where this eigen-branching of the world occurs. Measurements in the Copenhagen setting are a localization. With MWI this lack of localization is maybe more commensurate with quantum gravitation. However, this is in a sense a position representation of the same problem that Heisenberg pointed out, which is more associated with the large scale in mass, momentum, energy or action of the measurement system. The issue of localization in MWI is then found in the momentum representation of the same problem in the Copenhagen interpretation (CI) pointed out by Heisenberg. The matter of reconfiguring the probabilities based on the "phenomenon" of the observer is inherent in all quantum interpretations. QBism takes this further and says this IS the basis for measurement as a Bayesian update. If we think of all physics as a form of convex sets of states, then there are dualisms of measures p and q that obey 1/p + 1/q = 1. For quantum mechanics this is p = 2, an L^2 measure theory. It then has a corresponding q = 2 measure system that I think is spacetime physics. A straight probability system has p = 1, sum of probabilities as unity, and the corresponding q → ∞ has no measure or distribution system. This is any deterministic system, think completely localized, that can be a Turing machine, Conway's Game of Life or classical mechanics. The CI tells us there is a dualism between p = 2 and q → ∞ on a fundamental level, or noumena to use Kant's term, while MWI tells us that p remains p = 2 on the noumena and the shift is with the observer's phenomena. QBism puts CI on steroids and says all there exist are Bayesian updates, and in effect there is none of this sort of dualism. I doubt there is any quantum interpretation that has either a theoretical proof for its truth value, or an empirical hook that gives it an observable advantage.
We might however think of over-complete coherent states, such as laser states of light, as those which have a classical-like symplectic structure. These states are amenable to Wigner's quasi-probability, where we can define a basis |p, q), or really more |z) for z = p + iq. There is a whole range of related physics with condensates, superfluids and states that occur often with a Ginzburg-Landau quartic potential or Bogoliubov coefficients. Then there are states removed from this condition that are mixed or maximally mixed states. This is what Einstein formulated with his photon emission coefficients! In a quantum gravitation setting we may then see this as a way of looking at a classical background manifold with gravitons. In this way we might then also be able to look at how spacetime is "built up" from entanglements of states. The emission of Hawking radiation and the "damage" done to quantum states is remarkably similar to how Tr(ρ^n) for n larger than unity is not preserved. 16. continued due to space limits The paper Quantum Theory of the Classical: Quantum Jumps, Born's Rule, and Objective Classical Reality via Quantum Darwinism by Zurek, arXiv:1807.02092v1 [quant-ph] 5 Jul 2018, offers the hypothesis that unitarity is broken with nonlinearities that occur with large-N quantum number systems. This then maintains that the classical states of the world are preserved against environmental decoherence by a sort of environmental supersymmetry. This too hints at some sort of conservation of quantum phase, so quantum information or qubits are conserved, and there is maybe something similar here to the connection to Einstein coefficients with coherent and maximally mixed states. There are plenty of things to think about here, and I have thought for many years this issue is connected in part to problems with the quantization of gravity.
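The point that Tr(ρ^n) is not preserved for n larger than unity can be illustrated with a toy dephasing map in plain Python (a sketch under simple assumptions: a single qubit, and full decay of the off-diagonal terms in the measurement basis):

```python
# Qubit in the superposition (|0> + |1>)/sqrt(2): a pure density matrix
rho = [[0.5, 0.5],
       [0.5, 0.5]]

def trace_power(m, n):
    """Tr(m^n) for a 2x2 matrix, by repeated multiplication."""
    acc = m
    for _ in range(n - 1):
        acc = [[sum(acc[i][k] * m[k][j] for k in range(2))
                for j in range(2)] for i in range(2)]
    return acc[0][0] + acc[1][1]

# Full dephasing in the detector basis: off-diagonal terms decay to zero
rho_dec = [[rho[i][j] if i == j else 0.0 for j in range(2)] for i in range(2)]

print(trace_power(rho, 1), trace_power(rho, 2))      # 1.0 1.0
print(trace_power(rho_dec, 1), trace_power(rho_dec, 2))  # 1.0 0.5
```

Tr(ρ) = 1 survives the map, but Tr(ρ²) drops from 1 to 1/2: the purity for n = 2 is exactly what is not preserved.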
The connection between how Tr(ρ^2) is not preserved in measurement and Hawking radiation, the possibility spacetime is emergent from quantum entangled states, convex sets of states, and other connections indicate there may well be overlaps between quantum gravity and this subject of quantum decoherence or measurement. 17. How about the delayed choice quantum eraser experiment? How else do we explain that unless we use many worlds? Retroactive backwards-in-time influence is religion more than creating many worlds, surely!? 18. Wave functions are not real, physical things. Wave functions do not collapse. The belief in collapsing wave functions should be our prime concern. 1. That is a bit of an inversion. The usual thing is to find ψ-epistemic interpretations with a collapse and the ψ not existing, and it is the ψ-ontic interpretations with ψ existing that have no collapse. Stochastic and ensemble QM or QED have this feature, and consistent histories as well. Though CH has a lot of problems. 2. There is no wave function or epistemic language (as if nothing was real until a human brain evolved to "observe" the universe) in quantum measure theory. QMT's focus is on the nature of the (quantum) measure space, which seems a more natural thing to do for probabilists, not that it is necessarily the "wave" of the future for quantum theory. Evolving Realities for Quantum Measure Theory, Henry Wilkes: "whilst Hilbert space quantum mechanics uses the Hamiltonian and collapse for its dynamics, in QMT we use the quantum measure, which measures the sum of quantum interferences between pairs of histories in an event" 19. Hi Sabine, you wrote: "If you believe in the Copenhagen interpretation you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents. Because if that was so, you would not need a second equation besides the Schrödinger equation.
That you need this second equation, then, is incompatible with reductionism." In my view, your blogpost gives really convincing arguments for this view. We should face the idea that there might be a problem with the assumption of reductionism! In an earlier blogpost you argued that the measurement process is about information loss. I am not so sure any more whether this is really the case. One could argue that the wave function is only an approximation of the unknown true physical state in all its particular details. If you take this view, the measurement process would be about gaining additional information about the real world within classical terms. Learning algorithms, biological evolution, etc. are all about accumulating or gaining some kind of information, which seems to be almost impossible to deduce from the quantum mechanical wave function, which has a quite limited **predictive** power in explaining our real world observations. 20. I thought this article by Chad Orzel was a more palatable view of MWI, though I'm not even a layman in the field (a pagan perhaps?): 1. From the article by Chad Orzel: "There's only one universe, in an indescribably complex superposition, and we're choosing to carve out a tiny piece of it, and describe it in a simplified way" It seems to me that this statement explains the origin of the problem with the reductionistic approach. 2. Saw that, Prof. Hossenfelder: "carve out a tiny piece of it[indescribable]". This carving out is what I call "abstraction", "to draw away". Abstraction is according to the observer or program. Different observers abstract different realities. Because you are abstracting according to a background, a frame of reference, which is the observer-program, "abstraction" is "measurement".
If you remove the observer program, the abstraction or description goes away; rather, the abstraction merges into the whole, and it is one whole thing, the indescribable (Chad Orzel) or the undefined or the uninterpreted primordial. Looks like I have not been talking nonsense. 3. Continued. . . with this statement by Chad Orzel what I have been saying about reductionism falls into place, that is, out goes the observer and in comes the indescribable. . . and reductionism ends. 4. Sorry Gokul, but I do not understand anything of what you are talking about. 21. I still find it very worrisome that established scientists turn to MWI for explaining QM. That may be a basic fact, but its importance is underestimated as we simply try to understand the behavior of >this< universe we are living in. 1. Marc, I also think it is bad that people simply turn to it. However, I cannot agree with your particular complaint. That would be like asking what it means for the first few planetary orbits to just so happen to be almost nicely related to the Platonic solids. 22. A human is programmed to detect optical light; he or she cannot sense infrared light or ultraviolet light. In that sense the human program is limited to optical light. I need to create a device to sense either infrared or ultraviolet light. Such a device is a hardware program because it performs the well-defined function of detecting ultraviolet light. Then whatever it detects is translated into a format that is intelligible to us humans: maybe a numerical or graphical representation. This device cannot detect infrared light; it is simply not programmed to. Let us say there are only light waves. We don't know what they are. These waves are undefined. But we know that in our absence or in the absence of any program like an infrared or ultraviolet detector they are just waves; we can't describe or define or discriminate them as visible, infrared, ultraviolet etc.
That indescribable, undefined thing, whatever it is, is what I call "actuality". Now, these indescribable waves being everywhere, I first introduce an infrared detector. What will I detect? Infrared light, right. I remove this detector. Then again there are indescribable waves. Second, I introduce an ultraviolet detector, what will I detect? Ultraviolet light, right. I remove this detector. The actuality returns: there are only indescribable, undefined waves. Third, I enter among these waves. What will I see? Bright visible light, optical light. I jump out from among the waves, and the actuality returns. Detection or seeing is measurement because I detect or see based on or with reference to the detector or human, which is a program. Any description, any definition with respect to a frame of reference is measurement. I am describing what "the other" is with respect to the frame of reference. If the frame of reference is an infrared detector or program, then I describe infrared light; alternatively, if the frame of reference is an ultraviolet detector or program then I describe ultraviolet light. The rest of the waves are simply undefined. I say what applies to the macrocosm also applies to the quantum world. Let us consider superposition as the undefined. First, if I introduce a detector or program that detects "left", then what the detector detects is "left". When I remove the detector, the actuality returns, which is undisturbed superposition; because there is superposition the interference pattern returns. Second, I introduce a detector that detects "right", then what the detector detects is "right". I remove the detector, and the actuality, the interference pattern, returns. There are two ways of looking at this measurement. First is that when left or right is detected the rest is undefined. Second, the rest is undefined because the whole wave contracts or coagulates when the electron or photon "records as memory" its interaction with the detector.
The entire superposition of the states is used up to record that one state of interaction with the detector. This creates a memory-load which is responsible for the particulate behavior of the photon or electron, which till the point of interaction or measurement behaved like a wave or existed as a superposition. What is happening during measurement? Recording, memorization, and programming. The act of measurement programs the electron or photon. Thereafter, the programmed electron or photon acquires a particulate nature. 1. Gokul, regarding electromagnetic waves, your arguments are a bit misleading. Electromagnetic phenomena are predicted by Maxwell's equations. Radio waves were predicted successfully long before we were able to turn our radio on in order to enjoy some music from our favourite radio station. It is sometimes quite helpful to have an appropriate theory before trying to build a detector. It helps to discriminate the waves in terms of their wavelengths (frequencies). The waves are not as indescribable as you might believe. 23. Where does reductionism end? Why? The undefined or the uninterpreted primordial is the ultimate actuality. It is indescribable, beyond measurement. Description, definition, measurement begin the moment I introduce the observer or the program as a frame of reference. What happens when I remove the observer? Measurement ends. There is the undefined or the uninterpreted primordial. When measurement stops, you can't reduce any further: if measurement is possible, and it is possible because there is the inkling of the observer, there is reductionism. This implies that the moment I remove the observer, measurement ends, and therefore reductionism ends. 24. The moment measurement begins and the observer cuts out a reality from the actuality, we enter the classical world. 25. "Most sign up for what is known as the Copenhagen interpretation" Do you have data on this?
I remember reading that, while the Copenhagen interpretation was indeed the favoured one decades ago, this has now changed, with the many-worlds interpretation now much higher in, possibly at the top of, the polls. Of course, the correct interpretation isn't decided by vote (and one could also argue that if it could be decided at all, then it would no longer be just an interpretation), but you seem to be painting the many-worlds interpretation as some sort of fringe position (at least with respect to the fraction of scientists who subscribe to it). Joke du jour courtesy of Roger Penrose: "There are probably more different attitudes to quantum mechanics than there are quantum physicists. This is not inconsistent because certain quantum physicists hold different views at the same time." 1. One philosopher said "When you are with the majority, that is the time to think." How many people understood that the Sun is the center of the solar system when it was first stated? A handful, maybe. How many people understood that gravity is not a force but a curvature of space-time when it was first stated? Maybe two, because Arthur Eddington asked who the third man was. A fact is non-democratic, in that numbers don't count; either you see it or you don't see it. Even if everybody on planet earth says that the earth is flat, is that the truth? 26. What is your explanation as to why many people who are obviously very smart, such as Max Tegmark, David Deutsch, Sean Carroll, etc., subscribe to the many-worlds interpretation? 1. I'm a physicist, not a psychologist. 2. Phillip, I think the explanation is quite easy. It is a very rare event that a massive change in the current physical paradigm arises from the work of a single person, like e.g. the game-changing contributions to physics from Albert Einstein in his wonder year 1905. Most physicists tend to pick up ideas with a tendency to be just below the surface of awareness.
MWI is actually very trendy, so it's quite obvious that clever and smart people become attracted. 3. Sabine wrote: "I'm a physicist, not a psychologist." Shouldn't a theory be judged by the *physical* arguments put forward? Of course there is a strong psychological force at work here: physicists are hooked to theoretical preconceptions. Many cannot even conceive of quantum theory without the wave function. 4. Phillip, They use their brain power in the wrong places... Einstein's principles (like Mach's) are mostly forgotten in favor of new fancy QM interpretations. Reality should be realistic... :O) 5. "Shouldn't a theory be judged by the *physical* arguments put forward?" Hi Werner, you are definitely completely right about this. But with MWI there is nothing to be judged at all. So far, no one has any clue how to set up an experiment in order to verify MWI. In other words, MWI is "not even wrong". 6. We are not brushing aside MWI nor are we finding fault with it, rather we are going into it very deeply to find out the facts of QM. If I start with a prejudice or a bias then I cannot go very far, very deep. This prejudice or bias is the psychological observer. When the observer is active, I will see what I am programmed to see and not the fact. The observer influences observation, which is no observation at all. This psychological observer is the program put together by scientific tradition and orthodoxy. Berzelius was a name in chemistry. He said organic compounds cannot be synthesized in the laboratory. Chemists of his day were programmed to this conclusion, which then became their tradition or background. The tradition or background is the observer or the program. If Wöhler had started off with this background or observer interfering in his enquiry, could he have synthesized urea in the laboratory?
Newton was an authority on gravity. If Einstein had given in to the authority of Newton by allowing the observer or the psychological program of Newton's tradition to interfere in his enquiry, could he have discovered relativity? "Einstein broke the Newtonian orthodoxy" Aldous Huxley. This is because the activity of the observer was in suspension, and therefore he saw. Observation is when the observer is not. 7. "A former LEP experimentalist," yes, there is a way to falsify MWI. If it turns out that quantum computing is not possible because of some currently-unknown mechanism that causes wave function collapse no matter how carefully we isolate a system from its environment, then MWI is out, and QM as we know it needs to be modified to account for this mechanism. 27. This is a live experiment about observation of waves: 28. Probabilities "jump" from 50% to 100% in MWI because we are talking about conditional probabilities. We can apply the conditional probabilities before the measurement, but then it would be an empty statement: "Given that the detector measured an up spin, the probability that the spin is up is 50%." At this point, the detector has no knowledge about the state of the spin. Once the detector makes a measurement, this would become: "Given that the detector measured an up spin, the probability that the spin is up is 100%." We need to make a similar statement for the spin-down state. Then the time evolution between the initial state before the measurement and the final state after the measurement can be fully described by a linear, unitary equation. The measurement problem is solved. 1. Udi, Making any statement about what a detector measures or doesn't measure requires that you define what you mean by "detector." 2. Sabine wrote: In this context a detector can be simply a two-state system: |detector measured up spin> and |detector measured down spin>. I can explicitly write the initial state, the final state and the unitary transformation between the two.
It is just tedious to put it in a comment with no support for writing equations. 3. Udi, Good, now you have a detector. Now please calculate what you observe using nothing but the Schrödinger equation. 4. Sabine wrote: This is going to be ugly. I will write it as a two-particle system. The first is the spin we are measuring and the second is the detector. In the initial state the particles are set up to be independent of each other:

psi_initial = (|u1> + |d1>)(|u2> + |d2>)/2

In vector notation it can be written as:

psi_initial = [1 1 1 1] / 2

In the final state the particles are coupled:

psi_final = (|u1>|u2> + |d1>|d2>) / sqrt(2) = [1 0 0 1] / sqrt(2)

The unitary matrix that transforms between the initial and final state is

U = 1/sqrt(2) * [ 1 0  1  0 ]
                [ 1 0 -1  0 ]
                [ 0 1  0 -1 ]
                [ 0 1  0  1 ]

psi_final = U psi_initial

You can diagonalize it and write down the Hamiltonian if you want. I don't think it will give you any more insight. The final state obeys exactly the conditional probability that I wrote: "Given that the detector measured an up spin, the probability that the spin is up is 100%." 5. Udi, Do I really need to say that this is a circular argument? 6. Sabine wrote: "Do I really need to say that this is a circular argument?" I don't see any circular argument here. The statement: "Given that the detector measured an up spin, the probability that the spin is up is 100%." is just a description of the equation: P(u1|u2) = P(u1 ∧ u2)/P(u2) = (1/2)/(1/2) = 1. The detector can measure spin up or spin down. It cannot measure a superposition of the two spin states, because this is how I decided to build the detector. If I wanted to measure something else, I would choose a different detector. 7. Udi Fuchs, please provide an interpretation of the state [1 0 0 0], and the final state after the action of your unitary matrix upon it, namely [1 1 0 0]/sqrt(2). 8. Arun wrote: [1 0 0 0] = |u1>|u2>, and [1 1 0 0]/sqrt(2) = |u1>(|u2> + |d2>)/sqrt(2). There is nothing special about this transformation.
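The arithmetic in this exchange is easy to check numerically; here is a plain-Python sketch (the same matrix and basis ordering as quoted above, nothing beyond them assumed):

```python
import math

s = 1 / math.sqrt(2)

# The quoted unitary, rows in the basis |u1 u2>, |u1 d2>, |d1 u2>, |d1 d2>
U = [[s, 0,  s,  0],
     [s, 0, -s,  0],
     [0, s,  0, -s],
     [0, s,  0,  s]]

def mat_vec(m, v):
    """Matrix-vector product for the 4-dimensional two-particle space."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# U is real, so unitarity reduces to U^T U = identity
UtU = [[sum(U[k][i] * U[k][j] for k in range(4)) for j in range(4)]
       for i in range(4)]

# psi_initial = (|u1> + |d1>)(|u2> + |d2>)/2 should evolve to [1 0 0 1]/sqrt(2)
psi_initial = [0.5, 0.5, 0.5, 0.5]
psi_final = mat_vec(U, psi_initial)

# Arun's test state: |u1>|u2> = [1 0 0 0] should map to [1 1 0 0]/sqrt(2)
arun_out = mat_vec(U, [1.0, 0.0, 0.0, 0.0])

print(psi_final)
print(arun_out)
```

Both quoted states come out as stated, and U^T U is the identity, so the "update" really is implemented by a single unitary map.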
There are many unitary operators that would do the measurement I wanted (anything in the U(3) subgroup of U(4)), I just wrote the first one that came to my mind. 9. Udi, If you do not understand that an if-then statement doesn't prove the conditional, I cannot help you. 10. Udi Fuchs, physically what does this time evolution mean? 11. Sabine wrote: The conditional clause is "the detector measured an up spin". Of course it cannot be proven true because it is not necessarily true. What is true is that either "the detector measured an up spin" or "the detector measured a down spin". We have set up the detector to measure spin in the up/down direction, that is why it is our preferred basis. In this basis there are exactly two independent answers, "100% up" and "100% down". You can have a superposition of the detector measuring both options, but there is no state where the detector measures "50% up". You can try to ask our up/down detector about left/right spin, but it is not sensitive to this question, so the answer here would always be "50% up and 50% down". 12. Arun wrote: "physically what does this time evolution mean?" I'm not sure it has any physical meaning. Sabine complained that in MWI you need to "Update probability at measurement to 100%". So I gave the simplest example I could think of that demonstrates how this "update" works with a unitary operator. If you want an example that is more physically realistic, you can look at a Hamiltonian that describes the Stern-Gerlach experiment. 29. Do you have a way to explain the power of quantum computing other than the many-worlds interpretation? 1. The postulates you need to derive the speed-up of quantum computing compared to classical computing do not depend on the interpretation. That derivation is explanation enough for me. 2. The sum-over-histories formulation of quantum computing, Ben Rudiak-Gould 30.
"But we already know that this isn't possible because using only the Schrödinger equation you will never get a non-linear process." That is wrong, because you do not need to assume a nonlinear quantum process. The Schrödinger equation, in its nonrelativistic world, is sufficient to describe how the detector works! And it's linear. "Detection" of a single particle is just a cascading set of standard processes that proceed from the microscopic to the macroscopic. A photon creates an electron-hole pair in silicon. That pair separates in an electric field, the electron ending up on the gate of an FET. This controls the flow of other electrons onto a wire to a second transistor, the output of which is connected to a gong that makes a sound heard by a whole lecture hall. There is one gong for right, one for left. No one disputes that. No one disputes that transistors are made of atoms described by quantum mechanics. No one disputes the quantum theory of semiconductor band structure. No one disputes the quantum theory of how transistors work. Apparently some people dispute that you can construct a quantum mechanical operator that describes the position of the clanger in the gong, but I consider that silly ... a sum of operators projecting out the positions of the metal atoms in the gong will do. One can even dope it with atoms of an element otherwise unused in the system, and project out those only. The uncertainty principle operating on such sums generates negligible uncertainty. This of course implies that the original state of the detector system matters ... but we cannot measure it exactly. It does NOT dispute that in fact the wavefunction of the original particle (if a fermion or composite boson) actually DOES collapse ... it does. It does this because it is entangled with the wavefunction of the detector, and it is the projection of the original particle out of the wavefunction of the whole apparatus at the time of the gong that matters.
For photons as particles, of course, the "wavefunction" of the photon does not collapse, it disappears ... but here we enter the realm of field theory. It's also true, of course, that if you expect to see 50% one way, 50% the other, adding to 100%, for a particle, you have to KNOW IN ADVANCE that there is just one, not zero or many, particles. The stuff that determines that is part of the "apparatus" you have to include in the Schrödinger equation. I find it very odd that many people insist on denying unitary evolution! Some say "but you have to PROVE that this results in the Born Rule". I say, no, I'm perfectly free to use the Born rule on the probability generated in the classical-size measurement at the gong ... we all agree on unitary evolution. There really is only one world, and observing such a classical-size result per se leads to consequences which are best worried about by philosophers. This is the crux of the matter. For some reason lots of bloggers don't like to consider such simple explanations. Some actually censor out my comments. Oddly, all of my colleagues, when I or my (NAS member) department head explain this, seem to understand just fine. But then, they are neither fancy physicists nor philosophers. Yes, this is too long. 1. dtvmcdonald, Yes, because what you say is obviously wrong. It doesn't matter how many linear processes you line up after each other, that still doesn't make a non-linear process. I strongly suggest you try to actually write down the equations, because that will make it immediately clear. I am not saying this because I am dismissive but because I started at the same point as you 25 years ago. 31. Let us take four cases of the observer and the observed. 1. A scale, the observer, and a straight line of finite length, the observed. 2. A human on the railway platform, the observer, and Doppler's effect of sound, the observed.
This together with a human inside a moving train blowing its horn, the observer, and the monotonous sound of the horn, the observed. 3. A fly, the observer, and rotten stuff, the observed. This together with a human, another observer. 4. A Hindu, the observer, and the belief of reincarnation, the observed. This together with a Christian, the observer, and the belief of resurrection, the observed. In case 1, the scale is not a standard, say, it has lost its absoluteness. When I alter the length of the scale, the length of the straight line changes. A shorter scale means a longer line, and a longer scale means a shorter line. That is, when I alter the observer, the scale's length, then the observed, the length of the line, changes. By relativity all lengths are true because different observers mean different lengths. In case 2, the observer on the railway platform experiences Doppler's effect, but the one in the moving train experiences a single monotonous sound of the horn. By relativity the presence or absence of Doppler's effect is true to the respective observers. When the observer changes the phenomenon appears or disappears. In case 3, to the fly's biological program, the rotten stuff is attractive, and to the human biological program, it is offensive. How can the same stuff be both attractive and offensive? So the offense is not out there in the stuff but it is in here in the program. If I swap the human program for the fly program, the human finds the stuff attractive, and the fly finds it offensive. The program is subjective, so, the observers are subjective. But we can apply relativity here too. By relativity both offense and attraction are true. The fly "really" finds the stuff attractive, and the human "really" finds the stuff offensive. So both offense and attraction are real or true according to relativity. In case 4, one human programmed as a Hindu believes in reincarnation, which is the observed. 
And another human programmed as a Christian believes in resurrection, which is the observed. Now, the Hindu-program and the Christian-program are the observers, and the belief in reincarnation or resurrection is the observed. We swap the programs: the Hindu now reprogrammed as a Christian will believe in resurrection, and the Christian now reprogrammed as a Hindu will believe in reincarnation. The Hindu-program and the Christian-program are psychological programs, software programs. And when I swap the programs, the observed, say, belief in reincarnation changes to belief in resurrection. By relativity the Hindu-program "really" believes in reincarnation, and the Christian-program "really" believes in resurrection; therefore, both are real but the actuality is that these realities are illusions. In all four cases we can safely apply relativity, and we see that when we alter or change the observer, the observed also changes. Therefore, "the observer" is "the observed". This is true for all measurements in the classical world as well as the measurement problem or the measurement in the double slit experiment. 32. "The observer is the observed" J Krishnamurti 33. A physical device doing quantum measurements is composed of a linear system doing linear transformations on state waves (or signals), like beam-splitting polarizers. All of that in QM is modeled by linear observable matrices acting on state vectors or signals. As long as you don't do any detection, the process is linear, but once you start the detection process it's irreversible. Indeed, the detector is a device doing a completely non-linear process. First, it provides the modulus of the wave function, which cancels out the phase of the signals; this process is of course irreversible. Second, it usually applies a threshold to the modulus: detection/no detection, yes or no. 
Each measurement is like an answer to a question; it brings some information which is a function of its probability of occurrence, given by the modulus² of the wave function. I am not sure what you mean by reductionism; if the linear part of the quantum experiment has something to do with reductionism, I don't see why it shouldn't be the same for the detection devices. 34. continued. . . the 4 cases. . . Out of measurement after measurement, out of programming after programming, out of pattern forming and more and more complex pattern forming through ever increasing complexity of the observer or the program at various levels, the classical world emerged. . . going on this way, that is, recording, memorizing, and programming, life or the biological programs emerged. A stream of photons kick-starts photosynthesis quantum mechanically, and after many, many layers of complex programming an apple tree bears an apple. To reduce or shrink the response time to a challenge, psychological astuteness like thinking, ideation, and imagination emerged, so much so that while it might take centuries upon centuries for the human biological program, a hardware program, to mutate to be resistant to the polio virus, with the capacity of the psychological program, a software program, man discovers the polio vaccine and eradicates the virus. Time is always shrinking as evolution progresses. 35. Sometimes I get the impression that the measurement problem is an artifact associated with an overdependence on the Schrödinger equation -- first of all, it might be better interpreted in a statistical sense. But if we worked entirely in the framework of something more phenomenological, like Heisenberg/Dirac matrices, we would not be talking about a wave function "collapse". Measurement would be reduced to phenomenological proportions. I'm well aware of the difficulties of the matrix methods, and the level of abstraction required. 
But maybe that is the best approach, simply for those very reasons, of difficulty, and abstraction. 1. "(the Schrödinger equation) might be better interpreted in a statistical sense" You are right. In the Heisenberg picture the state vector is constant. And in that picture it would never occur to anybody to associate a matrix with an *individual* system - only with an ensemble. But apparently many people believe that the Schrödinger equation describes the evolution of an *individual* quantum system. The "deterministic" evolution applies only to the individual members of the ensemble. To arrive at a result, the ket (wave function) must always be combined with a bra, and a sum (trace) be taken over the entire ensemble. It would help a lot to say that every quantum system is described not by a wave function, but by a density matrix. A pure state is a very special case. 36. That there exists a probability distribution for the possible outcomes means that repeating the same experiment many times over will yield a frequency distribution that corresponds to that probability distribution. There is then a fluctuation around the expected normalized frequency distribution, and this tends to zero in the limit of an infinite number of measurements. One can then consider a hypothetical system that consists of an infinite number of copies of the original system and then define the observable for measuring the normalized frequency distribution. The frequency distribution will of course be found to be given by the Born rule with probability 1. This means that the state of the frequency distribution always corresponds to the Born rule and that this is an eigenstate of the observable. What this means is that the general Born rule follows from the special case of the Born rule that says that if a system is in an eigenstate of an observable then the system will be found with certainty in that eigenstate upon measurement of that observable. 37. 
Corrections: Here is easy way to understand -> Here is an easy way to understand splits into several parallel words -> splits into several parallel worlds 1. Thanks, I have fixed this. This is the actual copy of the transcript that I used for the video. I have read this out loud a dozen times and didn't notice these typos. 38. Sabine, I always thought that a big problem with Many Worlds was that say you take a spin 1/2 particle in a magnetic field, you can measure that state and you split into an up 'world' and a down world. However without the magnetic field - the degenerate case - you have an infinite set of possible wave functions, and thus an infinite number of worlds. Is that a faulty way of looking at Many Worlds? 1. David, Maybe, or maybe not. Whether anything in nature is truly continuous and/or infinite is somewhat questionable. Usually there's a way to turn infinity into "large but finite". In any case, however, I don't see how this is a problem for many worlds. 2. Well if rotations are continuous, then talking about a continuum of 'worlds' (actually universes) branching out of degenerate wave function collapses seems to stretch my imagination to breaking point - maybe that's my limitation. 39. It has always seemed to me the MWI is non-scientific, because it fails to predict anything at all about what we observe. It is tautological, "you see what you see." It is an empty "explanation", as much as saying a child being struck by lightning was "God's Will." It just isn't science, scientific knowledge has to limit something, either the range of things that did happen to produce what we presently observe (like astronomy or forensics or geology) or the range of what will happen given what we presently observe. MWI is incapable of doing that. Schrodinger's is at least capable of predictions, even if we don't understand the mechanisms. I am not in the "shut up and compute" camp, nor do I buy the Copenhagen interpretation. 
There may be some non-linearity yet to discover. But the answer isn't to throw out science (Schrodinger) for non-science (MWI). 1. Dr Castaldo, Your complaint does not make sense for the following reasons: 1) MWI is assuming nothing more than "Schroedinger evolution works for everything." Every other interpretation assumes something extra. You cannot claim that "Schroedinger's is at least capable of predictions" against MWI. 2) The essence of your argument is akin to complaining that the continuous possibilities given by Newtonian orbit theory are an empty explanation compared to Kepler's early Platonic-solid values of the planetary orbital radii. 40. Sabine (and Lawrence, since you mentioned it): The obvious solution is that there IS some non-linear factor involved, that isn't accounted for by the Schrödinger equation, and one candidate may be gravity. Perhaps it is not quantizable, and has acted as the non-linear "detector" since the whole universe was nothing but a dense quantum soup (if that was ever true). And as others have suggested (Penrose I think), the "collapse" is triggered by the gravitational behavior of masses in superposition; i.e. there is some threshold to be discovered at which massive superpositions become mutually exclusive. 1. Dr Castaldo, All such schemes are doomed because we have macroscopic systems put into quantum superpositions. 2. “... we have macroscopic systems put into quantum superpositions.” Do we? The biggest one I know is this here, and 2000 atoms I would not call macroscopic. If you now want to name things like BEC, SQUID, ... then first think about whether these are just a bunch of bosons (or Cooper pairs) sitting in the same (ground) state. This can indeed be a macroscopic number N of them, but they just form a huge product state (of tiny superpositions or entangled states) and not one huge entangled state or macroscopic superposition. 
You can see a macroscopic BEC precisely because N is so huge (N≈N-1) that it does not matter when one particle is measured. (By the way, this defines the chemical potential μ=dE/dN in the limit.) If it were a single huge entangled or superposed state, then one measurement would collapse the whole state, and this is not what happens. 3. Reimond, I doubt my pitiful cases would meet your challenge, but I am much more interested in learning from you what the difference is between "a huge product state (of tiny superpositions or entangled states)" and "one huge entangled state or macroscopic superposition". Obviously my own study into entanglement is lacking, and if you could help remedy that, I would be very thankful. Technically, I subscribe to decoherence (but not MWI), so I would not speak of collapsing whole states. I am not sure how we would experimentally determine the difference between a macroscopic entangled state being observed in part, vs. a huge product state being observed in part. If you were curious, I was originally thinking of things like polaritons, cavity QED, superconducting rings or superfluids, i.e. your BEC, and even something as simple as having a double-slit photon over an entire wall --- for the short time between the photon reaching the wall and the atoms in the wall decohering the single result out, there is a short timeframe whereby the universal wavefunction is a superposition of many different single-atom-absorption states. You need the superposition there only because only one atom actually gets to absorb. Needless to say, I am aware that this last argument is weak, and that better experiments that can give unambiguous results that we are observing macroscopic entangled states would be far better. But at least I know I am not talking pure nonsense. 
Anderson's More is Different paper stated that the ammonia molecule tends to be entangled, but before the 100-atom mark, the entanglement tends to get washed out, so if you have something like, say, a GRW idea, you need to make it collapse quite frequently. Yet, you already mentioned that 2000 atoms could be put into superposition, so the constraints on GRW are contradictory. That is sufficient for my initial assertion to be correct, even if it is possibly still not enough to rule out Penrose's gravitational collapse entirely. 4. B.F, the difference between product and entangled states is explained here. For SQUID, BEC and “The meaning of the wave function” please refer to chap. 21-4, 5, ... in here. 5. Reimond, That was super underwhelming. I am currently doing some quantum info, so I do know the basics regarding entanglement. I thought you would bring up something about entanglement witnesses, or some measure. Your link didn't work; Google covers the part. Also, Feynman lectures? I read that long ago as an undergrad. I am not sure why superconductivity isn't considered entangled. As in, sure, Feynman's argument that light's "Schroedinger wavefunction, the vector potential A" is observable because photons are "non-interacting Bosons" at least kind of makes sense. I worry about how they actually ought to have 4th-order and higher interaction terms, but I'll give him that. But for BCS, the Cooper pairs are literally entangled across space. I get that you call that a "huge product state of tiny entanglements", but entanglement is still an important property of the entire system. Not to mention that it really was many electrons and phonons interacting to produce this effect. I'll take some time to think more about how a huge product state of tiny entanglements is not entangled enough to be a huge entangled state. I mean, just writing it out, I am already inclined to agree with you. But do spend some time on my examples above. 
Not all of them require this. 41. The math/physics of the Other World Interpretation is far, far beyond most of us -- and especially me. Assuming that some form of OWI is "true", then from a purely nuts-and-bolts perspective, has anyone respected in physics theorized what mechanism can instantaneously generate infinite amounts of mass/energy an infinite number of times each nanosecond, and has done so for at least 13.8 billion years' worth of quantum events? 42. Recording, memorizing, and responding from that memory is the fundamental design pattern of nature, one that plays out so well at every level starting from the quantum level. In understanding the mind and how it works, we get an insight into not only evolution but also how nature herself works. "Thought is the response of memory. If you had no memory, no knowledge, you cannot think." J Krishnamurti. We now have experimental evidence, obtained while demonstrating "recording of copies" as posited by Quantum Darwinism, which tells us that, yes, recording is going on at the quantum level. This implies that the electron or photon in the double slit experiment, during the act of measurement or detection, must "record" its interaction with the detector apparatus. If we can experimentally demonstrate such a recording, then we can easily show that the electron or photon is programmed by the act of measurement, and therefore takes on a specific state and acquires a well-emphasized particulate nature. 43. I suppose these are obvious questions. Why doesn't creating all these new universes violate the conservation of matter and energy? Also, if these alternate realities are all around us, then why can't we observe them? They should also have a classical aspect because of the correspondence principle. The people living in these alternate realities are observing things around them. So how can all this extra matter be squeezed into the same space we are in? 
It can't remain as waves because then they would not observe their own realities. People might not call them alternate realities, but why should these branching probabilities not experience a reality of their own? For example, if Sabine in another reality did not fix her Premiere problem, then why can't she see this reality around her, where it was fixed? This branching has to be local because of the speed of light, so it can't be anywhere else. 44. Very good video, Sabine. Could you please make another one about Superdeterminism? I've tried to find information about it on the internet but there is so little to watch or read. Thank you!! Humpty Dumpty sat on a wall Humpty Dumpty had a great fall All the King's horses and all the King's men Couldn't put Humpty together again. The second law of thermodynamics is incompatible with reductionism. 1. No, it's not. The second law of thermodynamics is derivable by way of statistical mechanics from the underlying microscopic laws. 2. In the twentieth century, the seemingly insoluble nature of the significant problems of theoretical physics has lent credence to the paradigm that a sufficiently insane theory must be the solution. A regrettable new myth. For example, a theory that gets rid of reductionism, or of the uniqueness of our reality, or even of classical logic (and then we could say everything and its opposite). 3. Sabine wrote: I am sure that you are fully aware that for a microscopic system with a unitary time evolution, the entropy is constant. The point is that entropy is a measure of how much we know about the system; it depends on our perspective. I just published a blog post explaining this in the context of quantum mechanics. It seems that when I try to put a link, my comments get filtered, so just google “Goldilocksism” to find it. 4. Udi, Thank you, I know what entropy is. Sabine, obviously "same thing" regarding the probability to 100%. But isn't this just trivial? 
The important thing, as I understand it, is that the state of the detector is different. MWI claims continued superposition and thus unitarity holds. I may have missed your key point. 47. Hello Sabine, you made a very clear statement that Copenhagen is incompatible with reductionism. I tend to buy this immediately. On the other hand, we have nothing convincing going beyond Copenhagen. So far, the measurement problem is not understood at all. In my view, it is quite hopeless heading for a TOE before we really understand what's going on there. 48. Sabine, in your video, you are explaining your problem accepting MWI as a physical theory. In an earlier comment regarding your blogpost, Thomas Lindgren mentioned a Forbes article by Chad Orzel. He is writing about the bookkeeping problem with MWI. It seems to me that his arguments are quite similar to yours. Am I right with this assumption? Taking him seriously, his arguments reveal also some limitations of the reductionist approach. Perhaps you have not read the article before. So I will post the link here again for your convenience. 1. No, I do not in this video explain why MWI is not a physical theory. I explain why, in contrast to what is often stated, it does not solve the measurement problem. This does not make it unphysical. 2. Sorry, of course you are right. My statement was not exact enough! Anyway, you did not answer my question! 3. Sabine, it is not your explanation in the video that makes MWI unphysical. It's the lack of measurable consequences that makes it unphysical. It's the same with the assumption of free will. It seems to make no difference if the basic physical theory is of probabilistic, deterministic, super-deterministic or whatever nature. Nobody can explain so far what the observable consequences of applying these different assumptions would be. It seems to me that "freedom of will" does not depend too much on the very details considered within fundamental physics. 
49. But the 2nd equation you are talking about is simply |a|²; this is of course a non-linear operation where you are losing the phase. In any quantum experiment you have 2 parts: 1. a linear part where an observable M operates: if |Ψ⟩ = (1/√2)(|0⟩+|1⟩) then M|Ψ⟩ = (1/√2)(M|0⟩+M|1⟩); 2. the detection part. Say for the sake of simplicity that |0⟩, |1⟩ are eigenstates of M with eigenvalues m0 and m1, and |Ψ⟩ = (1/√2)[exp(jφ0)|0⟩ + exp(jφ1)|1⟩]. In this case, detection gives the signal power shared on the 2 outputs as |⟨0|Ψ⟩|² = 1/2, associated with the measure m0, and |⟨1|Ψ⟩|² = 1/2, associated with the measure m1. Then the mean value of the measure is (m0+m1)/2; the Copenhagen interpretation is rather consistent. In the meantime you are able to evaluate the gain of information from an a priori model to the a posteriori measurements, in this case either m0 or m1. The gain of info in this case is log(2), or 1 bit. 2 remarks: First, I agree of course, detection is a non-linear process; you cannot retrieve the initial state |Ψ⟩, you have obviously lost the phases of the signals. Since you are doing detection it's irreversible. Everything is okay! But let's go into detail: the detection process provides real detection bips 0/1 that can be stored for further processing, leading to a measure with physical units. The detection process is the sole physical means to access the reality of a quantum experiment. State vectors don't have any physical units; they can't be stored on computers for further processing without detection; they have no reality by themselves; so it is irrelevant to talk of information associated with the state itself. Only measures following detection carry physical information. BTW, here is the major flaw of the Bell rationale leading to the supposed Bell paradox. In the Bell rationale it is supposed that there are detections ±1, bits of reality, where there are none. 
2nd, these detection bips 0/1 depend of course on real microscopic events whose a priori probabilities are given by the components of the state vector, which is an a priori mathematical model of the reality. 1. Hi Fred, If someone is able to provide unquestionable evidence that reductionism breaks down, he would surely be a candidate for the upcoming Nobel prize. I am not expecting that Sabine would publish this knowledge in one of her blogposts! 2. @Fred Harmand: You have some of this right. The measurement apparatus couples or entangles with the system. We may take this a bit further and look at the two slit experiment. We have two slits aligned vertically at y = 0 and y = d. A quantum wave approaches this along the x direction. The wave has two eigenstates for entering the slit at y = 0 and y = d which we write as ψ = A(e^{ikx} + e^{ikx'}) for x' = √(x^2 + d^2). Now compute the modulus square of this wave to find the probabilities for the particle in the x slit or the x' slit: ψ^*ψ = A^2(2 + e^{ik(x – x')} + e^{ik(x' – x)}) = 2A^2(1 + cos(k(x – x'))). This result gives a wave pattern, from the cross term in the multiplication, which in an ensemble of experiments is observed. Now consider a spin state at one of the slits, say the x slit, which is done to try to find which slit the particle really went through. These two states are given by the eigenvalues of the σ_z matrix with the states represented as |+) and |-). These states are orthogonal so (+|-) = (-|+) = 0. These states become entangled with the two-slit wave function so that ψ → A(e^{ikx}|+) + e^{ikx'}|-)) Now if we compute the modulus square we get ψ^*ψ = 2A^2 = 1, and the cross term disappears because of the orthogonality of the spin or needle states. This is pretty much what is expected and is what experimentally is found. What the MWI maven is going to say is that the world split into two worlds according to the |±) needle states. 
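The two-slit bookkeeping in the calculation above can be checked numerically. The sketch below is illustrative only (the grid, k, and d values are arbitrary choices, not from the comment): without a which-path tag the cross term produces an oscillating interference pattern; tagging each slit with orthogonal "needle" states makes the pattern flat.

```python
import numpy as np

k, d = 2.0 * np.pi, 3.0            # wavenumber and slit separation (arbitrary)
x = np.linspace(0.1, 20.0, 200)    # screen positions
xp = np.sqrt(x**2 + d**2)          # path coordinate for the second slit
A = 1.0 / np.sqrt(2.0)

# Untagged two-slit wave: the cross term survives and oscillates.
psi = A * (np.exp(1j * k * x) + np.exp(1j * k * xp))
intensity = np.abs(psi) ** 2
assert np.allclose(intensity, 2 * A**2 * (1 + np.cos(k * (x - xp))))

# Tag each slit with orthogonal needle states |+>, |->:
plus = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])
psi_tagged = A * (np.exp(1j * k * x)[:, None] * plus
                  + np.exp(1j * k * xp)[:, None] * minus)
intensity_tagged = np.sum(np.abs(psi_tagged) ** 2, axis=1)

# Orthogonality kills the cross term: flat intensity, no interference.
assert np.allclose(intensity_tagged, 1.0)
```

The second assertion is exactly the decoherence statement in the comment: summing over the orthogonal needle states removes the interference term from an otherwise identical amplitude.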
In both of these split worlds the observer witnesses the periodic structure in an ensemble of experiments disappear when she tries to measure where the particle is. The CI upholder will say instead that there is this entanglement, and if the needle state is entangled with other states for larger systems, then the quantum state of the world is absorbed into an entanglement of an einselected state that is stable. In other words the classical world is an entanglement. Which is correct? It is not really easy to say. Laser coherent states are classical-like states with a symplectic structure. So the MWI panegyrist will say there is this underlying set of over-complete coherent states that defines the classical world, though it is still quantum mechanical. The CI defender will say the classical world is like any entangled state, where the underlying quantum numbers effectively do not exist. For two spin states in an entanglement as a Bell state, the degrees of freedom for the constituents are replaced by those of the entangled state --- the spins no longer really exist! So the CI side would say the classical world emerges from the einselected stable quantum states, and the quantum-ness of the system no longer exists, or at least cannot be observed. Much to think about. 50. What I think is most salient about what Sabine is trying to say is that with a measurement there is some sort of nonlinearity that sets into the system. Classical nonlinear systems can exhibit chaotic dynamics. For a large measurement system with many quantum numbers or atoms, say a mole of them, the de Broglie wavelength is λ = h/p, where in a relativistic setting p_0 = mc, and it is clear the wavelength is nearly zero. This means the frequency, with νλ = c, is ν → ∞. So this happens on some time scale that is much shorter than the time scales of the system. For large quantum numbers the system is thought to converge to a classical system as N → ∞. 
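As a quick order-of-magnitude check of the claim about mole-scale measurement devices, one can plug a gram-scale mass into λ = h/p with p = mc (the 1 g figure is an illustrative choice, not from the comment):

```python
# lambda = h/p with p = m*c for a roughly mole-sized (gram-scale) mass.
h = 6.626e-34    # Planck constant, J*s
c = 3.0e8        # speed of light, m/s
m = 1.0e-3       # ~1 gram, about a mole of nucleons, kg (illustrative)

lam = h / (m * c)    # wavelength, m
nu = c / lam         # associated frequency, Hz
print(f"wavelength ~ {lam:.1e} m")    # ~2.2e-39 m
print(f"frequency  ~ {nu:.1e} Hz")    # ~1.4e47 Hz
```

A wavelength some 20 orders of magnitude below the Planck length, with a correspondingly enormous frequency, is the sense in which "the wavelength is nearly zero" for any macroscopic apparatus.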
Let me write the Schrödinger equation as iψ_t = Hψ. The wave function though is a sequence of perturbed functions I write as Ψ = ψ + Σ_{n=1}^∞ ε^n φ_n. I am thinking of this as a singular perturbation. The reason is that wave function collapses are almost instantaneous. They tend to occur on time scales far smaller than the periods of the system. I will also consider the Hamiltonian as having a perturbing part, so that H → H + εK. Also consider the time evolution as ∂_t → ∂_t + εδ/δt to account for the two time scales, or the two conjugate energy scales and their evolutions. Let me try an example where I just consider n = 1 in this series and I let φ_1 = |ψ|^2. I then get two differential equations: iψ_t = Hψ at O(1), and i(ψ^*ψ_t + ψ^*_tψ) + iδψ/δt = Kψ + H|ψ|^2 at O(ε). By conservation of probability iψ^*ψ_t + iψ^*_tψ = 0 and we are left with a nonlinear differential wave equation. It is not hard to see this O(ε) equation bears some similarities to the logistic equation of chaos theory. This would say the quantum wave on a longer time scale has this tiny perturbed part that obeys chaotic dynamics. With a little more creativity this can be made into the nonlinear Schrödinger equation. That is a soliton equation. This would mean we have a quantum wave on one time scale that is perturbed by a small soliton wave on a much shorter time scale. By playing with different singular perturbation models it is possible to have various models of a quantum system perturbed by a set of short time scale perturbations. For many nonlinear systems there is a violation of unitarity as well. Zurek makes this point in his paper arXiv:1807.02092v1 [quant-ph] 5 Jul 2018. The einselected states are those that are stable under these singular perturbations on a tiny time scale. Other states are not stable and they result in this collapse. 
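The logistic-equation analogy can at least be illustrated directly. A minimal sketch (the discrete logistic map at r = 4 is a standard chaotic parameter choice, not something derived from the comment's perturbation scheme) shows the sensitivity to initial conditions that chaotic dynamics implies:

```python
def logistic_orbit(x0, r=4.0, n=40):
    """Iterate the logistic map x -> r*x*(1-x), returning the orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.300000)
b = logistic_orbit(0.300001)   # same orbit, initial condition shifted by 1e-6

# Early on the orbits track each other closely; after a few dozen
# iterations they have fully decorrelated despite the tiny perturbation.
print(abs(a[1] - b[1]))                                 # still of order 1e-6
print(max(abs(u - v) for u, v in zip(a[30:], b[30:])))  # order-one separation
```

This exponential amplification of tiny differences is the behavior the comment has in mind when it suggests a small chaotic perturbation could produce effectively abrupt, unpredictable adjustments of the wave function.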
We may think of this as a lowering of entropy of a system in the case such a collapse results in the emission of a mixed state boson, which dumps entropy into the environment. I tend to think there are connections between gravitation and wave function collapse. The similarities between the nonconservation of Tr(ρ^2) with decoherence and Hawking radiation have always made me suspect a connection. We can perform a Cavendish experiment on masses in the kilogram scale, and most measurement apparatus are on the scale of grams on up. So it is not unreasonable to say some superposition of spacetime is established in a measurement that is nonlinear and not quantum mechanically stable. It is not an einselected state that is stable against environmental perturbation or quantum noise. This quantum noise then has nonlinearities, which we should not be too surprised by with gravitation, that abruptly adjust the wave function. To carry this further, I would argue that if we have conservation of qubits then there is some gravitational response. This might be in the form of gravitons or very weak gravity waves. If there is a superposition of a needle state in a measurement apparatus that has a growing superposition of spacetime metrics that is not einselected or stable, the collapse should then produce gravitons. 51. Given a wavefunction, how does one read off the many worlds from it? E.g., no one would read two worlds in the wave function of a single particle - a |spin up> + b |spin down>. But we are supposed to detect two worlds in a |spin up> |detector indicates up> + b |spin down> |detector indicates down>. General challenge to MWIers: given a many-particle wave function, count how many worlds it represents. 1. Note that since time evolution can be interpreted as a change of basis of the present state, all the future worlds will also be present in that state. 2. The number of worlds equals the number of eigenstates that are in a superposition. 
So if you have spin up/down states in a superposition ψ = (1/√2)(|up⟩ + |down⟩), the entire world splits into two upon a measurement. The probability for each of these is p_up = ½ and p_dn = ½, and |ψ|^2 = ½ + ½ = 1. Now there is some confusion people have that if the world splits there is a violation of conservation of mass and energy. However, each of those two branches has globally a ½ probability, so from this "bird's eye" perspective there is no violation. From the perspective of the "frog's eye" that is carried along one of these branches, in a sort of Hilbert space frame dragging, the probability is reset to unity. That gets to the issue Sabine raises on whether MWI really solves the measurement problem. It does not tell us how an observer is quantum frame dragged along one branch, or at least that is the phenomenon observed. Within the observable universe there are some 10^{80} elementary particles. There are maybe around 10^{20} wave function collapses occurring for each of these per second. This is certainly the case for particles inside stars and other thermal bodies. I can't say much about dark matter. So this means the observable universe may split 10^{100} times every second on the Hubble frame. The observable universe may only account for 1 in 10,000 or so of the universe out to what is causally accessible with a z redshift from the Planck scale. Now consider the multiverse prospect. MWI is nifty in some ways, but I have never been compelled to "drink the MWI Kool-Aid." 3. First: so the claim is that the initial a |spin up> + b |spin down> particle already exists in two worlds? Second: if I measure spin, the world splits into two, but if I measure momentum, the world splits into a continuous infinity of worlds? "The number of worlds equals the number of eigenstates that are in a superposition" cannot be right. 4. Lawrence Crowell writes "The number of worlds equals the number of eigenstates that are in a superposition." 
If so: The dimension of the Hilbert space describing the universe, and hence the number of eigenvectors/eigenstates, does not change. So this MWI branching is an illusion; the number of worlds cannot change and is given by the initial wave function of the universe. 52. I agree that to make predictions in MWI one needs something more than Schrödinger's equation, but not that much more. Namely, it follows from MWI that the Born rule will be satisfied in "most" branches of the multiverse, where "most" means "outside of an exceptional set with vanishingly small sum of squared amplitudes". To extract a prediction from this, one has to disregard this tiny exceptional corner of the universal wavefunction. Yes, it is an additional assumption, but it seems rather benign and natural. 1. The assumption one needs to make here is that measuring the eigenstate of an observable will yield that eigenstate with certainty. This special case of the Born rule implies the general case. Disregarding the "exceptional corner" shouldn't be a problem as probabilities only become rigorously defined by the normalized frequencies in the limit of an infinite number of measurements, and in that limit the probability of deviations from the Born rule goes to zero. 2. Pascal wrote: There is no need to make any such assumption in MWI, and luckily so. Such an assumption would be far from benign. I am not aware of any way to introduce such a cut-off in a consistent way. 53. Different people in "this World" can also be interpreted as MWI copies that split off a long time ago (around or before the time we were born). The inverse time evolution of the multiverse will lead to merging of copies. Is it then true that two arbitrary people will always merge under an inverse time evolution? This has to be true because we grew out of a fertilized egg that didn't have a brain. The difference between what any two people are aware of will thus get smaller as we turn the clock back until it completely vanishes.
So, we start out with zero awareness and we gradually accumulate information. We thus branch out becoming all the conscious agents in the entire multiverse, including dinosaurs here on Earth, strange aliens in far away galaxies, intelligent AIs and also the persons posting on this blog. This then means that different people in "this world" are actually the same persons in different worlds. 54. Hi Sabine, Another (more amusing) problem with the Many Worlds Interpretation is that if we were to take the theory seriously, then we need to face the possibility that our own universe may not have originated 13.8 billion years ago in a Big Bang. No, because if the Many Worlds theory is true, then we may owe our existence to a “branching” that might have occurred - (perhaps a mere 10 minutes ago) - due to the quantum events that took place in the methane from a bear farting in the woods in an alternate universe. In which case, we are not here as the result of a “Big Bang,” but from a “tiny toot.” (I call it – “the tiny-toot theory”) 55. All good physicists love those linear theories. Quantum Mechanics is an explicitly non-linear theory. The Many Worlds Theory only looks at the linear part. So claiming that the MWT is quantum mechanics is false. 56. Dear Prof. Hossenfelder, "Measurement is a non-linear process." If there is clear experimental evidence backing up this statement then the MWI cannot explain quantum measurements, agreed. But the gedanken-experiment you offer does not seem enough as it might be missing some of the required finer details. Thank you for your reply and your patience in trying to communicate your knowledge. 1. Ripi: Every experiment that results in a detector eigenstate for three different prepared states Psi_1, Psi_2 and Psi_1+Psi_2 (appropriately normalized) is the evidence you ask for. It's been done millions of times. How can I possibly make this any clearer. Look up any textbook on the measurement postulate. It's a normalized projection operator. 
It is not linear! 57. Does this make any sense? 58. "Pmer, The idea behind MWI is that there is one Universal WaveFunction (Psi) which follows the Schrodinger equation. The many worlds arise because Psi gives alternative possible outcomes, which are sometimes measured. In order not to introduce some theory as to why a specific measurement has occurred, the MWI just says that all Psi-possible outcomes have been measured. Thus Psi (and Schrodinger) live in the Multiverse and not in any Universe." I don't think I quite believe that. What if you have two different universes (worlds) that are evolving in completely different--and incompatible?--bases? 59. Hi Sabine I cannot agree with your conclusion about the MWI. As far as I know, the interpretation has been revitalized in the last 20 years and rendered more "plausible": 1. The wavefunction evolving according to the Schrödinger equation is ontologically real and so are the splitting worlds (PBR Theorem) 2. Branching or splitting into parallel worlds plus observers is caused by irreversible decoherence. The decoherence approach enters the global picture and there is objectively no collapse. 3. Since we as observer experience only one specific world or reality out of many, the wave function collapse on our branch can therefore only be a subjective illusion. Moreover, it seems that the MWI can be made compatible with the Born rule. Sean Carroll and Charles Sebens have introduced the self location uncertainty (SLU) before any measurement in any branch is done. Carroll calls the revised theory "Everettian Quantum Mechanics" (EQM). However, MWI may not entirely solve the measurement problem for a specific observer, but evades it due to the construction of the theory. The real problems of MWI are: apparently no freedom of choice of measurement and no free will (but this could be countered by incorporating the anthropic selection principle). 
Feynman once said that MWI was the only way he could think to resolve difficulties like the measurement problem in quantum mechanics, but that he didn't like it. This is my opinion, too. 1. rhkail, Look, if you cannot find a fault in my argument, you have to accept the conclusion. That's how science works. You can't just come here and disagree with the conclusion without even looking at my argument; that doesn't make any sense whatsoever. 2. Sabine, We could make that more precise: "That's how GOOD science works." Because a big part of speculative physics doesn't seem to obey this basic principle. 60. "But first, a brief summary of what the many worlds interpretation says." She lost me after that. 61. Interestingly, if dogs can always catch a frisbee then what is time to a dog? What is time to a mite? By relativity both the times are true. Time to a dog is as much "real" as time to a mite or a human. How can the same movement present two different times, that is, two different realities? Therefore, the dog-biological-program or the mite-biological-program dictates dog-time or mite-time respectively. The program or the observer dictates reality. Then what is actuality? The movement is the actuality, but how it is interpreted is the virtue of the program or the observer, that is, the human-program or the dog-program. There is one actuality, movement, but many realities, and each reality is a description of the movement based on the observer or the biological program. But, the bat-program does not see movement, rather it hears movement. So, what is time to a bat? You introduce time when you notice change. You cannot notice change if you did not register or record the previous instant as an image that you use as a reference to look at the present image. The present image when registered or recorded as memory becomes the reference to the future image. In this way, in memory, you have a series of recorded images. When you connect or link these images, in the act of thinking, time is born.
If the images match or if the same set of different images repeat, you notice a pattern; if they don't, you notice change. When you notice a pattern or change, you introduce time. You also sense an interval between the images when you notice change; if not for this interval, this space between two recorded images, you couldn't tell them apart. This interval or space between two similar images that is recorded as different positions helps discern change as change in position, and therefore you notice movement. Time is in the interval, and time is in the movement as much as time is in the memory or the recording. This interval or space may be different for a dog when compared to a human. The dog-program dictates how the dog senses this interval or space. This sensing in turn dictates what is movement and time to a dog. The program or the observer dictates reality. 62. I'd appreciate, Sabine, a clear piece about what you see as the flaws in Sean Carroll's reasoning. 63. The detectors are copies in MWI, and the possibility associated with each copy is unique. MWI starts off with "Up" associated with copy A, and "down" associated with copy B. Now, both Copy A and Copy B are copies of the same thing. That same thing is the observer-program, which means that copy A and copy B are running the same program. But the program can detect only "Up". In the other branch too, it will detect only "Up". So, what happens to the "Down" possibility? Is it a hidden variable or does the possibility simply disappear? These are questions the PWT and Copenhagen interpretation ask. We are back to square one. 64. Dear Prof. Hossenfelder, "Look up any textbook on the measurement postulate. It's a normalized projection operator. It is not linear!" So your argument goes: - The measurement postulate is a non-linear normalized projection operator. - In the MWI the evolution is always linear. - Therefore there is a problem with the MWI.
But the whole point of the MWI is to reject the measurement postulate, isn't it? Please forgive me if I misunderstood. This is all very confusing; I do not intend to distort your argument. Thanks again for your reply. 1. Ripi, The reason we have a measurement postulate is that it is necessary to describe what we observe. MWI people claim they can describe what we observe without the measurement postulate. I am pointing out that this only works because they bring in an assumption that's equivalent to the measurement postulate (equivalent in terms of observable consequences); therefore MWI is not any simpler than other interpretations of QM. Which, as I said above, is obvious if you think of it from a purely axiomatic perspective. If you could derive the measurement process from the Schroedinger equation in MWI, you could do it in any interpretation. 2. In at least some versions of the MWI, the measurement postulate is replaced by assumptions about some kind of measure on observers in the wave function that gives the Born rule. You're saying that the MWI isn't any better than the Copenhagen interpretation because they both require additional assumptions. But shouldn't you actually compare these additional assumptions, and decide which one is simpler, more intuitive, or easier to work with? Comparing the number of additional assumptions (one in each case) and concluding that they are equally good interpretations is not a logically sound procedure. Look at the Bohm pilot-wave interpretation. I would claim that the additional mechanism is much more unwieldy and difficult to calculate with than the Copenhagen, and thus that, at least for the purpose of performing calculations, the Copenhagen interpretation is much better than the Bohm pilot-wave interpretation. But couldn't you argue that it also has just one additional assumption, and so is equally simple? 3.
Peter, I am saying that since they both use the same number of (logically equivalent) assumptions neither is any simpler than the other, in contrast to what MWI defenders claim. 4. I'm saying that simply counting assumptions is an incredibly bad measure of simplicity. I won't argue with you about whether MWI is indeed simpler than Copenhagen, but I do strongly disagree with this methodology for measuring "simplicity". 65. One can make a dead end street longer, wider, higher, paint a marvelous panorama on the wall at the end of it, but it stays a dead end street. Thinking in terms of technical specifications instead of thinking in functional specs in the first place, is not the right methodology. It's typical for do-oriented people, not for process-oriented ones. Space, seen in terms of technical specs, is l x b x h. L x b x h don't do anything. What is space in functional terms? 66. Is the MWI layer only spacelike, or what? How does it handle the relativity of simultaneity? Where does the effect for a pure guess come from? Interaction from another world? The whole concept of MWI is internally contradictory. 1. You are raising one interesting issue. It is in part an aspect of QM in general. If I make a measurement of an entangled state here, the measured entangled state I find determines the other state somewhere else. However, there is no information communicated along a spatial surface. One way to see this is that the quantum numbers of the entangled state appear in a measurement as another pair of quantum numbers, but this is just a subjective shift. Nothing is communicated; there are no transmissions of qubits or information. With the Copenhagen interpretation (CI) and most others the wave function collapse is due to a local process.
I can say with my detector the event happened “here.” There is still this nonlocality that a superposition or entanglement is reduced everywhere, but the localization of where a particle is, is in the small space of my detector, or a spot on a CCD pixel. With the Many Worlds interpretation (MWI) my device makes the measurement, but where the splitting happens is ambiguous. Nonlocality means the splitting of the wave function happens potentially in all spacetime. This is made rather apparent with the Wheeler Delayed Choice Experiment (WDCE). Here a measurement of quantum waves after having passed through a double slit collapses the wave to appearing at a slit after the wave has passed through. So this reduction is not just nonlocal in space, but in time as well. With WDCE there is no way to assign a probability to branches in this splitting, say if the density matrix or probabilities are evolving. This nonlocality is interesting for quantum gravitation, which is a quantum field that, because it involves the dynamics of space, can't have the locality conditions imposed on other quantum fields. Then in MWI there is no unique localization of where probability eigen-branching occurs, which in a curious sense means with a spatial interval for branching D → ∞ there is in the dual momentum-energy perspective k → 0 for k the reciprocal length and momentum p = ħk. However, in CI there is a rapid adjustment of a wave function that is tightly localized so D → 0 and k → ∞. From a phenomenological perspective an observer in the MWI setting also appears to localize a wave and the conditions D → 0 and k → ∞ are found. This is seen in all the analysis of resetting the measured wave probability to unity or one and so forth. So MWI does offer something interesting here in the way of a sort of duality involved with how branching occurs with the evolution of probabilities p_i = ψ_i*ψ_i. It is a duality that carries with it the Fourier transform nature of QM.
With CI this is not as apparent, though we have wave function reduction that occurs nonlocally. With MWI this involves on a certain level a splitting of spatial surfaces and implies aspects of the quantization of space or spacetime. A part of what this is wrapped in is the subjective shift in quantum numbers and measured quantities. I alluded to this in the first paragraph. This has some elements of QBism in it. This is the ultimate ψ-epistemic interpretation, which ultimately places the subject (experimenter, apparatus, etc.) as primary with no objective reality to anything quantum outside of observation. Of course most statements in the world have a relationship between subject and object, or a predicate with reference to object that becomes a sentence with the inclusion of a subject. QBism, however, has an almost Gödel numbering aspect to it, for if all that exists is a subject that makes Bayesian updates, then predicates involve ultimately the subject, with subjective outcomes, and the subject is then the predicate as well. This has a sense similar to Willard V. O. Quine's “Is false when appended by itself, 'Is false when appended by itself, ''Is false ...” and so on. This odd statement has a self-referential quirkiness and this leads in some ways into quantum physics being self-referential. A quantum measurement involves quantum states that encode Gödel numbering of quantum states. 2. continued due to 4096 character limit: CI is ψ-epistemic, though not on the steroids QBism is on. MWI is ψ-ontological and is preferred by physicists who dislike the idea that QM ultimately does not have any reference to something objective “out there” as physical reality. With CI the collapse of wave functions appears to demolish quantum information, but qubit conservation I think can be maintained if we include quantum gravitation and identify nonlocal spacetime with large N entanglements.
MWI does not appear to demolish qubits, but an observer is left with a phenomenology not that different from CI and appeals to this global perspective of branching. However, all observations are local. So in the end we can, I think, shift around between quantum interpretations and think of things accordingly. As Penrose said, there are more ideas about quantum foundations than there are physicists, because many of them switch their perspective. 3. Thank you for your detailed view. Is there any measurement-specific entanglement other than dual antipodality? I would see it possible for the world to be divided into two opposing causalities, of which measurement always chooses either. Hence we could preserve full physicality and causal information but in QM handedness independent logics... 67. Hi Sabine I'm sorry about the misunderstanding in my previous post. I have looked again at your arguments concerning the measurement postulate and I revised my conclusion as follows. Under the pure theoretical aspect in MWI my comment might be valid. But considering the practicality of any measurement at any detector (i.e., updating the probabilities to 100%) the measurement problem reappears and your statement is correct. 68. Hello Sabine! Pardon me but I notice a problem in your rationale. This is very important because, IMHO, this leads to contradiction and moreover to misinterpretation of the EPR experiment, for example. "This means that you have to update your probability and with it the wave-function.... the wave-function collapse." Here is the problem: you are talking of particle presence before making any measurement, but a measurement supposes a detector, and if your wave function goes through a detector the wave function or the state doesn't exist any longer; there is no need to update a state and its probability.
The other way around, if you are talking about updating a state, it means you have not done any detection; then you cannot talk of probabilities of particle detection, you just have a new state, not even a measurement. Here comes the contradiction; you are saying: "you have a wave-function for a particle that goes right with 100% probability. Then you will measure it right with 100% probability. No mystery here. Likewise, if you have a particle that just goes left, you will measure it left with 100% probability." The 2 experiments you quote are completely different following what I said above. In the 1st one, you have a detector on each branch, and it is true that your particle must be either right or left, each with a certain probability. This is the particle detection logic. In the 2nd one, you recombine 2 existing states (not particles) from the 2 outputs of each branch, left & right, without any detection; it is important to keep your wave function; then downstream you can go through detectors right & left. This is a completely different experiment and the associated probabilities of detection will be different accordingly. Let me insist that the position of your detector in the wave function processing chain is rather crucial, and talking of particle detection probabilities upstream of a detection process is meaningless; worse, it leads to contradictions. Upstream of the detector it's a linear state model of the experiment, and downstream there is no wave function anymore but only real probabilities of detection of real particles. We must conclude that we must pay attention not to mix real detected-particle concepts with the linear Hilbert state vector model of the experiment or wave function in the same manner, without facing contradictions. This is the only way to avoid contradiction and above all to have CI compatible with causal relativity in EPR experiments. 69.
Sabine, You seem resolutely determined to treat the measurement apparatus and observer as "outside" of the system, rather than treating them all as part of a combined quantum system. You state that the measurement process is nonlinear because a definite outcome is produced, but how could it possibly *appear* otherwise? What region of Hilbert space for the combined system (including observer and measurement apparatus) corresponds to the observer perceiving a non-definite measurement after decoherence has occurred? To simplify things, let's replace the human observer with a computer system programmed to respond in some way to the measurement. Follow the evolution of the wave function for the combined system through the measurement and for some time beyond. Once decoherence has occurred, what do you expect it to look like? Is there any point or subspace of the Hilbert space for this combined system that corresponds to the computer registering an indefinite measurement? To simplify even further, and make things more concrete, pick any quantum circuit you like using only "classical" gates (e.g. no Hadamard operators, and an input state corresponding to a definite bit pattern produces an output state corresponding to a definite bit pattern). Let this circuit take the measurement as input and produce some output. What are you going to get? A superposition of states each corresponding to a definite measurement and definite output. Now let's do the same for a really large, complex quantum circuit that implements an artificial intelligence. Again, the result will be a superposition of states each corresponding to the AI perceiving a definite measurement. 1. Kevin, No, the opposite is the case. I am saying the detector is made of the same stuff as the prepared state, hence they should be describable in the same way. 70. 
"Instead, many worlds people say, every time you make a measurement, the universe splits into several parallel worlds, one for each possible measurement outcome. This universe splitting is also sometimes called branching." This is nonsense! In my understanding, if you have a wavefunction of a system, with different probable outcomes by measurement, you have the branching without (maybe before) measurement. It will be fully deterministic what you will measure in your own branch, but you will not know before, only a probability. There is no measurement problem in this many worlds theory. 1. Arcturus, The measurement causes decoherence and hence results in the different branches becoming noticeably different. It is correct that you also have a large number of different "realities" before and without measurement, but it's not what people normally refer to as "branching". Instead, they use the word to refer to something, vaguely speaking, macroscopic. As I emphasize in my video, that's a matter of definition. I would appreciate it if you would pay some more attention to what I say before proclaiming that I am talking nonsense. That's wrong and clearly demonstrates you didn't understand what I say. Write down a definition for "what you will measure" by using the Schrödinger equation only, and you will hopefully see what I am talking about. 71. I think a lot of the problem is that we tend to confuse interaction or entanglement with measurement or observation. When we talk about measurement or observation, we are usually talking about classical phenomena with thousands, millions and even more particles and their wavefunctions involved. At the human scale, detecting a photon involves inducing a cascade of effects to produce an electrical signal, whether as the flow of electrons in a wire or as part of a chemical cascade along an axon. A single cyanide molecule won't kill a cat. It would take millions of cyanide molecules to make a cat short of breath.
When that many particles are involved, the statistics are aggregated to classical. arXiv had a paper on a "Wigner's friend" experiment which examined the statistics of a system with multiple quantum scale "observers". Preliminary results indicate that quantum Wigner's observations and his little friend's observations do not need to be consistent. Quantum scale measurement doesn't mean decoherence. It just means entanglement. It's only when too much stuff gets entangled that the quantum statistics turn into classical statistics. It's like the way a Poisson distribution, repeated, turns into a Gaussian. 72. The Psychological Observer: Programmed as a Hindu, the observer believes in reincarnation. Programmed as a Christian, the observer believes in resurrection. The observer is the Hindu-Program or the Christian-Program. Belief is the product of the program. If it is the Hindu-Program, then the belief is reincarnation, and if it is the Christian-Program, then the belief is resurrection. How do we know? Replace the Hindu-Program with the Christian-Program--conversion--then the Hindu now reprogrammed as a Christian believes in resurrection. The replacement betrays the underlying program. The program is the observer. With the Hindu-Program running, how do I look at a Muslim? The Muslim looks offensive or suspicious. Replace the Hindu-Program with the Muslim-Program, then the Hindu now reprogrammed as a Muslim develops a great affinity for the Muslim. Again, the replacement betrays the underlying program, which influences how I look at a human being, that is, whether I take offense or a great liking for a human being. Clearly, the observer is influencing the reaction or the measurement, which is how I look at, discriminate or classify a human being. The offense or the attraction is in the observer or the program and not out there in the human being. The observer is the offense or the attraction.
The observer is the observed because if the program stalls or stops running the offense or the attraction disappears, right. Then there is only the human being not the Hindu or the Christian or the Muslim. How does the observer come into being? The moment one is born, one is conditioned or programmed by culture, tradition, orthodoxy, religion, family, caste, nationality, political ideology, linguistic patriotism, superstitions, rituals, dogmas, male chauvinism, feminism etc. The observer is the result of this conditioning or programming. And the observer in turn influences one's likes, dislikes, hate, offense, attraction, affinity, attitude, outlook, worldview, how one sees, how one listens, how one feels etc. The observer comes into being or is born out of conditioning or programming. Brainwashing or indoctrination by religion or political parties or communism or any other ism including patriotism is conditioning or programming. We all know how indoctrination terribly influences the world view of a terrorist and prepares him to kill or get killed. After all, Hinduism is a program as much as Christianity and Islam are. After all, political or linguistic affinity, fanaticism, affiliation or identity is as much a program as religion and caste are. After all, patriarchy and male chauvinism are as much a program as feminism is. Now how do you receive these statements or listen to these statements? Is there the influence of the observer? Is the program running? Observation is when the observer is not. When there is observation then there is an opportunity to see what is as it is. Observation does not guarantee discovery but there can be no discovery without observation. Comment moderation on this blog is turned on.
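The non-linearity claim at the center of this thread, that the textbook measurement update (project onto the outcome, then renormalize) cannot be a linear map, can be checked directly. Below is a minimal numerical sketch, assuming a two-state (spin up/down) system; the `measure_update` helper is an illustrative name of my own, not anything quoted from the comments:

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

def measure_update(psi, outcome=up):
    """Textbook measurement update: project psi onto the outcome
    eigenstate, then renormalize to unit length (real amplitudes
    assumed here for simplicity)."""
    proj = outcome * np.dot(outcome, psi)
    return proj / np.linalg.norm(proj)

psi1 = (up + down) / np.sqrt(2)
psi2 = (up - down) / np.sqrt(2)

# A linear map L must satisfy L(psi1 + psi2) = L(psi1) + L(psi2).
lhs = measure_update(psi1 + psi2)
rhs = measure_update(psi1) + measure_update(psi2)
print(np.allclose(lhs, rhs))  # False: the update is not additive

# Normalization also destroys homogeneity: scaling the input
# does not scale the output, unlike any linear map.
print(np.allclose(measure_update(2 * psi1), 2 * measure_update(psi1)))  # False
```

Any unitary (Schrödinger) evolution, by contrast, passes both checks, which is exactly the gap between the two postulates the commenters are arguing about.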
Fun with Chlorine Hi guys and gals, it’s been a while since my last entry. Last week kept me very busy. In the midst of my late nights typing, I learned some fun things about chloride channels (for one of PZ’s exams). I learned about their job of regulating cell volume and an appropriate cell-membrane charge. One thing piqued my curiosity. The cell exterior has roughly 5 milliMolar [chloride–], while the interior has 125 milliMolar [chloride–]. The interior also has a negative charge. Despite all of those factors, the articles I read seemed to say that chloride would diffuse inward if the channels were to open. That is very weird, unless I’m missing something. Is there some concentration of a similar ion on the outside that is high enough to send chloride scurrying inward? If anyone has experience in this area, please chime in. 1. says Are you sure you don’t mean 125 mM [Cl-]o and 5 mM [Cl-]i? I think there are cells in certain tissues that can reverse their [Cl-] gradient, but the vast majority have relatively low intracellular Cl- relative to extracellular Cl-. What sort of cells are you looking at? 2. says Yes, what DS said — I think you got internal and external concentrations reversed in your notes. Try using the Nernst equation to calculate the equilibrium potential for Cl with those concentrations, and you’ll figure it out. 3. speedwell says This is an excellent post because it shows the best of the blogging collaborative spirit… a blogger comfortable with and respectful of the readership, so much so that he doesn’t mind admitting he doesn’t know something, and appealing to them for input. Extra credit for spelling “piqued” correctly. 4. B. Dewhirst says I was under the impression that the cell membrane acted as a ‘pump,’ selectively moving certain materials… But then, I’m a Metallurgist… 5. Rob says Not to mention that there is a big difference between chlorine and chloride. Fun with chlorine was what the Germans did in WWI.
Chloride, however, is an innocuous ion that is critical for maintaining the potential across membranes. The difference between intracellular and extracellular chloride is the result of an active pump. “To a physicist, all molecules are the same: simple manifestations of the Schrödinger equation. A chemist appreciates the difference” And a biologist doesn’t know the difference. 6. Epikt says Hmmph. The only difference between chemistry and physics is the boundary condition on the wave function at infinity. 7. Bill Dauphin says Hmmm… that phrase reminds me of my favorite folk song of chemical romance: Kate McGarrigle, Garden Court Music, ASCAP

Just a little atom of chlorine
Valence minus one
Swimming thru the sea, digging the scene
Just having fun
She’s not worried about the shape or size
Of her outside shell
It’s fun to ionize
Just a little atom of Cl
With an unfilled shell

But somewhere in that sea lurks
Handsome Sodium
With enough electrons on his outside shell
Plus that extra one

Somewhere in this deep blue sea
There’s a negative
For my extra energy yes
Somewhere in this foam
My positive will find a home

Then unsuspecting Chlorine
Felt a magnetic pull
She looked down and her outside
Shell was full
Sodium cried “what a gas be my bride,
and I’ll change your name from Chlorine to Chloride”

Now the sea evaporates to make the clouds
For the rain and snow
Leaving her chemical compounds in the absence
Of H2O
But the crystals that wash upon the shore
Are happy ones
So if you never thought before
Think of the love that you eat
When you salt your meat
Think of the love that you eat
When you salt your meat

I’m sure there’s an MP3 out there somewhere, but the filters I’m behind won’t let me even look for it, much less link to it. 8. evilchemistry says And a biologist doesn’t know the difference. Really? Damn, all those biologists that have and are studying facilitated diffusion are gonna be pissed.
Even those biologists that study that tricky kind of facilitated diffusion that costs energy, dude you really don’t want to anger them. I know, confusing, right? You must be a chemist or a physicist. 9. zayzayem says Weee… I own a textbook. Actually several… According to Campbell Reece and Mitchell (5th Ed “Biology”) Inside the cell is [K+] 150mM; [Na+] 15mM; [Cl-] 10mM; [A-] 100mM Outside is [K+] 5mM; [Na+] 150mM; [Cl-] 120mM; and a dearth of non-membrane permeable anions [A-] (which I’m gonna assume are probably a lot of proteins and other crap) This seems to suggest that chloride ions would indeed diffuse into the cell. But remember it’s not just concentration gradients, but also overall charge etc. Being a basic biology textbook it focuses more on the Na+/K+ balance than what exactly the Cl- is doing here. 10. says Undoubtedly, the interior and exterior Cl- concentrations are exchanged. Our cells still float in the good old seawater environment of three billion years ago, which means with a lot of Na and Cl (salt ions) outside our cells. 11. David says The NaCl song is from a 1978 album titled Pronto Monto. It was with her sister and sold so poorly that they were soon dropped by the Warner label. The album is their only one not released on CD, thus difficult to find a version on this series of tubes. I don’t do bit torrent, but there is this: 12. Deech56 says Way back when I took biologies (before the internets) I think I just learned that Cl- tends to follow Na+ and serves to balance the charge of intracellular anions. Of course, those of us who remember using Index Medicus may have forgotten a bit of the finer points of biology. Mark_Antimony, sometimes I envy undergrads who are just scratching the surface of all the wonder of the biological world. The only thing that can match discovering what people know is discovering something people did not know. 13.
says There is really no evidence that sea water 3 billion years ago had the mineral composition of cytoplasm. It is virtually certain that it did not. What sets the composition of sea water is mostly hydrothermal reactions with hot rock under the sea floor, not what is carried to the oceans by rivers. 3 billion years ago was before the great oxidation event, so there was still considerable ferrous iron in the ocean, as well as little sulfate. What sets the composition of cytoplasm is the 3 billion years of evolution that have occurred since then. If having a different ionic composition would be better, it would evolve over that time frame. 14. says First, the initial comment by DS is correct. In animal cells, 150 mM is a reasonable external chloride concentration; it’s higher in marine animals. Recall that animals originally came from the sea, and extracellular fluid is not unlike sea water, which contains a lot of sodium chloride. Second, intracellular chloride is low. Most negatively charged intracellular ions are amino acids or macromolecules, and chloride is actively transported out of cells by ion pumps embedded in the membrane. Based on this information, you might think that an ion channel that allowed chloride to pass through it would let chloride in when it opened. However, that would only be true if there were no other forces acting on the chloride ions. In fact, cells have a difference in voltage (a.k.a. membrane potential) between the inside and the outside, which is established by ion pumps and channels. In nearly all cells, the inside of the cell is more negative. If the membrane potential is negative enough, then chloride doesn’t “want” to flow in any more, and will flow outward. However, note that this would be a rare occurrence. The basic idea is that cells are both electrical and chemical in nature. You might already understand that part, and anyway, as PZ says, you’ll figure it all out.
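For anyone who wants to try the Nernst-equation suggestion from comment 2 themselves, here is a minimal Python sketch. The function name and the 310 K body-temperature figure are my assumptions; the 120 mM / 10 mM chloride concentrations are the textbook values quoted in comment 9:

```python
import math

def nernst_potential_mV(c_out_mM, c_in_mM, z, T_kelvin=310.0):
    """Nernst equilibrium potential E = (R*T / z*F) * ln([X]out/[X]in), in millivolts."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * T_kelvin / (z * F)) * math.log(c_out_mM / c_in_mM)

# Chloride carries charge z = -1; textbook values: [Cl-]out = 120 mM, [Cl-]in = 10 mM
E_Cl = nernst_potential_mV(120.0, 10.0, z=-1)
print(f"E_Cl = {E_Cl:.1f} mV")   # roughly -66 mV
```

With these numbers the chloride equilibrium potential comes out near -66 mV, so whether chloride actually flows in or out when a channel opens depends on whether the resting membrane potential sits above or below that value — which is exactly the point made in comment 14.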
Linear and nonlinear waves
Graham W Griffiths and William E. Schiesser (2009), Scholarpedia, 4(7):4308. doi:10.4249/scholarpedia.4308 revision #154041
Curator: Graham W Griffiths
The study of waves can be traced back to antiquity where philosophers, such as Pythagoras (c. 560-480 BC), studied the relation of pitch and length of string in musical instruments. However, it was not until the work of Giovanni Benedetti (1530-90), Isaac Beeckman (1588-1637) and Galileo (1564-1642) that the relationship between pitch and frequency was discovered. This started the science of acoustics, a term coined by Joseph Sauveur (1653-1716) who showed that strings can vibrate simultaneously at a fundamental frequency and at integral multiples that he called harmonics. Isaac Newton (1642-1727) was the first to calculate the speed of sound in his Principia. However, he assumed isothermal conditions so his value was too low compared with measured values. This discrepancy was resolved by Laplace (1749-1827) when he included adiabatic heating and cooling effects. The first analytical solution for a vibrating string was given by Brook Taylor (1685-1731). After this, advances were made by Daniel Bernoulli (1700-82), Leonhard Euler (1707-83) and Jean d'Alembert (1717-83) who found the first solution to the linear wave equation, see section (The linear wave equation). Whilst others had shown that a wave can be represented as a sum of simple harmonic oscillations, it was Joseph Fourier (1768-1830) who conjectured that arbitrary functions can be represented by the superposition of an infinite sum of sines and cosines - now known as the Fourier series.
However, whilst his conjecture was controversial and not widely accepted at the time, Dirichlet subsequently provided a proof, in 1828, that all functions satisfying Dirichlet's conditions (i.e. non-pathological piecewise continuous) could be represented by a convergent Fourier series. Finally, the subject of classical acoustics was laid down and presented as a coherent whole by John William Strutt (Lord Rayleigh, 1842-1919) in his treatise Theory of Sound. The science of modern acoustics has now moved into such diverse areas as sonar, auditoria, electronic amplifiers, etc. The study of hydrostatics and hydrodynamics was being pursued in parallel with the study of acoustics. Everyone is familiar with Archimedes' (c. 287-212 BC) eureka moment; however he also discovered many principles of hydrostatics and can be considered to be the father of this subject. The theory of fluids in motion began in the 17th century with the help of practical experiments of flow from reservoirs and aqueducts, most notably by Galileo's student Benedetto Castelli. Newton also made contributions in the Principia with regard to resistance to motion, noting also that the minimum cross-section of a stream issuing from a hole in a reservoir is reached just outside the wall (the vena contracta). Rapid developments using advanced calculus methods by Siméon-Denis Poisson (1781-1840), Claude Louis Marie Henri Navier (1785-1836), Augustin Louis Cauchy (1789-1857), Sir George Gabriel Stokes (1819-1903), Sir George Biddell Airy (1801-92), and others established a rigorous basis for hydrodynamics, including vortices and water waves, see section (Physical wave types). This subject now goes under the name of fluid dynamics and has many branches such as multi-phase flow, turbulent flow, inviscid flow, aerodynamics, meteorology, etc.
The study of electromagnetism was again started in antiquity, but very few advances were made until a proper scientific basis was finally initiated by William Gilbert (1544-1603) in his De Magnete. However, it was only late in the 18th century that real progress was achieved when Franz Ulrich Theodor Aepinus (1724-1802), Henry Cavendish (1731-1810), Charles-Augustin de Coulomb (1736-1806) and Alessandro Volta (1745-1827) introduced the concepts of charge, capacity and potential. Additional discoveries by Hans Christian Ørsted (1777-1851), André-Marie Ampère (1775-1836) and Michael Faraday (1791-1867) found the connection between electricity and magnetism and a full unified theory in rigorous mathematical terms was finally set out by James Clerk Maxwell (1831-79) in his Treatise on Electricity and Magnetism. It was in this work that all electromagnetic phenomena and all optical phenomena were first accounted for, including waves, see section (Electromagnetic wave). It also included the first theoretical prediction for the speed of light. At the end of the 19th century, when some erroneously considered physics to be very nearly complete, new physical phenomena began to be observed that could not be explained. These demanded a whole new set of theories that ultimately led to the discovery of general relativity and quantum mechanics; which, even now in the 21st century, are still yielding exciting new discoveries. However, as this article is primarily concerned with classical wave phenomena, we will not pursue these topics further. Historic data source: Dictionary of the History of Science [Byn-84].
A wave is a time evolution phenomenon that we generally model mathematically using partial differential equations (PDEs) which have a dependent variable \(u(x,t)\) (representing the wave value), an independent variable time \(t\) and one or more independent spatial variables \(x\in\mathbb{R}^{n}\ ,\) where \(n\) is generally equal to \(1,2 \;\textrm{or}\; 3\ .\) The actual form that the wave takes is strongly dependent upon the system initial conditions, the boundary conditions on the solution domain and any system disturbances. Waves occur in most scientific and engineering disciplines, for example: fluid mechanics, optics, electromagnetism, solid mechanics, structural mechanics, quantum mechanics, etc. The waves for all these applications are described by solutions to either linear or nonlinear PDEs. We do not focus here on methods of solution for each type of wave equation, but rather we concentrate on a small selection of relevant topics. However, first, it is legitimate to ask: what actually is a wave? This is not a straightforward question to answer. Now, whilst most people have a general notion of what a wave is, based on their everyday experience, it is not easy to formulate a definition that will satisfy everyone engaged in or interested in this wide ranging subject. In fact, many technical works related to waves eschew a formal definition altogether and introduce the concept by a series of examples; for example, Physics of Waves [Elm-69] and Hydrodynamics [Lam-93]. Nevertheless, it is useful to at least make an attempt and a selection of various definitions from normally authoritative sources is given below: • "A time-varying quantity which is also a function of position" - Chambers Dictionary of Science and Technology [Col-71]. • "... a wave is any recognizable signal that is transferred from one part of the medium to another with a recognizable velocity of propagation" - Linear and Nonlinear Waves [Whi-99].
• "Speaking generally, we may say that it denotes a process in which a particular state is continually handed on without change, or with only gradual change, from one part of a medium to another" - 1911 Encyclopædia Britannica. • "a periodic motion or disturbance consisting of a series of many oscillations that propagate through a medium or space, as in the propagation of sound or light: the medium does not travel outward from the source with the wave but only vibrates as it passes" - Webster's New World College Dictionary, 4th Ed. • "... an oscillation that travels through a medium by transferring energy from one particle or point to another without causing any permanent displacement of the medium" - Encarta® World English Dictionary [Mic-07]. The variety of definitions given above, and their clearly differing degrees of clarity, confirm that 'wave' is indeed not an easy concept to define! Because this is an introductory article and the subject of linear and non-linear waves is so wide ranging, we can only include sufficient material here to provide an overview of the phenomena and related issues. Relativistic issues will not be addressed. To this end we will discuss, as proxies for the wide range of known wave phenomena, the linear wave equation and the nonlinear Korteweg-de Vries equation in some detail by way of examples. To supplement this discussion we provide brief details of other types of wave equation and their application; and, finally, we introduce a number of PDE wave solution methods and discuss some general properties of waves. Where appropriate, references are included to works that provide further detailed discussion. Physical wave types A non-exhaustive list is given below of physical wave types with examples of occurrence and references where more details may be found. • Acoustic waves - audible sound, medical applications of ultrasound, underwater sonar applications [Elm-69]. 
• Chemical waves - concentration variations of chemical species propagating in a system [Ros-88]. • Electromagnetic waves - electricity in various forms, radio waves, light waves in optic fibers, etc. [Sha-75]. • Gravitational waves - The transmission of variations in a gravitational field in the form of waves, as predicted by Einstein's theory of general relativity. Undisputed verification of their existence is still awaited [Oha-94, chapter 5]. • Seismic Waves - resulting from earthquakes in the form of P-waves and S-waves, large explosions, high velocity impacts [Elm-69]. • Traffic flow waves - small local changes in velocity occurring in high density situations can result in the propagation of waves and even shocks [Lev-07]. • Water waves - some examples • Capillary waves (Ripples) - When ripples occur in water they are manifested as waves of short length, \(\lambda=2\pi/k<0.1m\ ,\) (\(k=\)wavenumber) and in which surface tension has a significant effect. We will not consider them further, but a full explanation can be found in Lighthill [Lig-78, p221]. See also Whitham [Whi-99, p404]. • Rossby (or planetary) waves - Long period waves formed as polar air moves toward the equator whilst tropical air moves to the poles - due to variation in the Coriolis effect. As a result of differences in solar radiation received at the equator and poles, heat tends to flow from low to high latitudes, and this is assisted by these air movements [Gil-82]. • Shallow water waves - For waves where the wavelength \(\lambda\ \) (distance between two corresponding points on the wave, e.g.
peaks), is very much greater than water depth \(h\ ,\) they can be modelled by the following simplified set of coupled fluid dynamics equations, known as the shallow water equations \[\tag{1} {\displaystyle \left[\begin{array}{c} {\displaystyle \frac{\partial h}{\partial t}}\\ \\\frac{ {\displaystyle \partial u}}{{\displaystyle \partial t} }\end{array}\right]+\left[\begin{array}{c} {\displaystyle \frac{\partial\left(hu\right)}{\partial x}}\\ \\{\displaystyle \frac{\partial\left({\textstyle \frac{1}{2}}u^{2}+gh\right)}{\partial x}}\end{array}\right]=}\left[\begin{array}{c} 0\\ \\-g{\displaystyle \frac{\partial b}{\partial x}}\end{array}\right].\ \]   \(b\left(x\right)\) = fluid bed topography   \(h\left(x,t\right)\) = fluid surface height above bed   \(u\left(x,t\right)\) = fluid velocity - horizontal   \(g\) = acceleration due to gravity. For this situation, the celerity or speed of wave propagation can be approximated by \(c=\sqrt{gh}\ .\) For detailed discussion refer to [Joh-97]. • Ship waves - These are surface waves that are formed by a ship travelling in deep water, relative to the wavelength, and where surface tension can be ignored. The dispersion relation is given by \(\omega=\sqrt{gk}\ ;\) so for phase velocity and group velocity (see section (Group and phase velocity)), we have respectively: \[\tag{2} c_{p} = \frac{\omega}{k}=\sqrt{\frac{g}{k}}, \] \[\tag{3} c_{g} = \frac{d\omega\left(k\right)}{dk}=\frac{1}{2}c_{p}.\ \] The result is that the ship's wake is a wedge-shaped envelope of waves having a semi-angle of \(\backsimeq19.5\) degrees and a feathered pattern with the ship at the vertex. The shape is a characteristic of such waves, regardless of the size of disturbance - from a small duckling paddling on a pond to a large ocean liner cruising across an ocean. These patterns are referred to as Kelvin Ship Waves after Lord Kelvin (William Thomson) [Joh-97]. • Tsunami waves - See section (Tsunami).
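The two speed formulas above, the shallow-water celerity \(c=\sqrt{gh}\) and the deep-water relations (2) and (3), are easy to evaluate numerically. A minimal Python sketch follows; the 4,000 m ocean depth and 100 m wavelength are illustrative values of my choosing, not taken from the text:

```python
import math

g = 9.81  # acceleration due to gravity, m/s^2

def shallow_water_celerity(h):
    """Wave speed c = sqrt(g*h), valid when the wavelength greatly exceeds the depth h."""
    return math.sqrt(g * h)

def deep_water_speeds(wavelength):
    """Phase and group velocity for deep-water gravity waves with omega = sqrt(g*k)."""
    k = 2.0 * math.pi / wavelength   # wavenumber
    c_p = math.sqrt(g / k)           # phase velocity, eq. (2)
    c_g = 0.5 * c_p                  # group velocity, eq. (3)
    return c_p, c_g

# An ocean 4,000 m deep: a wave much longer than the depth (e.g. a tsunami)
print(shallow_water_celerity(4000.0))   # ~198 m/s, about 713 km/h

# A 100 m wavelength ship wave in deep water: group velocity is half the phase velocity
c_p, c_g = deep_water_speeds(100.0)
print(c_p, c_g)
```

The factor of one half between \(c_g\) and \(c_p\) is what produces the characteristic pattern of the Kelvin wake described above.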
Linear waves Linear waves are described by linear equations, i.e. those where in each term of the equation the dependent variable and its derivatives are at most first degree (raised to the first power). This means that the superposition principle applies, and linear combinations of simple solutions can be used to form more complex solutions. Thus, all the linear system analysis tools are available to the analyst, with Fourier analysis (expressing general solutions in terms of sums or integrals of well-known basic solutions) being one of the most useful. The classic linear wave is discussed in section (The linear wave equation) with some further examples given in section (Linear wave equation examples). Linear waves are modelled by PDEs that are linear in the dependent variable, \(u\ ,\) and its first and higher derivatives, if they exist. The linear wave equation The following represents the classical wave equation in one dimension and describes undamped linear waves in an isotropic medium \[\tag{4} {\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}}={\displaystyle \frac{\partial^{2}u}{\partial x^{2}}.}\] It is second order in \(t\) and \(x\ ,\) and therefore requires two initial condition functions (ICs) and two boundary condition functions (BCs). For example, we could specify \[\tag{5} \begin{array}{lcl} \textrm{ICs:}\quad u\left(x,t=0\right)=f\left(x\right),\quad u_{t}\left(x,t=0\right)=g\left(x\right) , \end{array}\] \[\tag{6} \begin{array}{lcl}\textrm{BCs:}\quad u\left(x=a,t\right)=u_{a},\quad u\left(x=b,t\right)=u_{b}. \end{array}\] Consequently, equations (4), (5) and (6) constitute a complete description of the PDE problem. We assume \(f\) to have a continuous second derivative (written \(f\in C^{2}\)) and \(g\) to have a continuous first derivative (\(g\in C^{1}\)). If this is the case, then \(u\) will have continuous second derivatives in \(x\) and \(t\ ,\) i.e.
(\(u\in C^{2}\)), and will be a correct solution to equation (4) with any consistent set of appropriate ICs and BCs [Stra-92]. Extending equation (4) to three dimensions, the classical wave equation becomes, \[\tag{7} \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}=\nabla^{2}u,\] where \(\nabla^{2}=\nabla\cdot\nabla\) represents the Laplacian operator. Because the Laplacian is co-ordinate free, it can be applied within any co-ordinate system and for any number of dimensions. Given below are examples of wave equations in 3 dimensions for Cartesian, cylindrical and spherical co-ordinate systems \[ \begin{array}{lccl} \textrm{Cartesian co-ordinates:} & {\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}} & = & {\displaystyle \frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}+\frac{\partial^{2}u}{\partial z^{2}},}\\ \textrm{Cylindrical co-ordinates}: & {\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}} & = & {\displaystyle \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right)+\frac{1}{r^{2}}\frac{\partial^{2}u}{\partial\theta^{2}}+\frac{\partial^{2}u}{\partial z^{2}},}\\ \textrm{Spherical co-ordinates}: & {\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}} & = & {\displaystyle \frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial u}{\partial r}\right)+\frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial u}{\partial\theta}\right)+\frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}u}{\partial\phi^{2}}.}\end{array}\] These equations occur in one form or another, in numerous applications in all areas of the physical sciences; see for example section (Linear wave equation examples ). The d'Alembert solution The solution to equations (4), (5) and (6) was first reported by the French mathematician Jean-le-Rond d'Alembert (1717-1783) in 1747 in a treatise on Vibrating Strings [Caj-61] [Far-93]. 
D'Alembert's remarkable solution, which used a method specific to the wave equation (based on the chain rule for differentiation), is given below \[\tag{8} u(x,t)=\frac{1}{2}\left[f(x-ct)+f(x+ct)\right]+\frac{1}{2c}\int_{x-ct}^{x+ct}g(\xi)d\xi.\] It can also be obtained by the Fourier Transform method or by the separation of variables (SOV) method, which are more general than the method used by d'Alembert [Krey-93]. The d'Alembertian \(\square=\nabla^{2}-{\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}}\ ,\) also known as the d'Alembert operator or wave operator, allows a succinct notation for the wave equation, i.e. \(\square u=0\ .\) It first arose in d'Alembert's work on vibrating strings and plays a useful role in modern theoretical physics. Linear wave equation examples Acoustic (sound) wave We will consider the acoustic or sound wave as a small amplitude disturbance of ambient conditions where second order effects can be ignored. We start with the Euler continuity and momentum equations \[\tag{9} \frac{\partial\rho}{\partial t}+\nabla\cdot\left(\rho v\right) = 0,\] \[\tag{10} \frac{\partial\left(\rho v\right)}{\partial t}+\nabla\cdot\left(\rho vv\right)-\rho g+\nabla p+\nabla\cdot T = 0,\]   \(T\) = stress tensor (Pa)   \(g\) = gravitational acceleration (m/s\(^{2}\))   \(p\) = pressure (Pa)   \(t\) = time (s)   \(v\) = fluid velocity (m/s)   \(\rho\) = fluid density (kg/m\(^{3}\)) We assume an inviscid dry gas situation where gravitational effects are negligible. This means that the third and fifth terms of equation (10) can be ignored. If we also assume that we can represent velocity by \(v=u_{0}+u\ ,\) where \(u_{0}\) is ambient velocity which we set to zero and \(u\) represents a small velocity disturbance, the second term in equation (10) can be ignored (because it becomes a second order effect).
Thus, equations (9) and (10) reduce to \[\tag{11} \frac{\partial\rho}{\partial t}+\nabla\cdot\left(\rho u\right) = 0,\] \[\tag{12} \frac{\partial\left(\rho u\right)}{\partial t}+\nabla p = 0.\] Now, taking the divergence of equation (12) and the time derivative of equation (11), we obtain\[ \frac{\partial^{2}\rho}{\partial t^{2}}-\nabla^{2}p=0.\] To complete the analysis we need to apply an equation of state relating \(p\) and \(\rho\ ,\) whereupon we obtain the linear acoustic wave equation \[\tag{13} \frac{1}{c^{2}}\frac{\partial^{2}p}{\partial t^{2}}=\nabla^{2}p,\] where \[\tag{14} c^{2}=\frac{\partial p}{\partial\rho}\ .\] We now consider three cases: • The isothermal gas case\[p=\rho RT_{0}/MW\] (ideal gas law) \(\Rightarrow\left(\frac{\partial p}{\partial\rho}\right)_{T}=RT_{0}/MW\) and \(c=\sqrt{RT_{0}/MW}\ ,\) where \(T_{0}\) is the ambient temperature of the fluid, \(R\) is the ideal gas constant, \(MW\) is molecular weight and subscript \(T\) denotes constant temperature conditions. • The isentropic gas case\[p/\rho^{\gamma}=K\Rightarrow\left(\frac{\partial p}{\partial\rho}\right)_{s}=\gamma K\rho^{\gamma-1}=\gamma RT_{0}/MW\] and \(c=\sqrt{\gamma RT_{0}/MW}\ ,\) where \(\gamma\) is the isentropic or adiabatic exponent for the fluid (equal to the ratio of specific heats) and subscript \(s\) denotes constant entropy conditions. • The isothermal liquid case\[\left(\frac{\partial p}{\partial\rho}\right)_{T}=\beta/\rho\] and \(c=\sqrt{\beta/\rho},\) where \(\beta\) is the bulk modulus.
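These three expressions for \(c\) can be checked with a few lines of Python. This is a sketch only; the property values for air and water are standard handbook figures, assumed here rather than derived:

```python
import math

# Properties of dry air at ambient conditions (assumed standard values)
R = 8.3145        # ideal gas constant, J/(mol*K)
T0 = 293.15       # ambient temperature, K
MW = 0.028965     # molecular weight of air, kg/mol
gamma = 1.4       # ratio of specific heats for air

c_isothermal = math.sqrt(R * T0 / MW)          # c = sqrt(R*T0/MW)
c_isentropic = math.sqrt(gamma * R * T0 / MW)  # c = sqrt(gamma*R*T0/MW)

# Distilled water at 20 C: bulk modulus beta and density rho (assumed standard values)
beta = 2.18e9     # Pa
rho = 1000.0      # kg/m^3
c_water = math.sqrt(beta / rho)                # c = sqrt(beta/rho)

print(round(c_isothermal))   # 290 m/s
print(round(c_isentropic))   # 343 m/s
print(round(c_water))        # 1476 m/s
```

The isentropic value is the one that matches measured sound speeds in air; this is precisely Laplace's adiabatic correction to Newton's too-low isothermal estimate, mentioned in the historical introduction.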
For atmospheric air at standard conditions we have \(p=101325\)Pa, \(T_{0}=293.15\)K, \(R=8.3145\)J/mol/K, \(\gamma=1.4\) and \(MW=0.028965\)kg/mol, which gives \[\tag{15} \textrm{isothermal:}\quad c = 290\textrm{m/s,}\] \[\tag{16} \textrm{isentropic:}\quad \; c = 343\textrm{m/s.}\] For liquid distilled water at \(20\)C we have \(\beta=2.18\times10^{9}\)Pa and \(\rho=1,000\)kg/m\(^{3},\) which gives \[\tag{17} \textrm{liquid}:\quad c=1476\textrm{m/s.}\] Waves in solids Waves in solids are more complex than acoustic waves in fluids. Here we are dealing with displacement \(\varrho\ ,\) and the resulting waves can be either longitudinal, P-waves, or shear (transverse), S-waves. Starting with Newton's second Law we arrive at the vector wave equation [Elm-69, chapter 7] \[\tag{18} \left(\lambda+\mu\right)\nabla\left(\nabla\cdot\varrho\right)+\mu\nabla^{2}\varrho=\rho\frac{\partial^{2}\varrho}{\partial t^{2}},\] from which, using the fundamental identity from vector calculus, \(\nabla\times\left(\nabla\times\varrho\right)=\nabla\left(\nabla\cdot\varrho\right)-\nabla^{2}\varrho\ ,\) we obtain \[\tag{19} \left(\lambda+2\mu\right)\nabla\left(\nabla\cdot\varrho\right)-\mu\nabla\times\left(\nabla\times\varrho\right)=\rho\frac{\partial^{2}\varrho}{\partial t^{2}}.\] Now, for irrotational waves, which vibrate only in the direction of propagation \(x\ ,\) \(\nabla\times\varrho=0\Rightarrow\nabla\left(\nabla\cdot\varrho\right)=\nabla^{2}\varrho\) and equation (19) reduces to the familiar linear wave equation \[\tag{20} \frac{1}{c^{2}}\frac{\partial^{2}\varrho}{\partial t^{2}}=\nabla^{2}\varrho,\] where \(c=\sqrt{\left(\lambda+2\mu\right)/\rho}=\sqrt{\left(K+\frac{4}{3}\mu\right)/\rho}\) is the wave speed, \(\lambda=E\upsilon/\left(1+\upsilon\right)\left(1-2\upsilon\right)\) is the Lamé modulus, \(\mu={\displaystyle \frac{E}{2\left(1+\upsilon\right)}}\) is the shear modulus and \(K=E/3\left(1-2\upsilon\right)\) is the bulk modulus of the solid material.
Here, \(E\) and \(\upsilon\) are Young's modulus and Poisson's ratio for the solid respectively. Irrotational waves are of the longitudinal type, or P-waves. For solenoidal waves, which can vibrate independently in the \(y\) and \(z\) directions but not in the direction of propagation \(x\ ,\) we have \(\nabla\cdot\varrho=0\) and equation (18) reduces to the linear wave equation where the wave speed is given by \(c=\sqrt{\mu/\rho}\) . Solenoidal waves are of the transverse type, or S-waves. For a typical mild steel at \(20\)C with \(\rho=7,860\)kg/m\(^{3}\ ,\) \(E=210\times10^{9}\)N/m\(^{2}\) and \(\upsilon=0.29\) we find that the P-wave speed is \(5,917\)m/s and the S-wave speed is \(3,218\)m/s. For further discussion refer to [Cia-88]. Electromagnetic waves The fundamental equations of electromagnetism are the Maxwell Equations, which in differential form and SI units, are usually written as: \[\tag{22} \nabla\cdot E = \frac{1}{\epsilon_{0}}\rho,\] \[\tag{23} \nabla\cdot B = 0,\] \[\tag{24} \nabla\times E = -\frac{\partial B}{\partial t},\] \[\tag{25} \nabla\times B = \mu_{0}J+\mu_{0}\epsilon_{0}\frac{\partial E}{\partial t},\]   \(B =\) magnetic field (T)   \(E =\) electric field (V/m)   \(J =\) current density (A/m\(^{2}\))   \(\; t =\) time (s)   \(\epsilon_{0} =\) permittivity of free space (\(8.8541878\times10^{-12}\simeq10^{-9}/36\pi\) F/m)   \(\mu_{0} =\) permeability of free space (\(4\pi\times10^{-7}\) H/m)   \(\; \rho =\) charge density (C/m\(^{3}\)) If we assume that \(J=0\) and \(\rho=0\ ,\) then on taking the curl of equation (24) and again using the fundamental identity from vector calculus, \(\nabla\times\left(\nabla\times E\right)=\nabla\left(\nabla\cdot E\right)-\nabla^{2}E\ ,\) we obtain \[\tag{26} \frac{1}{c_{0}^{2}}\frac{\partial^{2}E}{\partial t^{2}}=\nabla^{2}E.\] Similarly, taking the curl of equation (25) we obtain \[\tag{27} \frac{1}{c_{0}^{2}}\frac{\partial^{2}B}{\partial t^{2}}=\nabla^{2}B.\] Equations (26) and (27) are the linear
electric and magnetic wave equations respectively, where \(c_{0}=1/\sqrt{\mu_{0}\epsilon_{0}}\simeq3\times10^{8}\) m/s, the speed of light in a vacuum. They take the familiar form of linear wave equation (4). For further discussion refer to [Sha-75]. Nonlinear waves Nonlinear waves are described by nonlinear equations, and therefore the superposition principle does not generally apply. This means that nonlinear wave equations are more difficult to analyze mathematically and that no general analytical method for their solution exists. Thus, unfortunately, each particular wave equation has to be treated individually. An example of solving the Korteweg-de Vries equation by direct integration is given below. Some advanced methods that have been used successfully to obtain closed-form solutions are listed in section (Closed form PDE solution methods), and example solutions to well known evolution equations are given in section (Nonlinear wave equation solutions). Closed form PDE solution methods There are no general methods guaranteed to find closed form solutions to non-linear PDEs. Nevertheless, some problems can yield to a trial-and-error approach. This hit-and-miss method seeks to deduce candidate solutions by looking for clues from the equation form, and then systematically investigating whether or not they satisfy the particular PDE. If the form is close to one with an already known solution, this approach may yield useful results. However, success is problematical and relies on the analyst having a keen insight into the problem. We list below, in alphabetical order, a non-exhaustive selection of advanced solution methods that can assist in determining closed form solutions to nonlinear wave equations. We will not discuss these methods further, and refer the reader to the references given for details. All these methods are greatly enhanced by use of a symbolic computer program such as: Maple V, Mathematica, Macsyma, etc.
• Bäcklund transformation - A method used to find solutions to a non-linear partial differential equation from either a known solution to the same equation or from a solution to another equation. This can facilitate finding more complex solutions from a simple solution, e.g. multi-soliton solutions from a single-soliton solution [Abl-91],[Inf-00],[Dra-89]. • Generalized separation of variables method - For simple cases this method involves searching for exact solutions of the multiplicative separable form \( u\left(x,t\right)=\varphi\left(x\right)\psi\left(t\right)\) or, of the additive separable form \(u\left(x,t\right)=\varphi\left(x\right)+\psi\left(t\right)\ ,\) where \(\varphi\left(x\right)\) and \(\psi\left(t\right)\) are functions to be found. The chosen form is substituted into the original equation and, after performing some algebraic operations, two expressions are obtained that are each deemed equal to a constant \(K\ ,\) the separation constant. Each expression is then solved independently and then combined additively or multiplicatively as appropriate. Initial conditions and boundary conditions are then applied to give a particular solution to the original equation. For more complex cases, special solution forms such as \(u\left(x,t\right)=\varphi\left(x\right)\psi\left(t\right)+\chi\left(x\right)\) can be sought - refer to [Pol-04, pp. 698-712], [Gal-06], and [Pol-07, pp. 681-696] for a detailed discussion.
• Differential constraints method - This method seeks particular solutions of equations of the form \(F\left(x,y,u,{\displaystyle \frac{\partial u}{\partial x},}{\displaystyle \frac{\partial u}{\partial y}},{\displaystyle \frac{\partial^{2}u}{\partial x^{2}}},{\displaystyle \frac{\partial^{2}u}{\partial x\partial y}},{\displaystyle \frac{\partial^{2}u}{\partial y^{2}},\cdots}\right)=0\) by supplementing them with an additional differential constraint(s) of the form \(G\left(x,y,u,{\displaystyle \frac{\partial u}{\partial x},}{\displaystyle \frac{\partial u}{\partial y}},{\displaystyle \frac{\partial^{2}u}{\partial x^{2}}},{\displaystyle \frac{\partial^{2}u}{\partial x\partial y}},{\displaystyle \frac{\partial^{2}u}{\partial y^{2}},\cdots}\right)=0\ .\) The exact form of the differential constraint is determined from auxiliary problem conditions, usually based on physical insight. Compatibility analysis is then performed, for example by differentiating \(F\) and \(G\) (possibly several times), which enables an ordinary differential equation(s) to be constructed that can be solved. The resulting ODE is the compatibility condition for \(F\) and \(G\) and its solution can be used to obtain a solution to the original equation - refer to [Pol-04, pp. 747-758] for a detailed discussion. • Group analysis methods (Lie group methods) - These methods seek to identify symmetries of an equation which permit us to discover: (i) transformations under which the equation is invariant, (ii) new variables in which the structure of the equation is simplified.
For an \((n+1)\)-dimensional Euclidean space, the set of transformations \(\mathrm{T}_{\epsilon}=\left\{ \begin{array}{rc} \bar{x_{i}}=\varphi_i\left(x,u,\epsilon\right), & \left.\bar{x_{i}}\right|_{\epsilon=0}=x_{i}\\ \bar{u}=\psi\left(x,u,\epsilon\right), & \left.\bar{u}\right|_{\epsilon=0}=u\end{array}\right.\ ,\) where \(\varphi_{i}\) and \(\psi\) are smooth functions of their arguments and \(\epsilon\) is a real parameter, is called a one-parameter continuous point Lie group of transformations, \(G\ ,\) if for all \(\epsilon_{1}\) and \(\epsilon_{2}\) we have \(T_{\epsilon_{1}}\circ T_{\epsilon_{2}}=T_{\epsilon_{1}+\epsilon_{2}}\) - refer to [Ibr-94] and [Pol-04, pp. 735-743] for a detailed discussion. • Hirota's bilinear method - This method can be used to construct periodic and soliton wave solutions to nonlinear PDEs. It seeks a solution of the form \(u=-2\left(\log f\right)_{xx}\) by introducing the bilinear operator \(D_{t}^{m}D_{x}^{n}\left(a\cdot b\right)=\left.\left({\displaystyle \frac{\partial}{\partial t}-\frac{\partial}{\partial t^{\prime}}}\right)^{m}\left({\displaystyle \frac{\partial}{\partial x}-\frac{\partial}{\partial x{}^{\prime}}}\right)^{n}a\left(x,t\right)b\left(x^{\prime},t^{\prime}\right)\right|_{\begin{array}{c} x^{\prime}=x\\ t^{\prime}=t\end{array}}\) for non-negative integers \(m\) and \(n\) [Joh-97],[Dai-06]. • Hodograph transformation method - This method belongs to the class of point transformations and involves the interchange of dependent and independent variables, i.e. \(\tau=t\ ,\) \(\xi=u\left(x,t\right)\ ,\) \(\eta\left(\xi,\tau\right)=x\ .\) This transformation can, for certain applications, result in a simpler (possibly an exact linearization) problem for which solutions can be found [Cla-89], [Pol-04, pp. 686-687]. • Inverse scattering transform (IST) method - The phenomenon of scattering refers to the evolution of a wave subject to certain conditions, such as boundary and/or initial conditions. 
If data relating to the scattered wave are known, then it may be possible to determine from these data the underlying scattering potential. The problem of reconstructing the potential from the scattering data is referred to as the inverse scattering problem. The IST is a nonlinear analog of the Fourier transform used for solving linear problems. This useful property allows certain nonlinear problems to be treated by what are essentially linear methods. The IST method has been used for solving many types of evolution equation [Abl-91], [Inf-00], [Kar-98], [Whi-99]. • Lax pairs - A Lax pair consists of the Lax operator \(L\) (which is self-adjoint and may depend upon \(x,\, u_{x},\, u_{xx},\cdots\ ,\) but not explicitly upon \(t\)) and the operator \(A\) that together represent a given partial differential equation such that \(L_{t}=[A,L]=\left(AL-LA\right)\ .\) Note: \(\left(AL-LA\right)\) represents the commutator of the operators \(A\) and \(L\ .\) Operator \(A\) is required to have enough freedom in any unknown parameters or functions to enable the operator \(L_{t}=[A,L]\) to be chosen so that it is of degree zero, i.e. a multiplicative operator. \(L\) and \(A\) can be either scalar or matrix operators. If a suitable Lax pair can be found, the analysis of the nonlinear equation can be reduced to that of two simpler equations. However, the process of finding \(L\) and \(A\) corresponding to a given equation can be quite difficult. Therefore, if a clue(s) is available, inverting the process by first postulating a given \(L\) and \(A\) and then determining which partial differential equation they correspond to, can sometimes lead to good results. However, this may require the determination of many trial pairs and, ultimately, may not lead to the required solution [Abl-91],[Inf-00],[Joh-97],[Pol-07]. • Painlevé test - The Painlevé test is used as a means of predicting whether or not an equation is likely to be integrable.
The test involves checking self-similar reductions of the equation against the six Painlevé equations (or Painlevé transcendents) and, if there is a match, the system is integrable. A nonlinear evolution equation which is solvable by the IST is of Painlevé type, which means that it has no movable singularities other than poles [Abl-91],[Joh-97]. • Self-similar and similarity solutions - An example of a self-similar solution to a nonlinear PDE is a solution where knowledge of \(u(x,t=t_{0})\) is sufficient to obtain \(u(x,t)\) for all \(t>0\ ,\) by suitable rescaling [Bar-03]. In addition, by choosing a suitable similarity transformation(s) it is sometimes possible to find a similarity solution whereby a combination of variables is invariant under the similarity transformation [Fow-05]. Some techniques for obtaining traveling wave solutions The following are examples of techniques that transform PDEs into ODEs which are subsequently solved to obtain traveling wave solutions to the original equations. • Exp-function method - This is a straightforward method that assumes a traveling wave solution of the form \(u\left(x,t\right)=u\left(\eta\right)\) where \(\eta=kx+\omega t\ ,\) \(\omega=\) frequency and \(k=\) wavenumber. This transforms the PDE into an ODE. The method then attempts to find solutions of the form \(u(\eta)=\frac{\sum_{n=-c}^{d}a_{n}\exp\left(n\eta\right)}{\sum_{m=-p}^{q}b_{m}\exp\left(m\eta\right)}\ ,\) where \(c\ ,\) \(d\ ,\) \(p\) and \(q\) are positive integers to be determined, and \(a_{n}\) and \(b_{m}\) are unknown constants [He-06]. • Factorization - This method seeks solutions of PDEs with a polynomial non-linearity by rescaling to eliminate coefficients and assuming a travelling wave solution of the form \(u\left(x,t\right)=U\left(\xi\right)\ ,\) where \(\xi=k\left(x-vt\right)\ ,\) \(v=\) velocity and \(k=\) wavenumber. The resulting ODE is then factorized and each factor solved independently [Cor-05].
• Tanh method - This is a very useful method that is conceptually easy to use and has produced some very good results. Basically, it assumes a travelling wave solution of the form \(u\left(x,t\right)=U\left(\xi\right)\) where \(\xi=k\left(x-vt\right)\ ,\) \(v=\) velocity and \(k=\) wavenumber. This has the effect of transforming the PDE into a set of ODEs which are subsequently solved using the transformation \(Y=\tanh\left(\xi\right)\) [Mal-92],[Mal-96a],[Mal-96b]. Some example applications of these and other methods can be found in [Gri-11]. Nonlinear wave equation solutions A non-exhaustive selection of well known 1D nonlinear wave equations and their closed-form solutions is given below. The closed form solutions are given by way of example only, as nonlinear wave equations often have many possible solutions. • Hopf equation (inviscid Burgers equation): \(u_{t}+uu_{x}=0\) [Pol-02] - Applications: gas dynamics and traffic flow. - Solution\[u=\varphi\left(\xi\right),\;\xi=x-\varphi\left(\xi\right)t.\]   \(u\left(x,t=0\right)=\varphi\left(x\right)\), arbitrary initial condition. • Burgers equation: \(u_{t}+uu_{x}-au_{xx}=0\) [Her-05] - Applications: acoustic and hydrodynamic waves. - Solution\[u(x,t)=2ak\left[1-\tanh k\left(x-Vt\right)\right] .\]   \(k=\) wavenumber,   \(a=\) arbitrary constant. • Fisher: \(u_{t}-u_{xx}-u\left(1-u\right)=0\) [Her-05] - Applications: heat and mass transfer, population dynamics, ecology. - Solution\[u(x,t)=\frac{1}{4}\left\{ 1-\tanh k\left[x-Vt\right]\right\} ^{2}.\]   \(k={\displaystyle \frac{1}{2\sqrt{6}}}\) (wavenumber),   \(V={\displaystyle \frac{5}{\sqrt{6}}}\) (velocity). Note: wavenumber and velocity are fixed values. 
• Sine Gordon equation: \(u_{tt}=au_{xx}+b\sin\left(\lambda u\right)\) [Pol-07] - Applications: various areas of physics - Solution\[u\left(x,t\right)=\left\{ \begin{array}{l} {\displaystyle \frac{4}{\lambda}}\arctan\left[\exp\left(\pm{\displaystyle \frac{b\lambda\left(kx+\mu t+\theta_{0}\right)}{\sqrt{b\lambda\left(\mu^{2}-ak^{2}\right)}}}\right)\right],\quad b\lambda\left(\mu^{2}-ak^{2}\right)>0,\\ \\{\displaystyle \frac{4}{\lambda}}\arctan\left[\exp\left(\pm{\displaystyle \frac{b\lambda\left(kx+\mu t+\theta_{0}\right)}{\sqrt{b\lambda\left(ak^{2}-\mu^{2}\right)}}}\right)\right]-{\displaystyle \frac{\pi}{\lambda}},\quad b\lambda \left(\mu^{2}-ak^{2}\right)<0.\end{array}\right. \]   \(k=\) wavenumber,   \(\mu, \theta_{0}= \) arbitrary constants. • Cubic Schrödinger equation: \(iu_{t}+u_{xx}+q\left|u\right|^{2}u=0\) [Whi-99] - Applications: various areas of physics, non-linear optics, superconductivity, plasma models. - Solution\[u(x,t)=\sqrt{\frac{\alpha}{q}}\textrm{sech}\left(\sqrt{\alpha}\left(x-Vt\right)\right),\quad \alpha>0,q>0 .\]   \(\alpha,q=\) arbitrary constants. • Korteweg-de Vries (a variant)\[u_{t}+uu_{x}+bu_{xxx}=0\] [Her-05] - Applications: various areas of physics, nonlinear mechanics, water waves. - Solution\[u(x,t)=12bk^{2}\textrm{sech}^{2}k\left(x-Vt\right)\]   \(b=\) arbitrary constant. • Boussinesq equation: \(u_{tt}-u_{xx}+3uu_{xx}+\alpha u_{xxxx}=0\) [Abl-91] - Applications: surface water waves - Solution\[{\displaystyle \frac{1}{6}\left\{ 1+8k^{2}-V^{2}\right\} -2k^{2}\tanh^{2}k\left(x+Vt\right)}\] • Nonlinear wave equation of general form: \(u_{tt}=\left[f\left(u\right)u_{x}\right]_{x}\) This equation can be linearized in the general case. 
Some exact solutions are given in [Pol-04, pp. 252-255] and, by way of an example, consider the following special case where \(f\left(u\right)=\alpha e^{\lambda u}\ :\) Wave equation with exponential non-linearity: \(u_{tt}=\left(\alpha e^{\lambda u}u_{x}\right)_{x},\quad\alpha>0.\) [Pol-04, p. 223] - Applications: traveling waves - Solution\[u(x,t)={\displaystyle \frac{1}{\lambda}}\ln\left(\alpha ax^{2}+bx+c\right)-{\displaystyle \frac{2}{\lambda}}\ln\left(\alpha at+d\right).\]   \(\alpha,\lambda,a,b,c,d=\) arbitrary constants. Additional wide-ranging examples of traveling wave equations, with solutions, from the fields of mathematics, physics and engineering are given in Polyanin & Manzhirov [Pol-07] and Polyanin & Zaitsev [Pol-04]. Examples from the biological and medical fields can be found in Murray [Mur-02] and Murray [Mur-03]. A useful on-line resource is the DispersiveWiki [Dis-08]. The Korteweg-de Vries equation The canonical form of the Korteweg-de Vries (KdV) equation is \[\tag{28} \frac{\partial u}{\partial t}-6u\frac{\partial u}{\partial x}+\frac{\partial^{3}u}{\partial x^{3}}=0,\] and is a non-dimensional version of the following equation originally derived by Korteweg and de Vries for a moving (Lagrangian) frame of reference [Jag-06], [Kor-95], \[\tag{29} \frac{\partial\eta}{\partial\tau}=\frac{3}{2}\sqrt{\frac{g}{h_{o}}}\frac{\partial}{\partial\chi}\left[\frac{1}{2}\eta^{2}+\frac{2}{3}\alpha\eta+\frac{1}{3}\sigma\frac{\partial^{2}\eta}{\partial\chi^{2}}\right].\] It is, historically, the most famous solitary wave equation and describes small amplitude, shallow water waves in a channel, where the symbols have the following meaning:   \(g =\) gravitational acceleration (m/s\(^{2}\))   \(h_{o} =\) nominal water depth (m)   \(T =\) capillary surface tension of fluid (N/m)   \(\alpha =\) small arbitrary constant related to the uniform motion of the liquid (dimensionless)   \(\eta =\) wave height (m)   \(\rho =\) fluid density (kg/m\(^{3}\))   \(\tau =\) time
(s)   \(\chi =\) distance (m) After re-scaling and translating the dependent and independent variables to eliminate the physical constants using the transformations [Abl-91], \[\tag{30} u=-\frac{1}{2}\eta-\frac{1}{3}\alpha;\quad x=-\frac{\chi}{\sqrt{\sigma}};\quad t=\frac{1}{2}\sqrt{\frac{g}{h_{o}\sigma}}\tau\] where \(\sigma=h_{o}^{3}/3-Th_{o}/\left(\rho g\right)\ ,\) and \(Th_{o}/\left(\rho g\right)\) is called the Bond number (a measure of the relative strengths of surface tension and gravitational force), we arrive at the Korteweg-de Vries equation, i.e. equation (28). The basic assumptions for the derivation of KdV waves in liquid, having wavelength \(\lambda\ ,\) are [Abl-91]: • the waves are long waves in comparison with total depth, \({\displaystyle \frac{h_{o}}{\lambda}}\ll1\ ;\) • the amplitude of the waves is small, \(\varepsilon={\displaystyle \frac{\eta}{h_{o}}}\ll1\ ;\) • the first two effects approximately balance, i.e. \({\displaystyle \frac{h_{o}}{\lambda}}=\mathcal{O}\left(\varepsilon\right)\ ;\) • viscous effects can be neglected. The KdV equation was found to have solitary wave solutions [Lam-93], which confirmed John Scott-Russell's account of the solitary wave phenomenon [Sco-44] discovered during his experimental investigations into water flow in channels to determine the most efficient design for canal boats [Jag-06]. Subsequently, the KdV equation has been shown to model various other nonlinear wave phenomena found in the physical sciences. John Scott-Russell, a Scottish engineer and naval architect, also described in poetic terms his first encounter with the solitary wave phenomenon. An experimental apparatus for re-creating the phenomenon observed by Scott-Russell has been built at Heriot-Watt University. Scott-Russell also coined the term solitary wave and conducted some of the first experiments to investigate another nonlinear wave phenomenon, the Doppler effect, publishing an independent explanation of the theory in 1848 [Sco-48].
It is interesting to note that a KdV solitary wave in water that experiences a change in depth will retain its general shape. However, on encountering shallower water its velocity and height will increase and its width decrease; whereas, on encountering deeper water its velocity and height will decrease and its width increase [Joh-97, pp 268-277]. A closed form single soliton solution to the KdV equation (28) can be found using direct integration as follows. Assume a travelling wave solution of the form \[\tag{31} u(x,t)=f(x-vt)=f(\xi).\] Then on substituting into the canonical equation the PDE is transformed into the following ODE \[\tag{32} -v\frac{df(\xi)}{d\xi}-6f\frac{df(\xi)}{d\xi}+\frac{d^{3}f(\xi)}{d\xi^{3}}=0.\] Now integrate with respect to \(\xi\) and multiply by \({\displaystyle \frac{df(\xi)}{d\xi}}\) to obtain \[\tag{33} -vf(\xi)\frac{df(\xi)}{d\xi}-3f(\xi)^{2}\frac{df(\xi)}{d\xi}+\frac{df(\xi)}{d\xi}\left(\frac{d^{2}f(\xi)}{d\xi^{2}}\right)=A\frac{df(\xi)}{d\xi}.\] Now integrate with respect to \(\xi\) once more, to obtain \[\tag{34} -\frac{1}{2}vf(\xi)^{2}-f(\xi)^{3}+\frac{1}{2}\left(\frac{df(\xi)}{d\xi}\right)^{2}=Af(\xi)+B,\] where \(A\) and \(B\) are arbitrary constants of integration which we set to zero. We justify this by assuming that we are modeling a physical system with properties such that \(f,f^{\prime}\) and \(f^{\prime\prime}\rightarrow0\) as \(\xi\rightarrow\pm\infty\ .\) After rearranging and evaluating the resulting integral, we find \[\tag{35} f\left(\xi\right)=-\frac{v}{2}\textrm{sech}^{2}\left(\frac{\sqrt{v}}{2}\xi\right).\] The solution is therefore \[\tag{36} u(x,t) = f(x-vt),\] \[\tag{37} \quad= -2k^{2}\textrm{sech}^{2}\left(k\left[x-vt-x_{0}\right]\right),\] where \(k={\displaystyle \frac{\sqrt{v}}{2}}\) represents wavenumber and the constant \(x_{0}\) has been included to locate the wave peak at \(t=0\ .\) Note that, for the \(u_{t}-6uu_{x}+u_{xxx}=0\) sign convention of equation (28), the soliton is a wave of depression in \(u\ ;\) by equation (30) the corresponding physical wave height \(\eta\) is positive. Thus, we observe that the wave travels to the right with a speed that is equal to twice the magnitude of the peak amplitude.
Hence, the taller a wave the faster it travels. The KdV equation also admits many other solutions including multiple soliton solutions, see figure (15), and cnoidal (periodic) solutions. Solutions of the KdV equation can be systematically obtained from solutions \(\psi_{i}\) of the free particle Schrödinger equation \[\tag{38} -\left(\frac{\partial^{2}}{\partial x^{2}}\psi_{i}\right)=E_{i}\psi_{i},\quad i=1,\cdots,n\] using the relationship \[\tag{39} u\left(x,t\right)=2\left(\frac{\partial^{2}}{\partial x^{2}}\ln\left(W_{n}\right)\right),\] where we use the Wronskian function \[\tag{40} W_{n}=W_{n}\left[\psi_{1},\psi_{2},\cdots,\psi_{n}\right].\] The Wronskian is the determinant of an \(n\times n\) matrix [Dra-89] composed from the functions \(\psi_{i}(\xi_{i})\ ,\) where \(\xi_{i}\) for our purposes is given by \[\tag{41} \xi_{i} = k_{i}\left(x-v_{i}t\right),\quad E_{i}<0,\] \[\tag{42} \xi_{i} = k_{i}\left(x+v_{i}t\right),\quad E_{i}>0.\] For example, a two-soliton solution is given by \[\tag{43} u(x,t)=\frac{\left(k_{1}^{2}-k_{2}^{2}\right)\left\{ 2k_{2}^{2}\textrm{csch}\, k_{2}\left(x-v_{2}t\right)+2k_{1}^{2}\textrm{sech}\, k_{1}\left(x-v_{1}t\right)\right\} }{\left[k_{1}\tanh k_{1}\left(x-v_{1}t\right)+k_{2}\coth k_{2}\left(x-v_{2}t\right)\right]^{2}}\] and a cnoidal wave solution is given by \[\tag{44} u(x,t)=\frac{1}{6k}\left(4k^{2}(2m-1)-vk\right)-2k^{2}\textrm{cn}^{2}\left(kx-vkt+x_{0};m\right),\] where 'cn' represents the Jacobi elliptic cosine function with modulus \(m\ \left(0<m<1\right)\ .\) Note: as \(m\rightarrow1\) the periodic solution tends to a single soliton solution. Interestingly, the KdV equation is invariant under a Galilean transformation, i.e. its properties remain unchanged - see section (Galilean invariance).
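The single-soliton solution obtained above by direct integration can be verified numerically. The sketch below (plain Python; the helper names are our own) evaluates the residual of the canonical KdV equation (28) by central finite differences; for the \(u_{t}-6uu_{x}+u_{xxx}=0\) sign convention the \(\textrm{sech}^{2}\) profile carries amplitude \(-2k^{2}\) and travels with speed \(v=4k^{2}\ :\)

```python
import math

def sech(z):
    return 1.0 / math.cosh(z)

def u(x, t, k=0.7, x0=0.0):
    # single-soliton profile for u_t - 6 u u_x + u_xxx = 0,
    # i.e. equation (37) with speed v = 4 k^2
    v = 4.0 * k * k
    return -2.0 * k * k * sech(k * (x - v * t - x0)) ** 2

def kdv_residual(x, t, h=1e-3):
    # residual of u_t - 6 u u_x + u_xxx, all derivatives approximated
    # by central finite differences of step h
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xxx = (u(x + 2 * h, t) - 2 * u(x + h, t)
             + 2 * u(x - h, t) - u(x - 2 * h, t)) / (2 * h ** 3)
    return u_t - 6.0 * u(x, t) * u_x + u_xxx
```

The residual vanishes to finite-difference accuracy along the whole profile; flipping the sign of the amplitude produces a non-vanishing residual, which is a quick way to check the sign convention of a given form of the KdV equation.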
Numerical solution methods Linear and nonlinear evolutionary wave problems can very often be solved by application of general numerical techniques such as: finite difference, finite volume, finite element, spectral, least squares, and weighted residual (e.g. collocation and Galerkin) methods. These methods, which can all handle various boundary conditions and stiff problems, and may involve explicit or implicit calculations, are well documented in the literature and will not be discussed further here. For general texts refer to [Bur-93],[Sch-94],[Sch-09], and for more detailed discussion refer to [Lev-02],[Mor-94],[Zie-77]. Some wave problems do, however, present significant problems when attempting to find a numerical solution. In particular we highlight problems that include shocks, sharp fronts or large gradients in their solutions. Because these problems often involve inviscid conditions (zero or vanishingly small viscosity), it is often only practical to obtain weak solutions. Some PDE problems do not have a mathematically rigorous solution, for example where discontinuities or jump conditions are present in the solution and/or characteristics intersect. Such problems are likely to occur when there is a hyperbolic (strongly convective) component present. In these situations weak solutions provide useful information. Detailed discussion of this approach is beyond the scope of this article and readers are referred to [Wes-01, chapters 9 and 10] for further discussion. General methods are often not adequate for accurate resolution of steep gradient phenomena; they usually introduce non-physical effects such as smearing of the solution or spurious oscillations. Since publication of Godunov's order barrier theorem, which proved that monotone (non-oscillatory) linear schemes can be at most first-order accurate [God-54],[God-59], these difficulties have attracted a lot of attention and a number of techniques have been developed that largely overcome these problems.
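One key ingredient of such techniques is slope limiting. The following minimal sketch (plain Python; the function names are our own) shows a MUSCL-type piecewise-linear reconstruction of the interface states on a uniform grid, with the classical minmod limiter: at a discontinuity the limited slope drops to zero, so the reconstruction introduces no new extrema and hence no spurious oscillations:

```python
def minmod(a, b):
    # minmod slope limiter: picks the smaller-magnitude slope,
    # or zero at an extremum (where the slopes disagree in sign)
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_interface_states(u, j):
    # piecewise-linear reconstruction of the left/right states at
    # interface j+1/2 of a uniform grid (u is a list of cell averages)
    s_j = minmod(u[j] - u[j - 1], u[j + 1] - u[j])
    s_j1 = minmod(u[j + 1] - u[j], u[j + 2] - u[j + 1])
    uL = u[j] + 0.5 * s_j        # left state at j+1/2
    uR = u[j + 1] - 0.5 * s_j1   # right state at j+1/2
    return uL, uR
```

On smooth data the reconstruction is second-order (the interface states agree), while across a step the limiter reduces locally to first order rather than overshoot.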
To avoid spurious or non-physical oscillations where shocks are present, schemes that exhibit a total variation diminishing (TVD) characteristic are especially attractive. Two techniques that are proving to be particularly effective are MUSCL (Monotone Upstream-Centred Schemes for Conservation Laws), a flux/slope limiter method [van-79],[Hir-90],[Tan-97],[Lan-98],[Tor-99], and the WENO (Weighted Essentially Non-Oscillatory) method [Shu-98],[Shu-09]. MUSCL methods are usually referred to as high resolution schemes and are generally second-order accurate in smooth regions (although they can be formulated for higher orders) and provide good resolution, monotonic solutions around discontinuities. They are straightforward to implement and are computationally efficient. For problems comprising both shocks and complex smooth solution structure, WENO schemes can provide higher accuracy than second-order schemes along with good resolution around discontinuities. Most applications tend to use a fifth order accurate WENO scheme, whilst higher order schemes can be used where the problem demands improved accuracy in smooth regions. Initial conditions and boundary conditions Consider the classic 1D linear wave equation \[\tag{45} \frac{1}{c^{2}}\dfrac{\partial^{2}u}{\partial t^{2}}=\dfrac{\partial^{2}u}{\partial x^{2}},\] where \(c\) is the wave propagation speed. In order to obtain a solution we must first specify some auxiliary conditions to complete the statement of the PDE problem. The number of required auxiliary conditions is determined by the highest order derivative in each independent variable. Since equation (45) is second order in \(t\) and second order in \(x\ ,\) it requires two auxiliary conditions in \(t\) and two auxiliary conditions in \(x\ .\) To have a complete well posed problem, some additional conditions may have to be included - refer to section (Wellposedness). The variable \(t\) is termed an initial value variable and therefore requires two initial conditions (ICs).
It is an initial value variable since it starts at an initial value, \(t_{0}\ ,\) and moves forward over a finite interval \(t_{0}\leq t\leq t_{f}\) or a semi-infinite interval \(t_{0}\leq t\leq\infty\) without any additional conditions being imposed. Typically in a PDE application, the initial value variable is time, as in the case of equation (45). The variable \(x\) is termed a boundary value variable and therefore requires two boundary conditions (BCs). It is a boundary value variable since it varies over a finite interval \(x_{0}\leq x\leq x_{f}\ ,\) a semi-infinite interval \(x_{0}\leq x\leq\infty\) or a fully infinite interval \(-\infty\leq x\leq\infty\ ,\) and at two different values of \(x\), conditions are imposed on \(u\) in equation (45). Typically, the two values of \(x\) correspond to boundaries of a physical system, and hence the name boundary conditions. BCs can be of three types: • Dirichlet or first type - the boundary has a value \(u(x=x_{0},t)=u^{b}\left(t\right)\ .\) • Neumann or second type - the spatial gradient at the boundary has a value \(\dfrac{\partial u(x=x_{f},t)}{\partial x}=u_{x}^{b}\left(t\right)\ ,\) and for multi-dimensions it is normal to the boundary. • Robin or third type - both the dependent variable and its spatial derivative appear in the BC, i.e. a combination of Dirichlet and Neumann. An important consideration is the possibility of discontinuities at the boundaries, produced for example by differences in initial and boundary conditions at the boundaries, which can cause computational difficulties, such as shocks - see section (Shock waves), particularly for hyperbolic PDEs such as equation (45) above. Numerical dissipation and dispersion Dissipation and dispersion can also be introduced when PDEs are discretized in the process of seeking a numerical solution. This introduces numerical errors. 
The accuracy of a discretization scheme can be determined by comparing the numeric amplification factor \(G_{numeric}\) with the analytical or exact amplification factor \(G_{exact}\) over one time step. For further reading refer to [Hir-88, chap. 8], [Lig-78, chap. 3], [Tan-97, chap. 4], [Wes-01, chap 8 and 9]. Dispersion relation Physical waves that propagate in a particular medium will, in general, exhibit a specific group velocity as well as a specific phase velocity - see section (Group and phase velocity). This is because within a particular medium there is a fixed relationship between the wavenumber \(k\ ,\) and the frequency \(\omega\ ,\) of waves. Thus, frequency and wavenumber are not independent quantities and are related by a functional relationship, known as the dispersion relation, \(\omega(k)\ .\) We will demonstrate the process of obtaining the dispersion relation by example, using the advection equation \[\tag{46} u_{t}+au_{x}=0.\] Generally, each wavenumber \(k\) corresponds to \(s\) frequencies, where \(s\) is the order of the PDE with respect to \(t\ .\) Now any linear PDE with constant coefficients admits a solution of the form \[\tag{47} u\left(x,t\right)=u_{0}e^{i\left(kx-\omega t\right)}.\] Because we are considering a linear system, the principle of superposition applies and equation (47) can be considered to be a frequency component or harmonic of the Fourier series representation of a specific solution to the advection equation. On inserting this solution into a PDE we obtain the so-called dispersion relation between \(\omega\) and \(k\ ,\) i.e., \[\tag{48} \omega=\omega\left(k\right),\] and each PDE will have its own distinct form.
For example, we obtain the specific dispersion relation for the advection equation by substituting equation (47) into equation (46) to get \[ -i\omega u_{0}e^{i\left(kx-\omega t\right)} = -iaku_{0}e^{i\left(kx-\omega t\right)}\] \[\Downarrow \] \[\tag{49} \omega = ak.\] This confirms that \(\omega\) and \(k\) cannot be determined independently for the advection equation, and therefore equation (47) becomes \[\tag{50} u\left(x,t\right)=u_{0}e^{ik\left(x-at\right)}.\] Note: If the imaginary part of \(\omega\left(k\right)\) is zero, then the system is non-dissipative. The physical meaning of equation (50) is that the initial value \(u\left(x,0\right)=u_{0}e^{ikx}\ ,\) is propagated from left to right, unchanged, at velocity \(a\ .\) Thus, there is no dissipation or attenuation and no dispersion. A similar approach can be used to establish the dispersion relation for systems described by other forms of PDEs. Amplification factor As mentioned above, the accuracy of a numerical scheme can be determined by comparing the numeric amplification factor \(G_{numeric},\) with the exact amplification factor \(G_{exact}\ ,\) over one time step. The exact amplification factor can be determined by considering the change that takes place in the exact solution over a single time-step. 
For example, taking the advection equation (46) and assuming a solution of the form \(u\left(x,t\right)=u_{0}e^{ik\left(x-at\right)}\ ,\) we have \[ G_{exact} = \frac{u\left(x,t+\Delta t\right)}{u\left(x,t\right)}=\frac{u_{0}e^{ik\left(x-a\left(t+\Delta t\right)\right)}}{u_{0}e^{ik\left(x-at\right)}}.\] \[\tag{51} \therefore G_{exact} = e^{-iak\Delta t}.\] We can also represent equation (51) in the form \[\tag{52} G_{exact}=\left|G_{exact}\right|e^{i\Phi_{exact}},\] where \[\tag{53} \Phi_{exact}=\angle G=\tan^{-1}\left(\frac{\textrm{Im}\left\{ G\right\} }{\textrm{Re}\left\{ G\right\} }\right).\] Thus, for this case \[\tag{54} \left|G_{exact}\right| = 1\] and \[\tag{55} \Phi_{exact} = \tan^{-1}\left(\tan\left(-ak\Delta t\right)\right)=-ak\Delta t.\] The amplification factor provides an indication of how the solution will evolve because values of \(\left|\Phi\right|\rightarrow0\) are associated with low frequencies and values of \(\left|\Phi\right|\rightarrow\pi\) are associated with high frequencies. Also, because phase shift is associated with the imaginary part of \( G_{exact}\ ,\) if \(\Im\left\{ G_{exact}\right\} =0\ ,\) the system does not exhibit any phase shift and is purely dissipative. Conversely, if \(\left|G_{exact}\right|=1\ ,\) the system does not exhibit any amplitude attenuation and is purely dispersive. The numerical amplification factor \(G_{numeric}\) is calculated in the same way, except that the appropriate numerical approximation is used for \(u(x,t)\ .\) For stability of the numerical solution, \(\left|G_{numeric}\right|\leq1\) must hold for all frequencies. Numerical dissipation Figure 1: Illustration of pure numeric dissipation effect on a single sinusoid, as it propagates along the spatial domain. Both exact and simulated dissipative waves begin with the same amplitude; however, the amplitude of the dissipative wave decreases over time, but stays in phase.
Figure 2: Effect of numerical dissipation on a step function applied to the advection equation \(u_{t}+u_{x}=0\ .\) In a numerical scheme, a situation where waves of different frequencies are damped by different amounts is called numerical dissipation - see figure (1). Generally, this results in the higher frequency components being damped more than lower frequency components. The effect of dissipation therefore is that sharp gradients, discontinuities or shocks in the solution tend to be smeared out, thus losing resolution - see figure (2). Fortunately, in recent years, various high resolution schemes have been developed to obviate this effect and enable shocks to be captured with a high degree of accuracy, albeit at the expense of complexity. Examples of particularly effective schemes are based upon flux/slope limiters [Wes-01] and WENO methods [Shu-98]. Dissipation can be introduced by numerical discretization of a partial differential equation that models a non-dissipative process. Generally, dissipation improves stability and, in some numerical schemes, it is introduced deliberately to aid stability of the resulting solution. Dissipation, whether real or numerically induced, tends to cause waves to lose energy. The dissipation error as a result of discretization can be determined by comparing the magnitude of the numeric amplification factor \(\left|G_{numeric}\right|\) with the magnitude of the exact amplification factor \(\left|G_{exact}\right|\) over one time step. The relative numerical diffusion error or relative numerical dissipation error compares real physical dissipation with the anomalous dissipation that results from numerical discretization.
It can be defined as \[\tag{56} \varepsilon_{D}=\frac{\left|G_{numeric}\right|}{\left|G_{exact}\right|},\] and the total dissipation error resulting from \(n\) steps will be \[\tag{57} \varepsilon_{Dtotal}=\left(\left|G_{numeric}\right|^{n}-\left|G_{exact}\right|^{n}\right)u_{0}.\] If \(\varepsilon_{D}>1\) for any value of the phase angle \(\theta=k\Delta x\) at the chosen Courant number Co, the discretization scheme will be unstable and a modification to the scheme will be necessary. As mentioned above, if the imaginary part of \(\omega\left(k\right)\) is zero for a particular discretization, then the scheme is non-dissipative. Numerical dispersion Figure 3: Illustration of pure numeric dispersion effect on a single sinusoid, as it propagates along the spatial domain. Both exact and simulated dispersive waves start in phase; however, the phase of the dispersive wave lags the exact wave over time, but its amplitude is unaffected. Figure 4: Effect of numerical dispersion on a step function applied to the advection equation \(u_{t}+u_{x}=0\ .\) In a numerical scheme, a situation where waves of different frequencies move at different speeds without a change in amplitude is called numerical dispersion - see figure (3). Alternatively, the Fourier components of a wave can be considered to disperse relative to each other. It therefore follows that the effect of a dispersive scheme on a wave composed of different harmonics will be to deform the wave as it propagates. However, the energy contained within the wave is not lost and travels with the group velocity. Generally, this results in higher frequency components traveling at slower speeds than the lower frequency components. The effect of dispersion therefore is that spurious oscillations or wiggles often occur in solutions with sharp gradient, discontinuity or shock effects, usually with high frequency oscillations trailing the particular effect - see figure (4).
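These error measures are easy to compute for a concrete scheme. The sketch below (plain Python; the first-order upwind discretization of the advection equation (46) is our illustrative choice, not one singled out in the text) compares the numerical amplification factor \(G_{numeric}=1-\textrm{Co}\left(1-e^{-i\theta}\right)\) with the exact factor of equation (51), where \(\theta=k\Delta x\) and Co \(=a\Delta t/\Delta x\) is the Courant number; the magnitude ratio is the dissipation error of equation (56), and the ratio of the phases (each computed as in equation (53)) gives the corresponding phase error:

```python
import cmath
import math

def upwind_G(theta, Co):
    # amplification factor of the first-order upwind scheme for
    # u_t + a u_x = 0 (scheme chosen purely for illustration)
    return 1.0 - Co * (1.0 - cmath.exp(-1j * theta))

def exact_G(theta, Co):
    # equation (51): G_exact = exp(-i a k dt) = exp(-i Co theta)
    return cmath.exp(-1j * Co * theta)

def dissipation_error(theta, Co):
    # equation (56): ratio of magnitudes (|G_exact| = 1 for pure advection)
    return abs(upwind_G(theta, Co)) / abs(exact_G(theta, Co))

def phase_error(theta, Co):
    # ratio of phases, each computed as in equation (53); theta != 0
    return cmath.phase(upwind_G(theta, Co)) / cmath.phase(exact_G(theta, Co))
```

For \(0<\) Co \(<1\) the scheme damps the high-frequency components (\(\varepsilon_{D}<1\)), i.e. it is dissipative but stable; at Co \(=1\) it reproduces the exact solution and both error measures equal one.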
The degree of dispersion can be determined by comparing the phase of the numeric amplification factor \(G_{numeric}\) with the phase of the exact amplification factor \(G_{exact}\ ,\) over one time step. Dispersion represents phase shift and results from the imaginary part of the amplification factor. The relative numerical dispersion error compares real physical dispersion with the anomalous dispersion that results from numerical discretization. It can be defined as \[\tag{58} \varepsilon_{P}=\frac{\Phi_{numeric}}{\Phi_{exact}},\] where \(\Phi=\angle G=\tan^{-1}\left(\frac{\textrm{Im}\left\{ G\right\} }{\textrm{Re}\left\{ G\right\} }\right)\ .\) The total phase error resulting from \(n\) steps will be \[\tag{59} \varepsilon_{Ptotal}=n\left(\Phi_{numeric}-\Phi_{exact}\right).\] If \(\varepsilon_{P}>1\ ,\) this is termed a leading phase error. This means that the Fourier component of the solution has a wave speed greater than the exact solution. Similarly, if \(\varepsilon_{P}<1\ ,\) this is termed a lagging phase error. This means that the Fourier component of the solution has a wave speed less than the exact solution. Again, high resolution schemes can all but eliminate this effect, but at the expense of complexity. Although many physical processes are modeled by PDEs that are non-dispersive, when numerical discretization is applied to analyze them, some dispersion is usually introduced. Group and phase velocity The term group velocity refers to a wave packet consisting of a low frequency signal modulated (or multiplied) by a higher frequency wave. The result is a low frequency wave, consisting of a fundamental plus harmonics, that propagates with group velocity \(c_{g}\) along a continuum oscillating at a higher frequency.
Wave energy and information signals propagate at this velocity, which is defined as being equal to the derivative of the real part of the frequency \(\omega\ ,\) with respect to wavenumber \(k\) (a scalar or vector proportional to the number of wavelengths per unit distance), i.e. \[\tag{60} c_{g}=\frac{d\,\textrm{Re}\left\{ \omega\left(k\right)\right\} }{dk}.\] If there are a number of spatial dimensions, then the group velocity is equal to the gradient of frequency with respect to the wavenumber vector, i.e. \(c_{g}=\nabla\textrm{Re}\left\{ \omega\left(k\right)\right\} \ .\) The complementary term to group velocity is phase velocity, \(c_{p}\ ,\) and this refers to the speed of propagation of an individual frequency component of the wave. It is defined as being equal to the real part of the ratio of frequency to wavenumber, i.e. \[\tag{61} c_{p}=\textrm{Re}\left\{ \frac{\omega}{k}\right\} .\] It can also be viewed as the speed at which a particular phase of a wave propagates; for example, the speed of propagation of a wave crest. In one wave period \(T\) the crest advances one wavelength \(\lambda\ ;\) therefore, the phase velocity is also given by \(c_{p}=\lambda/T\ .\) We see that this second form is equal to equation (61) due to the following relationships: wavenumber \(k=\frac{2\pi}{\lambda}\) and frequency \(\omega=2\pi f\ ,\) where \(f=\frac{1}{T}\ .\) For a non-dispersive wave the phase error is zero and therefore \(c_{g}=c_{p}\ .\) To calculate group and phase velocity for linear waves (or small amplitude waves) we assume a solution of the form \(u(x,t)=Ae^{i(kx-\omega t)}\ ,\) where \(A\) is a constant and \(x\) can be a scalar or vector, and substitute into the wave equation (or linearized wave equation) under consideration. For example, for \(u_{t}+u_{x}+u_{xxx}=0\) we obtain the dispersion relation \(\omega=k-k^{3}\ ,\) from which we calculate the group and phase velocities to be \(c_{g}=1-3k^{2}\) and \(c_{p}=1-k^{2}\) respectively.
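The last calculation can be reproduced symbolically; the sketch below (added for illustration) substitutes the plane-wave ansatz into \(u_{t}+u_{x}+u_{xxx}=0\) and recovers \(\omega=k-k^{3}\ ,\) \(c_{g}=1-3k^{2}\) and \(c_{p}=1-k^{2}\ :\)

```python
import sympy as sp

k, w = sp.symbols('k omega', real=True)
x, t = sp.symbols('x t', real=True)

# Substitute the plane-wave ansatz u = exp(i(k x - w t)) into
# u_t + u_x + u_xxx = 0 and solve for the dispersion relation w(k).
u = sp.exp(sp.I * (k * x - w * t))
pde = sp.diff(u, t) + sp.diff(u, x) + sp.diff(u, x, 3)
dispersion = sp.solve(sp.simplify(pde / u), w)[0]   # w = k - k**3

c_p = sp.simplify(dispersion / k)   # phase velocity: 1 - k**2
c_g = sp.diff(dispersion, k)        # group velocity: 1 - 3*k**2
print(dispersion, c_p, c_g)
```

The same three lines of algebra apply to any linear wave equation: replace `pde` with the equation of interest and the dispersion relation, \(c_{p}\) and \(c_{g}\) follow.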
Thus, we observe that \(c_{g}\neq c_{p}\) and that, therefore, this example is dispersive. For most practical situations our interest is primarily in solving partial differential equations numerically and, before we embark on implementing a numerical procedure, we would usually like to have some idea as to the expected behaviour of the system being modeled, ideally from an analytical solution. However, an analytical solution is not usually available; otherwise we would not need a numerical solution. Nevertheless, we can usually carry out some basic analysis that may give some idea as to the steady state, long term trend, bounds on key variables, reduced order solutions for ideal or special conditions, etc. One key estimate that we would like to have is whether the fundamental system is stable or well posed. This is particularly important because, if our numerical solution produces seemingly unstable results, we need to know whether this is fundamental to the problem or whether it has been introduced by the solution method we have selected. For most situations involving simulation this is not a concern, as we would be dealing with a well analyzed and documented system. But there are situations where real physical systems can be unstable, and we need to know about these in advance. For a real system to become unstable there needs to be some form of energy source: kinetic, potential, reaction, etc., so this can provide a clue as to whether or not the system is likely to become unstable. If it is, then we may need to modify our computational approach so that we capture the essential behaviour correctly - although a complete solution may not be possible. In general, solutions to PDE problems are sought to solve a particular problem or to provide insight into a class of problems. To this end, existence, uniqueness and stability of the solution are of vital importance [Zwi-97, chapter 10].
Whilst at this introductory level we must restrict our discussion, it is desirable to emphasize that for a solution of an evolutionary PDE (together with appropriate ICs and BCs) to be useful we require that:
• A unique solution must exist. The question as to whether or not a solution actually exists can be rather complex, and an answer can be sought for analytic PDEs by application of the Cauchy-Kowalewsky theorem [Cou-62, pp39-56].
• The solution must be numerically stable if we are to be able to predict its evolution over time. If the physical system is actually unstable, then prediction may not be possible.
• The solution must depend continuously on data such as boundary/initial conditions, forcing functions, domain geometry, etc.
If these conditions are fulfilled, then the problem is said to be well posed, in the sense of Hadamard [Had-23]. Numerical schemes for particular PDE systems can be analyzed mathematically to determine if their solutions remain bounded. By invoking Parseval's equality this analysis can be performed in the time domain or in the Fourier domain. A good introduction to this subject is given by LeVeque [Lev-07], and more advanced technical discussions can be found in the monographs by Tao [Tao-05] and Kreiss & Lorenz [Kre-04].

Characteristics are surfaces in the solution space of an evolutionary PDE problem that represent wave-fronts upon which information propagates. For example, consider the 1D advection equation problem \[\tag{62} u_{t}+cu_{x}=0,\quad u\left(x,t=0\right)=u_{0},\; t\geq0,\] where the characteristics are given by \(dx/dt=c\ .\) For this problem the characteristics are straight lines in the \(xt\)-plane with slope \(1/c\ ,\) along which the dependent variable \(u\) is constant.
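As an added illustration of this property, the sketch below traces characteristics for the right-moving advection problem \(u_{t}+cu_{x}=0\) (the value of \(c\) and the initial profile are arbitrary choices): each characteristic is the straight line \(x=x_{0}+ct\) and carries its initial value unchanged, so the values at the feet of the characteristics reproduce the exact translated solution \(u\left(x,t\right)=u_{0}\left(x-ct\right)\ :\)

```python
import numpy as np

c = 1.5                       # constant wave speed (illustrative value)
u0 = lambda x: np.exp(-x**2)  # smooth initial profile (illustrative choice)

# Each characteristic starts at (x0, 0) and is the straight line x = x0 + c*t;
# the solution is constant along it, so u(x, t) = u0(x - c*t).
x0 = np.linspace(-5.0, 5.0, 201)
t_final = 2.0
x_char = x0 + c * t_final     # feet of the characteristics at t = t_final
u_char = u0(x0)               # values carried unchanged along each line

# Compare with the exact translated solution evaluated at the same points.
u_exact = u0(x_char - c * t_final)
print(np.max(np.abs(u_char - u_exact)))   # agrees to machine precision
```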
The consequence of this is that the initial condition propagates from left to right at constant speed \(c\ .\) But, for other situations such as the inviscid Burgers equation problem, \[\tag{63} u_{t}+uu_{x}=0,\quad u\left(x,t=0\right)=u_{0},\; t\geq0,\] the propagation speed is not constant and the shape of the characteristics depends upon the initial conditions. If the initial condition is monotonically increasing with \(x\ ,\) the characteristics will not overlap and the problem is well behaved. However, if the initial condition is not monotonically increasing with \(x\ ,\) at some time \(t>0\) the characteristics will overlap, the solution will become multi-valued and a shock will develop. In this situation we can only find a weak solution (one where the problem is re-stated in integral form) by appealing to entropy considerations and the Rankine-Hugoniot jump condition. PDEs other than equations (62) and (63), such as those involving conservation laws, introduce additional complexity such as rarefaction or expansion waves. We will not discuss these aspects further here; for additional discussion readers are referred to [Hir-90, chap. 16].

The method of characteristics

The method of characteristics (MOC) is a numerical method for solving evolutionary PDE problems by transforming them into a set of ODEs. The ODEs are solved along particular characteristics, using standard methods and the initial and boundary conditions of the problem. For more information refer to [Kno-00],[Ost-94],[Pol-07]. MOC is a quite general technique for solving PDE problems and has been particularly popular in the area of fluid dynamics for solving incompressible transient flow in pipelines. For an introduction refer to [Stre-97, chap. 12].

General topics

We conclude with a brief overview of some general aspects relating to linear and nonlinear waves.

Galilean invariance

Certain wave equations are Galilean invariant, i.e.
the equation properties remain unchanged under a Galilean transformation. For example:
• A Galilean transformation for the linear wave equation (4) is \[\tag{64} \tilde{u}=Au\left(\pm\lambda x+C_{1},\pm\lambda t+C_{2}\right),\] where \(A\ ,\) \(C_{1}\ ,\) \(C_{2}\) and \(\lambda\) are arbitrary constants.
• A Galilean transformation for the nonlinear KdV equation (28) is \[\tag{65} \tilde{u}=u\left(x-6\lambda t,t\right)-\lambda ,\] where \(\lambda\) is an arbitrary constant.
Other invariant transformations are possible for many linear and nonlinear wave equations, for example the Lorentz transformation applied to Maxwell's equations, but these will not be discussed here.

Plane waves

Figure 5: Plane sinusoidal wave, where its source is assumed to be at \( x = -\infty\ ,\) and its fronts are advancing from right to left.

A plane wave is considered to exist far from its source and any physical boundaries so, effectively, it is located within an infinite domain. Its position vector remains perpendicular to a given plane, and it satisfies the 1D wave equation \[\tag{66} \frac{1}{c^2}\frac{\partial^{2}u}{\partial t^{2}}=\frac{\partial^{2}u}{\partial x^{2}}\] with a solution of the form \[\tag{67} u=u_{0}\cos\left(\omega t-kx+\phi\right),\] where \(c=\frac{\omega}{k}\) represents the propagation velocity and \(\phi\) the phase of the wave. See figure (5).

Refraction and diffraction

Wave crests do not necessarily travel in a straight line as they proceed - this may be caused by refraction or diffraction. Wave refraction is caused by segments of the wave moving at different speeds resulting from local changes in characteristic speed, usually due to a change in medium properties. Physically, the effect is that the overall direction of the wave changes, its wavelength either increases or decreases, but its frequency remains unchanged. For example, in optics refraction is governed by Snell's law and in shallow water waves by the depth of water.
Wave diffraction is the effect whereby the direction of a wave changes as it interacts with objects in its path. The effect is greatest when the size of the object causing the wave to diffract is similar to the wavelength.

Reflection results from a change of wave direction following a collision with a reflective surface or domain boundary. A hard boundary is one that is fixed, which causes the wave to be reflected with opposite polarity, e.g. \(u(x-vt)\;\rightarrow\;-u(x+vt)\ .\) A soft boundary is one that changes on contact with the wave, which causes the wave to be reflected with the same polarity, e.g. \(u(x-vt)\;\rightarrow\; u(x+vt)\ .\) If the propagating medium is not homogeneous, i.e. it is not spatially uniform, then a partial reflection can result, with an attenuated original wave continuing to propagate. The polarity of the partial reflection will depend upon the characteristics of the medium. Consider a travelling wave situation where the domain has a soft boundary with incident wave \(\phi_{I}=I\exp\left(i\left(\omega t-k_{1}x\right)\right)\ ,\) reflected wave \(\phi_{R}=R\exp\left(i\left(\omega t+k_{1}x\right)\right)\) and transmitted wave \(\phi_{T}=T\exp\left(i\left(\omega t-k_{2}x\right)\right)\ .\) In addition, for simplicity, consider the medium on both sides of the boundary to be isotropic and non-dispersive, which implies that all three waves will have the same frequency.
From continuity of the wave at the boundary (taken at \(x=0\)) we have \(\phi_{I}+\phi_{R}=\phi_{T}\) for all \(t\ ,\) which implies \(I+R=T.\) Also, on differentiating with respect to \(x\ ,\) we obtain \(-ik_{1}I+ik_{1}R=-ik_{2}T\ .\) Thus, on rearranging we have \[\tag{68} \frac{T}{I} = \frac{2k_{1}}{k_{1}+k_{2}},\] \[\tag{69} \frac{R}{I} = \frac{k_{1}-k_{2}}{k_{1}+k_{2}}.\] Equations (68) and (69) indicate that:
• the transmitted wave is always in-phase with the incident wave, i.e. synchronized (in-step) with it and with no phase-shift;
• the reflected wave is only in-phase with the incident wave if \(k_{1}>k_{2}.\)
Also, because \(c_{g}=c_{p}={\displaystyle \frac{\omega}{k}},\;\) if \(\;k_{1}>k_{2}\), then this implies that \(c_{g1}<c_{g2},\;\) see section (Group and phase velocity). We mention two other quantities, \[\tag{70} \tau = \left|\frac{T}{I}\right| ,\] \[\tag{71} \rho = \left|\frac{R}{I}\right| ,\] the so-called coefficients of transmission and reflection respectively.

Resonance describes a situation where a system oscillates at one of its natural frequencies, usually when the amplitude increases as a result of energy being supplied by a perturbing force. A striking example of this phenomenon is the failure of the mile-long Tacoma Narrows Suspension Bridge. On 7 November 1940 the structure collapsed due to a nonlinear wave that grew in magnitude as a result of excitation by a 42 mph wind. A video of this disaster is available on line at: archive.org . Another less dramatic example of resonance that most people have experienced is the effect of sound feedback from loudspeaker to microphone. A more complex form of resonance is autoresonance, a nonlinear phase-locking phenomenon which occurs when a resonantly driven nonlinear system becomes phase-locked (synchronized or in-step) with a driving perturbation or wave.
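Returning to the boundary relations, equations (68)-(71) can be checked with a few lines of code (an illustrative sketch; the wavenumbers below are arbitrary choices):

```python
def transmission_reflection(k1, k2):
    """Amplitude ratios for a wave crossing a soft boundary between two
    non-dispersive media, equations (68)-(69), with incident amplitude I = 1."""
    T_over_I = 2.0 * k1 / (k1 + k2)
    R_over_I = (k1 - k2) / (k1 + k2)
    return T_over_I, R_over_I

T, R = transmission_reflection(k1=2.0, k2=1.0)  # illustrative wavenumbers
tau, rho = abs(T), abs(R)   # coefficients of transmission and reflection

# Continuity check at the boundary: I + R = T (with I = 1).
print(T, R, abs((1.0 + R) - T) < 1e-12)
```

With \(k_{1}>k_{2}\) as chosen here, \(R>0\ ,\) i.e. the reflected wave is in-phase with the incident wave, matching the second bullet above.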
Doppler effect

The Doppler effect (or Doppler shift) relates to the change in frequency and wavelength of waves emitted from a source as perceived by an observer, where the source and observer are moving at a speed relative to each other. At each moment of time the source will radiate a wave and an observer will experience the following effects:
• Wave source moving towards the observer - To the observer the moving source has the effect of compressing the emitted waves, and the frequency is perceived to be higher than the source frequency. For example, a sound wave will have a higher pitch and the spectrum of a light wave will exhibit a blueshift.
• Wave source moving away from the observer - To the observer this time, the recessional velocity has the effect of expanding the emitted waves, such that a sound wave will have a lower pitch and the spectrum of a light wave will exhibit a redshift.
Perhaps the most famous discovery involving the Doppler effect is that made in 1929 by Edwin Hubble in connection with the Earth's distance from receding galaxies: the redshift of light coming from distant galaxies is proportional to their distance. This is known as Hubble's law.

Transverse and longitudinal waves

Transverse waves oscillate in the plane perpendicular to the direction of wave propagation. They include: seismic S (secondary) waves, and electromagnetic waves, E (electric field) and H (magnetic field), both of which oscillate perpendicularly to each other as well as to the direction of propagation of energy. Light, an electromagnetic wave, can be polarized (oriented in a specific direction) by use of a polarizing filter. Longitudinal waves oscillate along the direction of wave propagation. They include sound waves (pressure, particle displacement, or particle velocity propagated in an elastic medium) and seismic P (earthquake or explosion) waves. Surface water waves, however, are an example of waves that involve a combination of both longitudinal and transverse motion.
Traveling waves

Traveling-wave solutions [Pol-08], [Gri-11], by definition, are of the form \[\tag{72} u(x,t)=U(z),\quad z=kx-\lambda t\ ;\] where \(\lambda/k\) plays the role of the wave propagation velocity (the value \(\lambda=0 \,\) corresponds to a stationary solution, and the value \(k=0 \,\) corresponds to a space-homogeneous solution). Traveling-wave solutions are characterized by the fact that the profiles of these solutions at different time instants are obtained from one another by appropriate shifts (translations) along the \(\, x\)-axis. Consequently, a Cartesian coordinate system moving with a constant speed can be introduced in which the profile of the desired quantity is stationary. For \(\lambda>0 \,\) and \(k>0\ ,\) the wave described by equation (72) travels along the \(x\)-axis to the right (in the direction of increasing \(x \,\)). The term traveling-wave solution is also used in situations where the variable \(t \,\) plays the role of a spatial coordinate, \(y \,\ .\)

Standing waves

Figure 6: A standing wave, \(\Re \left( \phi\left(x,t\right) \right)\ .\)

Standing waves occur when two traveling waves of equal amplitude and speed, but opposite direction, are superposed. The effect is that the wave amplitude varies with time but it does not move spatially. For example, consider two waves \(\phi_{1}\left(x,t\right)=\Phi_{1}\exp i\left(\omega t-kx\right)\) and \(\phi_{2}\left(x,t\right)=\Phi_{2}\exp i\left(\omega t+kx\right)\ ,\) where \(\phi_{1}\) moves to the right and \(\phi_{2}\) moves to the left. By definition we have \(\Phi_{2}=\Phi_{1}\ ,\) and by simple algebraic manipulation we obtain \[ \phi\left(x,t\right) = \phi_{1}\left(x,t\right)+\phi_{2}\left(x,t\right) ,\] \[ = \Phi_{1}\left[\exp i\left(\omega t-kx\right)+\exp i\left(\omega t+kx\right)\right] ,\] \[\tag{73} = 2\Phi_{1}\exp\left(i\omega t\right)\;\cos kx.\] A standing wave is illustrated in figures (6) and (7) by a plot of the real part of equation (73), i.e.
\(\Re \left( \phi\left(x,t\right) \right)=2\Phi_{1}\cos\omega t\cos kx\) with \(k=1\ ,\) \(\omega=1\) and \(\Phi_{1}=\frac{1}{2}\ .\)

Figure 7: Animated standing wave, \(\Re \left( \phi\left(x,t\right) \right)\ .\)

The points at which \(\phi=0\) are called nodes and the points at which \(\left|\phi\right|=2\left|\Phi_{1}\right|\) are called antinodes. These points are fixed and occur at \(kx=\left(2n+1\right)\frac{ {\displaystyle \pi}}{{\displaystyle 2} }\) and \(kx=n\pi\) respectively \(\left(n=0,\pm1,\pm2,\cdots\right)\ .\) Clearly, the existence of nonlinear standing waves can be demonstrated by application of Fourier analysis.

The idea of a waveguide is to constrain a wave such that its energy is directed along a specific path. The path may be fixed or capable of being varied to suit a particular application. The operation of a waveguide is analyzed by solving the appropriate wave equation, subject to the prevailing boundary conditions. There will be multiple solutions, or modes, which are determined by the eigenfunctions associated with the particular wave equations, and the velocity of the wave as it propagates along the waveguide will be determined by the eigenvalues of the solution.
• An electromagnetic waveguide is a physical structure, such as a hollow metal tube, solid dielectric rod or co-axial cable, that guides electromagnetic waves in the sub-optical (non-visible) electromagnetic spectrum.
• An optical waveguide is a physical structure, such as an optical fiber, that guides waves in the optical (visible) part of the electromagnetic spectrum.
• An acoustic waveguide is a physical structure, such as a hollow tube or duct (a speaking tube), that guides acoustic waves in the audible frequency range. Musical wind instruments, such as a flute, can also be thought of as acoustic waveguides.
For detailed analysis and further discussion refer to [Lio-03],[Oka-06].

Figure 8: Circular wave-fronts emanating from a point source.
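Equation (73) and the node positions given above can be verified numerically; the following sketch (added for illustration, using the parameter values of figure (6)) superposes the two traveling waves and checks a node:

```python
import numpy as np

Phi1, omega, k = 0.5, 1.0, 1.0   # amplitude and frequencies from figure (6)
x = np.linspace(0.0, 4.0 * np.pi, 400)
t = 0.7                          # arbitrary snapshot time

# Two equal-amplitude waves travelling in opposite directions...
phi = Phi1 * (np.exp(1j * (omega * t - k * x)) + np.exp(1j * (omega * t + k * x)))

# ...equal the standing-wave form of equation (73) at every point.
standing = 2.0 * Phi1 * np.exp(1j * omega * t) * np.cos(k * x)
print(np.max(np.abs(phi - standing)))   # zero to machine precision

# Nodes are fixed at k*x = (2n+1)*pi/2, independent of t.
node = (2 * 3 + 1) * np.pi / 2 / k      # n = 3 as an example
print(abs(2.0 * Phi1 * np.exp(1j * omega * t) * np.cos(k * node)))  # ~0
```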
As a wave propagates through a medium, the wavefront represents the outward normal surface formed by points in space, at a particular instant, as the wave travels outwards from its origin. One of the simplest forms of wavefront to envisage is an expanding circle, where its radius \(r\) expands with velocity \(v\ ,\) i.e. \(r=vt.\) Simple circular sinusoidal wave-fronts propagating from a point source are shown in figure (8). They can be described by \[\tag{74} u\left(r,t\right)=\textrm{Re}\left\{ \exp\left[i\left(kr-\omega t+\pi/2-\psi\right)\right]\right\} ,\] where \(k=\textrm{wavenumber}\ ,\) \(r=\textrm{radius}\ ,\) \(t=\textrm{time}\ ,\) \(\omega=\textrm{frequency}\ ,\) \(\psi=\textrm{phase angle}\ .\)

Figure 9: A snapshot from a simulation of the Indian Ocean tsunami that occurred on 26th December 2004, resulting from an earthquake off the west coast of Sumatra. The non-circular wave-fronts are clearly visible, which indicates curved rays.

Depending upon the particular wave equation and medium in which the wave travels, the wavefront may not appear to be an expanding circle. The path upon which any point on the wave front has traveled is called a ray, and this can be a straight line or, more likely, a curve in space. In general, the wavefront is perpendicular to the ray path, and the ray curvature will depend on the circumstances of the particular physical situation. For example, its curvature will be influenced by: an anisotropic medium, refraction, diffraction, etc. Consider a water wave where wave height is very much smaller than water depth \(h\ .\) Its speed of propagation \(c\ ,\) or celerity, is given by \(c=\sqrt{gh}\ ;\) thus, for an ocean with varying depth the velocity will vary at different locations (refraction). This can result in waves having non-circular wave-fronts and hence curved rays.
This situation, which occurs in many different applications, is illustrated in figure (9), where the curved wave-fronts are due to a combination of effects due to refraction, diffraction, reflection and a non-point disturbance.

Huygens' principle

Figure 10: Advancing envelope of wave-fronts \(\Phi_{q_{0}}\left(t\right)\ .\)

We can consider all points of a wave-front of light in a vacuum or transparent medium to be new sources of wavelets that expand in every direction at a rate depending on their velocities. This idea was originally proposed by the Dutch mathematician, physicist, and astronomer, Christiaan Huygens, in 1690, and is a powerful method for studying various optical phenomena [Enc-09]. Thus, the points on a wave can be viewed as each emitting a circular wave, and these combine to propagate the wave-front \(\Phi_{q_{0}}\left(t\right)\ .\) The wave-front can be thought of as an advancing line tangential to these circular waves - see figure (10). The points on a wave-front propagate from the wave source along so-called rays. Huygens' principle applies generally to wave-fronts, and the laws of reflection and refraction can both be derived from it. These results can also be obtained from Maxwell's equations. For detailed analysis and proof of Huygens' principle, refer to [Arn-91].

Shock waves

There are an extremely large number of types and forms of shock wave phenomena, and the following are representative of some subject areas where shocks occur:
• Fluid mechanics: Shocks result when a disturbance is made to move through a fluid faster than the speed of sound (the celerity) of the medium. This can occur when a solid object is forced through a fluid, for example in supersonic flight. The effect is that the states of the fluid (velocity, pressure, density, temperature, entropy) exhibit a sudden transition, according to the appropriate conservation laws, in order to adjust locally to the disturbance.
As the cause of the disturbance subsides, the shock wave energy is dissipated within the fluid and it reduces to a normal, subsonic, pressure wave. Note: A shock wave can result in local temperature increases of the fluid. This is a thermodynamic effect and should not be confused with heating due to friction.
• Mechanics: Bull whips can generate shocks as the oscillating wave progresses from the handle to the tip. This is because the whip is tapered from handle to tip and, when cracked, conservation of energy dictates that the wave speed increases as it progresses along the flexible cord. As the wave speed increases it reaches a point where its velocity exceeds that of sound, and a sharp crack is heard.
• Continuum mechanics: Shocks result from a sudden impact, earthquake, or explosion.
• Detonation: Shocks result from an extremely fast exothermic reaction. The expansion of the fluid, due to temperature and chemical changes, forces fluid velocities to reach supersonic speed, e.g. detonation of an explosive material such as TNT. But perhaps the most striking example would be the shock wave produced by a thermonuclear explosion.
• Medical applications: A non-invasive treatment for kidney or gall bladder stones whereby they can be removed by use of a technique called extracorporeal lithotripsy. This procedure uses a focused, high-intensity, acoustic shock wave to shatter the stones to the point where they are reduced in size such that they may be passed through the body in a natural way!
For further discussion relating to shock phenomena see ([Ben-00],[Whi-99]). We briefly introduce two topics below by way of example.
Blast Wave - Sedov-Taylor Detonation

A blast wave can be analyzed from the following equations, \[\tag{75} \frac{\partial\rho}{\partial t}+v\frac{\partial\rho}{\partial r}+\rho\left(\frac{\partial v}{\partial r}+\frac{2v}{r}\right) = 0,\] \[\tag{76} \frac{\partial v}{\partial t}+v\frac{\partial v}{\partial r}+\frac{1}{\rho}\frac{\partial p}{\partial r} = 0,\] \[\tag{77} \frac{\partial\left(p/\rho^{\gamma}\right)}{\partial t}+v\frac{\partial\left(p/\rho^{\gamma}\right)}{\partial r} = 0,\] where \(\rho\ ,\) \(v\ ,\) \(p\ ,\) \(r\ ,\) \(t\) and \(\gamma\) represent density of the medium in which the blast takes place (air), velocity of the blast front, blast pressure, blast radius, time and isentropic exponent (ratio of specific heats) of the medium, respectively.

Figure 11: Time-lapse photographs with distance scales (100 m) of the first atomic bomb explosion in the New Mexico desert - 5.29 A.M. on 16th July, 1945. Times from instant of detonation are indicated in the bottom left corner of each photograph (top first - left column: 0.006s, 0.016s; right column: 0.025s, 0.09s).

Now, if we assume that:
• the blast can be considered to result from a point source of energy;
• the process is isentropic and the medium can be represented by the equation-of-state \(\left(\gamma-1\right)e=p/\rho\ ,\) where \(e\) represents internal energy;
• there is spherical symmetry;
then, after some analysis, similarity considerations lead to the following equation [Tay-50b] \[\tag{78} E=c{\displaystyle \frac{R^{5}\rho}{t^{2}}},\] where \(c\) is a similarity constant, \(R\) is the radius of the wave front and \(E\) is the total energy released by the explosion. Back in 1945 Sir Geoffrey Ingram Taylor was asked by the British MAUD (Military Application of Uranium Detonation) Committee to deduce information regarding the power of the first atomic explosion in New Mexico.
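Equation (78) lends itself to a quick numerical sketch of such a yield estimate. The radius-time reading and the air density used below are illustrative assumptions for demonstration only, not data quoted in the text:

```python
# Sketch of a Taylor-style yield estimate from equation (78), E = c*R**5*rho/t**2.
# The blast-radius reading (R, t) and air density are illustrative assumptions,
# not values taken from the text.
c_sim = 0.856          # Taylor's step-by-step similarity constant
rho = 1.25             # sea-level air density, kg/m^3 (assumed)
R, t = 140.0, 0.025    # hypothetical photograph reading: radius (m) at time (s)

E = c_sim * R**5 * rho / t**2   # energy released, joules
kilotons = E / 4.184e12         # 1 kiloton of TNT = 4.184e12 J
print(round(kilotons, 1))       # on the order of 20 kt
```

With readings of this magnitude the estimate lands in the same range as the 16.8-22.9 kiloton figures quoted above, which illustrates the remarkable leverage of the \(R^{5}/t^{2}\) similarity scaling.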
He derived this result, which was based on his earlier classified work [Tay-41], and was able to estimate, using only photographs of the blast (released into the public domain in 1947), that the yield of the bomb was equivalent to between \(16.8\) and \(22.9\) kilo-tons of TNT for values of \(\gamma\) equal to \(1.4\) and \(1.3\) respectively. Each of these photographs, crucially, contained a distance scale and precise time, see figure (11). Taylor used a value for the similarity constant of \(c=0.856\) that he obtained by a step-by-step method. However, the correct analytical value for this constant was later shown to be \(0.8501\) [Sed-59]. This result was classified secret but, five years later, he published the details [Tay-50a],[Tay-50b], much to the consternation of the British government. J. von Neumann and L. I. Sedov published similar, independently derived, results [Bet-47],[Sed-46]. For further discussion relating to the theory refer to [Kam-00],[Deb-58].

Sonic boom

Figure 12: The N-wave sonic boom.

As an aircraft proceeds in smooth flight at a speed greater than the speed of sound - the sound barrier - a shock wave is formed that starts at its nose and finishes at its tail. The speed of sound is given by \(c=\sqrt{\gamma RT/MW}\ ,\) where \(\gamma\ ,\) \(R\ ,\) \(T\) and MW represent ratio of specific heats, universal gas constant, temperature and molecular weight respectively, and \(c\simeq330\)m/s at sea level for dry air at \(0^{\circ}\)C. The shock forms a high pressure, cone-shaped surface propagating with the aircraft. The half-angle (between direction of flight and the shock wave) \(\theta\) is given by \(\sin\left(\theta\right)=1/M\ ,\) where \(M=v_{aircraft}/c\) is known as the Mach number of the aircraft. Clearly, as \(v_{aircraft}\) increases, the cone becomes more pointed (\(\, \theta\) becomes smaller).

Figure 13: The U-wave sonic boom.
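The Mach-angle relation \(\sin\left(\theta\right)=1/M\) can be evaluated directly; the sketch below (added for illustration, with arbitrary supersonic speeds and \(c=330\) m/s) shows the cone sharpening as speed increases:

```python
import math

def mach_angle_deg(v_aircraft, c=330.0):
    """Half-angle of the shock cone, sin(theta) = 1/M, valid for M = v/c > 1."""
    M = v_aircraft / c
    return math.degrees(math.asin(1.0 / M))

# The cone becomes more pointed (theta smaller) as speed increases:
for v in (400.0, 660.0, 990.0):   # illustrative supersonic speeds, m/s
    print(round(mach_angle_deg(v), 1))
```

At exactly \(M=2\) (660 m/s here) the half-angle is \(30^{\circ}\ ,\) since \(\sin\left(30^{\circ}\right)=1/2\ .\)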
As the aircraft continues under steady flight conditions at high speed, there will be an abrupt rise in pressure at the aircraft's nose, which falls towards the tail, when it then becomes negative. This is the so-called N-wave [Nak-08] - a pressure wave measured at sufficient distance such that it has lost its fine structure, see figure (12). A sonic boom occurs when the abrupt changes in pressure are of sufficient magnitude. Thus, steady supersonic flight results in two booms: one resulting from the rapid rise in pressure at the nose, and another when the pressure returns to normal as the tail passes the point vacated by the nose. This is the cause of the distinctive double boom from supersonic aircraft. At ground level, typically \(10<P_{max}<500\)Pa and \(\tau\simeq0.001-0.005\)s. The duration \(T\) varies from around 100 ms for a fighter plane to 500 ms for the Space Shuttle or Concorde.

Figure 14: A USAF B1B makes a high speed pass at the Pensacola Beach airshow - Florida, July 12, 2002. Copyright © Gregg Stansbery, Stansbery Photography - reproduced with permission.

Another form of sonic boom is the focused boom, which can result from high speed aircraft maneuvering operations. These produce the so-called U-waves, which have positive shocks at the front and rear of the boom, see figure (13). Generally, U-waves result in higher peak over-pressures than N-waves - typically between 2 and 5 times higher. At ground level, typically \(20<P_{max}<2500\)Pa (although values can be much higher). The highest overpressure ever recorded was 6800 Pa [144 lbs/sq-ft] (source: USAF Fact Sheet 96-03). For further discussion related to sonic booms refer to [Kao-04]. As an aircraft passes through, or close to, the sound barrier, water vapor in the air is compressed by the shock wave and becomes visible as a large cloud of condensation droplets, formed as the air cools due to low pressure at the tail. A smaller shock wave can also form on top of the canopy.
This phenomenon is illustrated in figure (14).

Solitary waves and solitons

The correct term for a wave which is localized and retains its form over a long period of time is: solitary wave. A soliton is a solitary wave having the additional property that other solitons can pass through it without changing its shape. In the literature, however, it is customary to refer to the solitary wave as a soliton, although this is strictly incorrect [Tao-08].

Figure 15: Evolution of a two-soliton solution of the KdV equation. The image illustrates the collision of two solitons that are both moving from left to right. The faster (taller) soliton overtakes the slower (shorter) soliton.

Solitons are stable, nonlinear pulses which exhibit a fine balance between non-linearity and dispersion. They often result from real physical phenomena that can be described by PDEs that are completely integrable, i.e. they can be solved exactly. Such PDEs describe: shallow water waves, nonlinear optics, electrical network pulses, and many other applications that arise in mathematical physics. Where multiple solitons moving at different velocities occur within the same domain, collisions can take place with the unexpected phenomenon that, first they combine, then the faster soliton emerges to proceed on its way. Both solitons then continue in the same direction and eventually reach a situation where their speeds and shapes are unchanged. Thus, we have a situation where a faster soliton can overtake a slower soliton. There are two effects that distinguish this phenomenon from that which occurs in a linear wave system. The first is that the maximum height of the combined solitons is not equal to the sum of the individual soliton heights. The second is that, following the collision, there is a phase shift between the two solitons, i.e. the linear trajectory of each soliton before and after the collision is seen to be shifted horizontally - see figure (15).
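A single-soliton solution can be checked symbolically. The sketch below (an added illustration) uses the common normalization \(u_{t}+6uu_{x}+u_{xxx}=0\ ,\) which may differ in sign convention from equation (28), and verifies that \(u=\frac{c}{2}\,\textrm{sech}^{2}\left(\frac{\sqrt{c}}{2}\left(x-ct\right)\right)\) satisfies it:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.Rational(2)   # soliton speed; taller solitons are faster

# Single-soliton solution of the KdV equation u_t + 6*u*u_x + u_xxx = 0
# (one common sign convention; other normalizations exist).
u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t))**2

residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)

# Spot-check the residual numerically at a few points; it should vanish
# up to floating-point rounding.
for px, pt in [(-1.0, 0.0), (0.3, 0.1), (2.0, 1.0)]:
    print(sp.N(residual.subs({x: px, t: pt})))
```

The amplitude \(c/2\) and width \(2/\sqrt{c}\) are tied together through the speed \(c\ ,\) which is the analytic expression of the "taller is faster" behaviour seen in figure (15).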
Some additional discussion is given in section (The Korteweg-de Vries equation), and detailed technical overviews of the subject can be found in the works by Ablowitz & Clarkson [Abl-91], Drazin & Johnson [Dra-89] and Johnson [Joh-97]. Soliton theory is still an active area of research, and a discussion of the various types of soliton solution that are known is given by Gerdjikov & Kaup [Ger-05].

Soliton types

Solitons generally fall into three types:
• Humps (pulses) - These are the classic bell-shaped curves that are typically associated with soliton phenomena.
• Kinks - These are solitons characterized by either a monotonic positive shift (kink) or a monotonic negative shift (anti-kink), where the change in value occurs gradually in the shape of an s-type curve.
• Breathers (bions) - These can be either stationary or travelling soliton humps that oscillate: becoming positive, negative, positive and so on.
More details may be found in Drazin and Johnson [Dra-89].

The word tsunami is a Japanese term derived from the characters 津 (tsu) meaning harbor and 波 (nami) meaning wave. It is now generally accepted by the international scientific community to describe a series of traveling waves in water produced by the displacement of the sea floor associated with submarine earthquakes, volcanic eruptions, or landslides. They are also known as tidal waves. Tsunami are usually preceded by a leading-depression N-wave (LDN), one in which the trough reaches the shoreline first. Eyewitnesses in Banda Aceh who observed the effects of the December 2004 Sumatra Tsunami, see figure (9), resulting from a magnitude 9.3 seabed earthquake, described a series of three waves, beginning with a leading depression N-wave [Bor-05]. Recent estimates indicate that this powerful tsunami resulted in excess of 275,000 deaths and extensive damage to property and infrastructure around the entire coast line of the Indian Ocean [Kun-07].
Tsunami are long-wave phenomena and, because the wavelengths of tsunami in the ocean are long with respect to water depth, they can be considered shallow water waves. Thus \(c_{p}=c_{g}=\sqrt{gh}\), and for a depth of 4 km we see that the wave velocity is around 200 m/s. Hence, tsunami waves are often modelled using the shallow water equations, the Boussinesq equation, or other suitable equations that bring out in sufficient detail the required wave characteristics. However, one of the major challenges is to model shoreline inundation realistically, i.e. the effect of the wave when it encounters the shore - also known as run-up. As the wave approaches the shoreline, the water depth decreases sharply, resulting in a greatly increased surge of water at the point where the wave strikes land. This requires special modeling techniques, such as robust Riemann solvers [Tor-01], [Ran-06] or the level-set method [Set-99], [Osh-03], which can handle situations where dry regions become flooded and vice versa.

The authors would like to thank reviewers Prof. Andrei Polyanin and Dr. Alexei Zhurov for their positive and constructive comments.

References

• [Abl-91] Ablowitz, M. J. and P. A. Clarkson (1991), Solitons, Nonlinear Evolution Equations and Inverse Scattering, London Mathematical Society Lecture Notes 149, Cambridge University Press.
• [Arn-91] Arnold, V. I. (1991), Mathematical Methods of Classical Mechanics, 2nd Ed., Springer.
• [Bar-03] Barenblatt, G. I. (2003), Scaling, Cambridge University Press.
• [Ben-00] Ben-Dor, G. (Ed), O. Igra (Ed) and T. Elperin (Ed) (2000), Handbook of Shock Waves, 3 vols, Academic Press.
• [Bet-47] Bethe, H. A., K. Fuchs, J. O. Hirschfelder, J. L. Magee, R. E. Peierls and J. von Neumann (1947), Blast Wave, Los Alamos Scientific Laboratory Report LA-2000.
• [Bor-05] Borrero, J. C. (2005), Field Data and Satellite Imagery of Tsunami Effects in Banda Aceh, Science 10 June, 308, p. 1596.
• [Buc-83] Buckley, R.
(1985), Oscillations and Waves, Adam Hilger Ltd., Bristol and Boston.
• [Byn-84] Bynum, W. F., E. J. Browne and R. Porter Eds. (1984), Dictionary of The History of Science, Princeton University Press.
• [Bur-93] Burden, R. L. and Faires, J. D. (1993), Numerical Analysis, 5th Ed., PWS Publishing Company.
• [Caj-61] Cajori, F. (1961), A History of Mathematics, MacMillan.
• [Cia-88] Ciarlet, P. G. (1988), Mathematical Elasticity: Three-dimensional Elasticity, Volume 1, Elsevier.
• [Cla-89] Clarkson, P. A., A. S. Fokas and M. J. Ablowitz (1989), Hodograph transformations of linearizable partial differential equations, SIAM J. Appl. Mathematics, Vol. 49, No. 4, pp. 1188-1209.
• [Col-71] Collocott, T. C. (Ed.) (1971), Chambers Dictionary of Science and Technology, Chambers.
• [Cor-05] Cornejo-Perez, O. and H. C. Rosu (2005), Nonlinear Second Order ODE's: Factorizations And Particular Solutions, Progress of Theoretical Physics, 114-3, pp. 533-538.
• [Cou-62] Courant, R. and D. Hilbert (1962), Methods of Mathematical Physics - Vol II, Interscience Publishers.
• [Dai-06] Dai, H. H., E. G. Fan and X. G. Geng (2006), Periodic wave solutions of nonlinear equations by Hirota's bilinear method. Available on-line at: [1]
• [Deb-58] Deb Ray, G. (1958), An Exact Solution of a Spherical Blast Under Terrestrial Conditions, Proc. Natn. Inst. Sci. India, A 24, pp. 106-112.
• [Dis-08] DispersiveWiki (2008). An on-line collection of web pages concerned with the local and global well-posedness of various non-linear dispersive and wave equations. DispersiveWiki [2]
• [Dra-89] Drazin, P. G. and R. S. Johnson (1989), Solitons: an Introduction, Cambridge University Press.
• [Elm-69] Elmore, W. C. and M. A. Heald (1969), Physics of Waves, Dover.
• [Enc-09] Encyclopædia Britannica (2009), Encyclopædia Britannica On-line, [3]
• [Far-93] Farlow, S. J. (1993), Partial Differential Equations for Scientists and Engineers, Chapter 17, Dover Publications, New York, New York.
• [Fow-05] Fowler, A. C.
(2005), Techniques of Applied Mathematics, Report of the Mathematical Institute, Oxford University.
• [Gal-06] Galaktionov, V. A. and S. R. Svirshchevskii (2006), Exact Solutions and Invariant Subspaces of Nonlinear Partial Differential Equations in Mechanics and Physics, Chapman & Hall/CRC Press, Boca Raton.
• [Ger-05] Gerdjikov, V. S. and D. Kaup (2005), How many types of soliton solutions do we know? Seventh International Conference on Geometry, Integrability and Quantization, June 2-10, Varna, Bulgaria. I. M. Mladenov and M. De Leon, Editors. SOFTEX, Sofia 2005, pp. 1-24.
• [Gil-82] Gill, A. E. (1982), Atmosphere-Ocean Dynamics, Academic Press.
• [God-54] Godunov, S. K. (1954), Ph.D. Dissertation: Different Methods for Shock Waves, Moscow State University.
• [God-59] Godunov, S. K. (1959), A Difference Scheme for Numerical Solution of Discontinuous Solution of Hydrodynamic Equations, Math. Sbornik, 47, 271-306, translated US Joint Publ. Res. Service, JPRS 7226, 1969.
• [Gri-11] Griffiths, G. W. and W. E. Schiesser (2011), Traveling Wave Solutions of Partial Differential Equations: Numerical and Analytical Methods with Matlab and Maple, Academic Press; see also http://www.pdecomp.net/
• [Had-23] Hadamard, J. (1923), Lectures on Cauchy's Problem in Linear Partial Differential Equations, Dover.
• [Ham-07] Hamdi, S., W. E. Schiesser and G. W. Griffiths (2007), Method of Lines. Scholarpedia, 2(7):2859. Available on-line at Scholarpedia: [4]
• [He-06] He, J-H. and X-H. Wu (2006), Exp-function method for nonlinear wave equations, Chaos, Solitons & Fractals, Volume 30, Issue 3, November, pp. 700-708.
• [Her-05] Hereman, W. and W. Malfliet (2005), The Tanh Method: A Tool to Solve Nonlinear Partial Differential Equations with Symbolic Software, 9th World Multiconference on Systemics, Cybernetics, and Informatics (WMSCI 2005), Orlando, Florida, July 10-13, pp. 165-168.
• [Hir-88] Hirsch, C.
(1988), Numerical Computation of Internal and External Flows, Volume 1: Fundamentals of Numerical Discretization, Wiley.
• [Hir-90] Hirsch, C. (1990), Numerical Computation of Internal and External Flows, Volume 2: Computational Methods for Inviscid and Viscous Flows, Wiley.
• [Ibr-94] Ibragimov, N. H. (1994-1995), CRC Handbook of Lie Group Analysis of Differential Equations, Volumes 1 & 2, CRC Press, Boca Raton.
• [Inf-00] Infeld, E. and G. Rowlands (2000), Nonlinear Waves, Solitons and Chaos, 2nd Ed., Cambridge University Press.
• [Jag-06] de Jager, E. M. (2006), On the Origin of the Korteweg-de Vries Equation, arXiv e-print service. Available on-line at: arXiv.org [5]
• [Joh-97] Johnson, R. S. (1997), A Modern Introduction to the Mathematical Theory of Water Waves, Cambridge University Press.
• [Kao-04] Kaouri, K. (2004), PhD Thesis: Secondary Sonic Booms, Somerville College, Oxford University.
• [Kam-00] Kamm, J. R. (2000), Evaluation of the Sedov-von Neumann-Taylor Blast Wave Solution, Los Alamos National Laboratory Report LA-UR-00-6055.
• [Kar-98] Karigiannis, S. (1998), Minor Thesis: The inverse scattering transform and integrability of nonlinear evolution equations, University of Waterloo. Available on-line at: [6]
• [Kno-00] Knobel, R. A. (2000), An Introduction to the Mathematical Theory of Waves, American Mathematical Society.
• [Kor-95] Korteweg, D. J. and de Vries, G. (1895), On the Change of Form of Long Waves Advancing in a Rectangular Canal, and on a New Type of Long Stationary Waves, Phil. Mag. 39, pp. 422-443.
• [Kre-04] Kreiss, H-O. and J. Lorenz (2004), Initial-Boundary Value Problems and the Navier-Stokes Equations, Society for Industrial and Applied Mathematics.
• [Krey-93] Kreyszig, E. (1993), Advanced Engineering Mathematics - Seventh Edition, Wiley.
• [Kun-07] Kundu, A. Ed. (2007), Tsunami and Nonlinear Waves, Springer.
• [Lam-93] Lamb, Sir H. (1993), Hydrodynamics, 6th Ed., Cambridge University Press.
• [Lan-98] Laney, Culbert B.
(1998), Computational Gas Dynamics, Cambridge University Press.
• [Lev-02] LeVeque, R. J. (2002), Finite Volume Methods for Hyperbolic Problems, Cambridge University Press.
• [Lev-07] LeVeque, R. J. (2007), Finite Difference Methods for Ordinary and Partial Differential Equations, Society for Industrial and Applied Mathematics.
• [Lig-78] Lighthill, Sir James (1978), Waves in Fluids, Cambridge University Press.
• [Lio-03] Lioubtchenko, D., S. Tretyakov and S. Dudorov (2003), Millimeter-wave Waveguides, Springer.
• [Mal-92] Malfliet, W. (1992), Solitary wave solutions of nonlinear wave equations, Am. J. Physics, 60(7), pp. 650-654.
• [Mal-96a] Malfliet, W. and W. Hereman (1996a), The Tanh Method I - Exact Solutions of Nonlinear Evolution Wave Equations, Physica Scripta, 54, pp. 563-568.
• [Mal-96b] Malfliet, W. and W. Hereman (1996b), The Tanh Method II - Exact Solutions of Nonlinear Evolution Wave Equations, Physica Scripta, 54, pp. 569-575.
• [Mic-07] Microsoft Corporation (2007), Encarta® World English Dictionary [North American Edition], Bloomsbury Publishing Plc.
• [Mor-94] Morton, K. W. and D. F. Mayers (1994), Numerical Solution of Partial Differential Equations, Cambridge.
• [Mur-02] Murray, J. D. (2002), Mathematical Biology I: An Introduction, 3rd Ed., Springer.
• [Mur-03] Murray, J. D. (2003), Mathematical Biology II: Spatial Models and Biomedical Applications, 3rd Ed., Springer.
• [Nak-08] Naka, Y., Y. Makino and T. Ito (2008), Experimental study on the effects of N-wave sonic-boom signatures on window vibration, pp. 6085-6090, Acoustics 08, Paris.
• [Oha-94] Ohanian, H. C. and R. Ruffini (1994), Gravitation and Spacetime, 2nd Ed., Norton.
• [Oka-06] Okamoto, K. (2006), Fundamentals of Optical Waveguides, Academic Press.
• [Osh-03] Osher, S. and R. Fedkiw (2003), Level Set Methods and Dynamic Implicit Surfaces, Springer.
• [Ost-94] Ostaszewski, A. (1994), Advanced Mathematical Methods, Cambridge University Press.
• [Pol-02] Polyanin, A. D., V. F. Zaitsev and A.
Moussiaux (2002), Handbook of First Order Partial Differential Equations, Taylor & Francis.
• [Pol-04] Polyanin, A. D. and V. F. Zaitsev (2004), Handbook of Nonlinear Partial Differential Equations, Chapman and Hall/CRC Press.
• [Pol-07] Polyanin, A. D. and A. V. Manzhirov (2007), Handbook of Mathematics for Engineers and Scientists, Chapman and Hall/CRC Press.
• [Pol-08] Polyanin, A. D., W. E. Schiesser and A. I. Zhurov (2008), Partial Differential Equation. Scholarpedia, 3(10):4605. Available on-line at Scholarpedia: [7]
• [Ran-06] Randall, D. L. and J. LeVeque (2006), Finite Volume Methods and Adaptive Refinement for Global Tsunami Propagation and Local Inundation, Science of Tsunami Hazards, 24(5), pp. 319-328.
• [Ros-88] Ross, J., S. C. Muller and C. Vidal (1988), Chemical Waves, Science, 240, pp. 460-465.
• [Sch-94] Schiesser, W. E. (1994), Computational Mathematics in Engineering and Applied Science, CRC Press.
• [Sch-09] Schiesser, W. E. and G. W. Griffiths (2009), A Compendium of Partial Differential Equation Models: Method of Lines Analysis with Matlab, Cambridge University Press; see also http://www.pdecomp.net/
• [Sco-44] Scott-Russell, J. (1844), Report on Waves, 14th Meeting of the British Association for the Advancement of Science, pp. 311-390, London.
• [Sco-48] Scott-Russell, J. (1848), On Certain Effects Produced on Sound by The Rapid Motion of The Observer, 18th Meeting of the British Association for the Advancement of Science, pp. 37-38, London.
• [Sed-46] Sedov, L. I. (1946), Propagation of strong shock waves, Journal of Applied Mathematics and Mechanics, 10, pp. 241-250.
• [Sed-59] Sedov, L. I. (1959), Similarity and Dimensional Methods in Mechanics, Academic Press, New York.
• [Set-99] Sethian, J. A. (1999), Level Set Methods and Fast Marching Methods, Cambridge University Press.
• [Sha-75] Shadowitz, A. (1975), The Electromagnetic Field, McGraw-Hill.
• [Shu-98] Shu, C-W.
(1998), Essentially Non-oscillatory and Weighted Essentially Non-oscillatory Schemes for Hyperbolic Conservation Laws. In: Cockburn, B., Johnson, C., Shu, C-W., Tadmor, E. (Eds.), Advanced Numerical Approximation of Nonlinear Hyperbolic Equations, Lecture Notes in Mathematics, Vol. 1697, Springer, pp. 325-432.
• [Shu-09] Shu, C-W. (2009), High Order Weighted Essentially Non-oscillatory Schemes for Convection Dominated Problems, SIAM Review, Vol. 51, No. 1, pp. 82-126.
• [Stra-92] Strauss, W. A. (1992), Partial Differential Equations: An Introduction, Wiley.
• [Stre-97] Streeter, V., K. W. Bedford and E. B. Wylie (1997), Fluid Mechanics, 9th Ed., McGraw-Hill.
• [Tan-97] Tannehill, J. C., et al. (1997), Computational Fluid Mechanics and Heat Transfer, 2nd Ed., Taylor and Francis.
• [Tao-05] Tao, T. (2005), Nonlinear dispersive equations: local and global analysis. Monograph based on (and greatly expanded from) a lecture series given at the NSF-CBMS regional conference on nonlinear and dispersive wave equations at New Mexico State University, held in June 2005. Available on-line at ucla.edu: [8]
• [Tao-08] Tao, T. (2008), Why are solitons stable?, arXiv:0802.2408v2 [math.AP]. Available on-line at arxiv.org: [9]
• [Tay-41] Taylor, Sir Geoffrey Ingram (1941), The formation of a blast wave by a very intense explosion, British Civil Defence Research Committee, Report RC-210.
• [Tay-50a] Taylor, Sir Geoffrey Ingram (1950), The Formation of a Blast Wave by a Very Intense Explosion. I. Theoretical Discussion, Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, Volume 201, Issue 1065, pp. 159-174.
• [Tay-50b] Taylor, Sir Geoffrey Ingram (1950), The Formation of a Blast Wave by a Very Intense Explosion. II. The Atomic Explosion of 1945, Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, Volume 201, Issue 1065, pp. 175-186.
• [Tor-99] Toro, E. F.
(1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag.
• [Tor-01] Toro, E. F. (2001), Shock-Capturing Methods for Free-Surface Shallow Flows, Wiley.
• [van-79] van Leer, B. (1979), Towards the ultimate conservative difference scheme V. A second order sequel to Godunov's method, J. Comput. Phys., Vol. 32, pp. 101-136.
• [Wes-01] Wesseling, P. (2001), Principles of Computational Fluid Dynamics, Springer, Berlin.
• [Whi-99] Whitham, G. B. (1999), Linear and Nonlinear Waves, Wiley.
• [Zie-77] Zienkiewicz, O. (1977), The Finite Element Method in Engineering Science, McGraw-Hill.
• [Zwi-97] Zwillinger, D. (1997), Handbook of Differential Equations, 3rd Ed., Academic Press.
The "decoherence program" of H. Dieter Zeh, Erich Joos, Wojciech Zurek, John Wheeler, Max Tegmark, and others has multiple aims:
1. to show how classical physics emerges from quantum physics. They call this the "quantum to classical transition."
2. to explain the lack of macroscopic superpositions of quantum states (e.g., Schrödinger's Cat as a quantum superposition of live and dead cats).
3. in particular, to identify the mechanism that suppresses ("decoheres") interference between states as something involving the "environment" beyond the system and measuring apparatus.
4. to explain the appearance of particles following paths (they say there are no "particles," and maybe no paths).
5. to explain the appearance of discontinuous transitions between quantum states (there are no "quantum jumps" either).
6. to champion a "universal wave function" (as a superposition of states) that evolves in a "unitary" fashion (i.e., deterministically) according to the Schrödinger equation.
7. to clarify and perhaps solve the measurement problem, which they define as the lack of macroscopic superpositions.
8. to explain the "arrow of time."
9. to revise the foundations of quantum mechanics by changing some of its assumptions, notably challenging the "collapse" of the wave function or "projection postulate."

Decoherence theorists say that they add no new elements to quantum mechanics (such as "hidden variables"), but they do deny one of the three basic assumptions - namely, Dirac's projection postulate. This is the method used to calculate the probabilities of various outcomes, probabilities that are confirmed to several significant figures by the statistics of large numbers of identically prepared experiments. They accept (even overemphasize) Dirac's principle of superposition. Some also accept the axiom of measurement, although some of them question the link between eigenstates and eigenvalues.

The decoherence program hopes to offer insights into several other important phenomena:
1. What Zurek calls the "einselection" (environment-induced superselection) of preferred states (the so-called "pointer states") in a measurement apparatus.
2. The role of the observer in quantum measurements.
3. Nonlocality and quantum entanglement (which they say is used to "derive" decoherence).
4. The origin of irreversibility (by "continuous monitoring").
5. The approach to thermal equilibrium.

The decoherence program finds unacceptable these aspects of the standard quantum theory:
1. Quantum "jumps" between energy eigenstates.
2. The "apparent" collapse of the wave function.
3. In particular, explanation of the collapse as a "mere" increase of information.
4. The "appearance" of "particles."
5. The "inconsistent" Copenhagen Interpretation - quantum "system," classical "apparatus."
6. The "insufficient" Ehrenfest Theorems.

Decoherence theorists admit that some problems remain to be addressed, notably the "problem of outcomes." Without the collapse postulate, it is not clear how definite outcomes are to be explained. As Tegmark and Wheeler put it:

The main motivation for introducing the notion of wave-function collapse had been to explain why experiments produced specific outcomes and not strange superpositions of outcomes... it is embarrassing that nobody has provided a testable deterministic equation specifying precisely when the mysterious collapse is supposed to occur.

Some of the controversial positions in decoherence theory, including the denial of collapses and particles, come straight from the work of Erwin Schrödinger, for example in his 1952 essays "Are There Quantum Jumps?" (Part I and Part II), where he denies the existence of "particles," claiming that everything can be understood as waves. Other sources include: Hugh Everett III and his "relative state" or "many worlds" interpretations of quantum mechanics; Eugene Wigner's article on the problem of measurement; and John Bell's reprise of Schrödinger's arguments on quantum jumps. Decoherence advocates therefore look to other attempts to formulate quantum mechanics.
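The role of entanglement in "deriving" decoherence can be illustrated with a toy model (an illustrative sketch, not part of the original text): a system qubit in an equal superposition becomes perfectly correlated with a single environment degree of freedom, and tracing out the environment leaves a reduced density matrix whose interference terms (the off-diagonal "coherences") vanish, while the diagonal populations survive.

```python
import numpy as np

# System qubit in superposition (|0> + |1>)/sqrt(2); environment starts in |0>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Perfect "monitoring" by the environment: the joint state becomes
# (|0>|0> + |1>|1>)/sqrt(2), i.e. the environment records the system state.
psi = np.zeros(4)
psi[0] = plus[0]   # amplitude of |00>
psi[3] = plus[1]   # amplitude of |11>

rho = np.outer(psi, psi)   # pure-state density matrix of system + environment

# Partial trace over the environment: indices are (s, e, s', e'),
# and we sum over e = e'.
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_sys)   # diagonal populations 0.5, 0.5; off-diagonals are zero
```

Before the interaction the system's density matrix has off-diagonals 0.5 (full coherence); after tracing out the correlated environment they are exactly zero, which is the einselection mechanism in miniature.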
Also called "interpretations," these are more often reformulations, with different basic assumptions about the foundations of quantum mechanics. Most begin from the "universal" applicability of the unitary time evolution that results from the Schrödinger wave equation. They include:
• The DeBroglie-Bohm "pilot-wave" or "hidden variables" formulation.
• The Everett-DeWitt "relative-state" or "many worlds" formulation.
• The Ghirardi-Rimini-Weber "spontaneous collapse" formulation.
Note that these "interpretations" are often in serious conflict with one another. Where Erwin Schrödinger thinks that waves alone can explain everything (there are no particles in his theory), David Bohm thinks that particles not only exist but that every particle has a definite position that is a "hidden parameter" of his theory. H. Dieter Zeh, the founder of decoherence, sees one of two possibilities: a modification of the Schrödinger equation that explicitly describes a collapse (also called "spontaneous localization"), or an Everett-type interpretation, in which all measurement outcomes are assumed to exist in one formal superposition, but to be perceived separately as a consequence of their dynamical autonomy resulting from decoherence. It was John Bell who called Everett's many-worlds picture "extravagant." Zeh writes:

While this latter suggestion has been called "extravagant" (as it requires myriads of co-existing quasi-classical "worlds"), it is similar in principle to the conventional (though nontrivial) assumption, made tacitly in all classical descriptions of observation, that consciousness is localized in certain semi-stable and sufficiently complex subsystems (such as human brains or parts thereof) of a much larger external world. Occam's razor, often applied to the "other worlds," is a dangerous instrument: philosophers of the past used it to deny the existence of the interior of stars or of the back side of the moon, for example.
So it appears worth mentioning at this point that environmental decoherence, derived by tracing out unobserved variables from a universal wave function, readily describes precisely the apparently observed "quantum jumps" or "collapse events."

The Information Interpretation of quantum mechanics also has explanations for the measurement problem, the arrow of time, and the emergence of adequately, i.e., statistically, determined classical objects. However, I-Phi does it while accepting the standard assumptions of orthodox quantum physics. See below.

We briefly review the standard theory of quantum mechanics and compare it to the "decoherence program," with a focus on the details of the measurement process. We divide measurement into several distinct steps, in order to clarify the supposed "measurement problem" (mostly the lack of macroscopic state superpositions) and perhaps "solve" it.

The most famous example of probability-amplitude-wave interference is the two-slit experiment. Interference is between the probability amplitudes whose absolute value squared gives us the probability of finding the particle at various locations behind the screen with the two slits in it. Finding the particle at a specific location is said to be a "measurement."

However, if the system is prepared in an arbitrary state \(\psi_a\), it can be represented as a linear combination of the system's basic energy states \(\varphi_n\): \(\psi_a = \sum_n c_n \varphi_n\), with \(c_n = \langle \varphi_n | \psi_a \rangle\). It is said to be in a "superposition" of those basic states. The probability \(P_n\) of its being found in state \(\varphi_n\) is \(P_n = |\langle \varphi_n | \psi_a \rangle|^2 = |c_n|^2\).

Between measurements, the time evolution of a quantum system in such a superposition of states is described by a unitary transformation \(U(t, t_0)\) that preserves the same superposition of states as long as the system does not interact with another system, such as a measuring apparatus.
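The superposition and Born-rule statements above can be checked numerically. The sketch below is illustrative only: the 2x2 Hermitian matrix standing in for the Hamiltonian is arbitrary, and the unitary \(U = e^{-iHt}\) is built by eigendecomposition. It verifies that the outcome probabilities are \(|c_n|^2\) and that unitary evolution preserves total probability.

```python
import numpy as np

# State expanded in an orthonormal basis: psi = sum_n c_n |n>
c = np.array([3.0, 4.0j], dtype=complex)
c = c / np.linalg.norm(c)            # normalize so sum |c_n|^2 = 1

# Born rule: P_n = |c_n|^2
P = np.abs(c) ** 2
print(P)                             # [0.36 0.64]

# Unitary evolution U = exp(-i H t) for an arbitrary Hermitian "Hamiltonian"
H = np.array([[1.0, 0.5], [0.5, 2.0]])
w, V = np.linalg.eigh(H)                             # H = V diag(w) V^dagger
U = V @ np.diag(np.exp(-1j * w * 0.7)) @ V.conj().T  # t = 0.7 (arbitrary)
c_t = U @ c                                          # evolved amplitudes

# Total probability is preserved by the unitary evolution:
print(round(float(np.sum(np.abs(c_t) ** 2)), 10))    # 1.0
```

The individual \(|c_n|^2\) generally do change under \(U\) (the Hamiltonian mixes the basis states); only their sum is invariant, which is exactly the "preserves the superposition" statement in the text.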
As long as the quantum system is completely isolated from any external influences, it evolves continuously and deterministically in an exactly predictable (causal) manner. Whenever the quantum system does interact, however, with another particle or an external field, its behavior ceases to be causal and it evolves discontinuously and indeterministically. This acausal behavior is uniquely quantum mechanical. Nothing like it is possible in classical mechanics. Most attempts to "reinterpret" or "reformulate" quantum mechanics are attempts to eliminate this discontinuous acausal behavior and replace it with a deterministic process.

We must clarify what we mean by "the quantum system" and "it evolves" in the previous two paragraphs. This brings us to the mysterious notion of "wave-particle duality." In the wave picture, the "quantum system" refers to the deterministic time evolution of the complex probability amplitude or quantum state vector \(\psi_a\), according to the "equation of motion" for the probability amplitude wave \(\psi_a\), which is the Schrödinger equation, \(i\hbar\,\partial\psi_a/\partial t = H\psi_a\).

The probability amplitude looks like a wave and the Schrödinger equation is a wave equation. But the wave is an abstract quantity whose absolute square is the probability of finding a quantum particle somewhere. It is distinctly not the particle, whose exact position is unknowable while the quantum system is evolving deterministically. It is the probability amplitude wave that interferes with itself. Particles, as such, never interfere (although they may collide).

Note that we never "see" a superposition of particles in distinct states. There is no microscopic superposition in the sense of the macroscopic superposition of live and dead cats (see Schrödinger's Cat). When the particle interacts, with the measurement apparatus for example, we always find the whole particle. It suddenly appears.
For example, an electron "jumps" from one orbit to another, absorbing or emitting a discrete amount of energy (a photon). When a photon or electron is fired at the two slits, its appearance at the photographic plate is sudden and discontinuous. The probability wave instantaneously becomes concentrated at the location of the particle. There is now unit probability (certainty) that the particle is located where we find it to be. This is described as the "collapse" of the wave function. Where the probability amplitude might have evolved under the unitary transformation of the Schrödinger equation to have significant non-zero values in a very large volume of phase space, all that probability suddenly "collapses" (faster than the speed of light, which deeply bothered Albert Einstein) to the location of the particle. Einstein said that some mysterious "spooky action-at-a-distance" must act to prevent the appearance of a second particle at a distant point where a finite probability of appearing had existed just an instant earlier.

Whereas the abstract probability amplitude moves continuously and deterministically throughout space, the concrete particle moves discontinuously and indeterministically to a particular point in space. For this collapse to be a "measurement," the new information about which location (or state) the system has collapsed into must be recorded somewhere in order for it to be "observable" by a scientist. But the vast majority of quantum events - e.g., particle collisions that change the particular states of quantum particles before and after the collision - do not leave an indelible record of their new states anywhere (except implicitly in the particles themselves). We can imagine that a quantum system initially in state \(\psi_a\) has interacted with another system and as a result is in a new state \(\varphi_n\), without any macroscopic apparatus around to record this new state for a "conscious observer."

H. D. Zeh describes how quantum systems may be "measured" without the recording of information:

It is therefore a plausible experimental result that the interference disappears also when the passage [of an electron through a slit] is "measured" without registration of a definite result. The latter may be assumed to have become a "classical fact" as soon as the measurement has irreversibly "occurred". A quantum phenomenon may thus "become a phenomenon" without being observed. This is in contrast to Heisenberg's remark about a trajectory coming into being by its observation, or a wave function describing "human knowledge". Bohr later spoke of objective irreversible events occurring in the counter. However, what precisely is an irreversible quantum event? According to Bohr this event can not be dynamically analyzed. Analysis within the quantum mechanical formalism demonstrates nonetheless that the essential condition for this "decoherence" is that complete information about the passage is carried away in some objective physical form. This means that the state of the environment is now quantum correlated (entangled) with the relevant property of the system (such as a passage through a specific slit). This need not happen in a controllable way (as in a measurement): the "information" may as well form uncontrollable "noise", or anything else that is part of reality. In contrast to statistical correlations, quantum correlations characterize real (though nonlocal) quantum states - not any lack of information. In particular, they may describe individual physical properties, such as the non-additive total angular momentum \(J^2\) of a composite system at any distance.

The Measurement Process

In order to clarify the measurement process, we separate it into several distinct stages, as follows:
• A particle collides with another microscopic particle or with a macroscopic object (which might be a measuring apparatus).
• In this scattering problem, we ignore the internal details of the collision and say that the incoming initial state ψa has changed asymptotically (discontinuously, and randomly = wave-function collapse) into the new outgoing final state φn.

• [Note that if we prepare a very large number of identical initial states ψa, the fraction of those ending up in the final state φn is just the probability |< φn | ψa >|².]

• The information that the system was in state ψa has been lost (its path information has been erased; it is now "noise," as Zeh describes it). New information exists (implicitly in the particle, if not stored anywhere else) that the particle is in state φn.

• If the collision is with a large enough (macroscopic) apparatus, it might be capable of recording the new system state information, by changing the quantum state of the apparatus into a "pointer state" correlated with the new system state. "Pointers" could include the precipitated silver-bromide molecules of a photographic emulsion, the condensed vapor of a Wilson cloud chamber, or the cascaded discharge of a particle detector.

• But this new information will not be indelibly recorded unless the recording apparatus can transfer away from the apparatus entropy greater than the negative entropy equivalent of the new information (to satisfy the second law of thermodynamics). This is the second requirement in every two-step creation of new information in the universe.

• The new information could be useful (it is negative entropy) to an information processing system, for example, a biological cell like a brain neuron. The collision of a sodium ion (Na+) with a sodium/potassium pump (an ion channel) in the cell wall could result in the sodium ion being transported outside the cell, resetting conditions for the next firing of the neuron's action potential, for example.

• The new information could be meaningful to an information processing agent who could not only observe it but understand it.
Now neurons would fire in the mind of the conscious observer that John von Neumann and Eugene Wigner thought was necessary for the measurement process to occur at all. Von Neumann (perhaps influenced by the mystical thoughts of Niels Bohr about mind and body as examples of his "complementarity") saw three levels in a measurement:

1. the system to be observed, including light up to the retina of the observer
2. the observer's retina, nerve tracts, and brain
3. the observer's abstract "ego"

John Bell asked tongue-in-cheek whether no wave function could collapse until a scientist with a Ph.D. was there to observe it. He drew a famous diagram of what he called von Neumann's "shifty split." Bell showed that one could place the arbitrary "cut" (Heisenberg called it the "Schnitt") at various levels without making any difference. But an "objective" observer-independent measurement process ends when irreversible new information has been indelibly recorded (in the photographic plate of Bell's drawing). Von Neumann's physical and mental levels are better discussed as the mind-body problem, not the measurement problem.

The Measurement Problem

So what exactly is the "measurement problem"? For decoherence theorists, the unitary transformation of the Schrödinger equation cannot alter a superposition of microscopic states. Why then, when microscopic states are time evolved into macroscopic ones, don't macroscopic superpositions emerge? According to H. D. Zeh: Because of the dynamical superposition principle, an initial superposition Σ cn | n > does not lead to definite pointer positions (with their empirically observed frequencies). If decoherence is neglected, one obtains their entangled superposition Σ cn | n > | Φn >, that is, a state that is different from all potential measurement outcomes.
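Zeh's entangled superposition Σ cn | n > | Φn > can be illustrated numerically. The following is a minimal sketch (plain NumPy; the two-level system, two-level environment, and equal-weight amplitudes are illustrative assumptions, not anything from the text): when the environment "pointer" states are orthogonal, tracing the environment out of the joint density matrix leaves no off-diagonal (interference) terms, which is the decoherence effect being described.

```python
import numpy as np

# Hypothetical two-level system entangled with a two-level environment,
# illustrating why orthogonal environment ("pointer") states suppress
# the off-diagonal interference terms of the reduced density matrix.

def reduced_density_matrix(c, env_states):
    """Build psi = sum_n c_n |n> x |Phi_n>, return Tr_env |psi><psi|."""
    basis = np.eye(2)
    psi = sum(c[n] * np.kron(basis[n], env_states[n]) for n in range(2))
    rho = np.outer(psi, psi.conj())
    # Reshape to indices (s, e, s', e') and trace over the environment pair.
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

c = np.array([1.0, 1.0]) / np.sqrt(2)   # equal-weight superposition

# Case 1: environment states identical -> no which-state record,
# coherence survives (off-diagonal terms are 0.5).
same = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
print(reduced_density_matrix(c, same))

# Case 2: environment states orthogonal -> a which-state record exists,
# and the off-diagonal terms vanish (decoherence).
orth = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(reduced_density_matrix(c, orth))
```

The design point of the toy model is that nothing "mechanical" erases the interference terms; they disappear from the system's reduced description as soon as the environment carries the passage information away, exactly as Zeh's quoted passage states.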
And according to Erich Joos, another founder of decoherence: It remains unexplained why macro-objects come only in narrow wave packets, even though the superposition principle allows far more "nonclassical" states (while micro-objects are usually found in energy eigenstates). Measurement-like processes would necessarily produce nonclassical macroscopic states as a consequence of the unitary Schrödinger dynamics. An example is the infamous Schrödinger cat, steered into a superposition of "alive" and "dead". The fact that we don't see superpositions of macroscopic objects is the "measurement problem," according to Zeh and Joos. An additional problem is that decoherence is a completely unitary process (Schrödinger dynamics) which implies time reversibility. What then do decoherence theorists see as the origin of irreversibility? Can we time reverse the decoherence process and see the quantum-to-classical transition reverse itself and recover the original coherent quantum world? To "relocalize" the superposition of the original system, we need only have complete control over the environmental interaction. This is of course not practical, just as Ludwig Boltzmann found in the case of Josef Loschmidt's reversibility objection. Does irreversibility in decoherence have the same rationale - "not possible for all practical purposes" - as in classical statistical mechanics? According to more conventional thinkers, the measurement problem is the failure of the standard quantum mechanical formalism (Schrödinger equation) to completely describe the nonunitary "collapse" process. Since the collapse is irreducibly indeterministic, the time of the collapse is completely unpredictable and unknowable. 
Indeterministic quantum jumps are one of the defining characteristics of quantum mechanics, in both the "old" quantum theory, where Bohr wanted radiation to be emitted and absorbed discontinuously when his atom jumped between stationary states, and the modern standard theory with the Born-Jordan-Heisenberg-Dirac "projection postulate." To add new terms to the Schrödinger equation in order to control the time of collapse is to misunderstand the irreducible chance at the heart of quantum mechanics, as first seen clearly, in 1917, by Albert Einstein. When he derived his A and B coefficients for the emission and absorption of radiation, he found that an outgoing light particle must impart momentum hν/c to the atom or molecule, but the direction of the momentum cannot be predicted! Neither can the theory predict the time when the light quantum will be emitted. But the inability to predict both the time and direction of light particle emissions, said Einstein in 1917, is "a weakness in the theory..., that it leaves time and direction of elementary processes to chance (Zufall, ibid.)." It is only a weakness for Einstein, of course, because his God does not play dice. Decoherence theorists too appear to have what William James called an "antipathy to chance." In the original "old" quantum mechanics, Niels Bohr made two assumptions. One was that atoms could only be found in what he called stationary energy states, later called eigenstates. The second was that the observed spectral lines were produced by sudden, discontinuous transitions of the atom between the states. The emission or absorption of quanta of light with energy equal to the energy difference between the states (or energy levels) with frequency ν was given by the formula E2 - E1 = hν, where h is Planck's constant, derived from Planck's radiation law that quantized the allowed values of energy.
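Bohr's frequency condition E2 - E1 = hν can be checked with a quick back-of-the-envelope computation. A sketch in plain Python (the hydrogen level energies En = -13.6 eV / n² are the standard Bohr values; the constants are CODATA figures) for the n = 2 to n = 1 transition:

```python
# Bohr frequency condition E2 - E1 = h*nu, applied to hydrogen's
# n = 2 -> n = 1 (Lyman-alpha) transition.
h  = 6.62607015e-34      # Planck's constant, J*s
c  = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19     # joules per electron volt

def E(n):                # Bohr energy levels of hydrogen, in joules
    return -13.605693 * eV / n**2

delta_E = E(2) - E(1)    # energy carried off by the emitted photon
nu = delta_E / h         # emitted frequency, from E2 - E1 = h*nu
lam = c / nu             # corresponding wavelength

print(f"nu  = {nu:.3e} Hz")       # ~2.47e15 Hz
print(f"lam = {lam*1e9:.1f} nm")  # ~121.6 nm, the observed Lyman-alpha line
```

The agreement with the measured Lyman-alpha line at about 121.6 nm is the kind of evidence that made Bohr's discontinuous transitions convincing despite their strangeness.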
In the now standard quantum theory, formulated by Werner Heisenberg, Max Born, Pascual Jordan, Erwin Schrödinger, Paul Dirac, and others, three foundational assumptions were made: the principle of superposition, the axiom of measurement, and the projection postulate. Since decoherence challenges some of these ideas, we review the standard definitions.

The Principle of Superposition

The fundamental equation of motion in quantum mechanics is Schrödinger's famous wave equation that describes the evolution in time of his wave function ψ,

iℏ ∂ψ/∂t = Hψ.

For a single particle in idealized complete isolation, and for a Hamiltonian H that does not involve magnetic fields, the Schrödinger equation is a unitary transformation that is time-reversible (the principle of microscopic reversibility). Max Born interpreted the square of the absolute value of Schrödinger's wave function as providing the probability of finding a quantum system in a certain state ψn. The quantum (discrete) nature of physical systems results from there generally being a large number of solutions ψn (called eigenfunctions) of the Schrödinger equation in its time-independent form, with energy eigenvalues En:

Hψn = Enψn.

The discrete energy eigenvalues En limit interactions (for example, with photons) to the energy differences En - Em, as assumed by Bohr. Eigenfunctions ψn are orthogonal to one another,

< ψn | ψm > = δnm,

where δnm is the Kronecker delta, equal to 1 when n = m, and 0 otherwise. For a normalized state ψ expanded in these eigenfunctions, the diagonal terms Pn must sum to 1 for the Born rule probabilities to be meaningful:

Σ Pn = Σ |< ψn | ψ >|² = 1.

The off-diagonal terms in the matrix, < ψn | ψm >, are interpretable as interference terms.
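The orthonormality relation < ψn | ψm > = δnm and the normalization of the Born probabilities can be verified numerically for any Hermitian Hamiltonian. A sketch with NumPy (the 3×3 matrix and the test state are made-up illustrative values):

```python
import numpy as np

# Arbitrary 3x3 Hermitian "Hamiltonian" (illustrative values only).
H = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 1.0]])

E, V = np.linalg.eigh(H)          # eigenvalues E_n, eigenvectors as columns

# Orthonormality: <psi_n | psi_m> = delta_nm (the Kronecker delta).
overlaps = V.conj().T @ V
print(np.allclose(overlaps, np.eye(3)))    # True

# Eigenvalue equation: H psi_n = E_n psi_n.
print(np.allclose(H @ V, V @ np.diag(E)))  # True

# Born probabilities for an arbitrary normalized state psi:
psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
P = np.abs(V.conj().T @ psi)**2   # P_n = |<psi_n | psi>|^2
print(P.sum())                     # sums to 1: probabilities are normalized
```

Because the eigenvectors of a Hermitian matrix form a complete orthonormal basis, the probability sum Σ Pn = 1 is automatic for any normalized ψ; that is the numerical content of the normalization condition above.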
When the matrix is used to calculate the expectation values of some quantum mechanical operator O, the off-diagonal terms < ψn | O | ψm > are interpretable as transition probabilities - the likelihood that the operator O will induce a transition from state ψn to ψm. The Schrödinger equation is a linear equation. It has no quadratic or higher power terms, and this introduces a profound - and for many scientists and philosophers a disturbing - feature of quantum mechanics, one that is impossible in classical physics, namely the principle of superposition of quantum states. If ψa and ψb are both solutions of the equation, then an arbitrary linear combination of these,

ψ = caψa + cbψb,

with complex coefficients ca and cb, is also a solution. Together with Born's probabilistic interpretation of the wave function, the principle of superposition accounts for the major mysteries of quantum theory, some of which we hope to resolve, or at least reduce, with an objective (observer-independent) explanation of information creation during quantum processes (which can often be interpreted as measurements).

The Axiom of Measurement

The axiom of measurement depends on the idea of "observables," physical quantities that can be measured in experiments. A physical observable is represented as a Hermitian operator A that is self-adjoint (equal to its Hermitian conjugate, A† = A). The diagonal elements < ψn | A | ψn > of the operator's matrix are interpreted as giving the expectation value of A in the state ψn (when we make a measurement). The off-diagonal n, m elements describe the uniquely quantum property of interference between wave functions and provide a measure of the probabilities for transitions between states n and m. It is these intrinsic quantum probabilities that provide the ultimate source of indeterminism, and consequently of irreducible irreversibility, as we shall see.
The axiom of measurement is then that a large number of measurements of the observable A, known to have eigenvalues An, will result in the number of measurements with value An being proportional to the probability of finding the system in eigenstate ψn with eigenvalue An.

The Projection Postulate

The third novel idea of quantum theory is often considered the most radical. It has certainly produced some of the most radical ideas ever to appear in physics, in attempts to deny it (as the decoherence program appears to do, as do also Everett relative-state interpretations, many worlds theories, and Bohm-de Broglie pilot waves). The projection postulate is actually very simple, and arguably intuitive as well. It says that when a measurement is made, the system of interest will be found in one of the possible eigenstates of the measured observable. We have several possible alternatives for eigenvalues. Measurement simply makes one of these actual, and it does so, said Max Born, in proportion to the absolute square of the probability amplitude wave function ψn. In this way, ontological chance enters physics, and it is partly this fact of quantum randomness that bothered Albert Einstein ("God does not play dice") and Schrödinger (whose equation of motion is deterministic). When Einstein derived the expressions for the probabilities of emission and absorption of photons in 1917, he lamented that the theory seemed to indicate that the direction of an emitted photon was a matter of pure chance (Zufall), and that the time of emission was also statistical and random, just as Rutherford had found for the time of decay of a radioactive nucleus. Einstein called it a "weakness in the theory."

What Decoherence Gets Right

Allowing the environment to interact with a quantum system, for example by the scattering of low-energy thermal photons or high-energy cosmic rays, or by collisions with air molecules, surely will suppress quantum interference in an otherwise isolated experiment.
But this is because large numbers of uncorrelated (incoherent) quantum events will "average out" and mask the quantum phenomena. It does not mean that wave functions are not collapsing. They are, at every particle interaction. Decoherence advocates describe the environmental interaction as "monitoring" of the system by continuous "measurements." Decoherence theorists are correct that every collision between particles entangles their wave functions, at least for the short time before decoherence suppresses any coherent interference effects of that entanglement. But in what sense is a collision a "measurement"? At best, it is a "pre-measurement." It changes the information present in the wave functions before the collision. But the new information may not be recorded anywhere (other than being implicit in the state of the system). All interactions change the state of a system of interest, but not all leave the "pointer state" of some measuring apparatus with new information about the state of the system. So environmental monitoring, in the form of continuous collisions by other particles, is changing the specific information content of the system, the environment, and any measuring apparatus (if there is one). But if there is no recording of new information (negative entropy created locally), the system and the environment may be in thermodynamic equilibrium. Equilibrium does not mean that decoherence monitoring of every particle is not continuing. It is. There is no such thing as a "closed system." Environmental interaction is always present. If a gas of particles is not already in equilibrium, it may be approaching thermal equilibrium. This happens when any non-equilibrium initial conditions (Zeh calls these a "conspiracy") are being "forgotten" by erasure of path information during collisions. Information about initial conditions is implicit in the paths of all the particles.
This means that, in principle, the paths could be reversed to return to the initial, lower entropy, conditions (the Loschmidt paradox). Erasure of path information could be caused by quantum particle-particle scattering (our standard view) or by decoherence "monitoring." How are these two related?

The Two Steps Needed in a Measurement that Creates New Information

More than the assumed collapse of the wave function (von Neumann's Process 1, Pauli's measurement of the first kind) is needed. Indelibly recorded information, available for "observations" by a scientist, must also satisfy the second requirement for the creation of new information in the universe. Everything created since the origin of the universe over ten billion years ago has involved just two fundamental physical processes that combine to form the core of all creative processes. These two steps occur whenever even a single bit of new information is created and survives in the universe.

• Step 1: A quantum process - the "collapse of the wave function." The formation of even a single bit of information that did not previously exist requires the equivalent of a "measurement." This "measurement" does not involve a "measurer," an experimenter or observer. It happens when the probabilistic wave function that describes the possible outcomes of a measurement "collapses" and an eigenstate of a matter or energy particle is actually changed. If the probability amplitude wave function did not collapse, unitary evolution would simply preserve the initial information.

• Step 2: A thermodynamic process - a local reduction, but cosmic increase, in the entropy. The second law of thermodynamics requires that the overall cosmic entropy always increases. When new information is created locally in step 1, some energy (with positive entropy greater than the negative entropy of the new information) must be transferred away from the location of the new bits, or they will be destroyed when local thermodynamic equilibrium is restored.
This can only happen in a locality where flows of matter and energy with low entropy are passing through, keeping it far from equilibrium. The two physical processes in the creative process, quantum physics and thermodynamics, are somewhat daunting subjects for philosophers, and even for many scientists, including decoherence advocates.

Quantum Level Interactions Do Not Create Lasting Information

The overwhelming number of collisions of microscopic particles like electrons, photons, atoms, molecules, etc., do not result in observable information about the collisions. The lack of observations and observers does not mean that there have been no "collapses" of wave functions. The idea that the time evolution of the deterministic Schrödinger equation continues forever in a unitary transformation that leaves the wave function of the whole universe undecided and in principle reversible at any time is an absurd and unjustified extrapolation from the behavior of the ideal case of a single perfectly isolated particle. The principle of microscopic reversibility applies only to such an isolated particle, something unrealizable in nature, as the decoherence advocates know with their addition of environmental "monitoring." Experimental physicists can isolate systems from the environment enough to "see" the quantum interference (but again, only in the statistical results of large numbers of identical experiments).

The Emergence of the Classical World

In the standard quantum view, the emergence of macroscopic objects with classical behavior arises statistically for two reasons involving large numbers:

1. The law of large numbers (from probability and statistics)

• When a large number of material particles is aggregated, properties emerge that are not seen in individual microscopic particles. These properties include ponderable mass, solidity, classical laws of motion, gravity orbits, etc.
• When a large number of quanta of energy (photons) are aggregated, properties emerge that are not seen in individual light quanta. These properties include continuous radiation fields with wavelike interference.

2. The law of large quantum numbers (Bohr Correspondence Principle).

Decoherence as "Interpreted" by Standard Quantum Mechanics

Can we explain the following in terms of standard quantum mechanics?

1. the decoherence of quantum interference effects by the environment
2. the measurement problem, viz., the absence of macroscopic superpositions of states
3. the emergence of "classical" adequately determined macroscopic objects
4. the logical compatibility and consistency of two dynamical laws - the unitary transformation and the "collapse" of the wave function
5. the entanglement of "distant" particles and the appearance of "nonlocal" effects such as those in the Einstein-Podolsky-Rosen experiment

Let's consider these point by point.

1. The standard explanation for the decoherence of quantum interference effects by the environment is that when a quantum system interacts with the very large number of quantum systems in a macroscopic object, the averaging over independent phases cancels out (decoheres) coherent interference effects.

2. In order to study interference effects, a quantum system is isolated from the environment as much as possible. Even then, note that microscopic interference is never "seen" directly by an observer. It is inferred from probabilistic theories that explain the statistical results of many identical experiments. Individual particles are never "seen" as superpositions of particles in different states. When a particle is seen, it is always the whole particle and nothing but the particle. The absence of macroscopic superpositions of states, such as the infamous linear superposition of live and dead Schrödinger Cats, is therefore no surprise.

3.
The standard quantum-mechanical explanation for the emergence of "classical" adequately determined macroscopic objects is that they result from a combination of a) Bohr's correspondence principle in the case of large quantum numbers, together with b) the familiar law of large numbers in probability theory, and c) the averaging over the phases described in point 1. Heisenberg indeterminacy relations still apply, but the individual particles' indeterminacies average out, and the remaining macroscopic indeterminacy is practically unmeasurable.

4. Perhaps the two dynamical laws would be inconsistent if applied to the same thing at exactly the same time. But the "collapse" of the wave function (von Neumann's Process 1, Pauli's measurement of the first kind) and the unitary transformation that describes the deterministic evolution of the probability amplitude wave function (von Neumann's Process 2) are used in a temporal sequence: first a wave of possibilities, then an actual particle. The first process describes what happens when quantum systems interact, in a collision or a measurement, when they become indeterministically entangled. The second then describes their deterministic evolution (while isolated) along their mean free paths to the next collision or interaction. One dynamical law applies to the particle picture, the other to the wave picture.

5. The paradoxical appearance of nonlocal "influences" of one particle on an entangled distant particle, at velocities greater than light speed, is a consequence of a poor understanding of both the wave and particle aspects of quantum systems. The confusion usually begins with a statement such as "consider a particle A here and a distant particle B there." When entangled in a two-particle probability amplitude wave function, the two identical particles are "neither here nor there," just as the single particle in a two-slit experiment does not "go through" the slits.
It is the single-particle probability amplitude wave that must "go through" both slits if it is to interfere. For a two-particle probability amplitude wave that starts its deterministic time evolution when the two identical particles are produced, it is only the probability of finding the particles that evolves according to the unitary transformation of the Schrödinger wave equation. It says nothing about where the particles "are." Now if and when a particle is measured somewhere, we can then label it particle A. Conservation of energy and momentum tell us immediately that the other identical particle is now symmetrically located on the other side of the central source of particles. If the particles are electrons (as in David Bohm's version of EPR), conservation of spin tells us that the now distant particle B must have its spin opposite to that of particle A if they were produced with a total spin of zero. Nothing is sent from particle A to B. The deduced properties are the consequence of conservation laws that are true for much deeper reasons than the puzzles of nonlocal entanglement. The mysterious instantaneous values for the properties present exactly the same mystery that bothered Einstein about a single-particle wave function having values all over a photographic screen at one instant, then having values only at the position of the located particle in the next instant, apparently violating special relativity.

To summarize: Decoherence by interactions with the environment can be explained perfectly by multiple "collapses" of the probability amplitude wave function during interactions with environment particles. Microscopic interference is never "seen" directly by an observer, therefore we do not expect ever to "see" macroscopic superpositions of live and dead cats. The "transition from quantum to classical" systems is the consequence of laws of large numbers.
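The claim that large numbers of uncorrelated quantum events "average out" the interference can be demonstrated with a toy calculation. A sketch in NumPy (the two unit amplitudes and the uniformly random relative phase are illustrative assumptions): with a fixed relative phase the cross term survives, while averaging over many uncorrelated phases leaves only the classical sum of intensities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unit amplitudes: I = |a1 + a2 * exp(i*phi)|^2 = 2 + 2*cos(phi).
def intensity(phi):
    return np.abs(1.0 + np.exp(1j * phi))**2

# Coherent case: a fixed relative phase gives full interference.
print(intensity(0.0))      # 4.0 (constructive)
print(intensity(np.pi))    # ~0.0 (destructive)

# Incoherent case: average over many uncorrelated random phases.
phis = rng.uniform(0, 2 * np.pi, size=200_000)
mean_I = intensity(phis).mean()
print(mean_I)              # ~2.0: the cross term averages away,
                           # leaving |a1|^2 + |a2|^2
```

Nothing in the incoherent case requires the amplitudes to stop superposing; the interference term is simply washed out of the statistics, which is consistent with the article's point that collapses at every interaction, not the absence of collapses, produce the classical-looking average.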
The quantum dynamical laws necessarily include two phases, one needed to describe the continuous deterministic motions of probability amplitude waves and the other the discontinuous indeterministic motions of physical particles. The mysteries of nonlocality and entanglement are no different from those of standard quantum mechanics as seen in the two-slit experiment. It is just that we now have two identical particles and their wave functions are nonseparable.

For Teachers: "The Role of Decoherence in Quantum Mechanics," Stanford Encyclopedia of Philosophy
Thursday, April 25, 2019

Yes, scientific theories have to be falsifiable. Why do we even have to talk about this?

The task of scientists is to find useful descriptions for our observations. By useful I mean that the descriptions are either predictive or explain data already collected. An explanation is anything that is simpler than just storing the data itself. A hypothesis that is not falsifiable through observation is optional. You may believe in it or not. Such hypotheses belong in the realm of religion. That much is clear, and I doubt any scientist would disagree with that. But troubles start when we begin to ask just what it means for a theory to be falsifiable. One runs into the following issues:

1. How long should it take to make a falsifiable prediction (or postdiction) with a hypothesis?

If you start out working on an idea, it might not be clear immediately where it will lead, or even if it will lead anywhere. That could be because mathematical methods to make predictions do not exist, or because crucial details of the hypothesis are missing, or just because you don’t have enough time or people to do the work. My personal opinion is that it makes no sense to require predictions within any particular time, because such a requirement would inevitably be arbitrary. However, if scientists work on hypotheses without even trying to arrive at predictions, such a research direction should be discontinued. Once you allow this to happen, you will end up funding scientists forever because falsifiable predictions become an inconvenient career risk.

2. How practical should a falsification be?

Some hypotheses are falsifiable in principle, but not in practice. Even when a test is possible in practice, it might take so long that for all practical purposes the hypothesis is unfalsifiable. String theory is the obvious example. It is testable, but no experiment in the foreseeable future will be able to probe its predictions.
A similar consideration goes for the detection of quanta of the gravitational field. You can measure those, in principle. But with existing methods, you will still be collecting data when the heat death of the universe chokes your ambitious research agenda. Personally, I think predictions for observations that are not presently measurable are worthwhile because you never know what future technology will enable. However, it makes no sense to work out the details of futuristic detectors. That belongs in the realm of science fiction, not science. I do not mind if scientists on occasion engage in such speculation, but it should be the exception rather than the norm.

3. What even counts as a hypothesis?

In physics we work with theories. The theories themselves are based on axioms, which are mathematical requirements or principles, e.g. symmetries or functional relations. But neither theories nor principles by themselves lead to predictions. To make predictions you always need a concrete model, and you need initial conditions. Quantum field theory, for example, does not make predictions – the standard model does. Supersymmetry also does not make predictions – only supersymmetric models do. Dark matter is neither a theory nor a principle, it is a word. Only specific models for dark matter particles are falsifiable. General relativity does not make predictions unless you specify the number of dimensions and choose initial conditions. And so on.

In some circumstances, one can arrive at predictions that are “model-independent”, which are the most useful predictions you can have. I scare-quote “model-independent” because such predictions are not really independent of the model, they merely hold for a large number of models. Violations of Bell’s inequality are a good example. They rule out a whole class of models, not just a particular one. Einstein’s equivalence principle is another such example.
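Bell-inequality violations, mentioned here as a model-independent prediction, can be illustrated with the CHSH combination. A sketch in plain Python (the singlet-state correlation E(a, b) = -cos(a - b) and the optimal analyzer angles are textbook values, not anything specific to this post): every local hidden-variable model obeys |S| ≤ 2, while quantum mechanics reaches 2√2.

```python
import math

# Quantum correlation for spin measurements on a singlet pair,
# at analyzer angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard optimal angle choices for the CHSH test.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83, exceeding the classical bound of 2
```

The prediction is "model-independent" in exactly the sense described above: an experiment measuring |S| > 2 rules out the entire class of local hidden-variable models at once, with no commitment to any particular one.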
Troubles begin if scientists attempt to falsify principles by producing large numbers of models that all make different predictions. This is, unfortunately, the current situation in both cosmology and particle physics. It documents that these models are strongly underdetermined. In such a case, no further models should be developed because that is a waste of time. Instead, scientists need to find ways to arrive at more strongly determined predictions. This can be done, e.g., by looking for model-independent predictions, or by focusing on inconsistencies in the existing theories. This is not currently happening because it would make it more difficult for scientists to produce predictions, and hence decrease their paper output. As long as we continue to think that a large number of publications is a signal of good science, we will continue to see wrong predictions based on useless models.

4. Falsifiability is necessary but not sufficient.

A lot of hypotheses are falsifiable but just plain nonsense. Really, arguing that a hypothesis must be science just because you can test it is typical crackpot thinking. I previously wrote about this here.

5. Not all aspects of a hypothesis must be falsifiable.

It can happen that a hypothesis which makes some falsifiable predictions leads to unanswerable questions. An often-named example is that certain models of eternal inflation seem to imply that besides our own universe there exist an infinite number of other universes. These other universes, however, are unobservable. We have a similar conundrum already in quantum mechanics. If you take the theory at face value, then the question of what a particle does before you measure it is not answerable. There is nothing wrong with a hypothesis that generates such problems; it can still be a good theory, and its non-falsifiable predictions certainly make for good after-dinner conversations. However, debating non-observable consequences does not belong in scientific research.
Scientists should leave such topics to philosophers or priests. This post was brought on by Matthew Francis’ article “Falsifiability and Physics” for Symmetry Magazine.

1. You might find interesting Lee McIntyre's book The Scientific Attitude (see my review: which spends quite a lot of time on the demarcation issue (either between science and non-science or science and pseudoscience).

2. Still, if a proposed mechanism as a hypothesis were especially odd but there were no other reasonable explanation yet, would crazy ideas also be considered science? (Susskind's words adapted).

1. If the crazy ideas pass experimental verification, then those ideas are considered proven science.

3. How would you classify an analysis of this kind? A nice argument to account for the Born rule within MWI ("Less is More: Born's Rule from Quantum Frequentism"). I believe it's fair to say the paper concludes that the experimental validity of the Born rule implies the universe is necessarily infinite. If we never observe a violation of the Born rule, would this hypothesis qualify as science?

1. This seems to be a perfect case of falsifiability. If a Born rule violation is never observed this is not proven, but there is a confidence level, maybe some form of statistical support, for this theory. If the Born rule is found to be violated, the theory is false, or false outside some domain of observation. Since a quantum gravity vacuum is not so far well defined, and there are ambiguities such as with Boulware vacua, it could be that the Born rule is violated in quantum gravity.

2. Science has defied categories since the start. If anyone is responsible for defining science in the modern context it is probably Galileo. Yet we have different domains of science that have different criteria for what is meant by testable.
A paleontologist never directly experiences the evolution of life in the past, but these time capsules called fossils serve to lead to natural selection as the most salient understanding of speciation. Astronomy studies objects and systems at great distances, where we will only ever visit some tiny epsilon of the nearest with probes. So we have to make enormous inferences about things. From parallax of stars, to Cepheid variables, to red-shift of galaxies, to the luminosity of type Ia supernovae, we have this chain of meter sticks to measure the scale of the universe. We measure not the Higgs particle or the T-quark, but the daughter products from which we infer the existence of these particles and fields. We do not make observations that are as direct as some purists would like. As Eusa and Helbig point out, there are aspects of modern theories which have unobservable aspects. Susskind does lean heavily on the idea of theories that are of a nature "it can't be any other way." General relativity predicts a lot of things about black hole interiors. That is a big toughy. No one will ever get close to a black hole that could be entered before being pulled apart; the closest is Sgr A* at 27k light years away. Even if theoretical understanding of black hole interiors is confirmed in such a venture, that will remain a secret held by those who entered a black hole. It is plausible that aspects of black hole interiors can have some indirect physics with quantum black holes, but we will not be generating quantum black holes any day soon. Testability and falsifiability are the gold standard of science. Theories that have their predictions confirmed are at the top. Quantum mechanics is probably the most confirmed part of modern physics. General relativity has a good track record, and the detection of gravitational radiation is a big feather in the GR war bonnet. Other physics such as supersymmetry are really hypotheses and not theories in a rigorous sense. 
Supersymmetry also is a framework that one puts phenomenology on. So far all that phenomenology of light SUSY partners looks bad. When I started graduate school I was amazed that people were interested in SUSY at accelerator energies. At first I thought it was properly an aspect of quantum gravitation. I still think this may be the case. At best some form of the split SUSY Arkani-Hamed proposes may play a role at low energy, which I think might be 1/8th SUSY or something. So these ideas are an aspect of science, but they have not risen to the level of a battle-tested theory. IMO string theory really should be called the string hypothesis; it is not a theory --- even if I might think there may be some stringy aspect to nature. There is a certain character in a small country sandwiched between Austria, Germany and Poland who has commented on this and ridicules the idea of falsifiability. I just checked his webpage and sure enough he has an entry on this. I suppose his pique on this is because he holds to an idea about the world that producing 35 billion tons of carbon in CO_2 annually into the atmosphere has no climate influence. He upholds a stance that has been falsified; the evidence for AGW is simply overwhelming, and by now any scientific thinker should have abandoned climate denialism. Curious how religion and ideology can override reason, even with the best educated. 3. Lawrence Crowell wrote: This assumption may not be the case. The theory of Hawking radiation has been verified in supersonic-wave-based analog black holes in the lab. Yes, entangled virtual items have been extracted from the vacuum and made real. The point to be explored in the assumptions that underlie science is whether such a system using Hawking radiation can be engineered to greatly amplify the process of virtual energy realization to the point where copious energy is extracted from nothing. 
When does such a concept become forbidden as a violation of the conservation of energy to consider as being real? In this forbidden case, it is not so much the basic science of the system, but the point where the quantity of its energy production becomes unthinkable since the conservation of energy is inviolate. 4. There are optical analogues of black holes and Hawking radiation. Materials that trap light can be made to appear black-hole-like. This property can be tuned with a reference beam of some type. There is no "something from nothing" here. The energy comes from the energy employed to establish the BH analogue. Black holes have a time-like Killing vector, which in a Noether theorem sense means there is a constant of motion for energy. Mass-energy is conserved. Another example: GR says a lot about what goes on inside the event horizon of a black hole, which (classically) is by definition non-observable. But of course this is not a mark against GR. Similarly, the unobservability of other universes in (some types of) the multiverse is not a mark against the theories which have the multiverse as a consequence, as long as they are testable in other ways. It is not GR per se that is responsible for the event horizon (or the singularity) of the modern 'relativistic' black hole. Rather it is the Schwarzschild solution to the GR field equations that produces both of those characteristics. If Schwarzschild had incorporated the known fact that the speed of light varies with position in a gravitational field, we probably wouldn't be talking about black holes. 2. Here is another culprit: the renormalisation group itself, as David Tong says here (pdf p. 62): “The renormalisation group isn't alone in hiding high-energy physics from us. In gravity, cosmic censorship ensures that any high curvature regions are hidden behind horizons of black holes while, in the early universe, inflation washes away any trace of what took place before. 
Anyone would think there's some kind of conspiracy going on....” 3. Phillip, I believe I have said this before but here we go again: 1) What happens inside a black hole horizon is totally observable. You just cannot come back and tell us about it. 2) We have good reason to think that the inside of a black hole does play a role for our observations and that, since the black hole evaporates, it will not remain disconnected. For these reasons the situation with black holes is very different from that of postulating other universes which you cannot visit and that are and will remain forever causally disconnected. 4. I think other pocket cosmologies and black hole interiors are actually fairly comparable. The interior of a black hole probably has some entanglement role with the exterior world. We might have some nonlocal phenomena with other pocket worlds, or these pockets may interact. There are some data coming out that could upend a fair amount of physics and cosmology. The CMB data is compatible with a Hubble parameter H = 67 km/s/Mpc, while data from galaxies out to z > 8 indicates H = 74 km/s/Mpc. The error bars on these data sets do not overlap. Something odd is happening. This could mean possibly three things, four if I include something completely different we have no clue about. The first is that the universe is governed by phantom energy. The vacuum energy evolves as dρ/dt = -3H(p + ρ) with p = wρ, and for w < -1 we have dρ/dt = -3H(1 + w)ρ > 0. This means the observable universe will in time cease to expand merely exponentially; instead the expansion diverges in finite time. This is the big rip. A second possibility is that this pocket world interacted with another at some point. If the two regions had different vacuum energy then maybe some of that from the other pocket spilled into this world. 
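As an aside, the sign of the continuity equation in the phantom-energy possibility above is easy to sanity-check with a toy numerical integration. The sketch below is illustrative only: the units are arbitrary, the closure H = √ρ (i.e. setting 8πG/3 = 1 in the Friedmann equation) is an assumption made for simplicity, and the function name is mine.

```python
import math

def evolve_density(w, rho0=1.0, dt=1e-4, steps=20000):
    """Euler-integrate the FLRW continuity equation
        drho/dt = -3 * H * (1 + w) * rho,  with  H = sqrt(rho),
    for a fluid with equation of state p = w * rho.
    Units are arbitrary (8*pi*G/3 = 1)."""
    rho = rho0
    for _ in range(steps):
        H = math.sqrt(rho)
        rho += -3.0 * H * (1.0 + w) * rho * dt
    return rho
```

For w = -1 (a cosmological constant) the density stays exactly constant; for w < -1 (phantom energy) it grows, which is the runaway behind the big rip; for w > -1 it dilutes as the universe expands.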
The region we observe out to around 12 billion light years and beyond the cosmic horizon then had this extra vacuum energy fill in sometime in the first few hundred million years of this observable world. Another is that quantum states in our pocket world have some entanglement with quantum states in the inflationary region or in other pocket regions. There may then be some process similar to the teleportation of states that is increasing the vacuum energy of this pocket. It might be that this happens generally, or that it occurs depending on the conditions the pocket is in within the inflationary spacetime. Susskind talks about entangled black holes, and I think more realistically there might be entanglement of a few quantum states on a black hole with some quantum states on another, maybe in another pocket world or cosmology, and then another set entangled with a BH elsewhere, and there is then a general partition of these states that is similar to an integer partition. If so then it is not so insane to think of the vacuum in this pocket world entangled with vacua elsewhere. The fourth possibility is one that no one has thought of. At any rate, we are at the next big problem in cosmology. This discrepancy in the Hubble parameter from the CMB and from more recent galaxies is not going away. 5. Regarding the fourth possibility... The CMB tells us about the state that the universe existed in when it was very young. There is no reason to assume that the expansion of the universe is constant. The associated projections about the proportions of the various types of matter and energy that existed at that early time are no longer reliable since the expansion rate of the universe has increased. It is likely that the associated proportions of the various types of matter and energy that exist now have changed from their primordial CMB state. 
This implies that there is a vacuum-based variable process in place that affects the proportions of the various types of matter and energy as an ongoing activity that has always existed and that has caused the Hubble parameter derived from the CMB to differ from its current measured value. 6. We ultimately get back to this problem with what we mean by energy in general relativity. I wrote the following on stack exchange on how a restricted version of FLRW dynamics can be derived from Newton's laws. The ADM space-plus-time approach to general relativity results in the constraints NH = 0 and N^iH_i = 0, which are the Hamiltonian and momentum constraints respectively. The Hamiltonian constraint, or what is energy on a contact manifold, means there is no definition of energy in general relativity for most spacetimes. The only spacetimes where energy is explicitly defined are those with an asymptotically flat region, such as black holes or Petrov type D solutions. In a Gauss's law setting for a general spacetime there is no naturally defined surface where one can identify mass-energy. Either the surface can never contain all mass-energy, or the surface has diffeomorphic freedom that makes it inappropriate (coordinate-dependent, non-covariant, etc.) for defining an observable such as energy. The FLRW equations though are a case with H = 0, with kinetic and potential parts E = 0 = ½mȧ^2 - 4πGmρa^2/3 for a the scale factor on distance x = ax_0, where x_0 is some ruler distance chosen by the analyst and not nature. Further, ȧ = da/dt for time t on the Hubble frame. From there the FLRW equations can be seen. The density has various dependencies: for matter ρ ~ a^{-3}, for radiation ρ ~ a^{-4}, and for the vacuum ρ is generally assumed to be constant. The Hamiltonian constraint has its quantum mechanical analogue in the Wheeler-DeWitt equation HΨ[g] = 0, which looks sort of like the Schrödinger equation HΨ[g] = i∂Ψ/∂t, but where i∂Ψ/∂t = 0. The question is then what we mean by a vacuum. 
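For readers who want the Newtonian step above spelled out, it is a standard textbook derivation (with x_0 the arbitrary ruler distance defined in the comment):

```latex
% Newtonian energy of a test mass at physical distance x = a x_0:
E \;=\; \tfrac{1}{2}\, m\, \dot{a}^{2} x_{0}^{2} \;-\; \frac{G M m}{a x_{0}},
\qquad M = \frac{4\pi}{3}\,\rho\,(a x_{0})^{3}.
% Imposing the Hamiltonian constraint E = 0 and cancelling m x_0^2:
\tfrac{1}{2}\,\dot{a}^{2} \;=\; \frac{4\pi G}{3}\,\rho\, a^{2}
\quad\Longrightarrow\quad
\left(\frac{\dot{a}}{a}\right)^{2} \;=\; \frac{8\pi G}{3}\,\rho .
```

The last line is the first Friedmann equation; inserting ρ ~ a^{-3} (matter), ρ ~ a^{-4} (radiation), or constant ρ (vacuum) then gives the familiar expansion laws.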
The time-like Killing vector is K_t = K∂/∂t, and we can think of this as a case where the timelike Killing vector is zero. This generally is the case, and the notable cases where K_t is not zero are black holes. We can however adjust the WDW equation with the inclusion of a scalar field φ, and the Hamiltonian can be extended to include this with HΨ[g, φ] = 0, such that there is a local oscillator term with a local meaning to time. This however is not extended everywhere, unless one is happy with pseudotensors. The FLRW equation is sort of such a case; it is appropriate for the Hubble frame. One needs a special frame, usually tied to the global symmetry of the spacetime, to identify this. However, transformations can lead to troubles. Even with black holes there are Boulware vacua, and one has no clear definition of what is a quantum vacuum. I tend to think this may be one thing that makes quantum gravitation different from other quantum fields. 5. But isn't it advantageous for proponents of something like string theory to not have anything that is falsifiable... and continue with the hope the "results" are just around the corner... and for the $$ to keep flowing... forever? 6. >1. How long should it take to make a falsifiable prediction or postdiction ... >2. How practical should a falsification be? It doesn't make much difference how long it takes; the real question is how much work, and/or time, and/or money it should take to develop an executable falsifiable outcome. In the final analysis this comes down to whether we should pay person A, B, or C to work on hypotheses X, Y or Z. It is a relative-value question, and at times it is very difficult to rank hypotheses in a way that lets us sort them. 
This is especially true when "beauty" and "naturalness" can generate enthusiasm among researchers; those can render the people who know the most about the prospects for hypotheses X, Y or Z incapable of properly ranking them; their bias is to vote on the hypothesis most pleasing if it were true, instead of the hypothesis most likely to be true or most testable, or that would take the fewest personnel-hours to pursue. In the end there is a finite amount of money per year, thus a finite number of personnel-hours, equipment, lab space, computer time and engineering support. In the end it is going to be portioned out, one way or another. The problem is in judging the unknowns: 1) How many $ are we away from an executable falsifiable proposal? 2) How much time and money will it cost? 3) How likely is a proof/refutation? 4) How much impact will a proof/refutation of the hypothesis have on the field in question? Ultimately we need stats we are unlikely to ever develop! In such a case, one solution is to sidestep the reasoning and engage in something like the (old) university model: Professors get paid to work on whatever they feel like, as long as they want, in return for spending half their week teaching students. That can include some amount for experimentation and equipment. "Whatever they want" can include the work of other researchers, so they can collaborate and pool resources. This kind of low-level "No Expectations" funding can be provided by governments. Additional funding would not be provided until the work was developed to the point that the above "unknowns" are plausibly answered, meaning when they DO know how to make a falsifiable proposal for an experiment. As for the thousands of dead-ends they might engage in: That's self-regulating; they would still like to work on something relevant with experiments. But if their goal is just to invent new mathematics or whatever else that bears no relationship to the real world, that's fine. 
Not all knowledge requires practical application. 7. It's nice that you toy in your own terms with the familiar philosophical notion of under-determination by experience (remarked on by the physicist Duhem as early as the 19th century, and leveraged against Popper and positivist philosophies in the 1950's). Maybe the problem is more widespread than you think, and I would be tempted to add a (6): coming up with clear-cut falsification criteria requires assuming an interpretative and methodological framework. To take just the most extreme cases, one should exclude the possibilities that experimenters are systematically hallucinating, and other radical forms of skepticism. But this also includes a set of assumptions that are part of scientific methodology on how to test hypotheses, what kinds of observations are robust, what statistical analysis or inductive inferences are warranted, etc. These things are shared by scientists, because they belong to the same culture, and the general success of science brings confidence in them. Yet all these methodological and interpretative principles are not strictly speaking falsifiable. Now the problem is: if what counts as falsification rests on non-falsifiable methodological assumptions, how can anything be absolutely falsifiable? And I think the answer is that nothing is strictly falsifiable, but only relative to a framework that is acceptable for its general fruitfulness. 1. "one should exclude the possibilities that experimenters are systematically hallucinating," Yes, we're all hallucinating that our computers, which confirm the quantum behaviour of the electron quadrillions of times a second, are working; and that our car satnavs, which confirm time dilation in GR trillions of times a second, are working. *Real* scientists are the only people who are *not* hallucinating. 2. Steven Evans, You're missing the point. 
I'm talking about everything that you have to implicitly assume to trust experimental results; in your example, the general reliability of computers and the fact that they indeed do what you claim they do. I personally don't doubt it. It seems absurd of course. The point is that any falsification ultimately rests on many other assumptions; there's no falsification simpliciter. 3. @Steven Evans maybe you're under the impression that I'm making an abstract philosophical point that is not directly relevant to how science works or should work. But no: take the OPERA experiment that apparently showed that neutrinos travel faster than light. It took several weeks for scientists to understand what went wrong, and why relativity was not falsified. If anything, this shows that falsifying a theory is not a simple recipe, not just a matter of observing that the theory is false. (And the bar can be more or less high depending on how well the theory is established, so pragmatic epistemic cost considerations enter into the picture.) My point is simply this: what counts as falsification is not a simple matter; a lot of assumptions and pragmatic aspects come in. Do you disagree with this? 4. @Quentin Ruyant You are. Take 1 kilogram of matter, turn it into energy. Does the amount of energy = c^2? Put an atomic clock on an orbiting satellite. Does it run faster than an atomic clock on the ground by the amount predicted by Einstein? Building the instruments is presumably tricky, checking the theories not so much. OPERA was a mistake, everybody knew it was a mistake. Where there is an issue, the issue is not a subtle point about falsifiability, it is far more mundane - people telling lies about there being empirical evidence to support universal fine-tuning or string theory. Or people claiming the next-gen collider is not a hugely expensive punt. The people saying this are frauds. In the medical or legal professions they would be struck off and unable to practise further. 8. 
"To make predictions you always need a concrete model..." The problem is that qualitative (concrete) modeling is a lost art in modern theoretical physics. All of the emphasis is on quantitative modeling (math). The result is this: It is a waste of time to develop more quantitative model variants on the same old concrete models, but what is desperately needed are new qualitative models. All of the existing quantitative models are variations on qualitative models that have been around for the better part of a century (the big bang and quantum theory). The qualitative models are the problem. Unfortunately, with its mono-focus on quantitative analysis, modern theoretical physics does not appear to have a curriculum or an environment conducive to properly evaluating and developing new qualitative models. I want to be clear that I am not suggesting the abandonment of quantitative for qualitative reasoning. What is crucial is a rebalancing between the two approaches, such that in reflecting back on one another, the possibility of beneficial, positive and negative feedback loops is introduced. The difficulty in achieving such a balance lies in the fact that qualitative modeling is not emphasized, if taught at all, in the scientific academy. Every post-grad can make new mathematical models. Nobody even seems to think it necessary to consider, let alone construct, new qualitative models. At minimum, if the qualitative assumptions made a century ago aren't subject to reconsideration, "the crisis in physics" will continue. 9. Thanks for stating this so clearly. 10. Some non-falsifiable hypotheses are not optional. These are known as axioms or assumptions (aka religion) and no science is possible without them. For instance, cosmology would be dead without the unverifiable assumption (religious belief) that the laws of physics are universal in time and space. Sum of Axiomatic Beliefs = Religion …therefore, Science = Observation + Religion 11. 
The demarcation problem of science versus pseudo science was of course pondered long before Karl Popper. Aristotle for one was quite interested in solving it. No indication that this quandary will ever be satisfactorily resolved. Though I do consider Popper’s “falsifiability” heuristic to be reasonably useful, I’m not hopeful about the project in general. I love when scientists remove their science hats in order to put on philosophy hats! It’s an admission that failure in philosophy causes failure in science. And why does failure in philosophy cause failure in science? Because philosophy exists at a more fundamental level of reality exploration than science does. Without effective principles of metaphysics, epistemology, and value, science lacks an effective place to stand. (Apparently “hard” forms of sciences are simply less susceptible than “personal” fields such as psychology, though physics suffers here as well given that we’re now at the outer edges of human exploration in this regard.) I believe that it would be far more effective to develop a new variety of philosopher rather than try to define a hard difference between “science” and “pseudo science”. The sole purpose of this second community of philosophers would be to develop what science already has — respected professionals with their own generally accepted positions. Though small initially, if scientists were to find this community’s principles of metaphysics, epistemology, and value useful places from which to develop scientific models, this new community should become an essential part of the system, or what might then be referred to as “post puberty science”. 1. One problem with this proposal is the use of the word "metaphysics". To me this carries connotations of God, religion, angels, demons and magic. It means "beyond physics," and in the world today it is synonymous with the "supernatural" (i.e. beyond natural) and used to indicate faith which is "beyond testable or verifiable or falsification". 
I hear "metaphysics" and I run for the hills. Unless their position on metaphysics is that there are no metaphysics, I cannot imagine why I would have any professional respect for them. Their organization would be founded on a cognitive error. I think it is likely possible to develop a "science of science" by categorizing and then generalizing what we think are the failures of science, and why. From those one might derive or discover useful new axioms of science, self-evident claims upon which to rest additional reasoning about what is and is not "science". Part of the problem may indeed be that we have not made such axioms explicit; instead we rely on instinct and absorption of what counts as self-evident. That is obviously an approach ripe for error, and difficult to correct without formal definitions. Having something equivalent to the family tree of logical fallacies could be useful in this regard. But that effort would not be separate from science; it would just be a branch of science, science modeling itself. That should not cause a problem of recursiveness or infinite descent, and we have an example of this in nature: Each of us contains a neural model of ourself, which we use for everything from planning our movements to deciding what we'd enjoy for dinner, or what clothing we should buy, or what career to pursue. Science can certainly model science, without having to appeal to anything above or beyond science. To some extent this has already been done. Those efforts could be revisited, revised, and expanded. 2. Dr. Castaldo, I think you’d enjoy my good friend Mike Smith’s blog. After reading this post of Sabine’s he wrote an extensive post on the matter as well, and did so even before I was notified that Sabine had put up this one! I get the sense that you and he are similarly sharp. Furthermore I think you’d enjoy extensively delving into the various mental subjects which are his (and I think my) forte. 
Anyway I was able to submit the same initial comment to both sites. He shot back something similarly dismissive of philosophy btw. On metaphysics, I had the same perspective until a couple years ago. (I only use the “philosophy” modifier as a blogging pseudonym.) Beyond the standard speech connotation, I realized that “metaphysics” is technically meant to refer to what exists before one can explore physics… or anything really. A given person’s metaphysics might be something spiritual for example, and thus faith-based. My own metaphysics happens to be perfectly causal. The metaphysics of most people seems to fluctuate between the two. Consider again my single principle of metaphysics, or what I mean to be humanity’s final principle of metaphysics: “To the extent that causality fails (in an ontological sense rather than just epistemically mind you), nothing exists for the human to discover.” All manner of substance dualists populate our soft sciences today. Furthermore many modern physicists seem to consider wave function collapse to ontologically occur outside of causality, which is another instance of supernaturalism. I don’t actually mind any of this however. Some of them may even be correct! But once (or if) my single principle of metaphysics becomes established, these people would then find themselves in a club which resides outside of standard science. In that case I’m pretty sure that the vast majority of scientists would change their answer in order to remain in our club. (Thus I suspect that very few physicists would continue to take an ontological interpretation of wave function collapse, and so we disciples of Einstein should finally have our revenge!) Beyond this clarification for the “metaphysics” term, I’m in complete agreement. Science needs a respected community of professionals with their own generally accepted principles of how to do science. It makes no difference if these people are classified as “scientist”, “philosopher”, or something else. 
Thus conscientious scientists like Sabine would be able to get back to their actual jobs. Or they might become associated professionals if they enjoy this sort of work. And there’s plenty needed here since the field is currently in need of founders! I hope to become such a person, and by means of my single principle of metaphysics, my two principles of epistemology, and my single principle of axiology. 3. I don't get the distinction. There is much to be said for the "shut up and compute" camp; though I don't like the name. It is an approach that works, and has worked for millennia. We never had to know the cause of gravity in order to compute the rules of gravity. We may still not know the cause of gravity; there may be no gravitons, and I admit I am not that clear on how a space distortion translates into an acceleration. Certainly when ancient humans were building and sculpting monoliths, they ran a "shut up and compute" operation; i.e. it makes no difference why this cuts stone, it does. The investigation can stop there. Likewise, I don't have to believe in magic or the supernatural to believe the wavefunction collapses for reasons that appear random to me, or truly are random, or are in principle predictable but would require so much information to predict that prediction is effectively impossible. That last is the case in predicting the outcome of a human throwing dice: Gathering all the information necessary to predict the outcome before the throw begins would be destructive to the human, the dice, and the environment! "Shut up and compute" says ignore why, just treat the wavefunction collapse as randomized according to some distribution described by the evolution equations, and produce useful predictions of the outcomes. Just like we can ignore why gravity is the way it is, why steel or titanium is the way it is, why granite is the way it is. We can test all these things to characterize what we need to know about them in order to build a skyscraper. 
Nor do we need to know why earthquakes occur. We can characterize their occurrence and strength statistically and successfully use that to improve our buildings. Of course I am not dissing the notion of investigating underlying causations and developing better models of what contributes to material strength, or prevents oxidation, or lets us better predict earthquakes or floods. But I am saying that real science does not demand causality; it can and has progressed without it. Human brains are natural modeling machines. I don't need a theory of why animals migrate on certain paths to use that information to improve my hunting success, and thus my survival chances. We didn't need to be botanists or geneticists to understand enough to start the science of farming and selective breeding for yields. It is possible to know that some things work reliably without understanding why they work reliably. To my mind, it is simply false to claim that without causality there is nothing to know. There is plenty to know, and a true predictive science can be (and has been) built resting on foundations of "we don't know why this happens, but it does, and apparently randomly." 4. Well let’s try this, Dr. Castaldo. I’d say that there are both arrogant and responsible ways to perceive wave function collapse. The arrogant way is essentially the ontological stance, or “This is how reality itself IS”. The responsible way is instead epistemological, or “This is how we perceive reality”. The first makes absolute causal statements while the second does not. Thus the first may be interpreted as “arrogant”, with the second “modest”. I’m sure that there are many here who are far more knowledgeable in this regard than I am, and so could back me up or refute me as needed, but I’ve been told that in the Copenhagen Interpretation of QM essentially written by Bohr and Heisenberg, they did try to be responsible. This is to say that they tried to be epistemic rather than ontological. 
But apparently the great Einstein would have none of it! He went ontological with the famous line, “God does not play dice”. So what happens in a psychological capacity when we’re challenged? We tend to double down and get irresponsible. That’s where the realm of physics seems to have veered into a supernatural stance, or that things happen in an ontological capacity, without being caused to happen. So my understanding is that this entire bullshit dispute is actually the fault of my hero Einstein! Regardless I’d like to help fix it by means of my single principle of metaphysics. Thus to the extent that “God” does indeed play dice, nothing exists to discover. And more importantly, if generally accepted, then the supernaturalists who reside in science today would find that they need to build themselves a club which is instead populated by their own kind! :-) 12. @Philosopher Eric You seem to have an inflated view of philosophy and philosophers. I fully agree with you insofar as one ought not ignore what philosophers do and say. To excel in other fields will constrain one from interrogating the work of philosophers. Those who make that choice ought to accept their decision and refrain from the typical contemptuous language seen so often. I have spent the last thirty years studying the foundations of mathematics. To be quite frank about it, I am exhausted by the lunacy of both philosophers and scientists who think mathematics has any relationship to reality beyond one's subjective cognitive experience. From what I can tell, the main emphasis of philosophers in this arena over the last century has been to justify science as a preferred world view by crafting mathematics in the image of their belief systems. Their logicians are even more pathetic. Hume's account of skepticism is good philosophy. It is also unproductive. 
To represent a metaphysical point of view and then invoke a distinction between syntax and semantics to claim one is not doing metaphysics is simply deceptive. We have a great deal of progress with no advancement. You are correct that such matters cannot be sorted out without digging into the philosophical development of the subject matter. But what you are likely to find are people running around saying, "I don't believe that!". So what one has are contradictory points of view and different agendas. That is what philosophers and their logicians have given to mathematics. Should you disagree with me, what is logic without truth? One can claim that one is only studying "forms". But once one believes they have identified a correct form, one defends one's claims from the standpoint of belief. Philosophers and their logicians can never get away from metaphysics whether they care to admit it or not. But their pretensions to the contrary are simply lies. Science fails because of naive beliefs with respect to truth, reality, and the inability to accept epistemic limitations. Philosophers have shown just as much willingness to fail along those same lines. 1. mls, Thanks for your reply. I’ve dealt with a number of professional philosophers online extensively, and from that can assure you that they don’t consider me to inflate them. Unfortunately most would probably say the opposite, and mind you that I try to remain as diplomatic with them as possible. Your disdain for typical contemptuous language is admirable. They’re a sensitive bunch. Aren’t we all? What I believe must be liberated in order to improve the institution of science, is merely the subject matter which remains under the domain of philosophy. Thus apparently we’ll need two distinct forms of “philosophy”. One would be the standard cultural form for the artist in us to appreciate. 
But we must also have a form that’s all about developing a respected community with its own generally accepted understandings from which to found the institution of science. So you’re a person of mathematics, and thus can’t stand how various interests defile this wondrous language — this monument of human achievement — by weaving it into their own petty interests? I hear you there. But then consider how inconsistent it would be if mathematics were instead spared. I believe that defiling things to our own interests needs to be acknowledged as our nature. I think it’s standard moralism which prevents us from understanding ourselves. I seek to “fix science” not for that reason alone, but rather so that it will be possible for the human to effectively explore the nature of the human. I’d essentially like to help our soft sciences harden. Once we have a solid foundation from which to build, which is to say a community of respected professionals with their own associated agreements, I believe that many of your concerns would be addressed. What is logic without truth? That’s exactly what I have. I have various tools of logic (such as mathematics) but beyond just a single truth, I have only belief. The only truth that I can ever have about Reality is that I exist. It is from this foundation that I must build my beliefs as effectively as I can. 2. " I am exhausted by the lunacy of both philosophers and scientists who think mathematics has any relationship to reality beyond one's subjective cognitive experience." Wiles proved, via an isomorphism between modular forms and semi-stable elliptic curves, that there are no positive integer solutions to x^3 + y^3 = z^3. Now, back in "reality", take some balls arranged into a cube, and some more balls arranged into another cube, put them all together and arrange them into a single cube. You can't. Why is that, do you think? 3. Steven, It seems to me that two equally sized cubes stacked do not, by definition, form a cube.
Nor do three. Eight of them, however, do. It’s simple geometry. But I have no idea what that has to do with mls’ observation about the lunacy of people who believe that mathematics exists beyond subjective experience, or the mathematical proof that you’ve referred to. I agree entirely with mls — I consider math to merely be a human-invented language rather than something that exists in itself (as platonists and such would have us believe). Do you agree as well? And what is the point of your comment? 4. @Steven Evans Should you take the time to learn about my views, you would find that I am far more sympathetic with core mathematics than not. Get a newsgroup reader and load headers for sci.logic back to January 2019. Look for posts by "mitch". I doubt that you will have much respect for what you read, but you will find an account of truth tables based upon the affine subplane of a 21-point projective plane. Since there is a group associated with this affine geometry, this basically marries Klein's Erlangen program with symbolic logic in the sense of well-formedness criteria (that is, logical constants alone do not make a logical algebra). But this is precisely the kind of thing committed logicists will reject. Now, Max Black presented a critical argument against mathematical logicians based upon a "symmetric universe". My constructions are similarly based upon symmetry considerations -- except that I am using tetrahedra oriented with labeled vertices. Who knew that physicists had been inventing all sorts of objects on the basis of similar ideas, although they use continuous groups because they must ultimately relate to physical measurement? For the last two weeks I have been associating collineations in that geometry with finite rotations in four dimensions using Petrie polygon projections of tesseracts. And, as other posts in that newsgroup show, any 16-element group which carries a 2-(16,6,2) design can be mapped into this affine geometry.
So, I happen to think that logicians and philosophers have turned left and right into true and false. You must forgive me for criticizing physicists who publish cool mathematics as science without a single observation to back it up. 5. @Philosopher Eric I don't mean stack the cubes(!), I mean take 2 cubes of balls of any size, take all the balls from both cubes and try to rearrange them into a single cube of balls. You can't, whatever the sizes of the original 2 cubes. The reason we know you can't do this is because of Wiles' proof of Fermat's Last Theorem: there are no positive integer solutions to x^3 + y^3 = z^3. The point is that this is maths existing in reality, in contradiction to what you wrote - whether you know Wiles' theorem or not, you can't take 2 cubes-worth of balls and arrange them into a single cube. There are 2 reasons that this theorem applies to reality: 1) The initial abstraction that started mathematics was the abstraction of number. So it is not a surprise when mathematical theorems, like Wiles', can be reapplied to reality. 2) Wiles' proof depends on 350 years-worth of abstractions upon abstractions (modular forms, semi-stable elliptic curves) from the time of Fermat, but the reason Wiles' final statement is still true is because mathematics deals with precise concepts. (Contrast with philosophy, which largely gets nowhere because they try to write "proofs" in natural language - stupid idea.) TL;DR: Maths often applies to reality because it was initially an abstraction of a particular characteristic of reality. 6. "You must forgive me for criticizing physicists who publish cool mathematics as science without a single observation to back it up." Fair criticism, and it is the criticism of the blog author's "Lost In Math" book. But that's not what you wrote originally. You wrote originally that it was lunacy to consider any maths as being real. O.K., arrange 3 balls into a rectangle. How did it go?
Now try it with 5 balls, 7 balls, 11 balls, 13 balls, ... What shall we call this phenomenon in reality that has nothing to do with maths? Do you think there is a limit to the cases where the balls can't be arranged into a rectangle? My money is on not. But maths has nothing to do with reality. Sure. 7. Okay Steven, I think that I now get your point. You’re saying that because the idea you’ve displayed in mathematics is also displayed in our world, maths must exist in reality, or thus be more than a human construct. And actually you didn’t need to reference an esoteric proof in order to display your point. The same could be said of a statement like “2 + 2 = 4”. There is no case in our world where 2 + 2 does not equal 4. It’s true by definition. But this is actually my point. Mathematics exists conceptually through a conscious mind, and so is what it is by means of definition rather than by means of causal dynamics of this world. It’s independent of our world. This is to say that in a universe that functions entirely differently from ours, our mathematics would still function exactly the same. In such a place, by definition 2 + 2 would still equal 4. We developed this language because it can be useful to us. Natural languages such as English and French are useful as well. It’s interesting to me how people don’t claim that English exists independently of us, even though just as many “true by definition” statements can be made in it. I believe it was Dr. Castaldo who recently implied to me that “Lost in Math” doesn’t get into this sort of thing. (My own copy of the book is still on its way!) In that case maybe this could be another avenue from which to help the physics community understand what’s wrong with relying upon math alone to figure out how our world works? 8. " that maths must exist in reality," You've got it the wrong way round. Maths is an abstraction of a property in physical reality.
Even before humans appeared, it was not possible to arrange 5 objects into a rectangle. " And actually you didn’t need to reference an esoteric proof " The point is that modular forms and elliptic curves are still related to reality, because the axioms of number theory are based on reality. "2 + 2 would still equal 4." The concept might not arise in another universe. In this universe, the only one we know, 2+2=4 represents a physical fact. "what’s wrong with relying upon math alone to figure out how our world works" It's a trivial question. Competent physicists understand you need to confirm by observation. 9. Steven, If you’re not saying that maths exists in reality beyond us, but rather as an abstraction of a physical property, then apparently I had you wrong. I personally just call maths a language and don’t tie it to my beliefs about the physical, though I can see how one might want to go that way. As long as you consider it an abstraction of reality, then I guess we’re square. 13. The title of this column and the second paragraph appear to conflate theories and hypotheses. Theories can generate hypotheses, and hopefully do, but it is the hypothesis that should be falsifiable, and the question remains whether even a robustly falsified hypothesis has any impact on the validity of a theory. Scientists work in the real world, and in that real world, historically, countless hypotheses have been falsified -- or have failed tests -- yet the theories behind them were preserved, and in some cases (one thinks immediately of Pasteur and the spontaneous generation of life) the theory remains fundamental to this day. At the same time, I always remember philosopher Grover Maxwell's wonderful example of a very useful hypothesis that is not falsifiable: all humans are mortal. As Maxwell noted, in a strict Popperian test, you'd have to find an immortal human to falsify the hypothesis, and you'll wait a looooong time for that. 1.
" I always remember philosopher Grover Maxwell's wonderful example of a very useful hypothesis that is not falsifiable: all humans are mortal." And yet no-one so far has made it past about 125 years old, even on a Mediterranean diet. What useful people philosophers are. 2. I don't understand how 'all humans are mortal' is a useful hypothesis. It is pretty obvious to anybody reaching adulthood that other humans are mortal, and to most that they themselves can be hurt and damaged, by accident if nothing else. We see people get old, sick and die. We see ourselves aging. I don't understand how this hypothesis is useful for proving anything. It would not even prove that all humans die on some timescale that matters. It doesn't tell us how old a human can grow to be; it doesn't tell us how long an extended life we could live with technological intervention. A hypothesis, by definition, is a supposition made as a starting point for further investigation. Is this even a hypothesis, or only claimed to be a hypothesis? I will say, however, that in principle it is a verifiable hypothesis, because it doesn't demand that all humans that will ever exist be mortal, and there are a finite number of humans alive today. So we could verify this hypothesis by bringing about the death of every human on Earth, and then killing ourselves; and thus know that indeed every human is mortal. Once a hypothesis is confirmed, then of course it cannot be falsified. That is true of every confirmed hypothesis; and the unfalsifiability of confirmed hypotheses is not something that worries us. 3. Dr. Castaldo: "useful to prove anything" is not a relevant criterion for being good science. That said, much of human existence entails acting on the assumption that all humans are mortal, so I think that Maxwell's tongue-in-cheek example is of a hypothesis that is extremely useful.
Your comment about how the hypothesis is in principle verifiable (because there are a finite number of humans) is, forgive me, somewhat bizarre -- the classic examples of good falsifiable hypotheses, such as "all swans are white", would be equally verifiable for the same reason, yet those examples were invented to show that it is the logical form of the hypothesis that Popper and falsificationists appeal to, not the practicalities of testing. Moreover, while it could be arguable that the number of anything in the universe is finite, one issue with humans (and swans) is that the populations are indefinite in number -- as Maxwell commented, you don't know if the next baby to be born will be immortal, or the 10 millionth baby to be born. @Steven Evans: while your observation about human longevity is true (so far), Maxwell's humorous point -- which, by the way, was a critique of Popper -- was that you cannot be absolutely certain that the next child born will be mortal, just as Popper insisted that the next swan he encountered could, just possibly, be black. Maxwell's point was about how you would establish a test of this hypothesis. In Popper's strange world of absolutes, you'd have to find an immortal human. Maxwell noted that here in the real world of actual science, no one would bother, especially since markers of mortality pile up over the lifespan. 4. @DKP: I am not the one that claimed it was a useful hypothesis. Once that claim is made, it should be provable: What is it useful for? The only thing a hypothesis can be useful for is to prove something true or false if it holds true or fails to hold true; I am interested in what that is: Otherwise it is not a useful hypothesis. In other words, it must have consequences or it is not a hypothesis at all. Making a claim that is by its nature unprovable does not make it a hypothesis.
I can't even claim every oxygen atom in the universe is capable of combining with two hydrogen atoms, in the right conditions, to form a molecule of water. I can't claim that as a hypothesis; I can't prove it true for every oxygen atom in the universe, without that also being a very destructive test. UNLESS I rely on accepted models of oxygen and hydrogen atoms, and their assertions that these apply everywhere in the universe, which they also cannot prove conclusively. Maxwell's "hypothesis" is likewise logically flawed; but if we resort to the definition of what it is to be human, then it is easily proven, because it is not a hypothesis at all but a statement of an inherent trait of being human; just as binding with hydrogen is a statement of an inherent trait of the atom we call oxygen. I know Maxwell's point was about how you would establish a test of this hypothesis; MY point was that Maxwell's method is not the only method, is it? If all living humans should die, then there will be no future humans, and we will have proved conclusively that all humans are mortal. In fact, in principle, my method of confirming the truth of the hypothesis is superior to Maxwell's method of falsifying it, because mine can be done in a finite amount of time (since there are a finite number of humans alive at any given time, and it takes a finite amount of time to kill each one of us). And confirmation would obviously eliminate the need for falsification. Of course, I am into statistics and prefer the statistical approach; I imagine we (humanity, collectively throughout history) have exceeded an 8 sigma confirmation by now on the question of whether all humans are mortal; so I vote against seeking absolute confirmation by killing everyone alive. 5. @DKP " In Popper's strange world of absolutes, you'd have to find an immortal human. " Or kill all humans.
The point is that you can apply falsifiability in each instance - run a test that confirms the quantum behaviour of the electron. Then carry out this test 10^100000000000 times and you now have an empirical fact, which is certainly sufficient to support a business model for building a computer chip based on the quantum behaviour of the electron. By the standards of empirical science, there will never be an immortal human as the 2nd law will eventually get you, even if you survive being hit by a double-decker. As a society, we would be better off giving most "philosophers" a brush and telling them to go and sweep up leaves in the park. They could still ponder immortal humans and other irrelevant, inane questions while doing something actually useful. 6. @Steven Evans: Perhaps you missed the point of Maxwell's example, which was to suggest that at least one particular philosopher was irrelevant, by satirizing his simplistic notion of falsification. As a scientist myself, and not a philosopher, I found myself in agreement with Maxwell, and 50 years later I still find historians of science to offer more insight into the multiple ways in which "science" has worked and evolved -- while philosophers still wrestle, as Maxwell satirized, with simplistic absolutes. More seriously, your proposed test of the behavior of the electron makes the point I started with in my first comment: theories are exceedingly difficult to falsify in the way that Sabine's article here suggests; efforts at falsification focus on hypotheses. 14. There is an intriguing name (proposal) for a new book by science writer Jim Baggott (@JimBaggott): A Game of Theories. Theory-making does seem to form a kind of game, with 'falsifiability' just one of the cards (among many) to play. And today (April 26) is Wittgenstein's (language games) birthday. 15. Very manipulative article. All the traditional attempts of theoreticians to dodge the question are there.
But to me it was even more amusing to see an attempt to bring in Popper and not to oppose Marx. But since Popper was explicitly arguing against Marx's historicism they had to make up "Stalinist history" (what would it even be?). 16. Hi Sabine, You claim that string theory makes predictions; which prediction do you have in mind? Peter Woit often claims that string theory makes no predictions ... "zip, zero, nada" in his words. 1. Thanks, that FAQ #1 is a little short on specifics. As a result I am still puzzled. As far as string cosmology goes, I would question whether it is so flexible you can get just about anything you want out of it. 2. String cosmology is not string theory. You didn't ask for specifics. 17. Sabine said… debating non-observable consequences does not belong into scientific research. Scientists should leave such topics to philosophers or priests. Of course you are correct, I’m wondering if you’ve also gotten the impression some scientists may even be using non-observable interpretations as a basis for their research? 18. Thank you. Your writing is clear and amusing, as usual. I'm glad to see that you allow for some nuance when it comes to falsifiability. There is a distinction between whether or not a non-falsifiable hypothesis is "science", and whether or not the practice of a particular science requires falsifiability at every stage of its development, even over many decades. I am glad string theory was pursued. I am also glad, but only in retrospect, that I left theoretical physics after my undergraduate degree and did not waste my entire career breaking my brain doing extremely difficult math for its own sake. Others, of course, would not see this as a waste. But how much of this will be remembered? Or to quote Felix Klein: "When I was a student, abelian functions were, as an effect of the Jacobian tradition, considered the uncontested summit of mathematics and each of us was ambitious to make progress in this field. And now?
The younger generation hardly knows abelian functions." 19. Dr. Hossenfelder, So a model, e.g. string cosmology, is a prediction? 1. Korean War, A model is not a prediction. You make a prediction with a model. If the prediction is falsified, that excludes the model. Of course the trouble is that if you falsify one model of string cosmology, you can be certain that someone finds a fix for it and will continue to "research" the next model of string cosmology. That's why these predictions are useless: It's not the model itself that's at fault, it's that the methodology to construct models is too flexible. 2. Dr. Hossenfelder, Thanks for your response, I thought that was the case. If this comment just shows ignorance, please don't publish it. If it might be of use, my question arose because Jan Reimera asked for a specific string theory prediction to refute Peter Woit's claim that none exist. After reading the FAQ, I couldn't see that it does this unless the string cosmology model is either sufficient in itself or can be assumed to reference already published predictions. 3. String theory generically predicts string excitations, which is a model-independent prediction. Alas, these are at too high energies to actually be excited at energies we can produce, etc etc. String cosmology is a model. String cosmology is not the same as string theory. 20. Hi, Sabine. I trust that everything is going well. I enjoyed your post. All things considered, I don't know how you... I'll be plain, I'm an experimental scientist. " The term 'falsification' (for me) belongs to the realm of theory. When I run an experiment the result is obvious - and observable. When you brought up Karl (Popper), were you making a statement on 'critical rationalism'? I hope not. (in the quantum realm, you will find a maze) At any rate, You struck me with the words ' I start working on an idea and then ... You know me. (2 funny) In parting, for You I have a new moniker for Your movement.
- as a # , tee-shirts, it's -- DISCERN. (a play on words) not to mean 'Dis-respect'. In the true definition of the word: 'to be able to tell the difference, say, between a good idea - and a bad one.' Once again, Love Your Work. - All Love, 1. I did not "bring up Popper." How about reading what I wrote before commenting? 21. Wasn't the argument that atomism, arguably one of the most productive theories of all time, wasn't falsifiable? Of course it was ultimately confirmed, which is not quite the same thing - it just took 2000 plus years. 22. @Lawrence: Off the top of my head: Perhaps the statistical distributions are wrong, and thus the error bars are wrong. I don't know anything about how physicists have come to conclusions on distributions (or have devised their own), but I've done work on fitting about 3 dozen different statistical distributions, particularly for finding potential extreme values; and without large amounts of data it is easy to mistakenly think we have a good fit for one distribution when we know the test data was generated by another. Noise is another factor, if the data being fitted is noisy in any dimension, including time. For example, in the generalized extreme value distribution, used in real-life engineering to predict the worst wind speeds, flood levels, or, in aviation, the extent of crack growth in parts due to aviation stressors (and thus time to failure), minor stochastic errors in the values can change things like the shape parameter and wildly skew the predictions. Even computing something like a 100-year flood level: sorting 100 samples of the worst flood per year. The worst of all would be assigned the rank index (100/101), (i/(N+1) is its expected value on the probability axis) but that can be wrong. The worst flood in 1000 years may have occurred in the last 100 years. There is considerable noise in both dimensions, the rank values and the measured values, even if we fit the correct distribution.
There is also the problem of using the wrong distribution; I believe I have seen this in medical literature. Weibull distributions can look very much like a normal curve, but they are skewed, and have a lower limit (a reverse Weibull has an upper limit). They are easily confused with Fréchet distributions. But they can give very different answers on exactly where your confidence levels (and thus error bars) are for 95%, 99%, or 99.9%. A fourth possibility is that the assumption of what the statistical distribution should even be is in error. It may depend upon initial conditions in the universe, or have too much noise in the fitting, or too few samples to rule out other distributions prevailing. In general, the assumptions made in order to compute the error bars may be in error. 1. I can't comment too much on the probability and statistics. To be honest this has been from early years my least favorite area of mathematics. I know just the basic stuff and enough to get me through. With Hubble data this trend has been there for decades. Telescope redshift data for decades have been in the 72 to 74 km/sec-Mpc range. The most recent Hubble data is 74.03±1.42 km/sec-Mpc. With the CMB data this is now based on the ESA Planck spacecraft data. It is consistent with the prior NASA WMAP spacecraft data, and this is very significantly lower, around 67.66±0.42 km/sec-Mpc. Other data tend to follow a similar trend. There has been in the last 5 to 10 years this growing gap between the two. I would imagine there are plenty of statistics pros who eat the subject for lunch. I question whether some sort of error has gotten through their work. 23. Each brain creates a model of the inside and outside. Each of us calls that reality. But it's just a model. Now we create models of parts of the model that might or might not fit the first model. It's a bit of a conundrum. Personally I believe that it's all about information.
The one that makes a theory that takes all that into account will reap the Nobel Prize. That's the next step. 24. "An hypothesis that is not falsifiable through observation is optional. You may believe in it or not." One has no reason to think it is true as an empirical fact. Believing it in this case is just delusion (see religion). The issue is simply honesty. People who claim there is empirical evidence for string theory, or fine-tuning of the universe, or the multiverse, or people who claim that the next gen collider at CERN is anything but a massively expensive, completely unprecedented punt are simply liars. It's easy to see when you compare with actual empirical facts, which in physics are often being confirmed quintillions of times a second in technology (quantum behaviour of the electron in computer chips, time dilation in satnav, etc.) How can someone honestly claim that universal fine-tuning is physics just like E=mc^2? They can't - they are lying. Where taxpayers' money is paying for these lies, it is criminal fraud. 25. I realise that the notion of "model" is too subtle for someone like me just fixated on playing with parsing guaranteed Noether symmetries with Ward-like identities upon field equations from action principles.... So the Equivalence Principle in itself is predictive in that it need not be supplemented with some constitutive equations (like the model of the susceptibility of a medium in which a Maxwell field source resides, say, or a model of a star) to describe the materiality of the inertial source? 26. @Philosopher Eric Nice response. I think your initial remarks sparked a reaction rather than a response on my part. Your last paragraph expresses an essential problem. One's first assumption, then, ought to be that one is not alone. And, science as a community enterprise requires something along the lines of Gricean maxims.
This is completely undermined when, for the sake of a logical calculus, philosophers pretend that words are to be treated as mere parameters. Tarski explicitly rejected this methodology in his paper on the semantic conception of truth. Yet, those who invoke the distinction between semantics and syntax as some inviolable principle regularly invoke Tarski as the source of their views (one should actually look to Carnap as the source of such extreme views). This is the kind of thing I find so disturbing where philosophy, logic, and mathematics intersect. There is a great deal of misinformation in the literature. There is a great deal that needs "fixing". But the received paradigms are largely defensible. It is not as if they are not the product of highly intelligent practitioners. 27. The difficulty of detecting gravitons raises a related question: what counts as a detection? Saying that it must be detected in a conventional particle physics experiment is a rather ad hoc criterion. If all the knowledge we have today already implies the existence of the graviton, then that should count as it having been detected. The same can be said about LIGO's detection of gravitational waves. The existence of gravitational waves was already implied by the detection of the decaying orbits of binary pulsars. Or one may argue that this was in turn a prediction of GR, which had ample observational support before the observation of the binary pulsars. 28. Sean Carroll wrote a blogpost about this. He is not a crackpot. Maybe you two could have a podcast / youtube-discussion about it? 1. In practice, calls to remove falsifiability are intended to support string theory, fine-tuning and the multiverse as physics. They are not physics, merely speculation, and the people claiming they are physics *are* crackpots. Remove falsifiability and just watch all the loonies swarm in with their ideas that "can't be disproved" and are "compatible with observations".
There's nothing wrong with speculation, but it is important that one is aware it is speculation; otherwise you end up with the situation as in string theory, where too much money and too many careers have been wasted on it. (Or philosophy, where several thousand years have been wasted.) 29. @Steven Evans I assure you that we are, for the most part, on the same side of these issues. Your arguments, however, are very much like those of the foundations community who challenge dissent by demanding that a contradiction to their views be shown. In 1999, Pavicic and Megill (the latter known for the metamath program) showed that propositional logic is not categorical and that the model faithful to the syntactic structure of the logic is not Boolean. So the contradiction demand is silly and simplistic. You are making arguments on the basis of 'abstractions'. Where exactly do these abstractions reside in time and space? Or, as many philosophers do, are you speaking of a realm of existence beyond time and space? Indeed, Tarski's semantic conception of truth properly conveys the intentions we ascribe to correspondence theories of truth. So, if we state that some abstraction is meaningful with respect to the truth of our scientific theories, we must account for the existence of the objects denoted by our language terms. Either you are claiming realms of existence which I shall not concede to you, or, you can show me "the number one" as an existent individual. Most of my acquaintances do not have formal education. When they ask me to explain my interest, I remind them of just how often one hears that "mathematics is the language of science". So, in a very crude sense, what is true in science depends on the nature of truth in mathematics. I expect that you will disagree with that view. But, I do not think you will be able to demonstrate the substantive existence of the abstractions you are invoking to challenge me.
You may have problems with the very publications I mentioned because we share a similar sense of what constitutes science. But I see the kernel of the problem in the very statements you are making about the nature of mathematics. It is not so much that I disagree with you, it is that your positions are not defensible. You need to stipulate a theory of truth. You need to stipulate which conception of truth is applied under that theory. You need to stipulate logical axioms. You need to stipulate axioms for your mathematical theory. You need to decide whether or not you are following a formalist paradigm. If not, you will have to accommodate substitutions in the calculus with a strategy to warrant substitutions. If so, you will be faced with the problem of non-categoricity. Dr. Hossenfelder discussed this last problem in her book when considering Tegmark's suggestion that all interpretations be taken as meaningful. It is just not as simple as you would like it to be. 1. I've no idea what the correct logical terms are, but arithmetic is a physical fact. I can do arithmetic with physical balls, add them, subtract them, show prime numbers, show what it means for sqrt(2) to be irrational, etc., etc. This maths exists physically, and it is this physical maths that is the basis of abstract maths. Physical arithmetic obeys the axioms of arithmetic and the logical steps used to prove theorems are also embodied in physical arithmetic. Of course - because arithmetic and logic are observed in the physical world, that's where the ideas come from. Of course, philosophers can witter on at length about theoretical issues with what I have written, but they will never be able to come up with a concrete counter-example. They will nit-pick. I will re-draft what I have written. They will nit-pick some more, I will re-draft some more. And 2,000 years later we will have got nowhere, yet still it will be physically impossible to arrange a prime number of balls into a rectangle. 
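The "balls" claim above can be checked mechanically: n objects form a non-trivial rectangle exactly when n has a divisor strictly between 1 and n, which no prime does. A minimal sketch, with helper names of my own choosing:

```python
def rectangles(n):
    """All ways to arrange n objects into an a-by-b rectangle with a, b >= 2."""
    return [(a, n // a) for a in range(2, n) if n % a == 0 and n // a >= 2]

def is_prime(n):
    """Trial-division primality check, sufficient for small n."""
    return n >= 2 and all(n % d for d in range(2, n))

# A prime number of objects admits no rectangular arrangement...
for n in range(2, 50):
    if is_prime(n):
        assert rectangles(n) == []

# ...while a composite count does, e.g. 6 objects as 2 rows of 3.
assert (2, 3) in rectangles(6)
```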
Again, you've got it the wrong way round. Maths comes from the physical. Anyway, the issue of this blog post, falsifiability, is in practice an issue with people trying to suspend falsifiability to support string theory, fine-tuning and the multiverse. In more extreme cases, it is about philosophers and religious loonies claiming they can tell us about the natural world beyond what physics tells us. These people trying to suspend falsifiability are all dishonest cranks. That is why falsifiability is important, not because of any subtleties. There are straight-up cranks, even amongst trained physicists, who want to blur the line between "philosophy"/"religion" and physics and claim Jesus' daddy made the universe. Falsifiability stops these cranks getting their lies on the physical record. 30. Hi Sabine, sorry for a late reply. (everyone's busy) All apologies for the misunderstanding. 1) I did read your post. 2) I know you didn't mention him by name, but (in my mind) I don't see how one can speak of 'falsifiability' and not 'bring up' Karl Popper. 3) In the intro to your post you said " I don't know why we should even be talking about this". I agreed. ... and then wondered why we were. I thought you might be making a separate statement of some kind. At any rate, I'm off to view your new video while I have time. (can't wait) Once again, Love Your Work All love, 31. Maths does exist in reality beyond us. Of course it does, because Maths comes from a description of reality. 5 objects can't be arranged into a rectangle whether human mathematicians exist or not. 1. Steven, I’m not going to say that you’re wrong about that. If you want to define maths to exist beyond us given that various statements in it are true of this world (such as 5 points cannot form a rectangle, which I certainly agree with), then yes, math does indeed exist. I’m not sure that your definition for “exists” happens to be all that useful however. 
In that case notice that English and French also exist beyond us given that statements can be made in these human languages which are true of this world. The term "exists" may be defined in an assortment of ways, though when people start getting platonic with our languages, I tend to notice them developing all sorts of silly notions. Max Tegmark would be a prominent example of this sort of thing. 32. Sabine Hossenfelder posted (Thursday, April 25, 2019): Are theories also based on principles for how to obtain empirical evidence, such as, famously, Einstein's requirement that »All our space-time verifications invariably amount to a determination of space-time coincidences {... such as ...} meetings of two or more of these material points.«? > To make predictions you always need a concrete model, and you need initial conditions. As far as this is referring to experimentally testable predictions this is a very remarkable (and, to me, agreeable and welcome) statement; contrasting with (wide-spread) demands that "scientific theories ought to make experimentally testable predictions", and claims that certain theories did make experimentally testable predictions. However: Is there a principal reason for considering "[concrete] initial conditions" separate from "a [or any] concrete model", and not as part of it? Sabine Hossenfelder wrote (2:42 AM, April 27, 2019): > A model is not a prediction. You make a prediction with a model. Are concrete, experimentally falsifiable predictions part of models? > if you falsify one model [...] someone [...] will continue to "research" the next model I find this description perfectly agreeable and welcome; yet it also seems very remarkable because it appears to contrast with (wide-spread) demands that "scientific theories ought to be falsifiable", and claims that certain theories had been falsified. > That's why these predictions are useless: [...] Any predictions may still be used as rationales for economic decisions, or bets. 33. 
mls: Where exactly do these abstractions reside in time and space? Originally the abstractions were embodied in mental models, made of neurons. Now they are also on paper, in textbooks, as a way to program and recreate such neural models. Math is just recursively abstracting abstractions. When I count my goats, each finger stands for a goat. If I have a lot of goats, each hash mark stands for one finger. When I fill a "hand", I use the thumb to cross four fingers, and start another hand. Abstractions of abstractions. Math is derived from reality, and built to model reality; but the rules of math can be extended, by analogy, beyond anything we see in reality. We can extend our two dimensional geometry to three dimensions, and then to any number of dimensions; I cluster mathematical objects in high dimensional space fairly frequently; it is a convenient way to find patterns. But I don't think anybody is proposing that reality has 143 dimensions, or that goats exist in that space. So math can be used to describe reality, or because the abstractions can be extended beyond reality, it can also be used to describe non-reality. If you are looking for "truth", that mixed bag is the wrong place to look. Even a simple smooth parabolic function describing a thrown object falling to earth is an abstraction. If all the world is quantized, there is no such thing: The smooth function is just an estimator of something taking quantum jumps in a step-like fashion, even though the steps are very tiny in time and space; so the progress appears to be perfectly smooth. To find truth, we need to return to reality, and prove that the mathematics we are using describes something observable. That is how we prove we are not using the parts of the mixed bag of mathematics that are abstractions extended beyond reality. 34. 
@ Steven Evans In response to David Hume's "An Enquiry Concerning Human Understanding" Kant offered an account of objective knowledge grounded in the subjective experience of individuals. He distinguished between mathematics (sensible intuition) and logic (intelligible understanding). But to take this as his starting point he had to deny the actuality of space and time as absolute concepts. He took space to correspond with geometry and time to correspond with arithmetic. The relation to sensible intuition he claimed for these correspondences is expressed in the sentences, "Time is the form of inner sense." "Space, by all appearances, is the form of outer sense." The qualification in the second statement reflects the fact that the information associated with what we do not consider as part of ourselves is conditioned by our sensory apparatus before it can be called a spatial manifold. Hence, external objects are only known through "appearances". This certainly provides a framework by which mathematics can be understood in terms of descriptions related to the reality of experience. But it does not provide for a reality outside of our own. This, of course, is why I acknowledged Philosopher Eric's knowledge claim in his response to me. You seem to be assuming that an external reality substantiates the independent existence of your descriptions. The Christians I know use the same strategy to assure themselves of God's existence and the efficacy of prayer. Kant's position on geometry is one instance of misinformation in the folklore of mathematical foundations. But, that does not really affect many of the arguments used against him. Where in sensible experience, for example, can one find a line without breadth? Or, if mathematics is grounded in visualizations, what of optical illusions? These criticisms are not without merit. Of major importance is that the sense of necessity attributed to mathematical truth seems to be undermined. 
Modern analytical philosophy recovers this sense of necessity by reducing mathematics to a priori stipulations presentable in formal languages with consequences obtained by rules for admissible syntactic transformations. Any relationship with sensible intuition is eradicated. What is largely lost is the ability to account for the utility of mathematics in applications. The issues are just not that simple. And they were alluded to by George Ellis in Dr. Hossenfelder's book. 1. @mls: "Time is the form of inner sense." / "Space, by all appearances, is the form of outer sense." Kant sounds utterly ridiculous, and these sound like trying to force a parallelism that does not exist. These definitions have no utility I can fathom. mls: Where in sensible experience, for example, can one find a line without breadth? Points without size and lines without breadth are abstractions used to avoid the complications of points and lines with breadth. So our answers (say about the sums of angles) are precise and provable. A line without breadth is the equivalent of a limit: If we reason using lines with breadth, we must give it a value, say W. Then our answer will depend on W. The geometry of lines without breadth is what we get as W approaches 0, and this produces precise answers instead of ranges that depend on W. mls: Or, if mathematics is grounded in visualizations, what of optical illusions? Mathematics began grounded in reality. Congenitally blind people can learn and understand mathematics without visualizations. Those are shortcuts to understanding for sighted people, not a necessity for mathematics, so optical illusions are meaningless. Thus contrary to your assertion, those criticisms are indeed without merit. Mathematics began by abstracting things in the physical world, but by logical inference it has grown beyond that in order to increase its utility. mls: Any relationship with sensible intuition is eradicated. Not any relationship. 
Mathematics can trump one's sensible intuition; that is a good thing. Our brains work by "rules of thumb," they work with neural models that are probabilistic in nature and therefore not precise. Mathematics allows precise reasoning and precise predictions; some beyond the capabilities of "intuition". Dr. Hossenfelder recently tweeted an article on superconductivity appearing in stacked graphene sheets, with one rotated by exactly 1.1 degrees with respect to the other. This effect was dismissed by many researchers out of hand, their intuition told them the maths predicting something would be different were wrong. But it turns out, the maths were right; something (superconductivity) does emerge at this precise angle. Intuition is not precise, and correspondence with intuition is not the goal; correspondence with reality is the goal. mls: What is largely lost is the ability to account for the utility of mathematics in applications. No it isn't, mathematics has been evolving since the beginning to have utility and applications. I do not find it surprising that when our goal is to use mathematics to model the real world, by trial and error we find or invent the mathematics to do that, and then have successes in that endeavor. What is hard to understand about that? It is not fundamentally different than wanting to grow crops and by trial and error figuring out a set of rules to do that. mls: The issues are just not that simple. I think they are pretty simple. Neural models of physical behaviors are not precise; thus intuition can be grossly mistaken. We all get fooled by good stage magicians, even good stage magicians can be fooled by good stage magicians. 
But the rules of mathematics can be precise, and thus precisely predictive because we designed it that way, and thus mathematics can predict things that test out to be true in cases where our "rule of thumb" intuition predicts otherwise; because intuition evolved in a domain in which logical precision was not a necessity of survival, and fast "most likely" or "safest" decisions were a survival advantage. 35. “Falsifiable” continues to be a poor term that I’m surprised so many people are happy using. Yeah, yeah, I know..Popper. It’s still a poor term. Nothing in empirical scientific inquiry is ever truly proven false (or true), only shown to be more or less likely. “Testable” is a far better word to describe that criterion for a hypothesis or a prediction. It renders a lot of the issues raised in this thread much less sticky. 36. "various statements in it are true of this world " You keep getting it the wrong way round. The world came first. Human maths started by people counting objects in the physical world. Physical arithmetic was already there, then people observed it. 37. OK, so what I strictly mean but couldn't be bothered to write out, was that if you take a huge number of what appear to observation at a certain level of precision as quantum objects they combine to produce at the natural level of observation of the senses of humans and other animals enough discrete-yness to embody arithmetic. This discrete-yness and this physical arithmetic exist (are available for observation) for anything coming along with senses at the classical level. In this arena of classical discrete-yness, 5 discrete-y objects can't be arranged into a rectangle, for example. 
I am aware of my observations, so I'll take a punt that you are similarly aware of your observations, that what I observe as my body and your body exist in the sense that they are available for observation to observers like ourselves and now it makes no sense not to accept the existence of the 5 objects, in the sense that they are available for observation. As I said, arithmetic exists in reality and human maths comes from an observation of that arithmetic. It is that simple. "Where in sensible experience, for example, can one find a line without breadth?" The reality of space at the human level is 3-D Euclideanesque. A room of length (roughly) 3 metres and breadth (roughly) 4 metres will have a diagonal of (roughly) 5 metres. For best results, count the atoms. "The Christians I know use the same strategy to assure themselves of God's existence and the efficacy of prayer." "God" doesn't exist - it's a story. However, 5 objects really can't be arranged into a rectangle - try it. "Of major importance is that the sense of necessity attributed to mathematical truth seems to be undermined." I would stake my life on the validity of the proof that sqrt(2) is irrational. Undermined by whom? Dodgy philosophers who have had their papers read by a couple of other dodgy philosophers? Meanwhile, Andrew Wiles has proved Fermat's Last Theorem for infinite cases. Also known as proving from axioms as Euclid did over 2,000 years ago. And all originally based on our observations of the world. 38. Well, with Dr. Hossenfelder's permission, perhaps I might respond with a post or two that actually reflect my views rather than what one finds in the literature. At the link, one may find the free Boolean lattice on two generators. Its elements are labeled with the symbols typically taught in courses on propositional logic. If one really wants to argue that the claims of philosophers and their logicians are of questionable merit, this is one of the places to start. Let's see a show of hands. 
Who sees the tetrahedron? In combinatorial topology, one decomposes a tetrahedron into vertices, edges, faces, and an interior. With the exception of the bottom element, the order-theoretic representation of this decomposition is order-isomorphic with the lattice above. And, one need only hold that the bottom element denote the exterior to complete the sixteen set here. Philosophers and their logicians hold that mathematics has been arithmetized. Even though the most basic representation of how their logical connectives relate to one another can be directly compared with a tetrahedron, they will insist that geometry has been eliminated from mathematics. You can thank David Hilbert and the formalists for that. Say all that you want about Euclid, Hilbert's "Foundations of Geometry" reconstructs the Euclidean corpus without reference to motions or temporality. Remember this the next time you want to recite some result from mathematical logic which is contrary to your beliefs about mathematics. So, if logicians have simply put labels on a tetrahedron, one has just cause for questioning the relevance of their claims concerning the foundations of mathematics. But, that bottom element is still bothersome because it is not typically addressed in combinatorial topology. In the link, one can find the 3-dimensional projection of a tesseract, although Wikipedia does not show the edges connecting the vertices to a point at infinity. When this is added, all of the elements are 4-connected as in the Boolean order. The bottom of the Boolean order would coincide with the point at infinity. Amazing, is it not? Our logic words have a 4-dimensional character. Let me repeat something I have maintained repeatedly in blog posts here. If the theory of evolution is among our best science, then we have no more facility for knowing the truth of reality than an earthworm. 
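The sixteen-element structure described above (the free Boolean lattice on two generators) can be enumerated mechanically: it has one element for each truth function of two variables, 2^(2^2) = 16 in all. A minimal sketch, with variable names of my own choosing:

```python
from itertools import product

# The four valuations of two generators p, q: (F,F), (F,T), (T,F), (T,T).
inputs = list(product([False, True], repeat=2))

# One lattice element per truth table over those four valuations: 2^4 = 16.
functions = {outputs for outputs in product([False, True], repeat=len(inputs))}
assert len(functions) == 16

# Familiar connectives appear among the sixteen as particular truth tables.
AND = tuple(p and q for p, q in inputs)
OR = tuple(p or q for p, q in inputs)
assert AND in functions and OR in functions
```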
I do not need Euclid's axioms to make two paper tetrahedra with vertices colored so that they cannot be superimposed with all four colors matched. One can do a lot with that to criticize received views in the foundations community. Ignoring their arguments because you believe differently just puts you in the queue of "he said, she said" that Steven Evans has used to discredit philosophers. 39. I read a preview of one of Smolin's books on Amazon in which he proclaims the importance of Leibniz' identity of indiscernibles. Since I have read Leibniz, I would tend to agree with him. However, Leibniz also motivated the search for a logical calculus. So, the principle is more often associated with logical contexts. Leibniz attributes the principle to St. Thomas Aquinas to answer how God knows each soul individually. In keeping with Smolin's account, Leibniz does claim to be generalizing the principle to a geometric application. But in the debates over how Leibniz and Newton differed, the principle became associated with its logical application. Steven Evans would like me to acknowledge the reality of arithmetic in some sense. Kant had probably been the first critic of the logical principle. He asserted that numerical difference is known through spatial intuition. In modern contexts, the analogous portrayal can be found in Strawson's book "Individuals". He uses a diagram with different shapes to explain the distinction between qualitative identity and quantitative identity. In other words, numerical difference is grounded by spatial intuition. Since mathematicians make it a habit to work from axioms, I wrote a set of axioms intended to augment set theory by interpreting the failure of equality as topological separation. In other words, two points in space are distinct if one is in a part of space that the other is not. 
When you run around using a membership relation while thinking in terms of geometric incidence, keep in mind that this is not what a membership predicate means. One may say that the notion of a set is not yet decided, but the received view is one where geometry is deprecated because mathematics has been arithmetized. And, since numbers can be defined in logic, any relation of the membership predicate with numerical identity associated with spatial intuition has been lost. My views on mathematics are far closer to those who study the physical sciences than not. So do not hold me accountable for a summary of what is the case in the foundations of mathematics. You have physicists running around pretending that the mathematics is telling them truths about the universe and others using mathematics to say that they should be believed. My point is that they are further enabled by what is going on in the foundations of mathematics. 40. @Steven Evans " a story" You have probably never heard of deflationary nominalism. It is one way of speaking of mathematical objects without committing to their reality: Motivated by the fact that core mathematicians actually define their terms, I needed a logic that supported descriptions. Free logics do that, although the general discussion of free logics does not apply to my personal work. The logic I had to write for my own purposes is better compared with how free logics can be used for fictional accounts. My logic is classical (rather than paraconsistent) and the method "works" because proofs are finite. The standard account of formal systems relies on a completed infinity outside of the context of an axiom system. I doubt that Euclid ever had this in mind. David Hilbert turned his attention to arithmetical metamathematics with the objective of a finite consistency proof precisely because completed infinities are *NOT* sensibly demonstrable. 41. @Dr. Castaldo I really have no reason to accept reductionist arguments in physics. 
If you can substantiate your claim, then do so. Words explaining words is how we get into these problems to begin with. Having said that, a comment in another thread made some small reference to circularity. I forget the specifics right now, but I pointed out the result of a 2016 Science article about concept formation and hexagonal grid cells. It is a beautiful circularity. Abstract concepts depend upon neural structures that exhibit hexagonal symmetry. A book on my shelf which explicitly classifies hexagons and relates them to tetrahedra. String theorists asking people to believe in six rolled up dimensions. And the need for physical theories to build the instruments and interpret the data so we can identify how hexagonal symmetries pertain to abstract concept formation. You are a pragmatic gentleman. Thank you for your other replies as well. For what this is worth, I am certainly not looking for truth. When Frege retracted his logicism he suggested that all mathematics is geometrical. That is mostly what I have uncovered from my own deliberations. It really does not make sense to speak of truth and falsity in geometry. 42. @mls: What is "amazing" about that? I can do the same thing on paper better with bits; given 2 binary states there are 2^2 = 4 possible states. In binary we can uniquely number them, [0,1,2,3]. That is not "four dimensional" any more than 10 states by 10 states is "100" dimensional. On your "earthworm" comparison, obviously that is wrong. We have far more facility than an earthworm for knowing the truth of reality, or earthworms wouldn't let us use them as live bait. And fish wouldn't fall for that and bite into a hook, if they could discern reality equally as well as us. Humans understand the truth of reality well enough to manipulate chemistry on the atomic level, to build microscopic machines, to create chemical compounds and materials on a massive scale that simply do not exist in nature. 
Only humans can make and execute plans that require decades, or even multiple lifetimes, to complete. Where are the particle colliders built by any other non-human ape or animal? I have no idea how you think the theory of evolution creates any equivalence between the intelligence of earth worms and that of humans. I suspect you don't understand evolution. 43. @mls You don't address the point that maths exists in reality and came from reality. Obviously, the field of logic has something to say about maths and has a credible standard of truth and method like science and maths. I do not need to discredit the field of philosophy as it discredits itself - there are professional philosophers who are members of the American Philosophical Association who publish "proofs of God"(!!); in the comments in this very blog a panpsychist professional philosopher couldn't answer the blog author's point that the results of the Standard Model are not compatible with panpsychism being an explanation of consciousness in the brain. Philosophers can churn out such nonsense because the "standard" of truth in philosophy is to write a vaguely plausible-sounding natural language "proof". This opens the field to all kinds of cranks and frauds. And these frauds want to have their say about natural science, too, but fortunately the falsifiability barrier keeps them at bay. It is not a "he said, she said" argument. I have explained why I think maths exists in reality.
Thursday, March 21, 2019 actual infinite falling (all-)together &/or chaosmos COLLAPSE Musée d'Orsay/January 2018 (for more A/Z photography see portfolio here); Clara Colosimo in Fellini's Prova d'orchestra; Arquipélago dos Pombos Correios (o soverdouro); The great abyss inframince (by A/Z, for more see here); "... the term quantum mechanics is very much a misnomer. It should, perhaps, be called quantum nonmechanics..." David Bohm "Ihr verehrt mich: aber wie, wenn eure Verehrung eines Tages umfällt?" [You revere me: but what if one day your reverence topples over?] "... la majorité est travaillée par une minorité proliférante et non dénombrable qui risque de détruire la majorité dans son concept même, c'est-à-dire en tant qu'axiome... l'étrange concept de non-blanc ne constitue pas un ensemble dénombrable... Le propre de la minorité, c'est de faire valoir la puissance du non-dénombrable, même quand elle est composée d'un seul membre. C'est la formule des multiplicités. Non-blanc, nous avons tous à le devenir, que nous soyons blancs, jaunes ou noirs." [... the majority is worked over by a proliferating and non-denumerable minority that risks destroying the majority in its very concept, that is, as an axiom... the strange concept of non-white does not constitute a denumerable set... What is proper to the minority is to assert the power of the non-denumerable, even when it is composed of a single member. That is the formula of multiplicities. Non-white: we all have to become it, whether we are white, yellow or black.] Deleuze & Guattari "Eighteenth-century masters achieved most pleasing effects with foregrounds of warm brown and fading distances of cool, silvery blues... Constable wanted to try out the effect of respecting the local color of grass somewhat more, in his Wivenhoe Park he is seen pushing the range more in the direction of bright greens. Only in the direction of, it is a transposition, not a copy." Ernst H. Gombrich (Art and Illusion) "Note the parallels between ordinary awareness, classical physics, and the natural and counting integers..." Dean Radin (Real Magic) This is AGAINST Carlo Rovelli's dictum or pseudo-problem: "visto que tudo se atrai, a única maneira de um Universo finito não desmoronar sobre si mesmo é que se expanda" [since all things attract one another, the only way a finite Universe can avoid collapse is to expand] (A realidade não é o que parece, p. 
105)—but why should one use the term "finite" (or even "infinite") to describe a universe with no definite borders (like a 3-sphere, or something even more complex)? The infinite is not equivalent to the huge. The infinite is simply (according to Dedekind) what can be matched up to its own parts (the only reason to deny this is hysteria, paradox-freakishness). The universe (the chaosmos) both expands & collapses! As a whole and at the length of its space-time infinitesimals (or epsilon-delta limits, whatever), the macro/micro contractions, the revolving ruminations (what Rovelli confusedly calls "granulations," as if they were incompatible with any notion of continuity) of an autophagic real-virtual Einsteinian mollusk. If you have three fundamental constants (as Rovelli suggests, A realidade não é o que parece, p. 229), velocity [of light], information and Planck's length (c, ħ, ℓp), what matters is the relation among them (which might be revealed in established, finite proportions) not each one of their supposedly fixed (absolute) values (and even the relation might vary, fluctuate). Otherwise you behave like a very stupid painter arguing over the positive value of (what we call) green or brown in the transposition of tonal gradations to canvas. Main Hall: Time out of joints or the excessive solution (academically and sophistically called 'the measurement problem'):  "If quantum state evolution proceeds via the Schrödinger equation or some other linear equation, then, as we have seen in the previous section, typical experiments will lead to quantum states that are superpositions of terms corresponding to distinct experimental outcomes. It is sometimes said that this conflicts with our experience, according to which experimental outcome variables, such as pointer readings, always have definite values. 
This is a misleading way of putting the issue, as it is not immediately clear how to interpret states of this sort as physical states of a system that includes experimental apparatus, and, if we can’t say what it would be like to observe the apparatus to be in such a state, it makes no sense to say that we never observe it to be in a state like that," Wayne Myrvold's "Philosophical Issues in Quantum Mechanics," Stanford Encyclopedia of Philosophy "... von Neumann makes the logical structure of quantum theory very clear by identifying two very different processes, which he calls process 1 and process 2... Process 2 is the analogue in quantum theory of the process in classic physics that takes the state of a system at one time to its state at a later time. This process 2, like its classic analogue, is local and deterministic. However, process 2 by itself is not the whole story: it generates a host of ‘physical worlds’, most of which do not agree with our human experience. For example, if process 2 were, from the time of the big bang, the only process in nature, then the quantum state (centre point) of the moon would represent a structure smeared out over a large part of the sky, and each human body–brain would likewise be represented by a structure smeared out continuously over a huge region. Process 2 generates a cloud of possible worlds, instead of the one world we actually experience...," Jeffrey M. Schwartz's, Henry P. Stapp's and Mario Beauregard's "Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction," Philosophical Transactions of the Royal Society (2005). "... a seminal discovery by Heisenberg... in order to get a satisfactory quantum generalization of a classic theory one must replace various numbers in the classic theory by actions (operators). A key difference between numbers and actions is that if A and B are two actions then AB represents the action obtained by performing the action A upon the action B. 
If A and B are two different actions then generally AB is different from BA: the order in which actions are performed matters. But for numbers the order does not matter: AB=BA. The difference between quantum physics and its classic approximation resides in the fact that in the quantum case certain differences AB–BA are proportional to a number measured by Max Planck in 1900, and called Planck’s constant. Setting those differences to zero gives the classic approximation," Jeffrey M. Schwartz's, Henry P. Stapp's and Mario Beauregard's "Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction," Philosophical Transactions of the Royal Society (2005). "At their narrowest points, calcium ion channels are less than a nanometre in diameter... The narrowness of the channel restricts the lateral spatial dimension. Consequently, the lateral velocity is forced by the quantum uncertainty principle to become large. This causes the quantum cloud of possibilities associated with the calcium ion to fan out over an increasing area as it moves away from the tiny channel to the target region... This spreading of this ion wave packet means that the ion may or may not be absorbed on the small triggering site. Accordingly, the contents of the vesicle may or may not be released... the quantum state of the brain splits into a vast host of classically conceived possibilities, one for each possible combination of the release-or-no-release options at each of the nerve terminals... a huge smear of classically conceived possibilities," Jeffrey M. Schwartz's, Henry P. Stapp's and Mario Beauregard's "Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction," Philosophical Transactions of the Royal Society (2005). "... waves make diffraction patterns precisely because multiple waves can be at the same place at the same time, and a given wave can be at multiple places at the same time... 
by definition particles are localized entities that take up space, they can be here or there, but not in two places at once. However it turns out that particles can produce diffraction patterns under specific circumstances... a given particle can be in a state of superposition... to be in a state of superposition between two positions, for example, is not to be here or there or even here and there, but rather it is to be indeterminately here-there. That is, it is not simply that the position is unknown, but rather there is no fact of the matter as to whether it is here or there... it is a matter of ontological indeterminacy and not merely epistemological uncertainty... patterns of difference... are arguably at the core of what matter is and are at the heart of how quantum physics understands the world... the quantum probabilities are calculated by taking account of all the possible paths connecting the points. In other words, a given particle that starts out here and winds up there is understood to be in a superposition of all possible paths between the two points. Or in its four dimensional quantum field theory elaboration, all possible space-time histories... the very meaning of superposition is that all possible histories are happening together, they all coexist and mutually contribute to this overall pattern or else there wouldn't be a diffraction pattern..." Karen Barad's "Troubling Time/s & Ecologies of Nothingness," European Graduate School Video Lectures (YouTube), my transcription. "Quantum physics opens up another possibility beyond the relatively familiar phenomena of spatial diffraction, namely, temporal diffraction. The existence of temporal diffraction is due to a less well-known indeterminacy principle than the usual position/momentum indeterminacy principle... something called the energy/time indeterminacy principle. This indeterminacy principle plays a key role in quantum field theory... 
temporalities are not merely multiple, but rather temporalities are specifically entangled and threaded through one another such that there is no determinate answer to the question what time is it?," Karen Barad's "Troubling Time/s & Ecologies of Nothingness," European Graduate School Video Lectures (YouTube), my transcription. "During the waning decades of the 20th century, the most murderous century by some accounts in history, the notion that the past might be open to revision through a quantum erasure came to the fore. The quantum erasure experiment is a variation of the two slit diffraction experiment, an experiment which Feynman said contains all the mysteries of quantum physics. Against this fantastic claim of the possibility of erasure, I will claim that in paying close attention to the material labours entailed, the claim of erasure possibility fades, at least full erasure, while at the same time bringing to the fore a relational ontology sensibility to questions of time, memory and history... the nature of time and being, or rather time-being itself, is in question and can't be assumed. What this experiment tells us is not simply that a given particle would have done something different in the past, but that the very nature of its being, its ontology, in the past remains open to future reworkings... In particular I argue that this experiment offers empirical evidence for a relational ontology or perhaps more accurately a hauntology as against a metaphysics of presence... Remarkably this experiment makes evident that entanglement survives the measurement process and furthermore that material traces of attempts at erasure can be found in tracing the entanglements... While the past is never finished, and the future is not what will unfold, the world holds, or rather is, the memories of its iterated reconfigurings," Karen Barad's "Troubling Time/s & Ecologies of Nothingness," European Graduate School Video Lectures (YouTube), my transcription. 
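Barad's "superposition of all possible paths" can be caricatured with just two paths. In the toy model below the complex amplitudes for the two slit-to-screen routes are added and squared, which already yields the fringe pattern of the two-slit experiment; the wavelength, slit spacing and screen distance are arbitrary illustrative numbers of my choosing.

```python
import numpy as np

# Two-slit toy model: intensity = |exp(i*k*r1) + exp(i*k*r2)|^2.
wavelength = 1.0                      # arbitrary units
k = 2 * np.pi / wavelength
d = 5.0                               # slit separation (illustrative)
L = 100.0                             # distance to the screen (illustrative)
xs = np.linspace(-30, 30, 601)        # positions along the screen
r1 = np.sqrt(L**2 + (xs - d / 2)**2)  # path length through slit 1
r2 = np.sqrt(L**2 + (xs + d / 2)**2)  # path length through slit 2
amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)  # superposed paths
intensity = np.abs(amplitude)**2
# Bright fringes (near 4) alternate with dark ones (near 0):
print(round(intensity.max(), 3), round(intensity.min(), 3))
```

Where the two path lengths agree the amplitudes add and the intensity reaches 4; where they differ by half a wavelength the amplitudes cancel and the intensity drops to zero. Blocking one slit removes the cancellation entirely, which is the sense in which all contributing histories matter to the pattern.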
"If classical physics insists that the void has no matter and no energy, the quantum principle of ontological indeterminacy, and particularly the indeterminacy relation between energy and time, call into question the existence of such a zero energy, zero matter state... the indeterminacy principle allows for fluctuations of the vacuum... the vacuum is far from empty, it is filled with all possible indeterminate yearnings of space-time mattering... we can understand vacuum fluctuation in terms of virtual particles. Virtual particles are the quanta of the vacuum fluctuations... the void is a spectral ground, not even nothing can be free of ghosts... there is an infinite number of possibilities, but not everything is possible. The vacuum isn't empty but neither is anything in it... particles together with their antiparticles, in pairs, can be created out of the vacuum by putting the right amount of energy into the vacuum... So, similarly, particles together with their antiparticles, in pairs, can go back into the vacuum, emitting the excess energy," Karen Barad's "Troubling Time/s & Ecologies of Nothingness," European Graduate School Video Lectures (YouTube), my transcription. Labyrinthine corridors, rooms: "This was on Friday afternoon. Saturday morning I awoke early and read the two papers. Bohm, in simple clear language, declared that indeed there were conceptual problems in both macro- and microphysics, and that they were not to be swept under the carpet... And, further, Bohm suggested that the root of those problems was the fact that conceptualizations in physics had for centuries been based on the use of lenses which objectify (indeed the lenses of telescopes and microscopes are called objectives). 
Lenses make objects, particles," Karl Pribram's "The Implicate Brain"; "An equally important step in understanding came at a meeting at the University of California in Berkeley, in which Henry Stapp and Geoffrey Chew of the Department of Physics pointed out that most of quantum physics, including their bootstrap formulations based on Heisenberg's scatter matrices, were described in a domain which is the Fourier transform of the spacetime domain. This was of great interest to me because Russell and Karen DeValois of the same university had shown that the spatial frequency encoding displayed by cells of the visual cortex was best described as a Fourier transform of the input pattern. ***The Fourier theorem states that any pattern, no matter how complex, can be analyzed into regular waveform components of different frequencies, amplitudes, and (phase) relations among frequencies. Further, given such components, the original pattern can be reconstructed. This theorem was the basis for Gabor's invention of holography," Karl Pribram's "The Implicate Brain"; [***when different wave patterns meet, they add up to form new patterns; you can analyse complex wave patterns as if they were a superposition of simpler waves, which have, for instance, a definite, uniform wavelength; the illustration at left is taken from the site of professor John D. Norton (University of Pittsburgh): "Einstein for Everyone"; it is important to note that real wave patterns studied in physics are much more complex than this two-dimensional representation, and that they are ultimately formed by something that is neither strictly speaking a wave nor a particle as these are classically understood; I shall also say that not all John D. 
Norton's explanations given on the referred site seem very enlightening to me] (see picture above) From Maxwell's equations, we should expect an infinite number of frequencies of electromagnetic waves (or radiation, which includes visible light, and waves whose frequencies are below the one which produces the red colour, such as radio waves, and also waves whose frequencies are above the one which produces the violet colour, such as gamma rays). All these electromagnetic waves travel at what is called the speed of light (frequency varies inversely with wavelength, their product being that constant speed) and constitute the electromagnetic spectrum. High frequency means also high photon energy. The photon energy is related to how single atoms of different material objects can absorb and emit electromagnetic waves, which always happens in quantum discrete amounts. Like concrete musical instruments, atoms can produce oscillations only in certain restricted ways, and they do so very energetically. The physical production of what we perceive as forms and colours has to do, however, more directly with the way electromagnetic waves travel much more freely and continuously in space, through, for instance, air or water, interfering (constructively or destructively) with one another, interacting with molecules; and we are talking about electromagnetic waves of lower energy and frequencies, which are visible. What we see, however, isn't everything. Does the continuum (infinitely divisible) preclude plurality? Does the discrete preclude unity? Of course not! Except for the lack of imagination of the purist & prudish. But thank goodness, in philosophy we also have Leibniz's "Natura non facit saltus," and Peirce's synechism (everything is connected), the immemorial and unending, irreducible battles between the one and the many. Why should people be so afraid of a conundrum of straight lines, curves, and points (which, besides these one and two dimensions, can be extrapolated to n)? 
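The statement that high frequency means high photon energy is quantitative: c = λν and E = hν, so E = hc/λ. The tiny script below evaluates this for a few sample wavelengths across the visible range; the sample points are my choice, not the text's.

```python
# Photon energy E = hc / wavelength, using hc ≈ 1239.84 eV·nm.
HC_EV_NM = 1239.84   # Planck constant times speed of light, in eV·nm
C = 2.998e8          # speed of light, m/s

for name, lam_nm in [("violet", 400), ("green", 500), ("red", 700)]:
    freq_hz = C / (lam_nm * 1e-9)    # c = wavelength * frequency
    energy_ev = HC_EV_NM / lam_nm    # E = h*f = hc / wavelength
    print(f"{name}: {freq_hz:.2e} Hz, {energy_ev:.2f} eV")
```

Violet light (~400 nm) carries roughly 3.1 eV per photon and red (~700 nm) roughly 1.8 eV, which is why higher-frequency radiation such as gamma rays is so much more energetic per quantum, while radio waves are gentle enough to pass through us unnoticed.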
Infinitesimals, differentials, and limits, what is the real difference? The epsilon-delta definition (Cauchy, Bolzano, Weierstrass) and nonstandard analysis (Abraham Robinson) are in the end perfectly compatible. Add to that synthetic differential geometry, or smooth infinitesimal analysis (F. W. Lawvere), whatever! The actual infinite: everything else starts from it! Just don't be afraid of lingo; the science wars are an affair of securing university bonuses in times of economic havoc. And don't forget that continuity doesn't have to be only local, that is, the chaosmos is full of nonlocal connections, the innermost separations! What matters is attitude, not content or specific formulations. ["Whenever a point x is within δ units of c, f(x) is within ε units of L," graphic and definition from Wikipedia's epsilon-delta entry] ["Infinitesimals (ε) and infinites (ω) on the hyperreal number line (1/ε = ω/1)," graphic and definition from Wikipedia's hyperreal number entry] (see picture above) "Cusanus... took the circle to be an infinilateral regular polygon, that is, a regular polygon with an infinite number of (infinitesimally short) sides... The idea of considering a curve as an infinilateral polygon was employed by a number of later thinkers, for instance, Kepler, Galileo and Leibniz... Traditionally, geometry is the branch of mathematics concerned with the continuous and arithmetic (or algebra) with the discrete. The infinitesimal calculus that took form in the 16th and 17th centuries, which had as its primary subject matter continuous variation, may be seen as a kind of synthesis of the continuous and the discrete, with infinitesimals bridging the gap between the two. 
The widespread use of indivisibles and infinitesimals in the analysis of continuous variation by the mathematicians of the time testifies to the affirmation of a kind of mathematical atomism which, while logically questionable, made possible the spectacular mathematical advances with which the calculus is associated. It was thus to be the infinitesimal, rather than the infinite, that served as the mathematical stepping stone between the continuous and the discrete," John L. Bell's "Continuity and Infinitesimals" (Stanford Encyclopedia of Philosophy) [I like this passage very much, and this is a very useful article, but I'm not subscribing in detail to all ideas Bell developed there]; "... science needs calculus; calculus needs the continuum; the continuum needs a very careful definition; and the best definition requires there to be actual infinities (not merely potential infinities) in the micro-structure and the overall macro-structure of the continuum... Informally expressed [for Dedekind], any infinite set can be matched up to a part of itself; so the whole is equivalent to a part. This is a surprising definition because, before this definition was adopted, the idea that actually infinite wholes are equinumerous with some of their parts was taken as clear evidence that the concept of actual infinity is inherently paradoxical... [Cantor's] new idea [similar to Dedekind's] is that the potentially infinite set presupposes an actually infinite one. If this is correct, then Aristotle’s two notions of the potential infinite and actual infinite have been redefined and clarified," Bradley Dowden's "The Infinite" (Internet Encyclopedia of Philosophy) [I like this passage very much, and this is a very useful article, but I'm not subscribing in detail to all ideas Dowden developed there]; "... in Quantum Electrodynamics... processes of much greater complexity [than a simple electron-electron scattering] could intervene in the scattering process. 
For example, the exchanged photon could convert to an electron-positron pair which would subsequently recombine... or one of the incoming electrons might emit a photon and reabsorb it on the way out... in general, the exchange of arbitrarily large numbers of photons, electrons and positrons can contribute to electromagnetic interactions... very complicated multiparticle exchanges have to be taken into account in the analysis of physical systems. Indeed, no exact solutions to the Quantum Electrodynamics are known, nor have such solutions ever been shown rigorously to exist [but precise approximations are possible]," Andrew Pickering's Constructing Quarks (p. 63); "... in quantum field theory all forces are mediated by particle exchange... It is equally important to stress that the exchanged particles... are not observable... To explain why this is so, it is necessary to make a distinction between 'real' and 'virtual' particles... particles with unphysical values of energy and momentum are said to be 'virtual' or 'off mass-shell' particles. In classical physics they could not exist at all... In quantum physics, in consequence of the Uncertainty Principle, virtual particles can exist, but only for an infinitesimal and experimentally undetectable length of time. In fact, the lifetime of a virtual particle is inversely dependent upon how far its mass diverges from its physical value," Andrew Pickering's Constructing Quarks (p. 64-65); "In quantum mechanics the particles themselves can be represented as fields. An electron, for example, can be considered a packet of waves with some finite extension in space. Conversely, it is often convenient to represent a quantum-mechanical field as if it were a particle. The interaction of two particles through their interpenetrating fields can then be summed up by saying the two particles exchange a third particle, which is called the quantum of the field. 
For example, when two electrons, each surrounded by an electromagnetic field, approach each other and bounce apart, they are said to exchange a photon, the quantum of the electromagnetic field. The exchanged quantum has only an ephemeral existence... The larger their energy, the briefer their existence. The range of an interaction is related to the mass of the exchanged quantum. If the field quantum has a large mass, more energy must be borrowed in order to support its existence, and the debt must be repaid sooner lest the discrepancy be discovered. The distance the particle can travel before it must be reabsorbed is thereby reduced and so the corresponding force has a short range. In the special case where the exchanged quantum is massless [such as a photon] the range is infinite," Gerard 't Hooft's "Gauge Theories of the Forces between Elementary Particles" (Scientific American, vol. 242, n. 6, 1980, pp. 104-141); "It was not immediately apparent that quantum electrodynamics could qualify as a physically acceptable theory. One problem arose repeatedly in any attempt to calculate the result of even the simplest electromagnetic interactions, such as the interaction between two electrons. The likeliest sequence of events in such an encounter is that one electron emits a single virtual photon and the other electron absorbs it. Many more complicated exchanges are also possible, however; indeed, their number is infinite. For example, the electrons could interact by exchanging two photons, or three, and so on. The total probability of the interaction is determined by the sum of the contributions of all the possible events... Perhaps the best defense of the theory is simply that it works very well. 
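't Hooft's inverse relation between the exchanged quantum's mass and the force's range can be put in rough numbers: the energy loan ΔE ≈ mc² must be repaid within Δt ≈ ħ/ΔE, so the quantum travels at most r ≈ cΔt = ħc/(mc²). The pion mass below is used purely as a worked example (Yukawa's classic estimate of the nuclear-force range); the helper function is mine, not from the quoted article.

```python
# Range of a force mediated by a massive quantum: r ~ hbar*c / (m*c^2).
HBAR_C_MEV_FM = 197.33   # hbar*c in MeV·fm

def range_fm(mass_mev: float) -> float:
    """Rough range (femtometres) of a force whose quantum has the given mass (MeV)."""
    return HBAR_C_MEV_FM / mass_mev

print(round(range_fm(139.6), 2))   # pion: ~1.4 fm, the nuclear-force scale
```

For a massless quantum such as the photon the denominator vanishes and the range is formally infinite, matching the quoted special case; the heavier the exchanged quantum, the shorter the reach of the corresponding force.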
It has yielded results that are in agreement with experiments to an accuracy of about one part in a billion, which makes quantum electrodynamics the most accurate physical theory ever devised," Gerard 't Hooft's "Gauge Theories of the Forces between Elementary Particles" (Scientific American, vol. 242, n. 6, 1980, pp. 104-141); "If an electron enters a medium composed of molecules that have positively and negatively charged ends, for example, it will polarize the molecules. The electron will repel their negative ends and attract their positive ends, in effect screening itself in positive charge. The result of the polarization is to reduce the electron's effective charge by an amount that increases with distance... The uncertainty principle of Werner Heisenberg suggests... that the vacuum is not empty. According to the principle, uncertainty about the energy of a system increases as it is examined on progressively shorter time scales. Particles may violate the law of the conservation of energy for unobservably brief instants; in effect, they may materialize from nothingness. In QED [Quantum Electrodynamics] the vacuum is seen as a complicated and seething medium in which pairs of charged "virtual" particles, particularly electrons and positrons, have a fleeting existence. These ephemeral vacuum fluctuations are polarizable just as are the molecules of a gas or a liquid. Accordingly QED predicts that in a vacuum too electric charge will be screened and effectively reduced at large distances," Chris Quigg's Elementary Particles and Forces (Scientific American, vol. 252, n. 4, 1985, pp. 84-95); "The cloud of probabilities that accompanies the electrons between one interaction and the next is somewhat like a field. But the fields of Faraday and Maxwell are, in turn, made of grains: photons. Not only are particles in a certain sense diffused through space like fields, but fields also interact like particles. The notions of field and particle, separated by Faraday and Maxwell, end up converging in quantum mechanics. The way this happens in the theory is elegant: Dirac's equations determine which values each variable can assume. Applied to the energy of Faraday's lines, they tell us that this energy can assume only certain values and not others... Electromagnetic waves are indeed vibrations of Faraday's lines, but they are also, on a small scale, swarms of photons... Conversely, the electrons too, and all the particles of which the world is made, are 'quanta' of a field... similar to that of Faraday and Maxwell," Carlo Rovelli's A realidade não é o que parece [Reality Is Not What It Seems] (Objetiva, 2014, p. 125); "The 'cloud' that represents the points of space where the electron is likely to be found is described by a mathematical object called the 'wave function.' The Austrian physicist Erwin Schrödinger wrote an equation that shows how this wave function evolves in time. Schrödinger hoped that the 'wave' would explain the strangenesses of quantum mechanics... Even today some try to understand quantum mechanics by thinking that reality is Schrödinger's wave. But Heisenberg and Dirac soon understood that this path is mistaken. The [wave] function is not in physical space; it is in an abstract space formed by all the possible [virtual!] configurations of the system... The reality of the electron is not a wave [?]: it is this intermittent appearing in collisions," Carlo Rovelli's A realidade não é o que parece (Objetiva, 2014, p. 271); "When we say that we wish to make sense of something we mean to put it into spacetime terms, the terms of Euclidean geometry, clock time, etc. The Fourier transform domain is potential to this sensory domain. The waveforms which compose the order present in the electromagnetic sea which fills the universe make up an interpenetrating organization similar to that which characterizes the waveforms "broadly cast" by our radio and television stations. 
Capturing a momentary cut across these airwaves would constitute their hologram. The broadcasts are distributed and at any location they are enfolded among one another. In order to make sense of this cacophony of sights and sounds, one must tune in on one and tune out the others. Radios and television sets provide such tuners. Sense organs provide the mechanisms by which organisms tune into the cacophony which constitutes the quantum potential organization of the electromagnetic energy which fills the universe," Karl Pribram's "The Implicate Brain"; "... the cloud chamber photograph does not reveal a “solid” particle leaving a track. Rather it reveals the continual unfolding of process with droplets forming at the points where the process manifests itself. Since in this view the particle is no longer a point-like entity, the reason for quantum particle interference becomes easier to understand. When a particle encounters a pair of slits, the motion of the particle is conditioned by the slits even though they are separated by a distance that is greater than any size that could be given to the particle. The slits act as an obstruction to the unfolding process, thus generating a set of motions that gives rise to the interference pattern," Basil J. Hiley's "Mind and matter: aspects of the implicate order described through algebra" (in K. H. Pribram's and J. King's Learning as Self-Organisation, New Jersey, Lawrence Erlbaum Associates, 1996, pp. 569-86); "Let us... ask what the algebraic structure tells you about the underlying phase space. Because the algebra is non-commutative there is no single underlying manifold. That is a mathematical result. Thus if we take the algebra as primary then there is no underlying manifold we can call the phase space. But we already know this. 
At present we say this arises because of the 'uncertainty principle,' but nothing is 'uncertain,'" Basil Hiley's "From the Heisenberg Picture to Bohm: a New Perspective on Active Information and its relation to Shannon Information" (in A. Khrennikov, Proc. Conf. Quantum Theory: reconsideration of foundations, Sweden, Växjö University Press, pp. 141-162, 2002). "What Gelfand showed was that you could either start with an a priori given manifold and construct a commutative algebra of functions upon it or one could start with a given commutative algebra and deduce the properties of a unique underlying manifold. If the algebra is non-commutative it is no longer possible to find a unique underlying manifold. The physicist’s equivalent of this is the uncertainty principle when the eigenvalues of operators are regarded as the only relevant physical variables. What the mathematics of non-commutative geometry tells us is that in the case of a non-commutative algebra all we can do is to find a collection of shadow manifolds... The appearance of shadow manifolds is a necessary consequence of the non-commutative structure of the quantum formalism," Basil Hiley's "Phase Space Descriptions of Quantum Phenomena" (in A. Khrennikov, Quantum theory: Reconsiderations of Foundations, Vaxjo University Press, 2003). the odd transformation of Der Herr Warum (Gödel with Resnais); the only three types of ingenuity; why self-help books are not to be dismissed; the most auspicious tetrahedron; what is REAL space? what is REAL number? Timothy Leary in the 1990s; 5G?! Get real... 
list of charming scientists/engineers; pick a soul (as you wish); - as a layman: Orsay & Centre Pompidou; - view from Berthe Trépat's apartment; list of musical triggers; Dark Consciousness; The Doors of Perception; Structuralism, Poststructuralism; list of figures of primordial chaos (Deleuze); Brazilian Perspectivism; Piano Playing (Kochevitsky); - L'Affirmation de l'âne (review of Smolin/Unger's The Singular Universe);
The UN is Hopeless
Little hurts more than losing long-held and strongly felt hopes, such as the hopes most people have for their children and grandchildren. But there comes a time in such despair when, to go on living, the anguished must reject such hopes, replacing them with new hopes, held with less idealism and more realism, with less passion and more wisdom. Yet, in the process one dies a little – or a lot. As Samuel Taylor Coleridge wrote in The Rime of the Ancient Mariner:
He went like one that hath been stunned,
And is of sense forlorn:
A sadder and a wiser man
He rose the morrow morn.
When I memorized that stanza, more than 50 years ago, I was president of my high school’s UN club. At the time, we were full of hopes for the future world, heralded by the UN’s Universal Declaration of Human Rights. But now, what a sad depth to which the UN has sunk. As former UN Secretary-General Kofi Annan said with dismay about the UN’s former Commission on Human Rights: We have reached a point at which the Commission’s declining credibility has cast a shadow on the reputation of the United Nations system as a whole, and where piecemeal reforms will not be enough. In an attempt to remove that “shadow”, in 2006 the UN General Assembly attempted to reform the Commission on Human Rights (CHR) by replacing it with the Human Rights Council (HRC), but as is illustrated below, it was only a “piecemeal” reform, with the only significant change being a shuffling of the letters in its acronym. Below is quoted text from pp. 68-73 of a UN document from the Seventh Session of the HRC. To the quotation I’ve added the colored notes in “square brackets” in hopes of prodding readers to consider how ludicrous and even despicable this resolution is. It was introduced by representatives from Pakistan and Egypt and passed by a vote of 21 to 10, with 14 abstentions. 7/19. 
Combating defamation of religions [and defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense] The Human Rights Council, Recalling the 2005 World Summit Outcome adopted by the General Assembly in its resolution 60/1 of 24 October 2005, in which the Assembly emphasized the responsibilities of all States, in conformity with the Charter of the United Nations, to respect human rights and fundamental freedoms for all, without distinction of any kind as to race, color, sex, language or religion, political or other opinion, national or social origin, property, birth or other status, and acknowledged the importance of respect and understanding for religious and cultural diversity throughout the world [note that the statement is about “the importance of respect and understanding for… diversity”, not necessarily respect for religions and cultures!], Recalling also the Durban Declaration and Programme of Action, adopted by the World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance in September 2001… Recalling further the Declaration on the Elimination of All Forms of Intolerance and of Discrimination Based on Religion or Belief, proclaimed by the General Assembly in its resolution 36/55 of 25 November 1981, Recognizing the valuable contribution of all religions to modern civilization [somebody’s gotta be kidding!] and the contribution that dialogue among civilizations can make to an improved awareness and understanding of the common values shared by all humankind, Noting the Declaration adopted by the Islamic Conference of Foreign Ministers at its thirty-fourth session in Islamabad, in May 2007, which condemned the growing trend of Islamophobia [are we “unbelievers” (in nonsense) not to fear an ideology that, in its “holy book”, unrelentingly calls for our murder?!] 
and systematic discrimination against the adherents of Islam and emphasized the need to take effective measures to combat defamation of religions [how can one “defame” ideologies that are indefensible, being nothing but childish superstitions, scientific speculations by savages, babblings of deranged psychopaths, and legalistic mumbo-jumbo concocted by megalomaniacs?] Noting also the final communiqué adopted by the Organization of the Islamic Conference [is this an Islamic or a UN document?] at its eleventh summit, in Dakar, in March 2008, in which the Organization expressed concern at the systematically negative stereotyping of Muslims and Islam and other divine religions [and those who believe in Santa Claus, fairy godmothers, elves, witches, and sundry other supernatural silliness, such as ghosts, angels, and gods], and denounced the overall rise in intolerance and discrimination against Muslim minorities [and others who require no evidence to form their strongly held beliefs], which constitute an affront to human dignity [for certainly it’s undignified to hold beliefs more strongly than relevant, reliable evidence can support] and run counter to the international human rights instruments, Recalling the joint statement of the Organization of the Islamic Conference, the European Union and the Secretary-General of 7 February 2006, in which they recognized the need, in all societies, to show sensitivity and responsibility in treating issues of special significance for the adherents of any particular faith, even by those who do not share the belief in question [and in particular, sensitivity to those delusional people who are convinced that they've been abducted by aliens, that angels communicate with people, and/or that Santa Claus really does live at the North Pole, e.g., by sensitively and responsibly getting them psychiatric help], Reaffirming the call made by the President of the General Assembly in his statement of 15 March 2006 that, in the wake of existing 
mistrust and tensions, there is a need for dialogue and understanding among civilizations, cultures and religions to commit to working together to prevent provocative or regrettable incidents and to develop better ways of promoting tolerance, respect for and freedom of [and from] religion and belief [including “respect” for all ideas that are patently absurd?], Welcoming all international and regional initiatives to promote cross-cultural and interfaith harmony, including the Alliance of Civilizations and the International Dialogue on Interfaith Cooperation and their valuable efforts [such as?] towards the promotion of a culture of peace and dialogue at all levels, Welcoming also the report by the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance on the situation of Muslims and Arabs in various parts of the world [when everyone knows that we should be tolerant of people who have made it abundantly clear that they desire to rule the world]… Welcoming further the reports of the Special Rapporteur submitted to the Council at its fourth and sixth sessions… in which he draws the attention of Member States to the serious nature of the defamation of all religions [and the defamation of fairy tales by Hans Christian Anderson and others] and to the promotion of the fight against these phenomena by strengthening the role of interreligious and intercultural dialogue and promoting reciprocal understanding [of each other’s myths and fairy tales, absurd antihuman laws, and defunct scientific theories] and joint action to meet the fundamental challenges of development, peace and the protection and promotion of human rights, as well as the need to complement legal strategies, Reiterating the call made by the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance to Member States to wage a systematic campaign against incitement to racial and religious hatred by 
maintaining a careful balance between the defense of secularism and respect for freedom of [and from] religion and by acknowledging and respecting the complementarity of all the freedoms embodied in the International Covenant on Civil and Political Rights… Emphasizing that States, non-governmental organizations, religious bodies and the media have an important role to play in promoting tolerance and freedom of [and from] religion and belief through education [except, of course, in the case of “the one true religion”, disbelievers of which and apostates from which are to be killed], Noting with concern that defamation of religions [and defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense] is among the causes of social disharmony and instability, at the national and international levels, and leads to violations of human rights, Noting with deep concern the increasing trend in recent years of statements attacking religions, including Islam and Muslims, in human rights forums [when obviously if you’re convinced that any religion is stupid, you have no right to express your opinion]… 1. Expresses deep concern at the negative stereotyping of all religions [and defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense], manifestations of intolerance and discrimination in matters of religion or belief [I mean, after all, just because you cling to stupid ideas, doesn’t mean that you cling to stupid ideas – I guess]; 2. Also expresses deep concern at attempts to identify Islam with terrorism, violence, and human rights violations [I mean, just because such identification is abundantly clear in Islam’s “holy book”, the Koran, doesn’t mean that it’s true – I guess], and emphasizes that equating any religion with terrorism should be rejected and combated by all at all levels [for after all, people will next be calling a spade a spade, and we can’t have that]; 3. 
Further expresses deep concern at the intensification of the campaign of defamation of religions and the ethnic and religious profiling of Muslim minorities in the aftermath of the tragic events of 11 September 2001 [because, after all, just because all of the September 11th terrorists were Muslims and behaved in a manner consistent with Islamic teachings doesn’t mean that they were Muslims following Islamic teachings – I guess]; 4. Expresses its grave concern at the recent serious instances of deliberate stereotyping of religions, their adherents, and sacred persons in the media and by political parties and groups in some societies, and at the associated provocation and political exploitation [after all, when you have “sacred persons” such as Sir Isaac Newton defamed by deliberately provocative people such as Einstein, then who will be safe from criticism?!]; 5. Recognizes that, in the context of the fight against terrorism, defamation of religions [and defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense] becomes an aggravating factor that contributes to the denial of fundamental rights and freedoms of target groups and their economic and social exclusion; 6. Expresses concern at laws or administrative measures that have been specifically designed to control and monitor Muslim minorities, thereby stigmatizing them and legitimizing the discrimination that they experience [for after all, if people want to be terrorists, they should have the freedom to be terrorists]; 7. Strongly deplores physical attacks and assaults on businesses, cultural centers and places of worship of all religions and targeting of religious symbols; 8. 
Urges States to take actions to prohibit the dissemination, including through political institutions and organizations, of racist and xenophobic ideas and material aimed at any religion or its followers that constitute incitement to racial and religious hatred, hostility or violence [so, from here on out, everybody, stop distributing the Bible, the Koran, and the Book of Mormon, cause they’re all loaded with such crap]; 9. Also urges States to provide, within their respective legal and constitutional systems, adequate protection against acts of hatred, discrimination, intimidation and coercion resulting from the defamation of any religion [and defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense], to take all possible measures to promote tolerance and respect for all religions [and defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense] and their value systems [such as: “Kill the infidels”] and to complement legal systems with intellectual and moral strategies to combat religious hatred and intolerance [certainly we should combat religious hatred and intolerance, such as is promoted in all “holy books”]; 10. Emphasizes that respect of religions [and defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense] and their protection from contempt is an essential element conducive for the exercise by all of the right to freedom of thought, conscience and religion [yes siree: defunct ideas MUST BE protected – otherwise, for goodness sake, people will start thinking for themselves, and we can’t have that]; 11. 
Urges all States to ensure that all public officials, including members of law enforcement bodies, the military, civil servants and educators, in the course of their official duties, respect all religions and beliefs [and all defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense] and do not discriminate against persons on the grounds of their religion or belief [I mean, just because some people are bonkers doesn’t mean you’re to consider them bonkers] and that all necessary and appropriate education or training is provided; 12. Emphasizes that, as stipulated in international human rights law, everyone has the right to freedom of expression [except, of course, those who express their opinions that anyone who believes in any god is bonkers], and that the exercise of this right carries with it special duties and responsibilities [not to criticize any religion or defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense] and may therefore be subject to certain restrictions [e.g., laws of blasphemy against defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense] but only those provided by law and necessary for the respect of the rights or reputations of others, or for the protection of national security or of public order, or of public health or morals [which certainly should be big enough loopholes to permit any theocrat to drive through with columns of tanks and armored personnel carriers]; 13. Reaffirms that general comment No. 
15 of the Committee on the Elimination of Racial Discrimination, in which the Committee stipulates that the prohibition of the dissemination of all ideas based upon racial superiority or hatred [such as are contained in the Bible, the Koran, and the Book of Mormon] is compatible with the freedom of opinion and expression, is equally applicable to the question of incitement to religious hatred [except, of course, for one minor detail: people have no control over their ethnicity, but they do have (or should have) control over the stupidity in which they profess “belief”]; 14. Deplores the use of printed, audio-visual and electronic media, including the Internet, and of any other means to incite acts of violence, xenophobia or related intolerance and discrimination towards Islam or any religion [or any defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense]; 15. Invites the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance to continue to report on all manifestations of defamation of religions [and defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense], and in particular on the serious implications of Islamophobia, on the enjoyment of all rights to the Council at its ninth session; 16. Requests the High Commissioner for Human Rights to report on the implementation of the present resolution and to submit a study compiling relevant existing legislations and jurisprudence concerning defamation of and contempt for religions [and defunct scientific theories, superstitions, fairy tales, astrology, and similar nonsense] to the Council at its ninth session. Believe it or not, the above resolution (for some strange reason, without the red remarks) was adopted by the UN’s Human Rights Council. 
Nations in favor included: Azerbaijan, Bangladesh, Cameroon, China, Cuba, Djibouti, Egypt, Indonesia, Jordan, Malaysia, Mali, Nicaragua, Nigeria, Pakistan, Philippines, Qatar, Russian Federation, Saudi Arabia, Senegal, South Africa and Sri Lanka. Wikipedia states: “Of the Council’s members from the Organization of the Islamic Conference, 16 of 17 voted for the resolution, along with China, Russia, and South Africa.” That diplomats from China, the Philippines, Russia, and South Africa voted for the resolution is a disgrace to the people that they supposedly represent. Nations voting against the resolution were Canada, France, Germany, Italy, Netherlands, Romania, Slovenia, Switzerland, Ukraine and the United Kingdom. So now, everybody, take note: according to the above resolution of the UN’s Human Rights Council, religions have “rights”! The Council should rename itself: The UN Council for the Rights of Humans and Religions. Roy W. Brown of the International Humanist and Ethical Union summarized such stupidity well: Again, why protect just religious ideas? Why not all ideas? Shouldn’t all ideas have as many “rights” as religious ideas? So, will the Council entertain its renaming as The UN Council for the Rights of Humans, Religions, and Other Ideas? But then, what would happen if ideas conflict? Suppose, for example, that someone supports the idea (as strange as it might seem) that all religions are stupid, infantile, holdovers from (as Richard Dawkins said) “the cry-baby phase” of human development. Suppose someone supports the idea expressed by Joseph Lewis: Let me tell you that religion is the cruelest fraud ever perpetrated upon the human race. It is the last of the great schemes of thievery that man must legally prohibit so as to protect himself from the charlatans who prey upon the ignorance and fears of the people. The penalty for this type of extortion should be as severe as it is of other forms of dishonesty. 
Suppose someone supports the idea expressed by Henry Mencken: Religion is fundamentally opposed to everything I hold in veneration – courage, clear thinking, honesty, fairness, and, above all, love of the truth… God is the immemorial refuge of the incompetent, the helpless, the miserable. They find not only sanctuary in His arms, but also a kind of superiority, soothing to their macerated egos; He will set them above their betters. Suppose someone supports the idea expressed by Robert Ingersoll: The doctrine that future happiness depends upon belief is monstrous. It is the infamy of infamies. The notion that faith in Christ [or Allah] is to be rewarded by an eternity of bliss, while a dependence upon reason, observation and experience merits everlasting pain, is too absurd for refutation, and can be relieved only by that unhappy mixture of insanity and ignorance, called “faith.” Suppose someone supports the idea expressed by Clarence Darrow: Suppose someone supports the idea expressed by Mikhail A. Bakunin: Religion is a collective insanity. Suppose someone supports the idea expressed by Thomas Edison: So far as religion of the day is concerned, it’s a damned fake… Religion is all bunk. Suppose someone supports the idea expressed by W.K. Clifford: It’s wrong always, everywhere and for everyone to believe anything upon insufficient evidence. Suppose someone supports the idea expressed by William Archer: Suppose someone supports the idea expressed by Bertrand Russell: My own view of religion is that of Lucretius. I regard it as a disease born of fear and as a source of untold misery to the human race… I am as firmly convinced that religions do harm as I am that they are untrue. 
Suppose someone supports the idea expressed by Carlespie Mary Alice McKinney: Suppose someone supports the idea expressed by Gene Roddenberry: Suppose someone supports the idea expressed by Frank Zappa: If you want to get together in any exclusive situation and have people love you, fine – but to hang all this desperate sociology on the idea of The Cloud-Guy who has The Big Book, who knows if you’ve been bad or good – and CARES about any of it – to hang it all on that, folks, is the chimpanzee part of the brain working. Suppose someone supports the idea expressed by Joseph Daleiden: In the final analysis all theology, whether Christian or otherwise, is a marvelous exercise in logic based on premisses that are no more verifiable – or reasonable – than astrology, palmistry, or belief in the Easter Bunny. Theology pretends to search for truth, but no method could lead a person farther away from the truth than that intellectual charade. The purpose of theology is first and foremost to perpetuate the religious status quo. Religion, in turn, seeks to maintain the social stability necessary for its own preservation. Suppose someone supports the idea expressed by President Thomas Jefferson: Religions are all alike – founded upon fables and mythologies. Suppose someone supports the idea expressed by President James Madison: Suppose someone supports the idea expressed by President Abraham Lincoln: Suppose someone supports the idea expressed by Prime Minister Winston Churchill: Individual Muslims may show splendid qualities – but the influence of the religion paralyses the social development of those who follow it. No stronger retrograde force exists
in the world. Suppose someone supports the idea expressed by Albert Einstein: The Jewish religion like all other religions is an incarnation of the most childish superstitions. Suppose someone supports the idea expressed by Richard Dawkins: If all the achievements of theologians were wiped out tomorrow, would anyone notice the difference? Even bad achievements of scientists, the bombs and sonar-guided whaling vessels, *work*! The achievements of theologians don’t do anything, don’t affect anything, don’t mean anything. What makes anyone think that “theology” is a subject at all? Suppose someone supports the idea expressed by Sam Harris: We have names for people who have many beliefs for which there is no rational justification. When their beliefs are extremely common we call them “religious”; otherwise, they are likely to be called “mad”, “psychotic” or “delusional”. Suppose someone supports the idea expressed by Sunand Tryambak Joshi: The atheist, agnostic, or secularist… should not be cowed by exaggerated sensitivity to people’s religious beliefs and fail to speak vigorously and pointedly when the devout put forth arguments manifestly contrary to all the acquired knowledge of the past two or three millennia. Those who advocate a piece of folly like the theory of an “intelligent creator” should be held accountable for their folly; they have no right to be offended for being called fools until they establish that they are not in fact fools. Then tell us, Oh Wise Members of the Human Rights Council of the United Nations, will such ideas also be protected – or just the ideas recorded in sundry, ridiculous “holy books”? No wonder respect for the UN continues to plummet. As Sigmund Freud said about all religious beliefs: Maybe Freud was right, but I’m not quite ready to give up on humanity. Instead, I’d urge all readers to ridicule all gods, all religions, and all “holy books” out of existence. 
Think about it, and if you’re so inclined, consider what Bertrand Russell said: But as sad as it is for me to say, I’ve lost hope in the UN. Yet, that’s not to suggest that some UN organizations aren’t successful (e.g., UNESCO, UNICEF, WHO, WMO, and others), but we need to start over, planning to keep what’s working and to jettison what isn’t (such as the General Assembly, the Security Council, and the Human Rights Council). Best would be to start over “from scratch”, because with the rules of the existing UN, I expect that the members will never agree to needed reforms, since nations almost certainly won’t agree to reduce their representation, privileges, and power. For example, starting from scratch, let’s invite all nations to join a new organization (maybe call it the Global Congress, GC, or the Global Council, GC, or the Global Cooperative, GC, or similar), with two houses of congress, with passage of any resolution requiring a majority in both houses, and with each participating nation having representation in both houses. In one house, maybe call it “The House of Rights”, the votes of the representatives would be weighted by the number of people that each diplomat represents multiplied by a measure of the people’s freedom, e.g., as a first approximation (until a better measure becomes available) as given by summing columns A (Electoral Process) through G (Personal Autonomy and Individual Rights) from the Table produced by Freedom House, with the goal being to have each vote reflect the freely held opinions of people whom each diplomat represents in reality rather than as claimed by the nation’s rulers. 
In the other house of the Global Congress, maybe call it “The House of Responsibilities”, the votes of the same nation’s diplomats would be weighted by the financial contributions to the GC made by each representative’s nation, with the goal being to have each vote reflect the willingness of each nation to shoulder the responsibilities associated with each resolution. For example, if funding for the GC were similar to current funding of the UN, then those nations paying “the floor rate” of 0.001% of the total budget would have their votes in The House of Responsibilities multiplied by 0.001%, while (again if current contributions continued) votes of the following nations would be multiplied by the following numbers (reflecting their 2007 percentage contributions to the total UN budget): US 22% (the maximum currently permitted), Japan 16.6%, Germany 8.7%, UK 6.1%, France 6.0%, Italy 4.9%, Canada 2.8%, China 2.7%, Spain 2.5%, Mexico 1.9%, Australia 1.6%, Brazil 1.5%, etc. Similar weightings of all ballots would occur in all Committees, Councils, Working Groups, etc., established by the GC, although probably not by establishing two subgroups within each group (although the rights and responsibilities associated with every resolution would, of course, need to be thoroughly and separately evaluated), but instead, by weighting each ballot in each group both via “rights” and “responsibilities”. For example, the following table shows the results of the two separate weightings for a GC Human Rights Council vote on the resolution dealing with “Combating defamation of religions”, assuming the votes cast would be the same as were cast in the UN Human Rights Council. 
For the calculations shown in the table, numbers in the “Relative Freedom” column are obtained by summing columns A through G of the 2007 Freedom House figures already referenced (used until a better measure of freedoms becomes available), and numbers in the “Rights-Weighted” column are the products of the nation’s “Population” (in hundreds of millions) and its “Relative Freedom” (divided by 100). The “Responsibilities-Weighted” column is obtained from the 2007 funding for the UN, copied from the relevant UN report (p. 9). As a result, for the same nations voting in the same manner on the same resolution in a GC’s Human Rights Council (which passed in the UN’s Human Rights Council by a vote of 21 to 10 - although the representatives from China and Russia might display more responsibility when their votes are more significant), the “Totals” show (if I’ve made no errors copying all those numbers!) that although the resolution would have passed by 7.86 to 3.59 (about 2 to 1) with a rights-based weighting, it would have failed by 32.876 to 5.712 (or about 33 to 6) with a responsibilities-based weighting. Therefore, upon failing to be approved by both measurements, the stupid, antihuman, religious-defamation-nonsense resolution would have been rejected. As for other details about the proposed Global Congress, they would be worked out by mutual consent. I’d expect agreement that an Administrator would be elected by a majority of both houses for a single term (maybe for six years), that the Administrator (and only the Administrator) would have veto power, which could be over-ridden by a two-thirds majority in both houses, and that the Administrator would be commander-in-chief of the GC’s police forces. Also, I expect that an independent judiciary would be established with lifetime court-appointments of judges by majorities in both houses - all (including the Administrator and all diplomats) of course impeachable by a majority in both houses. 
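For readers who want to check the arithmetic of the two-house scheme, here is a minimal sketch of the tally rule described above: each nation’s ballot is weighted once by population (in hundreds of millions) times its relative-freedom score divided by 100, and once by its percentage contribution to the organization’s budget, and a resolution passes only if it carries both weightings. The nations and numbers in the example ballot are hypothetical placeholders, not the actual 2007 Freedom House or UN-contribution figures.

```python
# Sketch of the proposed Global Congress two-house weighted tally.
# A resolution must win BOTH the rights-weighted and the
# responsibilities-weighted count to pass.

def tally(votes):
    """votes: list of (in_favor, population_1e8, relative_freedom, contribution_pct)."""
    rights_for = rights_against = 0.0
    resp_for = resp_against = 0.0
    for in_favor, pop, freedom, contrib in votes:
        rights = pop * freedom / 100.0  # population (1e8) x freedom score / 100
        resp = contrib                  # share of the budget, in percent
        if in_favor:
            rights_for += rights
            resp_for += resp
        else:
            rights_against += rights
            resp_against += resp
    return (rights_for > rights_against) and (resp_for > resp_against)

# Hypothetical ballot: one populous, low-freedom, low-contributing nation
# in favor; two freer, high-contributing nations against.
ballot = [
    (True,  10.0, 20, 0.5),   # in favor:  1.0e9 people, freedom 20, pays 0.5%
    (False,  0.8, 90, 22.0),  # against:   8.0e7 people, freedom 90, pays 22%
    (False,  1.2, 85, 16.6),  # against:   1.2e8 people, freedom 85, pays 16.6%
]
print(tally(ballot))  # wins the rights count (2.0 vs 1.74) but loses the
                      # responsibilities count (0.5 vs 38.6) -> False
```

As in the table’s outcome, a resolution can carry a rights-based majority and still fail on the responsibilities weighting, which is exactly the double filter the proposal relies on.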
If a few nations started the GC (e.g., the US, Japan, EU nations, Australia, Canada, India…), then I expect that within a few years, the rest of the nations of the world would quickly follow, leading to the simultaneous abandonment of the UN. Good riddance: the UN is UNdermined, UNsuitable, UNsound, UNtenable, UNwise, UNworkable, UNrepresentative, UNdemocratic, UNconscionable, UNscrupulous, UNsuccessful, and UNworthy of further hope. Some Saudi Odds & Ends 1. For the few readers who might be regulars, not only “thank you” but also: you might remember that this blog carried the “Free Fouad” banner until a few weeks ago, when he (Fouad Al-Farhan, the Saudi blogger who was brave enough to use his own name and who was arrested on 10 December 2007) was released from a Saudi prison on 26 April 2008. According to a 27 April 2008 CNN report, “a spokesman for the Saudi Interior Ministry said al-Farhan was arrested… ‘because he violated the regulations of the kingdom’.” But a 23 May 2008 CNN report, about this week’s arrest by Saudi Secret Police of the Saudi political science professor and human rights activist Dr. Matrook al-Faleh, states that Fouad was arrested in December “after he called for the release [on his blog] of a group of detained peaceful reform [activists].” What I didn’t realize earlier was that the “Free Fouad” campaign was launched by another Saudi blogger who was also sufficiently brave to use her own name, Hadeel Alhodaif. Now, in the 19 May 2008 issue of Times Online, Michael Theodoulou reports: The Saudi blogosphere is in mourning after the sudden death of a young female web-diarist and author who battled for a freer media in the restrictive kingdom. Hadeel Alhodaif died last Friday after failing to emerge from a coma she fell unexpectedly into last month, just two days after her 25th birthday. More information about Hadeel is available in a story at the English-language Saudi daily, Arab News. 
Hadeel’s “unexpected coma” and untimely end seem extremely odd. If Saudi authorities would like to remove suspicion that their Secret Police were involved in her murder, then I’d strongly recommend that they quickly invite a team of internationally respected coroners, toxicologists, and pathologists to thoroughly investigate and fully report on the cause of her death. 2. Regular readers might also remember my “Open Letter to the King of Saudi Arabia”, stimulated by his call for “dialogue [among] representatives of all monotheistic religions to sit together with their brothers in faith and sincerity to all religions… so we can agree on something that guarantees the preservation of humanity against those who tamper with ethics, family systems, and honesty… to [combat] the disintegration of the family and the rise of atheism in the world.” As a follow-up, the following information is from a report by Abdul Ghafour in the 22 May 2008 issue of Arab News: JEDDAH, 22 May 2008 — A three-day international Islamic conference will begin at the Muslim World League (MWL) headquarters in Makkah on May 31 in preparation for the interfaith dialogue called for by Custodian of the Two Holy Mosques, King Abdullah. Dr. Abdullah Al-Turki, secretary-general of the MWL, said the conference would discuss the basis for dialogue with other faiths in the light of the Qur’an and Sunnah. “It will also review past experiences in the field to make use of them,” he added. The conference, to be attended by leading Islamic scholars from around the world, will focus on four pivotal topics, such as the basis of dialogue in Islam, the methodology and principles of dialogue, parties involved in dialogue, and areas of dialogue… The planned conference seems quite odd. If the end sought by Islamic clerics were dialogue with other “divine religions”, then doesn’t it thwart that end (destroying the spirit of ‘dialogue’) to first meet to decide on what they’ll mean by ‘dialogue’? 
Won’t the result be just another muzzled Muslim monologue? More generally, if the end sought by Islamic clerics were to help their societies rather than themselves, I’d strongly recommend that they abandon plans for their clerical get-together to define their dialogue strategy and, instead, get together with people even in their own societies who see that the clerics are ruining their societies. For example, they’d be well advised to have a dialogue with the brave Saudi anthropologist Sa’d Al-Sowayan. To illustrate what I mean, consider the following exchange as given at the excellent MEMRI website during an interview with Sa’d Al-Sowayan, which aired on Al-Arabiya TV on 25 April 2008.

Interviewer: “To spread the prevalent views.”

Sa’d Al-Sowayan: “Exactly. Secondly, anthropology...”

Interviewer: “Scientifically speaking, what’s wrong with spreading the prevalent views?”

Interviewer: “Does this include the basic principles [of religion]?”

Interviewer: “The Koran and the Sunna do not constitute basic principles?”

Sa’d Al-Sowayan: “The text is static, but the way people interpret it is not. You can interpret the text in a way that corresponds with the age in which you live.”

Interviewer: “So you have no problem with people interpreting the text differently in each age?”

Sa’d Al-Sowayan: “As long as it is compatible with the spirit of the text.”

Interviewer: “In other words, the spirit of the text remains, and in each age, there is an adaptation [of the text].”

Interviewer: “You are openly calling for secularism?”

Sa’d Al-Sowayan: “Secularism is not as dangerous as people think. They [the clerics] have instilled... They reduce you to that single word, so that they can classify you more easily.”

Interviewer: “So you are saying that the term ‘secularism’ has been distorted by a group of people.”

Sa’d Al-Sowayan: “The interpretation given to this term is incorrect. For example, the Messenger consulted with other people about worldly issues. 
In my view, this is secular behavior. In religious matters related to divine revelation, the Prophet was the ultimate authority. But with regard to worldly matters, he turned to the relevant experts.” [...] Yet, if you consider the ends that the Muslim clerics actually pursue, it really isn’t odd that they reject secularism: it’s not the people that they want to help, it’s themselves; it’s not the people they want to protect, it’s their own turf; their goal, stipulated in their “holy book” (the Koran), is to rule the world. 3. Still another odd report out of Saudi Arabia is in an article in Arab News entitled “Identify Causes of Decline, Scholar Tells Muslim”. The report by Ebtihal Mubarak starts: Dr. M. Umer Chapra, an eminent economist, social scientist and the winner of the King Faisal International Prize, has urged Muslims to identify the reasons for their decline. After making vitally important contributions to civilization for several centuries, the Muslim world went into decline and Chapra would like for the lost glory to become a reality once again… Chapra emphasized the need for material as well as spiritual progress for a balanced development of humanity. He said only Islam can present such an equation. [Italics added] It’s odd that this “eminent… scholar”, Chapra, would come to the conclusion that “only Islam can present such an equation”, when obviously the “lost glory” is relative to societies free of Islam! It’s also odd that the “eminent… scholar”, Chapra, doesn’t heed advice even of fellow Muslims, such as the “Syrian philosopher” Sadik Jalal Al-’Azm. MEMRI provides excerpts from an interview with Al-’Azm that was published in the Qatari Al-Raya daily. All of his remarks are worth reading; here, I’ll quote just some of them: In [my] book Critique of Religious Thought I described the thought in those days [between 1969 and 1970] as impoverished. 
The title of the first essay in the book is “The Scientific Culture and the Impoverishment of Religious Thought.” Now I see that this impoverishment [in the Muslim world] has deepened and grown worse. In that period… there was [at least] an attempt by Islamic thinkers to deal with the problems and questions of modern science. They tended to base their discussion and argument on reason, reality, and the course of events. Now, I find that the religious thought that has emerged on Islam is in an even deeper state of impoverishment… In that period, when I discussed the impoverishment of religious thought, I dealt with a number of Islamic thinkers and clerics, such as the Mufti of Tripoli Nadim Al-Jasser, Musa Al-Sadr, and others. At that time I saw that they wanted to deal with modern science, the scientific revolution, and applied science; however, unfortunately, they were ignorant of everything related to modern science: What is the meaning of science? What are the ways of scientific inquiry? Often their only knowledge of physics, chemistry, or anatomy since finishing elementary school came from reading the newspapers. They wanted to oppose the societal influence of scientific development and technological achievements while at the same time acting with an almost complete ignorance in these matters. In my estimation this has grown even worse today. There is greater ignorance. There are opinions, especially in fundamentalist Islam, that completely reject modern science, the West, and all that it produces. If you take their thinking to its logical conclusion, they will become [like] the Taliban on this issue. They relate to problems with complete stupidity. For example, I read some of Imam [Ayatollah Ruhollah] Khomeini’s fatwas. In one of them, he presents the matter of a Muslim going into space in a space capsule. 
He discussed how he should pray, and how he should figure out in which direction to pray in outer space… The problem is that Khomeini is not familiar with any of the achievements, the attainments, the sciences, or the technological knowledge relating to space. All that interests him is how a Muslim should bow and pray, and how he should fast when he stays there for a long period of time. After this discussion, Khomeini arrives at the conclusion to permit a Muslim to pray in any of the four directions. Obviously, this way of thinking betrays [his] complete ignorance, as the directions are a matter of convention; there are no four directions in nature... They are opposed to matters like test-tube babies, or innovations, for example, in the area of the genetic code (DNA) and genetic reproduction as well as other scientific breakthroughs and discoveries. They have no knowledge of the nature of these sciences, how the scientists arrived at them, and what were the experiments that preceded them. They are not in possession of a culture of science and they are radical in this matter. This is regarding the Shi’ites, but [there are examples] also among the Sunnis, [like] Sheikh ‘Abd Al-’Aziz ibn Baz, the senior religious scholar in the kingdom of Saudi Arabia… In Ibn Baz’s book, published in 1985, he completely rejected the idea that the earth is round. He discussed the question on the basis that the earth is flat. He completely rejected the idea that the earth orbits the sun. I own the book and you can verify what I am saying. And so, the earth does not orbit the sun, rather it is the sun that goes around the earth. He brought [us] back to ancient astronomy, to the pre-Copernican period. Of course, in this book Ibn Baz declares that all those who say that the earth is round and orbits the sun are apostates. At any rate, he is free to think what he wants. 
But the great disaster is that not one of the religious scholars or institutions in the Muslim world, from the East to the West, from Al-Azhar to Al-Zaytouna, from Al-Qaradhawi to Al-Turabi and [Sheikh Ahmad] Kaftaro, and the departments for shari’a study – no one dared to tell Ibn Baz what nonsense he clings to in the name of the Islamic religion… The fact that you tell me that this is a sensitive matter – this means that I cannot reply to the words of Ibn Baz when he says that the Earth is flat and does not go around the sun, but rises and sets, in the ancient manner. This is a disaster. The greatest disaster is that we cannot even answer them… The official religious institutions, first and foremost Al-Azhar, the faculties of shari’a, the departments of religious rulings, and so on are in a state of complete intellectual barrenness. They produce nothing but rulings like adult breastfeeding, the hadith of the fly, blessing oneself with the Prophet’s urine, and flogging journalists. The field has been abandoned to the jihadist-fundamentalist ideology, as it is the only one that raises thoughts that are worthy of being discussed and rejected. This is because of the barrenness of the major official institutions which are considered to be exemplary. They are filled with repetitiveness, ossification, regression, protecting [particular] interests, perpetuating the status quo, and submission to the ruling authority. If the state is socialist, the Mufti becomes a socialist; if the rulers are at war, the clerics are pro-war; if the governments pursue peace, the [religious authorities] follow them. This is part of the barrenness of these institutions. This [forms a] vacuum in religious thought that is filled by the [intellectual] descendants and followers of Sayyid Qutb, for example, and that type of violent fundamentalist Islam… There is no doubt that in Muslim countries the slogan “Islam is the solution” is attractive and brings people in. 
However, I believe that this enlistment is superficial and sentimental, since when people deeply examine the substance of these slogans and the platforms they include, they will begin to examine and discuss them anew. Likewise, they will raise pressing questions, for example: Is the meaning of “Islam is the solution” the reestablishment of the Caliphate? And is the reestablishment of the Caliphate a realistic program? And so on. I think that the Caliphate could return when the Bourbons or Louis XVI return to rule in France, or the czars return to rule in Russia. In Russia there is a Czarist party that wants to establish constitutional czarist rule. If it succeeds, then perhaps the Islamists will succeed in reestablishing the Caliphate. As for these movements’ understanding of implementation of the shari’a, it could be summed up in the penal code, that is, flogging, stoning, cutting off hands, feet, heads, and so on. But what would happen if [one of the Islamists], for example, or his son or relative, was sentenced to flogging, to having his hands cut off, or whatever? In this situation he would reject this penal code. Perhaps they would agree to a fine, jail, or some other punishment, but he would not agree to flogging, stoning, or the cutting off of a hand. Therein lies the problem. When the Islamists reach power, as they did in Sudan, for example, they are wary of implementing these punishments. When you carefully examine the slogan “Islam is the solution”, you discover that the people are already apprehensive and have second thoughts about implementation of this slogan… I believe that the Islamists’ conception of implementing the Muslim shari’a is [really] martial law. When military officers take over the government they declare a state of emergency and martial law. When Islamists come to power they declare the implementation of the shari’a – and in this way they are no different from each other. 
In my opinion, their most important role is to terrorize people… I am pessimistic about Arab culture in general… Culture is not the primary mover [that determines] the life of society or what policies are followed. It is not the primary mover in the historic orientation of one Arab country or another. There are those who think this, but there are crises on another level [that are only] reflected in the prevailing culture in [these] societies… It may be that there is a crisis of the rulers, or the economy, or a crisis of the elites, or some other type of crisis. But one cannot say that it is because of our culture that we suffer from all these problems… there are many impediments [to progress] to be found in [various] peoples’ cultures and traditions. At the same time – especially in the current period – there is a reluctance to investigate these impediments, define them, examine them closely, and criticize them in order to overcome them and remove them. The tendency to do so has grown weaker at present, and there is a kind of obsequiousness and deference to traditions and customs, whether they are backward or not. When we simply look at the Arab world, we see that it consumes everything but that it produces nothing apart from raw materials. What can we expect from the Arabs? Look at the Arab world from one end to the other; there is no true added value to anything. There is a structure that seems neither to encourage production nor to favor it. What do we produce? What do we export? [This is true] whether you are talking about material, economic, scientific, or intellectual production, or any other kind. Look at oil production, for example. What is the Arabs’ relation to the oil industry? They own the oil, but they have nothing to do with its extraction, refinement, marketing, or transport. Look at the huge installations for prospecting for oil, extracting it, and refining it. Look at the Arab satellite: what in it is Arab? 
I doubt the ability of the Arabs to produce a telephone without importing the parts and the technologies it requires, and perhaps even the technicians… We need to take as our starting point the fact that no society is fundamentally endowed with a natural readiness for democracy. Democracy is a cumulative historical process. It would be a mistake to adopt the opinion that [this is] impossible, and that since we are tribal and sectarian we need to do away entirely with the idea of democracy, say that it is not appropriate for us, and close the door before it. In China they say a thousand-mile journey starts with a single step. I am in favor of attempts and experiments. There are previous experiences from which we can benefit. I do not despair or throw up my hands, despite being aware of the difficulty of this issue and the complications it entails. No [society] had a structure that was fundamentally appropriate and fit for democracy. We, like other people, can learn, and accomplish 20 percent, then 30 percent, then 40, 50, and more. It is a cumulative process that depends on the steps taken to educate people in schools and educational institutions and train them gradually for the practice of democracy. If we don’t do this, we will be governed by the saying: as you are, so will you be ruled. If you are tribal, you will be ruled by tribes; if you are backward, you will be ruled by the backward; if you are clannish, you will be ruled by clans; and if you are sectarian, you will be ruled by sects, and so on. This is to fall into a cycle from which there is no escape. Or else there is [another] Arabic saying that would apply to us: the people are of the religion of their rulers. If the ruler is democratic, all of us will become democratic, and if the leader is a dictator, all of us become pro-dictatorship. As though we are condemning ourselves to a position of quiescence from which there is no escape. 
I reject this… It is difficult for the Arab mentality in its current structure to produce democracy, but I do not believe that this mentality is an eternal fixed [attribute]. I [would] accept a model that is 30 percent successful, though up to now we have not been able to accomplish this. There is sectarian democracy in Lebanon, it is a regime of quotas, and not a democracy based on citizenship. The political regime in Lebanon prevents a dictatorship through sectarian balances, but [it] has not achieved true democracy based on citizenship. Likewise, Iraq is going in the same direction… In my opinion, if the Iraqis want to maintain the unity of their country and avoid a grinding civil war, they must learn historical lessons from what they are going through today. The Shi’ite majority cannot say that the meaning of democracy is majority rule and that’s the end of it. They must say that it means majority rule with protection for the rights of minorities, and by this I mean political minorities, and not necessarily numerical, ethnic, or religious minorities. They say, We are the majority and therefore we will rule, and democracy is majority rule. But this is to stray from the truth. Democracy is rule by the majority with the protection of minority rights. Otherwise the state will face division, civil war, and ruin. This is an issue that the Arab mind needs to study: that it must accept the other, and it must accept the possibility of the minority reaching [power] if its alliances make it into the majority – [but this] without [the minority] discriminating against the majority or taking revenge on it after reaching power. In Iraq there are also many Islamic parties and movements from various schools [of jurisprudence]. Are they capable of implementing the shari’a in accordance with Sunni or Shi’ite belief? Not unless they are prepared to sink into a grinding civil war. 
What can you learn from this if you are not interested in a civil war or the disintegration of the state? You learn to be wise and build neither a Shi’ite nor Sunni state, but rather a state based on citizenship, truth, law, and social justice. This belief comes as a result of historical lessons, but there are those who learn quickly and others who never learn. In Lebanon, for example, they didn’t learn, and they experienced a grinding 16-year civil war; but considering what is happening there now, one feels they learned nothing from it, especially regarding the sectarian issue. Question: “Are you really an atheist or a ‘Damascene heretic’ as some people have described you?” Answer: (laughs) “Can you imagine a serious, learned intellectual in our Arab countries not being seduced by ideas like a critical attitude towards traditional religious beliefs, doubt and non-determinism, and the idea of using a scientific approach to understand religious phenomena? From the time of Qasim Amin to the present, there have been those who promulgate and publicize their reactions to subjects like these. Naturally the religious institutions and clerics look at this matter in terms of atheism, heresy, and so on. But at the end of the day, there remains something that is a matter of the conscience, and this is part of the freedom of conscience of every man. In the future, by the way, if ever I get around to updating my list of brave Muslims who are trying to drag Islam out of its clerically-imposed Dark Ages, challenging the backward Islamic clerics at substantial risk to themselves, I’ll add the names of Fouad al-Farhan, Matrook al-Faleh, Hadeel Alhodaif, Sa’d Al-Sowayan and Sadik Jalal Al-’Azm. So long as such heroes have the courage to speak up, I have some hope for the poor Muslim people (especially the children) hamstrung by their horribly ignorant clerics. 
But I must admit that my hopes for progress in the Muslim world are held very tenuously: Islamic clerics have brainwashed the people so thoroughly, manipulating them with Muhammad’s oxymoronic madness about life after death, that they may be immune from even considering the liberating ideas of intellectuals such as Al-Sowayan and Al-‘Azm. A similar horrible dynamic is rampant in the West: parents infected with the god meme pass their degeneracy on to their children, who are then immune from critical thinking. In the Epilogue to his brilliant 2004 book The End of Faith: Religion, Terror, and the Future of Reason, Sam Harris beautifully summarized both the problem and the needed solution: Elsewhere, I’m in the process of posting descriptions of how we may be able to make progress toward the goals described by Harris; I’ll probably review such ideas in future posts; here, I’ll just outline my recommended four-phase plan (the same plan that kids use to “handle” other kids who are “real brats”): 1) Ridicule the theists, 2) Set a better example (viz., scientific humanism), 3) Explain to the theists what they’re doing wrong (i.e., holding beliefs more strongly than relevant evidence warrants), and 4) If they fail to smarten up, then exclude them from cooperative activities (e.g., exclude Muslims from immigrating to Western countries). But instead of pursuing such a plan, consider what the West is doing. During the previous century, Westerners struggled, fought, and died to defeat both fascist and communist ideologies. For example, Hugh Fitzgerald at Jihad Watch points out that in 70 years of its propaganda campaign against the West, “the Soviet Union spent between eight and nine billion dollars.” In contrast, The total amount spent by just one Muslim country (admittedly the richest), Saudi Arabia, in furthering the cause of Islam over the past three decades, is close to 100 billion dollars. 
Think of all the mosques built and maintained, all the imams on the payroll, all the missionaries conducting Da’wa in American and British prisons, all the Western hirelings, in the capital of every Western country, whose full-time job is to explain away the Al-Saud, and the mutawwa of Saudi Arabia, and Islam itself, its texts, its tenets, its attitudes, its atmospherics. Why are Western governments permitting such propaganda campaigns by a foreign power, which is intent on having their ignorant Wahhabi clerics rule the world? The ignorance of Muslim clerics is well illustrated by the fatwa mentioned in the above quotation from Al-’Azm and issued in 1993 by the presiding cleric, ‘Abd al-’Aziz Bin Baz (or Ibn Baz), of the Saudi Permanent Committee for Scientific Research (by which they seem to mean, “research” into the craziness of the Koran): What’s the matter? You think it odd that anyone would suggest that people are atheists (and therefore, according to the Koran, they should be killed) if they accept the evidence that the Earth is more like a sphere than a flat plate? And if you think it’s odd that we permit the Islamists to promote their stupid propaganda in the West, then perhaps you’ll agree that Westerners better smarten up: wake up, smell the manure, read the Koran. Islam isn’t a “religion of peace”; Islam was never just a religion; since the time of the megalomaniac Muhammad, Islam has been a political movement, complete with its own (barbaric) law code, their shari’a – which includes “almost 70 rules about how to urinate and defecate”, as described by the mentally-challenged Saudi cleric Muhammad Al-Munajid. He adds: Again, Islam is an all-encompassing way of life, an ideology, just as were Nazism and Communism, and similar to Nazism and Communism, Islam’s goal (stated clearly in the Koran) is to rule the world. 
For example, in the same speech the Saudi cleric Al-Munajid stated: And such craziness isn’t confined to the Sunnis (e.g., the Wahhabis); thus, speaking to a group of his Shi’ite religious students, the Iranian president Mahmoud Ahmadinejad said: We must believe in the fact that Islam is not confined to geographical borders, ethnic groups, and nations… We don’t shy away from declaring that Islam is ready to rule the world. We must prepare ourselves to rule the world. The major difference between Islam and Nazism or Communism is that the end desired by Islamic supremacists is a theocracy – not with any god ruling (of course), but ruled by the most ignorant of people, namely, clerics. We in the West beat the Christian, fascist, and communist supremacists, but now we’re permitting infiltration by Saudi Wahhabis who want to rule the entire, flat-plate world, complete with 70 rules for how to urinate and defecate. Permitting such craziness is not only extremely odd; it’s extremely foolish. In fact, permitting Saudi propaganda to continue to pollute the West is not only odd and foolish; it’s treason against humanity. It must end. To end it, since most of our politicians seem to have been bought-off by the Saudis (using money gouged from us via their oil-cartel), then “we the people” will probably need to stop the Muslim madness by ourselves. Maybe someday we’ll be able to elect politicians who will enforce laws prohibiting foreign political parties from interfering with our domestic policies (barring Islam in the West, just as we barred the Communist and Nazi Parties). 
Until then, we can make progress by diminishing the Saudi money supply, by riding bicycles, joining car pools, using public transportation, communicating more electronically and less personally, picketing against oil-burning power plants and in support of coal-fired and nuclear plants, promoting the development of our own oil resources (e.g., our huge oil-shale reserves, which contain more oil than Saudi Arabia), participating in the use of renewable energy, such as biofuels and solar, geothermal, wind, wave, and tidal energy, and supporting international fusion-research programs (which, if successful, will provide unlimited electrical energy from fusing the heavy hydrogen isotopes found in abundance in the oceans). In the end, the choice is between embarking into the future with the help of the world’s most knowledgeable scientists or being enslaved in the past by the world’s most ignorant clerics. Being and Time from Nothing Maybe I’m losin’ it. During a phone call last week, my daughter said she was reading something by Heidegger. I didn’t remember him. I asked if he was the guy in the mid-1800s who said: “I stick my finger into existence…” She said she didn’t think so; that Heidegger published in the 1900s. Then worse for my ego: she informed me that she had my copy of Heidegger’s book! So this week, I spent some time on the internet trying to refresh my memory about the German philosopher Martin Heidegger (1889–1976), “counted among the main exponents of 20th century Existentialism.” And so, at least I got the existentialist-link right by remembering Kierkegaard, i.e., the fellow who said: Wikipedia states: “[Heidegger’s] best known book, Being and Time, is generally considered to be one of the key philosophical works of the 20th Century.” That’s probably the book that my daughter has, but since it wasn’t easy to retrieve (she lives in a different city), I searched on the web and found excerpts of his essay with the similar title: “On Time and Being”. 
Upon reading that essay, maybe I got a hint about why I didn’t remember him. For me, Heidegger is similar to Kant: his writing is damn near impenetrable, awash in speculative ruminations (“chewing the cud”). While reading Heidegger, I find myself frequently saying to myself: “Spit it out, fella’! What are you trying to say?!” So, feeling a little better about my memory but wondering if I might have missed something, I looked up some of his “famous quotations”. I thank those who hoed through Heidegger’s rows and rows of verbiage (similar to Kant’s) to find a few morsels that others might be able to finally sink their teeth into! For this post, my plan is to go through a few Heidegger quotes, try to respond to some of his dangling questions, and try to point out where I think he went wrong, starting with his: Making itself intelligible is suicide for philosophy. What a crazy attitude for a philosopher to take! Yet, my experience with his and Kant’s writings suggests that at least they chose to avoid suicide, opting instead for long, slow, drawn-out deaths! In his 1929 lecture What is Metaphysics? Heidegger asks: Wikiquote gives the translation: But a translation that’s more common on the web is: Why are there beings at all, instead of nothing? That is the question. By ‘Being’ apparently Heidegger means essentially ‘existence’; so, another translation could be: “Why does anything exist, rather than nothing?” Well, since elsewhere I’ve addressed that (very old) question in some detail (and by the way, certainly it’s not a “Heidegger original”), therefore, here I’ll try to be brief. And the briefest answer is that the question is wrong: in fact, there’s nothing here! 
If that answer seems silly, then I’d encourage you to go through a detailed inventory: you’ll probably quickly agree that the net electrical charge in the universe is zero (from the principle of the conservation of electrical charge), you’ll probably also agree that the net linear (and angular) momentum in the universe is zero (from the principle of conservation of momentum, a consequence of Newton’s “laws”, applied to a closed system with no net momentum initially), and if you’ll think about it for a bit, you’ll see that the total energy in the universe is also zero (from the first “law” of thermodynamics, again applied to a closed system that before the Big Bang had no energy, and with Einstein’s recognition that mass is a form of energy, i.e., E = mc^2, and with Dirac’s recognition that space, itself, is “brim full” with negative energy). Below, I’ll comment a little on the possibility that the total entropy of the universe is also zero. Now, if the idea that there’s nothing in this universe seems to make some sense but not much (because it sure feels like there’s something here), then welcome to the club! Below, I’ll sketch a resolution to that dilemma and suggest how what-we-perceive as something might have come into existence. Then, I’ll use that “resolution” to address the above quotation from Heidegger as well as some of his other quotes. To start the resolution, surely the reason why we have the impression that there’s something here in this universe (when in reality there’s nothing) is because we’re accustomed to recognizing only what we commonly call ‘positive’ energy, especially mass but also all the other forms of positive energy, including heat (or thermal energy), mechanical energy, gravitational energy, electromagnetic energy, chemical energy, nuclear energy, and so on. Simultaneously, though, we ignore (unless someone calls our attention to it!) 
all the negative energy that’s everywhere around us (and even inside us – even inside every atom and nucleus), namely, space, itself. Worse, not only are we normally oblivious to all the negative energy around us (and even in us), we have the audacity to call it empty space! When Dirac first saw that space was “brim full” of negative energy (when he modified the Schrödinger equation of quantum mechanics so that it would be invariant under the Lorentz transformation), he wrote that he didn’t understand what the result (for a free electron) meant; i.e., that he didn’t understand the meaning of ‘negative energy’. Actually, though, that shouldn’t have been much of a surprise, because if you’ll think about it for a bit, I expect you’ll conclude that we also don’t know what ‘positive energy’ is! In reality (and about reality), the statement ‘energy exists’ is probably the most foundational statement that can be made: it can’t be rephrased in more fundamental terms. Maybe in a thousand-or-more years from now people will be able to say ‘gluck glicks’ (or similar) and show how ‘gluck glicks’ implies ‘energy exists’, but for now, ‘energy exists’ must be taken as an irreducible, base statement – meaning, that’s all we know about it; so, get over it! Still, Dirac discovered something new, namely, that whatever energy is, it can be both positive and negative – whatever that means! For his discovery, he shared the 1933 Nobel Prize in Physics (with Schrödinger) – in particular, for his prediction (subsequently confirmed) that if a hole were to appear in the sea of negative energy that we call ‘space’ (or ‘the vacuum’), then the hole would appear as what has subsequently been called an ‘antiparticle’. 
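Dirac’s negative-energy states can be stated compactly. The following is the standard textbook route to them (my sketch, not drawn from Dirac’s own papers): the relativistic energy-momentum relation admits both signs of the square root, and the Dirac equation, unlike classical mechanics, gives no grounds for simply discarding the negative branch:

```latex
% Relativistic energy-momentum relation and its two branches
E^2 = p^2 c^2 + m^2 c^4
\qquad\Longrightarrow\qquad
E = \pm\sqrt{\,p^2 c^2 + m^2 c^4\,}
```

For a particle at rest (p = 0) this gives E = ±mc², i.e., states at and below −mc² as well as at and above +mc²; Dirac’s proposal was that the negative-energy states are normally all filled, which is precisely the “sea” of negative energy, and a vacated state (a hole) is what we observe as an antiparticle.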
For example, in his Nobel Prize acceptance speech, Dirac wrote: We now make the assumptions that in the world [or universe] as we know it, nearly all the states of negative energy for the electrons are occupied, with just one electron in each state, and that a uniform filling of all the negative-energy states is completely unobservable to us. [Italics added] Further, any unoccupied negative-energy state, being a departure from uniformity, is observable and is just a positron. Subsequently, Dirac’s result has been generalized for other ‘negative-energy states’ (not just for electrons), and his idea that if a hole develops in space, then we observe the hole as an antiparticle has been overwhelmingly confirmed for a huge variety of antiparticles. If it’s then appreciated that space (or the vacuum) is brim full of negative energy (and as Dirac said, all the negative-energy states are then unobservable to us, because they’re filled uniformly), then returning to the old question (repeated by Heidegger) about how something could have been created from nothing (viz., ex nihilo), we can see the answer using the simplest possible mathematics. Thus, Something, say S, could have been created from Nothing (say, Zero) via 0 = S – S. That is, Nothing (Zero) can yield any type of Something (e.g., energy) – provided that it’s exactly balanced by the negative of that Something. Such seems to be what occurred to create our universe. It’s proposed that “originally” there was “totally nothing”. [I put those words in double quotation marks, because if you’ll think about them for a while, you’ll conclude (correctly) that they can’t be defined, since we have no experience with such “things”.] It’s assumed that the “original total-nothingness” could engage in fluctuations (similar to well-known fluctuations of quantum-mechanical systems). 
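The bookkeeping behind 0 = S – S can be written out explicitly, together with the standard energy-time uncertainty relation that governs how a quantum fluctuation can “borrow” at all (my own sketch, not taken from any of the authors quoted here):

```latex
% Nothing splits into balanced opposites; the totals stay zero
0 \;=\; (+S) + (-S),
\qquad\text{e.g.}\qquad
E_{\mathrm{total}} \;=\; E_{+} + E_{-} \;=\; 0
% A fluctuation of size \Delta E may persist for a time \Delta t limited by
\Delta E\,\Delta t \;\gtrsim\; \frac{\hbar}{2}
```

A fluctuation that never upsets the balance costs nothing, so every conserved quantity can remain at zero throughout, consistent with the zero-total inventory of charge, momentum, and energy given earlier.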
The original “total-nothingness” could fluctuate as much as “it wanted”, provided that, in all such fluctuations, equal positive and negative “things” that were created were exactly balanced, so that in total, there was “always” still nothing present. It’s assumed that in one such fluctuation, however, some symmetry was broken. Possibly the fluctuation was in some “unknown stuff” that we now call ‘energy’, which not only led to a positive and negative energy pair but also, for some unknown reason, what we would now call ‘the positive-energy fluctuation’ somehow ‘congealed’ or ‘got tied in a knot’ (maybe as the first particle or maybe as the first string of positive energy), “refusing” to rejoin with its negative-energy counterpart. Once the symmetry was broken, “all hell broke loose”, leading to the Big Bang. And now, ~13.73 billion years later, here we are pleased with our existence on this ‘positive side of existence’ (as blobs of positive energy), while all about us (and even within us) is the ‘negative side of existence’, i.e., the negative energy, which we call ‘space’ or ‘the vacuum’. Incidentally, for all we know, similar symmetry-breaking fluctuations in “total nothingness” might be quite common (and not just symmetry-breaking energy fluctuations). If so, then “outside our universe”, many other ‘verses’ (meaning ‘turns’) could exist – which needn’t be made of the same “stuff” (energy) or have the same number or even the same type of dimensions, and so on. But since humans will almost certainly never know if that’s so (although, maybe in a million-or-more years from now, someone will see how to get communications outside our universe!), then speculations about ‘multiverses’ seem rather pointless. Anyway, with the above “resolution” (which, of course, may be wrong), maybe the following four statements will make some sense. 1) Alan Guth (of MIT, famous for his Inflationary Theory of the universe) stated: “It’s said that there’s no such thing as a free lunch. 
But the universe is the ultimate free lunch” – in the sense that we got a whole lot (i.e., the universe!) for nothing – or better, from nothing. 2) Edward Tryon (of the City College of New York, who in 1973 published the first estimate, from data, that the total energy of the universe is zero) wrote the following [to which I’ve added the notes in “square brackets” and the italics]: If it is true that our Universe has a zero net value for all conserved quantities [such as electrical charge, momentum, and total energy], then it [our Universe] may simply be a fluctuation of the vacuum [i.e., the original “zero” or “total nothingness”], the vacuum of some larger space [which stretches the meanings of the words ‘vacuum’ and ‘space’] in which our Universe is imbedded. In answer to the question of why it happened, I offer the modest proposal that our Universe is simply one of those things [that] happen from time to time. 3) Sung Kyu Kim (Physics Dept., Macalester College, St. Paul, Minnesota) entertainingly summarized with: In the beginning, there was nothing – but nothing is unstable. And nothing borrowed nothing from nothing, within the limits of uncertainty, and became something. The rest is just math. And then 4), there’s the famous statement by Einstein: Once you can accept the universe as matter expanding into nothing that is something, [then] wearing stripes with plaid comes easy. In fact, such ideas can be found in Ancient Chinese philosophy. On the one hand, there’s the idea of yin and yang (“the principle of polarity in Chinese cosmology, in which the opposite poles eventually blend and become one another in cosmic connectedness”), and on the other hand, there’s the Tao (described by Lao-tzu in ~600 BCE as: “The Tao that can be spoken of, is not the true Tao; the name that can be named, is not the true Name”). 
Thus, if Einstein had been asked, “What is the ‘nothing that is something’ into which the universe is expanding?”, perhaps he would have answered, “The Tao.” And I admit that one reason that I added the previous paragraph is because it really “gets to me” to have the dumb clerics of monotheism repeat the familiar line from Robert Jastrow (astronomer, author, and founder of Goddard Institute for Space Studies): It’s somewhat unfortunate that Jastrow used the phrase “faith in the power of reason”, since any scientist’s faith is not in reason but in the scientific method (which is much, much, more powerful than reason!), but it’s even more unfortunate that Jastrow used the word ‘theologians’. It wasn’t theologians (i.e., those who study “theo” = “god”) who were sitting at the top of the mountain; the clerics of monotheistic religions, in particular, are still tangled in thorny thickets, back in the jungle at the base of the mountain; instead, those sitting quietly at the top were Zen masters. And another reason for the previous paragraph is to mention that perhaps the interested reader will begin to understand why this blog and my book use the term “Zen of Zero”. But that aside, let me get back to Heidegger (who started out studying theology, then switched to philosophy, and who seems never to have studied any science – although I saw that he did go for some walks with Heisenberg). Two other (connected) quotations from him are: Being and time determine each other reciprocally, but in such a manner that neither can the former – Being – be addressed as something temporal nor can the latter – time – be addressed as a being. Those two statements contain quite a few misunderstandings about ‘time’ and ‘being’ (or better than the word ‘being’, I’ll use the word ‘energy’). Below, I’ll comment on and try to straighten out some of his misunderstandings. 
First, consider Heidegger’s statement “time is not a thing, thus nothing which is…” That’s a weird way to put it, leading to my first response: “Well, yes and no.” Time is usually considered to be a coordinate, a locator, as is position. So, what response would be appropriate to the statement: “Position is not a thing, thus nothing which is…”? In some sense the statement is correct, but in other ways, it’s not. For example, if I told you that my daughter is in Detroit, that locates her (at least fairly well) for you, but then, where are you – and relative to what: lines of latitude and longitude on the Earth? But the location of the Earth is what and relative to what? So, maybe the best response to Heidegger’s “time is not a thing, thus nothing which is…” is to say: “So what?” But going further into the above statements by Heidegger, it becomes apparent that he has some fundamental misunderstandings both about ‘time’ and ‘energy’. To try to show you what I mean, since in the above quotations he’s now talking about not just ‘existence’ but also ‘change’ (implied with his introduction of ‘time’), I’ll start by extending what I claim to be the fundamental principle of reality: not only that ‘energy exists’ but even ‘energy exists and can change’. (Maybe all of that will someday be contained in ‘gluck glicks’!) In any case, starting from that fundamental principle and realizing that we commonly describe ‘change’ by using ‘time’, now consider Heidegger’s statements, starting with It would have helped if he had mentioned what ‘time’ he was referring to, there being at least three different meanings for ‘time’, sketched below. 1. Every-day Time. Not much need be said about every-day (household-variety) time, since we use it “all the time”, but bear with me for a bit, to remind yourself what we do. 
Although the fundamental feature of reality seems to be that ‘energy changes’, many other things change as well, and we use ‘time’ as a convenient tool for quantifying and comparing such changes. To that end, we characterize any change by comparing it to some standard, such as the number of swings of the pendulum of a grandfather clock, the number of times the Earth spins on its axis, the frequency of vibration of some electromagnetic energy, etc. We can play lots of games with the resulting comparisons of changes. For example, if the digits of my age (in years) are added together, then the sum is always the same as the sum of the digits in my daughter’s age. Behind such usage is Newton’s (outmoded) idea that “absolute, true, and mathematical time, of itself and from its own nature, flows equably without relation to anything external, and by another name is called duration”, and it’s generated a lot of stimulating poetry, such as Ralph Hodgson’s: Time, you old gipsy man, Will you not stay, Put up your caravan Just for one day? 2. Time in Applied Science. Associated with attempts to change thermal energy into mechanical energy, a second meaning of ‘time’ was developed especially by many 19th century engineers and scientists (Watt, Fourier, Poisson, Carnot, Mayer, Thompson, Joule, Helmholtz, Kelvin, Clausius, and many others, including Boltzmann and Gibbs). Although no doubt the first caveman who handled a burning stick quickly learned that heat flows from hot to cold, Fourier was the first to describe the idea quantitatively, and Carnot was the first to see some of the resulting limitations of changing thermal into mechanical energy. Cutting an amazingly difficult, century-long intellectual achievement down to a few words, I’d put it this way: time is not just a convenient tool for quantifying and comparing changes, it provides an indication of the usual direction of most changes. 
Thus, in our macroscopic world (in contrast to the microscopic world currently described by quantum mechanics), things usually change in a preferred direction: heat normally flows from hot to cold, and typically because of friction, energy is usually lost (in the sense that the energy, as electromagnetic thermal-energy, goes roaring off toward the edge of the universe, at the speed of light). More formally, the normal direction of change is that all available states (both locations and energy states) become populated as uniformly as is consistent with applied constraints, or stated more concisely, the entropy of any closed system always increases. As Eddington said: “Entropy is time’s arrow.” In fact, it then makes sense to talk of different ‘times’ for different systems. In closed systems, for example (i.e., those that don’t interact with their environments), entropy increases (or better, it never decreases) and time advances until such systems reach their equilibrium state, i.e., when change no longer occurs – which then means, for them, ‘time’ stops. In open systems, in contrast (i.e., those that can exchange, e.g., energy, with their environments), their ‘time’ can either increase (increasing entropy by, e.g., adding energy) or decrease (decreasing entropy by, e.g., decreasing their energy or, e.g., by increasing their ‘order’). Thus in some cases (e.g., when a youngster is sent to clean up his room), an open system’s entropy can decrease, as if its ‘time’ goes backwards – but rest assured that in a while, its ‘time’ will again increase and the room will be just as messy as it ever was, only to reach equilibrium when it’s in a maximum state of disorder, maximum entropy – which usually takes a kid no more than a few hours! Thus, we normally find that the entropy of most systems (and their ‘time’) increases. For example, we get old, deteriorate, and have less energy. 
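The statistical tendency just described (all available states becoming populated as uniformly as the constraints allow, with entropy climbing to a maximum at equilibrium) can be illustrated with a toy closed system, the classic Ehrenfest urn model; every number here (particle count, step count) is purely illustrative:

```python
import math
import random

def ehrenfest(n_particles=1000, n_steps=20000, seed=1):
    """Toy closed system (Ehrenfest urn model): particles sit in two boxes,
    and at each step one particle, chosen at random, hops to the other box.
    Started fully 'ordered' (all particles in one box), the mixing entropy
    climbs toward its maximum, ln 2 per particle, and stays near it."""
    random.seed(seed)
    left = n_particles                      # all particles start in the left box
    entropies = []
    for _ in range(n_steps):
        # pick a particle uniformly; if it is in the left box, it hops right
        if random.randrange(n_particles) < left:
            left -= 1
        else:
            left += 1
        f = left / n_particles
        s = 0.0
        if 0.0 < f < 1.0:                   # mixing entropy per particle, k_B = 1
            s = -(f * math.log(f) + (1.0 - f) * math.log(1.0 - f))
        entropies.append(s)
    return left, entropies

left, s = ehrenfest()
# the boxes end up roughly equalized, with the entropy near its maximum (ln 2)
```

Nothing forbids the entropy from dipping back down for a moment; it is simply overwhelmingly improbable, which is the statistical content of "time's arrow."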
As another example: because of the tides on Earth caused by the Moon (and the resulting loss of thermal radiation to space), the Earth’s rate of spin is slowly decreasing (and since in a few billion years the Earth will stop spinning, it’s recommended that, before then, humans should find a better place to live). Many poets have summarized such entropy increases, for example, there’s T.S. Eliot’s: “This is the way the world ends… not with a bang but a whimper.” When there are no more changes, when everything is uniform, when the system attains equilibrium, then its ‘time’ stops. 3. Time in Modern Physics. In physics, time has always been “just” another coordinate (similar to the usual spatial coordinates). As the physicist John Wheeler reportedly said (although the source isn’t certain; it might have been Woody Allen!): Time is nature’s way of keeping everything from happening at once. 
Space is what prevents everything from happening to me. In any case, ever since Einstein and Schrödinger, such coordinates locating ‘events’ in space-time have become quite weird – maybe especially the time coordinate. Thus, as Einstein showed, observers traveling at different speeds won’t agree on time durations (nor on spatial differences), and time-durations change with changing locations near any mass. Yet, the idea continues that if a system reaches a state of equilibrium, then for it, its ‘time’ stops. For example, if you could ask an electron that’s whizzing around its nucleus, “What time is it?”, his meaningful response (or hers, as the case may be) would be: “Whaddya mean by ‘time’? I’ve gone round and round that stupid nucleus down there umpteen quadzillion times, and nothing ever changes.” In fact, that’s 'doubly weird', because although an electron seems to be accelerated as it 'goes around' a nucleus (its centripetal acceleration resulting from the force between the positively charged nucleus and the negatively charged electron), it doesn’t radiate energy. Yet, when an electron is accelerated in the macroscopic world (e.g., accelerated in an antenna), then the electron produces an electromagnetic wave that goes roaring off toward the edge of the universe. So, something is wrong about the extrapolation of our macroscopic model to the microscopic world: either the electron isn't accelerated as it 'goes around' a nucleus or in some cases (some energy states), an accelerated electron needn't radiate energy. To get an electron in an atom or molecule to radiate energy, the first step is to bounce it up to a higher energy state, e.g., via a collision with another atom or molecule or with a photon of light. 
Interestingly relative to links between energy and time, uncertainties (δ) in the lifetime (τ) and energy (E) of the ‘excited state’ are related via δE δτ ≥ h/2π, where h is Planck’s constant, similar to the familiar uncertainty in the momentum (p) and position (q) as given by Heisenberg’s Uncertainty Principle (δp δq ≥ h/2π). And when the electron does terminate its uncertain lifetime in an excited state by emitting electromagnetic energy, then for the emitted photon (heading off toward the edge of the universe at the speed of light), time stands still. That is, whereas ‘time’ (or any information) can’t travel faster than the speed of light, then in the 'rest frame' of the photon, there's no such thing as 'time', i.e., for light (viz., electromagnetic energy), there is no past or future, it’s always ‘now’. For nondissipative systems (e.g., frictionless systems such as all quantum mechanical systems), time is merely a convenient ‘marker’ whose origin and even whose direction is immaterial: Schrödinger’s equation is invariant under time reversal, meaning that its predictions are the same no matter whether 'the parameter time' runs backwards or forwards. In fact, Noether’s theorem states that, for energy to be conserved, time must have translational symmetry (i.e., there’s no meaningful origin of time), which then intimately links time to energy. Further, if energy is negative (e.g., in ‘the vacuum’), then there are suggestions that time goes in the opposite direction from the direction with which we’re familiar. For example, rather than interpret a positron as a hole in negative-energy space, it can be interpreted as an electron going backwards in time. Further still, if the interpretation is correct that time in ‘the vacuum’ goes backward, then it might finally resolve some of the many perplexing features of quantum mechanics (e.g., as Einstein complained, quantum mechanics seems to permit information to travel faster than the speed of light).
Time going backward in space would be consistent with the entropy of the universe being a constant (namely, zero), if space (or the vacuum) has not only negative entropy but also increasingly more negative entropy as the universe expands. And if the entropy of the universe is zero, then there is no such thing as ‘time’ for the universe (maybe that’s what Einstein meant when he said “time is an illusion”) – but we who are stuck in ‘the positive side of reality’ (i.e., we positive-energy beings) apparently are stuck with entropy usually increasing (e.g., our aging). So, then, what’s to be made of Heidegger’s: “Time is not a thing, thus nothing which is, and yet it remains constant in its passing away without being something temporal like the beings [energy] in time”? I don’t know! His statement doesn’t conform to the ideas of Eddington or Einstein. Thus, Heidegger’s statement “Time is not a thing” is incorrect, if change is considered to be a fundamental feature of the universe and, in particular, if time is related to entropy increase. As for his “yet it [time] remains constant in its passing away”, that’s inconsistent with both Einstein’s and Eddington's ideas, suggesting that Heidegger is stuck with Newton's idea about time. Next, adding his proposed distinction between time and ‘being’ (or ‘energy’, including the mass-energy known as humans) consider Heidegger’s: In some sense, he’s correct that “being [energy] and time determine each other reciprocally” (in the sense of Noether’s theorem, which is applicable only to nondissipative systems), but his statement that “neither can the former [energy] be addressed as something temporal” is wrong, as is his “nor can the latter – time – be addressed as [energy]”, not only in the sense that energy degradation is the most common form of entropy increase (which is the fundamental concept of time in our macroscopic world) but also in the sense that without energy (e.g., “before” the Big Bang) there would be no time. 
Therefore, by the way (trying to “clean up” what I wrote earlier), the concept of “before” the Big Bang is meaningless: without energy there was no time; therefore, there was no “before” (and similarly, without momentum, there is no meaning for location, so if there’s no momentum in the “nothing” that’s “outside” our universe, then there’s no meaning to “outside”). So anyway, maybe I’m not losing it! Maybe I don’t remember Heidegger, because for me, he said nothing memorable. Yet, I wholeheartedly agree with his: But since Heidegger was a student of Greek philosophy, I think he should have credited Epicurus (341–270 BCE), who said essentially the same: [It follows that] death is nothing to us. For all good and evil consist in sensation, but death is deprivation of sensation. And therefore a right understanding that death is nothing to us makes the mortality of life enjoyable, not because it adds to it an infinite span of time, but because it takes away the craving for immortality. For there is nothing terrible in life for the man who has truly comprehended that there is nothing terrible in not living… [Death should not] concern either the living or the dead, since for the former it is not, and the latter are no more. I’d even add: would that all Christians, Muslims, and Mormons would consider what Epicurus said. If they understood it, they’d immediately junk their religions in the trash and put a lid on their clerics! Yet, even Heidegger apparently didn’t understand it, since in his interview with Der Spiegel on 23 September 1966, published posthumously on 31 May 1976, he’s quoted as saying: Of course it’s easy to agree that “philosophy will not be able to effect an immediate transformation of the present condition of the world.” Perhaps the only thing that could “effect an immediate transformation” would be communications from extraterrestrial beings (or if another 'verse' bumped into ours!).
But for Heidegger to say that “only a god can save us” displays a horrible lack of faith – in humanity and in the scientific method. And with that thought, maybe I see why I couldn’t remember who Heidegger was. I wouldn’t be surprised if, years ago, I said to myself: “This guy doesn’t know what he’s talking about; forget about him.”
Theoretical Foundations of the Models Implemented in LUMPAC

Process of Geometry Optimization

The potential energy surface (PES) is the calculated energy of a molecule, E, expressed as a function of its geometric parameters, E = E(q1, q2, ..., qn), where n is the number of geometric parameters. A stationary point on the PES is a flat point of the surface, mathematically defined by the condition that all first derivatives vanish, ∂E/∂qi = 0. If the matrix of second derivatives (the Hessian) has exactly one negative eigenvalue, the stationary point is a transition state. In contrast, if all of its eigenvalues are positive, the stationary point corresponds to a minimum of energy, usually a local minimum. The lowest of the local minima, the global minimum, usually defines the most stable ground state geometry of the molecule. In general, the geometry optimization procedure consists in supplying an input molecular structure with geometric parameters hoped to be as close as possible to the desired stationary point. This reasonable geometry is then submitted to an algorithm which systematically alters the atomic positions until a stationary minimum is reached, defined by its optimized geometric parameters. The computational chemistry methods that are applied to perform geometry optimizations may be divided roughly into two groups: molecular mechanics methods and quantum methods. In molecular mechanics methods, roughly speaking, the molecule is treated as a set of points, each representing an atom, interconnected by springs representing the chemical bonds. Such methods are very fast because the potentials are classical and no electronic wave functions are involved. In contrast, quantum methods attempt to solve, for the entire molecular system, the famous Schrödinger eigenvalue equation HΨ = EΨ, where H is the Hamiltonian operator, Ψ is the wave function (the eigenvector), and E is the total energy of the system (the eigenvalue).
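The stationary-point classification above (vanishing gradient, with the Hessian eigenvalues distinguishing a minimum from a transition state) can be sketched numerically. The toy script below uses finite-difference derivatives on an illustrative two-parameter surface, not a real PES:

```python
import numpy as np

def grad(f, q, h=1e-5):
    """Central-difference gradient of E(q)."""
    g = np.zeros_like(q)
    for i in range(len(q)):
        e = np.zeros_like(q)
        e[i] = h
        g[i] = (f(q + e) - f(q - e)) / (2.0 * h)
    return g

def hessian(f, q, h=1e-4):
    """Central-difference matrix of second derivatives of E(q)."""
    n = len(q)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(q + ei + ej) - f(q + ei - ej)
                       - f(q - ei + ej) + f(q - ei - ej)) / (4.0 * h * h)
    return H

def classify(f, q, tol=1e-3):
    """Minimum: all Hessian eigenvalues positive;
    transition state: exactly one negative eigenvalue."""
    if np.linalg.norm(grad(f, q)) > tol:
        return "not stationary"
    eigenvalues = np.linalg.eigvalsh(hessian(f, q))
    if np.all(eigenvalues > 0):
        return "minimum"
    if np.sum(eigenvalues < 0) == 1:
        return "transition state"
    return "higher-order saddle"

bowl = lambda q: q[0] ** 2 + 2.0 * q[1] ** 2   # toy minimum at the origin
saddle = lambda q: q[0] ** 2 - q[1] ** 2       # toy transition state at the origin
```

On the bowl surface the origin is classified as a minimum, while on the saddle surface it is classified as a transition state, mirroring the eigenvalue criteria stated above.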
The quantum computational methods are usually divided into four groups: i) the Hartree-Fock methods, which solve the self-consistent field Schrödinger equation under the independent particle approximation, calculating all possible two-electron integrals; ii) the post-Hartree-Fock methods, which also take into consideration the contributions due to electron correlation; iii) the semiempirical quantum chemical methods, which are based on the Hartree-Fock method, though some integrals are replaced by parameters adjusted to reproduce experimental data during the development of the method; and, finally, iv) the methods based on density functional theory (DFT), which consider the electron density, and not the wave function as all other quantum methods do, as the fundamental entity. The method of choice for performing geometry optimizations depends mainly on the number of atoms present in the system of interest. Normally, the standard procedure for geometry optimization consists in building the chemical structure by adding atoms in arbitrary positions and subsequently connecting them according to their chemical bonds. The next step is to pre-optimize the geometry using less costly computational methods. Molecular mechanics methods are quite fast, and some of them have parameters available for almost all elements. As a result, although molecular mechanics methods do not provide accurate geometries, they are usually applied to transform the drawn geometries into reasonably good starting structures for refinement by computationally more accurate and usually more expensive calculations [1]. The ground state geometries of lanthanide complexes can be calculated by two different quantum chemical approaches: i) DFT or ab initio methods with effective core potentials (ECP) for treating the lanthanide ions, or ii) semiempirical methods.
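The pre-optimization step described above can be caricatured with a single molecular-mechanics "spring" relaxed by steepest descent; the force constant, equilibrium distance, and step size below are illustrative, not taken from any real force field:

```python
def mm_preoptimize(r_start=0.5, k=300.0, r_eq=1.54, steps=200, lr=1e-3):
    """Toy molecular-mechanics pre-optimization: a single 'bond' modeled as
    a spring, E = 0.5 * k * (r - r_eq)**2, relaxed by steepest descent.
    k and r_eq (a C-C-like 1.54 Angstrom) are illustrative values only."""
    r = r_start
    for _ in range(steps):
        dEdr = k * (r - r_eq)   # analytic gradient of the spring energy
        r -= lr * dEdr          # step downhill along the gradient
    return r

r_opt = mm_preoptimize()
# r_opt has relaxed essentially to the equilibrium bond length r_eq
```

Even from a deliberately bad starting distance, the classical potential relaxes quickly to its minimum, which is exactly the role such cheap methods play before a more expensive quantum refinement.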
In 2006 [2] and 2011 [3], two papers were published comparing geometries predicted by semiempirical methods with those predicted by ab initio and DFT methods, using crystallographic data as reference. In contrast to what would be expected, the results showed that enlarging the basis set, including electron correlation, or both, consistently increased the deviations of the predicted coordination polyhedra with respect to the crystallographic ones, reducing the quality of the results. Among all ab initio methods evaluated, RHF/STO-3G with the MWB effective core potential was the most efficient for predicting the coordination polyhedron of lanthanide complexes [2]. This result confirms that the Sparkle models, which demand a much lower computational effort, have higher accuracy in calculations and modeling when compared to ab initio/ECP approaches [3]. In LUMPAC, the Sparkle models are used in the geometry optimization step because such methods have an excellent capability of geometry prediction and also a considerably low computational cost. As a result, we will now provide a detailed description of these models. The development of a Sparkle model consists in parameterizing a semiempirical Hamiltonian, such as AM1 or PM3, for example, in which the lanthanide ion is replaced by a +3e point charge. This point charge is subjected to a repulsive potential of exponential form, exp(−αr), where the parameter α quantifies the size of the ion. This mathematical entity is called a “sparkle”. As the bond between Ln3+ and the ligand atoms has a high ionic character, the Sparkle model has consistently proven to be adequate. The first Sparkle model, named SMLC (Sparkle Model for the Calculation of Lanthanide Complexes), was developed by Andrade and coworkers in 1994 [4].
This version was parameterized for the AM1 semiempirical model with only one experimental structure in the parameterization set: tris(acetylacetonate)(1,10-phenanthroline)europium(III). When this Sparkle model version was evaluated on a representative test set containing 96 europium complexes, the SMLC model led to errors of approximately 0.68 Å for Ln–L (lanthanide–ligand atom) distances. The second parameterized version of the Sparkle model [5], SMLC II, published in 2004, included Gaussian functions in the core-core repulsion energy. The errors for Ln–L distances decreased from 0.68 to 0.28 Å when tested with the same set of europium structures. A new and much more sophisticated parameterization scheme was then carried out within AM1 for the Sparkle model in 2005, initially developed for Eu3+, Gd3+, and Tb3+ [6]. This new version of the model was called Sparkle/AM1. The main changes consisted in the application of more sophisticated statistical techniques, both in the selection of the most representative training sets and in the validation of the parameters obtained. In the Sparkle/AM1 model development for the three parameterized ions, more than 200 different crystallographic structures were used, together with a new response function for minimization in the parameterization procedure. These changes made it possible to decrease the errors for Ln–L distances from 0.28 to 0.09 Å in europium complexes. Test sets of gadolinium complexes (70 structures) and of terbium complexes indicated errors of approximately 0.07 Å. The Sparkle/AM1 model was then generalized for all types of ligands and parameterized for all 15 trivalent lanthanide ions [7-14]. Currently, Sparkle models are also parameterized for the following semiempirical models: PM3 [15-21], PM6 [22], PM7 [23] and RM1 [24]. The geometry optimization of lanthanide complexes is of great importance for studying the luminescent properties of the system.
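In practice, a Sparkle geometry optimization is run through a semiempirical package such as MOPAC. The fragment below is only a sketch of what such an input might look like; every keyword, the charge, and the coordinates are illustrative assumptions that should be checked against the MOPAC manual rather than used as-is:

```
RM1 SPARKLE CHARGE=3 EF GNORM=0.25
Hypothetical Sparkle/RM1 optimization sketch for a Eu(III) fragment
(coordinates below are placeholders, not a real complex)
Eu   0.000 1   0.000 1   0.000 1
O    2.400 1   0.000 1   0.000 1
O   -1.200 1   2.078 1   0.000 1
```

The first line carries the keywords (semiempirical model, Sparkle treatment of the lanthanide, total charge, and optimizer settings), the next two lines are free-text comments, and each geometry line gives an element followed by Cartesian coordinates with optimization flags.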
All published Sparkle models (Sparkle/AM1, Sparkle/PM3, Sparkle/PM6, Sparkle/PM7 and Sparkle/RM1) are available in MOPAC2012 [25]. The choice among the available Sparkle models should be based mainly on the capability of the underlying semiempirical method (AM1, PM3, PM6, RM1 or PM7) to correctly describe the specific ligands involved. Nevertheless, many tests performed by our group suggest Sparkle/RM1 to be the version that presents the best overall results.

Excited States Calculation

The singlet and triplet excited states of the organic part can be calculated by using methods based on time-dependent density functional theory (TD-DFT) [26] or by the semiempirical INDO/S method [27, 28]. In 2001, Gorelsky and Lever [36] compared these two methodologies for ground and excited state calculations of Ru(II) complexes. The electronic spectra obtained by the two methods showed excellent agreement with each other. However, even today, the TD-DFT method remains impractical for complexes with more than 100 atoms, due to its high demand of computational resources. Santos and coworkers evaluated the accuracy of the semiempirical INDO/S method in comparison with TD-DFT results in studies of lanthanide complexes [29]. The results showed that triplet state energies calculated by the semiempirical method presented errors similar to those obtained by the TD-DFT methodology, with the advantage of being hundreds of times faster. In this context, the geometries optimized by the Sparkle models are used to calculate the singlet and triplet excited states with configuration interaction singles (CIS) based on INDO/S, which has an accuracy of about 1000 cm-1 [27, 28]. This method is implemented in the ZINDO [30] and ORCA [31] programs. In this procedure, a point charge of +3e represents the lanthanide ion [32].

Intensity Parameters Calculation

The intensity parameters Ωλ (λ = 2, 4, and 6) are calculated by Judd-Ofelt theory [33, 34].
According to this theory, the central ion is affected by the nearest neighbor atoms through a static electric field, also referred to as the crystal or ligand field. Judd and Ofelt described, in independent works, the importance of the electric dipole mechanism for the 4f→4f transitions, arising from the mixing of the ground 4fN configuration with excited configurations of opposite parity through the odd terms of the ligand field Hamiltonian. All 4f orbitals have the same parity, (−1)ℓ with ℓ = 3 for lanthanide ions. The mixing is therefore between the 4f orbitals and higher orbitals of opposite parity, such as the 5d orbital, for which ℓ = 2. The intensity parameters describe the interaction between the lanthanide and the ligand atoms, and are calculated by Eq. (). One aspect which is very important for the application of this theory is to know the values that the rank variables λ, t, and p may assume in relation to each other. As can be seen in Eq. (), for example, when λ is equal to 2, t will be equal to 1 and 3, whereas p will take the values 0, 1, ..., t. The Ωλ parameters are calculated by: The first term refers only to the forced electric dipole (ED) contribution and is given by Eq. (). The term ΔE corresponds to the energy difference between the barycenters of the ground configuration and the first excited configuration of opposite parity. The radial integrals ⟨rk⟩ were taken from reference [35], with an extrapolation for the quantity ⟨r8⟩. The values of the radial integrals for the Eu3+ ion are ⟨r2⟩ = 0.9175 a.u., ⟨r4⟩ = 2.0200 a.u., ⟨r6⟩ = 9.0390 a.u., and ⟨r8⟩ = 110.0323 a.u. The Θ(t, λ) terms are numeric factors associated with each lanthanide ion and are estimated from radial integrals of Hartree-Fock calculations [36]; their values for the Eu3+ ion are tabulated in reference [36]. The second term of Eq. () refers only to the dynamic coupling (DC) contribution and is given by Eq. ().
This contribution is complementary to the one from the Judd-Ofelt static electric field model, and was first considered by Mason and coworkers [37]. The dynamic coupling mechanism, which is more important than the electric dipole mechanism for some transitions, is due to the high gradient of the electromagnetic field generated by the ligands when they interact with an incident external field. The DC mechanism depends on both the nature of the ligands and the coordination geometry, and explains the hypersensitivity of some 4f→4f transitions [36]. The quantity σλ is a shielding factor due to the filled 5s and 5p orbitals of the lanthanide ions, which have a radial extension larger than that of the 4f orbitals [36]; the values of σλ are 0.600, 0.139 and 0.100. C(λ) is a tensor operator of rank λ (λ = 2, 4, and 6), whose reduced matrix elements for lanthanide ions are −1.366, 1.128, and −1.270, respectively. δ is the Kronecker delta function; as a consequence, the DC term is equal to 0 when t is different from λ + 1. The parameters γpt (t = 1, 3, 5, and 7), given by Eq. (), are the so-called odd-rank ligand field parameters and contain a sum over the surrounding atoms; Ytp* are the conjugated spherical harmonics. As can be observed in Eq. (), the spherical harmonics depend on the spherical coordinates of the ligand atoms j. The charge factors present in Eq. (), according to the Simple Overlap Model (SOM) [38, 39] developed by Prof. Oscar Malta (UFPE, Brazil), are adequately calculated as a function of the charge density in the overlap region between the lanthanide ion and the ligand atoms. The SOM assumes two postulates [38]: i) the 4f energy potential is generated by charges uniformly distributed in small regions located around the mid-points of the lanthanide–ligand chemical bonds; and ii) the total charge in each region is determined by the parameter ρ, which is proportional to the magnitude of the total overlap between the lanthanide ion and the ligand atoms.
Figure 1 shows a sketch of the effective charges for a hypothetical complex (LnL3). The vector Rj represents the position of the ligand atoms, and the vector r represents the position of the electron of the central metal ion. Figure 1. Graphical representation of the Simple Overlap Model. In other words, the SOM introduces a correction to the crystal field parameters of the point charge electrostatic model (PCEM). This correction confers a degree of covalency to the point charge model through the inclusion of the parameter ρ, since the PCEM treats the metal–ligand bonds as a purely electrostatic phenomenon. The effective charges are assumed to lie near the mid-points of the lanthanide–ligand bonds, at distances scaled by the factor β. The factor β, given by Eq. (), indicates that the effective charges may not be exactly at the bond mid-points. The plus sign in Eq. () is used when the barycenter of the overlap region is displaced towards the ligand, which happens in the case of oxygen and fluorine coordinating atoms. The minus sign is used when this barycenter is displaced towards the central ion, as is the case for nitrogen and chlorine coordinating atoms. The overlap between the 4f orbitals and the valence orbitals of the ligands, ρ, is calculated by Eq. (), ρ = ρ0 (Rmin/Rj)^n, where ρ0 is a constant equal to 0.05 and the exponent n is equal to 3.5 for the lanthanides; Rmin is the smallest among all lanthanide–ligand atom distances. The parameters Γpt (t = 1, 3, 5, and 7), like the parameter ρ, also depend on the coordination geometry and on the chemical environment around the lanthanide ion, and are given by Eq. (). The limitation in the intensity parameter calculation consists in determining the charge factors and polarizabilities; as a result, it is necessary to use the experimental intensity parameters. The charge factors and polarizabilities, used in the γpt and Γpt calculations, respectively, are adjusted to reproduce the experimental intensity parameters.
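The SOM quantities just described are simple enough to sketch in a few lines. The overlap ρ = 0.05 (Rmin/Rj)^3.5 follows the constants stated in the text; the β = 1/(1 ± ρ) form and the distances below are assumptions of this sketch:

```python
def som_overlap(r, r_min, rho0=0.05, n=3.5):
    """SOM overlap: rho = rho0 * (r_min / r) ** n, with rho0 = 0.05 and
    n = 3.5 for the lanthanides (constants from the text)."""
    return rho0 * (r_min / r) ** n

def som_beta(rho, toward_ligand=True):
    """beta = 1 / (1 + rho) for O/F donors (overlap barycenter displaced
    toward the ligand) and 1 / (1 - rho) for N/Cl donors (displaced toward
    the metal); the 1/(1 +/- rho) form is an assumption of this sketch."""
    return 1.0 / (1.0 + rho) if toward_ligand else 1.0 / (1.0 - rho)

# illustrative Eu-O distances (Angstrom) for three coordinating oxygens
distances = [2.35, 2.40, 2.48]
r_min = min(distances)
rhos = [som_overlap(r, r_min) for r in distances]
betas = [som_beta(rho, toward_ligand=True) for rho in rhos]
# at r = r_min the overlap equals rho0 = 0.05, and it decays as r grows
```

The shortest bond carries the largest overlap, and each β shifts the corresponding effective charge slightly away from the exact bond mid-point, which is the covalency correction the SOM adds to the pure point-charge picture.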
During the adjustment procedure, the intensity parameters calculated from the Sparkle-optimized geometry are compared with the experimental intensity parameters. The response function of the fit is defined by Eq. ().

Emission Radiative Rate Calculation

The emission radiative rate (Arad), taking into account the magnetic dipole and forced electric dipole mechanisms, is given by Eq. (): where ΔE is the energy difference between the 5D0 and 7FJ states (in cm-1), h is the Planck constant, g is the degeneracy of the initial state, and n is the refractive index of the medium, usually assumed to be equal to 1.5. The two terms in Eq. () correspond to the magnetic dipole and forced electric dipole mechanisms, respectively. The squared reduced matrix elements ⟨5D0||U(2)||7F2⟩2, ⟨5D0||U(4)||7F4⟩2, and ⟨5D0||U(6)||7F6⟩2 are equal to 0.0032, 0.0023, and 0.0002, respectively, for Eu3+ [40]; m in Eq. () is the electron mass. The matrix elements that appear in Eq. () are determined according to the intermediate coupling scheme. The 5D0→7F1 transition is the only one with no contribution from the electric dipole mechanism; its magnetic dipole strength is quantified theoretically in esu2 cm2 [41]. The 5D0→7FJ transitions (J = 0, 3, and 5) are forbidden by both the magnetic dipole and forced electric dipole mechanisms, that is, their contributions are equal to 0. The contributions of each transition to the emission radiative rate, calculated by Eq. (), are named branching ratios.

Energy Transfer Rates Calculation

The theoretical model used to calculate the energy transfer rate between the organic ligands and the lanthanide ion was developed by Malta and coworkers [42, 43]. According to this model, the energy transfer rates are given by the sum of two terms. The first term, given by Eq. (), corresponds to the energy transfer rate obtained from the multipolar mechanism. The quantities entering it are the electric dipole contributions to the intensity parameters, taking into account only the forced electric dipole mechanism.
The ⟨ψ'J'||U(λ)||ψJ⟩ are reduced matrix elements of the tensor operators U(λ). The parameters γλ are calculated by Eq. (). In Eq. (), J is the total angular momentum quantum number of the lanthanide ion, G is the degeneracy of the initial state of the ligand, and α specifies the spectroscopic term of the 4f orbitals. The dipole strength associated with the transitions in the ligands also enters Eq. (). The quantity F, calculated by Eq. (), corresponds to the temperature-dependent factor and contains a sum of Franck–Condon factors. The factor γL (Eq. ()) is the bandwidth at half-maximum of the ligand state (in cm-1), and Δ is the energy difference between the donor and acceptor states involved in the energy transfer process. For lanthanide complexes, the energy donor states correspond to the singlet and triplet excited states, whereas the acceptor states correspond to the lanthanide ion excited states. Typical values of F are in the range of 10^12 to 10^13 erg-1. The second term of Eq. () refers to the energy transfer rate obtained from the exchange mechanism and is calculated by Eq. (). In Eq. (), sm (m = -1, 0, 1) is the spherical component of the spin operator of an electron in the ligand, μz is the z component of its dipole operator, and S is the total spin operator of the lanthanide ion. Typical values of the squared matrix element of the coupled dipole and spin operators lie in the range of 10^-34 to 10^-36 esu2 cm2 [44]. Malta proposed some corrections to the energy transfer rate equations in 2008 [44]. The first corresponds to the addition of a shielding factor to the first term of Eq. (); this contribution had initially been neglected for the dipole-dipole mechanism. The second corresponds to the replacement of the quantity in Eq. () by the overlap integral. This last correction causes a change of three orders of magnitude in the energy transfer rate calculated by the exchange mechanism.
Nevertheless, as these rates are still much higher than the radiative and non-radiative rates, the general conclusions for the theoretical quantum yield obtained from previous work remain valid [44]. The energy transfer rates depend on the distance between the donor and acceptor states involved in the energy transfer process. This distance is known as RL, and for its determination it is necessary to estimate the molecular orbital coefficients ci of each atom i that contributes to the ligand states (triplet or singlet), as well as the distance from atom i to the lanthanide ion. These quantities are obtained from the excited state calculations with the semiempirical INDO/S method, and RL is then given by Eq. (). The energy back-transfer rates are obtained by multiplying the transfer rates by the Boltzmann factor, exp(-ΔE/kBT), at room temperature, where ΔE refers to the energy difference between the donor and acceptor levels and kB is the Boltzmann constant. The most important transfer channels for systems containing the europium ion, according to Malta [45], are shown in Figure 2. Figure 2. Transfer channels involved in the energy transfer rate processes of systems containing the europium ion. The total angular momentum (J) selection rules of the lanthanide 4f states for the two mechanisms are complementary. The europium excited states that are more likely to accept energy from the ligands through the direct Coulomb interaction mechanism are 5D2, 5L6, 5G6, and 5D4. The energy transfer from ligand excited states to the 5D1 level is allowed by the exchange interaction mechanism (Eq. ()). Although the energy transfer to the 5D0 level is, in principle, forbidden by both the direct interaction and exchange mechanisms, the selection rule can be relaxed by a mixing of the total angular momenta (J's) [43, 45].

Emission Quantum Yield Calculation

The emission quantum yield, given by Eq. (), is defined as the ratio between the emitted and absorbed light intensities.
where η5D0 is the 5D0 level population, and ηS0 and Φ correspond to the S0 singlet level population and the absorption rate, respectively. The normalized level populations, η, are obtained from the appropriate rate equations given by Eq. (), in which the W's represent the transfer rates between the states involved. The 5D0 level population depends on the non-radiative decay rate Anrad, which still cannot be calculated theoretically. However, Anrad can be quantified via Eq. () from Arad and the experimental lifetime (τ). Because of this, the theoretical emission quantum yield depends on the experimental lifetime. The normalized populations of the states involved in the energy transfer process are obtained by diagonalizing the matrix that contains the rate equations shown in Figure 3. As can be seen in Figure 3, the matrix is assembled from the energy transfer and back-transfer rates, Wet and Wbt. The matrix diagonal contains the transfer channels responsible for depopulating the states in the matrix columns. The channels in red (Figure 3) are not normally included in the energy transfer diagrams (Figure 2) owing to the non-resonance condition between some ligand states and the europium excited states. The emission quantum yield is then calculated from the populations given by Eq. (). Figure 3. Matrix for obtaining the normalized energy level populations, enabling the theoretical calculation of the emission quantum yield. Bibliographic References [1] Lewars, E.G., Computational Chemistry: Introduction to the Theory and Applications of Molecular and Quantum Mechanics. 2010: Springer. [2] Freire, R.O., G.B. Rocha, and A.M. Simas, Lanthanide complex coordination polyhedron geometry prediction accuracies of ab initio effective core potential calculations. Journal of Molecular Modeling, 2006. 12(4): p. 373-389. [3] Rodrigues, D.A., N.B. da Costa, and R.O. Freire, Would the Pseudocoordination Centre Method Be Appropriate To Describe the Geometries of Lanthanide Complexes?
Journal of Chemical Information and Modeling, 2011. 51(1): p. 45-51. [4] de Andrade, A.V.M., et al., Sparkle Model for the Quantum-Chemical Am1 Calculation of Europium Complexes. Chemical Physics Letters, 1994. 227(3): p. 349-353. [5] Rocha, G.B., et al., Sparkle Model for AM1 Calculation of Lanthanide Complexes:  Improved Parameters for Europium. Inorganic Chemistry, 2004. 43(7): p. 2346-2354. [6] Freire, R.O., G.B. Rocha, and A.M. Simas, Sparkle model for the calculation of lanthanide complexes: AM1 parameters for Eu(III), Gd(III), and Tb(III). Inorganic Chemistry, 2005. 44(9): p. 3299-3310. [7] da Costa, N.B., et al., Sparkle/AM1 modeling of holmium (III) complexes. Polyhedron, 2005. 24(18): p. 3046-3051. [8] Freire, R.O., G.B. Rocha, and A.M. Simas, Modeling lanthanide complexes: Sparkle/AM1 parameters for ytterbium (III). Journal of Computational Chemistry, 2005. 26(14): p. 1524-1528. [9] Freire, R.O., et al., Modeling lanthanide coordination compounds: Sparkle/AM1 parameters for praseodymium (III). Journal of Organometallic Chemistry, 2005. 690(18): p. 4099-4102. [10] da Costa, N.B., et al., Sparkle model for the AM1 calculation of dysprosium (III) complexes. Inorganic Chemistry Communications, 2005. 8(9): p. 831-835. [11] Freire, R.O., G.B. Rocha, and A.M. Simas, Modeling rare earth complexes: Sparkle/AM1 parameters for thulium (III). Chemical Physics Letters, 2005. 411(1-3): p. 61-65. [12] Freire, R.O., et al., AM1 sparkle modeling of Er(III) and Ce(III) coordination compounds. Journal of Organometallic Chemistry, 2006. 691(11): p. 2584-2588. [13] Freire, R.O., et al., Sparkle/AM1 structure modeling of lanthanum (III) and lutetium (III) complexes. Journal of Physical Chemistry A, 2006. 110(17): p. 5897-5900. [14] Freire, R.O., et al., Sparkle/AM1 parameters for the modeling of samarium(III) and promethium(III) complexes. Journal of Chemical Theory and Computation, 2006. 2(1): p. 64-74. [15] Freire, R.O., G.B. Rocha, and A.M. 
Simas, Modeling rare earth complexes: Sparkle/PM3 parameters for thulium(III). Chemical Physics Letters, 2006. 425(1-3): p. 138-141. [16] Freire, R.O., et al., Sparkle/PM3 parameters for the modeling of neodymium(III), promethium(III), and samarium(III) complexes. Journal of Chemical Theory and Computation, 2007. 3(4): p. 1588-1596. [17] Freire, R.O., G.B. Rocha, and A.M. Simas, Sparkle/PM3 parameters for praseodymium(III) and ytterbium(III). Chemical Physics Letters, 2007. 441(4-6): p. 354-357. [18] da Costa, N.B., et al., Structure modeling of trivalent lanthanum and lutetium complexes: Sparkle/PM3. Journal of Physical Chemistry A, 2007. 111(23): p. 5015-5018. [19] Simas, A.M., R.O. Freire, and G.B. Rocha, Cerium (III) complexes modeling with Sparkle/PM3. Computational Science - Iccs 2007, Pt 2, Proceedings, 2007. 4488: p. 312-318. [20] Simas, A.M., R.O. Freire, and G.B. Rocha, Lanthanide coordination compounds modeling: Sparkle/PM3 parameters for dysprosium (III), holmium (III) and erbium (III). Journal of Organometallic Chemistry, 2008. 693(10): p. 1952-1956. [21] Freire, R.O., G.B. Rocha, and A.M. Simas, Sparkle/PM3 for the Modeling of Europium(III), Gadolinium(III), and Terbium(III) Complexes. Journal of the Brazilian Chemical Society, 2009. 20(9): p. 1638-1645. [22] Freire, R.O. and A.M. Simas, Sparkle/PM6 Parameters for all Lanthanide Trications from La(III) to Lu(III). Journal of Chemical Theory and Computation, 2010. 6(7): p. 2019-2023. [23] Dutra, J.D.L., et al., Sparkle/PM7 Lanthanide Parameters for the Modeling of Complexes and Materials. Journal of Chemical Theory and Computation, 2013. 9(8): p. 3333-3341. [24] Filho, M.A.M., et al., Sparkle/RM1 parameters for the semiempirical quantum chemical calculation of lanthanide complexes. RSC Advances, 2013. 3(37): p. 16747-16755. [25] Stewart, J.J.P., MOPAC2009, 2009, Colorado Springs: USA. p. Stewart Computational Chemistry. [26] Stratmann, R.E., G.E. Scuseria, and M.J. 
Frisch, An efficient implementation of time-dependent density-functional theory for the calculation of excitation energies of large molecules. Journal of Chemical Physics, 1998. 109(19): p. 8218-8224. [27] Ridley, J.E. and M.C. Zerner, Triplet-States Via Intermediate Neglect of Differential Overlap - Benzene, Pyridine and Diazines. Theoretica Chimica Acta, 1976. 42(3): p. 223-236. [28] Zerner, M.C., et al., Intermediate Neglect of Differential-Overlap Technique for Spectroscopy of Transition-Metal Complexes - Ferrocene. Journal of the American Chemical Society, 1980. 102(2): p. 589-599. [29] Santos, J.G., et al., Theoretical Spectroscopic Study of Europium Tris(bipyridine) Cryptates. Journal of Physical Chemistry A, 2012. 116(17): p. 4318-4322. [30] Zerner, M.C., ZINDO manual QTP, 1990, University of Florida: Gainesville. [31] Neese, F., The ORCA program system. Wiley Interdisciplinary Reviews-Computational Molecular Science, 2012. 2(1): p. 73-78. [32] de Andrade, A.V.M., et al., Theoretical model for the prediction of electronic spectra of lanthanide complexes. Journal of the Chemical Society-Faraday Transactions, 1996. 92(11): p. 1835-1839. [33] Judd, B.R., Optical Absorption Intensities of Rare-Earth Ions. Physical Review, 1962. 127(3): p. 750-761. [34] Ofelt, G.S., Intensities of Crystal Spectra of Rare-Earth Ions. Journal of Chemical Physics, 1962. 37(3): p. 511-520. [35] Freeman, A.J. and J.P. Desclaux, Dirac-Fock Studies of Some Electronic Properties of Rare-Earth Ions. Journal of Magnetism and Magnetic Materials, 1979. 12(1): p. 11-21. [36] Malta, O.L., et al., Theoretical Intensities of 4f-4f Transitions between Stark Levels of the Eu3+ Ion in Crystals. Journal of Physics and Chemistry of Solids, 1991. 52(4): p. 587-593. [37] Mason, S.F., R.D. Peacock, and B. Stewart, Dynamic coupling contributions to the intensity of hypersensitive lanthanide transitions. Chemical Physics Letters, 1974. 29(2): p. 149-153.
[38] Malta, O.L., A Simple Overlap Model in Lanthanide Crystal-Field Theory. Chemical Physics Letters, 1982. 87(1): p. 27-29. [39] Malta, O.L., Theoretical Crystal-Field Parameters for the YOCl:Eu3+ System - a Simple Overlap Model. Chemical Physics Letters, 1982. 88(3): p. 353-356. [40] Carnall, W.T., H. Crosswhite, and H.M. Crosswhite, Energy level structure and transition probabilities of the trivalent lanthanides in LaF3. 1977: Argonne National Laboratory. [41] Peacock, R., The intensities of lanthanide f ↔ f transitions, in Rare Earths. 1975, Springer Berlin Heidelberg. p. 83-122. [42] Malta, O.L., Ligand-rare-earth ion energy transfer in coordination compounds. A theoretical approach. Journal of Luminescence, 1997. 71(3): p. 229-236. [43] Silva, F.R.G.E. and O.L. Malta, Calculation of the ligand-lanthanide ion energy transfer rate in coordination compounds: Contributions of exchange interactions. Journal of Alloys and Compounds, 1997. 250(1-2): p. 427-430. [44] Malta, O.L., Mechanisms of non-radiative energy transfer involving lanthanide ions revisited. Journal of Non-Crystalline Solids, 2008. 354(42-44): p. 4770-4776. [45] de Sa, G.F., et al., Spectroscopic properties and design of highly luminescent lanthanide coordination complexes. Coordination Chemistry Reviews, 2000. 196: p. 165-195.
Schrödinger's cat I've finally sat down to write, and with luck I'll finish before the machine decides to hang on me, so forgive any slips you may catch along the way. Anyway, I wanted to talk about something many people have covered before, so I doubt I'll tell you anything you don't already know: Schrödinger's cat. First, Schrödinger himself. Erwin Schrödinger was one of the greats of quantum physics, best known for the Schrödinger equation, which describes how the wave function of a particle evolves in time. Before him, others had already intuited that at very small scales particles behave like waves, but nobody had managed to describe that behavior over time. In quantum physics Schrödinger is comparable to Einstein, and his equation plays a similarly central role. Without getting too deep into the details, Schrödinger built on Planck's work so that one can calculate how the wave function varies in time, which makes it possible to say, for example, where an electron orbiting an atom can "statistically" be found at a given moment, something previously impossible to know. He even managed to extend his equation to cases where the particle moves very fast (the relativistic regime, as they say) or not. Quite an achievement. Schrödinger also tried, as I explained the other day, to show people that quantum physics was not something separate from the classical physical world. To explore how the quantum world described by his equations connects with classical physics, he devised his famous thought experiment: Schrödinger's cat. In the experiment, as you know, a cat is shut in a box with a radioactive atom. Alongside them is a Geiger counter, which, as you know, can detect radioactivity, and a flask of cyanide. The atom has a 50% chance of decaying. If the counter detects the decay, it releases the cyanide and the cat... dies. Simple. 
And this is where it gets interesting. The decay depends on the wave function of the radioactive particle; we know that wave function, and through the Schrödinger equation we know how it evolves in time. At first it was thought that quantum phenomena and classical physics were like separate worlds. But in this experiment, if the particle decays (a quantum phenomenon), it affects the cat (not quantum), linking the two kinds of physics. That is the first and most important conclusion. The second conclusion is the one everybody knows: the quantum wave function is a superposition of two states (and therefore so is the cat), which shows how a particle at these scales can be in more than one state at once. Where classical physics would allow only one state (cat alive or dead, atom intact or decayed), quantum physics allows both states at the same time without any problem. And here one could go on to Heisenberg's uncertainty principle, which leads to the third and funniest conclusion. While the box is closed, the cat and the particle are in both states, both perfectly good quantum states; but for an observer they can only be in one, because when you open the box the cat is either alive or dead (unless it's a zombie). What does this mean? Honestly, the wave function of a particle does not tell us exactly where it is or how fast it moves, much less how it develops over time, despite the Schrödinger equation. It gives us the probabilities of finding it here or there, but we can never know exactly, because the mere act of observing influences the wave function, changing it every time we try to measure it (cool, huh?). This suggests that the observer, upon making the measurement, is what "condenses" (let's stick with that word) the wave function into a concrete measurement and concrete data (which then change again), turning probabilities into something definite. 
So, back to the cat: while it is in the box it holds both possibilities, and when we open the box we condense its quantum states, its wave function, into something particular. And since the decay probabilities are what they are, nothing says that a moment before or a moment after the cat was in the same state (alive or dead, if you'll pardon the phrase) had we not made our observation, our "condensation". In this view, the observer is what makes reality. What does that mean? Honestly, that there could be many realities depending on who measures. The observers of what happens in our lives are the ones who condense it into one specific timeline, and there could be other timelines for the wave function, and therefore other realities. And where does this lead? To one basis of the theory of the multiverse, in which universes are simply different timelines of the quantum wave function of what surrounds us. The collapse, the condensation by observation, leads to one concrete reality that may or may not be the same one that would have appeared to someone else. That opens a very complicated question about the condensation of waves by different observers, a mathematically fascinating topic, and philosophically even more so, which I leave for you to think about. An example: could I condense the wave equations differently from someone else when they are not present? Would we then have different timelines? And when we come together and observe the same process, do we collapse into the same timeline? In that case, how do two different timelines collapse into one and the same for two observers? And what if there are more than two observers?... All this leads to some (very, very) complicated mathematics that is being developed right now, and that I personally struggle to understand even with all the tequila in the world.
Monday, February 12, 2018 Black Holes and Information Black holes, with their extreme gravity and ability to profoundly warp space and time, are some of the most interesting objects in the universe. However, in at least one precisely defined way, they are also the least interesting. According to general relativity, black holes are nearly featureless. Specifically, there is a result known as the "no-hair theorem" that states that stationary black holes have exactly three features that are externally observable: their mass, their electric charge, and their angular momentum (direction and magnitude of spin). There are no other attributes that distinguish them (these additional properties would be the "hair"). It follows that if two black holes are exactly identical in mass, charge, and angular momentum, there is no way, even in principle, to tell them apart from the outside. This in and of itself is not a problem. As usual, problems arise when the principles of quantum mechanics are brought to bear in circumstances where both gravity and quantum phenomena play a large role. At the heart of the formalism of quantum mechanics is the Schrödinger equation, which governs the time-evolution of a system (at least between measurements). Fundamentally, the evolution may be computed both forwards and backwards in time. Therefore, at least the mathematical principles of quantum mechanics hold that information about a physical system cannot be "lost"; that is, we may always deduce what happened in the past from the present. This argument does not take the measurement process into account, but it is believed that measurement does not destroy information either. Black holes pose problems for this paradigm. At first glance, though, it may seem that information is lost all the time. If a book is burned, for example, everything that was written on its pages is beyond our ability to reconstruct.
However, in principle, some omniscient being could look at the state of every particle of the burnt book and surrounding system and deduce how they must have been arranged. As a result, the omniscient being could say what was written in the book. The situation is rather different for black holes. If a book falls into a black hole, outside observers cannot recover the text on its pages, but this poses no problem for our omniscient being: complete knowledge of the state of all particles in the universe includes, of course, those in the interiors of black holes as well as the exteriors. The book may be beyond our reach, but its information is still conserved in the black hole interior. The real problem became evident in 1974, when physicist Stephen Hawking argued for the existence of what is now known as Hawking radiation. This quantum mechanism allows black holes to shed mass over time, requiring a modification to the conventional wisdom that nothing ever escapes black holes. The principles of quantum mechanics dictate that the "vacuum" of space is not truly empty. Transient so-called "virtual" particles may spring in and out of existence. Pairs of such particles may emerge from the vacuum (a pair with opposite charges, etc. is required to preserve conservation laws) for a very short time; due to the uncertainty principle of quantum mechanics, the short-lived fluctuations in energy that result from the creation of these particles do not violate energy conservation. In the presence of very strong gravitational fields, such as those around a black hole, the resulting pairs of particles sometimes do not come back together and annihilate each other, as closed virtual pairs ordinarily do. Instead, the pairs "break" and become real particles, taking with them some of the black hole's gravitational energy. When this occurs on the event horizon, one particle may form just outside and the other just inside, so that the one on the outside escapes to space.
This particle emission is Hawking radiation. Theoretically, therefore, black holes have a way of shedding mass (through radiation) over time. Eventually, they completely "evaporate" into nothing! This process is extraordinarily slow: black holes resulting from the collapse of stars would take on the order of 10⁶⁷ years (vastly more than the current age of the universe!) to evaporate. Larger ones take still longer. Nevertheless, a theoretical puzzle remains: if the black hole evaporates and disappears, where did its stored information go? This is known as the black hole information paradox. The only particles actually emitted from the horizon were spontaneously produced from the vacuum, so it is not obvious how these could encode information. Alternatively, the information could all be released in some way at the moment the black hole evaporates. This runs into another problem, known as the Bekenstein bound. The Bekenstein bound, named after physicist Jacob Bekenstein, is an upper limit on the amount of information that may be stored in a finite volume using finite energy. To see why this bound arises, consider a physical system as a rudimentary "computer" that stores binary information (i.e. strings of 1's and 0's). In order to store a five-digit string such as 10011, there need to be five "switches," each of which has an "up" position for 1 and a "down" position for 0. Considering all possible binary strings, there are therefore 2⁵ = 32 different physical states (positions of switches) for our five-digit string. This is a crude analogy, but it captures the basic gist: the Bekenstein bound comes about because a physical system of a certain size and energy can only occupy so many physical states, for quantum mechanical reasons. This bound is enormous; every rearrangement of atoms in the system, for example, would count as a state. Nevertheless, it is finite.
The mathematical statement of the bound gives the maximum number of bits, or the length of the longest binary sequence, that a physical system of mass m, expressed as a number of kilograms, and radius R, a number of meters, could store. It is I ≤ 2.5769 × 10⁴³ mR. This is far, far greater than what any existing or foreseeable computer is capable of storing, and is therefore not relevant to current technology. However, it matters to black holes, because if they hold information to the moment of evaporation, the black hole will have shrunk to a minuscule size and must retain the same information that it had at its largest. This hypothesis addressing the black hole information paradox seems at odds with the Bekenstein bound. In summary, there are many possible avenues for study in resolving the black hole information paradox, nearly all of which require the sacrifice of at least one physical principle. Perhaps information is not preserved over time, due to the "collapse" of the quantum wavefunction that occurs with measurement. Perhaps there is a way for Hawking radiation to carry information. Or possibly, there is a way around the Bekenstein bound for evaporating black holes. These possibilities, as well as more exotic ones, are current areas of study. Resolving the apparent paradoxes that arise in the most extreme of environments, where quantum mechanics and relativity collide, would greatly advance our understanding of the universe.
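As a numerical aside, both quantities discussed above are easy to estimate. The sketch below uses the Bekenstein coefficient quoted in the post, together with the standard semiclassical formula for the Hawking evaporation time, t = 5120πG²M³/(ħc⁴), which is not derived in the post itself:

```python
# Back-of-envelope numbers for the two claims above:
# (1) Hawking evaporation time of a stellar-mass black hole, using the
#     standard semiclassical estimate t = 5120*pi*G^2*M^3 / (hbar*c^4)
#     (a textbook result, not derived in the post).
# (2) The Bekenstein bound I <= 2.5769e43 * m * R bits, as quoted above.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34  # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s
YEAR = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg: float) -> float:
    """Semiclassical Hawking evaporation time, in years."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

def bekenstein_bound_bits(mass_kg: float, radius_m: float) -> float:
    """Maximum information content of a system of given mass and radius."""
    return 2.5769e43 * mass_kg * radius_m

M_SUN = 1.989e30  # solar mass, kg
print(f"Solar-mass black hole evaporates in ~{evaporation_time_years(M_SUN):.1e} years")
print(f"A 1 kg, 10 cm 'book' can hold at most ~{bekenstein_bound_bits(1.0, 0.1):.1e} bits")
```

The evaporation time for a solar-mass hole comes out around 10⁶⁷ years, and even a book-sized object has a bound of roughly 10⁴² bits, illustrating just how far from saturation ordinary matter is.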
Graphene: Status and Prospects Science 19 Jun 2009: Vol. 324, Issue 5934, pp. 1530-1534 DOI: 10.1126/science.1158877 Graphene research has developed at a truly relentless pace. Several papers appear every day, and, if the bibliometrics predictions (1) are to be trusted, the amount of literature on graphene will keep rapidly increasing over the next few years. This makes it a real struggle to keep up with the developments. Newcomers are left without a broad perspective and are largely unaware of previous arguments and solved problems, whereas the community’s doyens already show signs of forgetting their earlier papers. To combat this curse of success, many reviews have appeared in the last 2 years [e.g., (2)], and books on graphene are in the making. The electronic properties of graphene were recently discussed in an extensive theory review (3), and this basic information is unlikely to require any revision soon. More specialized papers discussing such topics as the quantum Hall effect in graphene, its Raman properties, and epitaxial growth on SiC are collected in (4). Despite, or perhaps because of, the vast amount of available literature, graphene research has now reached the stage where a strategic update is needed to cover the latest progress, emerging trends, and opening opportunities. This paper is intended to serve this purpose without repeating, whenever possible, the information available in the earlier reviews. Growing Opportunities Graphene is a single atomic plane of graphite, which—and this is essential—is sufficiently isolated from its environment to be considered free-standing. Atomic planes are, of course, familiar to everyone as constituents of bulk crystals, but one-atom-thick materials such as graphene remained unknown. The basic reason for this is that nature strictly forbids the growth of low-dimensional (low-D) crystals (2).
Crystal growth implies high temperatures (T) and, therefore, thermal fluctuations that are detrimental for the stability of macroscopic 1D and 2D objects. One can grow flat molecules and nanometer-sized crystallites, but as their lateral size increases, the phonon density integrated over the 3D space available for thermal vibrations rapidly grows, diverging on a macroscopic scale. This forces 2D crystallites to morph into a variety of stable 3D structures. The impossibility of growing 2D crystals does not actually mean that they cannot be made artificially. With hindsight, this seems trivial. Indeed, one can grow a monolayer inside or on top of another crystal (as an inherent part of a 3D system) and then remove the bulk at sufficiently low T such that thermal fluctuations are unable to break atomic bonds even in macroscopic 2D crystals and mold them into 3D shapes. This consideration allows two principal routes for making 2D crystals (Fig. 1). One is to mechanically split strongly layered materials such as graphite into individual atomic planes (Fig. 1A). This is how graphene was first isolated and studied. Although delicate and time-consuming, the handcraft (often referred to as a scotch-tape technique) provides crystals of high structural and electronic quality, which can currently reach millimeter size. It is likely to remain the technique of choice for basic research and for making proof-of-concept devices in the foreseeable future. Instead of cleaving graphite manually, it is also possible to automate the process by using, for example, ultrasonic cleavage (5). This leads to stable suspensions of submicrometer graphene crystallites (Fig. 1B), which can then be used to make polycrystalline films and composite materials (5, 6). Conceptually similar is the ultrasonic cleavage of chemically “loosened” graphite, in which atomic planes are partially detached first by intercalation, making the sonication more efficient (6).
Sonication allows graphene production on an industrial scale. Fig. 1 Making graphene. (A) Large graphene crystal prepared on an oxidized Si wafer by the scotch-tape technique. [Courtesy of Graphene Industries Ltd.] (B) Left panel: Suspension of microcrystals obtained by ultrasound cleavage of graphite in chloroform. Right panel: Such suspensions can be printed on various substrates. The resulting films are robust and remain highly conductive even if folded. [Courtesy of R. Nair, University of Manchester] (C) The first graphene wafers are now available as polycrystalline one- to five-layer films grown on Ni and transferred onto a Si wafer. [Courtesy of A. Reina and J. Kong, MIT] (D) State-of-the-art SiC wafer with atomic terraces covered by a graphitic monolayer (indicated by “1”). Double and triple layers (“2” and “3”) grow at the steps (12). The alternative route is to start with graphitic layers grown epitaxially on top of other crystals (7) (Fig. 1C). This is the 3D growth during which epitaxial layers remain bound to the underlying substrate and the bond-breaking fluctuations are suppressed. After the epitaxial structure is cooled down, one can remove the substrate by chemical etching. Technically, this is similar to making, for example, SiN membranes; however, the survival of one-atom-thick crystals was deemed impossible, and no one tried this route until recently (8–10). The isolation of epitaxial monolayers and their transfer onto weakly binding substrates (2) may now seem obvious, but it was realized only last year (9, 10). With progress continuing apace, the production of graphene wafers looks like a done deal. Imagine the following technology: Let us start with a tungsten (011) wafer of many inches in diameter and epitaxially grow a thin Ni (111) film on top (11). This is to be followed by chemical vapor deposition of a carbon monolayer (the growth of graphene on Ni can be self-terminating with little lattice mismatch) (7, 11).
In this manner, wafer-scale single crystals of graphene (chemically bound to Ni) have been grown (11). A polymer or another film can then be deposited on top, and Ni is etched away as a sacrificial layer, leaving a graphene monolayer on an insulating substrate and the expensive W wafer ready for another round. The full cycle has not yet been demonstrated and will probably differ from the gedanken one outlined above (e.g., Cu can be used instead of Ni). Nonetheless, wafers of continuous few-layer graphene have already been grown on polycrystalline Ni films and transferred onto plastic and Si wafers (9, 10) (Fig. 1C). These films exhibit carrier mobility μ of up to 4000 cm² V⁻¹ s⁻¹ (10)—close to that of cleaved graphene—even before the substrate material, growth, and transfer procedures have been optimized. Where does this leave graphitic layers grown on SiC (4, 12) (Fig. 1D)? These have been considered as a champion route to graphene wafers for electronics applications, mostly because SiC automatically provides an insulating substrate. First of all, one must distinguish between two principally different types of “graphene on SiC.” One consists of single and double layers grown on the Si-terminated face, and the other is “multilayer epitaxial graphene” that rapidly grows on the carbon face (4, 12). In the former case, carbon layers are bound to the substrate sufficiently weakly to retain graphene’s linear spectrum away (>0.2 eV) from the charge neutrality point (NP) (13). However, interaction with the substrate induces strong doping (~10¹³ cm⁻²) and spectral disorder at low energies [(13); see (14) for a possible model for the complex graphene-SiC interface]. The crystal quality and coverage homogeneity for the Si-face films have recently improved (12), and μ values start approaching those for graphene transferred from Ni.
As for the carbon face, its epitaxial multilayers should probably be referred to as turbostratic graphene because they are rotationally disordered (no Bernal stacking) and separated by a distance slightly larger than that in graphite (4, 15). Turbostratic graphene exhibits the Dirac-like spectrum of free-standing graphene, little doping, and exceptionally high electronic quality (μ ≈ 250,000 cm² V⁻¹ s⁻¹ at room temperature) (15). These features can be attributed to weak electronic coupling between inner layers; their protection from the environment by a few outer layers; and the absence of microscopic corrugations (2, 8). Because an external electric field is screened within just a couple of near-surface layers, turbostratic graphene probably offers limited potential for electronics but is interesting from other perspectives, especially for fundamental studies close to NP. Whichever way one now looks at the prospects for graphene production in bulk and wafer-scale quantities, those challenges that looked so daunting just 2 years ago have suddenly shrunk, if not evaporated, thanks to the recent advances in growth, transfer, and cleavage techniques. Quantum Update The most explored aspect of graphene physics is its electronic properties. Despite being recently reviewed (2–4), this subarea is so important that it necessitates a short update. From the most general perspective, several features make graphene’s electronic properties unique and different from those of any other known condensed matter system. The first and most discussed is, of course, graphene’s electronic spectrum. Electrons propagating through the honeycomb lattice completely lose their effective mass, which results in quasi-particles that are described by a Dirac-like equation rather than the Schrödinger equation (2–4). The latter—so successful for the understanding of quantum properties of other materials—does not work for graphene’s charge carriers with zero rest mass.
Figure 2 provides a visual summary of how much our quantum playgrounds have expanded since the experimental discovery of graphene. Second, electron waves in graphene propagate within a layer that is only one atom thick, which makes them accessible and amenable to various scanning probes, as well as sensitive to the proximity of other materials such as high-κ dielectrics, superconductors, ferromagnetics, etc. This feature offers many enticing possibilities in comparison with the conventional 2D electronic systems (2DES). Third, graphene exhibits an astonishing electronic quality. Its electrons can cover submicrometer distances without scattering, even in samples placed on an atomically rough substrate, covered with adsorbates and at room temperature. Fourth, as a result of the massless carriers and little scattering, quantum effects in graphene are robust and can survive even at room temperature. Fig. 2 Quasi-particle zoo. (A) Charge carriers in condensed matter physics are normally described by the Schrödinger equation with an effective mass m* different from the free electron mass (p̂ is the momentum operator). (B) Relativistic particles in the limit of zero rest mass follow the Dirac equation, where c is the speed of light and σ is the Pauli matrix. (C) Charge carriers in graphene are called massless Dirac fermions and are described by a 2D analog of the Dirac equation, with the Fermi velocity vF ≈ 1 × 10⁶ m/s playing the role of the speed of light and a 2D pseudospin matrix σ describing two sublattices of the honeycomb lattice (3). Similar to the real spin that can change its direction between, say, left and right, the pseudospin is an index that indicates on which of the two sublattices a quasi-particle is located. The pseudospin can be indicated by color (e.g., red and green). (D) Bilayer graphene provides us with yet another type of quasi-particles that have no analogies.
They are massive Dirac fermions described by a rather bizarre Hamiltonian that combines features of both Dirac and Schrödinger equations. The pseudospin changes its color index four times as it moves among four carbon sublattices (24). The initial studies of graphene’s electronic properties were focused on the analysis of what new physics could be gained by using the Dirac equation within the standard condensed matter formalism (24). This “recycling” of quantum electrodynamics for the case of graphene has quickly led to the understanding of the half-integer quantum Hall effect and the predictions of such phenomena as Klein tunneling, zitterbewegung, the Schwinger production (16), supercritical atomic collapse (3, 17), and Casimir-like interactions between adsorbates on graphene (18). As for experiment, only the Klein tunneling has been verified in sufficient detail (19, 20). Furthermore, transport properties of real graphene devices have turned out to be much more complicated than theoretical quantum electrodynamics, and some basic questions about graphene’s electronic properties still remain to be answered. For example, there is no consensus about the scattering mechanism that currently limits μ, little understanding of transport properties near NP [especially on zero Landau level (21)], and no evidence for many predicted interaction effects. In the near term, much of this research will continue being driven by our knowledge about other low-D systems and by the “recycling” of the known issues and phenomena. Graphene-based quantum dots (22, 23), p-n junctions (19, 20), nanoribbons (23–25), quantum point contacts (22), and, especially, magnetotransport near NP have not received even a fraction of the attention they deserve. 
Also, it is easy to foresee the revisiting of lateral superlattices, magnetic focusing, electron optics, and many interference and ballistic effects studied previously in the conventional 2DES (26), which hopefully can either be more spectacular in graphene or clarify its physics. Among other usual suspects are electro- and magneto-optics, where graphene offers many unexplored opportunities. Graphene is structurally malleable, and its electronic, optical, and phonon properties can be strongly modified by strain and deformation (27). For example, strain allows one to create local gauge fields (3) and even alter graphene’s band structure. Research on bended, folded, and scrolled graphene is also gearing up. Furthermore, graphene and turbostratic graphene offer a dream playground for scanning probe microscopy, and many experiments can be constructed for observing supercritical screening, detecting local magnetic moments, mapping wave functions in quantizing fields, etc. Further down the line are interaction effects in split bilayers; observing such effects would be experimentally challenging but may bring up physics even more spectacular than that in the other 2DES (28). On the frontier of exploration is the fractional quantum Hall effect, whose possibility has already been tormenting graphene researchers who occasionally observe plateau-like features at fractional fillings, only to find them irreproducible for different devices. The above sketchy agenda may take many years to complete, and the speed of developments will crucially depend on progress in growing wafers and improvements in sample quality. Inch-size wafers with μ values in the range of 1 million can no longer be dismissed as “graphene dreams,” and when this happens, many-body phenomena and new physics that cannot even be envisaged at this stage are likely to emerge. Chemistry Matters Graphene is an ultimate incarnation of the surface: It has two faces with no bulk in between. 
Although this surface’s physics is currently at the center of attention, its chemistry has remained largely unexplored. What we have so far learned about graphene chemistry is that, similar to the surface of graphite, graphene can adsorb and desorb various atoms and molecules (for example, NO2, NH3, K, and OH). Weakly attached adsorbates often act as donors or acceptors and lead to changes mostly in the carrier concentration, so that graphene remains highly conductive (29). Other adsorbates such as H+ or OH give rise to localized (“mid-gap”) states close to NP, which results in poorly conductive derivatives such as graphene oxide (6) and “single-sided graphane” (30). Despite the new names, these are not new chemical compounds but are the same graphene randomly decorated with adsorbates. Thermal annealing or chemical treatment allows the reduction of graphene to its original state with relatively few defects left behind (30). This reversible dressing up and down is possible because of the robust atomic scaffold that remains intact during chemical reactions. Within this surface science perspective, graphene chemistry looks similar to that of graphite, and the latter can be used for guidance. There are principal differences too. First, chemically induced changes in graphene’s properties are much more pronounced because of the absence of an obscuring contribution from the bulk (29). Second, unlike graphite’s surface, graphene is not flat but typically exhibits nanometer-scale corrugations (8). The associated strain and curvature can markedly influence local reactivity. Third, reagents can attach to both graphene faces, and this alters the energetics, allowing chemical bonds that would be unstable if only one surface were exposed (31). An alternative to the surface chemistry perspective is to consider graphene as a giant flat molecule (as first suggested by Linus Pauling). Like any other molecule, graphene can partake in chemical reactions. 
The important difference between the two viewpoints is that in the latter case, adsorbates are implicitly assumed to attach to the carbon scaffold in a stoichiometric manner—that is, periodically rather than randomly. This should result in new 2D crystals with distinct electronic structures and different electrical, optical, and chemical properties. The first known example is graphane, a 2D hydrocarbon with one hydrogen atom attached to every site of the honeycomb lattice (30, 31). Many other graphene-based crystals should be possible because adsorbates are likely to self-organize into periodic structures, similar to the case of graphite, which is well known for its surface superstructures. Instead of doping with atomic hydrogen (as in graphane), F, OH, and many functional groups appear to be viable candidates in the search for novel graphene-based 2D crystals. Graphene chemistry is likely to play an increasingly important role in future developments. For example, stoichiometric derivatives offer a way to control the electronic structure, which is of interest for many applications including electronics. Chemical changes can probably be induced even locally. Imagine, then, an all-graphene circuitry in which interconnects are made from pristine graphene, whereas other areas are modified to become semiconducting and allow transistors. Disordered graphene-based derivatives should not be overlooked either. They can probably be referred to as functionalized graphene, suitable for specific applications. “Graphene paper” is a spectacular example of how important such functionalization could be (Fig. 3). If it is made starting with a suspension of nonfunctionalized flakes (5), the resulting material is porous and extremely fragile. However, the same paper made of graphene oxide is dense, stiff, and strong (6, 32). 
In the latter case, the functional groups bind individual sheets together, which results in a microscopic structure not dissimilar to that of nacre, known for its strength. Instead of aragonite bound in nacre by biopolymer glue, graphene oxide laminate, in particular its reduced version (32), makes use of atomic-scale stitching of the strongest known nanomaterial. Fig. 3 Graphene derivatives. (A) Graphene oxide laminate is tough, flexible, transparent, and insulating (6). (B) Paper made in the same way as (A) but starting from graphene suspension (5) is porous, fragile, opaque, and metallic. [Courtesy of R. Nair, University of Manchester] Despite a cornucopia of possible findings and applications, graphene chemistry has so far attracted little interest from professional chemists. One reason is that graphene is neither a standard surface nor a standard molecule. However, the main obstacle has probably been the lack of samples suitable for traditional chemistry. The recent progress in making graphene suspensions (5, 6) has opened up a way to liquid-phase chemistry, and hopefully, the professional help that graphene researchers have long been waiting for is now coming. Sleeping Beauty It is customary these days to start reports on graphene by referring to it as a “unique electronic system.” This statement belittles what graphene is actually about. 2DES and even Dirac-like quasi-particles were known before, but one-atom-thick materials were not. In this respect, graphene has founded a league of its own, but little is known about its non-electronic properties. The situation is now rapidly changing, and this brings beautiful new dimensions into graphene research. Last year, the first measurements of graphene’s mechanical and thermal properties were reported. It exhibits a breaking strength of ~40 N/m, reaching the theoretical limit (33). Record values for room-temperature thermal conductivity (~5000 W m–1 K–1) (34) and Young’s modulus (~1.0 TPa) (33) were also reported. 
Graphene can be stretched elastically by as much as 20%, more than any other crystal (27, 33). These observations were partially expected on the basis of previous studies of carbon nanotubes and graphite, which are structurally made of graphene sheets. Somewhat higher values observed in graphene can be attributed to the virtual absence of crystal defects in samples obtained by micromechanical cleavage. Even more intriguing are those findings that have no analogs. For example, unlike any other material, graphene shrinks with increasing T at all values of T because membrane phonons dominate in 2D (35). Also, graphene exhibits simultaneously high pliability (folds and pleats are commonly observed) and brittleness [it fractures like glass at high strains (36)]. The notions constitute an oxymoron, but graphene combines both properties. Equally unprecedented is the observation that the one-atom-thick film is impermeable to gases, including helium (37). When wafers become available, there should be an explosion of interest in (bio)molecular and ion transport through graphene and its membranes with designer pores. Speaking of non-electronic properties, we do not even know such basic things about graphene as how it melts. Neither the melting temperature nor the order of the phase transition is known. Ultrathin films are known to exhibit melting temperatures that rapidly decrease with decreasing thickness. The thermodynamics of 2D crystals in a 3D space could be very different from that of thin films and may more closely resemble the physics of soft membranes. For example, melting can occur through generation of defect pairs and be dependent on the lateral size, similar to the Kosterlitz-Thouless transition. Experimental progress in studying graphene’s thermodynamic properties has been hindered by the small sizes of available crystals, but the situation may change soon. 
On the other hand, theoretical progress is likely to remain slow because small sizes have also proven to be a problem in molecular dynamics and other numerical approaches, which struggle to grasp the underlying physics when studying crystals of only a few nanometers in size. Grandeur and Plainness Potential applications of graphene were discussed in (2) and, during the past 2 years, substantial progress has been made along many lines. The major difference between now and then is the advent of mass production technologies for graphene. This has changed the whole landscape by making the subject of applications less speculative and allowing the development of new concepts unimaginable earlier. Most of the current buzz surrounds graphene’s long-term prospects in computer electronics. Immediate, but often mundane, applications are least discussed and remain unnoticed even within parts of the graphene community. An extreme example of popular speculations is an idea about graphene becoming the base electronic material “beyond the Si age.” Although this possibility cannot be ruled out, it is so far beyond the horizon that it cannot be assessed accurately. At the very least, graphene-based integrated circuits require the conducting channel to be completely closed in the off state. Several schemes have been proposed to deal with graphene’s gapless spectrum and, recently, nanoribbon transistors with large on-off current ratios at room temperature were demonstrated (22, 25) (Fig. 4A). Nonetheless, the prospect of “graphenium inside” remains as distant as ever. This is not because of graphene shortfalls, but rather because experimental tools to define structures with atomic precision are lacking. More efforts in this direction are needed, but the progress is expected to be painstakingly slow and to depend on technological developments outside the research area. Fig. 4 From dreams to reality. 
(A) Graphene nanoribbons of sub-10-nm scale exhibit the transistor action with large on-off ratios (22, 25). Scanning electron micrograph shows such a ribbon made by electron-beam lithography (22). Control of such a ribbon’s width and its edge structure with atomic precision remains a daunting challenge on the way toward graphene-based electronics. (B) All the fundamentals are in place to make graphene-based HEMTs. This false-color micrograph shows the source and drain contacts in yellow, two top gates in light gray, and graphene underneath in green (38). [Courtesy of Y. Lin, IBM] (C) Graphene-based NEMS. Shown is a drum resonator made from a 10-nm-thick film of reduced graphene oxide, which covers a recess in a Si wafer (32). (D) Ready to use: Graphene membranes provide an ideal support for TEM. The central part is a monolayer of amorphous carbon. Graphene itself shows on this image only as a gray background (see the top part). Carbon atoms in the amorphous layer appear dark and make a random array of pentagons, hexagons, and heptagons, as indicated by color lines. Individual oxygen atoms clearly visible on graphene were also reported (36). [Courtesy of J. C. Meyer, A. Chuvilin, and U. Kaiser, University of Ulm] An example to the contrary is the use of graphene in transmission electron microscopy (TEM). It is a tiny niche application, but it is real. Single-crystal membranes, one atom thick and with low atomic mass, provide the best imaginable support for atomic-resolution TEM. With micrometer-sized crystallites now available in solution (5) for their cheap and easy deposition on standard grids and with films transferable from metals (9, 10) onto such grids, graphene membranes are destined to become a routine TEM accessory (Fig. 4D). The space between graphene dreams and immediate reality is packed with applications. One such application is neither grand nor mundane: individual ultrahigh-frequency analog transistors (Fig. 4B). 
This area is currently dominated by GaAs-based devices known as high-electron-mobility transistors (HEMTs), which are widely used in communication technologies. Graphene offers a possibility to extend HEMTs’ operational range into terahertz frequencies. The fundamentals allowing this are well known: Graphene exhibits room-temperature ballistic transport such that the charge transit between source and drain contacts takes only 0.1 ps for a typical channel length of 100 nm. Gate electrodes can be placed as close as several nanometers above graphene, which allows shorter channels and even quicker transit. Although graphene’s gapless spectrum leads to low on-off ratios of 10 to 100, they are considered sufficient for the analog electronics. The progress toward graphene HEMTs is hindered by experimental difficulties in accessing the microwave range. The first frequency tests of graphene transistors were reported only recently (38). Long channels and low mobility in these experiments limited the cutoff frequencies to less than 30 GHz (38), well below the operational range of GaAs-based HEMTs. However, the observed scaling of the operational frequency as a function of the channel length and μ indicates that the terahertz range is accessible (38). With graphene wafers in sight, these efforts are going to intensify, and HEMTs and other ultrahigh-frequency devices such as switches and rectifiers have a realistic chance to reach the market. Sitting on a Graphene Mine There has been an explosion of ideas that suggest graphene for virtually every feasible use. This is often led by analogies with carbon nanotubes that continue to serve as a guide in searching for new applications. For example, graphene powder is considered to be excellent filler for composite materials (6). 
Reports have also been made on graphene-based supercapacitors, batteries, interconnects, and field emitters, but it is too early to say whether graphene is able to compete with the other materials, including nanotubes. Less expectedly, graphene has emerged as a viable candidate for use in optoelectronics (10, 39). Suspensions offer an inexpensive way to make graphene-based coatings by spinning or printing (Fig. 1B). An alternative is the transfer of films grown on Ni (9, 10). These coatings are often suggested as a competitor for indium tin oxide (ITO), the industry standard in such products as solar cells, liquid crystal displays, etc. However, graphene films exhibit resistivity of several hundred ohms for the standard transparency of ~80% (9, 10, 39). Such resistivity is two orders of magnitude higher than for ITO and is unacceptable in many applications (e.g., solar cells). It remains to be seen whether the conductivity can be improved to the required extent. Having said that, graphene coatings also offer certain advantages over ITO. They are chemically stable, robust, and flexible and can even be folded, which gives them a good chance of beating the competition in touch screens and bendable applications. There is also fast-growing interest in graphene as a base material for nanoelectromechanical systems (NEMS) (32, 40), given that lightness and stiffness are the essential characteristics sought in NEMS for sensing applications. Graphene-based resonators offer low inertial masses, ultrahigh frequencies, and, in comparison with nanotubes, low-resistance contacts that are essential for matching the impedance of external circuits. Graphene membranes have so far shown quality factors of ~100 at 100-MHz frequencies (40). Even more encouraging are data for drum resonators made from reduced graphene oxide films (32). These nanometer-thick polycrystalline NEMS (Fig. 
4C) exhibit high Young’s moduli (comparable to those of graphene) and quality factors of ~4000 at room temperature. The films can be produced as wafers and then processed by standard microfabrication techniques. Further developments (increasing the frequency and improving quality factors) should allow graphene NEMS to assail such tantalizing challenges as inertial sensing of individual atoms and the detection of zero-point oscillations. Among other applications that require mentioning are labs-on-chips (electronic noses) and various resistive memories. The high sensitivity of graphene to its chemical environment is well acknowledged, now that sensors capable of detecting individual gas molecules have been demonstrated (29). Imagine an array of graphene devices, each functionalized differently to be able to react to different chemicals or biomolecules. Such functionalization has been intensively researched for the case of carbon nanotubes, and graphene adds the possibility of mass-produced arrays of identical devices. Furthermore, there are several enticing reports on nonvolatile memories in which graphene-based wires undergo reversible resistance switching by, for example, applying a sequence of current pulses (41, 42). The underlying mechanism remains largely unknown, but such nanometer-scale switches present an attractive alternative to phase-change memories and deserve further attention. Reports on graphene-ferroelectric memories (43) are also encouraging, given the basic simplicity of their operation. More Room in the Flatland Graphene has rapidly changed its status from being an unexpected and sometimes unwelcome newcomer to a rising star and to a reigning champion. The professional skepticism that initially dominated the attitude of many researchers with respect to graphene applications is gradually evaporating under the pressure of recent developments. 
Still, it is the wealth of new physics—observed, expected, and hoped for—that is driving the area for the moment. Research on graphene’s electronic properties is now matured but is unlikely to start fading any time soon, especially because of the virtually unexplored opportunity to control quantum transport by strain engineering and various structural modifications. Even after that, graphene will continue to stand out in the arsenal of condensed matter physics. Research on graphene’s non-electronic properties is just gearing up, and this should bring up new phenomena that may well sustain, if not expand, the graphene boom. References and Notes 1. Supported by the Engineering and Physical Science Research Council (UK), U.S. Office of Naval Research, and U.S. Air Force Office of Scientific Research. I thank I. Grigorieva, A. Castro Neto, A. MacDonald, P. Kim, and K. Novoselov for many helpful comments, and the staff of the Max Planck Institute for Solid-State Research for their hospitality during my sabbatical when this review was partially written.
Introduction (revision as of 13:44, 19 August 2014)

The Victor2.0 library (Virtual Construction Toolkit for Proteins) is an open-source project dedicated to providing a C++ implementation of tools for analyzing and manipulating protein structures. Victor is composed of four main modules:
• Biopool (BIOPolymer Object Oriented Library) - The core library that generates the protein object and provides useful methods to manipulate the structure.
• Energy - A library to calculate statistical potentials from protein structures.
• Lobo (LOop Build-up and Optimization) - Ab-initio prediction of missing loop conformations in protein models.
• Align - ALIGNment generation and analysis.

Biopolymer Object Oriented Library (Biopool)

Biopool makes it easy to modify the protein structure: many functions are provided to modify, perturb, and transform the relative positions of residues efficiently by means of rotation and translation vectors. For more detail on how to use Biopool, see Biopool.

Energy functions implementation

Energy functions are used in a variety of roles in protein modelling. An energy function precise enough to always discriminate the native protein structure from all possible decoys would not only simplify the protein structure prediction problem considerably; it would also increase our understanding of the protein folding process itself. If feasible, one would like to use quantum mechanical models, being the most detailed representation, to calculate the energy of a protein. This can theoretically be done by solving the Schrödinger equation. This equation can be solved exactly for the hydrogen atom, but is no longer trivial for three or more particles. 
In recent years it has become possible to approximately solve the Schrödinger equation for systems of up to a hundred atoms with the Hartree-Fock or self-consistent field approximations. Their main idea is that the many-body interactions are reduced to several two-body interactions. Energy functions are important to all aspects of protein structure prediction, as they give a measure of confidence for optimization. An ideal energy function would also explain the process of protein folding. The most detailed way to calculate energies is quantum mechanical methods. These are, to date, still overly time consuming and impractical. Two alternative classes of functions have been developed: force fields and knowledge-based potentials. Force fields (e.g. AMBER) are empirical models approximating the energy of a protein with bonded and non-bonded interactions, attempting to describe all contributions to the total energy. They tend to be very detailed and are prone to yield many erroneous local minima. An alternative is knowledge-based potentials (e.g. [78]), where the “energy” is derived from the probability of a structure being similar to interaction patterns found in the database of known structures. This approach is very popular for fold recognition, as it produces a smoother “global” energy surface, allowing the detection of a general trend. Abstraction levels for knowledge-based potentials vary greatly, and several functional forms have been proposed. The energy functions presented in the package support optimization procedures. Their main feature is applicability in the context of the protein classes implemented in the package. It should be possible to invoke the energy calculation with any structure from all programs. At the same time, the parameters of the energy models had to be stored externally to allow their rapid modification. With these considerations in mind, the package Energy was designed to collect the classes and programs dealing with energy calculation. 
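The knowledge-based “energy” described above is commonly obtained by inverting Boltzmann statistics; a typical functional form (a common convention, not necessarily the exact one used in [78]) converts the observed frequency of a structural feature s, relative to a reference state, into a pseudo-energy:

```latex
E(s) = -k_{B}T \,\ln\!\left(\frac{p_{\mathrm{obs}}(s)}{p_{\mathrm{ref}}(s)}\right)
```

Features that occur more often in native structures than expected from the reference state (p_obs > p_ref) thus receive a favorable (negative) pseudo-energy, which is what produces the smoother “global” energy surface mentioned above.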
The main design decision was to use the “strategy” design pattern from Gamma et al. The abstract class Potential was defined to provide a common interface for energy calculation. It contains the necessary methods to load the energy parameters during initialization of an object. Computing the energy value for objects of the Atom and Spacer classes as well as a combination of both is allowed. For more detail on how to use Energy, see Energy.

LOop Build-up and Optimization (Lobo)

Current database methods using solely experimentally determined loop fragments do not cover all possible loop conformations, especially for longer fragments. On the other hand, it is not feasible to use a combinatorial search of all possible torsion angle combinations. For an algorithm to be efficient, a compromise has to be found. One improvement in ab initio loop modelling is the use of look-up tables (LUTs) to avoid the repetitive calculation of loop fragments. LUTs can be generated once and stored, only requiring loading during loop modelling. Using a set of LUTs reduces the computational time significantly. The next problem is how to best explore the conformational space. Especially for longer loops, it is useful to generate a set of different candidate loops to exclude improbable ones by ranking. The method should therefore be able to select different loops by global exploration of the conformational space independently of starting conditions. Methods building the loop stepwise from one anchor residue to the other bias the solutions depending on choices made in the conformation of the first few residues. Rather, a global approach to the optimization is required. This criterion is fulfilled by the divide & conquer algorithm, which is recursively described by the following steps:
1. if start = end, compute the result;
2. else apply the algorithm to: (a) start to end/2, (b) end/2 to end;
3. combine the partial solutions into the full result. 
Applied to loop modelling, the basic idea of a divide & conquer approach is to divide the loop into two segments of half the original length, choosing a good central position. The segments can be recursively divided and transformed, until the problem is small enough to be solved analytically (conquered). The positions of main-chain atoms for segments of a single amino acid can be calculated analytically, using the vector representation. Longer loop segments can be stored in LUTs and their coordinates extracted by geometrically transforming the coordinates for single amino acids back into the context of the initial problem. To this end we need to define an unambiguous way to represent the conformation of any given residue along the chain and a set of operations to concatenate and decompose loop segments. For more detail on how to use Lobo, see Lobo.

ALIGNment generation and analysis (Align)

A C++ library for the generation of diverse alignment techniques for protein sequences and their analysis. The package comes in the form of C++ source code with several options that can be compiled and used. The necessary data files (e.g. substitution matrices) are provided. The most important feature of the package is the modular object oriented design, which should allow a moderately experienced C++ programmer to rapidly implement and test new features for sequence alignment. Inside this package, you can use different weighting schemes, scoring functions, ways to penalize gaps, and types of structural information. The Align library was designed to be modular and easy to expand. There are four basic components which are needed to use the alignment methods. The four main components are:
• Blosum - The substitution matrix
• AlignmentData - Stores information on sequence ("SequenceData") and, where needed, secondary structure ("SecSequenceData")
• ScoringScheme - Stores information on how a single position shall be scored in the alignment, e.g. 
sequence-to-sequence ("ScoringS2S"), profile-to-sequence ("ScoringP2S") or profile-to-profile ("ScoringP2P") scoring, etc. Requires both an "AlignmentData" and a "Blosum" object.
• Align - The alignment algorithm. This can be either local (Smith-Waterman, "SWAlign"), global (Needleman-Wunsch, "NWAlign") or glocal/overlap (Free-Shift, "FSAlign"). Requires both an "AlignmentData" and a "ScoringScheme" object.
If P2S or P2P scoring is used, the class "Profile" stores the necessary information to generate the profile from a multiple sequence alignment. Two advanced options, which may be useful in certain circumstances, are supported by the software:
1) ReverseScoring - This allows the estimation of the statistical significance of the raw alignment score by testing it against an ensemble of alignments based on the reversed sequence, in the form of a Z-score.
2) Suboptimal alignments - Rather than generating a single solution, the user may decide on a number of different, alternative, suboptimal alignments to be generated.
The simplest possible C++ code fragment to generate a global alignment is:

Blosum sub(matrixFile);
SequenceData ad(2, seq1, seq2);
ScoringS2S sc(&sub, &ad);
Friday, January 6, 2017 The Trouble with Quantum Mechanics Steven Weinberg writes The Trouble with Quantum Mechanics in the NY Review of Books: Probability enters Newtonian physics only when our knowledge is imperfect, ... The introduction of probability into the principles of physics was disturbing to past physicists, but the trouble with quantum mechanics is not that it involves probabilities. We can live with that. The trouble is that in quantum mechanics the way that wave functions change with time is governed by an equation, the Schrödinger equation, that does not involve probabilities. It is just as deterministic as Newton’s equations of motion and gravitation. That is, given the wave function at any moment, the Schrödinger equation will tell you precisely what the wave function will be at any future time. There is not even the possibility of chaos, the extreme sensitivity to initial conditions that is possible in Newtonian mechanics. So if we regard the whole process of measurement as being governed by the equations of quantum mechanics, and these equations are perfectly deterministic, how do probabilities get into quantum mechanics? ... This is a very strange complaint. Obviously he understands perfectly well how probability, measurement, and chaos get into quantum mechanics, because there is wide agreement on how to do the calculations that predict experiments. So his problem is purely philosophical. If this is a problem, then I think that most theories have problems if you take them too literally and ask too many philosophical questions. LuMo explains: At any rate, I consider Weinberg to be a 100% anti-quantum zealot ... at this point. It's sad. Weinberg's hangup about probabilities is especially strange. He says that probabilities enter classical mechanics "when our knowledge is imperfect", and enters quantum mechanics because "not everything can be simultaneously measured." Okay, I can accept that, but why is it a problem? 
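Weinberg's point can be written down directly: between measurements the wave function evolves deterministically (and linearly) under the Schrödinger equation, and probability appears only when the Born rule is applied at measurement. In standard textbook form:

```latex
% Deterministic, unitary evolution generated by the Hamiltonian H
\psi(t) = e^{-i\hat{H}t/\hbar}\,\psi(0)
% Probability enters only via the Born rule for a measurement outcome a
P(a) = \left|\langle a \,|\, \psi(t)\rangle\right|^{2}
```

The first equation has a unique solution for any initial state, which is exactly the determinism Weinberg describes; the second is where the probabilities he puzzles over come in.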
Our knowledge is always imperfect in the classical case because of observation errors, and always imperfect in the quantum case for the additional reason that it is impossible to predict the measurement of variables that cannot be simultaneously measured. So yes, probabilities are appropriate in either case. I can only infer that Weinberg has some conceptual misunderstanding of probability, but I don't see what it is. He favorably describes the many-worlds interpretation, but does not endorse it. Physics professor Frank Tipler does endorse the many worlds: Most physicists, at least most physicists who apply quantum mechanics to cosmology, accept Everett’s argument. So obvious is Everett’s proof for the existence of these parallel universes, that Steve Hawking once told me that he considered the existence of these parallel universes “trivially true.” Everett’s insight is the greatest expansion of reality since Copernicus showed us that our star was just one of many. Yet few people have even heard of the parallel universes, or thought about the philosophical and ethical implications of their existence. Quantum mechanics is a theory of physics on an atomic scale, so only crackpots apply quantum mechanics to cosmology. Maybe most of them believe in many-worlds, I don't know, but I really don't think that most physicists do. 1. If QM were purely on an atomic scale, then how would we know, since we are not on an atomic scale? QM has to say something about the macro-universe too. Carver Mead wrote about this in his book Collective Electrodynamics. 2. Why does Lubos Motl agree with quantum computing? He told me "probabilities interfere all the time." He completely contradicts himself. None of these physicists can reason clearly. It's just mud and confusion.
Quantum mechanics
Quantum mechanics is the theory of what happens at very small dimensions, on the order of 10⁻¹⁰ meters or less! It is therefore the theory which must be used in order to understand atoms and elementary particles. According to quantum mechanics, what is “out there” is a vast amount of space – not an empty backdrop, but actually something. This space is filled with particles so small that the distance between them is huge compared to their own sizes. Not only that, but they are waves, or something else which acts sometimes like waves and sometimes like particles. The modern interpretation of this is in terms of fields, things which have a value (and perhaps a direction) at every point in space. “Every particle and every wave in the Universe is simply an excitation of a quantum field that is defined over all space and time.”¹ Nobody can actually measure simultaneously where a particle is and how fast it is moving (or how much energy it possesses and when). This effect is referred to as indeterminacy, or the Uncertainty Principle, one of the more uncomfortable and, simultaneously, fruitful results of the theory. As a result of this indeterminacy, energy need not be conserved, regardless of thermodynamics, for very short periods of time, giving rise to all sorts of unexpected phenomena, such as radiation from black holes. But that is another subject. The time-dependent non-relativistic Schrödinger equation (image from Wikipedia) QM is explained by a mathematical formalism based on an equation, generally referred to as the Schrödinger equation, although it exists in several forms (differential, matrix, bra-ket, tensor). The solution to this equation is called the wave function, represented by the Greek letter ψ. The wave function serves to predict the probability that the system under study be in a given state. It gives only a probability for the state. 
(In fact, the probability is not the wave function itself, but its complex square.) This knowledge only of probabilities really irks some people, and nobody really understands what it means (as Richard Feynman, one of the greatest of quantum theorists, put it). But the mathematics works. According to QM, some parameters of a system, such as energy or wavelength, can only take on certain values; any values in between are not allowed. Such allowed values are called eigenvalues. The eigenvalues are separated by minimal “distances” called quanta and the system is said to be quantized. We will see a good example of them when we look at atomic structure. An important result of QM is that certain particles known as fermions are constrained so that two of them can never occupy the same QM state. This phenomenon, called the Exclusion Principle, is at the root of solid-state physics and therefore of the existence of transistors and all the technologies dependent thereupon – portable computers, mobile telephones, space exploration and the Internet, to mention just a few examples. So QM has indeed revolutionized modern life, for the better and for the worse. The exclusion principle is also responsible for the fact that electrons in a collapsing super-dense star cannot all be in the same state, so there is a pressure effectively keeping them from being compressed any further. We will read more about that in the cosmology chapter. Closer to home, fermions constitute matter, including us. An important subject of study and discussion in current theoretical physics is the interpretation of QM, such as in the many-worlds hypothesis, but that subject is beyond the scope of this article. Go on to read about relativity, because it’s probably not what you thought it was. Notes 1. Blundell, 1.
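A few lines of code illustrate the statement that the probability comes from the complex square of the wave function (this is an illustration of ours, not from the article; the amplitude, wave number and position are arbitrary numbers):

```python
# Illustration: the probability density is the complex square of the
# wave function, |psi|^2 = psi* * psi, which is always real and >= 0.
# A free-particle-style amplitude psi = A * exp(i k x) is used here
# purely as an example.
import cmath

A = 0.5          # arbitrary amplitude
k = 2.0          # arbitrary wave number
x = 1.3          # arbitrary position

psi = A * cmath.exp(1j * k * x)
density = (psi.conjugate() * psi).real  # |psi|^2

print(abs(density - abs(psi) ** 2) < 1e-12)  # True: |psi|^2 = psi* psi
print(density >= 0.0)                        # True: densities are non-negative
```

Note that the phase factor exp(ikx) drops out entirely: the density is |A|² no matter what x is, which is exactly why only the complex square carries measurable meaning.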
Dense Atomic Vapors Steve Cundiff studies the behavior of dense atomic vapors at temperatures ranging from 300–800 °C. In his group’s initial experiments, researchers directed two or three excitation laser pulses into dense vapors of potassium atoms (39K). The group used a reflection cell to study the signal beam generated by coherent interaction between the excitation pulses in the vapor. This method is similar to using a stroboscope, which uses pulses of ordinary light to make tennis balls appear stationary as they fly through the air. One major goal of this and subsequent experiments is to test an 1873 prediction about a fundamental interaction of light with a dense ensemble of oscillators (known as the Lorentz-Lorenz shift). The Cundiff group’s results suggest that the interactions are more complicated than Hendrik Lorentz predicted more than 130 years ago. The first set of experiments showed that the first laser pulse synchronized the resonance frequencies of the light emitted by the 39K atoms (i.e., created coherence); additional pulses gathered information about the dissipation of the coherence caused by atomic collisions. By varying the amount of time between pulses, the group monitored what occurred as atoms approached each other, collided, and flew apart. The group also studied the change of the decay rate of the signal with different laser powers. In the next set of experiments, the Cundiff group used a transmission cell in a JILA MONSTR (Multidimensional Optical Nonlinear SpecTRometer) to perform two-dimensional Fourier-transform spectroscopy of the dense atomic vapor. This technique allowed the researchers to observe how a laser pulse interacts with the 39K atoms. It also made it possible to discover new phenomena and investigate their properties. This research continues to shed light on the collision behavior of 39K atoms in a dense vapor. 
JILA MONSTR Helps Solve the Schrödinger Equation for Hot K Atoms The Cundiff group recently came up with an experimental technique to measure key parameters needed to solve the Schrödinger equation for detailed spectra of a gas of hot (180 °C) potassium (K) atoms. The researchers obtained the spectra by using the JILA MONSTR to perform optical three-dimensional (3D) Fourier-transform spectroscopy on the gas. The spectra allowed them to see what was happening inside the quantum world of the atoms in their experiment. The researchers were able to disentangle all possible pathways between specific initial conditions such as excited states or superposition states. Once all pathways had been identified, the researchers were able to make the measurement necessary for characterizing the pathways. With this information, they were able to figure out some pieces of the Hamiltonian they needed. The Hamiltonian is a key part of the Schrödinger equation that describes the time-dependent evolution of quantum states in a physical system such as a gas of hot K atoms.  This technique opens up many possibilities, including realizing the dream of coherently controlling chemical reactions. Coherent control requires an understanding of all possible quantum pathways in a particular reaction. The fact that optical 3D Fourier-transform spectroscopy made it possible to identify all the pathways in this experiment is a big step forward in realizing this dream. The new technique also opens the door to experimentally determining a Hamiltonian for an even more complex system.
Born–Oppenheimer approximation
In quantum chemistry and molecular physics, the Born–Oppenheimer (BO) approximation is the assumption that the motion of atomic nuclei and electrons in a molecule can be separated. The approach is named after Max Born and J. Robert Oppenheimer. In mathematical terms, it allows the wavefunction of a molecule to be broken into its electronic and nuclear (vibrational, rotational) components. The success of the BO approximation is due to the large difference between nuclear and electronic masses. The approximation is an important tool of quantum chemistry: all computations of molecular wavefunctions for large molecules make use of it, and without it only the lightest molecule, H2, could be handled. Even in the cases where the BO approximation breaks down, it is used as a point of departure for the computations. Short description In the first step the nuclear kinetic energy is neglected,[1] that is, the corresponding operator Tn is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian He the nuclear positions enter as parameters. The electron–nucleus interactions are not removed, and the electrons still "feel" the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the clamped-nuclei approximation.) The electronic Schrödinger equation, He(r; R) χ(r; R) = Ee(R) χ(r; R), is then solved, where r stands for the electronic and R for the nuclear coordinates; the eigenvalue Ee(R), computed for many nuclear geometries, yields the potential energy surface.[2] In the second step of the BO approximation the nuclear kinetic energy Tn (containing partial derivatives with respect to the components of R) is reintroduced, and the Schrödinger equation for the nuclear motion, [Tn + Ee(R)] φ(R) = E φ(R), is solved.[3] Derivation of the Born–Oppenheimer approximation We will assume that the parametric dependence of the electronic wave functions on R is continuous and differentiable, so that it is meaningful to consider ∂χ(r; R)/∂R, which in general will not be zero. 
The total wave function is expanded in terms of the electronic eigenfunctions χk(r; R): Ψ(r, R) = Σk χk(r; R) φk(R). The column vector φ has elements φk(R). The matrix of electronic energies is diagonal, and the nuclear Hamilton matrix is non-diagonal; its off-diagonal (vibronic coupling) terms are further discussed below. The vibronic coupling in this approach is through nuclear kinetic energy terms. The diagonal (k′ = k) matrix elements ⟨χk|∇R χk⟩ of the operator ∇R vanish, because we assume the electronic Hamiltonian time-reversal invariant, so χk can be chosen to be always real. The off-diagonal matrix elements satisfy ⟨χk′|∇R χk⟩ = ⟨χk′|∇R He|χk⟩ / (Ek − Ek′). The matrix element in the numerator is ⟨χk′|∇R He|χk⟩; only the nuclear–electron attraction in He depends on R, and the matrix element of this one-electron operator appearing on the right side is finite. When the two surfaces come close, Ek ≈ Ek′, the nuclear momentum coupling term becomes large and is no longer negligible. This is the case where the BO approximation breaks down, and a coupled set of nuclear motion equations must be considered instead of the one equation appearing in the second step of the BO approximation. Conversely, if all surfaces are well separated, all off-diagonal terms can be neglected, and hence the whole matrix of ⟨χk′|∇R χk⟩ is effectively zero. The third term on the right side of the expression for the matrix element of Tn (the Born–Oppenheimer diagonal correction) can approximately be written as the matrix of ⟨χk′|∇R χk⟩ squared and, accordingly, is then negligible also. Only the first (diagonal) kinetic energy term in this equation survives in the case of well separated surfaces, and a diagonal, uncoupled, set of nuclear motion equations results: [Tn + Ek(R)] φk(R) = E φk(R), which are the normal second-step BO equations discussed above. We reiterate that when two or more potential energy surfaces approach each other, or even cross, the Born–Oppenheimer approximation breaks down, and one must fall back on the coupled equations. Usually one invokes then the diabatic approximation. 
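The vanishing of the diagonal derivative couplings asserted above follows in one line from normalization; in standard notation (a reconstruction of ours, since the display equations are missing from this copy):

```latex
% For real, normalized electronic wave functions \chi_k the diagonal
% first-derivative coupling vanishes identically:
\langle \chi_k \mid \nabla_{\mathbf R}\, \chi_k \rangle
  = \tfrac{1}{2}\, \nabla_{\mathbf R} \langle \chi_k \mid \chi_k \rangle
  = \tfrac{1}{2}\, \nabla_{\mathbf R}\,(1)
  = 0
```

This is why time-reversal invariance matters in the argument: it is what allows the χk to be chosen real in the first place.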
The Born–Oppenheimer approximation with the correct symmetry To include the correct symmetry within the Born–Oppenheimer (BO) approximation,[4][5] a molecular system presented in terms of (mass-dependent) nuclear coordinates and formed by the two lowest BO adiabatic potential energy surfaces (PES) is considered. To ensure the validity of the BO approximation, the energy E of the system is assumed to be low enough so that the upper of the two surfaces becomes a closed PES in the region of interest, with the exception of sporadic infinitesimal sites surrounding degeneracy points formed by the two surfaces (designated as (1, 2) degeneracy points). The starting point is the nuclear adiabatic BO (matrix) equation,[6] in which a column vector contains the unknown nuclear wave functions, a diagonal matrix contains the corresponding adiabatic potential energy surfaces, m is the reduced mass of the nuclei, E is the total energy of the system, a gradient operator acts with respect to the nuclear coordinates, and a further matrix contains the vectorial non-adiabatic coupling terms (NACT). To study the scattering process taking place on the two lowest surfaces, one extracts from the above BO equation the two corresponding coupled equations for the nuclear functions on the two surfaces (k = 1, 2), in which the (vectorial) NACT is responsible for the coupling between them. Next a new (complex) function combining the two nuclear functions is introduced,[7] and the corresponding rearrangements are made: 1. Multiplying the second equation by i and combining it with the first equation yields a (complex) equation for the combined function. 2. The last term in this equation can be deleted for the following reasons: at those points where the upper PES is classically closed the corresponding nuclear function vanishes by definition, and at those points where the upper PES becomes classically allowed (which happens in the vicinity of the (1, 2) degeneracy points) the term is likewise negligible. 
Consequently, the last term is, indeed, negligibly small at every point in the region of interest, and the equation simplifies accordingly. In order for this equation to yield a solution with the correct symmetry, it is suggested to apply a perturbation approach based on an elastic potential that coincides with the lower surface in the asymptotic region. The integration follows an arbitrary contour, and the exponential function contains the relevant symmetry as created while moving along it; the perturbative correction satisfies the resulting inhomogeneous equation. The relevance of the present approach was demonstrated while studying a two-arrangement-channel model (containing one inelastic channel and one reactive channel) for which the two adiabatic states were coupled by a Jahn–Teller conical intersection.[8][9] A nice fit between the symmetry-preserved single-state treatment and the corresponding two-state treatment was obtained. This applies in particular to the reactive state-to-state probabilities (see Table III in Ref. 5a and Table III in Ref. 5b), for which the ordinary BO approximation led to erroneous results, whereas the symmetry-preserving BO approximation produced the accurate results, as they followed from solving the two coupled equations. See also 1. ^ This step is often justified by stating that "the heavy nuclei move more slowly than the light electrons". Classically this statement makes sense only if the momentum p of electrons and nuclei is of the same order of magnitude. In that case mn ≫ me implies p²/(2mn) ≪ p²/(2me). It is easy to show that for two bodies in circular orbits around their center of mass (regardless of individual masses), the momenta of the two bodies are equal and opposite, and that for any collection of particles in the center-of-mass frame, the net momentum is zero. Given that the center-of-mass frame is the lab frame (where the molecule is stationary), the momentum of the nuclei must be equal and opposite to that of the electrons. 
A hand-waving justification can be derived from quantum mechanics as well. Recall that the corresponding operators do not contain mass and think of the molecule as a box containing the electrons and nuclei (see particle in a box). Since the kinetic energy is p²/(2m), it follows that, indeed, the kinetic energy of the nuclei in a molecule is usually much smaller than the kinetic energy of the electrons, the mass ratio being on the order of 10⁴.[citation needed] 2. ^ It is assumed, in accordance with the adiabatic theorem, that the same electronic state (for instance, the electronic ground state) is obtained upon small changes of the nuclear geometry. The method would give a discontinuity (jump) in the PES if electronic state switching would occur.[citation needed] 3. ^ This equation is time-independent, and stationary wavefunctions for the nuclei are obtained; nevertheless, it is traditional to use the word "motion" in this context, although classically motion implies time dependence.[citation needed] 8. ^ (a) R. Baer, D. M. Charutz, R. Kosloff and M. Baer, J. Chem. Phys. 111, 9141 (1996); (b) S. Adhikari and G. D. Billing, J. Chem. Phys. 111, 40 (1999). 9. ^ D. M. Charutz, R. Baer and M. Baer, Chem. Phys. Lett. 265, 629 (1996). External links Resources related to the Born–Oppenheimer approximation:
Symmetry and Integrability of Equations of Mathematical Physics − 2011 Yuri Karadzhov (Institute of Mathematics of NAS of Ukraine, Kyiv, Ukraine) Matrix Superpotentials The classification of matrix-valued shape-invariant superpotentials which give rise to new exactly solvable systems of Schrödinger equations is presented. Superpotentials of the generic form $W_k = kQ + P +\frac1k R$, where $k$ is a variable parameter and $P, Q$ and $R$ are Hermitian matrices of an arbitrary dimension $n$, are considered. Additionally, it is supposed that the matrices $P, Q$ and $R$ are not zero matrices and are not proportional to the unit matrix.
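For context on "shape-invariant": in standard SUSY quantum mechanics a family of superpotentials $W_k$ is shape-invariant when the two partner potentials built from $W_k$ differ only by a step of the parameter and an additive constant. In the abstract's notation this condition takes the standard form (our rendering, not quoted from the talk):

```latex
% Shape-invariance condition relating the partner potentials
% V_\pm = W_k^2 \pm W_k' built from the superpotential W_k:
W_k^2(x) + W_k'(x) \;=\; W_{k+1}^2(x) - W_{k+1}'(x) + C_k
```

Shape invariance is what makes the associated Schrödinger equations exactly solvable: the spectrum is obtained by iterating the parameter step and summing the constants $C_k$.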
Quantum Mechanics
From Uni Study Guides
This is a topic from Higher Physics 1B
Quantum mechanics is the branch of physics dealing with physical phenomena at microscopic scales, particularly with notions of probability.
Probability in Matter
• Considering light as particles (photons), the probability per volume of finding a photon in a given region of space at a given time is proportional to the number N of photons per unit volume at that time and to the intensity:
Probability / V ∝ N
• Considering light as a wave, the intensity is proportional to the square of the magnitude of the electric field (I ∝ E²)
• Combining these perspectives gives:
Probability / V ∝ E²
• This equation means that the probability per unit volume of finding a photon in a given region is proportional to the square of the amplitude of the corresponding EM wave. This amplitude is called the probability amplitude or wave function and denoted Ψ
Wave Function
The complete wave function for a system is dependent upon the positions of all the particles which make up that system; for example the function for the jth particle within a system of t particles is given as:
Ψ(r1, r2 ... rj ... rt) = Ψ(rj)e^(−iωt)
Where rj is the position of the jth particle in the system
ω = 2πf is the angular frequency
i = (−1)^(1/2)
t is the total number of particles in the system
The Absolute Square of the Wave Function
• The wave function is often of complex value, and so we consider the absolute square (|Ψ|² = Ψ*Ψ, where Ψ* is the complex conjugate) instead, which is always real and positive
• |Ψ|² is proportional to the probability per unit volume of finding a particle in a given region at a given time - in general the probability of finding a particle in a small incremental volume dV is |Ψ|²dV
• In one dimension this becomes |Ψ|²dx, such that the probability of finding a particle in the interval a ≤ x ≤ b is
P(a ≤ x ≤ b) = ∫_a^b |Ψ|² dx
which equals the area under the |Ψ|²-versus-x curve between a and b
• The wave function for a free particle moving along the x-axis is given as:
Ψ(x) = Ae^(ikx)
Where A is a constant amplitude
k = 2π/λ is the angular wave number
• |Ψ|² is a probability density function for continuous random variables (see MATH1231 Probability and Statistics Course Notes) such that:
∫_{−∞}^{∞} |Ψ|² dx = 1
• A wave function which satisfies this equation is said to be normalised, the implication of which is that the particle exists at some point in space
Expectation Values
• Ψ is not used as a measurable quantity in and of itself, but rather it is used to derive other measurable quantities, such as the expectation value of x, which is its average position and is defined as follows:
⟨x⟩ = ∫_{−∞}^{∞} Ψ* x Ψ dx
• The expectation value for any function of x is given similarly as:
⟨f(x)⟩ = ∫_{−∞}^{∞} Ψ* f(x) Ψ dx
For Ψ(x) to be physically valid:
• Ψ(x) can be complex or real
• Ψ(x) must be defined and single-valued at all points in space
• Ψ(x) must be normalised
• Ψ(x) must be continuous
The 'Particle in a Box' Thought Experiment
Consider a particle confined to a one-dimensional region of space, by a one-dimensional 'box' such that it is bouncing 
back and forth between two impenetrable walls L metres apart.
• As long as the particle remains inside the box, the potential energy is independent of location and can be set to zero
• It is impossible for the particle to exist outside the box (Ψ(x) = 0 for x < 0 and x > L), the implication of which is that if the particle was to be found outside of the box it would have infinite energy
• The wave function Ψ(x) is always continuous, and so if Ψ(x) = 0 for x < 0 and x > L, then Ψ(0) = 0 and Ψ(L) = 0
• The wave function can be expressed as a real sinusoidal function:
Ψ(x) = A sin(2πx / λ) = A sin(nπx / L)
• The wavelengths, and hence the wave functions and their absolute squares, are quantised (only λ = 2L/n is allowed, for n = 1, 2, 3, ...) [figure omitted: Ψ_n and |Ψ_n|² for the first few n]
Energy for the 'Particle in a Box'
• Potential energy inside the box was set at zero, so all the particle's energy is kinetic energy and is quantised, given as:
E_n = n²h² / (8mL²),  n = 1, 2, 3, ...
Schrödinger's Equation
The physicist Schrödinger applied De Broglie's equation to the probability wave equation to create an equation which describes the probability for a particle or wave in one dimension. The equation is complicated, as is its application, however the derivation is quite reasonable and will help in the understanding of quantum mechanics and in answering any questions pertaining to the equation in exam papers with confidence. 
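Before turning to the equation itself, the quantised box energies E_n = n²h²/(8mL²) and the normalisation condition can be checked numerically. A quick sketch for an electron in a 1 nm box (the box length, function names and constants below are our illustrative choices, not from the notes):

```python
# Particle-in-a-box check: E_n = n^2 h^2 / (8 m L^2), and the
# normalised wave function psi_n(x) = sqrt(2/L) sin(n pi x / L)
# integrates to 1 over [0, L].
import math

h = 6.626e-34        # Planck constant, J s
m_e = 9.109e-31      # electron mass, kg
L = 1e-9             # a 1 nm box

def energy_eV(n, m=m_e, L=L):
    """Allowed energy of level n, converted from joules to eV."""
    return n**2 * h**2 / (8 * m * L**2) / 1.602e-19

# Energies grow as n^2: E_2 = 4 E_1, E_3 = 9 E_1, ...
print(round(energy_eV(2) / energy_eV(1)))  # 4
print(round(energy_eV(3) / energy_eV(1)))  # 9

# Normalisation: integrate |psi_1|^2 over the box with the midpoint rule.
N = 10_000
dx = L / N
total = sum((2 / L) * math.sin(math.pi * (i + 0.5) * dx / L) ** 2 * dx
            for i in range(N))
print(abs(total - 1.0) < 1e-6)  # True
```

The n² growth of the levels is exactly what the formula predicts, and the numerical integral confirms that A = √(2/L) is the right normalisation constant.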
The time-independent Schrödinger equation is as follows:
−(ℏ²/2m) d²Ψ/dx² + U(x)Ψ = EΨ
Where ℏ = h / 2π
m represents mass
Ψ(x) is the wave function for a particle in one dimension
x is position in one dimension
U(x) is some function for the potential energy of the particle at position x
E is the total energy of the particle (both kinetic and potential)
• Begin by recalling the equation for the wave function in one dimension, Ψ(x) = Ae^(ikx), and differentiating it with respect to x, twice:
dΨ/dx = ikAe^(ikx),  d²Ψ/dx² = −k²Ae^(ikx)
• The second derivative can be rewritten in terms of the original function:
d²Ψ/dx² = −k²Ψ
• Given that all waves are also particles, we can substitute De Broglie's equation (λ = h/p, so k = 2π/λ = p/ℏ):
d²Ψ/dx² = −(p²/ℏ²)Ψ
• Recalling that total energy for a particle equals kinetic energy plus potential energy (E = K + U), and recalling that K = mv²/2 = (mv)²/2m = p² / 2m:
E = p² / 2m + U(x) (expressing potential energy as a function of position)
p² = 2m[E − U(x)]
• Substituting p² into what we have derived so far and rearranging gives the equation:
−(ℏ²/2m) d²Ψ/dx² + U(x)Ψ = EΨ
Schrödinger's Equation and the 'Particle in a Box'
• In the region 0 ≤ x ≤ L, where U = 0, the equation is simplified to:
d²Ψ/dx² = −(2mE/ℏ²)Ψ
• The solution to this (as seen in the first part of the derivation) is a wave equation:
Ψ(x) = A sin(kx) + B cos(kx), with k = √(2mE)/ℏ
Where the constants A and B are determined by the boundary and normalisation conditions
• A consideration of Ψ(x) for the different allowed energy levels gives the quantised solutions Ψ_n(x) = A sin(nπx / L) [figure omitted]
Finite Potential Wells
A finite potential well is a concept much like the 'particle in a box'. 
[figure omitted: finite well with regions I (x < 0), II (0 ≤ x ≤ L) and III (x > L)]
• The potential energy is zero in region II
• The potential energy has a finite value outside of the well (regions I and III)
• The general solution in regions I and III is:
Ψ(x) = Ae^(Cx) + Be^(−Cx)
Where A, B and C are constants
• In region I, B = 0 necessarily, in order to keep the wave function finite for large negative values of x
• In region III, A = 0 necessarily, in order to keep the wave function finite for large positive values of x
• A practical example of a finite potential well that may be referred to in exam questions is the quantum dot, a region used in nanotechnology which acts like a finite potential well
• Classical physics suggests that it is impossible for the particle to somehow pass through the barriers of the wall instead of being reflected, but sometimes this happens. This process is called tunnelling or barrier penetration and it is seen in alpha decay and in nuclear fusion
• The probability of tunnelling is given by the transmission coefficient T, while the probability of reflection is given by the reflection coefficient R, such that:
T + R = 1
Quantum Numbers
Electron orbital states can be defined by a set of quantised values known as quantum numbers, of the form (n, l, ml) where:
n is the principal/radial number (the energy level)
l is the orbital angular momentum number
ml is the magnetic/projection number
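The tunnelling probability mentioned above is often estimated, in introductory treatments, with the wide-barrier approximation T ≈ e^(−2CL), where C = √(2m(U − E))/ℏ matches the constant C in the exponential solutions above. A sketch (the particular energies and barrier width are illustrative numbers of ours):

```python
# Tunnelling sketch: for a wide/high rectangular barrier, introductory
# texts approximate the transmission coefficient as T ~ exp(-2 C L),
# with C = sqrt(2 m (U - E)) / hbar.  Numbers below are illustrative.
import math

hbar = 1.055e-34     # J s
m_e = 9.109e-31      # electron mass, kg
eV = 1.602e-19       # J per eV

def transmission(E_eV, U_eV, L, m=m_e):
    """Approximate tunnelling probability through a barrier of height
    U_eV (eV) and width L (m) for a particle of energy E_eV < U_eV."""
    C = math.sqrt(2 * m * (U_eV - E_eV) * eV) / hbar
    return math.exp(-2 * C * L)

T = transmission(E_eV=5.0, U_eV=10.0, L=1e-10)   # a 0.1 nm barrier
R = 1 - T                                        # reflection probability
print(0 < T < 1)               # True
print(abs(T + R - 1) < 1e-12)  # True: T + R = 1
```

Because T depends exponentially on both the barrier width and √(U − E), tunnelling currents are extraordinarily sensitive probes of distance, which is the operating principle behind the scanning tunnelling microscope.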
Parikshit Upadhyaya: The Eigenvector-Dependent Nonlinear Eigenvalue Problem Time: Fri 2018-06-15 13.15 - 14.15 Lecturer: Parikshit Upadhyaya, KTH Many numerical methods geared towards solving the Schrödinger equation eventually have to solve a nonlinear eigenvalue problem where the nonlinearity is present due to the dependence on the eigenvectors. This problem is usually solved using an iterative procedure called the "self-consistent field" (SCF) iteration or one of its variants. Since it is an iterative algorithm, we can always ask the following questions: 1) Under what conditions does this algorithm converge to the actual solution? 2) What makes the rate of convergence slower/faster? In this talk, we will begin by discussing the sources of the problem ("Hartree–Fock" discretization and "Density Functional Theory") and one of many formulations of the SCF iteration. Eventually, we will look at "answers" to the two questions. No previous knowledge of numerical methods for Schrödinger equations is required to understand the talk. Belongs to: Department of Mathematics Last changed: Dec 21, 2018
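To make the SCF idea concrete: one repeatedly builds the matrix from the current eigenvector guess, solves the resulting linear eigenvalue problem, and feeds the new eigenvector back in until it stops changing. A toy fixed-point sketch (the 2×2 matrix, the diagonal nonlinearity and the coupling strength are entirely our own construction, not from the talk or from any real Hartree–Fock/DFT discretization):

```python
# Toy "self-consistent field" (SCF) iteration for an eigenvector-dependent
# nonlinear eigenvalue problem A(v) v = lam v, here with the made-up
# nonlinearity A(v) = A0 + alpha * diag(v_1^2, v_2^2).
import math

A0 = [[2.0, 1.0],
      [1.0, 3.0]]
alpha = 0.5

def build_A(v):
    """Assemble the eigenvector-dependent matrix A(v)."""
    return [[A0[0][0] + alpha * v[0] ** 2, A0[0][1]],
            [A0[1][0], A0[1][1] + alpha * v[1] ** 2]]

def lowest_eigenpair(M):
    """Smallest eigenvalue and unit eigenvector of a symmetric 2x2 matrix."""
    a, b, d = M[0][0], M[0][1], M[1][1]
    lam = (a + d) / 2 - math.sqrt(((a - d) / 2) ** 2 + b ** 2)
    vx, vy = (b, lam - a) if abs(b) > 1e-15 else (1.0, 0.0)
    n = math.hypot(vx, vy)
    return lam, (vx / n, vy / n)

v = (1.0, 0.0)  # initial guess
for _ in range(100):
    lam, v_new = lowest_eigenpair(build_A(v))
    if math.hypot(v_new[0] - v[0], v_new[1] - v[1]) < 1e-12:
        v = v_new
        break
    v = v_new

# Self-consistency: v is now an eigenvector of the matrix built from v itself.
M = build_A(v)
residual = math.hypot(M[0][0] * v[0] + M[0][1] * v[1] - lam * v[0],
                      M[1][0] * v[0] + M[1][1] * v[1] - lam * v[1])
print(residual < 1e-8)  # True
```

The two questions in the abstract map directly onto this loop: whether the fixed-point map is contractive (convergence) and how strongly A depends on v (rate).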
Space and Time Modern investigations into the fundamental nature of space and time have produced a number of paradoxes and puzzles that also might benefit from a careful examination of the information content in the problem. An information metaphysicist might throw new light on nonlocality, entanglement, spooky action-at-a-distance, the uncertainty principle, and even eliminate the conflict between special relativity and quantum mechanics! Space and time form an immaterial coordinate system that allows us to keep track of material events, the positions and velocities of the fundamental particles that make up every body in the universe. As such, space and time are pure information, a set of numbers that we use to describe matter in motion. 
When Immanuel Kant described space and time as a priori forms of perception, he was right that scientists and philosophers impose the four-dimensional coordinate system on the material world. But he was wrong that the coordinate geometry must therefore be a flat Euclidean space. That is an empirical and contingent fact, to be discovered a posteriori. Albert Einstein’s theories of relativity have wrenched the metaphysics of space and time away from Kant’s common-sense intuitive extrapolation from everyday experience. Einstein’s special relativity has shown that coordinate values in space and time depend on (are relative to) the velocity of the reference frame being used. It raises doubts about whether there is any “preferred” or “absolute” frame of reference in the universe. And Einstein’s theory of general relativity added new properties to space that depend on the overall distribution of matter. He showed that the motion of a material test particle follows a geodesic (the shortest distance between two points) through a curved space, where the curvature is produced by all the other matter in the universe. At a deep, metaphysical level the standard view of gravitational forces acting between all material particles has been replaced by geometry. The abstract immaterial curvature of space-time has the power to influence the motion of a test particle. It is one thing to say that something as immaterial as space itself is just information about the world. It is another to give that immaterial information a kind of power over the material world, a power that depends entirely on the geometry of the environment. 
Space and Time in Quantum Physics

For over thirty years, from his 1905 discovery of nonlocal phenomena in his light-quantum hypothesis as an explanation of the photoelectric effect, until 1935, when he showed that two particles could exhibit nonlocal effects between themselves that Erwin Schrödinger called entanglement, Einstein was concerned about abstract functions of spatial coordinates that seemed to have a strange power to control the motion of material particles, a power that seemed to him to travel faster than the speed of light, violating his principle of relativity that nothing travels faster than light. Einstein’s first insight into these abstract functions may have started in 1905, but he made it quite clear at the Salzburg Congress in 1909. How exactly does the classical intensity of a light wave control the number of light particles at each point, he wondered. The classical wave theory assumes that light from a point source travels off as a spherical wave in all directions. But in the photoelectric effect, Einstein showed that all of the energy in a light quantum is available at a single point to eject an electron. Does the energy spread out as a light wave in space, then somehow collect itself at one point, moving faster than light to do so? Einstein already in 1905 saw something nonlocal about the photon and saw that there is both a wave aspect and a particle aspect to electromagnetic radiation. In 1909 he emphasized the dualist aspect and described the wave-particle relationship more clearly than it is usually presented today, with all the current confusion about whether photons and electrons are waves or particles or both. Einstein greatly expanded the 1905 light-quantum hypothesis in his presentation at the Salzburg conference in September, 1909. He argued that the interaction of radiation and matter involves elementary processes that are not reversible, providing a deep insight into the irreversibility of natural processes.
The irreversibility of matter-radiation interactions can put microscopic statistical mechanics on a firm quantum-mechanical basis. While incoming spherical waves of radiation are mathematically possible, they are not practically achievable and never seen in nature. If outgoing waves are the only ones possible, nature appears to be asymmetric in time. Einstein speculated that the continuous electromagnetic field might be made up of large numbers of discontinuous discrete light quanta - singular points in a field that superimpose collectively to create the wavelike behavior. The parts of a light wave with the greatest intensity would have the largest number of light particles. Einstein’s connection between the wave and the particle is that the wave indicates the probability of finding particles somewhere. The wave is not in any way a particle. It is an abstract field carrying information about the probability of photons in that part of space. Einstein called it a “ghost field” or “guiding field,” with a most amazing power over the particles. The probability amplitude of the wave function includes interference points where the probability of finding a particle is zero! Different null points appear when the second slit in a two-slit experiment is opened. With one slit open, particles are arriving at a given point. Opening a second slit should add more particles to that point in space. Instead it prevents any particles at all from arriving there. Light falling at a point from one slit plus more light from a second open slit results in no light! Such is the power of a “ghost field” wave function, carrying only information about probabilities. Abstract information can influence the motions of matter and energy! We can ask where this information comes from. As in the general theory of relativity, we find that it is information determined by the distribution of matter nearby, namely the wall with the two slits in it and the location of the particle detection screen.
These are the “boundary conditions” which, together with the known wavelength of the incoming monochromatic radiation, immediately tell us the probability of finding particles everywhere, including the null points. We can think of the waves above as standing waves. Einstein might have seen that, like his general relativity, the possible paths of a quantum particle are also determined by the spatial geometry. The boundary conditions and the wavelength tell us everything about where particles will be found and not found. The locations of null points, where particles are never found, are all static, given the geometry. They are not moving. The fact that water waves are moving, and his sense that the apparent waves might be matter or energy moving, led Einstein to suspect something is moving faster than light, violating his relativity principle. But if we see the waves as pure information, mere probabilities, we may resolve a problem that remains today as the greatest problem facing interpretations of quantum mechanics, the idea that special relativity and quantum mechanics cannot be reconciled. Let us see how an information metaphysics might resolve it. First we must understand why Einstein thought that something might be moving faster than the speed of light. Then we must show that values of the probability amplitude wave function are static in space. Nothing other than the particles is moving at any speed, let alone faster than light. Although he had been concerned about this for over two decades, it was at the fifth Solvay conference in 1927 that Einstein went to a blackboard and drew the essential problem shown in the above figure. He clearly says that the square of the wave function |ψ|² gives us the probability of finding a particle somewhere on the screen. But Einstein oddly fears some kind of action-at-a-distance is preventing that probability from producing an action elsewhere.
He says that this “implies to my mind a contradiction with the postulate of relativity.” As Werner Heisenberg described Einstein’s 1927 concern, the experimental detection of the particle at one point exerts a kind of action (reduction of the wave packet) at a distant point. How does the tiny remnant of probability on the left side of the screen “collapse” to the position where the particle is found? The simple answer is that nothing really “collapses,” in the sense of an object like a balloon collapsing, because the probability waves and their null points do not move. There is just an instantaneous change in the probabilities, which happens whenever one possibility among many becomes actualized. That possibility becomes probability one. Other possibilities disappear instantly. Their probabilities become zero, but not because any probabilities move anywhere. So the “collapse” of the wave function simply means that non-zero probabilities go to zero everywhere except at the point where the particle is found. Immaterial information has changed everywhere, but not “moved.” If nothing but information changes, if no matter or energy moves, then there is no violation of the principle of relativity, and no conflict between relativity and quantum mechanics!

Nonlocality and Entanglement

Since 1905 Einstein had puzzled over information at one place instantly providing information about a distant place. He dramatized this as “spooky action-at-a-distance” in the 1935 Einstein-Podolsky-Rosen thought experiment with two “entangled” particles. Einstein’s simplest such concern was the case of two electrons that are fired apart from a central point with equal velocities, starting at rest so the total momentum is zero. If we measure electron 1 at a certain point, then we immediately have the information that electron 2 is an equal distance away on the other side of the center. We have information or knowledge about the second electron’s position, not because we are measuring it directly.
We are calculating its position using the principle of the conservation of momentum. This metaphysical information analysis will be our basis for explaining the EPR “paradox,” which is actually not a paradox, because there is really no action-at-a-distance in the sense of matter or energy or even information moving from one place to another! It might better be called “knowledge-at-a-distance.” Einstein and his colleagues hoped to show that quantum theory could not describe certain intuitive “elements of reality” and thus is incomplete. They said that, as far as it goes, quantum mechanics is correct, just not “complete.” Einstein was correct that quantum theory is “incomplete” relative to classical physics, which has twice as many dynamical variables that can be known with arbitrary precision. The “complete” information of classical physics gives us the instantaneous position and momentum of every particle in space and time, so we have complete path information. Quantum mechanics does not give us that path information. This does not mean the continuous path of the particle, as demanded by conservation laws, does not exist - only that quantum measurements to determine that path are not possible! For Niels Bohr and others to deny the incompleteness of quantum mechanics was to juggle words, which annoyed Einstein. Einstein was also correct that indeterminacy makes quantum theory an irreducibly discontinuous and statistical theory. Its predictions and highly accurate experimental results are statistical in that they depend on an ensemble of identical experiments, not on any individual experiment. Einstein wanted physics to be a continuous field theory like relativity, in which all physical variables are completely and locally determined by the four-dimensional field of space-time. In classical physics we can have and in principle know complete path information. In quantum physics we cannot.
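Returning to the two-slit “ghost field” discussed above: the null points can be computed from the slit geometry alone. The following is a minimal numerical sketch (the wavelength and slit separation are arbitrary illustrative values, not taken from the text) showing that a direction which receives light with one slit open receives essentially none once the second slit is opened.

```python
import numpy as np

# Far-field (Fraunhofer) two-slit intensity, in arbitrary units.
# Assumed illustrative parameters: wavelength and slit separation d.
wavelength = 500e-9          # 500 nm monochromatic light
d = 2e-6                     # slit separation: 2 micrometres
k = 2 * np.pi / wavelength   # wave number

def one_slit_intensity(sin_theta):
    # One idealized point slit open: uniform intensity in all directions.
    return np.ones_like(sin_theta)

def two_slit_intensity(sin_theta):
    # Two slits open: amplitudes add first, then we square -> interference.
    phase = k * d * sin_theta / 2
    return 4 * np.cos(phase) ** 2  # zero where phase is an odd multiple of pi/2

# A direction where one slit alone delivers light...
sin_theta_null = wavelength / (2 * d)   # first interference minimum
print(one_slit_intensity(np.array([sin_theta_null])))  # -> [1.]
# ...but opening the second slit gives (almost exactly) zero there:
print(two_slit_intensity(np.array([sin_theta_null])))
```

The null directions depend only on the fixed geometry (slit separation and wavelength); nothing in the calculation moves, which is the point the essay is making.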
Visualizing Entanglement

Erwin Schrödinger said that his “wave mechanics” provided more “visualizability” (Anschaulichkeit) than the “damned quantum jumps” of the Copenhagen school, as he called them. He was right. We can use his wave function to visualize EPR. But we must focus on the probability amplitude wave function of the "entangled" two-particle state. We must not attempt to describe the paths or locations of independent particles - at least until after some measurement has been made. We must also keep in mind the conservation laws that Einstein used to describe nonlocal behavior in the first place. Then we can see that the “mystery” of nonlocality for two particles is primarily the same mystery as the single-particle collapse of the wave function. But there is an extra mystery, one we might call an “enigma,” that results from the nonseparability of identical indistinguishable particles. Richard Feynman said there is only one mystery in quantum mechanics (the superposition of multiple states, the probabilities of collapse into one state, and the consequent statistical outcomes). The additional enigma in two-particle nonlocality is that two indistinguishable and nonseparable particles appear simultaneously (in their original interaction frame) when their joint wave function “collapses.” There are two particles but only one wave function. In the time evolution of an entangled two-particle state according to the Schrödinger equation, we can visualize it - as we visualize the single-particle wave function - as collapsing when a measurement is made. Probabilities go to zero except at the particles’ two locations. Quantum theory describes the two electrons as in a superposition of electron spin up states ( + ) and spin down states ( - ),

| ψ > = (1/√2) | + - > − (1/√2) | - + >

What this means is that when we square the probability amplitude there is a 1/2 chance electron 1 is spin up and electron 2 is spin down. It is equally probable that 1 is down and 2 is up.
We simply cannot know. The discontinuous “quantum jump” is also described as the “reduction of the wave packet.” This is apt in the two-particle case, where the superposition of | + - > and | - + > states is “projected” or “reduced” by a measurement into one of these states, e.g., | + - >, and then further reduced - or “disentangled” - to the product of independent one-particle states | + > | - >. In the two-particle case (instead of just one particle making an appearance), when either particle is measured, we know instantly the now determinate properties of the other particle needed to satisfy the conservation laws, including its location equidistant from, but on the opposite side of, the source. But now we must also satisfy another conservation law, that of the total electron spin. It is another case of “knowledge-at-a-distance,” now about spin. If we measure electron 1 to have spin up, the conservation of electron spin requires that electron 2 have spin down, and instantly. Just as we do not know the electrons’ paths and positions before a measurement, we do not know their spins. But once we know one spin, we instantly know the other. And it is not that anything moved from one particle to “influence” the other.

Can Metaphysics Disentangle the EPR Paradox?

Yes, if the metaphysicist pays careful attention to the information available from moment to moment in space and time. When the EPR experiment starts, the prepared state of the two particles includes the fact that the total linear momentum and the total angular momentum (including electron spin) are zero. This must remain true after the experiment to satisfy conservation laws. These laws are the consequence of extremely deep properties of nature that arise from simple considerations of symmetry. Physicists regard these laws as “cosmological principles.” For the metaphysicist, these laws are metaphysical truths that arise from considerations of symmetry alone.
Physical laws do not depend on the absolute place and time of experiments, nor on their particular direction in space. Conservation of linear momentum depends on the translation invariance of physical systems, conservation of energy on the independence of time, and conservation of angular momentum on the invariance of experiments under rotations. A metaphysicist can see that in his zeal to attack quantum mechanics, Einstein may have introduced an asymmetry into the EPR experiment that simply does not exist. Removing that asymmetry completely resolves any paradox and any conflict between quantum mechanics and special relativity. To see Einstein’s false asymmetry clearly, remember that a “collapse” of a wave function just changes probabilities everywhere into certainties. For a two-particle wave function, any measurement produces information about the two particles’ new locations instantaneously. The possibilities of being anywhere that violate conservation principles vanish instantly. At the moment one electron is located, the other is also located. At that moment, one electron appears in a spacelike separation from the other electron and a causal relation is no longer possible between them. Before the measurement, we know nothing about their positions. Either might have been “here” and the other “there.” Immediately after the measurement, they are separated, we know where both are, and no communication between them is possible. Let’s focus on Einstein’s introduction of an asymmetry in his narrative that isn’t there in the physics. It’s a great example of going beyond the logic and the language to the underlying information we need to solve both philosophical and physical problems. Just look at any introduction to the problem of entanglement and nonlocal behavior of two particles. It always starts with something like “We first measure particle 1 and then...” Here is Einstein in his 1949 autobiography...
There is to be a system which at the time t of our observation consists of two partial systems S1 and S2, which at this time are spatially separated and (in the sense of the classical physics) are without significant reciprocity. [Such systems are not entangled!] All quantum theoreticians now agree upon the following: If I make a complete measurement of S1, I get from the results of the measurement and from ψ12 an entirely definite ψ-function ψ2 of the system S2... the real factual situation of the system S2 is independent of what is done with the system S1, which is spatially separated from the former. But two entangled particles are not separable before the measurement. No matter how far apart they may appear after the measurement, they are inseparable as long as they are described by a single two-particle wave function ψ12 that cannot be the product of two single-particle wave functions. As Erwin Schrödinger made clear to Einstein in late 1935, they are only separable after they have become disentangled, by some interaction with the environment, for example. If ψ12 has decohered, it can then be represented by the product of independent ψ-functions ψ1 * ψ2, and then what Einstein says about independent systems S1 and S2 would be entirely correct. Schrödinger more than once told Einstein these facts about entanglement, but Einstein appears never to have absorbed them. A proof that neither particle can be measured without instantly determining the other’s position is seen by noting that a spaceship moving at high speed from the left sees particle 1 measured before particle 2. A spaceship moving in the opposite direction reverses the time order of the measurements. These two views introduce the false asymmetries of assuming one measurement can be made prior to the other. 
In the special frame that is at rest with respect to the center of mass of the particles, the “two” measurements are simultaneous, because there is actually only one measurement “collapsing” the two-particle wave function. Any measurement collapsing the entangled two-particle wave function affects the two particles instantly and symmetrically. We hope that philosophers and metaphysicians who pride themselves as critical thinkers will be able to explain these information and symmetry implications to physicists who have been tied in knots by Einstein-Podolsky-Rosen and entanglement for so many decades.
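The perfect anticorrelation of the singlet state discussed above is easy to exhibit with a toy sampling model. This is a hypothetical sketch, not a full quantum simulation: each joint measurement along a common axis yields (+, −) or (−, +) with probability 1/2 each, which is exactly what squaring the amplitudes of | ψ > = (1/√2) | + - > − (1/√2) | - + > predicts.

```python
import random

# Toy model of measuring both spins of the singlet state along the same axis.
# |amplitude|^2 gives probability 1/2 for each of the two joint outcomes;
# (+,+) and (-,-) never occur, enforcing conservation of total spin.
def measure_singlet(rng):
    return ('+', '-') if rng.random() < 0.5 else ('-', '+')

rng = random.Random(0)  # fixed seed so the run is reproducible
outcomes = [measure_singlet(rng) for _ in range(10000)]

# Each single-particle result, taken alone, is 50/50 random...
frac_up_1 = sum(1 for a, b in outcomes if a == '+') / len(outcomes)
print(round(frac_up_1, 2))  # close to 0.5

# ...yet the pair of results is anticorrelated every single time:
print(all(a != b for a, b in outcomes))  # True
```

Note what the toy model makes vivid: no signal passes between the two "measurements" in the code; the anticorrelation is built into the jointly prepared state, which is the essay's "knowledge-at-a-distance."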
Scattering and tunnelling

Session 1

Scattering is a process in which incident particles interact with a target and are changed in nature, number, speed or direction of motion as a result. Tunnelling is a quantum phenomenon in which particles that are incident on a classically impenetrable barrier are able to pass through the barrier and emerge on the far side of it.

Session 2

In one dimension, wave packets scattered by finite square barriers or wells generally split into transmitted and reflected parts, indicating that there are non-zero probabilities of both reflection and transmission. These probabilities are represented by the reflection and transmission coefficients R and T. The values of R and T generally depend on the nature of the target and the properties of the incident particles. If there is no absorption, creation or destruction of particles, R + T = 1.

Session 3

Unnormalisable stationary-state solutions of Schrödinger's equation can be interpreted in terms of steady beams of particles. A term such as Ae^(i(kx − ωt)) can be associated with a beam of linear number density n = |A|² travelling with speed v = ℏk/m in the direction of increasing x. Such a beam has intensity j = nv. In this approach, T = j_trans/j_inc and R = j_ref/j_inc. For particles of energy E0 > V0, incident on a finite square step of height V0, the transmission coefficient is

T = 4k1k2/(k1 + k2)²,

where k1 = √(2mE0)/ℏ and k2 = √(2m(E0 − V0))/ℏ are the wave numbers of the incident and transmitted beams. For a finite square well or barrier of width L, the transmission coefficient can be expressed as

T = [1 + V0² sin²(k2L)/(4E0(E0 ± V0))]⁻¹,

where k2 = √(2m(E0 ± V0))/ℏ, with the plus signs being used for a well and the minus signs for a barrier. Transmission resonances, at which T = 1 and the transmission is certain, occur when k2L = Nπ, where N is an integer. Travelling wave packets and steady beams of particles can both be thought of as representing flows of probability.
In one dimension such a flow is described by the probability current

j_x(x,t) = (ℏ/2mi)(Ψ* ∂Ψ/∂x − Ψ ∂Ψ*/∂x).

In three dimensions, scattering is described by the total cross-section, σ, which is the rate at which scattered particles emerge from the target per unit time per unit incident flux. For any chosen direction, the differential cross-section tells us the rate of scattering into a small cone of angles around that direction. At very high energies, total cross-sections are dominated by inelastic effects due to the creation of new particles.

Session 4

Wave packets with a narrow range of energies centred on E0 can tunnel through a finite square barrier of height V0 > E0. In a stationary-state approach, solutions of the time-independent Schrödinger equation in the classically forbidden region contain exponentially growing and decaying terms of the form Ce^(−αx) and De^(αx), where α = √(2m(V0 − E0))/ℏ is the attenuation coefficient. The transmission coefficient for tunnelling through a finite square barrier of width L and height V0 is approximately

T ≈ 16(E0/V0)(1 − E0/V0) e^(−2αL).

Such a transmission probability is small and decreases rapidly as the barrier width L increases.

Session 5

Square barriers and wells are poor representations of the potential energy functions found in Nature. However, if the potential V(x) varies smoothly as a function of x, the transmission coefficient for tunnelling of energy E0 can be roughly represented by

T ≈ e^(−2∫α(x) dx),   where α(x) = √(2m(V(x) − E0))/ℏ

and the integral is taken over the classically forbidden region. This approximation can be used to provide a successful theory of nuclear alpha decay as a tunnelling phenomenon. It can also account for the occurrence of nuclear fusion in stellar cores, despite the relatively low temperatures there. In addition, it explains the operation of the scanning tunnelling microscope which can map surfaces on the atomic scale.

Glossary and Physics Toolkit

Attached below are PDFs of the original glossary and the Physics Toolkit; you may find it useful to refer to these documents as you work through the course. Click to view glossary
(16 pages, 0.4 MB). Click to view toolkit (17 pages, 0.4 MB).
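The square-barrier results summarised above can be checked numerically. The following is a minimal sketch, in units where ℏ = m = 1 (an illustrative choice): it verifies a transmission resonance at k2L = π and the rapid fall-off of the tunnelling probability with barrier width.

```python
import math

HBAR = 1.0
M = 1.0  # work in units where hbar = m = 1 (illustrative choice)

def T_barrier(E, V0, L):
    """Exact transmission coefficient for a finite square barrier, E > V0."""
    k2 = math.sqrt(2 * M * (E - V0)) / HBAR
    return 1.0 / (1.0 + V0**2 * math.sin(k2 * L)**2 / (4 * E * (E - V0)))

def T_tunnel(E, V0, L):
    """Approximate transmission for tunnelling through the barrier, E < V0."""
    alpha = math.sqrt(2 * M * (V0 - E)) / HBAR
    return 16 * (E / V0) * (1 - E / V0) * math.exp(-2 * alpha * L)

# Transmission resonance: choose L so that k2 * L = pi (N = 1) -> T = 1.
V0, E = 1.0, 2.0
k2 = math.sqrt(2 * M * (E - V0)) / HBAR
print(T_barrier(E, V0, math.pi / k2))   # -> 1.0 (resonance)

# Tunnelling probability falls off rapidly as the barrier widens:
print(T_tunnel(0.5, 1.0, 1.0) > T_tunnel(0.5, 1.0, 2.0))  # True
```

With the barrier doubled in width, the e^(−2αL) factor alone suppresses the transmission by a factor of e^(−2α), illustrating why alpha decay half-lives are so sensitive to barrier dimensions.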
2010 CMS Summer Meeting

Stability for Nonlinear Partial Differential Equations
Org: Stephen Gustafson (UBC) and Dmitry Pelinovsky (McMaster)

BERNARDO GALVAO-SOUSA, McMaster University, 1280 Main St. W, Hamilton, ON, L8S 4L8
Thin Film Limits for Ginzburg-Landau with Strong Applied Magnetic Fields
We study thin-film limits of the full three-dimensional Ginzburg-Landau model for a superconductor in an applied magnetic field oriented obliquely to the film surface. We obtain Γ-convergence results in several regimes, determined by the asymptotic ratio between the magnitude of the parallel applied magnetic field and the thickness of the film. Depending on the regime, we show that there may be a decrease in the density of Cooper pairs. We also show that in the case of variable thickness of the film, its geometry will affect the effective applied magnetic field, thus influencing the position of vortices.

STEPHEN GUSTAFSON, University of British Columbia, 1984 Mathematics Rd., Vancouver, BC V6T 1Z2
Singularities and asymptotics for some geometric nonlinear Schroedinger equations
I will describe results on singularity (non-)formation and stability, in the energy-critical 2D setting, for some nonlinear Schroedinger-type systems of geometric origin (the Schroedinger map and Landau-Lifshitz equation) which model dynamics of ferromagnets and liquid crystals.

SLIM IBRAHIM, University of Victoria
Strichartz type estimates and application to a 2D energy critical NLW in a bounded domain
In this work, we establish an appropriate 2D Strichartz type estimate for the linear wave equation set on a bounded domain with either Dirichlet or Neumann type boundary conditions. The proof follows the Burq-Lebeau-Planchon work in 3D and is based solely on spectral projection estimates due to Sogge. Our Strichartz estimate enables us to solve the nonlinear problem with exponential nonlinearity.
We define a trichotomy for the Cauchy problem, prove well-posedness on two sides of the trichotomy, and a sort of instability on the last side. This is joint work with R. Jrad.

KAY KIRKPATRICK, Courant Institute, NYU, 251 Mercer St., New York, NY 10012
Bose-Einstein condensation: from many quantum particles to a quantum "super-particle" and beyond
Near absolute zero, a gas of quantum particles can condense into an unusual state of matter, called Bose-Einstein condensation, that behaves like a giant quantum particle. It's only recently that we've been able to make the rigorous connection between the physics of the microscopic dynamics and the mathematics of the macroscopic model, the cubic nonlinear Schrödinger equation (NLS). I'll discuss joint work with Benjamin Schlein and Gigliola Staffilani on two-dimensional cases of Bose-Einstein condensation; the periodic case is especially interesting, because of techniques from analytic number theory and applications to quantum computing. As time permits, I'll also mention work in progress on computational quantum many-body systems and phase transitions for the invariant measures of the NLS.

EDUARD KIRR, University of Illinois at Urbana-Champaign
Stability and bifurcations of large bound states in nonlinear Schroedinger equation
I will present recent necessary and sufficient conditions for the existence of bifurcation points along ground state and excited state branches in nonlinear Schroedinger equations. The possible types of bifurcations and their effect on the dynamical stability of the bound states will also be discussed. This is joint work with D. Pelinovsky (McMaster), P. Kevrekidis (U. Mass.) and V. Natarajan (U. of Illinois).
EVA KOO, University of British Columbia, Vancouver, BC
Asymptotic stability of small solitary waves for nonlinear Schrödinger equations with electromagnetic potential in R³
Consider the nonlinear magnetic Schrödinger equation for u : R³ × R → C,

iu_t = (i∇ + A)² u + V u + g(u),     u(x,0) = u0(x),

where A : R³ → R³ is the magnetic potential, V : R³ → R is the electric potential, and g(u) = ±|u|² u is the nonlinear term. We will show that under suitable assumptions on A and V, if the initial data u0 is small enough in H¹, then the solution u(x,t) decomposes uniquely into a standing wave part and a dispersive part which scatters.

QIUPING LU, Centro de Modelamiento Matematico, U. de Chile
Compactly supported solutions of a class of semilinear elliptic equations
A class of semilinear elliptic equations, which describes time-independent solutions to a degenerate parabolic equation modeling population dynamics, is studied. Under suitable assumptions all solutions are compactly supported; moreover, multiplicity of solutions is shown by the methods of variations.

DMITRY PELINOVSKY, McMaster University
Excited states in the Thomas-Fermi limit
Excited states of Bose-Einstein condensates are considered in the Thomas-Fermi limit, in the framework of the Gross-Pitaevskii equation with repulsive inter-atomic interactions in a harmonic potential. The relative dynamics of dark solitons (density dips on the localized condensates) with respect to the harmonic potential and to each other is approximated using the averaged Lagrangian method and the Lyapunov-Schmidt reductions. This permits a complete characterization of the equilibrium positions of the dark solitons as a function of the chemical potential parameters. It also yields an analytical handle on the oscillation frequencies of dark solitons around such equilibria. The asymptotic predictions are generalized for an arbitrary number of dark solitons and are corroborated by numerical computations for two- and three-soliton configurations.
PATRICK REYNOLDS, Queen's University, Kingston, Ontario
Criteria for certain systems of PDEs to be Hamiltonian
A system of hydrodynamic type is a system of quasilinear first-order PDEs; the quasilinear nature, remarkably and beautifully, allows us to study such systems using finite-dimensional differential-geometric methods. To say such a system is Hamiltonian is to say that it is composed of some Poisson bracket and some Hamiltonian function. The motivating question is: given a system of hydrodynamic type, how can we determine whether or not it is Hamiltonian? We certainly can't test all Poisson brackets and all Hamiltonian functions! I'll present a recent answer to this question for systems of hydrodynamic type with three equations.

ANTON SAKOVICH, McMaster University, Department of Mathematics, Hamilton, ON, L8S 4K1
Internal modes of discrete solitons near the anti-continuum limit of the dNLS equation
Discrete solitons of the discrete nonlinear Schrödinger (dNLS) equation become compactly supported in the anti-continuum limit of the zero coupling between lattice sites. Eigenvalues of the linearization of the dNLS equation at the discrete soliton determine its spectral and linearized stability. All unstable eigenvalues of the discrete solitons near the anti-continuum limit were characterized earlier for this model. Here we analyze the resolvent operator and prove that it is uniformly bounded in the neighborhood of the continuous spectrum if the discrete soliton is simply connected in the anti-continuum limit. This result rules out existence of internal modes (neutrally stable eigenvalues of the discrete spectrum) of such discrete solitons near the anti-continuum limit.

GIDEON SIMPSON, University of Toronto, Toronto, ON
Spectral Analysis of Matrix Hamiltonian Operators
We study the spectral properties of matrix Hamiltonians generated by linearizing nonlinear Schrödinger equations about soliton solutions.
Using a hybrid analytic-numerical proof, we show that there are no embedded eigenvalues for the 3-dimensional cubic nonlinearity, and other nonlinearities. Though we focus on the 3d cubic problem, the goal of this work is to present a new, robust algorithm for verifying the spectral properties needed for stability analysis. We also present several cases for which our approach is inconclusive and speculate on ways to extend the method. This is joint work with J. L. Marzuola (Columbia University).

HOLGER TEISMANN, Acadia University
Local controllability of a Bose-Einstein condensate in a time-varying box
In this talk we consider the "condensate-in-time-varying-box" problem in one space dimension,

iu_t = −u_xx − |u|² u,     (1a)
u(t,0) = u(t,L(t)) = 0.     (1b)

Taking the length, L(t), of the box to be the control, we show that (1a), (1b) is controllable in the vicinity of the nonlinear ground state. This is joint work with Karine Beauchard (Cachan) and Horst Lange (Cologne).

FRIDOLIN TING, Lakehead University, 955 Oliver Road, Thunder Bay, Ontario P7B 5E1
Dynamic stability of multi-vortex solutions to Ginzburg-Landau equations with external potential
We consider the dynamic stability of pinned multi-vortex solutions to Ginzburg-Landau equations with external potential in R². For a sufficiently small external potential with widely spaced non-degenerate critical points, there exists a perturbed multi-vortex (pinned) solution whose vortex centers are near critical points of the potential. We show that multi-vortex solutions which are concentrated near local maxima of the potential are orbitally stable w.r.t. gradient and Hamiltonian dynamics.

VITALI VOUGALTER, University of Toronto, Department of Mathematics, Toronto, ON M5S 2E4
On the solvability conditions for the diffusion equation with convection terms
Linear second order elliptic equation describing heat or mass diffusion and convection on a given velocity field is considered in R³.
The corresponding operator L may not satisfy the Fredholm property. In this case, solvability conditions for the equation L u = f are not known. In this work, we derive solvability conditions in H2(R3) for the non-self-adjoint problem by relating it to a self-adjoint Schrödinger-type operator, for which solvability conditions were obtained in our previous work.

© Société mathématique du Canada
Sunday, August 9, 2015

A very brief introduction to the electron correlation energy

RHF is often not accurate enough for predicting the change in energies due to a chemical reaction, no matter how big a basis set we use.  The reason is the error due to the molecular orbital approximation, and the energy difference due to this approximation is known as the correlation energy.  Just like we improve the LCAO approximation by including more terms in an expansion, we can improve the orbital approximation by an expansion in terms of Slater determinants $$\Psi ({{\bf{r}}_1},{{\bf{r}}_2}, \ldots {{\bf{r}}_N}) \approx \sum\limits_{i = 1}^L {{C_i}{\Phi _i}({{\bf{r}}_1},{{\bf{r}}_2}, \ldots {{\bf{r}}_N})} $$ The “basis set” of Slater determinants $\{\Phi_i \}$ is generated by first computing an RHF wave function $\Phi_0$ as usual, which also generates a lot of virtual orbitals, and then generating other determinants with these orbitals.  For example, for an atom or molecule with two electrons the RHF wave function is  $\left| {{\phi _1}{{\bar \phi }_1}} \right\rangle $ and we have $K-1$ virtual orbitals (${\phi _2}, \ldots ,{\phi _K}$, where $K$ is the number of basis functions), which can be used to make other Slater determinants like $\Phi _1^2 = \left| {{\phi _1}{{\bar \phi }_2}} \right\rangle $ and $\Phi _{11}^{22} = \left| {{\phi _2}{{\bar \phi }_2}} \right\rangle $  (Figure 1).

Figure 1. Schematic representation of the electronic structure of some of the determinants used in Equation 3

Conceptually (in analogy to spectroscopy), an electron is excited from an occupied to a virtual orbital: $\left| {{\phi _1}{{\bar \phi }_2}} \right\rangle$ represents a single excitation and $\left| {{\phi _2}{{\bar \phi }_2}} \right\rangle $  a double excitation.  For systems with more than two electrons higher excitations (like triple and quadruple excitations) are also possible.
In general $$\Psi  \approx {C_0}{\Phi _0} + \sum\limits_a {\sum\limits_r {C_a^r\Phi _a^r} }  + \sum\limits_a {\sum\limits_b {\sum\limits_r {\sum\limits_s {C_{ab}^{rs}\Phi _{ab}^{rs}} } } }  +  \ldots $$ The expansion coefficients can be found using the variational principle $$\frac{{\partial E}}{{\partial {C_i}}} = 0 \ \textrm{for all} \ i$$ and this approach is called configuration interaction (CI).  The more excitations we include (i.e. the larger $L$ in the expansion above) the more accurate the expansion and the resulting energy become.  If the expansion includes all possible excitations (known as a full CI, FCI) then we have a numerically exact wave function for the particular basis set, and if we use a basis set where the HF limit is reached then we have a numerically exact solution to the electronic Schrödinger equation!  That’s the good news … The bad news is that the FCI “basis set of determinants” is much, much larger than the LCAO basis set (i.e. $L \gg K$), $$L = \frac{{K!}}{{N!(K - N)!}}$$ where $N$ is the number of electrons.  Thus, an RHF/6-31G(d,p) calculation on water involves 24 basis functions and roughly $\tfrac{1}{8}K^4$ = 42,000 2-electron integrals, but a corresponding FCI/6-31G(d,p) calculation involves nearly 2,000,000 Slater determinants.

Just like finding the LCAO coefficients involves the diagonalization of the Fock matrix, finding the CI coefficients ($C_i$) and the lowest energy also involves a matrix diagonalization, $$\bf{E} = {{\bf{C}}^t}{\bf{HC}}$$ where $\bf{E}$ is a diagonal matrix whose smallest value ($E_0$) corresponds to the variational energy minimum.  While the Fock matrix is a $K \times K$ matrix, the CI Hamiltonian ($\bf{H}$) is an $L \times L$ matrix.  Just holding the 2 million by 2 million CI Hamiltonian for the water molecule in the 6-31G(d,p) basis set would require tens of terabytes of memory!
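The two counts quoted above can be reproduced directly. This is just a sanity check: the formula for $L$ and the $\tfrac{1}{8}K^4$ integral estimate are the ones given in the text.

```python
from math import comb

# K basis functions, N electrons for water/6-31G(d,p), as given in the text
K, N = 24, 10

# FCI expansion length L = K! / (N! (K - N)!)
L = comb(K, N)
assert L == 1961256        # "nearly 2,000,000 Slater determinants"

# rough number of unique 2-electron integrals in the RHF step, ~K^4 / 8
assert K**4 // 8 == 41472  # "roughly 42,000"
```

The factor of ~50 between the number of determinants and the number of integrals is what makes FCI so much harder than the RHF step it builds on.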
Clever programming and large computers actually make an FCI/6-31G(d,p) calculation on $\ce{H2O}$ possible, but FCI is clearly not a routine molecular modeling tool.  Using, for example, only single excitations (called CI singles, CIS) $${\Psi ^{CIS}} = {C_0}{\Phi _0} + \sum\limits_a {\sum\limits_r {C_a^r\Phi _a^r} } $$ is feasible; however, it doesn’t result in any improvement.  The CIS Hamiltonian has three kinds of contributions $$\langle \Phi _0 |{\hat H}| \Phi_0 \rangle = E_{RHF}, \quad \langle \Phi _0 |{\hat H}| \Phi_a^r \rangle = F_{ar} = 0, \quad \langle \Phi _a^r |{\hat H}| \Phi_b^s \rangle$$ and because the elements coupling $\Phi_0$ to the singly excited determinants vanish (Brillouin's theorem), when this matrix is diagonalized $E_0=E_{RHF}$.  Thus CIS does not give us any correlation energy.  However, CIS is not completely useless.  The second lowest value of $\bf{E}$, $E_1$, represents the energy of the first excited state, at roughly an RHF quality.

Thus, we need at least single and double excitations (CISD) to get any correlation energy.  However, in general including doubles already results in an $\bf{H}$ matrix that is impractically large for a matrix diagonalization.  CI, i.e. finding the $C_i$ coefficients using the variational principle, is therefore rarely used to compute the correlation energy.

Perhaps the most popular means of finding the $C_i$’s is by perturbation theory, a standard mathematical technique in physics to compute corrections to a reference state (in this case RHF).  Perturbation theory using this reference is called Møller-Plesset perturbation theory, and there are several successively more accurate and more expensive variants: MP2 (which includes some double excitations), MP3 (more double excitations than MP2), and MP4 (single, double, triple, and some quadruple excitations).
Another approach is called coupled cluster, which has a similar hierarchy of methods, such as CCSD (singles and doubles) and CCSD(T) (CCSD plus an estimate of the triples contributions).  In terms of accuracy vs expense, MP2 is the best choice of a cheap correlation method, followed by CCSD and CCSD(T).  For example, MP4 is not too much cheaper than CCSD(T), but the latter is much more accurate.  In fact, for many practical purposes it is rarely necessary to go beyond CCSD(T) in terms of accuracy, provided a triple-zeta or higher basis set is used.  However, CCSD(T) is usually too computationally demanding for molecules with more than 10 non-hydrogen atoms.  In general, the computational expense of these correlated methods scales much worse than RHF with respect to basis set size: MP2 ($K^5$), CCSD ($K^6$), and CCSD(T) ($K^7$).  These methods also require a significant amount of computer memory compared to RHF, which is often the practical limitation of these post-HF methods.  Finally, it should be noted that all these calculations also imply an RHF calculation as the first step.

In conclusion, we now have ways of systematically improving the wave function, and hence the energy, by increasing the number of basis functions ($K$) and the number of excitations ($L$) as shown in Figure 2.

Figure 2. Schematic representation of the increase in accuracy due to using better correlation methods and larger basis sets.

The most important implication of this is that in principle it is possible to check the accuracy of a given level of theory without comparison to experiment!  If going to a better correlation method or a bigger basis set does not change the answer appreciably, then we have a genuine prediction with only the charges and masses of the particles involved as empirical input.  These kinds of calculations are therefore known as ab initio or first-principles calculations.
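A back-of-the-envelope illustration of what these scaling exponents imply in practice. The exponents are the ones quoted above; absolute timings and the cost of the underlying RHF step are ignored.

```python
# Formal scaling exponents quoted in the text (prefactors ignored):
exponents = {"MP2": 5, "CCSD": 6, "CCSD(T)": 7}

# relative cost growth when the basis-set size K doubles
growth = {method: 2**p for method, p in exponents.items()}
assert growth == {"MP2": 32, "CCSD": 64, "CCSD(T)": 128}
```

So doubling the basis makes an MP2 calculation roughly 32 times more expensive but a CCSD(T) calculation roughly 128 times more expensive, which is why basis-set choice dominates the practical cost of the higher methods.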
In practice, different properties will converge at different rates, so it is better to monitor the convergence of the property you are actually interested in rather than the total energy.  For example, energy differences (e.g. between two conformers) converge earlier than the molecular energies.  Furthermore, the molecular structure (bond lengths and angles) tends to converge faster than the energy difference.  So it is common to optimize the geometry at a low level of theory [e.g. RHF/6-31G(d)] followed by an energy computation (a single point energy) at a higher level of theory [e.g. MP2/6-311+G(2d,p)].  This level of theory would be denoted MP2/6-311+G(2d,p)//RHF/6-31G(d).

Finally, the correlation energy is not just a fine-tuning of the RHF result but introduces an important intermolecular force called the dispersion energy.  The dispersion energy (also known as the induced dipole-induced dipole interaction) is a result of the simultaneous excitation of at least two electrons and is not accounted for in the RHF energy.  For example, the stacked orientation of base pairs in DNA is largely a result of dispersion interactions and cannot be predicted using RHF.

Monday, August 3, 2015

Computational Chemistry Highlights: July issue

The July issue of Computational Chemistry Highlights is out.

Simulations of Chemical Reactions with the Frozen Domain Formulation of the Fragment Molecular Orbital Method
The Many-Electron Problem

Within the Born-Oppenheimer approximation[4], the time-independent Schrödinger equation for a fully interacting many-electron system is
$$\left[ -\frac{\hbar^2}{2m}\sum_{i} \nabla_i^2 + \sum_{i<j} \frac{e^2}{|\mathbf{r}_i - \mathbf{r}_j|} - \sum_{i,I} \frac{Z_I e^2}{|\mathbf{r}_i - \mathbf{R}_I|} \right] \Psi = E\,\Psi,$$
where $\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N)$ is the N-electron wavefunction, $\mathbf{r}_i$ are the electron positions, $\mathbf{R}_I$ are the positions of the ions and $Z_I$ are the ionic charges. This equation is impossible to solve exactly so approximate solutions must be sought. One of the main challenges of condensed matter physics is to try to find good, workable approximations that contain the essence of the physics involved in a particular problem and to obtain the most accurate solutions possible. For the rest of this thesis all equations will be written in atomic units, $\hbar = m = e = 1$.

Andrew Williamson, Tue Nov 19 17:11:34 GMT 1996
I remember from introductory Quantum Mechanics that the hydrogen atom is one of those systems that we can solve without too many (embarrassing) approximations. After a number of postulates, QM succeeds at giving the right numbers for the energy levels, which is very good news. We got rid of the orbit that the electron was supposed to follow in a classical way (Rutherford-Bohr), and we got orbitals, which are the probability distribution of finding the electron in space. So this tiny charged particle doesn't emit radiation, notwithstanding its "accelerated motion" (Larmor), which is precisely what happens in the real world. I know that certain "classic questions" are pointless in the realm of QM, but giving no answers makes people ask the same questions over and over.

• If the electron doesn't follow a classic orbit, what kind of alternative "motion" can we imagine?
• Is it logical that while the electron is around the nucleus it has to move in some way or another?
• Is it correct to describe the electron's motion as being in different places around the nucleus at different instants, in a random way?

Related: and links therein. – Qmechanic Apr 9 '12 at 16:08
Related: Planetary model of atom still valid? – voix Apr 9 '12 at 16:35
I'm more curious how the electron moves without producing EM radiation. But someone will tell me that it doesn't have a lower ground state to decay to. I know... but it's still a moving charge. I think a more satisfactory model would be that coherent states aren't "moving" in the classical sense, because the concept of moving is a limit-case approximation of QM to begin with. – Alan Rominger Apr 10 '12 at 13:15

The problem is that you're thinking of the electron as a particle. Questions like "what orbit does it follow" only make sense if the electron is a particle that we can follow. But the electron isn't a particle, and it isn't a wave either.
Our current best description is that it's an excitation in a quantum field (philosophers may argue about what this really means; the rest of us have to get on with life). An electron can interact with its environment in ways that make it look like a particle (e.g., a spot on a photographic plate) or in ways that make it look like a wave (e.g., the double slits experiment) but it's the interaction that is particle-like or wave-like, not the electron. If we stick to the Schrödinger equation, which gives a good description of the hydrogen atom, then this gives us a wavefunction that describes the electron. The ground state has momentum zero, so the electron doesn't move at all in any classical sense. Excited states have a non-zero angular momentum, but you shouldn't think of this as a point-like object spinning around the atom. The angular momentum is a property of the wavefunction as a whole and isn't concentrated at any particular spot.

Why do you say "momentum" zero? At n=1, l=0, m=0 it is the angular momentum that is zero. There is still energy in the orbital which I think can be interpreted as momentum in some manner. – anna v Apr 9 '12 at 16:10
I'm aware that some classical "prejudice" has to be dropped; but given the excited states, even if we don't have a trajectory over time for the electron, can we conjecture a kind of non-accelerated, non-classical (weird) "motion"? Or is the wave-particle duality unbalanced towards waves? – Marco De Lellis Apr 9 '12 at 20:33
You might be helped by reading carefully the wikipedia article on the hydrogen atom, particularly the figures. The electron described in the orbital has not only a specific energy but also momentum and angular momentum, though it is only the operators of energy, angular momentum and spin that give the eigenvalues for n, l and m. So what is random is not the electron per se but the probability of finding it when you try to measure it in some way.
It is moving with 1/137 of the velocity of light, according to the linked article, as given in the pictures of the orbitals. Such a fast-moving particle will look like a cloud anyway, even if this were possible classically. Yes, we just cannot pin it down; think of the uncertainty principle, organized by a solution to Schrödinger's equation. No, not random: it is organized by the probabilities of the orbital it happens to be in.

I have read the linked article, and thanks for your answer. However something sounds uncomfortable. The electron is moving at 1/137 c, so it shows a classical property, speed. If we consider the wave part from the wave-particle duality, we can imagine this wave that travels at that speed in space around the nucleus, drawing a weird pattern (in places where the orbital is non-zero). However no traces of this moving wave are found in the Schrödinger solutions (the wave functions!) for the hydrogen atom, why? – Marco De Lellis Apr 9 '12 at 21:19
I believe that it is the solutions of the Schrödinger equation that predict those patterns. Why do you say no traces? The probability functions are highly oscillatory in theta and phi except the n=0, m=0 one. These are probability functions. One can only see waves by their interference after all. Or think of them as "standing waves". – anna v Apr 10 '12 at 4:08
"Standing waves": this is really interesting. Could the electron be described as a kind of stationary wave? And a wave of what? Does this wave describe only the probability to find it, or something more deep, like a wave of its properties like mass, charge, ...? Thanks for your patience with me. – Marco De Lellis Apr 10 '12 at 5:45
Just the probability of finding it. Once in a potential well the electron itself is in a virtual state.
Virtual means that it is not possible to measure mass or charge except collectively with the atom, via energy and charge conservation. One does not have a moment-by-moment snapshot of the electron, or of the nucleus at that. Only of the probability of what you will find if you take a snapshot. – anna v Apr 10 '12 at 6:08
This is a difficult concept to swallow when one's intuition comes from classical physics, which is our everyday experience, but it is true because it has been experimentally checked in very many cases, not just the hydrogen atom. The uncertainty principle and the probabilistic nature of nature are the cornerstone of modern physics. Not random, there are envelopes to the uncertainty, but probabilistic. – anna v Apr 10 '12 at 6:10

That probably depends on what exactly you call motion, but I would highly recommend an excellent book, And Yet It Moves by Mark P. Silverman, and chapter #3 in particular. If you replace an electron (which is a stable particle, that is a particle without age and individual history) in a simple atom with a negative muon (which decays quickly, its lifetime being some 2 microseconds in its rest frame), you would expect that the measured lifetime (in the atom or lab rest frame) will be longer if the muon moves at relativistic velocities due to time dilation, exactly as experiments confirm.

Well, this is quite interesting. I knew of "older" muons from cosmic rays, but if I understand correctly, they made a "setup" with a muon moving at relativistic speed near some nucleus. To experience a longer life, it has to move in some semi-classical way, is that correct? – Marco De Lellis Apr 10 '12 at 16:47
@Marco Yes, you are right. Atoms where a muon replaces the electron are prepared and the muon lifetime measured. Its length corresponds to the expected semi-classical velocity of the muon in such an exotic atom and special relativistic time dilation. – Leos Ondra Apr 10 '12 at 20:25

Think of an electron as a non-point particle.
In a hydrogen atom it is "smeared" around the proton. Its total momentum is zero – it is neither moving (in total) nor accelerating – hence in a classical limit it does not emit radiation. If an electron in an atom is a "cloud" rather than a point, it is at different points at the same time. That means that there is a non-zero distribution of "electron density" smeared around the proton. An electron is not "moving" as a whole, but we can say that "parts of the cloud" are moving, since they carry non-zero momentum resulting in total angular momentum. This is a consequence of the fact that integration of the electron's momentum density over a limited volume in space is non-zero.

Smeared electron sounds good. A cloud that stays in the same place over time, only changing shape, not a fast-moving electron drawing a cloud-like shape. The classical speed goes away, so we don't care anymore. – Marco De Lellis Dec 17 '14 at 11:42
Anyway, as Leos Ondra has pointed out, yet it moves in a semi-classical way, since the relativistic effect accounts for the muons' longer life (delayed decay). It moves faster in space, and slower in time, basic relativity. We have to cope with this classical behavior. Has anyone predicted the longer muon life using the Schrödinger equation? Why should the electron be at rest while the muon is not? The state equation is a good model, like an ideal gas, but has to be adjusted to work in real life. – Marco De Lellis Dec 17 '14 at 13:52
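As a footnote to the answers above, the "stationary cloud" picture is easy to check numerically. This sketch uses the standard textbook 1s radial density in atomic units (Bohr radius = 1); it is not derived from anything in the thread itself.

```python
import numpy as np

# Hydrogen 1s radial probability density P(r) ~ r^2 e^(-2r) in atomic units:
# a stationary distribution, not a trajectory.
r = np.linspace(1e-4, 10.0, 100001)
P = r**2 * np.exp(-2.0 * r)

r_peak = r[np.argmax(P)]           # most probable radius
mean_r = (r * P).sum() / P.sum()   # mean radius <r> (Riemann sum; dr cancels)

assert abs(r_peak - 1.0) < 1e-3    # density peaks at the Bohr radius
assert abs(mean_r - 1.5) < 1e-3    # <r> = 3/2 a.u. for the 1s state
```

The distribution does not change in time for a stationary state, which is the precise sense in which the ground-state electron "doesn't move" despite having nonzero kinetic energy.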
Two potential quark models for double heavy baryons
Research output: Contribution to journal › Article
2 Citations (Scopus)
Baryons containing two heavy quarks (QQ′q) are treated in the Born-Oppenheimer approximation. Two non-relativistic potential models are proposed, in which the Schrödinger equation admits a separation of variables in prolate and oblate spheroidal coordinates, respectively. In the first model, the potential is equal to the sum of the Coulomb potentials of the two heavy quarks, separated from each other by a distance R, and a linear confinement potential. In the second model, the center-distance parameter R is assumed to be purely imaginary. In this case, the potential is defined by a two-sheeted mapping with singularities concentrated on a circle rather than at separate points. Thus, in the first model the diquark appears as a segment, and in the second as a circle. In this paper we calculate the mass spectrum of double heavy baryons in both models and compare it with previous results.
Original language: English
Pages (from-to): 100014_1-6
Journal: AIP Conference Proceedings
Status: Published - 2016
Date of Award
Document Type
Open Access Dissertation
Chemistry and Biochemistry, College of Arts and Sciences
First Advisor
Sophya Garashchuk

Chemical dynamics, in principle, should be understood by solving the time-dependent Schrödinger equation for a molecular system, describing the motion of the nuclei and electrons. However, the computational effort to solve this partial second-order differential equation scales exponentially with the system size, which prevents us from getting exact numerical solutions for systems larger than 4-5 atoms. Thus, approximations simplifying the picture are necessary. The so-called Born-Oppenheimer approximation, separating the motion of the electrons and nuclei, is the central one: the solution to the electronic Schrödinger equation defines the potential energy surface on which the nuclear motion unfolds, and there are standard quantum chemistry software packages for solving the electronic Schrödinger equation. For the nuclear Schrödinger equation, however, there are no widely applicable quantum-mechanical approaches, and most simulations are performed using classical Newtonian mechanics, which is often adequate due to large nuclear masses. However, the nuclear quantum effects are significant for chemical processes involving light nuclei at low energies, and including these effects into simulation, even approximately, is highly desirable. In this dissertation, an approximate methodology of including quantum-mechanical effects within the quantum trajectory or the de Broglie-Bohm formulation of the Schrödinger equation is developed. Use of the trajectory framework makes the approach scalable to hundreds of degrees of freedom. The methodology is applied to study high-dimensional systems (solid He4 and others) relevant to chemistry.

Included in Chemistry Commons
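A minimal 1D sketch of the quantum-trajectory (de Broglie-Bohm) idea mentioned in the abstract, using the one case with a simple closed form: a free Gaussian wavepacket with ħ = m = 1. The width σ(t) and the trajectory scaling x(t) = x(0)σ(t)/σ(0) are standard textbook results, not taken from the dissertation itself.

```python
import numpy as np

# Assumed units hbar = m = 1: for a free Gaussian wavepacket with
# |psi|^2 of width sigma(t) = sigma0 * sqrt(1 + (t / (2 sigma0^2))^2),
# the Bohmian velocity field is v(x, t) = x * sigma'(t) / sigma(t),
# so each trajectory simply scales with the spreading width.
sigma0 = 1.0

def sigma(t):
    return sigma0 * np.sqrt(1.0 + (t / (2.0 * sigma0**2))**2)

x0, dt, T = 0.7, 1e-4, 2.0
x = x0
for k in range(int(T / dt)):
    t = k * dt
    dsig = (sigma(t + dt) - sigma(t)) / dt   # finite-difference sigma'(t)
    x += dt * x * dsig / sigma(t)            # Euler step along the velocity field

# the integrated trajectory matches the exact scaling solution x0 * sigma(T) / sigma0
assert abs(x - x0 * sigma(T) / sigma0) < 1e-3
```

The same trajectory bookkeeping, with the velocity field obtained from the evolving wavefunction rather than a closed form, is the scalable ingredient the trajectory framework exploits.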
Nanostructural problem-solvers December 2009 Filed under: Lawrence Berkeley Computation ferrets out emergent behaviors of novel materials built from tiny blocks. The preeminent physicist-futurist Richard Feynman famously declared in a 1959 address to the American Physical Society that “there’s plenty of room at the bottom.” He then invited them to enter the strange new world of nanoscale materials, none of which had actually been invented, except in Feynman’s fantastical imagination. It took another generation of scientists before nanotechnology emerged, but Feynman’s assertion still rings true. There’s plenty of room at the nanoscale and scientists at Lawrence Berkeley National Laboratory (LBNL) in California are at the forefront in constructing new materials there. Paul Alivisatos, director of LBNL’s Materials Science Division, is a world leader in nanostructures and inventor of many technologies using quantum dots — special kinds of semiconductor nanocrystals. Quantum dots, which are one ten-millionth of an inch in diameter, fluoresce brightly, are exceedingly stable and don’t interfere with biological processes because they are made of inert minerals. Alivisatos and his colleagues have constructed dozens of variations in which the fluorescent color changes with the dot’s size. Today life-science researchers use quantum dots as markers, allowing them to visualize with extreme accuracy individual genes, proteins and other small molecules inside living cells and fulfilling a prediction Feynman made in his famous lecture. LBNL physicist Lin-Wang Wang likes to say that some day we will view the 21st century as the “nanostructure age.” Wang and LBNL colleague Andrew Canning, a computational physicist who helped pioneer the application of parallel computing to material science, want to use computational methods to understand the emergent behaviors of novel materials built from exceedingly small blocks. 
“There are a lot of challenges and there are still many mysteries to be solved,” Wang says. “For example, we still don’t quite understand the dynamics of the electron inside a quantum dot or a quantum rod. There is a lot of surface area in a quantum structure, much more than the same material in bulk. So how the surface is coupled with the interior states and how this affects the nanostructure properties is not well understood.”

The research team is not starting from scratch, of course. There are established equations that predict the behavior of the electron wave function in these materials. The devil lies in the size of the problem. “In terms of computation the nanostructure is challenging. For example, if you have a bulk material the crystal structure is a very small unit cell, just a few atoms, that repeats itself many, many times,” Wang says. “So computationally, you can treat bulk structures by calculating one unit cell — you only deal with a few atoms. With only a few atoms, you can represent the whole, much larger structure of the material. However, for a quantum dot or a quantum wire you have to treat the whole system together. These systems usually contain a few thousand to tens of thousands of atoms, and that makes the computation challenging.”

To solve a problem containing thousands of atoms requires new algorithms that handle the physics differently without compromising accuracy, and parallel computing on a massive scale. Says Canning, “We know we need to solve the Schrödinger equation for these problems, but to do so fully is exceedingly computationally expensive. What we did was make advances to approximate, solve the problem, and still get the physics right.” Canning collaborated with Steven Louie’s group at the University of California-Berkeley to improve the Parallel Total Energy Code (Paratec), an ab initio, quantum-mechanical, total energy program.
The program runs on Franklin, the Cray XT4 at LBNL’s National Energy Research Scientific Computing Center (NERSC). The massively parallel system has 9,660 compute nodes, but is due to receive an upgrade, increasing its processing capability to a theoretical peak of about 360 teraflops. “Paratec enables us to calculate thousand-atom nanosystems,” Canning says. “The calculation is fast and scales to the cube of the system, rather than exponentially, as a true solution of the many-body Schrödinger equation.”

Besides massive parallelization of the codes, the researchers also developed many new algorithms for nanostructure calculations. For example, Wang devised a linear scaling method, called the folded spectrum method, for use on large-scale electronic structure calculations. The conventional methods in Paratec must calculate thousands of electron wave functions, but the Escan code uses the folded spectrum method to calculate only a few states near the nanostructure energy band gap. That means the computation scales linearly with the size of the problem — a critical requirement for efficient nanoscience computation. Wang and Canning recently worked with Osni Marques at LBNL and Jack Dongarra’s group at the University of Tennessee, Knoxville, to reinvestigate and significantly improve the Escan code by adding more advanced algorithms. Wang and his colleagues also have recently invented a linear scaling three-dimensional fragment (LS3DF) method, which can be hundreds of times faster than a conventional method in calculating the total energy of a given nanostructure. The code has run at 107 teraflops on 137,072 processors of Intrepid, Argonne National Laboratory’s IBM Blue Gene/P. The researchers have in essence designed a new algorithm to solve an existing physical problem with petascale computation. Wang says the LS3DF program is designed for materials science applications such as studying material defects, metal alloys and large organic molecules.
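The folded spectrum trick described above can be illustrated on a toy matrix: the eigenstates of (H − ε_ref)² with the smallest eigenvalues are exactly the eigenstates of H whose energies lie closest to ε_ref, which is what lets a solver target only the states near the band gap. The "Hamiltonian" below is a small synthetic stand-in with a known spectrum, not a real nanostructure calculation.

```python
import numpy as np

# Build a toy symmetric "Hamiltonian" with known eigenvalues 0.0, 0.1, ..., 19.9
rng = np.random.default_rng(0)
n = 200
levels = 0.1 * np.arange(n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis
H = Q @ np.diag(levels) @ Q.T

eps_ref = 5.03                                     # assumed reference energy "in the gap"
shifted = H - eps_ref * np.eye(n)
F = shifted @ shifted                              # folded operator (H - eps_ref)^2

# The lowest eigenvectors of F are the eigenstates of H nearest eps_ref;
# their Rayleigh quotients in H recover the corresponding energies.
_, vF = np.linalg.eigh(F)
folded = sorted(vF[:, i] @ H @ vF[:, i] for i in range(3))
assert np.allclose(folded, [4.9, 5.0, 5.1], atol=1e-6)
```

In a real code the folded operator would never be formed as a dense matrix; only its action on a trial wavefunction is needed, which is what keeps the cost proportional to the handful of targeted states rather than to the thousands of states below them.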
Within a nanostructure, the physicists are interested mainly in the location and energy level of electrons in the system because that determines the properties of a nanomaterial. For example, Wang says, electrons within a quantum rod or dot can occupy a series of quantum energy states or levels as they orbit the atomic nucleus and interact with each other. The color emitted by the material typically depends on these energy states. Specifically, the scientists focus on two quantum energy levels: the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), which the Escan code can calculate. The energy difference between these two levels determines the material’s color. The color also changes with the quantum dot’s size, providing one way to engineer its properties. In principle, knowing the electronic properties of a given material lets the researchers predict how a new nanostructure will behave before actually spending the time and money to make it. It’s a potentially less expensive way to experiment with new nanomaterials, Wang says.

This article originally appeared in Deixis: The CSGF Annual, 2008-09.

About the Author
Karyn Hede is news editor of the Nature Publishing Group journal Genetics in Medicine and a correspondent for the Journal of the National Cancer Institute. Her freelance writing has appeared in Science, New Scientist, Technology Review and elsewhere. She teaches scientific writing at the University of North Carolina, Chapel Hill, where she earned advanced degrees in journalism and biology.
Monday, July 07, 2014

Droplets and pilot waves vs quantum mechanics

I've been overwhelmed by the sheer amount of idiocy about quantum mechanics that we may encounter in the would-be scientific mainstream media. A new wave of nonsense claiming that someone overthrew quantum mechanics is appearing on a daily basis. Please, don't send me links to this stuff, there has been just way too much of this worthless trash that is getting worse and worse. The first individuals who would start campaigns against theoretical physics a decade ago – various Shmoits – should have been given a proper thrashing at that time so that they wouldn't climb out of the sewerage system again. We have failed miserably and just like I predicted, similar anti-science campaigns are increasingly strong, increasingly stupid, and attacking increasingly fundamental (and increasingly elementary) layers of modern science. It was "just" string theory 8 years ago, now it is quantum mechanics. By 2020, heliocentrism is sure to be under attack, too.

About 8 people sent me links to this story about the droplets "proving" that quantum mechanics isn't based on probabilities and is governed by something like the pilot wave theory:

What If There's a Way to Explain Quantum Physics Without the Probabilistic Weirdness? (Colin Schultz)
Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? (Natalie Wolchover, Quanta Magazine)

The story is that Bush et al. at MIT did some playful experiments with droplets and the conclusion is supposed to be that this strengthens the case for de Broglie's pilot wave theory. All of Nature is governed by mathematics. So we encounter mathematical objects and equations everywhere. Even some types of ordinary or partial differential equations are recycled hundreds of times, in very diverse situations.
A person who hasn't been sleeping since the time when she was an embryo must have noticed this omnipresence of mathematics and is no longer shocked by it. In fact, such a sane person has improved her resolution and precision a little bit so she is able to see differences. One may surely design some objects that obey equations not too mathematically different from those that Louis de Broglie (and 25 years later, if we just pretend that plagiarism is OK, David Bohm) proposed to replace proper quantum mechanics. The waves in the model may propagate similarly. In fact, this "modeling" and "visualization" has a rather long history: George Francis FitzGerald constructed a working model of the "luminiferous aether" emulating Maxwell's equations (partly inspired by James Clerk Maxwell's own engineering sketches of the gadget) out of wheels and gears. Mechanics flourished in the 19th century. These successes couldn't change anything about the fact that the luminiferous aether doesn't exist.

One would think that people would learn some lesson. On the contrary, a vast majority of the people learn nothing at all and they are making much stupider mistakes than the people in the 19th century. The problem is that there are also huge differences that you shouldn't overlook unless your brain is completely messed up. While the wheels-and-gears model of the aether pretty much did what it was supposed to do, there are differences both in the physical interpretation and in the mathematical details of the two situations here – droplets and the wave function. You might say that the former (physical, interpretational, conceptual differences) are more profound but once you learn to think quantitatively, you actually see that the latter (the mathematical differences) are equally profound and, in fact, equivalent.

The physical, conceptual differences between any quantities describing droplets on one side and the wave function on the other side are clear.
The former are observable – you may actually measure what the shape of the droplet looks like; you can't measure the wave function by any apparatus, at least not in a single repetition of the experiment. The former has an objective interpretation; the latter has a probabilistic interpretation, and so on. The wave function just encodes all the probability distributions for actual observables – but the wave function isn't and can't be one of them.

There are also important enough mathematical differences. In Schrödinger's picture (and even in the misguided equation controlling the "pilot wave" proposed to supersede quantum mechanics), the wave function obeys an exactly linear equation\[ i\hbar\frac{\partial}{\partial t} \ket\psi = H \ket\psi. \] It is very important that all such equations are exactly linear and the actions of observables and operators expressing transformations are exactly linear. In combination with Born's rule, the exact linearity is required by the laws of "pure logic" expressed using the probability calculus, e.g. for the fact that\[ P(A\text{ or }B) = P(A) + P(B) - P(A\text{ and }B). \] Note that this equation is linear in the probabilities and it has to be so for a simple reason. The probabilities are just ratios of repetitions of an event in which a condition is satisfied. The binary operators "OR" and/or "AND" correspond to the unions and intersections of sets of these repetitions of events and the equation above is nothing else than the equation dictating the number of elements in a union of two sets (divided by the total number of repetitions of the event).

You just can't modify these rules, not even by a tiny amount. All the experiments we have ever made are consistent with the exactly linear evolution of the wave function and the exact linearity of all the operators encoding observables – any observables. But once again, you don't really need to make experiments. This is a matter of elementary consistency of quantum mechanics.
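The contrast between the linearly evolving wave function and the statistics of individual detection events can be visualized with a short numerical sketch (my own illustration, not from the post; the fringe spacing, envelope width and sample size are arbitrary toy values). Single "dots" are drawn from a Born-rule density, and the interference pattern exists only in the ensemble, never in any individual event:

```python
import numpy as np

# Toy far-field double-slit detection density: cos^2 fringes under a
# Gaussian single-slit envelope (all parameters are arbitrary toy values).
def born_density(x, fringe_k=3.0, sigma=3.0):
    return np.cos(fringe_k * x) ** 2 * np.exp(-x**2 / (2 * sigma**2))

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 4001)
p = born_density(x)
p /= p.sum()  # normalize to a probability distribution over the grid

# Each "particle" is a single random dot; the wave function itself is
# never seen directly -- only the statistics of many dots reveal fringes.
dots = rng.choice(x, size=100_000, p=p)
counts, edges = np.histogram(dots, bins=100, range=(-10, 10))
```

A single entry of `dots` tells you almost nothing; the fringes (maxima near the zeros of the sine of the phase, minima where the cosine vanishes) appear only in `counts`, which is exactly the sense in which the double slit experiment "creates individual dots".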
On the other hand, the shape of droplets is encoded in observables, e.g. in functions \(x(t)\) of time or in the fields \(\varphi(x,y,z,t)\) etc. Classically, they are \(c\)-number-valued functions of time (or spacetime) coordinates. Quantum mechanically, these are observables – i.e. linear operators on the Hilbert space. If you look for the most direct quantum counterparts, the classical equations of motion are most straightforwardly translated to the Heisenberg equations of motion for the operators in the Heisenberg picture of quantum mechanics. And this evolution of the classical quantities or the quantum operators is pretty much never linear in the operators. Linear equations of motion would mean that the system is non-interacting and completely uninteresting. Using the arguments based on naturalness, or the Gell-Mann totalitarian principle, if you wish, pretty much every higher-order (nonlinear) term may appear and will appear in the equations of motion. So even if you forget about the completely different interpretations of the wave function and the shape of droplets, there is a difference (well, many differences, but I chose this one) at the purely mathematical level. The equations governing the evolution of the wave function must be exactly linear and there can't be any debate about it because it's a matter of consistency. The equations governing the evolution of the shape of droplets are almost certainly nonlinear because there is no general constraint that would ban the nonlinearity, and they are therefore 99.999...% likely to occur. You may find situations and approximations in which the nonlinearities are small or the nonlinear equations emulate some linear ones for other reasons, but fundamentally they are very different. 
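The contrast can be written down explicitly (a standard textbook illustration, not taken from the post; the quartic potential is my arbitrary choice). Observables evolve by the Heisenberg equation\[ \frac{dA}{dt} = \frac{i}{\hbar}\,[H, A], \] and for, say, a quartic oscillator \(H = \frac{p^2}{2m} + \frac{\lambda x^4}{4}\), this gives\[ \frac{dx}{dt} = \frac{p}{m}, \qquad \frac{dp}{dt} = -\lambda x^3, \] which is nonlinear in the operators, while the Schrödinger equation for \(\ket\psi\) remains exactly linear no matter what \(H\) is.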
So these things may be similar at the level of containing some remotely similar differential equations but as soon as your resolution gets better than mine is after 10 pints of beer, you should be able to see that there are profound differences both of mathematical and physical nature.

A child may tell you that a car moves because an animal is hidden inside it. Well, sometimes it may be true but what you mean is that there is an elephant inside the motor who is pushing the wheels using its trunk. That's cute – well, the child is perhaps cute which is why everything she likes is cute – but for an adult person, it's just stupid. Even if a child is satisfied with the explanation, it clearly doesn't work. It cannot satisfy a person who is able to ask why and creatively test the ideas that are being pushed into her ears. The motors are just not being built out of elephants.

And the situation with the wave function is completely analogous. It demonstrably and obviously has nothing to do with the evolution of droplets of any kind. The probabilistic character of the wave function isn't a topic for deep philosophical debates or research. One can make elementary and trivial observations to directly and instantly see that the wave function has to be interpreted probabilistically, otherwise it has nothing to do with Nature! The probabilistic character of the wave function – so different from the evolving droplets – is an empirical fact that is trivial to prove by some of the simplest and fastest observations we can make. Just see that the double slit experiment creates individual dots while macroscopic droplets don't. In fact, the wave function has nothing to do with the evolution of any dynamical variables – observables – in any physical system in the world because the wave function is – importantly enough – not an observable.

There are way too many things in the articles that drive me up the wall. While a few physicists – e.g.
Anthony Leggett – are allowed to mention that this whole droplet-quantum "work" is worthless šit, there are many others who positively hype it, including some favorite physicists of mine. Their affiliation often happens to be the same as that of Mr Bush. But a bias that is this obvious is just bad. Note that I haven't mentioned the name of the particular physicist who disappointed me, in order to keep the name confidential, but to promote this kind of šit just because they do it at MIT is outrageous, Frank! ;-)

Even if I subtract the pathetic "research" and the disappointing support of it by some well-known names, there are just so many things that are so insultingly stupid, manipulative, and contradicting the very essence of the scientific method. Wolchover's title is Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? It's terrible how she talks about "we". She surely doesn't belong among the physicists who have something sensible to say about these matters – she is just an inkspiller. She has no clue what quantum mechanics actually is, and she (just like 99+ percent of mankind) has never had such a clue – the whole time.

But even physicists who have been talking about these matters are in no way "we". Clearly, quantum mechanics was too hard and too new for some physicists from the beginning – including a revolutionary named Einstein – so these people clearly didn't belong among the "we" of the people who properly understood quantum mechanics. But even the people who effectively understood quantum mechanics would find some differences in their "interpretation" of quantum mechanics – and the most reasonable ones would point out that the very phrase "interpretation of quantum mechanics" is silly. Quantum mechanics is the new theory so once we describe its rules and axioms, we know them and there's nothing to interpret.
On the contrary, we may apply these rules to some situations and derive certain particular insights, for example the classical limit. In this sense, it's the "classical physics" that may be interpreted within quantum mechanics but quantum mechanics doesn't need (and doesn't allow) any additional "interpretations". You either understand the theory or you don't.

More generally, science just doesn't work in this collectivist way, and it will never work like that. So if some hopeless morons decide that the probability of the pilot wave theory has increased because they have played with droplets like retarded 6-year-old children, it will still be true that "we" continue to know that the paradigm is as wrong as it was in 1927.

I wanted to write a rant so that I may finally close the two insultingly stupid pages about the "droplets of quantum mechanics". I hope that now I will feel a bit lighter and please don't bother me with this junk again. This blog post won't be proofread again because this whole theme is an amazing waste of time.

snail feedback (53): reader Gene Day said... Yes, it is a BIG waste of time :-( reader Holger said... Well, obviously QM is commonly regarded as incomplete because it simply doesn't tell us what we need to know about nature in order to come to a deeper understanding about its basic mechanisms. "Probability" is not a mechanism, it is just an outcome, a consequence of something we don't yet understand. Take the case of a free neutron: A neutron inside a lab is objectively there, it can be "measured" and manipulated, until it suddenly decays. Now, why did that particular neutron decay after, say, 10 minutes, while another sample decays after 15 minutes? QM yields the probabilities for this decay process, but it does not predict when exactly a selected neutron is going to decay. This is a shortcoming that has to be overcome.
A physicist always has to ask why; he should never be satisfied with "a set of rules" which yield some probabilities. This would not be physics, just bookkeeping. We need to go deeper into these mechanisms and thereby understand what exactly makes that particular neutron decay at that particular time. QM is only an intermediate step toward that goal. reader davideisenstadt said... thanks lubos, for letting me know what you think. sorry to have burdened you with BS. mea culpa reader Luboš Motl said... No problem, David, if you didn't do it, I would still have gotten 7 copies of that. ;-) reader AJ said... I saw these dancing droplets in an episode of Through the Wormhole and wondered if your response would be "Crackpottery!". You didn't disappoint :) I imagine that many concepts presented in this series fall into the same category, but I still find it entertaining. reader Dilaton said... QM is QFT in 1 time and 0 space dimensions. reader Federico Barocci said... Hi Lubos, what do you think about the following. John Bell states in his book "Speakable and Unspeakable in Quantum Mechanics", Chapter 14, page 115, that "[That] the guiding wave, in the general case, propagates not in ordinary three-space but in a multidimensional configuration space is the origin of the notorious 'nonlocality' of quantum mechanics. It is a merit of the de Broglie-Bohm version to bring this out so explicitly that it cannot be ignored." If I understand him well, Bell also argues that we should get rid of the idea of "particles" or droplets described, as you say, by almost certainly nonlinear equations. The only valid way to speak about "particles" is by using their multi-dimensional wavefunction ground state. Then Bell refers to Ghirardi, Rimini and Weber: "The idea is that while a wavefunction normally evolves according to the Schrödinger equation, from time to time it makes a jump. Yes, a jump!"
Now, these "jumps" are "reduced" or "collapsed" wavefunctions that we observe as "particles" in ordinary 3d space. And further on page 209: "[Schrödinger] would have liked the complete absence of particles from the theory, and yet the emergence of 'particle tracks', and more generally of the 'particularity' of the world, on the macroscopic level." Particles, in the end, are the smallest concentrations of energy incorporated into the wave; thus, they are themselves collapsed waves described by the exactly linear evolution of the multi-dimensional wavefunction. I thought that the experiments carried out by Couder et al. could be nice approximations of the underlying ground state. reader Dilaton said... It is not only that more and more elementary established knowledge gets attacked in the course of time. Also, to me it seems that at first the nonsense appeared "only" in the popular "science" channels (which made physicists wrongly ignore it), but in the course of time it also started to appear at places one should be able to trust to be free of crap, such as "peer-reviewed" journals for example. reader Shintaro said... I think the problem at hand is that you're assuming there actually is a reason the neutron decays at some particular time, and that the electron and anti-neutrino have to go off in particular directions when this happens. And there's no reason that there actually should be such a reason. Maybe Nature really is random and there are uncaused events. Why shouldn't there be? If we had good reason to suspect that apparently identical neutrons were actually different, and that there were an underlying cause for different neutrons decaying with slightly different lifetimes, then it'd be a fair question to ask what these underlying differences were, and how they affected different decay times. But the fact of the matter is we have no reason to think any neutron is different than any other neutron.
(And saying "well they must be different because they decay after different lengths of time" is just question-begging; it's the very claim that different outcomes require different initial conditions that's being contested, so you can't use your claim as a premise.) reader W.A. Zajc said... I can't resist further beating this dead horse. If I correctly understand the experimental set-up, the droplets exhibit the "interference" pattern only after some period of time required for the putative pilot waves to establish the two-slit interference pattern. This is fundamentally different from the quantum mechanical case, where the interference pattern develops even when the flux of particles is so low as to guarantee that there is (at most) one particle in the apparatus at any given time. I therefore assume that if one starts with a quiescent apparatus, introduces one droplet, and then waits until the "pilot wave" has damped out before introducing the next droplet, you will see a mish-mash rather than a simulation of an interference pattern. To argue that this experiment has any implications for QM or any ability to improve our understanding of how QM works is, for this reason and for all the other reasons Lubos has delineated, profoundly wrong-headed. reader tomandersen said... That's not how Couder's experiments work. The pilot wave never dampens out, as there is energy coming into the system to keep the model alive. (The whole plate vibrates near the Faraday instability limit.) You thus get interference with one particle at a time. reader Gene Day said... There is no deeper understanding and there never will be. God does, indeed, throw dice. Get used to it. I happen to be a physicist, Holger, and it is preposterous for you to tell me what I should or should not be satisfied with. Until you free your mind of deterministic thinking you will be forever lost in the woods. Physics is precisely a set of rules that yield probabilities.
If you don't like it you can follow Feynman's advice and go to another universe. This one is quantum mechanical to its core. reader W.A. Zajc said... Ummm… that's exactly my point. Let me try the following: when in high school, I cut class one day to go see the then new John Hancock tower in Chicago (this was an epic cut, as I lived about 150 km from Chicago). From the observation deck on the 92nd floor, I could see waves on Lake Michigan impinging on two openings (slits) in the breakwater that defined the Chicago harbor. There was a beautiful two-slit interference pattern. Now if on shore I recorded the location of every bit of flotsam and jetsam (aka piece of crap) and found it correlated with the interference pattern of the waves, would that provide new insights into quantum mechanics? The only difference I can see in Couder's experiments is that the droplets (aka pieces of crap) are auto-generated by some process (undoubtedly non-linear, as Lubos has emphasized) that is connected to the "pilot waves". But there is nothing fundamental going on here... reader Hacienda said... A: If QM were as easy as the pilot wave theory is making it out to be, then Niels Bohr is an idiot. B: Niels Bohr was not an idiot. Therefore pilot wave theory is not QM. reader jon said... The pilot wave is an attempt to get around superposition. But how can we be sure that even though there is superposition, there isn't still a deterministic interaction with some otherwise undetected deterministic process that triggers the collapse? That would be meaningless speculation if there were no way in principle to observe it, but there could be ways. To probe space for such a process you would need an extremely large number of interactions in a small volume, but perhaps some observed anomalies could be interpreted as a result of such a model. reader Peter F. said... I get your well-put point - to me put well by crucially including the word "sometimes".
Personally, I would have nothing to say/nothing to 'contribute' (am referring to a 'contribution' that tends to fall on deaf ears or be overwhelmingly refused, rejected, or recoiled from) if I were not focusing and betting on the tiny chance that aiming to explain a certain emergent evolution related aspect of What Is going on with words might be worthwhile. Am referring to an aspect of 'what there is to recognize', one which is recognized with optimally percEPTive potency (not adaptive potency) utterly rarely because of how 'the law of quantum-level produced probabilities played out' {and will as a matter of principle play out in any universe similar enough to ours - i.e. ~ any that forges a phylogeny of fauna} in the form of a sub-principle of Natural Selection that is not much less simple and heuristic than Darwin's super-principle. reader Luboš Motl said... Dear Federico, our world is relativistic and quantum particles have to be described by the so-called quantum field theory - or anything that is a "specialized extension of it", I mean string theory. And in quantum field theory or in those, the statements you quote are easily seen to be wrong. All of them. The particle-position basis isn't even well-defined in general and it is extremely general and non-fundamental. And particles are in no way "the most compressed quantum waves" one may have. Quite on the contrary, when we talk about particles that are as well-defined as possible, their wave function must be spread over much more than the Compton wavelength corresponding to the particle. If you try to compress the "wave function" of a particle to distances shorter than that, you inevitably start to produce particle-antiparticle pairs and similar things. It's as far as you can get from the non-relativistic notion of an ordinary particle.
The particle is observed at a point not because the maximally compressed wave functions would be natural or "the best" or optimized in any sense - they're among the worst, most singular, most non-relativistic, most unlikely to be the right description. Particles are seen at points because the damn function has a probabilistic interpretation, it always has had, it always will have, and whoever tries to deny that this fact is established and demonstrable is completely confused about the basics of modern physics. reader Luboš Motl said... Dear Dilaton, it's true that "quantum mechanics" is often used for quantum laws where some natural variables only depend on time and not other continuous variables, i.e. for QFT in 0+1 dimensions, and I sometimes use this interpretation of "quantum mechanics" myself (e.g. "Matrix theory is a model of quantum mechanics"). However, in all these texts about the foundations of quantum mechanics, I use "quantum mechanics" in a much broader sense, as any theory respecting the general postulates of QM such as the linearity of the observables as operators acting on the Hilbert space, Born's rule, and so on. In this primary meaning of "quantum mechanics", any QFT in any dimension (and even string theory itself) is just a particular example of a quantum mechanical theory. reader Luboš Motl said... An excellent clarification, William, thank you! The droplets just betray their being nothing else than a visualization of some features, not something that is supposed to be exactly equivalent to what it claims to model. reader Luboš Motl said... Excellent, William, and I had a similar experience except that the Hancock tower was in Boston, not Chicago, and it was a few days before 9/11 (and my thesis defense) when I visited it before the observatory got closed for years. reader Luboš Motl said... Jon, it often sounds like you are asking questions but you are never waiting for any answers. There are answers to all your questions.
We know that the "interaction that triggers the collapse" can't exist as a real process because such an interaction would have to act instantaneously, and it would therefore violate the laws of relativity. You can phrase the very same thing "experimentally", too. If such an interaction existed, it would have consequences that would manifest themselves as violations of the Lorentz symmetry, and we observe that there aren't any. reader Holger said... Lubos, I perfectly agree with most points you have raised - I only suggest to take them a little further. The concept of emergence does not need to stop at the point which we (currently) regard as "fundamental". In fact, 't Hooft has demonstrated with a toy model how quantum mechanical features could emerge from something sub-quantum. In his example, that sub-quantum regime was classical, but there is no reason to restrict ourselves to classical models, why should we. Point is: In history, scientists often believed that they had reached the bottom, just to find out that the well reached far deeper. The question of "why does the neutron decay now" does certainly not imply a return to any classical concepts. We ask why because we want to know, and one day we may know why these processes look random in our labs. reader Luboš Motl said... Dear Holger, emergence (you mean the process of finding deeper explanations) doesn't have to stop at the point where science is now. But it cannot get reverted. The fundamental theories people would have before the 20th century revolutions have been *falsified* so they can never be resuscitated. Non-relativistic theories can emerge as the 1/c goes to zero limit of relativistic theories. But one just can never revert this arrow and derive relativity from a non-relativistic theory because the non-relativistic theories are more special - corresponding to a particular special value of the parameter 1/c, namely zero (corresponding to no Lorentz contraction, no speed limit etc.)
- and once it's shown that Nature doesn't live in this special subset of theories, it can never be unshown. The situation of quantum mechanics is exactly analogous. Classical physics is a special (hbar goes to zero) limit or special case of quantum mechanics. Just like relativistic effects (e.g. contributions to Lorentz contraction etc.) and corrections scale like positive powers of 1/c, quantum effects - like the uncertainty of variables and the unavoidable probabilistic interpretation following from that uncertainty - scale like positive powers of hbar. It's been shown that Nature doesn't live in the hbar=0 subset or limit of the space of possible theories. It follows that this special subset has been falsified and it cannot be unfalsified. Your bigotry and obsession with undoing the quantum revolution is exactly analogous to the people who hope that the right explanation of the Earth's shape will be a flat Earth again, or that creationism is right and the apparent evolution is just an illusion emerging from the Truth of Creation. It just isn't so and can't be so, OK? All the "possibilities" you propose have been proven impossible. You may have overlooked this subtle fact - you may have overlooked the 20th century in physics - but it's still there. reader Marcel van Velzen said... Yes, it does. Don't you even understand the Heisenberg uncertainty principle? reader Leo Vuyk said... Dear Lubos, I appreciate and respect your blog very much, but imo you are not honest when writing: "quantum mechanics doesn't need (and doesn't allow) any additional 'interpretations'. You either understand the theory or you don't." You also know that different interpretations of the symmetric universe (splitting locally, or not splitting at large distances by CP symmetry) could be possible. reader Luboš Motl said... Exactly, Marcel.
Things may be converted to the usual x-p uncertainty principle but the more subtle time-energy version of the uncertainty principle may also be applied if we do it right. If we measure the energy of the initial and final products with accuracy "delta E" or better (smaller), then the unavoidable uncertainty "dt" in the time of the decay does obey the usual dt * dE is greater than hbar/2. If we want to determine the point of the spacetime where it decayed as accurately as possible, we use the speed of the final particles, so the speed uncertainty can't be too high, and that implies an uncertainty of the position and therefore the time of the decay, too. reader Holger said... OK, Marcel, then let me pass this "homework" to you: I have trapped my neutron inside a magnetic trap, within a volume of 1 cm^3, at a temperature of 10^-3 Kelvin. How much does this trapping affect its lifetime? You will find that the effects of Heisenberg's uncertainty can be conveniently neglected here. This was not my point anyway. I was asking about the why, and such a question necessarily perforates the framework of any currently known theory. Yet, I insist that science has to ask such questions in order to progress. reader Jan said... Ah well, the media. The actual researchers do not make any claims like "this is the quantum mechanics"; rather, they point out some interesting similarities and promote further research. Putting the media campaign aside, the interesting mathematical description of this problem can be found here: arXiv:1401.4356v1 ... I would love to see your input on this paper, but it's kind of clear that you a) don't find it interesting at all, and b) your opinion may be biased before you even started reading the paper anyway, so I don't hold my hopes high. Actually the paper highlights some differences between QM and this experiment.
On the contrary, various strictly quantum phenomena are being derived from first principles, which is - in principle *putting shades on* - interesting. But the experiment itself is just an analogy or a visualization of some of the phenomena, nothing more. Even in the conclusion the authors claim that they consider this experiment to be useful as a teaching tool. reader Marcel van Velzen said... You were talking about the width of the decay of the neutron (not effects of its environment), that's Heisenberg. Wanting to know exactly when a particular neutron decays is by definition a return to classical concepts. reader Marcel van Velzen said... Pretty basic stuff, isn't it :-) reader Holger said... Nope, the width of the decay would be well covered by QM, it is a statistical notion. I was talking about the time at which a single, individual neutron is going to decay. I didn't touch any of those matters like precision here - give it an uncertainty if you like. A subquantum theory may have such an uncertainty, a jbar (as opposed to hbar). It may be quite different from hbar. It may possibly be zero as well (unless 't Hooft's deterministic example is mathematically flawed, which I am not aware of). reader Guest said... QM is for those who don't want to understand nature. reader Luboš Motl said... Right, I give it the uncertainty I like - it's the very point of mine. The uncertainty may be arbitrarily large in general because of the superposition principle. And no, Nature only contains one hbar. It's the conversion factor between quantities like energy and quantities like frequency (E=hf). The same relationship is pretty much equivalent to the Heisenberg or Schrödinger equations of motion or the path integral which govern *everything* in Nature. That's also why we can set hbar=1 - it is unavoidably a universal constant. Everyone who tries to deny this thing is a crank regardless of the number of Nobel prizes he or she may have received for great work done 40 years ago.
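The neutron exchange in this thread can be made quantitative with a toy simulation (my own sketch; 880 s is the approximate measured mean lifetime of a free neutron, and the "identical initial conditions" are literal here). Neutrons prepared identically decay at wildly different individual times, while every ensemble quantity follows from the single survival law \(P(t>T)=e^{-T/\tau}\):

```python
import numpy as np

TAU = 880.0  # approximate mean free-neutron lifetime in seconds

rng = np.random.default_rng(42)
# Ten thousand neutrons prepared with literally identical initial
# conditions: each lifetime is an independent draw from the same
# exponential law, P(t > T) = exp(-T / TAU).
lifetimes = rng.exponential(TAU, size=10_000)

# The ensemble is predicted perfectly...
mean_life = lifetimes.mean()  # close to TAU
spread = lifetimes.std()      # also close to TAU (exponential distribution)
# ...but no individual decay time has any cause beyond the probabilities:
# some neutrons decay within seconds, others survive for hours.
```

The point of the exercise is that asking "what made *this* neutron decay at *this* time" presupposes a hidden difference the theory neither needs nor allows; the distribution itself is the complete prediction.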
reader Luboš Motl said... I approved your comment to highlight my democratic credentials, and I have only placed you on the blacklist because I don't know the protocol to send you to a gas chamber. reader Marcel van Velzen said... So neutrons have different initial conditions and for that reason decay at different times? Is that what you're saying? How are you going to know the initial conditions of the neutrons without interfering? Remember the double slit experiment? reader tomandersen said... You said in post #1: "waits until the 'pilot wave' has damped". It does not damp out. No one thinks that there is anything but messy classical nonlinear physics going on in these experiments. I think that you think it's somehow cheating to have an energy source? The whole thing is lossy, as any experiment with waves on a fluid is. It's a puddle with drops, not QM. reader Holger said... Yes, different initial conditions. Surely nothing as simple as a "hidden classical variable" that has been forgotten to be implemented into the current framework of QM. It would have to be incorporated into another, more general theory which turns into QM in some of its limits. A very normal procedure, by the way. It would be very surprising if we were living in precisely that era in which all fundamental equations had just been laid out. But I have to point out that there exists no pressing need to extend the existing framework unless there exist obvious contradictions with experiments. Just, it appears strange to me that current theories do not answer certain questions, instead yielding probabilities. It feels suspicious. And, no, it is no reason to find myself another universe. Just wondering what is going to come next, it may turn out to be more exciting than we think. reader Marcel van Velzen said... reader Holger said... Obviously, such initial conditions do not show up within the framework of QM and hence do not affect the superposition of wavefunctions.
Instead, they are most easily measured through the lifetime of the particle ;-) reader Uncle Al said... Every aspect of quantum mechanics requiring understanding can be resolved by inventing new families of virtual particles that cannot be made empirical to be detected. This tells you that approach is wrong. If science is not better than that, abandon it. Seven billion people's lives intimately arise from the most eldritch of technological subtleties. Shut the valve. Two billion survivors can reevaluate their philosophical position. reader Gene Day said... Toward the end you almost got it right. The experiment is just a visualization, nothing more; it has no deeper significance. I can do the same with chalk and a blackboard. I do not agree that the similarities are interesting. They are just a coincidence. reader Gene Day said... Lubos understands it. You do not. reader jon said... I don't understand the details, but in earlier posts you have said that inconsistent histories are eliminated when information arrives from separate locations, e.g. EPR. That seems to me to be enough to eliminate observing more than one particle in a dual slit experiment, without requiring faster-than-light processes. I apologize if it appears that I do not wait for your replies. I enjoy your blog quite a lot. I do admit that I am sometimes afraid of coming back to see harsh wording in your replies. If there is something wrong in my reasoning I would certainly like to correct it. reader WolfInSheepskin said... It's QM^{TM} reader Curious George said... Particle as a point .. take the 21 cm wavelength hydrogen hyperfine transition. At Goldstone I saw a high-pass radiotelescope filter for about that wavelength: a stainless steel(?) disc, 50 cm diameter, 2 cm thick, with a honeycomb pattern of holes. It reflects lower frequencies, lets higher frequency photons through. And still that photon can change the state of exactly one hydrogen atom.
To complement it, take a 10-m mirror of the largest telescope. A single photon of visible light gets reflected from the whole surface; limit the diameter to 1 m and the resolution gets 10x worse.

reader Stephen Dedalus said... First, I know personally the guys involved and they really know fluid dynamics. The experiment and the theory are really interesting, and it's not only because of the similarities with QM (which I think are just similarities), but because they are ingenious and creative. Also, the phenomena investigated are closely related to interfacial phenomena, coalescence, lubrication theory, Faraday waves, hydrodynamical instabilities and other very relevant things (at least for the area) in nonlinear dynamics and chaos. The first experiments by Y. Couder didn't even mention the word "quantum". I know it's the way Luboš expresses his thoughts and I particularly like it. However, considering this series of experiments just a bunch of crackpots playing with droplets and trying to disprove one of the most successful theories of physics isn't right.

reader Luboš Motl said... Thanks for this voice. Sex involving the transmission of sexually transmittable diseases also involves lubrication theory, droplet coalescence and, when it's done in the shower, also hydrodynamic instabilities etc., and the people participating in it may even know something about these things of classical physics and applied maths and think that they're quantum cool. It doesn't imply that their act should be hyped as ingenious or a revolution in quantum physics. When QM is studied at some high or precise enough level, it simply has nothing to do with either of the two activities.

reader Leo Vuyk said... Lubos knows that his "democratic credentials" should also contain a choice between levels 1 to 4 of Max Tegmark's multiverses or, as I believe, a combination of a local and a distant non-splitting mirror CP-symmetric multiverse, able to understand human and material

reader Gary Ehlenberger said...
Very, very nice discussion on: Quantum mechanics doesn't really imply solipsism. What do you think of Bass's proof that there is only one consciousness, assuming QM?

reader Luboš Motl said... I've read about 1/3 of the paper (not a compact clump, but representatively). I don't know what to do with it. It seems to parrot lots of misunderstandings by Einstein, add tons of sociological comments about the difference between philosophers and physicists, but ultimately fails to say what is right and fails to understand what quantum mechanics (and Bohr) actually says about it, namely that the state vector is about knowledge that is fundamentally subjective, and there is therefore no contradiction at all if two observers use different state vectors.

reader kashyap vasavada said... Thanks for pointing out Bass' paper. This is an interesting work bordering on metaphysics and shows how Wigner's friend paradox and the singular nature of consciousness can be related. I do not know if Wigner himself believed until the end of his life that consciousness collapses the wave function. It will be interesting to find out about this. For people like me, these are intriguing ideas which do not take anything away from the fantastic numerical success of QM. I can also see Luboš' viewpoint that the Copenhagen interpretation, the mathematics of QM and the super agreement with experiment are the only essential ideas.

reader Gary Ehlenberger said... QBist metaphysics

reader Gary Ehlenberger said... Check this paper out.
Decoherence and Thermalization

M. Merkli∗ and I.M. Sigal†
Department of Mathematics, University of Toronto, Toronto, Ontario, Canada M5S 2E4

G.P.
Berman‡
Theoretical Division, MS B213, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
(Dated: August 23, 2006)

We present a rigorous analysis of the phenomenon of decoherence for general N-level systems coupled to reservoirs of free massless bosonic fields. We apply our general results to the specific case of the qubit. Our approach does not involve master equation approximations and applies to a wide variety of systems which are not explicitly solvable.

PACS numbers: 03.65.Yz, 05.30.-d, 02.30.Tb

We examine rigorously the phenomenon of quantum decoherence. This phenomenon is brought about by the interaction of a quantum system, called in what follows "the system S", with an environment, or "reservoir R". Decoherence is reflected in the temporal decay of off-diagonal elements of the reduced density matrix of the system in a given basis. The latter is determined by the measurement to be performed. To our knowledge, this phenomenon has been analyzed rigorously so far only for explicitly solvable models, see e.g. [1–7]. In this paper we consider the decoherence phenomenon for quite general non-solvable models. Our analysis is based on the modern theory of resonances for quantum statistical systems as developed in [8–15] (see also the book [16]), which is related to resonance theory in non-relativistic quantum electrodynamics [9, 17].

Let $\mathfrak{h} = \mathfrak{h}_S \otimes \mathfrak{h}_R$ be the Hilbert space of the system interacting with the environment, and let

$H = H_S \otimes \mathbb{1}_R + \mathbb{1}_S \otimes H_R + \lambda v \qquad (1)$

be its Hamiltonian. Here, $H_S$ and $H_R$ are the Hamiltonians of the system and the reservoir, respectively, and $\lambda v$ is an interaction with a coupling constant $\lambda \in \mathbb{R}$. In the following we will omit trivial factors $\mathbb{1}_S\,\otimes$ and $\otimes\,\mathbb{1}_R$. The reservoir is taken initially in an equilibrium state at some temperature $T = 1/\beta > 0$. Let $\rho_t$ be the density matrix of the total system at time $t$.
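A minimal sketch (not from the paper) of how a composite Hamiltonian of the form $H = H_S \otimes \mathbb{1}_R + \mathbb{1}_S \otimes H_R + \lambda v$ can be assembled numerically with Kronecker products; all dimensions, spectra and coupling matrices below are made-up toy values, and the paper's true reservoir is infinite-dimensional:

```python
import numpy as np

# Toy dimensions: a 2-level system S and a truncated 3-level "reservoir" R.
dim_S, dim_R = 2, 3

H_S = np.diag([0.0, 1.0])               # system energies (arbitrary units, made up)
H_R = np.diag([0.0, 0.5, 1.0])          # truncated reservoir spectrum (made up)

# Interaction v = G ⊗ B with a hermitian system matrix G; B is a toy stand-in
# for the smoothed field operator φ(g).
G = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.diag([1.0, -1.0, 1.0])
lam = 0.1                                # coupling constant λ

# H = H_S ⊗ 1_R + 1_S ⊗ H_R + λ v, assembled with Kronecker products.
H = (np.kron(H_S, np.eye(dim_R))
     + np.kron(np.eye(dim_S), H_R)
     + lam * np.kron(G, B))

assert H.shape == (dim_S * dim_R, dim_S * dim_R)
assert np.allclose(H, H.conj().T)        # the total Hamiltonian is hermitian
```

The ordering convention here (system index first) matches how the reduced density matrix would later be obtained by tracing out the second factor.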
The reduced density matrix (of the system S) at time $t$ is then formally given by

$\bar\rho_t = \mathrm{Tr}_R\,\rho_t, \qquad (2)$

where $\mathrm{Tr}_R$ is the partial trace with respect to the reservoir degrees of freedom. Formulas (1) and (2) describe the situation where a state of the reservoir is given by a well-defined density matrix on the Hilbert space $\mathfrak{h}_R$. In order to describe decoherence and thermalization we need to consider "true" (dispersive) reservoirs, obtained for instance by taking a thermodynamic limit, or a continuous-mode limit. We refer to [18] for a detailed description of such reservoirs, which is not needed in the presentation of our results here. Let $\rho(\beta,\lambda)$ be the equilibrium state of the interacting system at temperature $T = 1/\beta$ and set $\bar\rho(\beta,\lambda) := \mathrm{Tr}_R\,\rho(\beta,\lambda)$. There are three possible scenarios for the asymptotic behaviour of the reduced density matrix as $t \to \infty$: (i) $\bar\rho_t \to \bar\rho_\infty = \bar\rho(\beta,\lambda)$, (ii) $\bar\rho_t \to \bar\rho_\infty \neq \bar\rho(\beta,\lambda)$, (iii) $\bar\rho_t$ does not converge. The first situation is generic while the last two are not, although they are of interest, e.g. for energy-conserving, or quantum non-demolition, interactions characterized by $[H_S, v] = 0$, see [3, 18]. Decoherence is a basis-dependent notion. It is usually defined as the vanishing of the off-diagonal elements $[\bar\rho_t]_{m,n}$, $m \neq n$, in the limit $t \to \infty$, in a chosen basis. Most often decoherence is defined w.r.t. the basis of eigenvectors of the system Hamiltonian $H_S$ (the energy, or computational, basis for a quantum register), though other bases, such as the position basis for a particle in a scattering medium [3], are also used.

∗ Electronic address: [email protected]; Present address: Department of Mathematics and Statistics, Memorial University of Newfoundland, St. John’s, NL, Canada A1C 5S7; Supported by NSERC under grant NA 7901.
† Electronic address: [email protected]; Supported by NSERC under grant NA 7901.
‡ Electronic address: [email protected]; Supported by the NNSA of the U.S. DOE at LANL under Contract No. DE-AC52-06NA25396.
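For finite-dimensional toy spaces, the partial trace $\mathrm{Tr}_R$ defining the reduced density matrix can be sketched in a few lines; the `reduced_density_matrix` helper and the test state below are my own illustration, not the paper's construction:

```python
import numpy as np

def reduced_density_matrix(rho, dim_S, dim_R):
    """Partial trace over the reservoir: rho_bar = Tr_R rho.

    rho is a (dim_S*dim_R) x (dim_S*dim_R) matrix on h_S ⊗ h_R,
    with the system index varying slowest (Kronecker convention).
    """
    rho4 = rho.reshape(dim_S, dim_R, dim_S, dim_R)
    return np.einsum('irjr->ij', rho4)  # sum over the repeated reservoir index r

# Check on a product state rho = rho_S ⊗ rho_R: the partial trace returns rho_S.
rho_S = np.array([[0.7, 0.2], [0.2, 0.3]])
rho_R = np.diag([0.5, 0.3, 0.2])
rho = np.kron(rho_S, rho_R)
assert np.allclose(reduced_density_matrix(rho, 2, 3), rho_S)
```

The reshape-then-contract trick works because the composite index of a Kronecker product factorizes as (system index, reservoir index).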
Since $\bar\rho(\beta,\lambda)$ is generically non-diagonal in the energy basis, the off-diagonal elements of $\bar\rho_t$ will not vanish in the generic case, as $t \to \infty$. Thus, strictly speaking, decoherence in this case should be defined as the decay (convergence) of the off-diagonals of $\bar\rho_t$ to the corresponding off-diagonals of $\bar\rho(\beta,\lambda)$. The latter are $O(\lambda)$. If these terms are neglected then decoherence manifests itself as a process in which initially coherent superpositions of basis elements $\psi_j$ become incoherent statistical mixtures,

$\sum_{j,k} c_{j,k}\,|\psi_j\rangle\langle\psi_k| \longrightarrow \sum_j p_j\,|\psi_j\rangle\langle\psi_j|, \quad \text{as } t \to \infty.$

In particular, phase relations encoded in the $c_{j,k}$ disappear for large times. We consider N-dimensional quantum systems interacting with reservoirs of massless free quantum fields (photons, phonons or other massless excitations) through an interaction $v = G \otimes \varphi(g)$, see also (1) and (6). Here, $G$ is a hermitian $N \times N$ matrix and $\varphi(g)$ is the bosonic field operator smoothed out with the form factor $g(k)$, $k \in \mathbb{R}^3$. For any observable $A$ of the system we set $\langle A\rangle_t := \mathrm{Tr}_S(\bar\rho_t A) = \mathrm{Tr}_{S+R}(\rho_t\,(A \otimes \mathbb{1}_R))$. Assuming certain regularity conditions on $g(k)$ (allowing e.g. $g(k) = |k|^p e^{-|k|^m} g_1(\sigma)$, where $g_1$ is a function on the sphere and where $p = -1/2 + n$, $n = 0, 1, \ldots$, $m = 1, 2$), we show that the ergodic averages

$\langle\langle A\rangle\rangle_\infty := \lim_{T\to\infty} \frac{1}{T}\int_0^T \langle A\rangle_t\, dt$

exist, i.e., that $\langle A\rangle_t$ converges in the ergodic sense as $t \to \infty$. Furthermore, we show that for $t \geq 0$, and for any $0 < \tau' < \frac{2\pi}{\beta}$,

$\langle A\rangle_t - \langle\langle A\rangle\rangle_\infty = \sum_{\varepsilon \neq 0} e^{it\varepsilon}\, R_\varepsilon(A) + O\big(\lambda^2 e^{-t\tau'/2}\big), \qquad (4)$

where the complex numbers $\varepsilon$ are the eigenvalues of a certain explicitly given operator $K(\tau')$, lying in the strip $\{z \in \mathbb{C} \mid 0 \leq \mathrm{Im}\,z < \tau'/2\}$. They have the expansions

$\varepsilon \equiv \varepsilon_e^{(s)} = e - \lambda^2 \delta_e^{(s)} + O(\lambda^4), \qquad (5)$

where $e \in \mathrm{spec}(H_S \otimes \mathbb{1}_S - \mathbb{1}_S \otimes H_S) = \mathrm{spec}(H_S) - \mathrm{spec}(H_S)$ and the $\delta_e^{(s)}$ are the eigenvalues of a matrix $\Lambda_e$, called a level-shift operator, acting on the eigenspace of $H_S \otimes \mathbb{1}_S - \mathbb{1}_S \otimes H_S$ corresponding to the eigenvalue $e$ (which is a subspace of $\mathfrak{h}_S \otimes \mathfrak{h}_S$).
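The superposition-to-mixture statement above can be illustrated with a toy pure-dephasing average (my own sketch, not the paper's reservoir model): averaging a qubit density matrix over a uniformly random relative phase destroys the off-diagonal coefficients while leaving the populations intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Initial coherent superposition |psi> = (|1> + |2>)/sqrt(2), i.e. c_{j,k} = 1/2.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())

# Each realization picks up a random relative phase phi, mimicking uncontrolled
# system-reservoir correlations; the observable state is the ensemble average.
phases = rng.uniform(0.0, 2.0 * np.pi, size=20000)
rho_avg = np.zeros((2, 2), dtype=complex)
for phi in phases:
    U = np.diag([1.0, np.exp(1j * phi)])
    rho_avg += U @ rho0 @ U.conj().T
rho_avg /= len(phases)

# Diagonal populations survive exactly; coherences average to ~0.
assert np.allclose(np.diag(rho_avg).real, [0.5, 0.5])
assert abs(rho_avg[0, 1]) < 0.02
```

This is only the kinematics of decoherence; the paper's point is to derive the dynamical decay rates rigorously from the coupled system-reservoir evolution.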
The level-shift operators play a central role in the ergodic theory of open quantum systems, see e.g. [18, 19]. The coefficients $R_\varepsilon(A)$ in (4) are linear functionals of $A$ which depend on the initial state $\bar\rho_0$ and on the Hamiltonian $H$. They have the expansion

$R_\varepsilon(A) = \sum_{(m,n)\in I_e} \kappa_{m,n}\,A_{m,n} + O(\lambda^2),$

where $I_e$ is the collection of all pairs of indices such that $e = E_m - E_n$, the $E_k$ being the eigenvalues of $H_S$. Here, $A_{m,n}$ is the $(m,n)$-matrix element of the observable $A$ in the energy basis of $H_S$, and the $\kappa_{m,n}$ are coefficients depending on the initial state of the system (and on $e$, but not on $A$ nor on $\lambda$). Our results for the qubit can be summarized as follows. Consider a linear coupling,

$v = \begin{pmatrix} a & c \\ \bar c & b \end{pmatrix} \otimes \varphi(g), \qquad (6)$

where $\varphi(g)$ is the Bose field operator as above. The form factor $g \in L^2(\mathbb{R}^3, d^3k)$ contains an ultraviolet cutoff which introduces a time scale $\tau_{UV}$. This time scale depends on the physical system in question. We can think of it as coming from some frequency cutoff determined by a characteristic length scale beyond which the interaction decreases rapidly. For instance, for a phonon field $\tau_{UV}$ is naturally identified with the inverse of the Debye frequency. We assume $\tau_{UV}$ to be much smaller than the time scales considered here. A key role in the decoherence analysis is played by the infrared behaviour of the form factor $g(k)$. We characterize this behaviour by the unique $p \geq -1/2$ satisfying

$0 < \lim_{|k| \to 0} \frac{|g(k)|}{|k|^p} = C < \infty. \qquad (7)$

The power $p$ depends on the physical model considered; e.g., for quantum-optical systems $p = 1/2$. We can treat all of these cases. Decoherence in models with interaction (6) with $c = 0$ is considered in [1–6, 18, 21–23]. This is the situation of a non-demolition (energy-conserving) interaction, where $[v, H_S] = 0$, and consequently energy-exchange processes are suppressed. The resulting decoherence is called phase decoherence. A particular model of phase decoherence is obtained by the so-called position-position coupling, where the matrix in the interaction (6) is the Pauli matrix $\sigma_z$ [2, 6, 22, 23].
On the other hand, energy-exchange processes, responsible for driving the system to equilibrium, have a probability proportional to $|c|^{2n}$, for some $n \geq 1$ (and $a$, $b$ do not enter) [9, 10, 13, 15, 19, 20]. Thus the property $c \neq 0$ is important for thermalization (return to equilibrium). We express the energy-exchange effectiveness by the function

$\xi(\eta) = \lim_{\epsilon\,\downarrow\,0} \frac{1}{\pi} \int_{\mathbb{R}^3} d^3k\, \coth\!\Big(\frac{\beta |k|}{2}\Big)\, |g(k)|^2\, \frac{\epsilon}{(|k| - \eta)^2 + \epsilon^2},$

where $\eta \geq 0$ represents the energy at which processes between the qubit and the reservoir take place. Let $\Delta = E_2 - E_1 > 0$ be the energy gap of the qubit. In works on convergence to equilibrium it is usually assumed that $|c|^2 \xi(\Delta) > 0$. This condition is called the "Fermi Golden Rule Condition". It means that the interaction induces second-order ($\lambda^2$) energy-exchange processes at the Bohr frequency of the qubit (emission and absorption of reservoir quanta). The condition $c \neq 0$ is actually necessary for thermalization, while $\xi(\Delta) > 0$ is not (higher-order processes can drive the system to equilibrium). Observe that $\xi(\Delta)$ converges to a fixed function, as $T \to 0$, and increases exponentially as $T \to \infty$. The expression for decoherence times also involves $\xi(0)$, see (10). Our analysis allows us to describe the dynamics of systems which exhibit both thermalization and (phase) decoherence. Let the initial density matrix, $\rho_{t=0}$, be of the form $\bar\rho_0 \otimes \rho_{R,\beta}$. (Our method does not require the initial state to be a product, see [18].) Denote by $p_{m,n}$ the operator represented in the energy basis by the $2\times 2$ matrix whose entries are zero, except the $(n,m)$ entry, which is one.
We show that for $t \geq 0$

$[\bar\rho_t]_{1,1} - \langle\langle p_{1,1}\rangle\rangle_\infty = e^{it\varepsilon_0(\lambda)}\big[C_0 + O(\lambda^2)\big] + e^{it\varepsilon_\Delta(\lambda)}\,O(\lambda^2) + e^{it\varepsilon_{-\Delta}(\lambda)}\,O(\lambda^2) + O\big(\lambda^2 e^{-t\tau'/2}\big), \qquad (8)$

$[\bar\rho_t]_{1,2} - \langle\langle p_{2,1}\rangle\rangle_\infty = e^{it\varepsilon_\Delta(\lambda)}\big[C_\Delta + O(\lambda^2)\big] + e^{it\varepsilon_0(\lambda)}\,O(\lambda^2) + e^{it\varepsilon_{-\Delta}(\lambda)}\,O(\lambda^2) + O\big(\lambda^2 e^{-t\tau'/2}\big). \qquad (9)$

Here, $C_0$, $C_\Delta$ are explicit constants depending on the initial condition $\bar\rho_0$, but not on $\lambda$, and the resonance energies $\varepsilon$ have the expansions

$\varepsilon_0(\lambda) = i\lambda^2 \pi^2 |c|^2\,\xi(\Delta) + O(\lambda^4),$
$\varepsilon_\Delta(\lambda) = \Delta + \lambda^2 R + \frac{i}{2}\lambda^2 \pi^2 \big[|c|^2\,\xi(\Delta) + (b-a)^2\,\xi(0)\big] + O(\lambda^4), \qquad (10)$
$\varepsilon_{-\Delta}(\lambda) = -\overline{\varepsilon_\Delta(\lambda)},$

with the real number

$R = \tfrac{1}{2}(b^2 - a^2)\,\langle g, \omega^{-1} g\rangle + \tfrac{1}{2}|c|^2\,\mathrm{P.V.}\!\int_{\mathbb{R}\times S^2} u^2\,|g(|u|,\sigma)|^2\,\coth\!\Big(\frac{\beta |u|}{2}\Big)\,\frac{du\,d\sigma}{u - \Delta}.$

The error terms in (8), (9) and (10) satisfy, for small $\lambda$, $\sup_{t\geq 0} |O(\lambda^2)|/\lambda^2 < C$ and $\sup_{t\geq 0} |O(\lambda^2 e^{-t\tau'/2})|/(\lambda^2 e^{-t\tau'/2}) < C$. To our knowledge this is the first time that formulas for the decay of off-diagonal matrix elements of the reduced density matrix are obtained for models which are not explicitly solvable, and without using uncontrolled master equation approximations (see e.g. [22] and references therein).

Remarks. 1) The corresponding expressions for the matrix elements $[\bar\rho_t]_{2,2}$ and $[\bar\rho_t]_{2,1}$ are obtained from the relations $[\bar\rho_t]_{2,2} = 1 - [\bar\rho_t]_{1,1}$ (conservation of unit trace) and $[\bar\rho_t]_{2,1} = [\bar\rho_t]^{*}_{1,2}$ (hermiticity of $\bar\rho_t$). 2) If the qubit is initially in one of the logic pure states $\bar\rho_0 = |\varphi_j\rangle\langle\varphi_j|$, where $H_S \varphi_j = E_j \varphi_j$, $j = 1, 2$, then we find $C_\Delta = 0$, and $C_0 = e^{\beta\Delta/2}(e^{\beta\Delta} + 1)^{-3/2}$ for $j = 1$ and $C_0 = e^{\beta\Delta}(e^{\beta\Delta} + 1)^{-3/2}$ for $j = 2$, see [18]. 3) To second order in $\lambda$, the imaginary part of $\varepsilon_\Delta$ is increased by a term $\propto (b-a)^2\,\xi(0)$ only if $p = -1/2$, where $p$ is defined in (7). For $p > -1/2$ we have $\xi(0) = 0$ and that contribution vanishes. For $p < -1/2$ we have $\xi(0) = \infty$. 4) It is easy to see that $\xi(\Delta)$ and $R$ contain purely quantum, vacuum-fluctuation terms as well as thermal ones, while $\xi(0)$ is determined entirely by thermal fluctuations; it is proportional to $\beta^{-1} = T$. 5) The second-order difference $D$, defined by $\mathrm{Im}\,\varepsilon_0(\lambda) - \mathrm{Im}\,\varepsilon_\Delta(\lambda) = \lambda^2 D + O(\lambda^4)$, is $D = \frac{\pi^2}{2}\big[|c|^2\,\xi(\Delta) - (b-a)^2\,\xi(0)\big]$.
For $D > 0$ the populations converge to their limiting values faster than the off-diagonal matrix elements, as $t \to \infty$ (coherence persists beyond thermalization of the populations). For $D < 0$ the off-diagonal elements converge faster. If the interaction matrix is diagonal ($c = 0$) then $D \leq 0$; if it is off-diagonal then $D \geq 0$. 6) For energy-conserving interactions, $c = 0$, it follows that full decoherence occurs if and only if $b \neq a$ and $\xi(0) > 0$. If either of these conditions is not satisfied then the off-diagonal matrix elements are purely oscillatory (while the populations are constant), see also [18].

Illustration. Let the initial state of S be given by a coherent superposition in the energy basis,

$\bar\rho_0 = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}. \qquad (11)$

We obtain the following expressions for the dynamics of the reduced matrix elements, for all $t \geq 0$:

$[\bar\rho_t]_{m,m} = \frac{e^{-\beta E_m}}{Z_{S,\beta}} + \Big(\tfrac{1}{2} - \frac{e^{-\beta E_m}}{Z_{S,\beta}}\Big)\, e^{it\varepsilon_0(\lambda)} + R_{m,m}(\lambda, t), \quad m = 1, 2,$
$[\bar\rho_t]_{1,2} = \tfrac{1}{2}\, e^{it\varepsilon_{-\Delta}(\lambda)} + R_{1,2}(\lambda, t),$
$[\bar\rho_t]_{2,1} = \tfrac{1}{2}\, e^{it\varepsilon_{\Delta}(\lambda)} + R_{2,1}(\lambda, t),$

where the numbers $\varepsilon$ are given in (10). The remainder terms satisfy $|R_{m,n}(\lambda, t)| \leq C\lambda^2$, uniformly in $t \geq 0$, and they can be decomposed into a sum of a constant part (in $t$) and a decaying one,

$R_{m,n}(\lambda, t) = \langle\langle p_{n,m}\rangle\rangle_\infty - \delta_{m,n}\,\frac{e^{-\beta E_m}}{Z_{S,\beta}} + \tilde R_{m,n}(\lambda, t), \quad \text{where } |\tilde R_{m,n}(\lambda, t)| = O(\lambda^2 e^{-\gamma t}),$

with $\gamma = \min\{\mathrm{Im}\,\varepsilon_0, \mathrm{Im}\,\varepsilon_{\pm\Delta}\}$. Therefore, to second order in $\lambda$, convergence of the populations to the equilibrium values (Gibbs law) and decoherence occur exponentially fast, with rates $\tau_T = [\mathrm{Im}\,\varepsilon_0(\lambda)]^{-1}$ and $\tau_D = [\mathrm{Im}\,\varepsilon_\Delta(\lambda)]^{-1}$, respectively. In particular, coherence of the initial state stays preserved on time scales of the order $\lambda^{-2}\big[|c|^2\,\xi(\Delta) + (b-a)^2\,\xi(0)\big]^{-1}$, c.f. (10). Relation (4) gives a detailed picture of the dynamics of averages of observables. The resonance energies $\varepsilon$ and the functionals $R_\varepsilon$ can be calculated for concrete models, to arbitrary precision (in the sense of rigorous perturbation theory in $\lambda$). See (8)-(10) for explicit expressions for the qubit, and the illustration above for an initially coherent superposition given by (11).
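A minimal numerical sketch of the two time scales; the resonance-energy values below are toy numbers of my own choosing, and only the decay structure $|e^{it\varepsilon}| = e^{-t\,\mathrm{Im}\,\varepsilon}$ is taken from the text:

```python
import numpy as np

# Toy resonance energies: populations relax as exp(it*eps0), coherences
# as exp(it*eps_Delta). Values are illustrative, not derived from any model.
eps0 = 0.0 + 0.05j          # purely imaginary: thermalization rate 1/tau_T
epsD = 1.0 + 0.02j          # real part = Bohr frequency, Im part = 1/tau_D

t = np.linspace(0.0, 200.0, 2001)
pop_dev = np.exp(1j * t * eps0)    # deviation of a population from its limit
coh_dev = np.exp(1j * t * epsD)    # deviation of a coherence from its limit

tau_T = 1.0 / eps0.imag
tau_D = 1.0 / epsD.imag

# |e^{it eps}| = e^{-t Im(eps)}: exponential decay with the stated rates.
assert np.allclose(np.abs(pop_dev), np.exp(-t / tau_T))
assert np.allclose(np.abs(coh_dev), np.exp(-t / tau_D))

# With Im(eps0) > Im(epsD), populations equilibrate faster than coherences
# decay, i.e. the D > 0 scenario discussed in remark 5 above.
assert np.abs(pop_dev[-1]) < np.abs(coh_dev[-1])
```

Swapping the imaginary parts reproduces the opposite (D < 0) ordering of the two time scales.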
In the present work we use relation (4) to discuss the processes of thermalization and decoherence of a qubit. In [18] we present, besides a proof of (4), applications to energy-preserving (non-demolition) interactions and to registers of arbitrarily many qubits. It would be interesting to apply the techniques developed here to the analysis of the transition from quantum to classical behaviour (see [1, 22]). In the absence of interaction ($\lambda = 0$) we have $\varepsilon = e \in \mathbb{R}$, see (5). Depending on the interaction, each resonance energy $\varepsilon$ may migrate into the upper complex plane, or it may stay on the real axis, as $\lambda \neq 0$. The averages $\langle A\rangle_t$ approach their ergodic means $\langle\langle A\rangle\rangle_\infty$ if and only if $\mathrm{Im}\,\varepsilon > 0$ for all $\varepsilon \neq 0$. In this case the convergence takes place on the time scale $[\mathrm{Im}\,\varepsilon]^{-1}$. Otherwise $\langle A\rangle_t$ oscillates. A sufficient condition for decay is that $\mathrm{Im}\,\delta_e < 0$ (and $\lambda$ small, see (5)). There are two kinds of processes which drive the decay: energy-exchange processes and energy-preserving ones. The former are induced by interactions enabling processes of absorption and emission of field quanta with energies corresponding to the Bohr frequencies of S (this is the "Fermi Golden Rule Condition" [9, 13, 15, 19, 20]). Energy-preserving interactions suppress such processes, allowing only for a phase change of the system during the evolution ("phase damping", [1–6, 21]). Even if the initial density matrix, $\rho_{t=0}$, is a product of the system and reservoir density matrices, the density matrix, $\rho_t$, at any subsequent moment of time $t > 0$ is not of the product form. The evolution creates system-reservoir entanglement. We develop a formula for $\langle A\rangle_t - \langle\langle A\rangle\rangle_\infty$ for all observables $A$ of any N-level system S in [18]. If the system has the property of return to equilibrium, i.e., if $\xi(\Delta) > 0$, then

$[\bar\rho_\infty]_{m,n} = \delta_{m,n}\,\frac{e^{-\beta E_m}}{\mathrm{Tr}_S(e^{-\beta H_S})} + O(\lambda^2).$

Hence the Gibbs distribution is obtained by first letting $t \to \infty$ and then $\lambda \to 0$.
A similar observation in the setting of the quantum Langevin equation has been made in [24]. If $\rho_0$ is an arbitrary initial density matrix on $\mathfrak{h}_S \otimes \mathfrak{h}_R$ then our method yields a similar result, see [18]. Equations (8), (9) and (10) define the decoherence time scale, $\tau_D = [\mathrm{Im}\,\varepsilon_\Delta(\lambda)]^{-1}$, and the thermalization time scale, $\tau_T = [\mathrm{Im}\,\varepsilon_0(\lambda)]^{-1}$. We should compare $\tau_D$ with the decoherence time scales and with computational time scales in real systems. The former vary from $10^4$ s for nuclear spins in paramagnetic atoms to $10^{-12}$ s for electron-hole excitations in bulk semiconductors (see e.g. [25]). In the ubiquitous spin-boson model [26], obtained as a two-state truncation of a double-well system or an atom interacting with a Bose field, the Hamiltonian is given by (1) with $H_S = -\frac{1}{2}\hbar\Delta_0\,\sigma_x + \frac{1}{2}\epsilon\,\sigma_z$ and $v = \sigma_z \otimes \varphi(g)$. Here, $\sigma_x$, $\sigma_z$ are Pauli spin matrices, $\epsilon$ is the "bias" of the asymmetric double well, and $\Delta_0$ is the "bare tunneling matrix element". In the canonical basis, whose vectors represent the states of the system localized in the left and the right well, $H_S$ has the representation

$H_S = \frac{1}{2}\begin{pmatrix} \epsilon & -\hbar\Delta_0 \\ -\hbar\Delta_0 & -\epsilon \end{pmatrix}.$

The diagonalization of $H_S$ yields $H_S = \mathrm{diag}(E_+, E_-)$, where $E_\pm = \pm\frac{1}{2}\sqrt{\epsilon^2 + \hbar^2\Delta_0^2}$. The operator $v = \sigma_z \otimes \varphi(g)$ is represented in the basis diagonalizing $H_S$ as (6), with $a = -b = -\big(\frac{\hbar^2\Delta_0^2}{\epsilon^2} + 1\big)^{-1/2}$ and $c = \frac{1}{2}\big(\frac{\epsilon^2}{\hbar^2\Delta_0^2} + 1\big)^{-1/2}$.

[1] D. A. R. Dalvit, G. P. Berman, and M. Vishik, Phys. Rev. A 73, 13803 (2006).
[2] L.-M. Duan and G.-C. Guo, Phys. Rev. A 57, 737 (1998).
[3] E. Joos, H. D. Zeh, C. Kiefer, D. Giulini, J. Kupsch, and I. O. Stamatescu, Decoherence and the Appearance of a Classical World in Quantum Theory (Springer Verlag, Berlin, 2003).
[4] D. Mozyrsky and V. Privman, J. Stat. Phys. 91, 567.
[5] J. Shao, M.-L. Ge, and H. Cheng, Phys. Rev. E 53, 1243.
[6] M. G. Palma, K.-A. Suominen, and A. Ekert, Proc. R. Soc. Lond. A 452, 567 (1996).
[7] N. G. van Kampen, J. Stat. Phys. 78, 299 (1995).
[8] V. Bach, J. Fröhlich, and I. M. Sigal, Lett. Math. Phys. 34, 183 (1995).
[9] V. Bach, J. Fröhlich, and I. M. Sigal, J. Math. Phys. 41, 3985 (2000).
[10] V. Jakšić and C.-A. Pillet, Comm. Math. Phys. 176, 619.
[11] V. Jakšić and C.-A. Pillet, Comm. Math. Phys. 226, 131.
[12] W. Hunziker and I. M. Sigal, J. Math. Phys. 41, 3448.
[13] M. Merkli, M. Mück, and I. M. Sigal, preprint math-ph/0508005 and mp-arc 05-239, 2006.
[14] I. M. Sigal and V. Vasilijevic, Ann. H. Poincaré 3, 347.
[15] M. Merkli, M. Mück, and I. M. Sigal, preprint math-ph/0603006 and mp-arc 06-42, 2006.
[16] S. J. Gustafson and I. M. Sigal, Mathematical Concepts of Quantum Mechanics, 2nd edition (Springer Verlag).
[17] V. Bach, T. Chen, J. Fröhlich, and I. M. Sigal, J. Funct. Anal. 203, 44 (2003).
[18] M. Merkli, I. M. Sigal, and G. P. Berman, preprint, 2006.
[19] M. Merkli, Math. Anal. Appl. (2006).
[20] J. Fröhlich and M. Merkli, Comm. Math. Phys. 251, 235.
[21] G. P. Berman, F. Borgonovi, and D. A. R. Dalvit, preprint, quant-ph/0604024.
[22] G. P. Berman, A. R. Bishop, F. Borgonovi, and D. A. R. Dalvit, Phys. Rev. A 69, 062110 (2004).
[23] W. G. Unruh, Phys. Rev. A 51, 992 (1995).
[24] R. Benguria and M. Kac, Phys. Rev. Lett. 46, 1 (1981).
[25] D. P. DiVincenzo, Phys. Rev. A 51, 1015 (1995).
[26] A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and W. Zwerger, Rev. Mod. Phys. 59, 1 (1987).
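The closed-form eigenvalues quoted for the spin-boson $H_S$ are easy to check numerically; the bias and tunneling values below are illustrative choices of mine (with $\hbar$ set to 1), not taken from the paper:

```python
import numpy as np

# Spin-boson system Hamiltonian H_S = -(1/2) Delta0 sigma_x + (1/2) eps sigma_z,
# in units with hbar = 1; eps (bias) and Delta0 (bare tunneling) are toy values.
eps, Delta0 = 0.8, 0.6

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

H_S = -0.5 * Delta0 * sigma_x + 0.5 * eps * sigma_z

# Numerical diagonalization vs. the closed form E± = ±(1/2) sqrt(eps^2 + Delta0^2).
E = np.linalg.eigvalsh(H_S)                 # eigenvalues in ascending order
E_plus = 0.5 * np.sqrt(eps**2 + Delta0**2)
assert np.allclose(E, [-E_plus, E_plus])    # here E± = ±0.5 (a 3-4-5 triangle)
```

The same eigenvector matrix returned by a full `np.linalg.eigh` call would give the rotated representation of the coupling operator $\sigma_z$ in the energy basis.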
Rotational excitations in two-color photoassociation

Jisha Hazra and Bimalendu Deb
Department of Materials Science, and Raman Center for Atomic, Molecular and Optical Sciences, Indian Association for the Cultivation of Science, Jadavpur, Kolkata 700032, India.

We show that it is possible to excite higher rotational states in ultracold photoassociation with two laser fields. Usually, higher rotational states are suppressed in photoassociation at ultracold temperatures in the regime of the Wigner threshold laws. We propose a scheme in which one strong laser field drives a photoassociation transition close to either the J = 1 or the J = 2 rotational state of a particular vibrational level of an electronically excited molecule. The other laser field is tuned near photoassociation resonance with the higher rotational levels of the same vibrational state. The strong laser field induces a strong continuum-bound dipole coupling. The resulting dipole force between the two colliding atoms modifies the continuum states, forming continuum-bound dressed states with a significant component of higher partial waves in the continuum configuration. When the second laser is scanned near the resonance of the higher states, these states become populated due to photoassociative transitions from the modified continuum.

PACS numbers: 34.50.Cx, 34.50.Rk, 42.65.Dr, 33.20.Sn

I. Introduction

Photoassociation (PA) spectroscopy thorsheimPRL5887 ; julienneRMP7806 of ultracold atoms, by which two colliding atoms absorb a photon to form an excited molecular state, is an important tool for studying ultracold collisional properties at the interface of atomic and molecular states. PA is particularly useful for producing translationally cold molecules TsaiPRL7997 ; PilletPRL8098 ; Takekoshi ; AraujoJCP11903 ; DeiglmayrPRL10108 ; LangPRL10108 ; jingPRA8009 and generating optical Feshbach resonances FedichevPRL7796 ; fatemiPRL2002 ; theis ; EnomotoPRL101 ; debPRL10409 .
More than a decade ago, theoretical models theory1 ; theory2 were developed to explain PA line shapes in the weak-coupling regime. The effects of laser intensity on PA spectra BohnPRA5697 ; juliennePRA6099 ; williamsPRA662002 ; ZimmermannPRA6602 ; huletPRL9103 ; juliennePRA6904 have been an important current issue. Over the years, two-color Raman-type PA has emerged as an important method for creating translationally cold molecules in the ground electronic configuration. Recently, using this method, cold polar molecules DeiglmayrPRL10108 in the rovibrational ground state have been produced. Molecules created by one- or two-color PA of ultracold atoms generally possess low-lying rotational levels. Motivated by the recent experimental observation of the excitation of higher rotational states in ultracold PA with an intense laser field GomezPRA7507 , we here explore theoretically the possibility of rotational excitations in two-color PA. This may be important for producing translationally cold molecules in selective higher rotational states. Previously, two-color PA has been investigated in various other contexts bagnatoPRL7093 ; LeonhardtPRA5295 ; MolenaarPRL7796 ; JonesJPB3097 ; MarcassaPRL7394 ; SuominenPRA5195 ; ZilioPRL7696 ; AbrahamPRL7495 , such as photo-ionization of excited molecules bagnatoPRL7093 ; LeonhardtPRA5295 ; MolenaarPRL7796 ; JonesJPB3097 , shielding of atomic collisions MarcassaPRL7394 ; SuominenPRA5195 ; ZilioPRL7696 , and measurement of the s-wave scattering length AbrahamPRL7495 . In this paper we propose a method of two-color photoassociation of two homonuclear atoms for exciting higher rotational levels. Our proposed method is shown schematically in Fig. 1. One laser is a strong field and the other is a weak one. The strong laser is tuned near either the J = 1 or the J = 2 rotational state of a particular vibrational level of the excited state. This rotational state is predominantly accessed by a PA transition from the s-wave scattering state.
A photon from the strong laser causes PA excitation from the continuum (s-wave) to the bound level. A second photon from the same laser can cause stimulated de-excitation back to the continuum. This is a stimulated Raman-type process which can lead to significant excitation of higher partial waves in the two-atom continuum. Now, if the weak laser is tuned near the higher rotational states, these states get excited due to PA from the modified continuum. In this scheme of two-color PA, three photons are involved. This does not fit into a standard Λ- or V-type process. Here a bound-bound transition is absent; all the transitions are of continuum-bound type. This scheme may be viewed as a combination of Λ- and V-type processes, with the continuum acting as an intermediate state for the V-type transition.

Figure 1: A schematic diagram showing the strong (double-arrow thick line) and weak (single-arrow thin line) field couplings between the excited rotational levels and the continuum state. The strong laser modifies the continuum state by a two-photon process (curly lines) as described in the text. The weak laser is tuned near resonance with the rotational levels J ≥ 3, which are then populated due to PA transitions from the modified continuum.

Molecular rotational levels J = 1 and J = 2 are accessible from the s-wave (ℓ = 0) scattering state, but levels J ≥ 3 can only be accessed from higher partial-wave (ℓ ≥ 1) scattering states. In the previous Raman-type PA experiments, an excited molecular state is used as an intermediate state. Furthermore, two-color PA is usually carried out in the weak-coupling regime. In contrast, our proposed scheme necessarily involves one strong laser field for inducing a strong PA coupling. We demonstrate the excitation of higher rotational levels in two-color ultracold PA by resorting to a simplified model.
We first evaluate the higher partial-wave scattering states modified by the strong photoassociative coupling debPRL10409 induced by the strong laser. We employ these modified wave functions to calculate two-color stimulated line widths, which are significantly enhanced compared to those in the one-color case. The paper is organized as follows. In the following section we describe the formulation of the problem and its solution. The numerical results and discussion are given in Sec. III. Finally, the paper is concluded in Sec. IV.

II. The Model and Its Solution

To start with, let us consider that the PA laser couples continuum (scattering) states of collision energy E (with μ the reduced mass of the atom pair) of two alkali-type homonuclear ground-state atoms to an excited diatomic (molecular) bound state which asymptotically corresponds to one ground and one excited atom. Under the electric dipole approximation, the interaction Hamiltonian can be expressed through the dipole moment of each atom, whose valence electron's position is taken with respect to the center of mass of that atom; the remaining ingredients are the electron's charge, the laser field amplitude and the polarization vector of the laser. The total Hamiltonian in the center-of-mass frame of the two atoms contains the electronic part of the Hamiltonian, which includes terms corresponding to the kinetic energy of the two valence electrons, the mutual Coulomb interactions between the nuclei and the electrons, exchange, and the electronic spin-orbit interaction; the position vectors of the nuclei of atoms a and b; the Laplacian operators corresponding to the relative and center-of-mass coordinates; and the hyperfine interaction of the two atoms. Under the Born-Oppenheimer approximation, while solving the electronic part of the Hamiltonian, the nuclear coordinates appear merely as parameters.
The PA laser couples only two electronic molecular states, the initial ground and the final excited diatomic states. These internal electronic states depend parametrically on the internuclear coordinate and satisfy the corresponding eigenvalue equations. We assume that the coupling matrix element depends only on the internuclear separation; the center-of-mass motion then decouples from the relative motion, and henceforth we consider only the relative motion. By specifying the electronic parts of both the bound and the continuum states, one can calculate the dipole matrix element over the electronic parts of the two molecular levels involved in the free-bound transition and thus obtain the molecular coupling strength. The continuum-bound dressed state is assumed to be energy-normalized at its energy eigenvalue. In the absence of the atom-field interaction, the problem is to find the multichannel scattering wave function in the ground electronic configuration. The scattering channels correspond to the two separated atoms a and b in their respective hyperfine spin states, and the molecular hyperfine state is characterized by the total hyperfine spin. A channel is defined by the angular state built from the mechanical angular momentum of the relative motion of the two atoms. This asymptotic basis can be expressed in terms of the adiabatic molecular basis Tiesinga , labeled by the total electronic and nuclear spin angular momenta of the two atoms; in the case of the excited molecular state, the total electronic spin should be replaced by the electronic angular momentum. Alternatively, the adiabatic basis can also be expressed in a coupled representation. Thus the rotational state of a diatom can be expressed in terms of the rotational matrix element, whose arguments are the z-components of the angular momentum in the space-fixed and body-fixed coordinate frames and the Euler angles for the transformation from the body-fixed to the space-fixed frame.
For the ground electronic configuration, we have and ; thereby, reduces to the spherical harmonics . We thus express the ground state in the following form where is the energy-normalized scattering state with collision energy and is the density of states of the unperturbed continuum. Similarly, for a particular value of , we can expand the excited state in the following form Substitution of Eqs. (4) and (5) into the time-independent Schrödinger equation leads to coupled differential equations. These equations are solved by the use of the real-space Green’s function. The detailed method of solution for a model problem is given in Appendix A. In our model calculations, we consider only a single ground hyperfine channel. The solution can be expressed as where is the excited molecular state (unit-normalized) in the absence of the laser field and is the probability amplitude of excitation of from a particular partial wave . Here is the continuum-bound dipole matrix element and . represents the -th partial wave regular scattering solution in the absence of the laser field and is the partial light shift of the excited state. Here is the propagator as defined in Appendix A. The total probability amplitude of excitation for a particular is given by where is the total energy shift of the excited level. is the natural line width of the excited molecular state, is the bound state energy corresponding to the bound state solution of the excited state. is the frequency off-set between the laser frequency and the atomic resonance frequency . The ground state scattering solution in the presence of the PA laser is given by In the asymptotic limit (), the modified scattering wave-function behaves like where is the irregular wave function of the -th partial wave.
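As an aside, the asymptotic matching used above (reading a phase shift off the large-r form of a regular solution) can be illustrated numerically. The following sketch uses a hypothetical single-channel model, an attractive square well standing in for the actual molecular potentials; all parameters are purely illustrative and are not taken from the calculations of this paper.

```python
import numpy as np

def s_wave_phase_shift(k, V0=1.0, R=1.0, dr=1e-3, r1=7.0, r2=7.9):
    """Outward RK4 integration of the s-wave radial equation
    u'' = (V(r) - k^2) u   (units 2*mu = hbar = 1),
    starting from the regular boundary condition u(0) = 0, u'(0) = 1.
    The phase shift delta is read off by matching the asymptotic form
    u ~ sin(k r + delta) at two exterior points r1, r2 > R
    (both chosen as multiples of dr)."""
    def V(r):
        return -V0 if r < R else 0.0      # model: attractive square well

    def f(r, y):                          # y = (u, u')
        return np.array([y[1], (V(r) - k * k) * y[0]])

    y = np.array([0.0, 1.0])
    n1, n2 = int(round(r1 / dr)), int(round(r2 / dr))
    u1 = u2 = 0.0
    for i in range(n2):
        r = i * dr
        k1 = f(r, y)
        k2 = f(r + dr / 2, y + dr / 2 * k1)
        k3 = f(r + dr / 2, y + dr / 2 * k2)
        k4 = f(r + dr, y + dr * k3)
        y = y + dr / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if i + 1 == n1:
            u1 = y[0]
        if i + 1 == n2:
            u2 = y[0]
    # from u_i = C sin(k r_i + delta) at two points, tan(delta) follows
    num = u1 * np.sin(k * r2) - u2 * np.sin(k * r1)
    den = u2 * np.cos(k * r1) - u1 * np.cos(k * r2)
    return float(np.arctan2(num, den))
```

For a well of depth V0 and range R this reproduces (modulo pi) the textbook square-well result delta = arctan[(k/k') tan(k'R)] - kR with k' = sqrt(k^2 + V0).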
  rotational level            1       2       3       4       5       6
  rotational energy (GHz)    0.57    2.13    4.77    8.55   13.03   18.34
  energy spacing (GHz)           1.56    2.64    3.78    4.48    5.31
  total shift (MHz)         19.69   22.79   17.36   11.61    6.83    2.22
Table 1: Numerically calculated rotational energies (in unit of GHz) and total shift (in unit of MHz) for one-color laser intensity = 1 kW/cm for vibrational state of 1 excited state. Also given are the rotational energy spacings (in unit of GHz) for a few lowest values. Here is the phase shift due to the applied laser field and is given by where . The two-color partial stimulated line width for a particular rotational state is given by and the total stimulated line width is = . The excitation of a particular rotational state from the partial wave is governed by the following selection rule where is the total electronic orbital angular momentum and is the sum of the two individual atomic spins, i.e. . So the lowest possible partial waves which can make the largest contribution to the excitation of the rotational states = 1, 2, 3, 4, 5, 6 are 0, 0, 1, 2, 3, 4, respectively. The two-color photoassociation rate for is defined as where = and is the relative velocity of the two atoms, is the inelastic cross-section due to loss of atoms. Here implies an averaging over the distribution of initial velocities, is the translational partition function and . In the next section, we apply this formalism to a model system and obtain numerical results. III. Results and Discussion Figure 2: Two-color partial stimulated line widths (in unit of MHz) as a function of (in unit of GHz) at collisional energy E = 10 K. The intensity of laser L tuned near (a) and (b) is 40 kW/cm and the intensity of the weak laser L is 1 W/cm. The total shifts of the rotational states and are -0.79 GHz and -0.91 GHz, respectively.
                     = -1.25 GHz    = -1.48 GHz
         (MHz)      (MHz)          (MHz)
  3  1   1.55        0.0158         0.0107
  3  2   0.0000      0.0111         0.0065
  3  3   0.0000      0.0085         0.0051
  4  2   0.0000      0.0128         0.0075
  4  3   0.0000      0.0072         0.0047
  5  3   0.0000      0.0103         0.0061
Table 2: Tabulated are one- and two-color partial stimulated line widths and at = 10 K for two values. Here laser is tuned near the rotational state. The intensities of the two lasers are W/cm and = 40 kW/cm. Figure 3: The subplots (a), (b) and (c) show the light-induced scattering wavefunctions (in unit of Bohr radius Hartree) for p- (), d- () and f-wave (), respectively. The solid and dashed curves correspond to the detuning GHz and GHz, respectively. The subplots (d), (e) and (f) exhibit the corresponding field-free regular wavefunctions . All the wave functions are plotted at a collisional energy of 10 K and intensity I 40 kW/cm. For numerical illustration, we consider a model system of two cold ground state () Na atoms undergoing a PA transition from the ground state to the vibrational state of the excited molecular state GomezPRA7507 . At large internuclear distance this potential correlates to + free atoms and at short range to the 1 Born-Oppenheimer potential. In Ref. GomezPRA7507 higher rotational lines up to have been clearly observed in PA with an intense laser field. The centrifugal barrier of the two atoms lies at (= Bohr radius) whereas PA excitations occur at . Therefore, the higher rotational states are unlikely to be populated by PA transitions from partial-wave scattering states at ultra-cold temperatures in the weak-coupling regime. Previously, higher rotational levels have been excited in PA spectroscopy due to resonant dipole-dipole interaction with transitions occurring at large internuclear separations longrangerotexcitation ; longrangeforce . The numerically calculated rotational energies , energy shifts and the corresponding energy differences for the six lowest values are given in Table I.
To demonstrate the working of our proposed scheme, we resort to a simplified two-state calculation. We consider only one ground hyperfine channel with and in the absence of any external magnetic field. In the excited molecular state, we neglect the hyperfine interaction. The two-color partial stimulated line width is plotted as a function of the detuning in Fig. 2 for ranging from 3 to 6. The strong laser L is tuned near (Fig. 2a) and (Fig. 2b). From Fig. 2 we notice that strongly depends on the detuning of the strong laser from the PA resonance of the rotational level . The maximum of occurs at = 0. For lower values, the probability of rotational excitation is higher. Figure 4: ( is the light-induced phase-shift) is plotted as a function of the detuning (MHz) when L is tuned near . The total shift at 40 kW/cm is -0.91 GHz. The other parameters are = 40 kW/cm and = 10 K. Figure 5: The two-color total stimulated line width (in unit of MHz) for different (as indicated in the plots) is plotted as a function of collisional energy (in unit of MHz) when L is tuned near = 1 for = -1.25 GHz (a), = -1.48 GHz (b) with = 40 kW/cm and = 1 W/cm. For comparison, we also calculate the one-color partial stimulated line widths for from the expression . The one-color total stimulated line width is . At 10 K energy and at a laser intensity of 1 W/cm, the one-color partial stimulated line widths are = 15.46 Hz, 0, 0. A comparison between one- and two-color partial stimulated line widths is made in Table II for at a collisional energy of 10 K. The two-color total line widths , , are 0.03537 MHz, 0.0200 MHz and 0.0103 MHz, respectively, when = -1.25 GHz and they are 0.02229 MHz, 0.0122 MHz and 0.0061 MHz, respectively, for = -1.48 GHz.
The corresponding one-color weak-coupling partial as well as total stimulated line widths and for the same rotational states at a laser intensity of 1 W/cm are vanishingly small, while the two-color partial and total exceed and by several orders of magnitude. We find the energy shift is 0.79 GHz, which exceeds the spontaneous line width (say 2 MHz for the model calculation) by two orders of magnitude.
      = -2.95 GHz   = -3.049 GHz   = -3.17 GHz
  1      263.00       36900.00        -229.00
  2        4.93         659.00          -4.09
  3        0.01           2.01          -0.01
  4        0.00           0.01           0.00
Table 3: Tabulated are the when the laser is tuned near at = 10 K for three values of . The parameters are kW/cm, GHz and GHz. In the field-free case, = -1.57, = 1.20, 0 and 0. In order to trace the origin of the enhancement of , we plot the perturbed for when laser L is tuned near and the corresponding field-free regular functions in Fig. 3. It is clear from this figure that the amplitudes of are enhanced by several orders of magnitude relative to those of . Next, we calculate by using Eq. (14) when laser L is tuned near . These are given in Table III for = -2.95 GHz, -3.17 GHz and -3.049 GHz. The first two values correspond to the off-resonant and the last one to the resonant condition. The variation of with is plotted in Fig. 4, which exhibits resonance for higher partial waves induced by strong-coupling PA. The enhancement of the partial () wave amplitude is due to the term of Eq. (11). In Fig. 5, the two-color total stimulated line width is plotted as a function of collisional energy for two off-resonant values when L is tuned near = 1. The magnitude of for the higher rotational states () is less than that of . This is due to the fact that the lowest possible partial wave contributions to the excitation of the rotational states and are and , respectively, while the state can be populated from the wave which has a rotational barrier lower than that of the d- and f-waves. The two-color photoassociation rate as defined in Eq. (17) has been plotted as a function of (Fig. 6a) and (Fig. 6b).
The spectra in Fig. 6b are red-shifted due to the presence of the term in Eq. (17). From the selection rule, it is obvious that the rotational states cannot be populated by a PA transition from the s-wave scattering state. But the appearance of the lines in the PA spectra is an indication of the significant modification of the partial scattering wavefunctions by the intense light field. Figure 6: The upper panel (a) shows the two-color photoassociation rate (in unit of meter sec) as a function of the atom-field detuning (in unit of GHz) for three higher rotational levels (as indicated in the plots) for = -1.25 GHz when the laser L is tuned near . The lower panel (b) shows the same but as a function of the detuning (in unit of MHz) from the PA resonance. The other parameters for both the panels are = 40 kW/cm, = 1 W/cm and = 100 K. IV. Conclusion In the present paper we have developed a two-color PA scheme for the excitations of higher () rotational levels which are generally suppressed in the Wigner threshold law regime. We have calculated the two-color stimulated line width (for ) by fixing the strong laser near either the or state and tuning another weak laser to the higher rotational () states. Then we have compared these with the one-color line widths. The enhancement of the stimulated line width is a result of the strong-coupling photoassociative dipole interaction which in turn modifies the continuum states. This proposed method may be important for coherent control of rotational excitations and manipulation of optical Feshbach resonances of higher partial waves. Appendix A The mathematical treatment given here is closely related to our earlier work debPRL10409 . Treating the laser field classically, the effective interaction Hamiltonian under the rotating-wave approximation in the two-state basis can be expressed as From the time-independent Schrödinger equation , we obtain two coupled equations Here is assumed to include the hyperfine interaction of the chosen channel.
Substituting Eqs. (4) and (5) into the Schrödinger equations (19) and (20) we get two coupled equations where is the rotational term of the excited molecular bound state in the absence of nuclear spin, is the centrifugal term in the collision of two ground state (S) atoms, . The above two equations are solved by the Green’s function method by setting . The single channel scattering equation becomes Let and represent the regular and irregular solutions of the above equation. The appropriate Green’s function for the scattering wave function can be written as The regular function vanishes at r = 0 and the irregular solution is defined by its boundary condition only at . The energy-normalized asymptotic forms of both the regular and irregular wave functions are where is the phase-shift of the -th partial wave in the absence of PA coupling. The homogeneous part of (21) with = 0 is The Green’s function corresponding to these rovibrational states can be written as Using this Green’s function, we can write down the solution of equation (21) in the form Substituting equation (30) into equation (22) we obtain The scattering solution can now be expressed as On substitution of equation (33) into (31) and after some algebra, we obtain Let . Now, adding a term on both sides of equation (34), we can express in terms of a quantity as well as other parameters. On summing over all possible we can evaluate . Having done all this algebra, we can explicitly express
Physics 55100 Introductory material: 2-slit experiment, matter waves and addition of amplitudes–superposition principle; Uncertainty principle, properties of matter waves: Boundary conditions and energy level quantization and Schrödinger interpretation–wave equation, application to one dimensional problems, barrier penetration, Bloch states in solids and how bands form in solids; The universality of the Harmonic potential–Simple Harmonic oscillator and applications; One electron atoms, spin, transition rates; Identical particles and quantum statistics; Beyond the Schrödinger equation: Variational methods and WKB. Prereq.: MATH 39100 and MATH 39200. Pre- or coreq.: PHYS 35100, PHYS 35400 (required for Physics majors). 4 hr./wk.; 4 cr.
Linear Combinations of d Orbitals Chemistry students encountering atomic orbitals for the first time often wonder why the orbital looks so different from the others. The answer is related to the fact that boundary surface pictures of atomic orbitals typically show only the real part of these complex functions and often leave out the sign information as well. The one-electron wavefunctions resulting from the solution of the Schrödinger equation for the hydrogen atom are complex functions except when . The real forms of atomic orbitals can be constructed by taking appropriate linear combinations of the complex forms. Here, boundary surfaces of the orbitals are colored to indicate the real and imaginary components as well as the positive and negative signs. These color-coded atomic orbitals illustrate the linear combinations of the complex wavefunctions that result in the familiar four-lobe pictures. Contributed by: Lisa M. Goss (March 2011) Open content licensed under CC BY-NC-SA
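The linear combinations described above can be made concrete in a short script. The sketch below is illustrative only: it assumes the standard Condon-Shortley phase convention and shows just the l = 2, m = ±2 pair, combining the complex spherical harmonics into the real d_{x²-y²} and d_{xy} forms and confirming they are purely real.

```python
import numpy as np

def Y2(m, theta, phi):
    """Complex spherical harmonics for l = 2, m = +2 or -2
    (Condon-Shortley convention)."""
    c = 0.25 * np.sqrt(15.0 / (2.0 * np.pi))
    return c * np.sin(theta)**2 * np.exp(1j * m * phi)

# angular grid (broadcasting: theta is a column, phi a row)
theta = np.linspace(0, np.pi, 50)[:, None]
phi = np.linspace(0, 2 * np.pi, 50)[None, :]

# real combinations of the complex pair
d_x2y2 = (Y2(-2, theta, phi) + Y2(2, theta, phi)) / np.sqrt(2)
d_xy = 1j * (Y2(-2, theta, phi) - Y2(2, theta, phi)) / np.sqrt(2)
```

Both combinations come out purely real, with the familiar four-lobe cos(2φ) and sin(2φ) angular patterns that the boundary-surface pictures show.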
Notes on Chemistry codes I attended and gave a tutorial at the 11th LCI International conference last year at the Pittsburgh Supercomputing Center.  There, I had the honor to meet several leading quantum chemistry HPC code researchers. One of them was Dr. Wang Yang.  Over the reception, I picked his brain on how quantum chemistry codes work and why they are important to supercomputing research.  The half-page notes that I took gradually turned into a semi-research document as I filled in the details over the last year.  Quantum chemistry codes are some of the most important for supercomputing research, as they scale extremely well.  Dr. Yang at the time was able to scale his research simulation to almost 120,000 cores on Jaguar at Oak Ridge, the #1 supercomputer at the time. Here are my notes on chemistry codes; please feel free to send me comments or suggestions, and I hope this helps you to learn more about the field as well. Molecular properties and interactions depend largely on their electronic structure, particularly that of the outer or "most exposed" electron shell. There are a number of commonly used methods employed for electronic structure calculations in physics and quantum chemistry.  The calculations tell us about the physical arrangement or distribution of the electrons, their quantum mechanical states, molecular structures, and interatomic bond strengths.  From this information, we can derive the bulk description of the material including its conductivity, mechanical properties, bulk structure, and many other attributes. By looking at electron distributions between atoms, we can determine the type and properties of molecular bonds. At the root of this approach is determining solutions to the Schrödinger equation. Unfortunately, as we add particles to the problem (nuclei and electrons of atoms in the molecule), the equation quickly becomes virtually unsolvable due to the immense computational effort involved, except for the simplest of cases.
Even within a simple solid, consisting of one type of atom, we must consider interactions among the electrons of neighboring atoms.  But there are simply too many electrons, making everything complicated, as the many-particle Schrödinger equation cannot be simplified due to the interaction term.  During the 1960s, Walter Kohn and others developed Density Functional Theory, an iterative approach toward understanding electronic properties. Basically, DFT reduces the scope of the calculation to a single electron problem. What started out as a problem with many electrons and nuclei is now a many nuclei, single electron problem, which is much easier to solve.  We still need to consider electron-electron and electron-nucleus interactions.  At this point, DFT makes a further simplification: the mean field representation of all other electrons in the system. As a result, the effective electron-electron interaction becomes an electron-mean field interaction. All other individual electrons are no longer in the picture and the only thing left is the mean field, a function of the electron density. Now we solve the single-electron Schrödinger equation, whose solution is the wave function for the system.  From the wave function, we determine the electron density and arrive at an expression for the mean field.  The latter introduces an exchange-correlation potential that can be calculated using approximations such as the local-density approximation (LDA), which depends only on the density itself; the generalized gradient approximation (GGA), which depends on the gradient of the density; or some other advanced methods.  Finally, we arrive at the mean field and our first iteration is complete.  We plug the mean field back into the Schrödinger equation and repeat the calculation until we converge to a consistent mean field. To summarize: At the beginning, the electron wave function and mean field are both unknown. 
We assume a mean field as a starting function and calculate the electron density, which is the square of the absolute value of the wave function.  We iterate the calculation until we arrive at a self-consistent result. Since we don't know how the mean field depends on electron density, we use approximations (either LDA or GGA). Similar calculations are performed in molecular simulation by packages such as VASP (Vienna Ab initio Simulation Package) and Gaussian. The molecular dynamics approach based on DFT is called Car-Parrinello Molecular Dynamics (CPMD). This is an ab-initio quantum mechanical method in which we use the Schrödinger equation and assume the nuclei are at equilibrium and unperturbed. Instead of such quantum mechanical methods, we could use a simplified quasi-classical representation of the interactions by means of a suitable force field. In this approach, the primary variables are distances between atoms.  We do not calculate any densities or mean fields, since the force field is fixed.  Although less accurate, this approach is popular when studying biological systems, where the number of atoms (electrons and nuclei) is huge (hundreds, thousands, or more).  Such systems usually involve a small variety of atoms, namely carbon, hydrogen, nitrogen, and oxygen, which has motivated researchers to devise force field functions optimized for calculations on molecules consisting of these atoms.  Some classical force fields used in biomolecular and organic chemistry include AMBER, CHARMM, OPLS, and ECEPP.  More general force fields applicable to atoms of most or all elements in the periodic table also exist (UFF).  These force fields and their variations are the culmination of years or decades of development effort and comparison to experimentally observed molecular structures and properties. I am also proud to note that AMBER and GAMESS are available on the Windows HPC platform, and a port of Gaussian is in the works.
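The iterative mean-field procedure summarized above can be sketched in a few lines. To be clear, the following is a toy illustration, not real DFT: the exchange-correlation physics is replaced by a deliberately crude local term lam*rho(x), and the nuclei by a one-dimensional harmonic external potential. Only the structure of the loop (guess a field, solve the one-electron equation, rebuild the field from the density, mix, repeat) is meant to be faithful.

```python
import numpy as np

def scf_loop(n=201, L=10.0, lam=1.0, tol=1e-8, max_iter=200):
    """Toy self-consistent field loop: iterate the one-electron
    Schroedinger equation until the density stops changing.
    lam*rho(x) is a hypothetical stand-in for a real functional."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    v_ext = 0.5 * x**2                      # external "nuclear" potential
    # kinetic energy operator via second-order finite differences
    T = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * dx**2)
    rho = np.zeros(n)                       # initial guess: empty density
    for it in range(1, max_iter + 1):
        H = T + np.diag(v_ext + lam * rho)  # effective one-electron H
        eps, vecs = np.linalg.eigh(H)
        u = vecs[:, 0] / np.sqrt(dx)        # grid-normalized ground state
        rho_new = u**2
        if np.max(np.abs(rho_new - rho)) < tol:
            return eps[0], rho_new, x, it   # self-consistency reached
        rho = 0.5 * rho + 0.5 * rho_new     # linear mixing for stability
    raise RuntimeError("SCF did not converge")
```

The linear mixing step is the standard trick real codes use (in more sophisticated forms) to damp the oscillations that a naive replace-the-density iteration can produce.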
If you would like to know more details, please do feel free to send me a note via the blog portal. Comments (1) 1. Rangam says: LAMMPS, NAMD, GROMACS are also available and scale pretty well on WHPC clusters.  Be aware of using QLogic cards on Windows clusters. I also could not get GPFS to work properly on WHPC. I have some other interesting details I found while at my previous job.
Mathematical pre-requisites for Relativistic QFT 1. Sep 10, 2009 #1 I plan on continuing to study physics, mathematics and earth science (independently). What are the mathematical pre-requisites for learning relativistic quantum field theory as smoothly as possible? On MIT's opencourseware, it indicates that a class on advanced ODEs is enough, but we all know that they usually teach the rest of the stuff in the physics class itself. I'm more comfortable with learning the mathematical theories independently of the class. So far, I have utmost confidence in my knowledge of Newtonian physics (getting there with Lagrangian and Hamiltonian), electromagnetism, real analysis, calculus (single and multivariable), and linear algebra. I'm still getting there with ODEs but after those, I'm moving on to Fourier analysis, PDEs, topology and differential geometry. (needed for geodesy, seismology, quantum theory and general relativity) The plan is to start at basic Quantum Physics I and end at Relativistic Quantum Field Theory III. (see the opencourseware listings at http://ocw.mit.edu/OcwWeb/Physics/index.htm) Am I leaving out any mathematical course that would allow for this to go as smoothly as possible? Appropriate books are no problem for me to get my hands on so don't feel the need to leave anything out. Here's the link to MIT's math section. Can you guys help me to pick out the appropriate mathematical pre-requisites? 3. Sep 10, 2009 #2 A decent grasp of basic group theory is necessary to some extent. Otherwise, your proposed maths courses seem to cover virtually all physics I've ever come across. However, I wouldn't underestimate how long that little lot will probably take you to get to grips with :wink: 4. Sep 10, 2009 #3 For an introductory class, you already have everything you need. (You won't need the classes about differential equations[1].
Some complex analysis could help, but only a little[2]). For a more advanced class, you need to know some stuff about groups and representations of groups[3], and it would be even better if you know about Lie groups, Lie algebras and their representations[4]. To understand algebraic QFT or constructive QFT (stuff that won't be covered in the QFT class(es) you intend to take), you would need to take classes in advanced analysis[5], integration theory[6], functional analysis[7] and differential geometry[8]. 1. It's useful to understand how to find solutions of certain partial differential equations using the "separation of variables" trick. If you understand why writing [itex]\psi(x,t)=T(t)u(x)[/itex] in the Schrödinger equation gives you two separate equations, one of which is the energy eigenvalue equation, you know enough already. 2. It might help you understand an occasional comment here and there that you could almost certainly skip anyway without affecting the grade you're getting. 3. Study the definition of "group" and "representation", find out what group homomorphisms and isomorphisms are, and what the notation G/H means (where H is a normal subgroup of G). 4. Most presentations of the subject of Lie Groups and Lie algebras require that you understand differential geometry first. I know one good book that focuses on teaching only what you can understand without differential geometry (which turns out to be almost everything): "Lie groups, Lie algebras and representations", by Brian C Hall. The book "Modern differential geometry for physicists" by Chris Isham is an excellent introduction to both differential geometry and the basics of Lie group/algebra theory. You might want to get both. If you only get one, get Isham. 5. E.g. "Principles of mathematical analysis" by Walter Rudin. 6. Any book about Lebesgue integrals and that kind of stuff will do. 
"Foundations of modern analysis" by Avner Friedman contains all you need, but don't get that one unless you like your math books to be just a long sequence of definitions, theorems and proofs with no explanations. (It's actually a very good book if you like that style). 7. Friedman contains the basics. I'm currently reading "Functional analysis: spectral theory" by V.S. Sunder, and I like it a lot, but "Introductory functional analysis with applications" by Erwin Kreyszig may be a better place to start. If you don't believe you need this stuff, try reading some of this. 8. Get Isham's book. It's awesome. 5. Sep 11, 2009 #4 OK, thank you guys. It's good to know that I've only left out the subject of group theory and Lie algebras. I will get those books and in 17 - 30 weeks, I'll get started on Quantum Field Theory after learning the math and basic quantum theory. I have nothing but time on my hands and I do independent readings for 10 hours every weekday. (I'm not in college) Thanks for the book recommendations. I will get them all. (especially Isham's) I actually have some of those from the list with me right now. 6. Sep 12, 2009 #5 I'm taking first semester QFT right now. Being a math major, I usually feel comfortable with any math they throw at me. But honestly I've never heavily used anything beyond sophomore level calculus and differential equations/linear algebra. All the math you need to know beyond this, they teach you on the fly. For example, they often use Fourier transforms in QFT to derive stuff, but you never end up having to manually do one. As long as you get the basic principle, you're OK. So I wouldn't bother spending too much time on the math. That's how they teach QFT at my school anyway. Maybe at other posters' schools they teach a version that involves group theory. There's a group theory class in my department (i.e. the physics department offers their own class taught by physicists), but it's not a prereq for QFT. 7. 
Sep 12, 2009 #6 I don't think you need to know a lot about groups, but the basics help, and often fairly early on. For example, representations of the Lorentz algebra are covered in chapter 3 in Peskin and Schroeder, and in chapter 2 in Weinberg. Also, if you want to look a little beyond the formalism of QFT, then the standard model is constructed using symmetry groups. One omission I only just noticed from the OP: complex analysis. A single complex variable will do fine. But in a book like P+S you'll see propagators quite early on, which feature contour integrals in a prominent way.
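The separation-of-variables remark in footnote [1] of the thread above can also be checked numerically rather than on paper. The sketch below (units ħ = m = ω = 1; the harmonic-oscillator ground state is used purely as a convenient test function) verifies that ψ(x, t) = u(x) e^(-iEt) satisfies the time-dependent Schrödinger equation precisely because u satisfies the energy eigenvalue equation Hu = Eu.

```python
import numpy as np

# Harmonic-oscillator test case: u = exp(-x^2/2), V = x^2/2, E = 1/2.
x = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]
u = np.exp(-x**2 / 2)
V = 0.5 * x**2
E = 0.5

# spatial side: H u = -1/2 u'' + V u, with u'' by centered differences
u_xx = np.empty_like(u)
u_xx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
u_xx[0] = u_xx[1]
u_xx[-1] = u_xx[-2]
Hu = -0.5 * u_xx + V * u          # should equal E * u

# temporal side: i d/dt [u exp(-iEt)] should equal H psi
t, h = 0.7, 1e-6
psi = lambda tt: u * np.exp(-1j * E * tt)
lhs = 1j * (psi(t + h) - psi(t - h)) / (2 * h)   # i * d(psi)/dt
rhs = Hu * np.exp(-1j * E * t)                   # H acting on psi
```

Both residuals (Hu - Eu, and lhs - rhs) vanish to finite-difference accuracy, which is exactly the content of the separation argument: the time factor contributes only the eigenvalue E.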
Wave Equation The name given to certain partial differential equations in classical and quantum physics which relate the spatial and time dependence of physical functions. In this article the classical and quantum wave equations are discussed separately, with the classical equations first for historical reasons. In classical physics the name wave equation is given to the linear, homogeneous partial differential equations which have the form of Eq. (1). Here υ is a parameter with the dimensions of velocity; r represents the space coordinates x, y, z; t is the time; and ∇2 is Laplace's operator defined by Eq. (2). The function f( r ,t) is a physical observable; that is, it can be measured and consequently must be a real function. The simplest example of a wave equation in classical physics is that governing the transverse motion of a string under tension and constrained to move in a plane. A second type of classical physical situation in which the wave equation (1) supplies a mathematical description of the physical reality is the propagation of pressure waves in a fluid medium. Such waves are called acoustical waves, the propagation of sound being an example. A third example of a classical physical situation in which Eq. (1) gives a description of the phenomena is afforded by electromagnetic waves. In a region of space in which the charge and current densities are zero, Maxwell's equations for the photon lead to the wave equations (3). Here E is the electric field strength and B is the magnetic flux density; they are both vectors in ordinary space. The parameter c is the speed of light in vacuum. See Electromagnetic radiation, Maxwell's equations The nonrelativistic Schrödinger equation is an example of a quantum wave equation.
Relativistic quantum-mechanical wave equations include the Schrödinger-Klein-Gordon equation and the Dirac equation. See Quantum mechanics, Relativistic quantum theory Wave Equation a partial differential equation that describes the process of propagation of a disturbance in a medium. In the case of small disturbances and a homogeneous, isotropic medium, the wave equation has the form where x, y, and z are spatial variables; t is time; u = u(x, y, z) is the function to be determined, which characterizes the disturbance at point (x, y, z) and time t; and a is the velocity of propagation of the disturbance. The wave equation is one of the fundamental equations of mathematical physics and is applied extensively. If u is a function of only two (one) spatial variables, then the wave equation is simplified and is called a two-dimensional (one-dimensional) equation. It permits a solution in the form of a "diverging spherical wave": u = f(t - r/a)/r, where f is an arbitrary function. The so-called elementary solution (elementary wave) is of particular interest: u = δ(t - r/a)/r (where δ is the delta function); it gives the process of propagation of a disturbance produced by an instantaneous point source acting at the origin (when t = 0). Figuratively speaking, an elementary wave is an "infinite surge" on a circumference r = at that is moving away from the origin at velocity a with gradually diminishing intensity. By superimposing elementary waves it is possible to describe the process of propagation of an arbitrary disturbance. Small vibrations of a string are described by the one-dimensional wave equation: In 1747, J. d'Alembert proposed a method of solving this wave equation in terms of superimposed forward and backward waves: u = f(x - at) + g(x + at); and in 1748, L. Euler established that the functions f and g are determined by assigning so-called initial conditions.
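D'Alembert's claim is easy to verify numerically: any smooth f and g give a solution u = f(x - at) + g(x + at). The sketch below (Gaussian profiles chosen purely for convenience) checks the residual of the one-dimensional wave equation u_tt = a² u_xx by finite differences.

```python
import numpy as np

# d'Alembert solution built from two arbitrary smooth profiles
a = 2.0                                   # propagation velocity
f = lambda s: np.exp(-s**2)               # right-moving profile
g = lambda s: 0.5 * np.exp(-(s - 1.0)**2) # left-moving profile
u = lambda x, t: f(x - a * t) + g(x + a * t)

x = np.linspace(-5, 5, 401)
dx = x[1] - x[0]
t0, h = 0.3, 1e-4

# second time derivative by a centered difference
u_tt = (u(x, t0 + h) - 2 * u(x, t0) + u(x, t0 - h)) / h**2

# second space derivative by a centered difference
uu = u(x, t0)
u_xx = np.empty_like(uu)
u_xx[1:-1] = (uu[2:] - 2 * uu[1:-1] + uu[:-2]) / dx**2
u_xx[0] = u_xx[1]
u_xx[-1] = u_xx[-2]

# residual of u_tt - a^2 u_xx over interior points
residual = float(np.max(np.abs(u_tt - a**2 * u_xx)[1:-1]))
```

The residual is at the level of the finite-difference truncation error, for any choice of smooth f and g, which is the content of d'Alembert's superposition of forward and backward waves.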
wave equation [′wāv i‚kwā·zhən] In classical physics, a special equation governing waves that suffer no dissipative attenuation; it states that the second partial derivative with respect to time of the function characterizing the wave is equal to the square of the wave velocity times the Laplacian of this function. Also known as classical wave equation; d'Alembert's wave equation. Any of several equations which relate the spatial and time dependence of a function characterizing some physical entity which can propagate as a wave, including quantum-wave equations for particles.
State of electron in hydrogen atom (Physics Forums thread)

#1, Jun 17, 2012: The term (E − V), which stands for kinetic energy, is used in the Schrödinger equation. Kinetic energy of an electron needs motion and a fixed path, which leads to the conclusion that the electron is moving around the proton in orbits and is not spread out, because the derivation of the Schrödinger equation is based on kinetic energy. Is this correct?

#2, Jun 17, 2012 (Science Advisor): No, this is not correct. There are no paths and no orbits in quantum mechanics. The time-independent Schrödinger equation is a PDE which determines the wave function (for stationary states), which can be interpreted as probability density amplitudes.
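The "spread out, no orbits" picture in the answer can be made concrete with the hydrogen ground state. A minimal sketch (in units where the Bohr radius a0 = 1; the grid choices are arbitrary): the wave function is ψ(r) = e^(−r)/√π, and the radial probability density P(r) = 4πr²|ψ|² describes where the electron is likely to be found, with no path involved:

```python
import numpy as np

# Hydrogen ground state in Bohr-radius units (a0 = 1):
#   psi(r) = exp(-r) / sqrt(pi)
# The electron has no orbit; P(r) = 4*pi*r^2*|psi|^2 is the
# probability density of finding it at radius r.
r = np.linspace(0.0, 20.0, 200001)
P = 4.0 * r**2 * np.exp(-2.0 * r)   # radial probability density

r_peak = r[np.argmax(P)]            # most probable radius
norm = np.sum(P) * (r[1] - r[0])    # total probability (Riemann sum)

print(r_peak)   # most probable radius: one Bohr radius
print(norm)     # integrates to 1: the electron is somewhere
```

The density peaks at exactly one Bohr radius, the same distance as Bohr's old orbit, but the electron is distributed over all radii rather than travelling a fixed path.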
June 1, 2016 The scientific self-elimination of Heterodoxy Comment on Jamie Morgan on ‘Economists confuse Greek method with science’ You say: “One underlying question is if economics is to be a science — what kind of ‘science’ can it be?” False, economics has only the choice between complying with well-defined standards or to be thrown out of science. Science was there before economics was there and the Greeks defined it as episteme = knowledge in contradistinction to doxa = opinion. This demarcation line is often hard to draw in the concrete case but it nevertheless exists. The guiding principle for establishing knowledge is the distinction true/false: “There are always many different opinions and conventions concerning any one problem or subject-matter (such as the gods). This shows that they are not all true. For if they conflict, then at best only one of them can be true. Thus it appears that Parmenides ... was the first to distinguish clearly between truth or reality on the one hand, and convention or conventional opinion (hearsay, plausible myth) on the other ...” (Popper, 1994, pp. 39-40) This is the meaning of SCIENTIFIC truth (= formal and material consistency) which is different from religious or philosophical truth. A heterodox economist who says that ‘there is no truth’ shoots himself in the foot because if anything goes and nothing matters Orthodoxy cannot be criticized/falsified/rejected. To deny the true/false criterion means to kick oneself out of science. Asad Zaman is a prominent example. Another widespread error of Heterodoxy is to maintain that rejection/debunking of Orthodoxy is sufficient. It is not: “The problem is not just to say that something might be wrong, but to replace it by something — and that is not so easy.” (Feynman, 1992, p. 
161) Replacement means in concrete terms to replace the methodologically unacceptable microfoundations, which are nothing but the explicit formal specification of methodological individualism, by correct macrofoundations. This means in even more concrete terms that one needs a total replacement for this axiomatic hard core: “HC1 economic agents have preferences over outcomes; HC2 agents individually optimize subject to constraints; HC3 agent choice is manifest in interrelated markets; HC4 agents have full relevant knowledge; HC5 observable outcomes are coordinated, and must be discussed with reference to  equilibrium states.” (Weintraub, 1985, p. 147) The green cheese behavioral assumptions HC1, HC2, HC4 define economics as a social science. The crux of the so-called social sciences, though, is this: “By having a vague theory it is possible to get either result. ... It is usually said when this is pointed out, ‘When you are dealing with psychological matters things can’t be defined so precisely’. Yes, but then you cannot claim to know anything about it.” (Feynman, 1992, p. 159). Because of this, behavioral assumptions cannot be taken into the axiomatic foundations of a theory. Remember Aristotle: “When the premises are certain, true, and primary, and the conclusion formally follows from them, this is demonstration, and produces scientific knowledge of a thing.” As a matter of principle, behavioral assumptions/propositions are NOT certain enough and therefore they cannot be used as axiomatic foundation of economics. This is the fatal methodological blunder of Orthodoxy. Economics has to be redefined as system science and put on objective (= non-behavioral) foundations.* The mistake of Heterodoxy is to define itself in opposition to Orthodoxy. This negative identity has to be turned into a positive identity by spelling out the foundational propositions that define Heterodoxy. 
The fault of Orthodoxy has never been to apply the axiomatic-deductive method but to choose shaky behavioral assumptions as axioms. The fault of heterodox economics is that it cannot define itself with a handful of objective and certain foundational propositions. What is built upon shaky foundations eventually falls apart. This is what happened to Orthodoxy and this is why Heterodoxy never got off the ground. Does the world expect from economists to find out how people behave? No, this is the proper job of psychology, sociology, anthropology, history, political science, evolution theory, criminology, etcetera. Does the world expect from economists to figure out what profit is? Yes, of course, no philosopher, psychologist, biologist, or sociologist will ever try to figure this out. Have orthodox or heterodox economists figured out what profit is? No: "A satisfactory theory of profits is still elusive." (Desai, 2008, p. 10). So, economists can be defined as scientific write-offs who give economic policy advice without ever having understood the pivotal concept of their subject matter.** If there ever was a scientific failure worse than the flat earth theory then it is economics defined as social science. And exactly this is what Orthodoxy and Heterodoxy have in common. Egmont Kakarot-Handtke Popper, K. R. (1994). The Myth of the Framework. In Defence of Science and Rationality. London, New York, NY: Routledge. * See 'Economics is NOT a science of behavior ** See 'How the intelligent non-economist can refute every economist hands down For additional aspects see cross-references Heterodoxy. REPLY to Jamie Morgan on Jun 4 My main points in a nutshell: (i) Methodology is important — but only if it helps to promote the real thing. (ii) The real thing is to answer the question how the actual (world-) economy works. (iii) So, compared to physics the subject matter is the universe and not the learning-disabled fruit-fly called homo oeconomicus. 
(iv) Because of this, economics has to get out of folk psychology, folk sociology, folk history, and folk politics. Economics is a system science. (v) One cannot perceive the economy with the two natural eyes but only with the third eye of theory. So economics is abstract and not accessible by way of storytelling and misplaced concreteness. (vi) Theory is composed of elementary premises (= axioms) and the superstructure of derived propositions. A theory is true if it satisfies the criteria of formal and material consistency. True theory is the precondition of any policy advice. Policy advice without true theory is an abuse of science for agenda pushing. This is what economics is today. Both Orthodoxy and Heterodoxy are stuck at the proto-scientific level. (vii) Orthodoxy is microfounded (see the axioms HC1 to HC5 in the post above). This is the methodological root error/mistake. (viii) The task of Heterodoxy is to replace microfoundations by macrofoundations. REPLY to Ken Zimmerman on Jun 5 You say: "if you mean by 'real thing' the historical processes through which economic theories, actions, and ways of life are constructed, then we're on the same page." The real thing is theoretical economics, that is, the formally and materially correct explanation of how the (actual-monetary-world) economy works. The true theory is the precondition for the understanding of present and past economic reality. It is not only that Orthodoxy has failed but Heterodoxy, too. The representative economist — including historians since the German Historical School and the American Institutionalists since Veblen — has until this day NO idea of what profit is. This is comparable to a medieval physicist who has no proper understanding of the fundamental concept of energy. The contribution of historians to economic theory has been zero. So, we are certainly NOT on the same page. As I see it there is no chance that you will ever get out from behind the curve. 
For details see ‘The future of economics: why you will probably not be admitted to it, and why this is a good thing’. REPLY to Jamie Morgan on Jun 9 Imagine somebody throwing three golf balls amidst a cyclone over their shoulder into a very large sandbox. Clearly, the three balls form a triangle but no one can predict its form and size. Yet, the mathematician can tell with certainty that the sum of angles is 180 degrees (if the sandbox is Euclidean). Science is about invariants, that is, properties or relationships which remain unchanged over time. A famous example is E=mc2 which describes not a single historical event but something that is the case always and everywhere. Non-scientists and historians are glued to the ever changing surface, so they produce stories while scientists produce laws. Here, for example, is the First Economic Law, which shows the relationship between the firm, the market, and the income distribution for the pure consumption economy. Needless to say that all variables are measurable, hence, the First Economic Law is a testable proposition. This is the way how economics gets out of the proto-scientific stage. Or, as Popper put it: “It is the optimistic theory that science, that is real knowledge about the hidden real world, though certainly very difficult, is nevertheless attainable, at least for some of us.” At the moment neither orthodox nor heterodox economists are among the “some of us”. REPLY to Asad Zaman on Jun 9 The pure consumption economy is, of course, the most elementary case. The problem is that BOTH orthodox AND heterodox economists do not even understand the simple things. Because of this, they have NO chance to understand anything: “There can be no doubt whatsoever that a problem which has not yet been solved in all its aspects under its simplest conditions will be still more difficult to tackle if other, ‘more realistic’ assumptions are being made.” (Morgenstern, 1941, p. 
373) From the pure consumption economy follows by successive differentiation the complete employment equation which contains profit/loss, profit distribution, saving/dissaving, investment/disinvestment (2012), public deficit spending, and import/export. This equation then describes the actual monetary economy exhaustively and — that is decisive — it is testable. So, everybody who thinks that the axiomatically founded structural employment equation is false can try to refute it. This is how matters are settled since the ancient Greeks invented the scientific method. If you have a better methodology it would be appropriate to present a testable proposition about how the actual economy works. At the end of the day methodological discussions must result in an improved understanding of the economy or else they are vacuous. In particular, it would be interesting to learn something about Zaman's Profit Law. After all, profit is the pivotal phenomenon of the capitalist economy. Who does not understand profit understands nothing (2014). REPLY to Jamie Morgan on Jun 10 (i) You write: "... and top be clear Euclid is not science it is maths-geometry." This is not the only methodological point where you are way behind the curve. Note that science is ultimately the perfect SYNTHESIS of logic and experience: "Hilbert and Einstein again agree that geometry is a natural science based on real experiments and measurements. Thus, similarly to Einstein, Hilbert can assert: Geometry is nothing but a branch of physics; in no way whatsoever do geometrical truths differ essentially from physical truths nor are they of a different nature." (Majer, 1995, p. 
280) (ii) You ask: “How would you respond to: Axiom 1: people sometimes follow rules; Axiom 2: rules change.” My answer: there is NO such thing as a behavioral axiom.* To accept behavioral assumptions as axioms is the cardinal methodological error/mistake of Orthodoxy and Heterodoxy.** Logical consistency is secured by applying the axiomatic-deductive method and empirical consistency is secured by applying state-of-the-art testing. Isn’t it curious that genuine scientists have no problem at all with this methodology since the ancient Greeks but that so-called social scientists cannot get their head around it? (iv) A good rule for your methodological thoughts is: Whenever you meet with approval from Asad Zaman, Ken Zimmerman, Robert Locke, or other would-be scientists you can be sure that you have lost your way. Majer, U. (1995). Geometry, Intuition and Experience: From Kant to Husserl. Erkenntnis, 42(2): 261–285. URL * See ‘Austrian blather ** See ‘Economics is NOT a science of behavior REPLY to Jamie Morgan and Ken Zimmerman on Jun 14 (i) Science does not explain everything, but non-science explains nothing. Scientific explanation comes in the communicative format of theory. Non-science comes in the format of storytelling. (ii) Science is well-defined by material and formal consistency. These criteria are demanding and it is often not clear in the concrete case how to apply them. So the need for specification arises. This is where methodology comes in. (iii) Nobody is forced to do science. But if one decides to do science one has to stick to the rules. As in all walks of life, some people either do not understand the rules or misapply them. Here again, methodology can be helpful. The proper role of methodology is NOT to soften scientific standards but to enforce them.* (iv) Economics is a science as clearly communicated in the title “Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel”. 
(v) Orthodox economics, though, does NOT satisfy scientific standards. More specifically: Walrasianism, Keynesianism, Marxianism, Austrianism are PROVABLE false. (vi) Economics does not live up to the claim as stated in (iv). And this is the exact point where Heterodoxy comes in. Either Heterodoxy fully replaces Orthodoxy or both are eventually thrown out of science. There cannot be a pluralism of false theories. (vii) To replace a theory means in very practical terms to replace the foundational hard core, a.k.a. axioms, that is the Walrasian propositions HC1 to HC5 as enumerated above. (viii) Due to form, at this critical juncture Ken Zimmerman again asks a silly question from behind the curve: "Egmont, ever ask yourself how and why were the axioms made? From God? From some great human law givers? From the universe?" And this brings us directly back to the key issue of this thread, viz. to Asad Zaman's cognitive dissonance with the ancient Greeks. "To Plato's question, 'Granted that there are means of reasoning from premises to conclusions, who has the privilege of choosing the premises?' the correct answer, I presume, is that anyone has this privilege who wishes to exercise it, but that everyone else has the privilege of deciding for himself what significance to attach to the conclusions, and that somewhere there lies the responsibility, through the choice of the appropriate premises, to see to it that judgment, information, and perhaps even faith, hope and charity, wield their due influence on the nature of economic thought." (Viner, 1963, p. 12) (ix) So, nobody hinders Jamie Morgan, Asad Zaman, Ken Zimmerman and the rest of Heterodoxy to employ 'the privilege of choosing the premises'. All there is to do is to take care that "the premises are certain, true, and primary" (Aristotle). Heterodoxy defines itself by its axioms or it is scientifically non-existent. * See also 'The insignificance of Gödel's theorem for economics'. 
REPLY to Asad Zaman on Jun 14 Yes, I can, see Wikipedia (picture at the right-hand side) “But it was a second and more important quality that struck readers of the Principia. At the head of Book I stand the famous Axioms, or the Laws of motion: … For readers of that day, it was this deductive, mathematical aspect that was the great achievement.” (Truesdell, quoted in Schmiechen, 2009, p. 213) In Newton's own words: “Could all the phaenomena of nature be deduced from only thre [sic] or four general suppositions there might be great reason to allow those suppositions to be true.” (Westfall, 2008, p. 642) REPLY to blockethe on Jun 14 I appreciate your work about management. But this is an entirely different matter. Your example shows that you do not understand what the subject matter of economics is. Strictly speaking, management is the subject matter of psychology/sociology and NOT of economics. The subject matter of economics is the (world-) economy as a whole. Imagine a physicist is asked to figure out how the universe works and after some time he comes back and says: “The universe is much too large, not of direct relevance to our daily lives, and ultimately incomprehensible, so I have analyzed the molehills in my front garden — with surprising results.” One can say without contradiction that this physicist has done valuable empirical work but failed at the original task. The error/mistake of the microfoundations approach is to take green cheese behavioral assumptions as axioms. And this, indeed, is what Poincaré has told Walras in no uncertain terms: “Walras approached Poincaré for his approval. ... But Poincaré was devoutly committed to applied mathematics and did not fail to notice that utility is a nonmeasurable magnitude. ... 
He also wondered about the premises of Walras’s mathematics: It might be reasonable, as a first approximation, to regard men as completely self-interested, but the assumption of perfect foreknowledge ‘perhaps requires a certain reserve’.” (Porter, 1994, p. 154) No genuine scientist ever accepted or will ever accept the Walrasian axioms. Walrasians are de facto out of science since Walras. Most economists have not realized that economics is NOT a science of human nature/behavior/action — not of individual behavior, not of social behavior, not of rational behavior, not of irrational behavior, not of sincerity, not of corruption. All these issues belong entirely to the realms of psychology, sociology, anthropology, political science, history, criminology, social philosophy, etcetera. As you have certainly noticed,* I do not propose to change the Walrasian behavioral axioms but to completely REPLACE them by objective structural axioms. Economics is NOT a social science but a system science. And these are the three axioms of the correct macrofoundations approach: A1. Yw=WL wage income Yw is equal to wage rate W times working hours L, A2. O=RL output O is equal to productivity R times working hours L, A3. C=PX consumption expenditure C is equal to price P times quantity bought/sold X. That’s the absolute minimum for a start. This set is obviously superior to Walrasian and Keynesian axioms and leads to testable propositions. Empirical tests decide whether A1 to A3 are acceptable and NOT vacuous methodological filibuster. * If not see the documentation on this blogspot REPLY to Jamie Morgan on Jun 15 You say: “What concerns me is that you consistently respond to all inquiry with assertion ...”. True, but I give you the reference to the comprehensive argument. 
What concerns me is that you consistently overlook that your questions and arguments have already been thoroughly answered.* You say: "the failure to agree is not itself a failure — since sometimes it reminds us of the limits of knowledge and the boundaries of ignorance — which is basic also to Socratic dialogue…" Trivially true, we cannot know everything, but from this does not follow that we cannot know something. It is this limited but certain Something that science and theoretical economics is all about. Nobody needs a reminder that there are limits of knowledge and nobody needs the false modesty of 'I know that I know nothing'. For a philosopher this is fine but for a scientist this is self-disqualifying. In economics, we are FAR away from the limits of knowledge. In fact, the problem is the exact OPPOSITE: "we know little more now about 'how the economy works,' or about the modus operandi of the invisible hand than we knew in 1790, after Adam Smith completed the last revision of The Wealth of Nations." (Clower, 1999, p. 401) What we actually have are multiple approaches that are PROVABLE false. There are (at least) four heterodox profit theories and you can tell nobody that they are all true.** This lack of consistency has NOTHING to do with the limits of knowledge or the failure to agree but with incompetence and intellectual sloppiness and poor methodology and the persistent ignorance/violation of scientific standards.*** Economics is a proto-scientific swamp: "We are lost in a swamp, the morass of our ignorance. ... We have to find the roots and get ourselves out! ... Braids or bootstraps are necessary for two purposes: to pull ourselves out of the swamp and, afterwards, to keep our bits and pieces together in an orderly fashion." (Schmiechen, 2009, p. 11) This is, in simple words, what axiomatization is all about. What concerns me is that you and Asad Zaman and many others on this blog do not grasp what the ancient Greeks grasped more than 2000 years ago. 
As long as economists do not have a consistent definition of the pivotal concepts profit and income it is absurd to philosophize about the limits of knowledge. Economists are over the ears in the swamp of ignorance and have NO idea of how to pull themselves out. This is the concrete historical situation: Heterodoxy either REPLACES the vacuous Walrasian axioms HC1 to HC5 and comes forward with TESTABLE propositions about how the economy works or it goes down the scientific drain together with Orthodoxy. * For a test go occasionally to this blogspot and enter for example Gödel or Duhem-Quine or Zaman in the search field. ** See 'Heterodoxy, too, is scientific junk *** See also 'The prophets of wish-wash, ignoramus et ignorabimus, and preemptive vanitization REPLY to Asad Zaman You say: "Even though Newton calls his four laws axioms, what he means by axioms is very different from what you mean by axioms." What I mean is the SAME what Newton meant. And this is the SAME what Popper meant: "The attempt is made to collect all the assumptions, which are needed, but no more, to form the apex of the system. They are usually called the 'axioms' (or 'postulates', or 'primitive propositions'; no claim of truth is implied in the term 'axiom' as here used). The axioms are chosen in such a way that all the other statements belonging to the theoretical system can be derived from the axioms by purely logical or mathematical transformations." (1980, p. 71) And this is the SAME what Einstein meant: "Science is the attempt to make the chaotic diversity of our sense-experience correspond to a logically uniform system of thought ..." (quoted in Clower, 1998, p. 409) Newton, Popper, Einstein referred to the context of JUSTIFICATION. 
Peirce’s abduction refers to the context of DISCOVERY and it is the SAME as Popper’s Conjectures and Refutations: “It is a great mistake to suppose that the mind of the active scientist is filled with propositions which, if not proved beyond all reasonable cavil, are at least extremely probable. On the contrary, he entertains hypotheses which are almost wildly incredible, and treats them with respect for the time being. Why does he do this? Simply because any scientific proposition whatever is always liable to be refuted and dropped at short notice. A hypothesis is something which looks as if it might be true and were true, and which is capable of verification or refutation by comparison with facts. The best hypothesis, in the sense of the one most recommending itself to the inquirer, is the one which can be the most readily refuted if it is false. This far outweighs the trifling merit of being likely. For after all, what is a likely hypothesis? It is one which falls in with our preconceived ideas. But these may be wrong. Their errors are just what the scientific man is out gunning for more particularly. But if a hypothesis can quickly and easily be cleared away so as to go toward leaving the field free for the main struggle, this is an immense advantage.” (Peirce, 1931, 1.120) You constantly CONFUSE the context of discovery with the context of justification. What Peirce said about the axiomatic-deductive method is the SAME what Aristotle, Newton, Einstein, Popper said and what I mean: “Inference, which is the machinery of logic, is the process by which one belief determines another belief, habit or action. A successful inference is one that leads from true premises to true conclusions.” (quoted in Hoover, 1994, p. 300) The clearly stated premises, a.k.a axioms/postulates/principles/primitive propositions, of Orthodoxy (HC1 to HC5 above) are provably false. What I mean with true macrofoundations I have clearly stated (A1 to A3 above). 
Now it is YOUR TURN to clearly state your economic axioms. Subsequently, the truth of the respective premises is indirectly established by testing the conclusions. This settles the matter. As Peirce said: “That the settlement of opinion is the sole end of inquiry is a very important proposition.” (1992, p. 115) Clower, R. W. (1998). New Microfoundations for the Theory of Economic Growth? In G. Eliasson, C. Green, and C. R. McCann (Eds.), Microfoundations of Economic Growth., pages 409–423. Ann Arbour, MI: University of Michigan Press. Hoover, K. D. (1994). Pragmatism, Pragmaticism and Economic Method. In R. E. Backhouse (Ed.), New Directions in Economic Methodology, pages 286–315. London, New York, NY: Routledge. Peirce, C. S. (1931). Collected Papers of Charles Sanders Peirce, volume I. Cambridge, MA: Harvard University Press. URL Peirce, C. S. (1992). The Fixation of Belief. In N. Houser, and C. Kloesel (Eds.), The Essential Peirce. Selected Philosophical Writings., volume 1, pages 109–123. Bloomington, IN: Indiana University Press. REPLY to Jamie Morgan on Jun 17 You say: “Perhaps there is a different way of thinking about these problems…”. This exactly is the scientific problem: there are always many ways and opinions and questions and interests, and it can be a tedious task to figure out what is true and what is false. The difference between scientists and the rest is that scientists attempt to get a clear-cut either/or answer with the highest possible degree of certainty. Without this certainty, science cannot be cumulative. In other words, if one does not make sure that the elementary Law of the Lever or the Law of Falling Bodies is certain beyond reasonable doubt one will never arrive at more complex relationships like the Law of Gravitation or the Schrödinger equation. Physicists are so high up the ladder because each rung from the first one onward is certain and stable and reliable and can carry heavy weight. 
This is what logical and empirical consistency is all about. Economics is not cumulative but circles since Adam Smith at a rather low level around the same issues. Take value or capital theory as an example or take Wicksell’s pertinent characterization. “... when it is a matter of finding the cause of general changes in the price of commodities, and especially the influence on those of credit and the institutions regulating credit, some maintain that cheap and easy credit, in other words, a low rate of interest, will tend to increase the amount of means of payment in circulation and the demand for goods and this will tend to increase the general level of prices; while others maintain the contrary, that cheap credit means the same things as cheaper costs of production and so tends to lower the level of prices, not to raise it; and naturally ... there is no lack of more moderate opinion between the two extremes, eclectics who say that the influence of credit on prices is sometimes in one direction, sometimes in another and is sometimes nil.” (quoted in Deane, 1983, p. 8) Such is the pseudo-scientist's tautologically true answer. However, many are perfectly satisfied with this inconclusive sitcom stuff and euphemize it as Socratic. But, clearly, confused wish-wash is not that highly appreciated among genuine scientists. Economics is still at the proto-scientific level because of scientific incompetence and because many simply prefer the swampy lowlands where “nothing is clear and everything is possible” (Keynes) over the hard rocks of true/false. 
Those, who have ― for whatever reason ― established themselves in the swampy lowlands of plausible myths defend it with phrases like: there is no truth, nobody knows anything, uncertainty is ontological, the effect is sometimes in one direction, sometimes in another and sometimes nil, everybody has their own truths, quantum physics says whether the cat is dead or alive depends on the observer, knowledge is arrogance, ignorance is humility, truth is relative and culture-specific, Gödel has proved that logic has limits and ― the emotional solidarity of the incompetent ― we are ALL fallible humans. Yes, indeed, but we are NOT ALL imbeciles who accept utility maximization and equilibrium as scientific explanation of how markets work or who maintain that a dozen false profit theories are better than one correct theory. The fact of the matter is: scientists and swampies can never be friends. Deane, P. (1983). The Scope and Method of Economic Science. Economic Journal, 93(369): 1–12. URL REPLY to Asad Zaman on Jun 17 (i) You and I and everybody who uses the zero in a calculation care whether the calculation is correct and does NOT care whether the Arabs, Indians, Egyptians, or Greeks invented the zero. It is the same with fire making or methodology. The actual use and the history of concepts or tools are different things. It is the very task of the economist to figure out how the economy works and NOT to clarify who invented what. The history of scientific thought is valuable in its own right but it is a DISTRACTION in the context of economics. (ii) It is misleading and counterproductive to play the ‘experimental method of the Arabs’ against the ‘axiomatic-deductive method of the Greeks’. Science is defined by material AND formal consistency. It is the sophisticated COMBINATION of empirics and logic, or experiment and axiomatics, which delivers the winning formula of science. 
Incompetent scientists fall either on the side of crude observational empiricism or vacuous formalism. (iii) Your account of the history of science is in almost every respect false or confused. Two examples suffice here, your characterization of Bacon* and your misplaced debunking of axiomatics: "Here is what the Greeks grasped more than 2000 years ago, based on an axiomatic-deductive approach to the natural sciences: 1. The Earth is the center of the universe." It is common knowledge that: "Aristarchus of Samos was an ancient Greek astronomer and mathematician who presented the first known model that placed the Sun at the center of the known universe with the Earth revolving around it." (Wikipedia) (iv) I wonder whether anybody checks the content of the WEA Pedagogy Blog. Just for the record: I do NOT accept this easy to disprove garbage as heterodox economics or methodology. The Pedagogy Blog has to be clearly labeled as Asad Zaman's idiosyncratic contribution which does NOT represent any official consensus of the WEA. The Pedagogy Blog is sufficient to expel RWER-Heterodoxy from the sciences. (v) You say: "My critique of axiomatics is based on axiomatics as defined by Lionel Robbins, who is the founder of current economic methodology. 'The propositions of economic theory, like all scientific theory, are obviously deductions from a series of postulates. And the chief of these postulates are all assumptions involving in some way simple and indisputable facts of experience….' With this methodology, axioms are certain, and logical deductions are certain so there is no room for conflict with experience. This is exactly the same as axiomatic Greek geometry, which can never be wrong, because it is based on certainties and driven by logic." This is as confused as it can get. First of all, Robbins explicitly claims that his postulates are based on "simple and indisputable facts of experience". This is what you praise as the superior method of the Arabs. 
Robbins is the very prophet of what Popper called observationism: “These are not postulates the existence of whose counterpart in reality admits of extensive dispute once their nature is fully realized. We do not need controlled experiments to establish their validity: they are so much the stuff of our everyday experience that they only have to be stated to be recognized as obvious.” (Robbins, 1935, p. 79)

(vi) Robbins’s methodological blunder was that he based economics on BEHAVIORAL axioms (e.g. constrained optimization). Yet, there is NO such thing as a ‘certain, true, and primary’ (Aristotle) behavioral axiom. To make a long argument short: Robbins’s definition of economics has to be changed from: “Economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses.” (Robbins, 1935, p. 16) to “Economics is the science which studies how the economic system works.” (2013, p. 20) Objective structural axioms lead to relationships that are readily testable, e.g. the Profit Law or the employment equation.**

(vii) The lethal error/mistake of Orthodoxy does NOT consist in the application of the axiomatic-deductive method but in taking green cheese behavioral assumptions as axioms. From this follows that economics has to move from behavioral microfoundations to structural macrofoundations.

(viii) Science is about true/false and NOTHING else. We agree that Robbins based economics upon provably false axioms.

Kakarot-Handtke, E. (2013). Crisis and Methodology: Some Heterodox Misunderstandings. SSRN Working Paper Series, 2083519: 1–25. URL

Robbins, L. (1935). An Essay on the Nature and Significance of Economic Science. London, Bombay, etc.: Macmillan, 2nd edition.

* Go occasionally to this blogspot and enter Bacon in the search field
** See ‘Unemployment ― the fatal consequence of economists’ scientific incompetence’,
Tuesday, 22 August 2017

17 Equations That Changed the World

Equations are a vital part of our culture. The stories behind them — the people who discovered/invented them and the periods in which they lived — are fascinating, and some are particularly relevant to anybody affected by the current financial crisis. [Let’s take a look.] Words: 2060

The comments above & below are edited ([ ]) and abridged (…) excerpts from the original article by Max Nisen (businessinsider.com)

“Equations definitely CAN be dull, and they CAN seem complicated… but you CAN appreciate the beauty and importance of equations without knowing how to solve them… [My] intention was to locate them in their cultural and human context, and pull back the veil on their hidden effects on history.”

[The above is what mathematician Ian Stewart said when asked why he came out with a book titled “In Pursuit of the Unknown: 17 Equations That Changed the World”, which takes a look at the most pivotal equations of all time and puts them in a human, rather than technical, context.]

Number 17 on the list, for example, is a derivative pricing equation called Black-Scholes which contributed to the financial crisis. People took the theoretical equation too seriously, overreached its assumptions, used it to justify poor decisions, and built a trillion-dollar house of cards on it, making the crisis inevitable. From an email exchange with Professor Stewart:

“I think that the crisis became inevitable once the financial instruments being traded in gigantic quantities became so complex that no one could understand either their value or the risks they entailed. When markets trade real goods for real money, excesses can only grow to the limits of what is actually out there.
When they trade virtual goods (derivatives) for virtual money (leverage), there’s no real-world limit, so the markets can gallop off into Cloud Cuckoo Land.”

The 17 equations that changed the world are outlined below or can be seen in a slide show format by going here:

1. The Pythagorean Theorem
What does it mean: The square of the hypotenuse of a right triangle is equal to the sum of the squares of its legs.
History: Though attributed to Pythagoras, it is not certain that he was the first person to prove it. The first clear proof came from Euclid, and it is possible the concept was known 1000 years before Pythagoras by the Babylonians.
Importance: The equation is at the core of much of geometry, links it with algebra, and is the foundation of trigonometry. Without it, accurate surveying, mapmaking, and navigation would be impossible.
Modern use: To pinpoint relative location for GPS navigation.

2. The logarithm and its identities
What does it mean: You can multiply numbers by adding related numbers.
History: The initial concept was discovered by the Scottish laird John Napier of Merchiston in an effort to make the multiplication of large numbers, then incredibly tedious and time-consuming, easier and faster. It was later refined by Henry Briggs to make reference tables easier to calculate and more useful.
Importance: Logarithms were revolutionary, making calculation faster and more accurate for engineers and astronomers. That’s less important with the advent of computers, but they’re still essential to scientists.
Modern use: To inform our understanding of radioactive decay.

3. The fundamental theorem of calculus
What does it mean?: Allows the calculation of an instantaneous rate of change.
History: Calculus as we currently know it was described around the same [time] in the late 17th century by Isaac Newton and Gottfried Leibniz.
There was a lengthy debate over plagiarism and priority which may never be resolved. We use the leaps of logic and parts of the notation of both men today.
Importance: According to Stewart, “More than any other mathematical technique, it has created the modern world.” Calculus is essential in our understanding of how to measure solids, curves, and areas. It is the foundation of many natural laws, and the source of differential equations.
Modern use: To provide optimal solutions to mathematical problems associated with medicine, economics, and computer science.

4. Newton’s universal law of gravitation
What does it mean?: Calculates the force of gravity between two objects.
History: Isaac Newton derived his laws with help from earlier work by Johannes Kepler. He also used, and possibly plagiarized, the work of Robert Hooke.
Importance: Used techniques of calculus to describe how the world works. Even though it was later supplanted by Einstein’s theory of relativity, it is still essential for practical description of how objects interact with each other. We use it to this day to design orbits for satellites and probes.
Modern use: To find optimal gravitational “tubes” or pathways for space mission launches so they can be as energy efficient as possible, and also to make satellite TV possible.

5. The origin of complex numbers
What does it mean?: The square of an imaginary number is negative.
History: Imaginary numbers were originally posited by famed gambler/mathematician Girolamo Cardano, then expanded by Rafael Bombelli and John Wallis. They still existed as a peculiar, but essential, problem in math until William Hamilton described this definition.
Importance: According to Stewart, “…
most modern technology, from electric lighting to digital cameras, could not have been invented without them.” Imaginary numbers allow for complex analysis.
Modern use: To allow engineers to solve practical problems working in the plane.

6. Euler’s formula for polyhedra
What does it mean?: Describes a space’s shape or structure regardless of alignment.
History: The relationship was first described by Descartes, then refined, proved, and published by Leonhard Euler in 1750.
Importance: Fundamental to the development of topology, which extends geometry to any continuous surface. An essential tool for engineers and biologists.
Modern use: To understand the behavior and function of DNA.

7. The normal distribution
What does it mean?: Defines the standard normal distribution, a bell-shaped curve in which the probability of observing a point is greatest near the average, and declines rapidly as one moves away.
History: The initial work was by Blaise Pascal, but the distribution came into its own with Bernoulli. The bell curve we currently [use] comes from Belgian mathematician Adolphe Quetelet.
Importance: The equation is the foundation of modern statistics. Science and social science would not exist in their current form without it.
Modern use: To determine whether drugs are sufficiently effective relative to negative side effects in clinical trials.

8. The wave equation
What does it mean?: A differential equation that describes the behavior of waves, originally the behavior of a vibrating violin string.
History: The mathematicians Daniel Bernoulli and Jean d’Alembert were the first to describe this relationship in the 18th century, albeit in slightly different ways.
Importance: The behavior of waves generalizes to the way sound works, how earthquakes happen, and the behavior of the ocean.
Modern use: To predict geological formations from the sound waves generated by setting off explosives.

9. The Fourier transform
What does it mean?: Describes patterns in time as a function of frequency.
History: Joseph Fourier discovered the equation, which extended from his famous heat flow equation and from the previously described wave equation.
Importance: The equation allows complex patterns to be broken up, cleaned up, and analyzed. This is essential in many types of signal analysis.
Modern use: To compress information for the JPEG image format and discover the structure of molecules.

10. The Navier-Stokes equations
What does it mean?: The left side is the acceleration of a small amount of fluid; the right indicates the forces that act upon it.
History: Leonhard Euler made the first attempt at modeling fluid movement; French engineer Claude-Louis Navier and Irish mathematician George Stokes made the leap to the model still used today.
Importance: Once computers became powerful enough to solve this equation, it opened up a complex and very useful field of physics, allowing for, among other things, the development of modern passenger jets.
Modern use: To make vehicles more aerodynamic.

11. Maxwell’s equations
What does it mean?: Maps out the relationship between electric and magnetic fields.
History: Michael Faraday did pioneering work on the connection between electricity and magnetism; James Clerk Maxwell translated it into equations, fundamentally altering physics.
Importance: Helped predict and aid the understanding of electromagnetic waves, helping to create many technologies we use today.
Modern use: Radar, television, and modern communications.

12. Second law of thermodynamics
What does it mean?: Energy and heat dissipate over time.
History: Sadi Carnot first posited that nature does not have reversible processes.
Mathematician Ludwig Boltzmann extended the law, and William Thomson formally stated it.
Importance: Essential to our understanding of energy and the universe via the concept of entropy. It helps us realize the limits on extracting work from heat, and helped lead to a better steam engine.
Modern use: Helped prove that matter is made of atoms, which has been somewhat useful.

13. Einstein’s theory of relativity
What does it mean?: Energy equals mass times the speed of light squared.
History: The less known (among non-physicists) genesis of Einstein’s equation was an experiment by Albert Michelson and Edward Morley that proved light did not move in a Newtonian manner in comparison to changing frames of reference. Einstein followed up on this insight with his famous papers on special relativity (1905) and general relativity (1915).
Importance: Probably the most famous equation in history. Completely changed our view of matter and reality.
Modern use: Helped lead to nuclear weapons, and if GPS didn’t account for it, your directions would be off by thousands of yards.

14. The Schrödinger equation
What does it mean?: Models matter as a wave, rather than a particle.
History: Louis-Victor de Broglie pinpointed the dual nature of matter in 1924. The equation you see was derived by Erwin Schrödinger in 1926, building off of the work of physicists like Werner Heisenberg.
Importance: Revolutionized the view of physics at small scales. The insight that particles at that level exist at a range of probable states was revolutionary.
Modern use: Essential to the use of the semiconductor and the transistor, and thus most modern computer technology.

15. Shannon’s information theory
What does it mean?: Estimates the amount of data in a piece of code by the probabilities of its component symbols.
History: Developed by Bell Labs engineer Claude Shannon in the years after World War 2.
Importance: According to Stewart, “It is the equation that ushered in the information age.” By stopping engineers from seeking codes that were too efficient, it established the boundaries that made everything from CDs to digital communication possible.
Modern use: Pretty much anything that involves error detection in coding. Anybody use the internet lately?

16. The logistic model for population growth
What does it mean?: Estimates the change in a population of creatures across generations with limited resources.
History: Robert May was the first to point out that this model of population growth could produce chaos in 1975. Important work by mathematicians Vladimir Arnold and Stephen Smale helped with the realization that chaos is a consequence of differential equations.
Importance: Helped in the development of chaos theory, which has completely changed our understanding of the way that natural systems work.
Modern use: To model earthquakes and forecast the weather.

17. The Black–Scholes model
What does it mean?: Prices a derivative based on the assumption that it is riskless and that there is no arbitrage opportunity when it is priced correctly.
History: Developed by Fischer Black and Myron Scholes, then expanded by Robert Merton. The latter two won the 1997 Nobel Prize in Economics for the discovery.
Importance: Helped create the now multi-trillion-dollar derivatives market. It is argued that improper use of the formula (and its descendants) contributed to the financial crisis. In particular, the equation maintains several assumptions that do not hold true in real financial markets.
Modern use: Variants are still used to price most derivatives, even after the financial crisis.
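The equations themselves appeared as images in the original article and are missing from this text version. For reference, one conventional way of writing each (the exact notation in Stewart's book may differ slightly) is:

```latex
\begin{align}
&1.\quad a^2 + b^2 = c^2 \\
&2.\quad \log xy = \log x + \log y \\
&3.\quad \frac{df}{dt} = \lim_{h \to 0} \frac{f(t+h) - f(t)}{h} \\
&4.\quad F = G\,\frac{m_1 m_2}{d^2} \\
&5.\quad i^2 = -1 \\
&6.\quad V - E + F = 2 \\
&7.\quad \Phi(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \\
&8.\quad \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2} \\
&9.\quad \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\,e^{-2\pi i x \xi}\,dx \\
&10.\quad \rho\!\left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + \nabla\cdot\mathbf{T} + \mathbf{f} \\
&11.\quad \nabla\cdot\mathbf{E} = 0,\;\; \nabla\cdot\mathbf{H} = 0,\;\;
          \nabla\times\mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{H}}{\partial t},\;\;
          \nabla\times\mathbf{H} = \frac{1}{c}\frac{\partial \mathbf{E}}{\partial t} \\
&12.\quad dS \ge 0 \\
&13.\quad E = mc^2 \\
&14.\quad i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\psi \\
&15.\quad H = -\sum_x p(x)\,\log p(x) \\
&16.\quad x_{t+1} = k\,x_t\,(1 - x_t) \\
&17.\quad \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
          + rS\,\frac{\partial V}{\partial S} + \frac{\partial V}{\partial t} - rV = 0
\end{align}
```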
Thursday, October 27, 2011

Liberty Science Center

The center, which first opened in 1993 as New Jersey's first major state science museum, has science exhibits, the largest IMAX Dome theater in the United States, numerous educational resources, and the original Hoberman sphere, a silver, computer-driven engineering artwork designed by Chuck Hoberman. The museum opened with another artistic exhibit related to the sciences, Jim Gary's Twentieth Century Dinosaurs sculpture exhibition, on the ground floor.[1]

Liberty Science Center completed a twenty-two-month, $109 million expansion and renewal project on July 19, 2007.[2] The expansion added 100,000 square feet (9,300 m2) to the facility, bringing it to nearly 300,000 square feet (28,000 m2).[3] However, the amount of exhibit space slightly decreased with the expansion, as all the new space added is open space such as queue lines for the ticketing office. The center also has state-of-the-art surround sound and one of the world's best picture screens in its IMAX dome.

Liberty Science Center's exhibitions include:[2]

• Skyscraper! Achievement and Impact - the largest exhibition on the subject of skyscrapers in the world - with artifacts from the World Trade Center, a walk along an I-beam two stories above the exhibition floor, an earthquake shake table, a glass Schindler 400A mid-rise traction elevator, which is open to show how the elevator moves, the machine room, and the pit, and much more.

• Eat and Be Eaten - this exhibit of live animals explores the predator-prey relationship, offering deadly vipers, amazing puffer fish, and scores of other creatures.

• Infection Connection - helps guests understand how individual actions may affect global health issues. You may ride the IC Express, which shows a film about different types of infectious diseases.
• I Explore - an age-restricted area where guests under age six and their caregivers can explore aspects of the world around them through water play, a microscope, a large climbing structure, a streetscape, and a rock xylophone - made from hanging rocks that ring like bells when struck.

• Our Hudson Home - teaches guests about the wildlife and ecology of the Hudson River.

• Wonder Why - holds many of the original exhibits from the earliest days of the museum.

• Energy Quest - explores different energy types and the technologies to harness them.

• Wildlife Challenge - a seasonal outdoor exhibit in which guests can take part in a variety of physical activities designed to simulate different animals' environments. Activities include balance beams and a zip line accessible only to guests who can hold onto a rope for at least ten seconds.

• Traveling Exhibit - various exhibits on display. The first exhibit since the center re-opened was Islamic Science Re-Discovered. A recent traveling exhibit was Goose Bumps! The Science of Fear, where guests saw how they would react when exposed to creepy animals, loud noises, electric shock, and the fear of falling. The exhibit explored why their bodies reacted the way they do.

Liberty Science Center is currently hosting "Mammoths and Mastodons: Titans of the Ice Age" until January 9. This exhibit uses video installations, hands-on interactive displays, life-sized models, and fossils to teach more about the extinct mammals. Between October 16 and November 10, the exhibit showcased Lyuba, the world's best preserved woolly mammoth specimen.[4]

Jennifer A. Chalsty Center for Science Learning and Teaching

1. ^ Kolata, Gina. "Science Gets Its Chance to Dazzle", The New York Times, January 22, 1993. Accessed December 30, 2007.
2. ^ a b Kitta MacPherson. "Innovation & Inspiration", Newark Star-Ledger, October 4, 2006.
3. ^ Liberty Science Center Expansion Project, accessed January 30, 2007.
4. ^ Smith, Olivia (2009-04-21).
"Baby mammoth Lyuba, pristinely preserved, offers scientists rare look into mysteries of Ice Age", Daily News (New York).

Saturday, October 22, 2011

Decoherence 101

The Decoherence Interpretation of Quantum Mechanics, from “The New Quantum Universe” by Hey and Walters (2009)

A less extravagant (than the Copenhagen and Many Worlds interpretations) and rather more mundane attempt to solve the measurement problem goes by the name of “decoherence”. This approach argues that quantum systems can never be totally isolated from the larger environment and that Schrödinger’s equation must be applied not only to the quantum system but also to the coupled quantum environment. In real life, the “coherence” of a quantum state – the delicate phase relations between the different parts of a quantum superposition – is rapidly affected by interactions with the rest of the world outside the quantum system. Wojciech Zurek is one of the most prominent advocates of this “decoherence” approach to the measurement problem, and he speaks of the quantum coherence as “leaking out” into the environment. Zurek claims that recent years have seen a growing consensus that it is interactions of quantum systems with the environment that randomize the phases of quantum superpositions. All we have left is an ordinary non-quantum choice between states with classical probabilities and no funny interference effects. This seems a very prosaic end to the quantum measurement problem! How does this come about? Does decoherence by the environment really supply an answer to all the problems? Let us look at an experiment that claims to see decoherence of “Schrödinger cat” states in action. Serge Haroche and Jean-Michel Raimond, working in Paris with their research group, have recently performed some exciting experiments that give support to this decoherence picture.
There are three different parts to an experiment that can all interact – the quantum system, the “classical” measurement apparatus, and the environment. In their experiment the quantum system consists of an atom that can be prepared in one of two states. They measure the quantum state of the atom by injecting the atom into a cavity and using the electromagnetic field of the “cavity” as a classical “pointer.” What happens if we prepare the atom in a quantum superposition of the two states? If we treat the cavity as a second quantum system in its own right, we find that the supposedly classical pointer is now predicted to be in a “Schrödinger cat” state – a quantum superposition of two classical states of the pointer. Schrödinger’s thought experiment just highlighted the peculiarity of this situation by using his cat as a classical pointer. How do we escape from this apparent paradox? According to the decoherence picture, we must include the unavoidable coupling of the pointer to the environment. The pointer – or cavity – is under constant bombardment from random photons, air molecules and so on that constitute the “environment.” Models of this random process as a third quantum system show that all phase information between the two original atomic states with their corresponding pointer positions is very rapidly lost. For the usual classical pointer fields with many photons, this decoherence is predicted to take place in an immeasurably short time. Remarkably, by using pointer cavity fields consisting of only a few photons, Haroche and Raimond have been able to observe and measure the decoherence time of this system. They do this by sending a second atom into the cavity at varying times after the first atom and measuring interference effects that depend on the continued coherence of the wavefunction of the first atom.
By observing how fast these interference effects fall off with the time delay between the traversals through the cavity of the first and second atoms, they claim to have “caught decoherence in the act”! Einstein’s problem with the Moon can be “explained” by using a similar decoherence argument. The Moon is not an inert system – not only are its individual molecules constantly interacting with their neighbors but also its surface is under constant bombardment by particles and radiation, mainly from the Sun. The coherence of any Schrödinger cat state involving the Moon would rapidly be destroyed by these constant interactions. According to such decoherence arguments, we can rest assured that the Moon is really there after all, even when we are not looking at it. Bombardment by solar photons is enough to constitute a measurement and to destroy any quantum coherence.

Would these decoherence arguments have satisfied John Bell as an explanation of the measurement problem? Probably not! We have described not only the quantum system under observation but also the measuring apparatus as a quantum system. The quantum wavefunction for the combined system will be in a superposition of states corresponding to different classical states of the measuring apparatus, as in the experiment of Haroche and Raimond. The decoherence argument says we must include the environment as a third quantum system interacting with our measuring apparatus. As a result, phase randomization rapidly sets in and the quantum superposition is effectively reduced to a sum of different possible outcomes with classical probabilities. Bell had two problems with this approach. Firstly, all quantum states – for system, measuring apparatus, and environment – evolve according to the Schrödinger equation. It is mathematically impossible for such evolution to turn a coherent quantum superposition into an incoherent probabilistic sum.
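The phase-randomization mechanism described above can be illustrated with a toy numerical model (a minimal sketch, not a model of the Paris experiment; the sample size and seed are arbitrary). Averaging the density matrix of a two-state superposition over random environment-induced phases leaves the diagonal "classical probability" entries untouched while the off-diagonal interference terms wash out:

```python
import numpy as np

rng = np.random.default_rng(42)

def density_matrix(phase):
    """Density matrix of the pure superposition (|0> + e^{i*phase}|1>)/sqrt(2)."""
    psi = np.array([1.0, np.exp(1j * phase)]) / np.sqrt(2)
    return np.outer(psi, psi.conj())

# A perfectly coherent state: the off-diagonal (interference) terms are 0.5.
rho_pure = density_matrix(0.0)

# Coupling to an environment randomizes the relative phase. Averaging the
# density matrix over many random phases models what an observer sees once
# the environment is ignored (traced out).
phases = rng.uniform(0.0, 2.0 * np.pi, size=50_000)
rho_dephased = sum(density_matrix(p) for p in phases) / len(phases)

print(np.abs(np.round(rho_pure, 3)))      # off-diagonals at 0.5: full coherence
print(np.abs(np.round(rho_dephased, 3)))  # off-diagonals near 0: looks classical
```

Note that every individual state in the average is still a pure superposition; only the averaged description looks like a classical mixture, which is exactly the distinction Bell presses in the passage above.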
Although it is certainly true that the particular measurements one usually chooses to make display little or no quantum coherence, Bell argues that there is nothing “in principle” to stop us considering different types of measurements for which this will not be true. As Bell has said: “So long as nothing, in principle, forbids consideration of such arbitrarily complicated observables, it is not permitted to speak of wave packet reduction. While for any given observable one can find a time for which the unwanted interference is as small as you like, for any given time one can find an observable for which it is as big as you do ‘not’ like.” In Bell’s view, any mechanism for the collapse should also be applicable to small systems and should not be dependent on “the laws of large numbers.” His second problem concerned the actual measurement itself. Even if one accepts that decoherence reduces the problem to a probabilistic choice between outcomes, nowhere does decoherence say how any particular outcome is achieved. Bell did not disagree about the practicality of measurements in quantum mechanics, but he felt strongly that unless we know “exactly when and how it [wavefunction reduction] takes over from the Schrödinger equation, we do not have an exact and unambiguous formulation of our most fundamental physical theory.”

Friday, October 21, 2011

This is the sort of thing the senior undergraduate engineering students are up to today, using the computer-aided design program Solidworks.

Bicycle and Motorcycle Dynamics

Bicycle and motorcycle dynamics is the science of the motion of bicycles and motorcycles and their components, due to the forces acting on them. Dynamics is a branch of classical mechanics, which in turn is a branch of physics. Bike motions of interest include balancing, steering, braking, accelerating, suspension activation, and vibration.
The study of these motions began in the late 19th century and continues today.[1][2][3] Bicycles and motorcycles are both single-track vehicles and so their motions have many fundamental attributes in common and are fundamentally different from and more difficult to study than other wheeled vehicles such as dicycles, tricycles, and quadracycles.[4] As with unicycles, bikes lack lateral stability when stationary, and under most circumstances can only remain upright when moving forward. Experimentation and mathematical analysis have shown that a bike stays upright when it is steered to keep its center of mass over its wheels. This steering is usually supplied by a rider, or in certain circumstances, by the bike itself. Several factors, including geometry, mass distribution, and gyroscopic effect all contribute in varying degrees to this self-stability, but long-standing hypotheses and claims that any single effect, such as gyroscopic or trail, is solely responsible for the stabilizing force have been discredited.[1][5][6][7] While remaining upright may be the primary goal of beginning riders, a bike must lean in order to maintain balance in a turn: the higher the speed or smaller the turn radius, the more lean is required. This balances the roll torque about the wheel contact patches generated by centrifugal force due to the turn with that of the gravitational force. This lean is usually produced by a momentary steering in the opposite direction, called countersteering. Countersteering skill is usually acquired by motor learning and executed via procedural memory rather than by conscious thought. Unlike other wheeled vehicles, the primary control input on bikes is steering torque, not position.[8] The history of the study of bike dynamics is nearly as old as the bicycle itself.
It includes contributions from famous scientists such as Rankine, Appell, and Whipple.[2] In the early 19th century Karl von Drais, credited with inventing the two-wheeled vehicle variously called the laufmaschine, velocipede, draisine, and dandy horse, showed that a rider could balance his device by steering the front wheel.[2] By the end of the 19th century, Emmanuel Carvallo and Francis Whipple showed with rigid-body dynamics that some safety bicycles could actually balance themselves if moving at the right speed.[2] It is not clear to whom should go the credit for tilting the steering axis from the vertical, which helps make this possible.[10] In 1970, David E. H. Jones published an article in Physics Today showing that gyroscopic effects are not necessary to balance a bicycle.[6] Since 1971, when he identified and named the wobble, weave and capsize modes,[11] Robin Sharp has written regularly about the behavior of motorcycles and bicycles.[12] While at Imperial College, London, he worked with David Limebeer and Simos Evangelou.[13] In 2007, Meijaard, et al., published the canonical linearized equations of motion, in the Proceedings of the Royal Society A, along with verification by two different methods.[2] These equations assumed the tires to roll without slip, that is to say, to go where they point, and the rider to be rigidly attached to the rear frame of the bicycle.

[Figure: External forces on a bike and rider leaning in a turn: gravity in green, drag in blue, vertical ground reaction in red, net propulsive and rolling resistance in yellow, friction in response to turn in orange, and net torques on front wheel in magenta.]

External forces

As with all masses, gravity pulls the rider and all the bike components toward the earth. At each tire contact patch there are ground reaction forces with both horizontal and vertical components. The vertical components mostly counteract the force of gravity, but also vary with braking and accelerating.
For details, see the section on longitudinal stability below. The horizontal components, due to friction between the wheels and the ground, including rolling resistance, are in response to propulsive forces, braking forces, and turning forces. Aerodynamic forces due to the atmosphere are mostly in the form of drag, but can also be from crosswinds. At normal bicycling speeds on level ground, aerodynamic drag is the largest force resisting forward motion.[14] At faster speed, aerodynamic drag becomes overwhelmingly the largest force resisting forward motion.

Internal forces

Internal forces are mostly caused by the rider or by friction. The rider can apply torques between the steering mechanism (front fork, handlebars, front wheel, etc.) and rear frame, and between the rider and the rear frame. Friction exists between any parts that move against each other: in the drive train, between the steering mechanism and the rear frame, etc. Many bikes have front and rear suspensions, and some motorcycles have a steering damper to dissipate undesirable kinetic energy.[13] On bikes with rear suspensions, feedback between the drive train and the suspension is an issue designers attempt to handle with various linkage configurations and dampers.[15]

Lateral dynamics

[Figure: Balancing a bicycle by keeping the wheels under the center of mass.]

A bike remains upright when it is steered so that the ground reaction forces exactly balance all the other internal and external forces it experiences, such as gravitational if leaning, inertial or centrifugal if in a turn, gyroscopic if being steered, and aerodynamic if in a crosswind.[14] Steering may be supplied by a rider or, under certain circumstances, by the bike itself. This self-stability is generated by a combination of several effects that depend on the geometry, mass distribution, and forward speed of the bike. Tires, suspension, steering damping, and frame flex can also influence it, especially in motorcycles.
Forward speed

Center of mass location

The farther forward (closer to the front wheel) the center of mass of the combined bike and rider, the less the front wheel has to move laterally in order to maintain balance. Conversely, the further back (closer to the rear wheel) the center of mass is located, the more front-wheel lateral movement or bike forward motion will be required to regain balance. This can be noticeable on long-wheelbase recumbents and choppers. It can also be an issue for touring bikes with a heavy load of gear over or even behind the rear wheel.[18] Mass over the rear wheel can be more easily controlled if it is lower than mass over the front wheel.[10]

A bike is also an example of an inverted pendulum. Just as a broomstick is easier to balance than a pencil, a tall bike (with a high center of mass) can be easier to balance when ridden than a low one because its lean rate will be slower.[19] However, a rider can have the opposite impression of a bike when it is stationary. A top-heavy bike can require more effort to keep upright, when stopped in traffic for example, than a bike which is just as tall but with a lower center of mass. This is an example of a vertical second-class lever. A small force at the end of the lever, the seat or handlebars at the top of the bike, more easily moves a large mass if the mass is closer to the fulcrum, where the tires touch the ground. This is why touring cyclists are advised to carry loads low on a bike, and panniers hang down on either side of front and rear racks.[20]

\text{Trail} = \frac{R_w \cos(A_h) - O_f}{\sin(A_h)}

where Rw is the wheel radius, Ah is the head angle measured clockwise from the horizontal, and Of is the fork offset or rake. Trail can be increased by increasing the wheel size, decreasing or slackening the head angle, or decreasing the fork rake. The more trail a traditional bike has, the more stable it feels.
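The trail formula above is easy to evaluate numerically. Here is a minimal Python sketch; the wheel radius, head angle, and fork offset used below are illustrative typical road-bike numbers, not values taken from the text:

```python
import math

def trail(wheel_radius_m, head_angle_deg, fork_offset_m):
    """Trail = (R_w*cos(A_h) - O_f) / sin(A_h), with the head angle A_h
    measured from the horizontal, as in the formula above."""
    a = math.radians(head_angle_deg)
    return (wheel_radius_m * math.cos(a) - fork_offset_m) / math.sin(a)

# Illustrative numbers: ~0.335 m wheel radius (700c), 73 degree head
# angle, 45 mm fork offset -- gives a trail in the mid-50s of mm.
print(round(trail(0.335, 73.0, 0.045) * 1000, 1))  # trail in mm
```

Raising the head angle toward vertical or increasing the fork offset shrinks the result, matching the qualitative statements in the text.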
Bikes with negative trail (where the contact patch is actually in front of where the steering axis intersects the ground), while still ridable, are reported to feel very unstable. Bikes with too much trail can feel difficult to steer. Normally, road racing bicycles have more trail than mountain bikes or touring bikes. In the case of mountain bikes, less trail allows more accurate path selection off-road, and also allows the rider to recover from obstacles on the trail which might knock the front wheel off course. Touring bikes are built with small trail to allow the rider to control a bike weighed down with baggage. As a consequence, an unloaded touring bike can feel unstable. In bicycles, fork rake, often a curve in the fork blades forward of the steering axis, is used to diminish trail.[22] Bikes with negative trail exist, such as the Python Lowracer, and are ridable, and an experimental bike with negative trail has been shown to be self-stable.[1]

A small survey by Whitt and Wilson[14] found: However, these ranges are not hard and fast. For example, LeMond Racing Cycles offers [24] both with forks that have 45 mm of offset or rake and the same size wheels:

• a 2007 Filmore, designed for the track, with a head angle that varies from 72½° to 74°, depending on frame size, and thus trail that varies from 61 mm to 51.5 mm.

A measurement similar to trail, called either mechanical trail, normal trail, or true trail,[26] is the perpendicular distance from the steering axis to the centroid of the front wheel contact patch.

Steering mechanism mass distribution

Gyroscopic effects

At low forward speeds, the precession of the front wheel is too quick, contributing to an uncontrolled bike's tendency to oversteer, start to lean the other way and eventually oscillate and fall over.
At high forward speeds, the precession is usually too slow, contributing to an uncontrolled bike's tendency to understeer and eventually fall over without ever having reached the upright position.[10] This instability is very slow, on the order of seconds, and is easy for most riders to counteract. Thus a fast bike may feel stable even though it is actually not self-stable and would fall over if it were uncontrolled. A bicycle wheel with an internal flywheel for enhanced gyroscopic effect is under development as a commercial product, the Gyrobike, for making it easier to learn to ride bicycles. Another contribution of gyroscopic effects is a roll moment generated by the front wheel during countersteering. For example, steering left causes a moment to the right. The moment is small compared to the moment generated by the out-tracking front wheel, but begins as soon as the rider applies torque to the handlebars and so can be helpful in motorcycle racing.[9] For more detail, see the countersteering article. However, even without self-stability a bike may be ridden by steering it to keep it over its wheels.[6] Note that the effects mentioned above that would combine to produce self-stability may be overwhelmed by additional factors such as headset friction and stiff control cables.[14] This video shows a riderless bicycle exhibiting self-stability.

Motorcycles leaning in a turn.

Cyclist riding with no hands on the handlebars.

r = \frac{w}{\delta \cos(\phi)}

where r is the approximate radius, w is the wheelbase, \delta is the steer angle, and \phi is the caster angle of the steering axis.[9]

\theta = \arctan\left(\frac{v^2}{gr}\right)

where v is the forward speed, r is the radius of the turn and g is the acceleration of gravity.[28] This is in the idealized case.
A slight increase in the lean angle may be required on motorcycles to compensate for the width of modern tires at the same forward speed and turn radius.[25]

r = \frac{w \cos(\theta)}{\delta \cos(\phi)}

\arcsin\left( t \, \frac{\sin(\phi)}{h - t} \right)

In order to initiate a turn and the necessary lean in the direction of that turn, a bike must momentarily steer in the opposite direction. This is often referred to as countersteering. With the front wheel now at a finite angle to the direction of motion, a lateral force is developed at the contact patch of the tire. This force creates a torque around the longitudinal (roll) axis of the bike. This torque causes the bike to roll in the opposite direction of the turn. Where there is no external influence, such as an opportune side wind to create the force necessary to lean the bike, countersteering is necessary to initiate a rapid turn.[28]

Steady-state turning

Steering angle

\Delta = \delta \cos(\phi)

where \Delta is the kinematic steering angle, \delta is the steering angle, and \phi is the caster angle of the steering axis.[9]

where r is the approximate radius, w is the wheelbase, \theta is the lean angle, \delta is the steering angle, and \phi
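The idealized lean-angle and turn-radius relations above can be combined in a few lines of Python. The speeds and geometry below are made-up illustrative values, not figures from the text:

```python
import math

def lean_angle_deg(v_mps, turn_radius_m, g=9.81):
    """Idealized lean angle theta = arctan(v^2 / (g*r)) from the text."""
    return math.degrees(math.atan(v_mps**2 / (g * turn_radius_m)))

def turn_radius_m(wheelbase_m, lean_deg, steer_deg, caster_deg):
    """r = w*cos(theta) / (delta*cos(phi)): the kinematic estimate above,
    with the steer angle delta taken in radians (small-angle use)."""
    return (wheelbase_m * math.cos(math.radians(lean_deg))
            / (math.radians(steer_deg) * math.cos(math.radians(caster_deg))))

# Illustrative case: 10 m/s (36 km/h) through a 20 m radius turn
# requires roughly 27 degrees of lean.
print(round(lean_angle_deg(10.0, 20.0), 1))
```

Doubling the speed at the same radius quadruples v²/(gr), so the required lean angle grows quickly, which matches everyday riding experience.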
is the caster angle of the steering axis.[9] Fourth, camber thrust contributes to the centripetal force necessary to cause the bike to deviate from a straight path, along with cornering force due to the slip angle, and can be the largest contributor.[25] Camber thrust contributes to the ability of bikes to negotiate a turn with the same radius as automobiles but with a smaller steering angle.[25] When a bike is steered and leaned in the same direction, the camber angle of the front tire is greater than that of the rear and so can generate more camber thrust, all else being equal.[9]

No hands

While countersteering is usually initiated by applying torque directly to the handlebars, on lighter vehicles such as bicycles, it can also be accomplished by shifting the rider's weight. If the rider leans to the right relative to the bike, the bike will lean to the left to conserve angular momentum, and the combined center of mass will remain in the same vertical plane. This leftward lean of the bike, called counter lean by some authors,[25] will cause it to steer to the left and initiate a right-hand turn as if the rider had countersteered to the left by applying a torque directly to the handlebars.[28] Note that this technique may be complicated by additional factors such as headset friction and stiff control cables.

Gyroscopic effects

Two-wheel steering

Rear-wheel steering

Because of the theoretical benefits, especially a simplified front-wheel drive mechanism, attempts have been made to construct a ridable rear-wheel steering bike. The Bendix Company built a rear-wheel steering bicycle, and the U.S. Department of Transportation commissioned the construction of a rear-wheel steering motorcycle: both proved to be unridable. Rainbow Trainers, Inc.
in Alton, Illinois, offered US$5,000 to the first person "who can successfully ride the rear-steered bicycle, Rear Steered Bicycle I".[35] One documented example of someone successfully riding a rear-wheel steering bicycle is that of L. H. Laiterman at the Massachusetts Institute of Technology, on a specially designed recumbent bike.[14] The difficulty is that turning left, accomplished by turning the rear wheel to the right, initially moves the center of mass to the right, and vice versa. This complicates the task of compensating for leans induced by the environment.[36] Examination of the eigenvalues for bicycles with common geometries and mass distributions shows that the rear-wheel steering configuration is inherently unstable. However, designs have been published that do not suffer this problem.[1]

Center steering

Flevobike with center steering

Between the extremes of bicycles with classical front-wheel steering and those with strictly rear-wheel steering is a class of bikes with a pivot point somewhere between the two, referred to as center-steering, similar to articulated steering. An early implementation of the concept was the Phantom bicycle in the early 1870s, promoted as a safer alternative to the penny-farthing.[37] This design allows for simple front-wheel drive, and current implementations appear to be quite stable, even ridable no-hands, as many photographs illustrate.[38][39]

Tiller effect

Tiller effect is the expression used to describe how handlebars that extend far behind the steering axis (head tube) act like a tiller on a boat, in that one moves the bars to the right in order to turn the front wheel to the left, and vice versa. This situation is commonly found on cruiser bicycles, some recumbents, and even some cruiser motorcycles.
It can be troublesome when it limits the ability to steer because of interference or the limits of arm reach.[41]

Tires have a large influence over bike handling, especially on motorcycles.[9][25] Through a combination of cornering force and camber thrust, tires generate the lateral forces necessary for steering and balance. Tire inflation pressures have also been found to be important variables in the behavior of a motorcycle at high speeds.[42] Because the front and rear tires can have different slip angles due to weight distribution, tire properties, etc., bikes can experience understeer or oversteer. Of the two, understeer, in which the front wheel slides more than the rear wheel, is more dangerous since front-wheel steering is critical for maintaining balance.[9] Also, because real tires have a finite contact patch with the road surface that can generate a scrub torque, and when in a turn, can experience some side slipping as they roll, they can generate torques about an axis normal to the plane of the contact patch.

Bike tire contact patch during a right-hand turn

High side

Maneuverability and handling

Rider control inputs

Differences from automobiles

Rating schemes

• The Koch index is the ratio between peak steering torque and the product of peak lean rate and forward speed. Large, touring motorcycles tend to have a high Koch index, sport motorcycles tend to have a medium Koch index, and scooters tend to have a low Koch index.[9] It is easier to maneuver light scooters than heavy motorcycles.

Lateral motion theory

A bike is a nonholonomic system because its outcome is path-dependent. In order to know its exact configuration, especially location, it is necessary to know not only the configuration of its parts, but also their histories: how they have moved over time.
This complicates mathematical analysis.[28] Finally, in the language of control theory, a bike exhibits non-minimum phase behavior.[44] It turns in the direction opposite of how it is initially steered, as described above in the section on countersteering.

Degrees of freedom

Graphs of bike steer angle and lean angle vs turn radius.

The number of degrees of freedom of a bike depends on the particular model being used. The simplest model that captures the key dynamic features, four rigid bodies with knife-edge wheels rolling on a flat smooth surface, has 7 degrees of freedom (configuration variables required to completely describe the location and orientation of all 4 bodies):[2]

1. x coordinate of rear wheel contact point
2. y coordinate of rear wheel contact point
3. orientation angle of rear frame (yaw)
4. rotation angle of rear wheel
5. rotation angle of front wheel
6. lean angle of rear frame (roll)
7. steering angle between rear frame and front end

Equations of motion

The equations of motion of an idealized bike, consisting of

• a rigid frame,
• a rigid fork,
• two knife-edged, rigid wheels,

can be written as the lean equation

M_{\theta\theta}\ddot{\theta}_r + K_{\theta\theta}\theta_r + M_{\theta\psi}\ddot{\psi} + C_{\theta\psi}\dot{\psi} + K_{\theta\psi}\psi = f_{\theta_r}

and the steer equation

M_{\psi\psi}\ddot{\psi} + C_{\psi\psi}\dot{\psi} + K_{\psi\psi}\psi + M_{\psi\theta}\ddot{\theta}_r + C_{\psi\theta}\dot{\theta}_r + K_{\psi\theta}\theta_r = f_{\psi}

where

• θr is the lean angle of the rear assembly,
• ψ is the steer angle of the front assembly relative to the rear assembly, and
• f_{\theta_r} and f_{\psi} are the corresponding components of the external force vector f.

These can be represented in matrix form as

M\ddot{\mathbf q} + C\dot{\mathbf q} + K\mathbf q = \mathbf f

where

• M is the symmetrical mass matrix which contains terms that include only the mass and geometry of the bike,
• C is the so-called damping matrix, even though an idealized bike has no dissipation, which contains terms that include the forward speed v and is asymmetric,
• K is the so-called stiffness matrix which contains terms that
include the gravitational constant g and v^2 and is symmetric in g and asymmetric in v^2,

• \mathbf q is a vector of lean angle and steer angle, and
• \mathbf f is a vector of external forces, the moments mentioned above.

Eigenvalues plotted against forward speed for a typical utility bicycle simplified to have knife-edge wheels that roll without slip.

Wobble or shimmy

Eigenvalues plotted against forward speed for a motorcycle modeled with frame flexibility and realistic tire dynamics. Additional modes can be seen, such as wobble, which becomes unstable at 43.7 m/s.

Wobble, shimmy, tank-slapper, speed wobble, and death wobble are all words and phrases used to describe a rapid (4–10 Hz) oscillation of primarily just the front end (front wheel, fork, and handlebars). The rest of the bike remains essentially unaffected. This instability occurs mostly at high speed and is similar to that experienced by shopping cart wheels, airplane landing gear, and automobile front wheels.[9][10] While wobble or shimmy can be easily remedied by adjusting speed, position, or grip on the handlebar, it can be fatal if left uncontrolled.[47] This AVI movie shows wobble.

Rear wobble

Design criteria

The effect that the design parameters of a bike have on these modes can be investigated by examining the eigenvalues of the linearized equations of motion.[42] For more details on the equations of motion and eigenvalues, see the section on the equations of motion above. Some general conclusions that have been drawn are described here. The lateral and torsional stiffness of the rear frame and the wheel spindle affects wobble-mode damping substantially. Long wheelbase and trail and a flat steering-head angle have been found to increase weave-mode damping. Lateral distortion can be countered by locating the front fork torsional axis as low as possible. Cornering weave tendencies are amplified by degraded damping of the rear suspension.
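The eigenvalue analysis described in the surrounding text can be sketched numerically: convert M q̈ + C q̇ + K q = f (with f = 0) to first-order form and inspect the real parts of the eigenvalues at each forward speed. The 2x2 matrices below are placeholder illustrative values, NOT the published benchmark parameters; only the structure (C = v·C1 asymmetric, K = g·K0 + v²·K2) follows the description above:

```python
import numpy as np

# Placeholder matrices (illustrative only); C and K carry the speed
# dependence described in the text: C = v*C1, K = g*K0 + v^2*K2.
M  = np.array([[80.0, 2.3], [2.3, 0.30]])
C1 = np.array([[0.0, 33.9], [-0.85, 1.7]])
K0 = np.array([[-80.9, -2.6], [-2.6, -0.80]])
K2 = np.array([[0.0, 76.6], [0.0, 2.65]])
g = 9.81

def eigenvalues(v):
    """Eigenvalues of M q'' + C q' + K q = 0 at forward speed v,
    via the first-order form x' = A x with x = (q, q')."""
    C = v * C1
    K = g * K0 + v**2 * K2
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)

# Self-stability at speed v <=> every eigenvalue has a negative real part,
# exactly the criterion behind the eigenvalue-vs-speed plots.
for v in (0.0, 3.0, 6.0):
    print(v, bool(np.all(eigenvalues(v).real < 0)))
```

At v = 0 the model is an inverted pendulum, so at least one eigenvalue has a positive real part; whether a stable speed range appears depends entirely on the matrix entries, which here are only placeholders.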
The cornering and camber stiffnesses and the relaxation length of the rear tire make the largest contribution to weave damping. The same parameters of the front tire have a lesser effect. Rear loading also amplifies cornering weave tendencies. Rear load assemblies with appropriate stiffness and damping, however, were successful in damping out weave and wobble oscillations.

Other hypotheses

Examples in print: And online:

Longitudinal dynamics

A bicyclist performing a wheelie.

The net aerodynamic drag forces may be considered to act at a single point, called the center of pressure.[25] At high speeds, this will create a net moment about the rear driving wheel and result in a net transfer of load from the front wheel to the rear wheel.[25] Also, depending on the shape of the bike and the shape of any fairing that might be installed, aerodynamic lift may be present that either increases or further reduces the load on the front wheel.[25]

Though longitudinally stable when stationary, a bike may become longitudinally unstable under sufficient acceleration or deceleration, and Euler's second law can be used to analyze the ground reaction forces generated.[49] For example, the normal (vertical) ground reaction forces at the wheels for a bike with a wheelbase L and a center of mass at height h and at a distance b in front of the rear wheel hub, and for simplicity, with both wheels locked, can be expressed as:[9]

N_r = mg\left(\frac{L-b}{L} - \mu \frac{h}{L}\right)

for the rear wheel and

N_f = mg\left(\frac{b}{L} + \mu \frac{h}{L}\right)

for the front wheel. The frictional (horizontal) forces are simply

F_r = \mu N_r

for the rear wheel and

F_f = \mu N_f

for the front wheel, where μ is the coefficient of friction, m is the total mass of the bike and rider, and g is the acceleration of gravity.
Therefore, if \mu \ge \frac{L-b}{h}, the normal force on the rear wheel drops to zero and the bike begins to pitch forward over the front wheel.

\theta = \tan^{-1}\left(\frac{1}{\mu}\right)

On the other hand, if the center of mass height is behind or below the line, as is true, for example, on most tandem bicycles or long-wheelbase recumbent bicycles, then, even if the coefficient of friction is 1.0, it is impossible for the front wheel to generate enough braking force to flip the bike. It will skid unless it hits some fixed obstacle, such as a curb. Of course, the angle of the terrain can influence all of the calculations above. All else remaining equal, the risk of pitching over the front end is reduced when riding up hill and increased when riding down hill. The possibility of performing a wheelie increases when riding up hill,[50] and is a major factor in motorcycle hillclimbing competitions.

A motorcyclist performing a stoppie.

Front-wheel braking

Rear-wheel braking

The rear brake of an upright bicycle can only produce about 0.1 g (1 m/s²) deceleration at best,[14] because of the decrease in normal force at the rear wheel as described above. All bikes with only rear braking are subject to this limitation: for example, bikes with only a coaster brake, and fixed-gear bikes with no other braking mechanism. There are, however, situations that may warrant rear-wheel braking:[52]

• Front brake failure.[52]

Mountain bike rear suspension

Bikes may have front suspension only, rear suspension only, full suspension, or no suspension; suspensions operate primarily in the central plane of symmetry, though with some consideration given to lateral compliance.[25] The goals of a bike suspension are to reduce vibration experienced by the rider, maintain wheel contact with the ground, and maintain vehicle trim.[9] The primary suspension parameters are stiffness, damping, sprung and unsprung mass, and tire characteristics.[25] Besides irregularities in the terrain, brake, acceleration, and drive-train forces can also activate the suspension as described above.
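The load-transfer equations above can be checked directly: the two normal forces always sum to the total weight, and the rear wheel unloads completely exactly at μ = (L − b)/h. The bike-plus-rider numbers below are assumed for illustration, not taken from the text:

```python
def wheel_loads(m_kg, L_m, b_m, h_m, mu, g=9.81):
    """Normal forces from the text's locked-wheel braking equations:
    N_r = mg((L-b)/L - mu*h/L),  N_f = mg(b/L + mu*h/L)."""
    W = m_kg * g
    N_r = W * ((L_m - b_m) / L_m - mu * h_m / L_m)
    N_f = W * (b_m / L_m + mu * h_m / L_m)
    return N_r, N_f

# Assumed bike + rider: 85 kg, 1.02 m wheelbase, center of mass 0.33 m
# in front of the rear hub and 1.1 m high.
m, L, b, h = 85.0, 1.02, 0.33, 1.1
for mu in (0.3, (L - b) / h):        # second value is the pitch-over limit
    N_r, N_f = wheel_loads(m, L, b, h, mu)
    print(round(mu, 3), round(N_r, 1), round(N_f, 1))
```

At the second μ the rear normal force is exactly zero, which is the threshold for the stoppie described in the text; any harder braking and only the front wheel carries load.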
Examples include bob and pedal feedback on bicycles, the shaft effect on motorcycles, and squat and brake dive on both.

The study of vibration in bikes includes its causes, such as engine balance,[53] wheel balance, ground surface, and aerodynamics; its transmission and absorption; and its effects on the bike, the rider, and safety.[54] An important factor in any vibration analysis is a comparison of the natural frequencies of the system with the possible driving frequencies of the vibration sources.[55] A close match means mechanical resonance that can result in large amplitudes. A challenge in vibration damping is to create compliance in certain directions (vertically) without sacrificing the frame rigidity needed for power transmission and handling (torsionally).[56] Another issue with vibration for the bike is the possibility of failure due to material fatigue.[57] Effects of vibration on riders include discomfort, loss of efficiency, hand-arm vibration syndrome (a secondary form of Raynaud's disease), and whole-body vibration. Vibrating instruments may be inaccurate or difficult to read.[57]

In bicycles

The primary cause of vibrations in a properly functioning bicycle is the surface over which it rolls. In addition to pneumatic tires and traditional bicycle suspensions, a variety of techniques have been developed to damp vibrations before they reach the rider. These include materials, such as carbon fiber, either in the whole frame or just key components such as the front fork, seatpost, or handlebars; tube shapes, such as curved seat stays;[58] and special inserts, such as Zertz by Specialized,[59][60] and Buzzkills by Bontrager.

In motorcycles

In addition to the road surface, vibrations in a motorcycle can be caused by the engine and wheels, if unbalanced.
Manufacturers employ a variety of technologies to reduce or damp these vibrations, such as engine balance shafts, rubber engine mounts,[61] and tire weights.[62] The problems that vibration causes have also spawned an industry of after-market parts and systems designed to reduce it. Add-ons include handlebar weights,[63] isolated foot pegs, and engine counterweights. At high speeds, motorcycles and their riders may also experience aerodynamic flutter or buffeting.[64] This can be abated by changing the air flow over key parts, such as the windshield.[65]

• David Jones built several bikes in a search for an unridable configuration.[6]
• Schwab and Kooijman have performed measurements with an instrumented bike.[67]

See also

1. a b c d e f g J. D. G. Kooijman, J. P. Meijaard, J. M. Papadopoulos, A. Ruina, and A. L. Schwab (April 15, 2011). "A bicycle can be self-stable without gyroscopic or caster effects" (PDF). Science 332 (6027): 339–342. Bibcode 2011Sci...332..339K. doi:10.1126/science.1201959.
2. a b c d e f g h i j k l m n o p q J. P. Meijaard, J. M. Papadopoulos, A. Ruina, and A. L. Schwab (2007). "Linearized dynamics equations for the balance and steer of a bicycle: a benchmark and review" (PDF). Proc. R. Soc. A 463 (2084): 1955–1982. Bibcode 2007RSPSA.463.1955M. doi:10.1098/rspa.2007.1857.
4. ^ Pacejka, Hans B. (2006). Tire and Vehicle Dynamics (2nd ed.). Society of Automotive Engineers, Inc. pp. 517–585. ISBN 0-7680-1702-5. "The single track vehicle is more difficult to study than the double track automobile and poses a challenge to the vehicle dynamicist."
5. a b c d e f Klein, Richard E.; et al. "Bicycle Science". Archived from the original on 2008-02-13. Retrieved 2008-09-09.
6. a b c d e f Jones, David E. H. (1970). "The stability of the bicycle" (PDF). Physics Today 23 (4): 34–40. doi:10.1063/1.3022064. Retrieved 2008-09-09.
7. ^ Sharp, R. S. (2008). "On the stability and control of the bicycle". Applied Mechanics Reviews 61 (6): 1–24.
8.
a b c d Sharp, R. S. (July 2007). "Motorcycle Steering Control by Road Preview". Journal of Dynamic Systems, Measurement, and Control (ASME) 129 (July 2007): 373–381. doi:10.1115/1.2745842.
9. a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af ag Cossalter, Vittore (2006). Motorcycle Dynamics (Second ed.). pp. 241–342. ISBN 978-1-4303-0861-4.
10. a b c d e f g Wilson, David Gordon; Jim Papadopoulos (2004). Bicycling Science (Third ed.). The MIT Press. pp. 263–390. ISBN 0-262-73154-1.
11. ^ Sharp, R. S. (1971). "The stability and control of motorcycles". Journal of Mechanical Engineering Science 13 (5): 316–329. doi:10.1243/JMES_JOUR_1971_013_051_02.
12. ^ Sharp, R. S. (1985). "The Lateral Dynamics of Motorcycles and Bicycles". Vehicle System Dynamics 14 (4–6): 265–283. doi:10.1080/00423118508968834.
13. a b c Limebeer, D. J. N.; R. S. Sharp and S. Evangelou (November 2002). "Motorcycle Steering Oscillations due to Road Profiling". Transactions of the ASME 69 (6): 724–739. Bibcode 2002JAM....69..724L. doi:10.1115/1.1507768.
14. a b c d e f g h i j k l Whitt, Frank R.; David G. Wilson (1982). Bicycling Science (Second ed.). Massachusetts Institute of Technology. pp. 198–233. ISBN 0-262-23111-5.
15. ^ Phillips, Matt (April 2009). "You Don't Know Squat". Mountain Bike (Rodale): 39–45.
17. ^ Fajans, Joel. "Email Questions and Answers: Balancing at low speeds". Retrieved 2006-08-23.
18. ^ "MaxMoto: Motorcycle Touring Tips Part 3. Preparing the Bike." Retrieved 2008-06-28.
20. ^ REI. "Cycle Expert Advice: Packing for a Tour". Retrieved 2007-11-13.
24. ^ "LeMond Racing Cycles". 2006. Retrieved 2006-08-08.
25. a b c d e f g h i j k l m n o Foale, Tony (2006). Motorcycle Handling and Chassis Design (Second ed.). Tony Foale Designs. ISBN 978-84-933286-3-4.
26. ^ "Gear Head College: Trail". Retrieved 2009-08-05.
27. a b c Hand, Richard S. (1988). "Comparisons and Stability Analysis of Linearized Equations of Motion for a Basic Bicycle Model" (PDF).
Archived from the original on June 17, 2006. Retrieved 2006-08-04.
28. a b c d e f Fajans, Joel (July 2000). "Steering in bicycles and motorcycles" (PDF). American Journal of Physics 68 (7): 654–659. Bibcode 2000AmJPh..68..654F. doi:10.1119/1.19504. Retrieved 2006-08-04.
30. ^ V. Cossalter, R. Lot, and M. Peretto (2007). "Steady turning of motorcycles". Journal of Automobile Engineering 221 Part D: 1343–1356. "As concerns the first street vehicle, notable over-steering behaviour is evident; ..., and hence driving is carried on using some counter-steering angle."
31. ^ V. Cossalter, R. Lot, and M. Peretto (2007). "Steady turning of motorcycles". Journal of Automobile Engineering 221 Part D: 1343–1356. "Correlations with the subjective opinions of expert test riders have shown that a low torque effort should be applied to the handlebar in order to have a good feeling, and preferably in a sense opposite to the turning direction."
32. ^ Brown, Sheldon (2006). "Sheldon Brown's Bicycle Glossary". Sheldon Brown. Retrieved 2006-08-08.
33. ^ Foale, Tony (1997). "2 Wheel Drive/Steering". Retrieved 2006-12-14.
34. ^ Drysdale, Ian. "Drysdale 2x2x2". Retrieved 2009-04-05.
36. ^ Wannee, Erik (2005). "Rear Wheel Steered Bike". Retrieved 2006-08-04.
38. ^ Wannee, Erik (2001). "Variations on the theme 'FlevoBike'". Retrieved 2006-12-15.
42. a b Evangelou, Simos (2004). "The Control and Stability Analysis of Two-wheeled Road Vehicles" (PDF). Imperial College London. p. 159. Retrieved 2006-08-04.
44. ^ Klein, Richard E.; et al. (2005). "Counter-Intuitive." Archived from the original on October 27, 2005. Retrieved 2006-08-07.
46. ^ Schwab, A. L.; J. P. Meijaard and J. D. G. Kooijman (5–9 June 2006). "Experimental Validation of a Model of an Uncontrolled Bicycle" (PDF). III European Conference on Computational Mechanics: Solids, Structures and Coupled Problems in Engineering (Lisbon, Portugal: C. A. Mota Soares et al.). Retrieved 2008-10-19.
47. ^ Kettler, Bill (2004-09-15).
"Crash kills cyclist". Mail Tribune. Retrieved 2006-08-04.
48. ^ Lennard Zinn (2008-12-30). "VeloNews: Technical Q&A with Lennard Zinn: Torque wrenches and temps; shifting and shimmy". Retrieved 2009-01-02.
49. ^ Ruina, Andy; Rudra Pratap (2002) (PDF). Introduction to Statics and Dynamics. Oxford University Press. p. 350. Retrieved 2006-08-04.
50. a b Cassidy, Chris. "Bicycling Magazine: The Wheelie". Retrieved 2009-05-22.[dead link]
51. ^ Kurtus, Ron (2005-11-02). "Coefficient of Friction Values for Clean Surfaces". Retrieved 2006-08-07.
52. a b c d Brown, Sheldon. "Front Brake". "Braking and Turning". Retrieved 2009-05-22.
53. ^ "Shaking forces of twin engines". Retrieved 2008-06-23.
56. ^ Strickland, Bill (August 2008). "Comfort is the New Speed". Bicycling Magazine (Rodale) XLIV (7): 118–122.
61. ^ "Design News: Good Vibrations". Retrieved 2008-06-24.
63. ^ "American Motorcyclist: Good Vibrations". Retrieved 2008-06-24.
66. ^ Gromer, Cliff (2001-02-01). "STEER GEAR: So how do you actually turn a motorcycle?". Popular Mechanics. Retrieved 2006-08-07.
67. ^ Schwab, Arend; et al. (2006). "Bicycle Dynamics". Retrieved 2006-08-07.

Further reading

External links
It has been claimed by some people that the Schrödinger picture is more misleading than the Heisenberg picture or path integrals, and that we would be better off abandoning the Schrödinger picture in favor of either the Heisenberg picture or path integrals. However, when it comes to open quantum systems in constant interaction with an environment that isn't fully modeled, it is not at all clear how to apply the Heisenberg picture or path integrals. The Heisenberg picture requires the operators of the system to evolve in time in such a way that they become extremely mixed up with the environmental degrees of freedom, and the only way this can be done is to fully model the environment as a whole, or at least all of the parts of the environment which ever have a chance of interacting with the system in question. Similarly, how do you even go about adapting path integrals to open systems without including the entire environment, or at least all of those parts which can ever interact with the system? Might it be the case that the Heisenberg picture and path integrals can only be strictly applied to the universe as a whole, or at the very least, to causal diamonds?

I suspect (and I may be wrong) that the suggestion that we abandon the Schrödinger picture is usually only made in a pedagogical context, i.e., the idea is that we shouldn't use the Schrödinger picture when trying to understand or teach the fundamental principles of quantum mechanics. This doesn't necessarily mean that we shouldn't make use of it in contexts where it makes the math easier. (The interaction picture is already treated in much the same way.) – Harry Johnston Dec 15 '11 at 21:06

Some people celebrating the Heisenberg picture and path integrals were right! – Luboš Motl Dec 16 '11 at 17:07

Yes, it can.
An example is Brownian motion, in which you are interested in the dynamics of a particle in contact with some external reservoir without being interested in the dynamics of said reservoir. What you want is to incorporate the effect of the external reservoir on the dynamics of the particle (or subsystem), which can, in principle, be done by integrating out all degrees of freedom associated with the external reservoir. A specific example for which this is done is the Caldeira-Leggett model. The model starts with the full action in a path integral formulation. The degrees of freedom associated with the external reservoir are then integrated out, which can be a bit tricky. The action that remains is one completely defined in terms of the degrees of freedom of the subsystem / particle, but with effective potentials that account for the influence of the environment. This is also known as the influence functional, and you can find a chapter on it in the book by Feynman and Hibbs. This is one approach which has proven to be quite successful.

Another approach is to write down the equations of motion for the reduced density operator, i.e. the density operator obtained by, again, integrating out the reservoir degrees of freedom. This leads to the Lindblad equation. In general you cannot keep track of the exact influence of the external reservoir on the subsystem. Some statistical averaging / assumptions have to be made, and the effect of the reservoir is often mimicked by a random potential, in which the potential is a random variable rather than explicitly known. I mentioned Brownian motion as an example of an open quantum system. There are many more applications, such as quantum transport and quantum optics. This basically means that some type of randomness is introduced into the state evolution.

The question is to formulate a path integral for the Lindblad eqn. directly.
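To make the Lindblad equation mentioned above concrete, here is a minimal NumPy sketch (a naive forward-Euler integration, purely illustrative; the Hamiltonian, decay rate, and time step are all made up) for a two-level system decaying from its excited state:

```python
import numpy as np

# Lindblad equation for a decaying two-level system:
#   drho/dt = -i[H, rho] + gamma*(L rho L^+ - (1/2){L^+ L, rho})
H = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)   # energies 0 and 1
L = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # lowering operator
gamma = 0.5                                             # decay rate

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    LdL = L.conj().T @ L
    diss = gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return comm + diss

rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # start excited
dt = 0.001
for _ in range(5000):                                    # evolve to t = 5
    rho = rho + dt * lindblad_rhs(rho)

# The excited-state population decays roughly like exp(-gamma*t),
# while the trace of rho stays equal to 1.
print(round(rho[1, 1].real, 3))
```

The point of the exercise: the evolution is a completely positive, trace-preserving map on the density matrix, not a unitary on a wavefunction, which is exactly why the density-matrix formalism is needed for open systems.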
– Ron Maimon Dec 15 '11 at 20:21

The Lindblad equation is only an approximate equation of motion for the reduced density operator. – juanrga Nov 16 '12 at 19:46

Since the Schrödinger picture and the Heisenberg picture are equivalent, you can start from the Schrödinger equation for simple cases, such as a particle in an external field, and obtain the Heisenberg equations of motion, or vice versa. The same holds for the path integral formalism, which can be derived from the Schrödinger equation or vice versa. A complete treatment of an open quantum system interacting with an environment cannot be done in any of those formalisms. You have to resort to the more general density matrix formalism for a complete treatment of open systems. The idea of fully modeling the environment as a whole does not change this if the system is in a mixed state, for which no wavefunction exists. Again, you have to resort to the density matrix formalism to fully model the environment.
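As a concrete illustration of the reduced-density-operator approach mentioned in the answers, here is a minimal numerical sketch (not from the original thread) that integrates a Lindblad equation for a single qubit subject to pure dephasing. The Hamiltonian, dephasing rate, and step size are illustrative assumptions, and simple Euler stepping is used only for clarity.

```python
import numpy as np

# Pauli matrices and a simple qubit Hamiltonian (illustrative choices).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * sx                   # drives the qubit coherently
L = np.sqrt(0.3) * sz          # dephasing jump operator, rate gamma = 0.3

def lindblad_rhs(rho):
    """d(rho)/dt = -i[H, rho] + L rho L† - {L†L, rho}/2  (with hbar = 1)."""
    comm = -1j * (H @ rho - rho @ H)
    Ld = L.conj().T
    diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    return comm + diss

# Start in the superposition |+><+|, which has maximal coherence.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

dt, steps = 0.001, 5000        # plain Euler integration up to t = 5
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

print(np.trace(rho).real)      # the trace is preserved (stays ~ 1)
print(abs(rho[0, 1]))          # the coherence has decayed from its initial 0.5
```

The point of the sketch is that the subsystem's state evolution is not unitary: the trace of the density matrix is conserved, but the off-diagonal element (the coherence) decays, exactly the kind of behavior a wavefunction picture alone cannot capture.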
Eigenvalues and -vectors in class

1. Jan 22, 2008 #1
For math we were assigned a subject which we'd present during one class hour in a group. My group got "Eigenvalues & eigenvectors". So basically first I have to give the definition and explain what it actually is (AX = [tex]\lambda[/tex]X), and then we can spend the rest of the 45 min on making class exercises on this new subject and (something I think is welcome) showing an application of it, to make it less abstract. Any ideas of how I could present this? I read somewhere it's used for the Schrödinger equation -- a very interesting piece of science, but I don't think that's something you can show with the aid of eigenvalues at our level. I was thinking, maybe I could start by drawing a two-dimensional plane with an x and y axis and lead to eigenvalues from the geometrical point of view.
Last edited: Jan 22, 2008

2. Jan 22, 2008 #2
Do you even know what "eigenvalues" are? If not, why would you agree to give a class talk on them?

3. Jan 22, 2008 #3
Oh, I'm only 16, but I do 8h of math, meaning it's my main course. As an assignment, the class was divided into groups of three, and as we're currently studying matrices, each group was assigned a different aspect of it (like eigenvalues for us), which we would learn about for ourselves (we could ask our teacher for help) and then each get an hour to teach it to the rest of the class.
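For the geometric presentation suggested in the thread, a short demonstration can help: most vectors change direction when a matrix acts on them, but eigenvectors are only stretched. This sketch uses a hypothetical example matrix chosen for illustration (not from the thread).

```python
import numpy as np

# A symmetric 2x2 matrix chosen for illustration: it stretches the plane
# along two special directions instead of rotating every vector.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
print(sorted(vals))            # the eigenvalues of this matrix: 1.0 and 3.0

# For each eigenvector v, A @ v is just v scaled by its eigenvalue:
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)

# A generic vector, by contrast, changes direction under A:
w = np.array([1.0, 0.0])
print(A @ w)                   # not parallel to w
```

Drawing w and A @ w next to an eigenvector and its image on the board makes the definition AX = λX visibly concrete.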
Quantum mechanics
From Wikipedia, the free encyclopedia

Quantum mechanics ("QM") is the part of physics that tells us how the things that make up atoms work. QM also tells us how electromagnetic waves like light work. QM is a mathematical framework (rules written in math) for much of modern physics and chemistry. Quantum mechanics helps us make sense of the smallest things in nature like protons, neutrons and electrons. Complex mathematics is used to study subatomic particles and electromagnetic waves because they act in very strange ways. Quantum mechanics is important to physics and chemistry.

The wavelength of a wave of light.

Quantum is a Latin word that means 'how much'. So a quantum of energy is a specific amount of energy. Light sources such as candles or lasers shoot out (or "emit") light in bits called photons. Photons are like packets. Each one has a certain little bit of energy.

Waves and photons

Photons are particles, much smaller than atoms. The more photons a lamp shoots off, the brighter the light. Light is a form of energy that behaves like the waves in water or radio waves. The distance between the top of one wave and the top of the next wave is called a 'wavelength.' Each photon carries a certain amount, or 'quantum', of energy depending on its wavelength.

Black at left is ultraviolet (high frequency); black at right is infrared (low frequency).

A light's color depends on its wavelength. The color violet (the bottom or innermost color of the rainbow) has a wavelength of about 400 nm ("nanometers"), which is 0.00004 centimeters or 0.000016 inches. Photons with wavelengths of 10-400 nm are called ultraviolet (or UV) light. Such light cannot be seen by the human eye. On the other end of the spectrum, red light is about 700 nm. Infrared light is about 700 nm to 300,000 nm. Human eyes are not sensitive to infrared light either. Wavelengths are not always so small. Radio waves have longer wavelengths.
The wavelengths for your FM radio can be several meters in length (for example, stations transmitting on 99.5 FM are emitting radio energy with a wavelength of about 3 meters, which is about 10 feet). Each photon has a certain amount of energy related to its wavelength. The shorter the wavelength of a photon, the greater its energy. For example, an ultraviolet photon has more energy than an infrared photon. Pictorial description of frequency Wavelength and frequency (the number of times the wave crests per second) are inversely proportional. This means a longer wavelength will have a lower frequency, and vice versa. If the color of the light is infrared (lower in frequency than red light), each photon can heat up what it hits. So, if a strong infrared lamp (a heat lamp) is pointed at a person, that person will feel warm, or even hot, because of the energy stored in the many photons. The surface of the infrared lamp may even get hot enough to burn someone who may touch it. Humans cannot see infrared light, but we can feel the radiation in the form of heat. For example, a person walking by a brick building that has been heated by the sun will feel heat from the building without having to touch it. The mathematical formulations of quantum mechanics are abstract. A mathematical function, the wavefunction, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. Many of the results of quantum mechanics are not easily visualized in terms of classical mechanics. On the left, a plastic thermometer is under a bright heat lamp. This infrared radiation warms but does not damage the thermometer. On the right, another plastic thermometer gets hit by a low intensity ultraviolet light. This radiation damages but does not warm the thermometer. If the color of the light is ultraviolet (higher in frequency than violet light), then each photon has a lot of energy, enough to hurt skin cells and cause a sunburn. 
In fact, most forms of sunburn are not caused by heat; they are caused by the high energy of the sun's UV rays damaging your skin cells. Even higher frequencies of light (or electromagnetic radiation) can penetrate deeper into the body and cause even more damage. X-rays have so much energy that they can go deep into the human body and kill cells. Humans cannot see or feel ultraviolet light or x-rays. They may only know they have been under such high frequency light when they get a radiation burn. Areas where it is important to kill germs often use ultraviolet lamps to destroy bacteria, fungi, etc. X-rays are sometimes used to kill cancer cells.

Quantum mechanics started when it was discovered that a certain frequency means a certain amount of energy. Energy is proportional to frequency (E ∝ f). The higher the frequency, the more energy a photon has, and the more damage it can do. Quantum mechanics later grew to explain the internal structure of atoms. Quantum mechanics also explains the way that a photon can interfere with itself, and many other things never imagined in classical physics.

Quantization

Planck discovered the relationship between frequency and energy. Nobody before had ever guessed that frequency would be directly proportional to energy (this means that as one of them doubles, the other does, too). If we choose to use what are called natural units, then the number representing the frequency of a photon would also represent its energy. The equation would then be:

E = f

meaning energy equals frequency. But the way physics grew, there was no natural connection between the units then used to measure energy and the units commonly used to measure time (and therefore frequency). So the formula that Planck worked out to make the numbers all come out right was:

E = h × f

or, energy equals h times frequency. This h is a number called Planck's constant, after its discoverer.
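The two relations used throughout this article, c = wavelength × frequency and Planck's E = h × f, can be checked with a short calculation (a sketch added here for illustration; the constants are the standard SI values, not taken from the article).

```python
# Checking the photon-energy relations described above:
#   frequency = c / wavelength   and   E = h * frequency.
h = 6.62607015e-34   # Planck's constant, in joule-seconds
c = 2.99792458e8     # speed of light, in meters per second

def photon_energy(wavelength_m):
    """Energy in joules of a photon with the given wavelength in meters."""
    frequency = c / wavelength_m
    return h * frequency

violet = photon_energy(400e-9)   # 400 nm violet light, as in the article
red = photon_energy(700e-9)      # 700 nm red light

print(violet, red)
print(violet > red)  # shorter wavelength -> higher frequency -> more energy: True
```

This reproduces the article's claim numerically: the violet photon carries nearly twice the energy of the red one, which is why higher-frequency light does more damage.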
Quantum mechanics is based on the knowledge that a photon of a certain frequency means a photon of a certain amount of energy. Besides that relationship, a specific kind of atom can only give off certain frequencies of radiation, so it can also only give off photons that have certain amounts of energy.

Photoelectric effect: photons hit metal and electrons are pushed away.

History

Isaac Newton thought that light was made of very small things that we would now call particles (he referred to them as "corpuscles"). Christiaan Huygens thought that light was made of waves. Scientists thought that a thing cannot be a particle and a wave at the same time. Scientists did experiments to find out whether light was made of particles or waves. They found out that both ideas were right — light was somehow both waves and particles. The double-slit experiment performed by Thomas Young showed that light must act like a wave. The photoelectric effect discovered by Albert Einstein proved that light had to act like particles that carried specific amounts of energy, and that the energies were linked to their frequencies. This experimental result is called the "wave-particle duality" in quantum mechanics. Later, physicists found out that everything behaves both like a wave and like a particle, not just light. However, this effect is much smaller in large objects.

Here are some of the people who discovered the basic parts of quantum mechanics: Max Planck, Albert Einstein, Satyendra Nath Bose, Niels Bohr, Louis de Broglie, Max Born, Paul Dirac, Werner Heisenberg, Wolfgang Pauli, Erwin Schrödinger, John von Neumann, and Richard Feynman. They did their work in the first half of the 20th century.

Beyond Planck

Visible light given off by glowing hydrogen. (Wavelengths in nanometers.)

Quantum mechanics formulas and ideas were made to explain the light that comes from glowing hydrogen.
The quantum theory of the atom also had to explain why the electron stays in its orbit, which other ideas were not able to explain. It followed from the older ideas that the electron would have to fall in to the center of the atom, because it starts out being kept in orbit by its own energy, but it would quickly lose its energy as it revolves in its orbit. (This is because electrons and other charged particles were known to emit light and lose energy when they changed speed or turned.)

Hydrogen lamps work like neon lights, but neon lights have their own unique group of colors (and frequencies) of light. Scientists learned that they could identify all elements by the light colors they produce. They just could not figure out how the frequencies were determined. Then, a Swiss mathematician named Johann Balmer figured out an equation that told what λ (lambda, the wavelength) would be:

$$\lambda = B\left(\frac{n^2}{n^2-4}\right), \qquad n = 3, 4, 5, 6$$

where B is a number that Balmer determined to be equal to 364.56 nm. This equation only worked for the visible light from a hydrogen lamp. But later, the equation was made more general:

$$\frac{1}{\lambda} = R\left(\frac{1}{m^2} - \frac{1}{n^2}\right),$$

where R is the Rydberg constant, equal to 0.0110 nm−1, and n must be greater than m. Putting in different numbers for m and n, it is easy to predict frequencies for many types of light (ultraviolet, visible, and infrared). To see how this works, go to Hyperphysics and go down past the middle of the page. (Use H = 1 for hydrogen.)

In 1908, Walter Ritz made the Ritz combination principle that shows how certain gaps between frequencies keep repeating themselves. This turned out to be important to Werner Heisenberg several years later.

In 1905, Albert Einstein used Planck's idea to show that a beam of light is made up of a stream of particles called photons. The energy of each photon depends on its frequency.
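The generalized Balmer formula above is easy to check numerically. This sketch (added for illustration) uses the value R ≈ 0.0110 nm⁻¹ quoted in the article and m = 2, which gives the visible lines of glowing hydrogen.

```python
# The Rydberg formula from the text: 1/wavelength = R * (1/m^2 - 1/n^2),
# with R ~ 0.0110 per nanometer and n > m.
R = 0.0110  # nm^-1, the approximate Rydberg constant as quoted above

def wavelength_nm(m, n):
    """Wavelength of the light emitted when an electron drops from orbit n to m."""
    return 1.0 / (R * (1.0 / m**2 - 1.0 / n**2))

# m = 2 gives the visible (Balmer) series of hydrogen:
for n in (3, 4, 5, 6):
    print(n, round(wavelength_nm(2, n), 1))
```

With the rounded R used here, the computed wavelengths come out close to the measured hydrogen lines near 656, 486, 434, and 410 nm, which is exactly the pattern the article describes.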
Einstein's idea is the beginning of the idea in quantum mechanics that all subatomic particles like electrons, protons, neutrons, and others are both waves and particles at the same time. (See picture of atom with the electron as waves at atom.) This led to a theory about subatomic particles and electromagnetic waves called wave-particle duality. This is where particles and waves were neither one nor the other, but had certain properties of both.

An electron falls to a lower orbit and a photon is created.

In 1913, Niels Bohr had a new idea. Maybe electrons could only take up certain orbits around the nucleus of an atom. The numbers called m and n in the equation above could represent orbits. An electron could begin in some orbit m and end up in some orbit n, or an electron could begin in some orbit n and end up in some orbit m. So if a photon hits an electron, its energy will be absorbed and the electron will move to a higher orbit because of that extra energy. And if an electron falls from a higher orbit to a lower orbit, then it will have to give up energy in the form of a photon. The energy of the photon will equal the energy difference between the two orbits, and the energy of a photon makes it have a certain frequency and color. Everything worked out very well that way, but there was one big question left: each of the colors of light produced by glowing hydrogen (and by glowing neon or any other element) has a brightness of its own, and the brightness differences are always the same for each element. Why?

At this point, most things about the light produced by a hydrogen lamp were known. One big problem remained: how can we explain the brightness of each of the lines produced by glowing hydrogen?

Werner Heisenberg took on the job of explaining the brightness or "intensity" of each line. He could not use any simple rule like the one Balmer had come up with.
He had to use the very difficult math of classical physics that figures everything out in terms of things like the mass (weight) of an electron, the charge (static electric strength) of an electron, and other tiny quantities. Classical physics already had answers for the brightness of the bands of color that a hydrogen lamp produces, but the classical theory said that there should be a continuous rainbow, and not four separate color bands. Heisenberg's explanation is: there is some law that says what frequencies of light glowing hydrogen will produce. It has to predict spaced-out frequencies when the electrons involved are moving between orbits close to the nucleus (center) of the atom, but it also has to predict that the frequencies will get closer and closer together as we look at what the electron does in moving between orbits farther and farther out. It will also predict that the intensity differences between frequencies get closer and closer together as we go out. Where classical physics already gives the right answers by one set of equations, the new physics has to give the same answers, but by different equations.

Classical physics uses the methods of the French mathematician Fourier to make a math picture of the physical world. It uses collections of smooth curves that go together to make one smooth curve that gives, in this case, intensities for light of all frequencies from some light source. But this is not right, because that smooth curve only appears at higher frequencies. At lower frequencies, there are always isolated points, and nothing connects the dots. So, to make a map of the real world, Heisenberg had to make a big change. He had to do something to pick out only the numbers that would match what was seen in nature. Sometimes people say he "guessed" these equations, but he was not making blind guesses. He found what he needed. The numbers that he calculated would put dots on a graph, but there would be no line drawn between the dots.
And making one "graph" just of dots for every set of calculations would have wasted lots of paper and not have gotten anything done. Heisenberg found a way to efficiently predict the intensities for different frequencies and to organize that information in a helpful way. Just using the empirical rule given above, the one that Balmer got started and Rydberg improved, we can see how to get one set of numbers that would help Heisenberg get the kind of picture that he wanted: The rule says that when the electron moves from one orbit to another it either gains or loses energy, depending on whether it is getting farther from the center or nearer to it. So we can put these orbits or energy levels in as headings along the top and the side of a grid. For historical reasons the lowest orbit is called n, and the next orbit out is called n - a, then comes n - b, and so forth. It is confusing that they used negative numbers when the electrons were actually gaining energy, but that is just the way it is. Since the Rydberg rule gives us frequencies, we can use that rule to put in numbers depending on where the electron goes. If the electron starts at n and ends up at n, then it has not really gone anywhere, so it did not gain energy and it did not lose energy. So the frequency is 0. If the electron starts at n-a and ends up at n, then it has fallen from a higher orbit to a lower orbit. If it does so then it loses energy, and the energy it loses shows up as a photon. The photon has a certain amount of energy, e, and that is related to a certain frequency f by the equation e = h f. So we know that a certain change of orbit is going to produce a certain frequency of light, f. If the electron starts at n and ends up at n - a, that means it has gone from a lower orbit to a higher orbit. 
That only happens when a photon of a certain frequency and energy comes in from the outside, is absorbed by the electron and gives it its energy, and that is what makes the electron go out to a higher orbit. So, to keep everything making sense, we write that frequency as a negative number. There was a photon with a certain frequency and now it has been taken away.

So we can make a grid like this, where f(a←b) means the frequency involved when an electron goes from energy state (orbit) b to energy state a (again, the sequences look backwards, but that is the way they were originally written):

Grid of f
          n           n-a           n-b           n-c         ...
n         f(n←n)      f(n←n-a)      f(n←n-b)      f(n←n-c)    ...
n-a       f(n-a←n)    f(n-a←n-a)    f(n-a←n-b)    f(n-a←n-c)  ...
n-b       f(n-b←n)    f(n-b←n-a)    f(n-b←n-b)    f(n-b←n-c)  ...

Heisenberg did not make the grids like this. He just did the math that would let him get the intensities he was looking for. But to do that he had to multiply two amplitudes (how high a wave measures) to work out the intensity. (In classical physics, intensity equals amplitude squared.) He made an odd-looking equation to handle this problem, wrote out the rest of his paper, handed it to his boss, and went on vacation. Dr. Born looked at his funny equation, and it seemed a little crazy. He must have wondered, "Why did Heisenberg give me this strange thing? Why does he have to do it this way?" Then he realized that he was looking at a blueprint for something he already knew very well. He was used to calling the grid or table that we could write by doing, for instance, all the math for frequencies, a matrix. And Heisenberg's weird equation was a rule for multiplying two of them together. Max Born was a very, very good mathematician.
He knew that since the two matrices (grids) being multiplied represented different things (like position (x,y,z) and momentum (mv), for instance), then when you multiply the first matrix by the second you get one answer and when you multiply the second matrix by the first matrix you get another answer. Even though he did not know about matrix math, Heisenberg already saw this "different answers" problem and it had bothered him. But Dr. Born was such a good mathematician that he saw that the difference between the first matrix multiplication and the second matrix multiplication was always going to involve Planck's constant, h, multiplied by the square root of negative one, i. So within a few days of Heisenberg's discovery they already had the basic math for what Heisenberg liked to call the "indeterminacy principle." By "indeterminate" Heisenberg meant that something like an electron is just not pinned down until it gets pinned down. It is a little like a jellyfish that is always squishing around and cannot be "in one place" unless you kill it. Later, people got in the habit of calling it "Heisenberg's uncertainty principle," which made many people make the mistake of thinking that electrons and things like that are really "somewhere" but we are just uncertain about it in our own minds. That idea is wrong. It is not what Heisenberg was talking about. Having trouble measuring something is a problem, but it is not the problem Heisenberg was talking about. Heisenberg's idea is very hard to grasp, but we can make it clearer with an example. First, we will start calling these grids "matrices," because we will soon need to talk about matrix multiplication. Suppose that we start with two kinds of measurements, position (q) and momentum (p). 
In 1925, Heisenberg wrote an equation like this one:

$$Y(n,\,n-b) = \sum_{a} p(n,\,n-a)\,q(n-a,\,n-b)$$

(the equation for the conjugate variables momentum and position)

He did not know it, but this equation gives a blueprint for writing out two matrices (grids) and for multiplying them. The rules for multiplying one matrix by another are a little messy, but here are the two matrices according to the blueprint, and then their product:

Matrix of p
          n-a           n-b           n-c          ...
n         p(n←n-a)      p(n←n-b)      p(n←n-c)     ...
n-a       p(n-a←n-a)    p(n-a←n-b)    p(n-a←n-c)   ...
n-b       p(n-b←n-a)    p(n-b←n-b)    p(n-b←n-c)   ...

Matrix of q
          n-b           n-c           n-d          ...
n-a       q(n-a←n-b)    q(n-a←n-c)    q(n-a←n-d)   ...
n-b       q(n-b←n-b)    q(n-b←n-c)    q(n-b←n-d)   ...
n-c       q(n-c←n-b)    q(n-c←n-c)    q(n-c←n-d)   ...

The matrix for the product of the above two matrices, as specified by the relevant equation in Heisenberg's 1925 paper, is:

          n-b     n-c     n-d     ...
n         A       ...     ...     ...
n-a       ...     B       ...     ...
n-b       ...     ...     C       ...

and so forth. If the matrices were multiplied in the reverse order, a different set of values would appear in those places, and so forth. Note how changing the order of multiplication changes, step by step, the numbers that are actually multiplied.

Beyond Heisenberg

The work of Werner Heisenberg seemed to break a log jam. Very soon, many different other ways of explaining things came from people such as Louis de Broglie, Max Born, Paul Dirac, Wolfgang Pauli, and Erwin Schrödinger. The work of each of these physicists is its own story. The math used by Heisenberg and earlier people is not very hard to understand, but the equations quickly grew very complicated as physicists looked more deeply into the atomic world.

Further mysteries

In the early days of quantum mechanics, Albert Einstein suggested that if it were right then quantum mechanics would mean that there would be "spooky action at a distance."
It turned out that quantum mechanics was right, and that what Einstein had used as a reason to reject quantum mechanics actually happened. This kind of "spooky connection" between certain quantum events is now called "quantum entanglement". Two entangled particles are separated: one on Earth and one taken to some distant planet. Measuring one of them forces it to "decide" which role to take, and the other one must then take the other role whenever (after that) it is measured. When an experiment brings two things (photons, electrons, etc.) together, they must then share a common description in quantum mechanics. When they are later separated, they keep the same quantum mechanical description or "state." In the diagram, one characteristic (e.g., "up" spin) is drawn in red, and its mate (e.g., "down" spin) is drawn in blue. The purple band means that when, e.g., two electrons are put together the pair shares both characteristics. So both electrons could show either up spin or down spin. When they are later separated, one remaining on Earth and one going to some planet of the star Alpha Centauri, they still each have both spins. In other words, each one of them can "decide" to show itself as a spin-up electron or a spin-down electron. But if later on someone measures the other one, it must "decide" to show itself as having the opposite spin. Einstein argued that over such a great distance it was crazy to think that forcing one electron to show its spin would then somehow make the other electron show an opposite characteristic. He said that the two electrons must have been spin-up or spin-down all along, but that quantum mechanics could not predict which characteristic each electron had. Being unable to predict, only being able to look at one of them with the right experiment, meant that quantum mechanics could not account for something important. Therefore, Einstein said, quantum mechanics had a big hole in it. Quantum mechanics was incomplete. 
Later, it turned out that experiments showed that it was Einstein who was wrong.[1]

Heisenberg uncertainty principle

In 1927, Werner Heisenberg described the uncertainty principle, which says that the more we know about where a particle is, the less we can know about how fast it is going and in which direction. In other words, the more we know about the speed and direction of something small, the less we can know about its position. Physicists usually talk about the momentum in such discussions instead of talking about speed. Momentum is just the speed of something in a certain direction times its mass.

The reasoning behind Heisenberg's uncertainty principle is that we can never know both the location and the momentum of a particle. Because light is easy to produce in abundance, it is used for measuring other particles. The only way to measure a particle is to bounce a light wave off of it and record the results. If a high energy, or high frequency, light beam is used, we can tell precisely where the particle is, but cannot tell how fast it was going. This is because the high energy photon transfers energy to the particle and changes the particle's speed. If we use a low energy photon, we can tell how fast it is going, but not where it is. This is because we are using light with a longer wavelength. The longer wavelength means the particle could be anywhere along the stretch of the wave.

The principle also says that there are many pairs of measurements for which we cannot know both of them about any particle (a very small thing), no matter how hard we try. The more we learn about one of such a pair, the less we can know about the other. Even Albert Einstein had trouble accepting such a bizarre concept, and in a well-known debate said, "God does not play dice". To this, Danish physicist Niels Bohr famously responded, "Einstein, don't tell God what to do".
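The "different answers" that Born noticed when position and momentum matrices are multiplied in opposite orders can be shown with small finite matrices. The sketch below uses truncated harmonic-oscillator q and p matrices, a standard textbook construction chosen here for illustration (it is not the grid from the article), with ħ set to 1.

```python
import numpy as np

# Truncated harmonic-oscillator position (q) and momentum (p) matrices,
# built from the lowering operator a (units with hbar = m = omega = 1).
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator
q = (a + a.T) / np.sqrt(2)                   # position matrix
p = (a - a.T) / (1j * np.sqrt(2))            # momentum matrix

qp = q @ p
pq = p @ q
print(np.allclose(qp, pq))     # False: the order of multiplication matters

# Born's result: q p - p q = i * hbar * (identity), exact here except in the
# last row and column, which is an artifact of cutting the matrices off at N.
comm = qp - pq
print(np.round(comm[0, 0], 10))
```

The top-left entry of the commutator comes out as the imaginary unit i, which is Planck's constant (here 1) times the square root of negative one, just as described above.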
Uses of QM

QM can also help us understand big things, such as stars and even the whole universe. QM is a very important part of the theory of how the universe began, called the Big Bang. Everything made of matter is attracted to other matter because of a fundamental force called gravity. Einstein's theory that explains gravity is called the theory of general relativity. A problem in modern physics is that some conclusions of QM do not seem to agree with the theory of general relativity.

QM is the part of physics that can explain why all electronic technology works as it does. Thus QM explains how computers work, because computers are electronic machines. But the designers of the early computer hardware of around 1950 or 1960 did not need to think about QM. The designers of radios and televisions at that time did not think about QM either. However, the design of the more powerful integrated circuits and computer memory technologies of recent years does require QM.

QM has also made possible technologies such as:

Why QM is hard to learn

QM is a challenging subject for several reasons:
• QM explains things in very different ways from what we learn about the world when we are children.
• Understanding QM requires more mathematics than algebra and simple calculus. It also requires matrix algebra, complex numbers, probability theory, and partial differential equations.
• Physicists are not sure what some of the equations of QM tell us about the real world.
• QM suggests that atoms and subatomic particles behave in strange ways, completely unlike anything we see in our everyday lives.

QM describes nature in a way that is different from how we usually think about science. It tells us how likely to happen some things are, rather than telling us that they certainly will happen. One example is Young's double-slit experiment.
If we shoot single photons (single units of light) from a laser at a sheet of photographic film, we will see a single spot of light on the developed film. If we put a sheet of metal in between, and make two very narrow slits in the sheet, when we fire many photons at the metal sheet, and they have to go through the slits, then we will see something remarkable. All the way across the sheet of developed film we will see a series of bright and dark bands. We can use mathematics to tell exactly where the bright bands will be and how bright the light was that made them, that is, we can tell ahead of time how many photons will fall on each band. But if we slow the process down and see where each photon lands on the screen we can never tell ahead of time where the next one will show up. We can know for sure that it is most likely that a photon will hit the center bright band, and that it gets less and less likely that a photon will show up at bands farther and farther from the center. So we know for sure that the bands will be brightest at the center and get dimmer and dimmer farther away. But we never know for sure which photon will go into which band. One of the strange conclusions of QM theory is the "Schrödinger's cat" effect. Certain properties of a particle, such as their position, speed of motion, direction of motion, and "spin", cannot be talked about until something measures them (a photon bouncing off of an electron would count as a measurement of its position, for example). Before the measurement, the particle is in a "superposition of states," in which its properties have many values at the same time. Schrödinger said that quantum mechanics seemed to say that if something (such as the life or death of a cat) was determined by a quantum event, then its state would be determined by the state that resulted from the quantum event, but only at the time that somebody looked at the state of the quantum event. 
In the time before the state of the quantum event is looked at, perhaps "the living and dead cat (pardon the expression) [are] mixed or smeared out in equal parts."[3]

Reduced Planck's constant

People often use the symbol $\hbar$, which is called "h-bar":

$$\hbar = \frac{h}{2\pi}$$

H-bar is a unit of angular momentum. When this new unit is used to describe the orbits of electrons in atoms, the angular momentum of any electron in orbit is always a whole number.[4]

Example

The particle in a 1-dimensional well is the simplest example showing that the energy of a particle can only have specific values. The energy is said to be "quantized." The well has zero potential energy inside a range and has infinite potential energy everywhere outside that range. For the 1-dimensional case in the x direction, the time-independent Schrödinger equation can be written as:[5]

$$-\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2} = E\psi$$

Using differential equations, we can see that $\psi$ must be

$$\psi = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m}$$

or, by Euler's formula,

$$\psi = C \sin kx + D \cos kx.$$

The walls of the box mean that the wavefunction must have a special form. The wavefunction of the particle must be zero wherever the walls are infinitely tall. At each wall:

$$\psi = 0 \quad \text{at} \quad x = 0 \text{ and } x = L$$

Consider x = 0:
• sin 0 = 0 and cos 0 = 1. To satisfy $\psi = 0$, the cos term has to be removed. Hence D = 0.

Now consider $\psi = C \sin kx$:
• At x = L, $\psi = C \sin kL = 0$.
• If C = 0, then $\psi = 0$ for all x. This solution is not useful.
• Therefore sin kL = 0 must be true, giving us

$$kL = n\pi, \qquad n = 1, 2, 3, 4, 5, \ldots$$

We can see that n must be an integer. This means that the particle can only have special energy values and cannot have the energy values in between. This is an example of energy "quantization."

Related pages

References
• Feynman, Richard, 1985.
The Strange Theory of Light and Matter. Princeton University Press.
• McEvoy, J.P. and Oscar Zarate, 1996. Introducing Quantum Theory. Icon Books.

Notes

1. For an overview of the whole issue of entanglement, see J.P. McEvoy and Oscar Zarate, Introducing Quantum Theory, pp. 168–170.
2. For a good foundation see The Nature of the Chemical Bond, by Linus Pauling.
3. Schrödinger, "The Present Situation in Quantum Mechanics," p. 8 of 22.
4. Scientific American Reader, Simon and Schuster, 1953, p. 117.
5. Derivation of particle in a box, chemistry.tidalswan.com

More reading

• Cox, Brian & Forshaw, Jeff (2011). The Quantum Universe: Everything That Can Happen Does Happen. Allen Lane. ISBN 978-1-84614-432-5
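As a numerical footnote to the particle-in-a-box example above, the quantized energies follow directly from $kL = n\pi$: $E_n = n^2\pi^2\hbar^2/(2mL^2)$. A small Python sketch (the choice of an electron in a 1 nm box is an illustrative assumption, not from the article):

```python
import math

# Particle in a 1-D infinite well: allowed energies scale as n^2,
# with nothing allowed in between (energy "quantization").
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 9.1093837015e-31     # electron mass, kg
L = 1e-9                 # box width, m (illustrative value)

def energy(n):
    """E_n = n^2 * pi^2 * hbar^2 / (2 m L^2), n = 1, 2, 3, ..."""
    return n**2 * math.pi**2 * hbar**2 / (2 * m * L**2)

levels = [energy(n) for n in range(1, 5)]
```

For this box, the ground level comes out around $6\times10^{-20}$ J (roughly 0.4 eV), and each level is exactly $n^2$ times the first.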
I want to solve the time-dependent Schrödinger equation: $$ i\partial_t \psi(t) = H(t)\psi(t) $$ for a matrix, time-dependent $H(t)$ and vector $\psi$. What is an efficient way of doing this so that it scales to high-dimensional spaces?

What are the values of b and omega for the second plot? I want to check whether my code is working or not! Thanks Jiyan. – user21640 Oct 23 '14 at 17:26

3 Answers

Time-dependent case

In the time-dependent case, $[H(t),H(t')]\neq 0$ in general and we need to time-order, i.e., the operator taking a state from $t=0$ to $t=\tau$ is $U(0,\tau)=\mathcal{T}\exp(-i\int_0^\tau dt\, H(t))$ with $\mathcal{T}$ the time-ordering operator. In practice we just split the time interval into lots of small pieces (basically using the Baker–Campbell–Hausdorff formula). So, consider the time-dependent Hamiltonian for a two-level system:

$$ H = \begin{pmatrix} \epsilon_1 & b \cos(\omega t) \\ b\cos(\omega t) & \epsilon_2 \end{pmatrix} $$

i.e. two levels coupled by a time-periodic driving (see here). Even this simplest possible periodically-driven system can't be solved analytically in general.
Anyway, here's a function to construct the Hamiltonian:

ham[e1_, e2_, b_, omega_, t_] := {{e1, b*Cos[omega*t]}, {b*Cos[omega*t], e2}}

and here's one to construct the propagator from some initial time to some final time, given a function to construct the Hamiltonian matrix at each point in time (splitting the interval into $n$ slices; you should try increasing $n$ until your results stop changing):

constructU::usage = "constructU[h,tinit,tfinal,n]";
constructU[h_, tinit_, tfinal_, n_] :=
 Module[{dt = N[(tfinal - tinit)/n],
   curVal = IdentityMatrix[Length@h[0]]},
  Do[curVal = MatrixExp[-I*h[t]*dt].curVal, {t, tinit, tfinal - dt, dt}];
  curVal]

This constructs the operator $U(0,\tau)=\mathcal{T}\exp(-i\int_0^\tau dt\,H(t))$ as $$ U(0,\tau)\approx\prod_{n=0}^{N}\exp\left( -iH(n\,dt)\,dt \right) $$ with $N=\tau/dt-1$ (or its ceiling, anyway). This is an approximation to the correct $U$. And now here is how to look at the time-dependent expectation of $\sigma_z$ for different coupling strengths $b$:

ClearAll[cU, psi0];
psi0 = {1., 0};
Manipulate[
 ListPlot[
  Table[
   Chop[#\[Conjugate].PauliMatrix[3].#] &@
    (constructU[ham[-1., 1., b, 1., #] &, 0, upt, 100].psi0),
   {upt, .01, 20, .1}],
  Joined -> True, PlotRange -> {-1, 1}],
 {b, 0, 2}]

[Mathematica graphics: $\langle\sigma_z\rangle$ versus time, with $b$ adjustable]

Alternatively, you could calculate the wavefunction at some time tfinal given the wavefunction at time tinit with this:

propPsi[h_, psi0_, tinit_, tfinal_, n_] :=
 Module[{dt = N[(tfinal - tinit)/n], psi = psi0},
  Do[psi = MatrixExp[-I*h[t]*dt, psi], {t, tinit, tfinal - dt, dt}];
  psi]

which uses the two-argument form MatrixExp[-I*h*t, v]. For large sparse matrices (e.g., for h a many-body Hamiltonian), this can be much faster, at the cost of losing access to $U$.

Thanks a lot for all this. However, as you mentioned in your previous comment, my problem is a time-dependent Schrödinger equation. In this case the Hamiltonian doesn't commute at different times, and the propagator can't just be a simple exponential; it should be a path-ordered exponential. This is the reason that I can't do this.
Such problems don't have an analytical solution; they have to be solved numerically! – ZKT Mar 21 '13 at 13:49

@Zahra For a time-dependent H, you can simply construct the propagator from $t=0$ to some time $t=\tau$, say. I can explain how if you want, but let me know if you actually want it, so I don't waste my time if you insist on doing it with NDSolve. But ask yourself which is the most practical way if you have a Hilbert space of dimension 20000, for instance (so you'd need to solve 20000 coupled ODEs with your approach). – acl Mar 21 '13 at 15:35

Thanks a lot for the time you give to answering my question; I really appreciate it. I'm not insisting on solving my problem with NDSolve. It would be great if I could solve it the way you are explaining. I just thought it was not possible to solve it non-numerically. Can you please tell me how to do that? I appreciate it. – ZKT Mar 21 '13 at 15:55

@Zahra Oh, I see. No, what I suggest is fully numerical. You're absolutely right that such problems cannot be solved analytically in general. OK, let me write it up quickly and you can see if it's useful (I routinely use this on systems with much bigger Hilbert spaces than yours, up to 20-30000). – acl Mar 21 '13 at 15:58

Here you go (I had actually done this the first time, but posted only the time-independent limit because I did not realize you had a time-dependent Hamiltonian). Note that the way I construct the Manipulate is not efficient, because I recalculate $U$ from scratch all the time, but it's fast enough...
– acl Mar 21 '13 at 16:04

Since there hasn't been any discussion of NDSolve yet, let me point out that for a finite-dimensional Hilbert space, where the Schrödinger equation is merely a first-order equation in time, it's easiest to just do this (using the two-dimensional Hamiltonian ham from acl's answer):

ham[e1_, e2_, b_, omega_, t_] := {{e1, b*Cos[omega*t]}, {b*Cos[omega*t], e2}}

Manipulate[
 Module[{ψ, sol, tMax = 20},
  sol = First@NDSolve[
     {I D[ψ[t], t] == ham[-1, 1, b, 1, t].ψ[t], ψ[0] == {1, 0}},
     ψ, {t, 0, tMax}];
  Plot[Chop[#\[Conjugate].PauliMatrix[3].#] &@(ψ /. sol)[t],
   {t, 0, tMax}, PlotRange -> {-1, 1}]],
 {{b, 1}, 0, 2}]

I copied the parameters from acl's answer too, to show the direct comparison in the Manipulate. Here the vector $\psi$ is recognized by NDSolve as two-dimensional, so the formulation of the problem is quite concise, and we can leave the time-step choice up to Mathematica instead of choosing a discretization ourselves.

In fact the original question explicitly mentioned NDSolve (I edited it to make it less localized). There's nothing wrong with NDSolve for up to a few thousand states, but the approach I gave scales much better (I use it for systems with dimensions in the tens of thousands; NDSolve seems to tank much earlier); of course, the way I wrote the code it's inefficient. – acl Mar 22 '13 at 9:33

As an additional comment, this approach (using NDSolve directly) also works for cases where the "Hamiltonian" depends on the wavefunction, so that we have a set of nonlinear coupled ODEs. This kind of problem appears in various mean-field approaches to many-body systems (e.g., the Gutzwiller-ansatz approach to many-body dynamics of bosons; see e.g. eq. 3 here). I've used NDSolve for precisely this problem with up to a couple of thousand coupled ODEs; it's really not practical at those sizes, but there's no alternative (in Mathematica) for nonlinear ODEs. – acl Mar 22 '13 at 14:10

@acl Thanks for pointing that out (I already upvoted your answer).
I think it would have been better to edit the question in such a way as to retain some information on what the OP had already tried. – Jens Mar 22 '13 at 14:19

Feel free to change it; I would not object (and I imagine neither would Zahra). I thought the question as it was was way too localized (e.g., it asked about a specific Hamiltonian, defined only in hard-to-read code; take a look at the original form if you haven't), and I wanted to make it as general as possible so it's useful. I think that, phrased the current way, it admits as many approaches as possible. – acl Mar 22 '13 at 14:47

@acl I see your point. No worries, it's fine the way it is. – Jens Mar 22 '13 at 15:16

Frame it as a set of linear ODEs and solve it somehow. I usually use an implicit Runge–Kutta method in the interaction picture.

solver[H_, a_] :=
 soln = Module[{d, init, eq, vars, solargs, t, t0, tf},
   d = Dimensions[H][[1]];
   t0 = a[[2]]; tf = a[[3]]; t = a[[1]];
   u[t_] := Table[Subscript[u, i, j][t], {i, 1, d}, {j, 1, d}];
   init = (u[t0] == IdentityMatrix[d]);
   eq = (I u'[t] == H.u[t]);
   vars = Flatten[Table[Subscript[u, i, j], {i, 1, d}, {j, 1, d}]];
   solargs = LogicalExpand[eq && init];
   NDSolve[solargs, vars, a,
    Method -> {"FixedStep",
      Method -> {"ImplicitRungeKutta", "DifferenceOrder" -> 10}},
    StartingStepSize -> tf/100, MaxSteps -> Infinity]];

U[t_] := u[t] /. soln[[1]]

Alternatively, you could solve for $\psi(t)$ and obtain $U$ as $|\psi(t)\rangle\langle \psi(0)|$, using an appropriate normalisation to preserve probability.
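For readers outside Mathematica, the two strategies in this thread can be sketched in Python/NumPy (assuming SciPy is available). This translation is mine, not part of the original answers; the function names mirror the Mathematica code above, the parameters are the same two-level example, and `expm_herm` exploits the fact that H is Hermitian:

```python
import numpy as np
from scipy.integrate import solve_ivp

def ham(e1, e2, b, omega, t):
    """Two-level Hamiltonian with periodic driving, as in the thread."""
    return np.array([[e1, b * np.cos(omega * t)],
                     [b * np.cos(omega * t), e2]])

def expm_herm(h, dt):
    """exp(-i*h*dt) for Hermitian h, via eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return (v * np.exp(-1j * w * dt)) @ v.conj().T

def construct_u(h, t_init, t_final, n):
    """Time-ordered product of slice exponentials, earliest slice first."""
    dt = (t_final - t_init) / n
    u = np.eye(h(0.0).shape[0], dtype=complex)
    for k in range(n):
        u = expm_herm(h(t_init + k * dt), dt) @ u
    return u

h = lambda t: ham(-1.0, 1.0, 0.5, 1.0, t)
psi0 = np.array([1.0, 0.0], dtype=complex)

# Route 1: time-ordered product (acl's constructU).
psi_prod = construct_u(h, 0.0, 5.0, 2000) @ psi0

# Route 2: direct ODE integration (the NDSolve route),
# i dpsi/dt = H(t) psi as a first-order complex ODE.
sol = solve_ivp(lambda t, y: -1j * h(t) @ y, (0.0, 5.0), psi0,
                rtol=1e-10, atol=1e-10)
psi_ode = sol.y[:, -1]
```

Both routes agree to within the slice-discretization error of route 1, which is the "increase n until results stop changing" check from the accepted answer.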
DRUM Collection: Atmospheric & Oceanic Science Theses and Dissertations
http://hdl.handle.net/1903/2747
Wed, 01 Jul 2015 23:10:33 GMT

MINIMIZING REANALYSIS JUMPS DUE TO NEW OBSERVING SYSTEMS
http://hdl.handle.net/1903/16411
Authors: Zhou, Yan
Abstract: A major problem with reanalyses has been the presence of jumps in the climatology associated with changes in the observing system. Such changes are common in reanalysis products. These jumps became especially obvious when satellites were first introduced in 1979. Even after 1979, during the "satellite era," jumps have continued to appear whenever a new observing system was introduced. To explore possible solutions to this problem, we develop and test new methodologies to minimize these jumps in the reanalysis time series due to new observing systems. In the first part of this dissertation, we study a state-of-the-art reanalysis, NASA's Modern Era Retrospective-analysis for Research and Applications (hereafter MERRA). Analysis increments from MERRA and from a reanalysis without SSM/I observations (hereafter NoSSMI) are compared, and their differences are defined as correction terms. The correction terms are then introduced into the tendency equation of the forecast model, i.e., GEOS-5. The debiased reanalysis without SSM/I observations shows improvements in almost all fields, even in the precipitation field, which is generally considered to be significantly uncertain on all time and space scales. However, the difference between the analysis increments of MERRA and NoSSMI is not just due to the assimilation of SSM/I, but to the accumulated effect of the assimilation of previous SSM/I observations. These produce a change in the model climatology and nonlinear interactions between the variables currently observed by SSM/I and the variables that have been modified by previous assimilations of SSM/I.
The nonlinear interactions introduce an additional accumulated impact during the 2-year training period. In the second part of this dissertation, we test a new methodology in a simpler data assimilation system, SPEEDY-LETKF, because it would be unfeasible for our computational resources to apply this method to the complex MERRA system. The new method defines the correction terms by calculating the difference of the analysis increments from the following two analyses: 1) assimilating both rawinsonde (RAOB) and AIRS observations, named RaobAirs, and 2) assimilating only RAOB, but with its background coming from the RaobAirs analysis at every 6-hour analysis cycle. This new method limits the growth of nonlinear interactions between variables observed by AIRS and the variables that have been modified by previous assimilation of AIRS. The results show that the new method is significantly more effective in minimizing reanalysis "jumps" compared with the method applied to the MERRA system. In the third part of this dissertation, we explore a spectral model instability problem. Imperfect SPEEDY-LETKF OSSEs are unstable when assimilating RAOB observations only, and data assimilation processes worsen this problem. We found two methods to stabilize the imperfect SPEEDY-LETKF OSSEs. Traces of the spectral waves are also clearly present in other spectral reanalyses such as the NCEP and the ERA15, but since their resolutions are higher than that of the SPEEDY model, their impact is smaller.
Wed, 01 Jan 2014 00:00:00 GMT

OCEAN VARIABILITY IN CMIP5 (COUPLED MODEL INTERCOMPARISON PROJECT PHASE 5) HISTORICAL SIMULATIONS
http://hdl.handle.net/1903/16284
Authors: Ding, Yanni
Abstract: The oceans play a key role in global climate variability.
This dissertation examines climate variability in historical simulations from fourteen CMIP5 (Coupled Model Intercomparison Project Phase 5) coupled models on different time scales. Responses of the oceans to external volcanic eruptions, greenhouse gas forcing, and internally generated variability are investigated, with emphasis on higher latitudes. Chapter 2 addresses the oceanic response to tropical volcanic eruptions. Previous modeling studies have provided conflicting high-latitude climate responses to volcanic eruptions, including the ocean's role. This controversy arises mainly because the response varies widely from model to model, and even among ensemble members of a single model. The increase in the Atlantic Meridional Overturning Circulation (AMOC) after a volcanic eruption is closely linked with its internal variability. Chapter 3 addresses the seasonal and centennial trends in the Arctic Ocean. Arctic warming is apparent in all models, although there is considerable variability, especially in its seasonal cycle. Both the surface heat flux and the oceanic heat convergence contribute to Arctic warming on the centennial time scale. Meanwhile, the seasonal variation of oceanic warming is largely determined by the atmospheric heating. In models presenting a clear seasonal increase in net surface flux, there is a notable retreat of sea ice extent in winter, which allows more heat loss from the ocean through turbulent fluxes. Chapter 4 discusses the internally generated variability of high-latitude water masses. Both the magnitude and the time scale of subarctic decadal variability are strikingly similar to observations. The analysis of the more realistic models provides constraints on the relative roles of the oceanic heat transport and the atmospheric heat flux. One possible factor that could give rise to the different origins of ocean variability is the blocking of the mid-latitude jet stream.
The oceanic heat transport is more important to the decadal variability of the high-latitude ocean in models where winter-time atmospheric blocking events over the Euro-Atlantic sector are more frequent.
Wed, 01 Jan 2014 00:00:00 GMT

Atlantic Multidecadal Variability: Surface and Subsurface Thermohaline Structure and Hydroclimate Impacts
http://hdl.handle.net/1903/16072
Authors: Kavvada, Argyro
Abstract: The Atlantic Multidecadal Oscillation (AMO), a sea surface temperature mode of natural variability with dominant timescales of 30–70 years and largest variations centered on the northern North Atlantic latitudes, is one of the principal climate signals that have received considerable attention in recent decades, due to its wide-ranging impact on both local and remote weather and climate and its importance in predicting extreme events, such as drought development over North America. A 3-dimensional structure of the AMO is constructed based on observations and coupled ocean-atmosphere 20th-century climate simulations. The evolution of modeled decadal-to-multidecadal variability and its hydroclimate impact is also investigated between two successive model versions participating in the CMIP3 and CMIP5 projects. It is found that both model versions underestimate low-frequency variability in the 70–80 and 30–40 year ranges, while overestimating variability at higher frequencies (10–20 year range). In addition, no significant improvements are noted in the simulation of the AMO's hydroclimate impact. A subsurface, vertically integrated heat content index (0–1000 m) is proposed in an effort to capture the thermal state of the ocean and to understand the origin of AMO variability, especially its surface-subsurface link on decadal-to-multidecadal timescales in the North Atlantic basin.
The AMO-HC index exhibits stronger oscillatory behavior and shorter timescales in comparison to the AMO-SST index, while leading the latter by about 5 years. A cooling of the North Atlantic subsurface is discernible in recent years (mid-2000s to present), a feature that is almost absent at the ocean surface and could have significant implications for predicting future North Atlantic climate, and in relation to the recent hiatus in the rise of global surface temperatures noted in the latest Intergovernmental Panel on Climate Change assessment report. Finally, the AMO's decadal variability is shown to be linked to the Gulf Stream's northward surges and the low-frequency NAO, as envisioned by Bjerknes in 1964. A cycle encompassing the low-frequency NAO, the Gulf Stream's poleward excursions, and the associated shifts in surface winds and SSTs over the subpolar North Atlantic is proposed as a possible mechanism for the AMO's origin and a principal target for future research.
Wed, 01 Jan 2014 00:00:00 GMT

Breeding Analysis of Growth and Decay in Nonlinear Waves and Data Assimilation and Predictability in the Martian Atmosphere
http://hdl.handle.net/1903/16063
Authors: Zhao, Yongjing
Abstract: The effectiveness of the breeding method in determining growth and decay characteristics of certain solutions to the Korteweg-de Vries (KdV) equation and the Nonlinear Schrödinger equation (NLSE) is investigated. Bred vectors are a finite-amplitude, finite-time generalization of Leading Lyapunov Vectors (LLV), and breeding has been used to predict large impending fluctuations in many systems, including chaotic systems. Here, the focus is on predicting fluctuations associated with extreme waves.
The bred vector analysis is applied to the KdV equation with two types of initial conditions: soliton collisions, and a Gaussian distribution which decays into a group of solitons. The soliton solutions are stable, and the breeding analysis enables tracking of the growth and decay during the interactions. Furthermore, this study with a known stable system helps validate the use of the breeding method for waves. The analysis is also applied to characterize rogue-wave-type solutions of the NLSE, which have been used to describe extreme ocean waves. In the results obtained, the growth-rate maxima and the peaks of the bred vector always precede the rogue wave peaks. This suggests that the growth rate and bred vectors may serve as precursors for predicting energy localization due to rogue waves. Finally, the results reveal that the breeding method can be used to identify numerical instabilities. Effective simulation of diurnal variability is an important aspect of many geophysical data assimilation systems. For the Martian atmosphere, thermal tides are particularly prominent and contribute much to the Martian atmospheric circulation, dynamics and dust transport. To study the Martian diurnal variability (or thermal tides), the GFDL Mars Global Climate Model (MGCM) with the 4D-Local Ensemble Transform Kalman Filter (4D-LETKF) is used to perform a reanalysis of spacecraft temperature retrievals. We find that the use of a "traditional" 6-hr assimilation cycle induces spurious forcing of resonantly-enhanced semi-diurnal Kelvin waves, represented in both surface pressure and mid-level temperature by a wave-4 pattern in the diurnally averaged analysis increment that acts as a "topographic" stationary forcing. Different assimilation window lengths in the 4D-LETKF are introduced to remove the artificially induced resonance.
It is found that short assimilation window lengths not only remove the spurious resonance, but also push the migrating semi-diurnal temperature variation at 50 Pa closer to the estimated "true" tides, even in the absence of a radiatively active water ice cloud parameterization. In order to compare the performance of different assimilation window lengths, short-term to long-term forecasts based on the hour-00 and hour-12 assimilations are evaluated and compared. Results show that during NH summer, it is not the assimilation window length but the radiatively active water ice cloud that influences the model prediction. A "diurnal bias correction" that includes bias-correction fields dependent on local time is shown to effectively reduce the root-mean-square differences (RMSD) between forecasts and observations, compensate for the absence of a water ice cloud parameterization, and enhance Martian atmosphere prediction. The implications of these results for data assimilation in the Earth's atmosphere are also discussed.
Wed, 01 Jan 2014 00:00:00 GMT
I want to make a diffusion kernel, which involves $e^{\beta A}$, where $A$ is a large matrix (25k by 25k). It is an adjacency matrix, so it's symmetric and very sparse. Does anyone have a recommendation of a tool to solve this? I use the term "tool" loosely: if you know that transforming it in some way first would be useful, then I'd like to know that.

I am going with a hack: since the kernel "diffuses" relatively quickly, I just take only the neighbourhood around the two vertices that I want. This gives me a much-reduced adjacency matrix which I can then raise $e$ to without difficulty. I'm not familiar enough with the kernel function, though, to know how severely this is skewing my results, and it's imperfect at best, so I'm still interested if anyone has a better idea.

6… – Steve Huntsman May 28 '10 at 14:02

You might find this article of interest – Guy Katriel May 28 '10 at 14:26

The two links posted so far are identical. – j.c. May 28 '10 at 14:47

In MATLAB you'll want to sparsify explicitly if you haven't already; the "sparse" command does this. Then use "eigs" (not "eig") to return the eigenvectors. Do what everyone else is saying (if your matrix is really that sparse, MATLAB should be up to it on a modern laptop) and then compare the results you obtain with "expm" (if you can). I'd be surprised if the calculation took more than a few minutes. – Steve Huntsman May 28 '10 at 20:40

Xodarap, why are you exponentiating this matrix, and where does the problem come from? In other words, do you want the object $e^A$ itself, or are you interested in computing its action on a given vector? These are (at the numerical linear algebra level) somewhat different questions. I'll be happy to point you to some references if you specify what you're trying to do.
– Nilima Nigam Jun 23 '11 at 5:26

Surprised that no one mentioned Expokit. It does exactly what was requested, and is available in several different implementations (including Matlab).

The book by Higham and the "nineteen dubious ways" paper deal with the dense case only. For the sparse case, the best way to go is to use an algorithm that computes the so-called action, i.e., the map $v \mapsto \exp(A)v$. See e.g. Al-Mohy and Higham. The matrix $\exp(A)$ itself is full and unstructured, and generally you do not want to use it. If you really need it, though, check out a series of papers by Benzi and coauthors: they show that the off-diagonal elements of many matrix functions decay exponentially, and thus your matrix might be "nearly banded".

Al-Mohy and Higham's paper is great if you are dealing with sparse matrices. A preprint of the paper can be found on Higham's website, and he has MATLAB code that implements the algorithm. – Marcus P S Jan 15 '14 at 0:50

This is not an answer, but it's too long for a comment. First, you need advice from a numerical analyst, not me. Computing matrix exponentials is a well-studied problem with a large literature. For one example, the recent book by Higham, "Functions of Matrices: Theory and Computation," devotes a chapter to it. Matlab has a built-in routine for it. The trick will be to take advantage of the sparseness, which almost certainly rules out an approach based on diagonalization. Taylor series are not likely to help: try computing $\exp(100)$ using the series expansion about $0$. Also, just because you can write down the problem you want to solve using a matrix exponential does not guarantee this is the best way to solve it. (To give a crude example, the solution to the linear system $Ax=b$ is $A^{-1}b$, but no one in their right mind solves linear systems by computing inverses.)

Glad somebody sees this my way.
I googled "diffusion kernel"; this problem is very far from being simply about exponentiating matrices. Then I deleted my answer; nobody seemed interested. Paper by Kondor and Lafferty, presentations by Liang Sun and then Bruno Jedynak. – Will Jagy May 29 '10 at 18:26

Yeah, I must admit that when I asked this question I didn't realize it was so unsolved. I thought the answer would be "use the really-big-sparse-matrix add-on to Matlab" or something. That being said, sparse adjacency graphs (e.g. the web, genome mapping, etc.) appear all the time, and so I don't believe that there is no acceptable solution. I will accept that there is no perfect solution, but the problem seems too common for there to be no standard toolkit. – Xodarap May 30 '10 at 20:54

@Xodarap, to be clear: there are scores of excellent algorithms out there for sparse matrix operations. What we need to get from you is a categorical statement like "I want the matrix exponential itself" or "I want the solution of the diffusion equation $u_t=Au$ with given data". There are lots of acceptable approaches in either case, but they are not the same. As Will and Chris point out, it is rare for someone to genuinely need $e^A$ for a large, symmetric and sparse $A$. – Nilima Nigam Jun 23 '11 at 15:02

I've asked for some clarification in a comment. In the meanwhile, if you're looking for software, I'll assume you've tried PETSc or Trilinos already? Here's a link to the freeware by Jiri Pittner, which links to BLAS routines as well. Here's a site from INRIA.

If you have a sparse matrix with localized effect (e.g. small valences) and fast eigenvalue drop-off, and you are required to compute the full matrix exponential, then you might be interested in "diffusion wavelets". While calculating the exponential, they also calculate a basis in which the result is still sparse. However, I am not aware of a ready-to-use implementation.
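As a concrete note on the $v \mapsto \exp(A)v$ "action" route mentioned in an earlier answer: SciPy ships an implementation of the Al-Mohy/Higham algorithm as `scipy.sparse.linalg.expm_multiply`, which applies the exponential to a vector without ever forming the dense $\exp(A)$. A sketch (the matrix size, density, and $\beta$ are made-up illustrative values, and the random matrix stands in for the question's adjacency matrix):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Build a small random symmetric 0/1 "adjacency-like" matrix.
n = 200
m = sp.random(n, n, density=0.02, random_state=0)
a = m + m.T                      # symmetrize
a.data = np.ones_like(a.data)    # force 0/1 weights

beta = 0.1
v = np.ones(n)
# exp(beta*A) @ v via the Al-Mohy/Higham action algorithm;
# A stays sparse throughout.
y = expm_multiply(beta * a.tocsc(), v)
```

For a diffusion kernel one typically needs $e^{\beta A}$ applied to a few indicator vectors, which is exactly the case where the action algorithm beats computing the full exponential.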
You can use the Chebyshev polynomial expansion to calculate the effect of the matrix exponential on a vector, which is a standard technique in the quantum chemistry community; the method is extremely stable and fast. This method was developed by Tal-Ezer and Kosloff in an article named "An accurate and efficient scheme for propagating the time dependent Schrödinger equation". You can also see a Reviews of Modern Physics article by Alexander Weiße which deals with the Kernel Polynomial Method (a generalization of the Chebyshev-type algorithms). I assume that to access these references you have a subscription to these scientific journals.

If your matrix is diagonalizable, say $A = PDP^{-1}$, then $\exp(A) = P \exp(D) P^{-1}$. If your matrix is not diagonalizable and you need the more general Jordan canonical form, this approach may not work. The JCF is not suitable for numerical computation, since forming the JCF is a discontinuous process: arbitrarily close matrices can map to canonical forms that differ by an integer in one entry. You could calculate $\exp(A)$ directly by its Taylor series; then the problem becomes how to efficiently calculate powers of $A$. Maybe you could take advantage of your particular sparsity structure to calculate these powers.

Do you know of a good way to diagonalize such a large matrix? Figuring out all 25k eigenvalues seems very time-consuming. – Xodarap May 28 '10 at 15:42

You don't have to calculate all of the Taylor series. If you let $P$ be the characteristic polynomial of the matrix, then you can write $\exp(A) = g(A)\,P(A) + r(A)$, where $g$ is entire, and Cayley–Hamilton then gives $\exp(A) = r(A)$ (you can divide entire functions of matrices by polynomials). The remainder can be calculated by finite differences, if I remember correctly. – Gunnar Þór Magnússon May 28 '10 at 16:33

Xodarap says A is real symmetric, so it is indeed diagonalizable.
So, as Xodarap points out above, the real question is how to go about diagonalizing. – Mark Meckes May 28 '10 at 16:58

A full diagonalization will not take advantage of sparsity. – Terry Loring Dec 22 '13 at 18:20

Have a look at a recent paper discussing how matrix sparseness and locality go together: "Decay Properties of Spectral Projectors with Applications to Electronic Structure" by Benzi et al., SIAM Review, 55(1), 3–64 (2013). The paper has applications that go beyond what the title indicates; much of it covers continuous functions applied to sparse Hermitian matrices.

If you have some way of determining a priori which matrix elements will be small, you can compute a polynomial of the matrix quickly. If your graph is related to a surface, you have an idea of how far apart on the graph two vertices need to be before they can be neglected. To decide what polynomial to use, I would suggest you get an approximation of the operator norm; this is fast for a sparse matrix. In Matlab you use normest. In other languages, see "Estimating the matrix p-norm" by Nicholas J. Higham, Numerische Mathematik, 62(1), 539–555 (1992). The code there simplifies in the case $p=2$, which is the case you want. This norm estimate, rounded up a bit for good measure, tells you where the spectrum of your matrix sits. Now get (say from a truncated power series) a polynomial that is close enough for your purposes to the actual exponential on the spectrum of your matrix.

Even if you can't figure out which matrix elements of the answer to zero out, if you can accept a modest error and so deal with a polynomial of relatively small degree, then you just need to compute several powers of a sparse matrix. It is then a question of how sparse you start with vs. how high a power you need. I will warn you that I find Matlab does not do so well taking products of sparse matrices; I think it is optimized for minimizing data storage, not matrix multiplication.
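The recipe in this last answer (a cheap norm bound on the spectrum, then a low-degree polynomial of the matrix applied to a vector) can be sketched in Python. This is my illustration, not the answer's code: the random matrix, degree 30, and the use of the plain truncated Taylor polynomial are all arbitrary choices, and for a symmetric matrix the spectral radius is bounded by the easily computed $\infty$-norm:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import norm as spnorm

# Small random symmetric sparse matrix standing in for the adjacency matrix.
n = 300
m = sp.random(n, n, density=0.01, random_state=1)
a = m + m.T

# For symmetric A, every eigenvalue satisfies |lambda| <= ||A||_inf,
# so this tells us where the spectrum sits.
bound = spnorm(a, np.inf)

v = np.ones(n)

# Truncated Taylor polynomial applied to v: only sparse mat-vec products,
# never a dense power A^k. Degree 30 is ample for this modest norm.
term = v.copy()
y = v.copy()
for k in range(1, 31):
    term = a @ term / k
    y = y + term
```

In practice a Chebyshev expansion on `[-bound, bound]` (as in the Tal-Ezer/Kosloff answer above) needs a lower degree than the raw Taylor polynomial, but the mat-vec-only structure is the same.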
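Returning to the diagonalization answer above: for the real symmetric $A$ in the question, NumPy's `eigh` gives an orthogonal diagonalization $A = Q\,\mathrm{diag}(w)\,Q^T$, so $\exp(A) = Q\,\mathrm{diag}(e^{w})\,Q^T$. A small dense sketch (the 6-by-6 random matrix is illustrative; for 25k-by-25k, this full eigendecomposition is exactly the expensive step the comments warn about):

```python
import numpy as np

# Small symmetric stand-in for the adjacency matrix.
rng = np.random.default_rng(2)
b = rng.standard_normal((6, 6))
a = (b + b.T) / 2

w, q = np.linalg.eigh(a)          # A = Q diag(w) Q^T, Q orthogonal
exp_a = (q * np.exp(w)) @ q.T     # exp(A) = Q diag(exp(w)) Q^T
```

Note that because $Q$ is orthogonal, $\exp(-A)$ comes for free as `(q * np.exp(-w)) @ q.T`, which is one convenience of this route over Padé-type `expm`.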
If a free region in space has a potential difference of one volt, an electron accelerated across this region will acquire a kinetic energy of 1 eV. Its speed will be much smaller than the speed of light, so it will be a non-relativistic electron. On the other hand, conduction electrons in graphene are said to be relativistic for the same potential difference. The question is: how come electrons in vacuum are non-relativistic, while inside graphene they are relativistic (for the same potential difference)?

I think you assume from the first sentence that $v\approx \sqrt{\frac{2 U_{eV}}{m_e}} \ll c_0$. But your second sentence is a little bit misleading: "conduction electrons in graphene are relativistic for the same potential difference". What do you mean? Electron mobility? Drift velocity? They are non-relativistic. Or are you asking about the electron movement in a molecule? Well, for carbon the Schrödinger equation is a good approximation; you do not need the Dirac equation. You have to consider relativity for s electrons of heavy elements with high charge, with e.g. ZORA (zeroth order relativistic approximation). – Alex1167623 Feb 26 '12 at 0:10

Please do not answer questions if you are not familiar with the field. A quick google immediately brings up a wealth of information about the statement. In this case, there is no connection to the actual speed of light, and the statement is a purely formal one regarding the equation of motion for quasiparticles. – genneth Feb 26 '12 at 2:32

@genneth You are right that this should not be an answer, but rather a comment on the given question to improve it. Pushed the wrong button :-(. But I am disappointed too that a proper answer was not given by you, nor a reference. Hans de Vries finally clarified it, thanks.
– Alex1167623 Feb 29 '12 at 9:33

As far as I understand, electrons in graphene are not relativistic, although quasiparticles in graphene are indeed described by the massless Dirac equation. However, for graphene, the speed of light in this equation is replaced by the Fermi velocity, which is much smaller.

@Revo: In my book, a particle is relativistic if its velocity is comparable to the velocity of light. If you use a different definition, please give me a reference to a reliable source (if you just alluded in your comment that the eigenvalues of velocity projections for a Dirac particle are always ±c due to Zitterbewegung, this does not seem to be relevant to your question). The velocity of the quasiparticles in graphene is always comparable to the "velocity of light" in the massless Dirac equation for graphene, but that "velocity of light" is not the genuine velocity of light. – akhmeteli Feb 9 '12 at 22:14

@Revo: No. I believe a particle is relativistic when its velocity is comparable to the velocity of light in vacuum. In most cases the velocity of light in media is comparable to that in vacuum, so the clarification about vacuum is usually omitted. I agree that some exotic media may present exceptions. That does not mean that the particle you describe must be described by a relativistic equation. It just so happens that quasiparticles in graphene can be described satisfactorily (to some extent) by an equation looking exactly like the massless Dirac equation with a lesser "velocity of light". – akhmeteli Feb 10 '12 at 19:35

@Revo: I have two problems with your reasoning. While I agree that the standard Dirac equation is a relativistic equation and that it correctly describes a relativistic spin one half particle, that does not mean that if a particle is correctly described by the Dirac equation, it is necessarily relativistic, because the Dirac equation correctly describes slow particles as well.
The above reasoning is correct, however, for the standard MASSLESS Dirac equation, as such an equation does not describe slow particles correctly. The other problem is described in another comment. – akhmeteli Feb 11 '12 at 1:19

@Revo: The other problem is as follows. The massless Dirac equation used for quasiparticles in graphene is not the standard massless Dirac equation, for the following reasons. While it looks exactly like the standard massless Dirac equation, the speed constant in this equation is much less than the velocity of light in vacuum, so it only describes particles that are slow compared to the velocity of light in vacuum. Furthermore, the equation is not relativistic in the sense that it is not invariant under Lorentz transforms; it is only correct in the frame of reference of the graphene lattice. – akhmeteli Feb 11 '12 at 1:30

@Revo: you are mistaken about the link between the Dirac equation and relativity. The Dirac equation correctly describes a single particle relativistically, but does not have to. One can use it to do other things. The statement "electrons in graphene are relativistic" is a purely formal statement about the lack of a rest mass for the quasiparticles. – genneth Feb 26 '12 at 2:30

According to this article: The statement that in graphene the "conduction electrons are massless" is made because the energy levels (bands) are proportional to their momenta. So the $E = \sqrt{p^2+m^2}$ relation of a free electron becomes $E\propto p$ in graphene. Massless particles all travel at the same speed because of the $E\propto p$ relation, but this characteristic velocity in graphene is far below $c$: only 0.3% of the speed of light.

The reason that the relation $E\propto p$ leads to a characteristic speed is the quantum mechanical wave character: $E$ is proportional to the phase change in time, $p$ is proportional to the phase change in space, and therefore $p/E$ is proportional to the velocity.
In the case that $E\propto p$ there is a characteristic velocity $v$ independent of the energy level.
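The numbers in this thread are easy to check. A minimal sketch in Python: the vacuum speed follows from the question's own formula, while the graphene Fermi velocity of $10^6$ m/s is the commonly quoted textbook value (an assumption here, not taken from the thread), which matches the "0.3% of the speed of light" figure above.

```python
import math

# SI constants (CODATA values)
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron rest mass, kg
c = 2.99792458e8          # speed of light, m/s

# non-relativistic speed after falling through a 1 V potential difference
v_vacuum = math.sqrt(2 * e * 1.0 / m_e)

# commonly quoted Fermi velocity of graphene quasiparticles (assumed value)
v_fermi = 1.0e6  # m/s

print(v_vacuum / c)  # about 0.002: clearly non-relativistic
print(v_fermi / c)   # about 0.003, i.e. ~0.3% of c
```

So both speeds are a fraction of a percent of $c$; the "relativistic" label for graphene refers to the form of the dispersion relation, not to the actual speed.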
Why is the proton assumed to be always at the center when applying the Schrödinger equation? Isn't it a quantum particle?

Self interactions are not considered in a non-relativistic quantum mechanical treatment, and the hydrogen atom is usually treated that way in a first course. – Torsten Hĕrculĕ Cärlemän Dec 31 '13 at 8:51

@TorstenHĕrculĕCärlemän: What about the proton being at the center? – Rajesh Dachiraju Dec 31 '13 at 8:53

I don't get the fact about it being at the center of a coordinate frame, and it being a quantum particle. You can in fact take any point as the origin, only to complicate the expressions further. It is most natural to hence take the nucleus at the center. – Torsten Hĕrculĕ Cärlemän Dec 31 '13 at 8:56

@RajeshD The assumption that the proton is stationary is just an approximation, used since protons are about 2000 times as massive as electrons and 2000 is approximately infinity. – David H Dec 31 '13 at 8:57

@DavidH: Thanks David. That seems very reasonable. – Rajesh Dachiraju Dec 31 '13 at 9:00

There is a rigorous formal analysis which lets you do this. The true problem, of course, allows both the proton and the electron to move. The corresponding Schrödinger equation thus has the coordinates of both as variables. To simplify things, one usually transforms those variables to the relative separation and the centre-of-mass position. It turns out that the problem then separates (for a central force) into a "stationary proton" equation and a free-particle equation for the COM. There is a small price to pay for this: the mass for the centre-of-mass motion is the total mass, as you'd expect, but the radial equation has a mass given by the reduced mass $$\mu=\frac{Mm}{M+m}=\frac{m}{1+m/M},$$ which is close to the electron mass $m$ since the proton mass $M$ is much greater.
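How close the reduced mass is to the bare electron mass is easy to quantify; a quick numerical check in Python using CODATA masses:

```python
# SI masses (CODATA values)
m_e = 9.1093837015e-31   # electron mass, kg
m_p = 1.67262192369e-27  # proton mass, kg

# reduced mass of the electron-proton pair
mu = m_e * m_p / (m_e + m_p)

print(mu / m_e)  # about 0.99946: within ~0.05% of the bare electron mass
```

That ~0.05% shift is tiny but measurable: it is why spectral lines of deuterium are slightly offset from those of ordinary hydrogen.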
It's important to note that an exactly analogous separation holds for the classical treatment of the Kepler problem. Regarding self-interactions, these are very hard to deal with without invoking the full machinery of quantum electrodynamics. Fortunately, in the low-energy limits where hydrogen atoms can form, it turns out you can completely neglect them.

I assume you're talking about the hydrogen atom; the Hamiltonian of the nucleus + electron system is $$ H = \frac{p_e^2}{2 m_e} + \frac{p_n^2}{2 m_n} - \frac{e^2}{|\vec{r}_e - \vec{r}_n|}. $$ You can do a change of coordinates (center-of-mass coordinates) $$ \vec{R} = \frac{m_e \vec{r}_e + m_n \vec{r}_n}{m_e+m_n}, \qquad \vec{r} = \vec{r}_e - \vec{r}_n $$ and find the conjugate momenta to these coordinates: $$ \vec{P} = \vec{p}_e + \vec{p}_n, \qquad \vec{p} = \frac{m_n \vec{p}_e - m_e \vec{p}_n}{m_e+m_n}. $$ Defining also the reduced mass $\mu$ such that $$ \frac{1}{\mu} = \frac{1}{m_e} + \frac{1}{m_n} $$ and the total mass $M = m_e + m_n$, you can write the hydrogen atom Hamiltonian as $$ H = \frac{P^2}{2 M} + \frac{p^2}{2 \mu} - \frac{e^2}{r} = H_{CM} + H_{rel}. $$ In these calculations I always treated the nucleus as a quantum particle; but if you look at $H_{rel} = p^2/2\mu - e^2/r$ and let the mass of the nucleus tend to infinity, you obtain the hydrogen atom Hamiltonian usually taught in basic QM courses. Also, you don't have other terms like spin-orbit or j-j couplings, because they are relativistic effects that come out of the Dirac equation.

thanks for the explanation @AlexA – Rajesh Dachiraju Dec 31 '13 at 9:36

With regard to your first question: A similar (the same?) question you might reasonably ask is: how can we assume that the proton is stationary, at the centre of the problem, since it is surely going to be attracted by the electron and jiggle about a little?
This is a question that would be just as valid directed at a classical system (say, a planet orbiting a star) as at a quantum mechanical one. The solution to this is as described above by others: the fact that the star/proton is so much more massive than the planet/electron means that it is going to move very little (the acceleration of an object is inversely proportional to its mass, and hence with a large mass we have a very small acceleration, i.e. very little motion), and so the stationary nature of the star/proton is a great approximation. And in fact, we can make the analysis completely rigorous by dealing with relative separations and reduced masses. But the finite mass of the proton means that, indeed, the proton won't actually be stationary.

However, I'm not sure this is the question you're asking. Your concern was not "isn't the proton a particle of finite mass" but rather "isn't it a quantum particle". The suggestion is that you think the proton should jiggle due to its quantum mechanical nature, that is, due to the uncertainty principle etc., irrespective of the mass of the proton (perhaps I am mistaken about this).

In the limit of the proton having infinitely more mass than the electron, the quantum mechanical nature of the proton won't force it to jiggle. In other words, the uncertainty in its position, $\Delta x$, can be made arbitrarily close to zero. This is consistent with the uncertainty principle, since its momentum $p$ (mass times velocity) can tend to infinity in the limit of an infinitely massive proton. Hence we can still achieve $$ \Delta p \Delta x \geq \frac{\hbar}{2} $$ with an arbitrarily small velocity and positional uncertainty, if we make the mass arbitrarily large. In other words, under the assumption we're using to neglect motion of the proton due to it being attracted to the electron, we are also able to neglect the motion of the proton due to quantum mechanical effects.
The reality of course is that the proton will jiggle: it will jiggle a bit due to its intrinsic quantum mechanical nature, and it will jiggle a bit more due to the attractive force on it from the electron. However, this can be dealt with rigorously just as before, using relative separations and reduced masses.
alegría, galería, argelia, alergia, riégala, aligera (six Spanish anagrams of one another)

Quickly reading through people's worries and joys on Facebook this morning, I read on a friend's wall: "Dia mundial contra el cancer de mama" (world breast cancer day, written without accents). And I think, "wow, what an important mom ("mamá") she must have, one with her own world day; I'll think of her too, I hope she recovers." Since reading the day's walls is something you do with only relative interest and vague attention, it takes me about three seconds to realize what I've just done: without the accent, "mama" means breast, not mom. Spanish speakers, put the accents on your words!

in the 3rd position, shame award: people who bring their house keys hanging from their neck on a red collar

in the 2nd position, idiocy award: people who talk to their dog (with the same tone of voice you use with babies) and actually believe they're having a conversation with them

in the 1st position, disrespect award: people on the platform who get into the train car before letting the packed crowd inside get off

every once in a while, like today in the train, i temporarily lose faith in human intelligence

we are amazing. look, i just asked google to find information on "peeing in fresh fallen snow". i know, bear with me. i was speaking about that a couple of posts ago, so that's why i came up with such a sentence to look for. thing is, i hoped it would be a rare enough concept as to give google search a hard time giving me back any sensible information. i have no clue about the internal mechanics of the search engine, but i was naively expecting that, this being an infrequent query, the answer would not be cached anywhere and a long search process would be run, possibly giving me some random links not related to the semantics of my query at all but to pages speaking of pee alone, or snow.
so yeah, i just asked google to find information on "peeing in fresh fallen snow"; and in a blink, the search engine gave me a link to a page at the urban dictionary website with the following definition:

Urinart: Drawing a picture in freshly-fallen snow using urine

and this, my friends, blows my mind in so, so many ways. first of all, the fact that the word urinarting has already been created is pretty awesome. secondly, that somebody invested the time to put this information online is also pretty amazing. third, that google was able to handle my weird query by crossing information with all sorts of unstructured sources of information out there, and that it found this definition, is seriously astonishing. fourth, that it did it in no more than 0.26 seconds is ridiculously impressive. fifth, that humans have reached this state of mastery in information manipulation and management, that we have tools to store, classify and index information in such a cheap manner that not even the most daring science fiction author could have dreamed of just 20 years ago, this is freaking mind blowing.

i don't know. when i was a kid, before the internet became popular around '95, i would often have to cycle to the public library and physically scan shelves in order to search for an outdated version of the information i was looking for. my great-grandmother, who was born in a tiny village in the mountains around the same time the light bulb was created, knew nothing about the world but what a guy in a black dress would tell her every sunday morning in the form of canticles and rituals. so look at it with a bit of perspective. we are a ridiculously plastic species.

you know those crazy high tech cameras able to record thousands of frames per second, that cost $250,000? i wonder if they are any useful beyond recording random objects being blown up in slow motion like, say, water balloons in people's faces. seriously.
it got so boooooooring

being a man has some advantages, and some disadvantages. among the former, there is that of, when in the mountains during winter, being able to write your name by peeing in a bunch of freshly fallen snow (americans do have it easier since they have it really short. their name, i mean: just one syllable most of the time). the joy of this realization is immense

in every early morning bart car heading to the city there's a few young women with a mirror in one hand, an eyeliner in the other. time is precious, and this is a great way to buy some extra 15 minutes of sleep back home. they change the eyeliner for a mascara applier and proceed with the eyelashes, there in the middle of a crowd with whom they have nothing to do. only the people they can reach through the social network in their smartphones really matter to them. like the work colleagues they are about to meet in the office or the new clients they will talk to today. it's time to go for some lipstick. some astonishingly precise moves, and they're ready to go.

in every late night bart car heading to the city there's a few young women with a mirror in one hand, an eyeliner in the other. time is precious, and this is a great way to buy some extra 15 minutes of rest back home. they change the eyeliner for a mascara applier and proceed with the eyelashes, there in the middle of a crowd with whom they have nothing to do. only the people they can reach through the social network in their smartphones really matter to them. like the best friends they are about to meet in the pub or the new strangers they will talk to today. it's time to go for some lipstick. some astonishingly precise moves, and they're ready to go.

today i saw the image below in a blog dedicated to science, and i got immediately sad, because it reminds me that even people doing science themselves don't always really get it: they seem to not fully understand what science is about.
the statement above is basically saying that a perfect world doesn't have discontinuities: that things change slowly without abrupt alterations, that things that are a lot don't become a little suddenly without ramping down gradually, that if things are here now and will be there later, it's only because they are going to be "in between" before. basically the image is claiming that in a perfect world things are not broken but smooth.

that's not true though. reality, the world, the things around us: everything is mostly broken and discontinuous. whoever wrote that blog post noted it by implying that this world is indeed not perfect or ideal. see, this is my problem: there's nothing wrong with the world. the world is ideal; it's not the imperfect thing Plato thought it was (with terrible consequences for western culture as we know it). the world is doing just fine, believe me. let me repeat it: the world is doing just fine. humans aren't.

indeed, it is our mathematics that is not ideal. or at least, it is not up to the task of describing efficiently everything around us, discontinuities included. but surely enough, the universe is full of discontinuities at all scales (it can really get pretty fractallie sometimes); it's not made of boring spheres and planes as Galileo wrongly claimed, nor is it made out of derivatives, ordinary differential equations and other human abstractions. an ideal world does not follow lim {x->c} f(x) = f(c). and this is not a problem. it's a gift.

on the contrary, in an ideal world humans enjoy less primitive mathematics than our current one, mathematics that allow us to describe and model and manipulate discontinuities and all the other beautiful features of the things we see around us. basically, we humans have a problem; the universe doesn't. thinking that an ideal world is one where the universe follows our thinking process (and not the other way around) is simply too much of a human egocentric position.
which, ironically, the scientific community has always proudly claimed to refrain from. thing is that science too fails to do so sometimes, for humans have this tendency of making the universe orbit around them. even some scientists. still today. i know. sigh.

walking down some dark alley i find this. you know what, i am so not calling this number

…there are quite a lot. and in spite of the fact that they are short, you can still say quite a lot with them. but since it can still get quite hard to say any long phrase too, and just for the sake of fun, i thought we might play this game where we only talk with them. what do you think, shall we give it a try? well, read this text back: it's your turn now!

next time they ask me "what's up, dude?" i'll answer "it's a direction"

how come "quite a lot" and "quite a few" mean the same thing?

it's a pretty regular fall day, not cold, not warm, a bit cloudy, but not overcast. just a pretty regular fall day, and just that. in the last few meters of pedaling toward my home i think i should probably go grocery shopping before the stores close. so i climb the stairs, leave the iñicleta (my bicycle's name), and head downstairs again. as i open the door to leave the building i notice something weird. i see orange colors everywhere, as if there were some nearby building on fire or something. alarmed, i look around and notice that it's not any building or car, but the sky, which is orange and purple, tinting everything in deep saturated orange. it's pretty gorgeous in fact. amazingly beautiful. extraordinary, such vivid colors, it's completely surreal, i've certainly never seen anything like this in my life. i see lots of people looking up at the sky too. there's a rainbow. no, two rainbows! but i don't mind; at this moment it's not the sky colors nor the double rainbows, but the fact that the streets are full of people looking at the sky.
people have left the shops, restaurants and cars and stopped whatever they were doing in order to look up at the sky. it's an amazing phenomenon. not only the sky, the rainbows and the crazy colors of the city in orange and purple fire, but also seeing how everybody is amazed at the spectacle and we are all looking up at the sky. to this fantastic surreal painting that we are part of, the double rainbow is nothing but the perfect signature.

i love when random facts/events connect together. the connection often happens in the form of a flashback.

event #1: i just woke up in a pretty fancy hotel in downtown LA. the first thing to do this sunny morning is to perform some exploration and try to identify a place for breakfast. so i start walking, and pass by a huge library that has this huge metallic plate with some equations on physics (or, for that matter, on that gray area where physics meets chemistry). of course i pause my walk and have a closer look at it. i cannot tell exactly what they are; i only recognize what looks to me like Heisenberg's uncertainty principle (but i'm a bit unsure, as this is not an area of science where i am exactly comfortable). but it is clear to me that this is about quantum physics, that's all i can tell. intuitively E seems to be some sort of force or potential to me, given how it gets subtracted from itself in the last equation and how it acts as a driving/forced excitation in the third. but who knows.

yet, i cannot stop looking at the third equation: it really catches my attention, as its shape feels sort of familiar. i look at it more closely, and i realize it's a Helmholtz equation plus an external force indeed, an equation that in isolation expresses the change of the change of something as being proportional to the thing in itself (yes, two changes; this is the laplacian). these sorts of equations/behaviors are common in electrical engineering, and result in all sorts of wave equations.
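an aside not from the original post: the claim above, that taking two derivatives of a harmonic function gives back the function times a constant, is easy to check symbolically. a tiny python/sympy sketch:

```python
import sympy as sp

x, k = sp.symbols("x k", real=True, positive=True)
f = sp.cos(k * x)  # a 1D harmonic function

# two derivatives give back the function times -k**2:
# cos(kx) is an eigenfunction of the laplacian d^2/dx^2
ratio = sp.simplify(sp.diff(f, x, 2) / f)
print(ratio)  # -k**2
```

which is exactly why equations of the helmholtz type end up describing things that oscillate.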
but of course i don’t recognize the quantities in this particular wave equation at all, so i have no idea what the subject of the equation is. only that it must have be describing something in quantum physics and that since after taking changes (derivatives) of it twice still remains propotional to itself, it must be some sort of harmonic function, something that oscillates. indeed harmonics functions (which are eigenfunctions of the laplacian) result in stuff that oscillates like a pendulum, or like a wave (therefore the name of these equations). oscillation means cosinus functions (in 1D), complex exponential (in 2D) or spherical harmonics (in radial 3D). so whatever this equation is describing it is something what undulates like wave. of course at this point i cannot go further, and since i’m still hungry and the reason for this walk was to fulfill my stomach needs, i take a picture of the equations, which is my ever first picture in LA, and i continue walking. i’ll probably never see these equations again in my life. picture taken in the entrance to a library in LA event #2: i’m chatting with my friend to whom i didn’t talk in the last few weeks. today she has been preparing some notes for a course for undergraduate students of chemistry, and she expresses her concern about how to best introduce the Schrodinger’s equation first as an introduction without alienating them with an abstract understanding of what it means. of course, i have no idea myself what the heck she’s talking about, but a science lover as i am, my first reaction is of course to go to Wikipedia and look for “Schrödinger equation”. as soon as start i reading i realize how rotten my memories in physics are. i soon lose any hope to understand anything in this article, unless i would spend a couple of days diving in the subject, which i of course have no time to do. but at least i now know what she’s talking about. sort of. very superficially. 
i’m about to close the page, but i poke one more page-down in the article, and there suddenly i see something that produces an instantaneous flashback. there is an equation there that i have seen before. not that i’ve been trained in equation matching and detection or anything, but this one equation, yes, i have seen it before. i quickly go to my phone, and search for the picture i took in LA a few weeks before. and…. match!!! yay, that Helmholtz equation i saw in LA was this famous Schrodinger’s equation thingy, and from the little bit i understand of this article it seems it has something to do with physics/chemistry and the study of atom. so that’s that it was that thing in LA, cool! of course at this point i cannot go further, and since we are talking about other topics already anyway, i close the Wikipedia. i’ll probably never see these equations again in my life. event #3 weeks later my friend asks me for advice/help in realtime visualization of atomic structures, cause she believes that may probably help her fellow students understand what’s going on in three dimensional space. i receive the notes she is preparing for the students so i can see the context in which the visualization is needed. i’m reading the notes in my morning commuting in the b.a.r.t., and my eyes bump into one of diagrams she had. “eh, wait a minute!”. i have seen these diagrams before when working with the essentials of lighting in computer graphics. or are they just some similar diagrams? they look exactly the same to me, hm. i read the preceding paragraphs, and i see two dimensional coefficients called m and l related to these diagrams, m running from -l to l. pretty much like indices to Legendre polynomials. oki, this cannot be an accident, these are spherical harmonics. like in computer graphics. like in electrical engineering. i get an instantaneous flashback again. Legendre, Harmonics, Helmholtz, Schrödinger!! 
electromagnetic wave propagation, visibility encoding for computer graphics, atoms!! i read the full notes, and indeed, it feels like a present given to me after all these years since i last studied the s, p, d and f atomic orbitals at school. now, 17 years later, i finally learn what they actually are, or more correctly, why they are the way they are! where they come from, how to solve them, how to describe them! how exciting! but of course at this point i cannot go further, and since i'm heading to work and finally made it to my station, and i'm running late, i stop reading the notes here. but this time i won't say that i'll probably never see these equations again in my life.

i love the tickles it produced in my spirit to close this circle today. relating things i know today to things i learnt no less than 17 years ago, as if they had been waiting for the connection to happen. learning is fascinating. and when it happens this way, even more. and all thanks to that metallic panel on the doors of that library in Los Angeles that one morning.

there aren't many things more humiliating than being the hurricane reporter. your dignity gets miserably ruined forever, in front of the whole world, while you wear that ridiculous slicker and wellingtons, fight the wind while trying to speak into the mic, and get your face slapped over and over again by your hoodie. i mean, was it really necessary to send anybody there to report the news? i can imagine the conversation that same morning in the office:

- hey, have you met the new guy yet?
- the intern?
- yep, Mr Look At Me I'm A Professional Journalist. i think we should teach him how things really are over here.
- you know what, they told me there's a hurricane coming tonight in Texas…

looking at the contacts in my phone seems like looking back at the past.
it brings old memories of good times through names that i had almost forgotten, names that, like a thread i can pull from, allow me to recover amazingly vivid moments, situations, experiences, places, people, moods, expectations, smells, adventures, ideas, interests, sounds and songs that would otherwise have sunk and gotten lost forever in an ocean of past times. a few of these names belong to people i met 15 years ago and am still in touch with, and many other names belong to people i only met for 15 minutes. sometimes even less. but regardless of that, as i scroll the contact list i take a moment to think about how i met every single one of these people, in which context. sometimes it all comes automatically in a fraction of a second, sharp and vivid, while other times i have to make an effort, as if for some reason the memory had decided to slip away, perhaps with the complicity of the person the memory is about, or with my own. but in the end all memories come back, one by one; and as i scroll this list down, for every one of these names i recover a bit of that self i was once. looking at this contact list in the phone really seems like looking back at the past.
Grappling with Quantum Weirdness

Peter Woit

Sneaking a Look at God's Cards: Unraveling the Mysteries of Quantum Mechanics. Giancarlo Ghirardi. xxii + 488 pp. Princeton University Press, 2005. $35.

The discovery of quantum mechanics in the mid-1920s by Werner Heisenberg, Erwin Schrödinger and others was a truly revolutionary development in the history of physics. The new theory was an immediate success, explaining a wide range of atomic-scale physical phenomena that had until then been mysterious. Since its discovery, the basic quantum-mechanical formalism has survived more or less unchanged and has become the very foundation of modern physics. But from the earliest days of the theory, confusion about its interpretation engendered a continuing series of debates, and these are the subject of Italian physicist Giancarlo Ghirardi's new book, Sneaking a Look at God's Cards.

As a mathematical formalism, quantum mechanics is remarkably simple. It postulates that the state of a physical system is completely characterized by a vector in an infinite-dimensional vector space (the familiar quantum-mechanical "wavefunction"), and observable quantities correspond to linear operators on this space. What is not so simple is the relation of this formalism to the standard ideas about physical reality used both in everyday life and in experimental physics laboratories. These describe reality in terms of objects with definite positions and velocities, something that doesn't correspond to any quantum-mechanical state-vector.

Ghirardi begins by describing in detail the conceptual setup of quantum mechanics, focusing on certain simple physical systems for which the interpretational issues are as clear as possible. He then recounts the history and content of the interpretational debates that began immediately after the formulation of the theory, the most famous of which pitted Niels Bohr against Albert Einstein.
Einstein believed that quantum mechanics was an incomplete theory, because in many cases it was only capable of giving statistical predictions. His arguments were most sharply made in his work with Boris Podolsky and Nathan Rosen on the so-called EPR (Einstein-Podolsky-Rosen) paradox. Ghirardi carefully explains the EPR paradox, which is a real challenge to encapsulate in a way that makes sense.

The general consensus of the physics community is that Bohr's point of view triumphed, enshrined in what became known as the "Copenhagen interpretation" of quantum mechanics. According to Bohr, the state-vector of a physical system evolves in time according to the Schrödinger equation and does not typically have a well-defined value for classical observables like position and velocity. When the system interacts with an experimental apparatus, the state-vector "collapses" into a state with a well-defined value of the observable being measured.

In general, Bohr's interpretation works perfectly well operationally, but it is conceptually incoherent and leaves important questions unanswered. How exactly does this "collapse" take place? A more coherent interpretation would describe both the system under study and the experimental apparatus in terms of a state-vector, but this approach runs up against the problem that one usually expects quantum state-vectors to be superpositions of simpler states with different values of observables. This point was most clearly made by Schrödinger in his famous thought experiment that leads to the impossible notion of a cat being in a superposition of a state in which it is alive and one in which it is dead.

Ghirardi and collaborators have investigated modifications of the Schrödinger equation involving nonlinear and stochastic terms, such that their versions of quantum mechanics agree with the standard theory in regimes for which they have been tested but evade the "collapse" problem.
These new versions of quantum mechanics are nonrelativistic and encounter severe problems at relativistic energies, so Ghirardi wisely avoids making too much of them, noting just that they suggest new directions for future research. Most physicists believe that quantum mechanics, in its relativistic version as a theory of quantum fields, is a complete, consistent and highly successful conceptual framework. They assume that there must be some well-defined way of describing the entirety of a physical system, experimental apparatus and human observer that appropriately deals with the confusing interpretational issues. As a result, the study of the sorts of questions examined in this book has often been considered somewhat of a backwater.

Recent years have seen great progress in constructing macroscopic systems that behave in characteristically quantum-mechanical fashion, together with possible revolutionary applications of such systems in quantum cryptography and quantum computation. The long-standing interpretational problems of quantum mechanics may be significantly clarified as they become directly relevant to this important new technology. Ghirardi's book provides a careful, evenhanded and well-thought-out introduction to this timely topic.
Decoupling of Equations in Quantum Mechanics

Recall that the time-dependent Schrödinger equation is

\begin{displaymath}
i \hbar \frac{d \Psi({\bf r}, t)}{dt} = {\hat H} \Psi({\bf r}, t),
\end{displaymath} (29)

where ${\bf r}$ represents the set of all Cartesian coordinates of all particles in the system. If we assume that ${\hat H}$ is time-independent, and if we pretend that ${\hat H}$ is just a number, then we can be confident that the solution is just

\begin{displaymath}
\Psi({\bf r}, t) = e^{- i {\hat H} t / \hbar} \Psi({\bf r}, 0).
\end{displaymath} (30)

In fact, this remains true even though ${\hat H}$ is of course an operator, not just a number. So, the propagator in quantum mechanics is

\begin{displaymath}
{\hat G}(t) = e^{- i {\hat H} t / \hbar}.
\end{displaymath} (31)

C. David Sherrill
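As a minimal numerical sketch (not part of the original notes, and working in natural units with ħ = 1 by assumption), the propagator of a time-independent Hamiltonian reduces, in the energy eigenbasis, to a phase factor exp(−i E_n t / ħ) multiplying each expansion coefficient:

```python
import cmath

def propagate(coeffs, energies, t, hbar=1.0):
    """Apply G(t) = exp(-i H t / hbar) in the energy eigenbasis:
    c_n(t) = exp(-i E_n t / hbar) * c_n(0)."""
    return [cmath.exp(-1j * E * t / hbar) * c
            for c, E in zip(coeffs, energies)]

# Example: an equal superposition of two levels with E0 = 0 and E1 = 1
# (hypothetical values). Since the propagator is unitary, the norm of
# the state is preserved.
psi0 = [1 / 2 ** 0.5, 1 / 2 ** 0.5]
psi_t = propagate(psi0, [0.0, 1.0], t=3.7)
norm = sum(abs(c) ** 2 for c in psi_t)
```

Each coefficient only acquires a phase, so probabilities |c_n|² of individual energy eigenstates never change under time evolution; only relative phases (and hence interference) evolve.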
Phase qubit

The phase qubit is a superconducting device based on the superconductor-insulator-superconductor (SIS) Josephson junction,[1] designed to operate as a quantum bit, or qubit.[2] The phase qubit is closely related to, yet distinct from, the flux qubit and the charge qubit, which are also quantum bits implemented by superconducting devices. A phase qubit coupled to a piezoelectric mechanical resonator was used to create the world's first quantum machine.

A phase qubit is a current-biased Josephson junction, operated in the zero-voltage state with a non-zero current bias. A Josephson junction is a tunnel junction,[3] made of two pieces of superconducting metal separated by a very thin insulating barrier, about 1 nm in thickness. The barrier is thin enough that electrons (in the superconducting state, Cooper-paired electrons) can tunnel through the barrier at an appreciable rate. Each of the superconductors that make up the Josephson junction is described by a macroscopic wavefunction, as described by the Ginzburg-Landau theory for superconductors.[4] The difference in the complex phases of the two superconducting wavefunctions is the most important dynamic variable for the Josephson junction, and is called the phase difference δ, usually just "the phase" for short.

Main equations describing the SIS junction

The Josephson equation [1] relates the superconducting current (usually called the supercurrent) I through the tunnel junction to the phase difference δ:

I = I_0 \sin \delta (Josephson current-phase relationship)

Here I_0 is the critical current of the tunnel junction, determined by the area and thickness of the tunnel barrier in the junction, and by the properties of the superconductors on either side of the barrier.
For a junction with identical superconductors on either side of the barrier, the critical current is related to the superconducting gap Δ and the normal-state resistance R_n of the tunnel junction by the Ambegaokar-Baratoff formula [3]

I_0 = \frac{\pi \Delta}{2 e R_n} (Ambegaokar-Baratoff formula)

The Gor'kov phase evolution equation [1] gives the rate of change of the phase (the "velocity" of the phase) as a linear function of the voltage V:

V = \frac{\hbar}{2 e} \frac{d \delta}{d t} (Gor'kov-Josephson phase evolution equation)

This equation is a generalization of the Schrödinger equation for the phase of the BCS wavefunction (see BCS theory). The generalization was carried out by Gor'kov in 1958.[5]

The McCumber-Stewart model

The ac and dc Josephson relations control the behavior of the Josephson junction itself. The geometry of the Josephson junction, two plates of superconducting metal separated by a thin tunnel barrier, is that of a parallel-plate capacitor, so in addition to the Josephson element the device includes a parallel capacitance C. The external circuit is usually modeled simply as a resistor R in parallel with the Josephson element. The set of three parallel circuit elements is biased by an external current source I, hence the name current-biased Josephson junction.[6] Solving the circuit equations yields a single dynamic equation for the phase:

\frac{\hbar C}{2 e} \frac{d^2 \delta}{dt^2} + \frac{\hbar}{2 e R} \frac{d \delta}{dt} = I - I_0 \sin \delta.

The terms on the left side are identical to those of a particle with coordinate (location) δ, with mass proportional to the capacitance C, and with friction inversely proportional to the resistance R.
The particle moves in a conservative force field given by the term on the right, which corresponds to the particle interacting with a potential energy U(δ) given by

U(\delta) = \frac{\hbar}{2 e} \left( -I_0 \cos \delta - I \delta \right).

This is the washboard potential,[6] so called because it has an overall linear dependence −I δ, modulated by the washboard modulation −I_0 \cos δ.

The zero-voltage state describes one of the two distinct dynamic behaviors displayed by the phase particle, and corresponds to the particle being trapped in one of the local minima in the washboard potential. These minima exist for bias currents |I| < I_0, i.e. for currents below the critical current. With the phase particle trapped in a minimum, it has zero average velocity and therefore zero average voltage. A Josephson junction will allow currents up to I_0 to pass through without any voltage; this corresponds to the superconducting branch of the Josephson junction's current-voltage characteristic.

The voltage state is the other dynamic behavior displayed by a Josephson junction, and corresponds to the phase particle free-running down the slope of the potential, with a non-zero average velocity and therefore non-zero voltage. This behavior always occurs for currents above the critical current, i.e. for |I| > I_0, and for large resistances R it also occurs for currents somewhat below the critical current. This state corresponds to the voltage branch of the Josephson junction's current-voltage characteristic. For large-resistance junctions the zero-voltage and voltage branches overlap for some range of currents below the critical current, so the device behavior is hysteretic.
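The trapping condition can be made concrete with a small sketch (illustrative only, currents in arbitrary units): setting dU/dδ = 0 gives sin δ0 = I/I0, and a true local minimum (the zero-voltage state) requires cos δ0 > 0, which is possible only for |I| < I0.

```python
import math

def washboard_minimum(I, I0):
    """Phase value of the trapping minimum of the tilted washboard
    U(delta) ~ -I0*cos(delta) - I*delta, or None in the voltage state.
    math.asin returns a value in [-pi/2, pi/2], where cos(delta) >= 0,
    so the stationary point it picks is indeed a minimum."""
    if abs(I) >= I0:
        return None  # no minimum: the phase runs freely down the slope
    return math.asin(I / I0)

d0 = washboard_minimum(0.5, 1.0)     # trapped: zero-voltage state
d_run = washboard_minimum(1.2, 1.0)  # |I| > I0: voltage state, no minimum
```

As the bias current approaches the critical current, the returned phase approaches π/2 and the minimum becomes ever shallower, which is exactly the regime exploited by the phase qubit.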
Nonlinear inductor

Another way to understand the behavior of a Josephson junction in the zero-voltage state is to consider the SIS tunnel junction as a nonlinear inductor.[7] When the phase is trapped in one of the minima, the phase value is limited to a small range about the phase value at the potential minimum, which we will call δ_0. The current through the junction is related to this phase value by

I = I_0 \sin \delta_0.

If we consider small variations Δδ in the phase about the minimum δ_0 (small enough to maintain the junction in the zero-voltage state), then the current will vary by

\Delta I = \left( I_0 \cos \delta_0 \right) \Delta \delta.

These variations in the phase give rise to a voltage through the ac Josephson relation:

\Delta V = \frac{\hbar}{2 e} \frac{d \Delta \delta}{dt} = \frac{\hbar}{2 e} \frac{1}{I_0 \cos \delta_0} \frac{d \Delta I}{dt} = L \frac{d \Delta I}{dt}.

This last relation is the defining equation for an inductor with inductance

L = \frac{\hbar}{2 e} \frac{1}{I_0 \cos \delta_0}.

This inductance depends on the value of the phase δ_0 at the minimum in the washboard potential, so the inductance value can be controlled by changing the bias current I. For zero bias current, the inductance reaches its minimum value,

L_{\rm min} = \frac{\hbar}{2 e} \frac{1}{I_0} = \frac{\hbar R_n}{\pi \Delta}.

As the bias current increases, the inductance increases. When the bias current is very close to (but less than) the critical current I_0, the value of the phase δ_0 is very close to π/2, as seen from the dc Josephson relation above. This means that the inductance value L becomes very large, diverging as I reaches the critical current I_0. The nonlinear inductor represents the response of the Josephson junction to changes in bias current.
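The bias dependence of this inductance can be illustrated numerically. The junction parameters below (a 1 µA critical current and a 1 pF capacitance) are hypothetical example values, not taken from the text; the plasma frequency ω_p = 1/√(LC) of the resulting LC resonator follows directly.

```python
import math

E = 1.602176634e-19     # elementary charge, in coulombs
HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

def josephson_inductance(I, I0):
    """L = (hbar/2e) / (I0 cos delta0), with sin delta0 = I/I0, so that
    cos delta0 = sqrt(1 - (I/I0)**2) in the zero-voltage state."""
    return HBAR / (2 * E * I0 * math.sqrt(1.0 - (I / I0) ** 2))

def plasma_frequency(I, I0, C):
    """omega_p = 1/sqrt(L*C) for the junction's nonlinear LC resonance."""
    return 1.0 / math.sqrt(josephson_inductance(I, I0) * C)

I0, C = 1e-6, 1e-12                         # hypothetical 1 uA, 1 pF junction
L0 = josephson_inductance(0.0, I0)          # minimum inductance, hbar/(2e I0)
L_bias = josephson_inductance(0.9e-6, I0)   # grows toward the critical current
wp0 = plasma_frequency(0.0, I0, C)          # highest plasma frequency, at I = 0
```

Sweeping I toward I0 makes L diverge and drives ω_p toward zero, matching the trends described in the text: the junction is a bias-tunable inductor.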
When the parallel capacitance from the device geometry is included, in parallel with the inductor, this forms a nonlinear LC resonator, with resonance frequency

\omega_p = \frac{1}{\sqrt{L C}} = \sqrt{\frac{2 e I_0 \cos \delta_0}{\hbar C}},

which is known as the plasma frequency of the junction. This corresponds to the oscillation frequency of the phase particle in the bottom of one of the minima of the washboard potential. Since the phase value in the washboard minimum satisfies

\cos \delta_0 = \sqrt{1-(I/I_0)^2},

the plasma frequency is

\omega_p = \sqrt{\frac{2 e I_0}{\hbar C}} \left[ 1 - (I/I_0)^2 \right]^{1/4},

clearly showing that the plasma frequency approaches zero as the bias current approaches the critical current. The simple tunability of the current-biased Josephson junction in its zero-voltage state is one of the key advantages the phase qubit has over some other qubit implementations, although it also limits the performance of this device: fluctuations in current generate fluctuations in the plasma frequency, which cause dephasing of the quantum states.

Quantized energy levels

The phase qubit is operated in the zero-voltage state, with |I| < I_0. At very low temperatures, much less than 1 K (achievable using a cryogenic system known as a dilution refrigerator), with a sufficiently high-resistance and small-capacitance Josephson junction, quantum energy levels [8] become detectable in the local minima of the washboard potential. These were first detected using microwave spectroscopy, where a weak microwave signal is added to the current I biasing the junction. Transitions from the zero-voltage state to the voltage state were measured by monitoring the voltage across the junction.
Clear resonances at certain frequencies were observed, which corresponded well with the quantum transition energies obtained by solving the Schrödinger equation [9] for the local minimum in the washboard potential. Classically, only a single resonance is expected, centered at the plasma frequency ω_p. Quantum mechanically, the potential minimum in the washboard potential can accommodate several quantized energy levels, with the lowest (ground to first excited state) transition at an energy E_{01} ≈ ħω_p, but with the higher-energy transitions (first to second excited state, second to third excited state) shifted somewhat below this, due to the anharmonic nature of the trapping potential minimum, whose resonance frequency falls as the energy in the minimum increases. Observing multiple, discrete levels in this fashion is extremely strong evidence that the superconducting device is behaving quantum mechanically, rather than classically.

The phase qubit uses the lowest two energy levels in the local minimum; the ground state |g⟩ is the zero state of the qubit, and the first excited state |e⟩ is the one state. The slope of the washboard potential is set by the bias current I, and changes in this current change the washboard potential, changing the shape of the local minimum (equivalently, changing the value of the nonlinear inductance, as discussed above). This changes the energy difference between the ground and first excited states. Hence the phase qubit has a tunable energy splitting.

References

1. ^ a b c Barone, Antonio; Paterno, Gianfranco (1981). Physics and Applications of the Josephson Effect. New York: Wiley.
3. ^ a b van Duzer, Theodore; Turner, Charles (1999). Principles of Superconductive Devices and Circuits, 2nd ed. Upper Saddle River, NJ: Prentice-Hall.
4. ^ Tinkham, Michael (1996). Introduction to Superconductivity. New York: McGraw-Hill.
5. ^ Gor'kov, L.P. (1958).
Soviet Phys. JETP 7: 505.
6. ^ a b Likharev, Konstantin (1986). Dynamics of Josephson Junctions and Circuits. New York: Gordon and Breach.
7. ^ Devoret, Michel; Martinis, John (2004). "Superconducting Qubits". In Esteve, Daniel; Raimond, J.-M.; Dalibard, J. Quantum Entanglement and Information Processing. Elsevier. ISBN 0-444-51728-6.
8. ^ Martinis, J.M.; Devoret, M.; Clarke, J. (1985). "Energy-Level Quantization in the Zero-Voltage State of a Current-Biased Josephson Junction". Phys. Rev. Lett. 55 (15): 1543–1546. Bibcode:1985PhRvL..55.1543M. doi:10.1103/PhysRevLett.55.1543. PMID 10031852.
9. ^ Griffiths, David J. (2004). Introduction to Quantum Mechanics, 2nd ed. New York: Benjamin Cummings. ISBN 0-13-111892-7.
Oceanographic Waves

Just as a rock dropped into water produces waves, sudden displacements such as landslides and earthquakes can produce high-energy waves of short duration that can devastate coastal regions (see tsunami). Hurricanes traveling over shallow coastal waters can generate storm surges that in turn can cause devastating coastal flooding (see under storm).

Seismic and Atmospheric Waves

wave, in physics, the transfer of energy by the regular vibration, or oscillatory motion, either of some material medium or by the variation in magnitude of the field vectors of an electromagnetic field (see electromagnetic radiation). Many familiar phenomena are associated with energy transfer in the form of waves. Sound is a longitudinal wave that travels through material media by alternately forcing the molecules of the medium closer together, then spreading them apart. Light and other forms of electromagnetic radiation travel through space as transverse waves; the displacements at right angles to the direction of the waves are the field-intensity vectors rather than motions of the material particles of some medium. With the development of the quantum theory, it was found that particles in motion also have certain wave properties, including an associated wavelength and frequency related to their momentum and energy. Thus, the study of waves and wave motion has applications throughout the entire range of physical phenomena.

Classification of Waves
Parameters of Waves
Wave Fronts and Rays

wave-particle duality: Principle that subatomic particles possess some wavelike characteristics, and that electromagnetic waves, such as light, possess some particlelike characteristics. In 1905, by demonstrating the photoelectric effect, Albert Einstein showed that light, which until then had been thought of as a form of electromagnetic wave (see electromagnetic radiation), must also be thought of as localized in packets of discrete energy (see photon).
In 1924 Louis-Victor de Broglie proposed that electrons have wave properties such as wavelength and frequency; their wavelike nature was experimentally established in 1927 by the demonstration of their diffraction. The theory of quantum electrodynamics combines the wave theory and the particle theory of electromagnetic radiation.

wave-cut platform (or abrasion platform): Gently sloping rock ledge that extends from the high-tide level at a steep cliff base to below the low-tide level. It develops as a result of wave abrasion; beaches protect the shore from abrasion and therefore prevent the formation of platforms. A platform is broadened as waves erode a notch at the base of the sea cliff, causing overhanging rock to fall. As the sea cliffs are attacked, weak rocks are quickly eroded, leaving the more resistant rocks as protrusions.

waveguide: Device that constrains the path of electromagnetic waves (see electromagnetic radiation). It can be used to transmit power or signals in the form of waves while minimizing power loss. Common examples are metallic tubes, coaxial cables, and optical fibres (see fibre optics). Waveguides transmit energy by propagating electromagnetic waves through the inside of a tube to a receiver at the other end. Metal waveguides are used in such technologies as microwave ovens, radar systems, radio relay systems, and radio telescopes.

wave function: Variable quantity that mathematically describes the wave characteristics of a particle. It is related to the likelihood of the particle being at a given point in space at a given time, and may be thought of as an expression for the amplitude of the particle wave, though this is strictly not physically meaningful.
The square of the wave function is the significant quantity, as it gives the probability of finding the particle at a given point in space and time. See also wave-particle duality.

wave front: Imaginary surface that represents corresponding points of waves vibrating in unison. As identical waves from the same source travel through a homogeneous medium, corresponding crests and troughs are in phase at any instant; that is, they have completed the same fraction of their periodic motion. Any surface drawn through all points of the same phase constitutes a wave front.

wave: In oceanography, a ridge or swell on the surface of a body of water, normally having a forward motion distinct from the motions of the particles that compose it. Ocean waves are fairly regular, with an identifiable wavelength between adjacent crests and with a definite frequency of oscillation. Waves result when a generating force (usually the wind) displaces surface water and a restoring force returns it to its undisturbed position. Surface tension alone is the restoring force for small waves. For large waves, gravity is more important.

ultrasonics: Vibrational or stress waves in elastic media that have a frequency above 20 kilohertz, the highest frequency of sound waves that can be detected by the human ear. They can be generated or detected by piezoelectric transducers (see piezoelectricity). High-power ultrasonics produce distortion in a medium; applications include ultrasonic welding, drilling, irradiation of fluid suspensions (as in wine clarification), cleaning of surfaces (such as jewelry), and disruption of biological structures. Low-power ultrasonic waves do not cause distortion; uses include sonar, structure testing, and medical imaging and diagnosis. Some animals, including bats, employ ultrasonic echolocation for navigation.
tsunami (or seismic sea wave or tidal wave): Catastrophic ocean wave, usually caused by a submarine earthquake. Underwater or coastal landslides or volcanic eruptions also may cause tsunamis. The term tsunami is Japanese for "harbour wave." The term tidal wave is a misnomer, because the wave has no connection with the tides. Perhaps the most destructive tsunami ever occurred in 2004 in the Indian Ocean, after an earthquake struck the seafloor off the Indonesian island of Sumatra. More than 200,000 people were killed in Indonesia, Thailand, India, Sri Lanka, and other countries as far away as Somalia on the Horn of Africa.

sound effect: Artificial imitation of sound to accompany action and supply realism in a dramatic production. Sound effects were first used in the theatre, where they can represent a range of action too vast or difficult to present onstage, from battles and gunshots to trotting horses and rainstorms. Various methods were devised by backstage technicians to reproduce sounds (e.g., rattling sheet metal to create thunder); today most sound effects are reproduced by recordings. An important part of old-fashioned radio dramas, sound effects are still painstakingly added to television and movie soundtracks.

sound barrier: Sharp rise in aerodynamic drag that occurs as an aircraft approaches the speed of sound. At sea level the speed of sound is about 750 miles (1,200 km) per hour, and at 36,000 feet (11,000 metres) it is about 650 miles (1,050 km) per hour. The sound barrier was formerly an obstacle to supersonic flight. If an aircraft flies at somewhat less than sonic speed, the pressure waves (sound waves) it creates outspeed their sources and spread out ahead of it. Once the aircraft reaches sonic speed the waves are unable to get out of its way.
Strong local shock waves form on the wings and body; airflow around the craft becomes unsteady, and severe buffeting may result, with serious stability difficulties and loss of control over flight characteristics. Generally, aircraft properly designed for supersonic flight have little difficulty in passing through the sound barrier, but the effect on those designed for efficient operation at subsonic speeds may become extremely dangerous. The first pilot to break the sound barrier was Chuck Yeager (1947), in the experimental X-1 aircraft.

sound: Mechanical disturbance that propagates as a longitudinal wave through a solid, liquid, or gas. A sound wave is generated by a vibrating object. The vibrations cause alternating compressions (regions of crowding) and rarefactions (regions of scarcity) in the particles of the medium. The particles move back and forth in the direction of propagation of the wave. The speed of sound through a medium depends on the medium's elasticity, density, and temperature. In dry air at 32 °F (0 °C), the speed of sound is 1,086 feet (331 metres) per second. The frequency of a sound wave, perceived as pitch, is the number of compressions (or rarefactions) that pass a fixed point per unit time. The frequencies audible to the human ear range from approximately 20 hertz to 20 kilohertz. Intensity is the average flow of energy per unit time through a given area of the medium and is related to loudness. See also acoustics; ear; hearing; ultrasonics.

Viscount Melville Sound (formerly Melville Sound): Body of water, northern Canada. Located in the Arctic Archipelago, between Melville and Victoria islands, the sound is 250 mi (400 km) long and 100 mi (160 km) wide. Its discovery, when reached from the east (1819–20) by William E. Parry and from the west (1850–54) by Robert McClure, proved the existence of the Northwest Passage.
The sound is navigable only under favourable weather conditions.

Scoresby Sound: Deep inlet, Norwegian Sea, eastern central coast of Greenland. It runs inland for 70 mi (110 km) and has numerous fjords (the longest is 280 mi, or 451 km) and two large islands. It was charted by William Scoresby in 1822.

Puget Sound: Arm of the Pacific Ocean indenting northwestern Washington, U.S. It was explored by the British navigator George Vancouver in 1792 and named by him for Peter Puget, a second lieutenant in his expedition, who probed the main channel. It has many deepwater harbours, including Seattle, Tacoma, Everett, and Port Townsend, which are shipping ports for the rich farmlands along the river estuaries. It provides a sheltered area for recreational boating and salmon fishing.

Pamlico Sound: Shallow body of water, eastern shore of North Carolina, U.S. It is separated from the Atlantic Ocean by the Outer Banks. It extends 80 mi (130 km) south from Roanoke Island and is 8–30 mi (13–48 km) wide. Numerous waterfowl nest along the coastal waters; there is some commercial fishing, especially for oysters.

Milford Sound: Inlet of the Tasman Sea, southwestern coast of South Island, New Zealand. About 2 mi (3 km) wide, the sound extends inland for 12 mi (19 km). It was named by a whaler in the 1820s for its resemblance to Milford Haven in Wales. It is the northernmost fjord in Fiordland National Park and is the site of Milford Sound town, one of the region's few permanently inhabited places.

McMurdo Sound: Bay, western extension of the Ross Sea, Antarctica.
Lying at the edge of the Ross Ice Shelf, the channel is 92 mi (148 km) long and up to 46 mi (74 km) wide; it has been a major centre for Antarctic explorations. First discovered in 1841 by Scottish explorer James C. Ross, it served as one of the main access routes to the Antarctic continent. Ross Island, on the shores of the sound, was the site of headquarters for British explorers Robert Falcon Scott and Ernest Shackleton.

Albemarle Sound: Coastal inlet, northeastern North Carolina, U.S. Protected from the Atlantic Ocean by the Outer Banks, it is about 50 mi (80 km) long and 5–14 mi (8–23 km) wide. It is connected with Chesapeake Bay by the Dismal Swamp Canal and the Albemarle and Chesapeake Canal. Elizabeth City is its chief port. Explored by Ralph Lane in 1586, it was later named for George Monck, duke of Albemarle.

seismic wave: Vibration generated by an earthquake, explosion, or similar phenomenon and propagated within the Earth or along its surface. Earthquakes generate two principal types of waves: body waves, which travel within the Earth, and surface waves, which travel along the surface. Seismograms (recorded traces of the amplitude and frequency of seismic waves) yield information about the Earth and its subsurface structure; artificially generated seismic waves are used in oil and gas prospecting.

electromagnetic radiation: Energy propagated through free space or through a material medium in the form of electromagnetic waves. Examples include radio waves, infrared radiation, visible light, ultraviolet radiation, X rays, and gamma rays. Electromagnetic radiation exhibits wavelike properties such as reflection, refraction, diffraction, and interference, but also exhibits particlelike properties in that its energy occurs in discrete packets, or quanta.
Though all types of electromagnetic radiation travel at the same speed, they vary in frequency and wavelength, and interact with matter differently. A vacuum is the only perfectly transparent medium; all others absorb some frequencies of electromagnetic radiation.

A wave is a disturbance that propagates through space and time, usually with transference of energy. While a mechanical wave exists in a medium (which on deformation is capable of producing elastic restoring forces), waves of electromagnetic radiation (and probably gravitational radiation) can travel through vacuum, that is, without a medium. Waves travel and transfer energy from one point to another, often with little or no permanent displacement of the particles of the medium (that is, with little or no associated mass transport); instead there are oscillations around almost fixed locations.

Agreeing on a single, all-encompassing definition for the term wave is non-trivial. A vibration can be defined as a back-and-forth motion around a reference value. However, defining the necessary and sufficient characteristics that qualify a phenomenon to be called a wave is, at least, flexible. The term is often understood intuitively as the transport of disturbances in space, not associated with motion of the medium occupying this space as a whole. In a wave, the energy of a vibration is moving away from the source in the form of a disturbance within the surrounding medium (Hall, 1980: 8). However, this notion is problematic for a standing wave (for example, a wave on a string), where energy is moving in both directions equally, or for electromagnetic/light waves in a vacuum, where the concept of medium does not apply. For such reasons, wave theory represents a peculiar branch of physics that is concerned with the properties of wave processes independently from their physical origin (Ostrovsky and Potapov, 1999).
The peculiarity lies in the fact that this independence from physical origin is accompanied by a heavy reliance on origin when describing any specific instance of a wave process. For example, acoustics is distinguished from optics in that sound waves are related to a mechanical rather than an electromagnetic wave-like transfer/transformation of vibratory energy. Concepts such as mass, momentum, inertia, or elasticity therefore become crucial in describing acoustic (as opposed to optic) wave processes. This difference in origin introduces certain wave characteristics particular to the properties of the medium involved (for example, in the case of air: vortices, radiation pressure, shock waves, etc.; in the case of solids: Rayleigh waves, dispersion, etc.; and so on).

Other properties, however, although usually described in an origin-specific manner, may be generalized to all waves. For example, based on the mechanical origin of acoustic waves, there can be a moving disturbance in space-time if and only if the medium involved is neither infinitely stiff nor infinitely pliable. If all the parts making up a medium were rigidly bound, then they would all vibrate as one, with no delay in the transmission of the vibration and therefore no wave motion (or rather infinitely fast wave motion). On the other hand, if all the parts were independent, then there would not be any transmission of the vibration and again, no wave motion (or rather infinitely slow wave motion). Although the above statements are meaningless in the case of waves that do not require a medium, they reveal a characteristic that is relevant to all waves regardless of origin: within a wave, the phase of a vibration (that is, its position within the vibration cycle) is different for adjacent points in space because the vibration reaches these points at different times.
Similarly, wave processes revealed from the study of wave phenomena with origins different from that of sound waves can be equally significant to the understanding of sound phenomena. A relevant example is Young's principle of interference (Young, 1802, in Hunt, 1978: 132). This principle was first introduced in Young's study of light and, within some specific contexts (for example, scattering of sound by sound), is still a researched area in the study of sound. A wave is polarized, if it can only oscillate in one direction. The polarization of a transverse wave describes the direction of oscillation, in the plane perpendicular to the direction of travel. Longitudinal waves such as sound waves do not exhibit polarization, because for these waves the direction of oscillation is along the direction of travel. A wave can be polarized by using a polarizing filter. Examples of waves include: Mathematical description From a mathematical point of view, the most primitive or fundamental wave is harmonic (sinusoidal) wave which is described by the equation f(x,t) = Asin(omega t-kx)), where A is the amplitude of a wave - a measure of the maximum disturbance in the medium during one wave cycle (the maximum distance from the highest point of the crest to the equilibrium). In the illustration to the right, this is the maximum vertical distance between the baseline and the wave. The units of the amplitude depend on the type of wave — waves on a string have an amplitude expressed as a distance (meters), sound waves as pressure (pascals) and electromagnetic waves as the amplitude of the electric field (volts/meter). The amplitude may be constant (in which case the wave is a c.w. or continuous wave), or may vary with time and/or position. The form of the variation of amplitude is called the envelope of the wave. The wavelength (denoted as lambda) is the distance between two sequential crests (or troughs). 
This is generally measured in meters; it is also commonly measured in nanometers for the optical part of the electromagnetic spectrum. A wavenumber $k$ can be associated with the wavelength by the relation $k = \frac{2\pi}{\lambda}$.

The period $T$ is the time for one complete cycle of an oscillation of a wave. The frequency $f$ (also frequently denoted $\nu$) is the number of periods per unit time (for example, one second) and is measured in hertz. These are related by $f = \frac{1}{T}$. In other words, the frequency and period of a wave are reciprocals of each other. The angular frequency $\omega$ represents the frequency in radians per second. It is related to the frequency by $\omega = 2\pi f = \frac{2\pi}{T}$.

There are two velocities associated with waves. The first is the phase velocity, which gives the rate at which the wave propagates: $v_p = \frac{\omega}{k} = \lambda f$. The second is the group velocity, which gives the velocity at which variations in the shape of the wave's amplitude propagate through space. This is the rate at which information can be transmitted by the wave. It is given by $v_g = \frac{\partial \omega}{\partial k}$.

The wave equation

The wave equation is a differential equation that describes the evolution of a harmonic wave over time. The equation has slightly different forms depending on how the wave is transmitted and the medium it is traveling through. Considering a one-dimensional wave traveling down a rope along the x-axis with velocity $v$ and amplitude $u$ (which generally depends on both $x$ and $t$), the wave equation is
$$\frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2}.$$
In three dimensions, this becomes
$$\frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = \nabla^2 u,$$
where $\nabla^2$ is the Laplacian. The velocity $v$ will depend on both the type of wave and the medium through which it is being transmitted. A general solution for the wave equation in one dimension was given by d'Alembert. 
It is
$$u(x,t) = F(x - vt) + G(x + vt).$$
This can be viewed as two pulses traveling down the rope in opposite directions: $F$ in the $+x$ direction, and $G$ in the $-x$ direction. If we generalize the single coordinate $x$ above to the three directions $x$, $y$, $z$, we can describe a wave propagating in three dimensions. The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions, which can be used to describe the probability density of a particle. Quantum mechanics also describes particle properties that other waves, such as light and sound, have on the atomic scale and below.

Traveling waves

A simple wave or traveling wave, also sometimes called a progressive wave, is a disturbance that varies with both time $t$ and distance $z$ in the following way:
$$y(z,t) = A(z,t)\sin(kz - \omega t + \phi),$$
where $A(z,t)$ is the amplitude envelope of the wave, $k$ is the wave number, and $\phi$ is the phase. The phase velocity $v_p$ of this wave is given by
$$v_p = \frac{\omega}{k} = \lambda f,$$
where $\lambda$ is the wavelength of the wave.

Standing wave

Also see: Acoustic resonance, Helmholtz resonator, and organ pipe

Propagation through strings

The speed of a wave traveling along a vibrating string ($v$) is given by the square root of the tension ($T$) divided by the linear density ($\mu$):
$$v = \sqrt{\frac{T}{\mu}}.$$

Transmission medium

The medium that carries a wave is called a transmission medium. It can be classified into one or more of the following categories:
• A bounded medium if it is finite in extent, otherwise an unbounded medium.
• A uniform medium if its physical properties are unchanged at different locations in space.
• An isotropic medium if its physical properties are the same in different directions.

See also

• Campbell, M. and Greated, C. (1987). The Musician's Guide to Acoustics. New York: Schirmer Books.
• Hunt, F. V. (1978). Origins in Acoustics. New York: Acoustical Society of America Press, (1992).
• Ostrovsky, L. A. 
and Potapov, A. S. (1999). Modulated Waves: Theory and Applications. Baltimore: The Johns Hopkins University Press.
• Vassilakis, P.N. (2001). Perceptual and Physical Properties of Amplitude Fluctuation and their Musical Significance. Doctoral Dissertation. University of California, Los Angeles.
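The d'Alembert solution quoted above, $u(x,t)=F(x-vt)+G(x+vt)$, can be checked numerically: for any smooth pulse shapes it should satisfy the one-dimensional wave equation. A short sketch (the Gaussian pulses and the value of $v$ are arbitrary choices for illustration):

```python
import numpy as np

# Check d'Alembert's solution u(x, t) = F(x - v t) + G(x + v t) against the
# 1-D wave equation (1/v^2) u_tt = u_xx, using central finite differences.
# The Gaussian pulse shapes and the speed v are arbitrary illustrative choices.
v = 2.0
F = lambda s: np.exp(-s**2)               # right-moving pulse
G = lambda s: 0.5 * np.exp(-(s - 1)**2)   # left-moving pulse

def u(x, t):
    return F(x - v * t) + G(x + v * t)

x, t, h = 0.3, 0.7, 1e-4
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2

residual = u_tt / v**2 - u_xx
print(abs(residual))   # ~0: the superposed pulses solve the wave equation
```

The residual vanishes up to finite-difference error for any smooth $F$ and $G$, which is exactly the content of d'Alembert's general solution.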
Partnering Events: TechConnect Summit Clean Technology 2008

Quantum Gates Simulator Based on DSP TI6711
V.H. Tellez, C. Iuga, G.I. Duchen, A. Campero
Universidad Autonoma Metropolitana, MX
Keywords: quantum gates, quantum bits, Hamiltonian, simulation

Quantum theory has found a new field of application in the information and computation fields in recent years. We developed a quantum gate simulator based on the Digital Signal Processor (DSP) TI6711, using the Hamiltonian in the time-dependent Schrödinger equation. The Hamiltonian describes the quantum system by manipulating a quantum bit (qubit) using unitary matrices. The gates simulated are the conditional NOT operation, the controlled-NOT gate, the multi-bit controlled-NOT (Toffoli) gate, the rotation gate (Hadamard transform), and the twiddle gate, all useful in quantum computation due to their inherently reversible character. With the simulation process, we have obtained approximately 95% fidelity for the action of the gates on arbitrary two- and three-qubit input states. We have determined an average error probability bounded above by 0.07 ± 0.01.

Nanotech 2008 Conference Program Abstract
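The gate action described in the abstract (unitary matrices acting on qubit state vectors) can be sketched in a few lines. This is a generic numpy illustration of two of the reversible gates named there, not the DSP TI6711 implementation:

```python
import numpy as np

# Quantum gates are unitary matrices acting on qubit state vectors.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)      # Hadamard (rotation) gate
CNOT = np.array([[1, 0, 0, 0],           # controlled-NOT on two qubits
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket0 = np.array([1.0, 0.0])              # |0>
# Hadamard on the first qubit, then CNOT: entangles |00> into (|00> + |11>)/sqrt(2)
state = CNOT @ np.kron(H @ ket0, ket0)
print(state)                             # ~[0.707, 0, 0, 0.707]

# Unitarity (U^dagger U = I) is what makes these gates reversible
assert np.allclose(H.conj().T @ H, np.eye(2))
assert np.allclose(CNOT.T @ CNOT, np.eye(4))
```

Because every gate is unitary, applying the conjugate transpose undoes it exactly, which is the reversibility property the abstract highlights.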
Biology Topics In an effort to illuminate connections between chemistry and biology and spark students' excitement for chemistry, we incorporate frequent biology-related examples into the lectures. These in-class examples range from two to ten minutes, designed to succinctly introduce biological connections without sacrificing any chemistry content in the curriculum. A list of the biology-, medicine-, and MIT research-related examples used in 5.111 is provided below. Click on the associated PDF for more information on each example. To reinforce the connections formed in lecture, we also include biology-related problems in each homework assignment. Selected homework problems and solutions are available below. L1 The importance of chemical principles Chemical principles in research at MIT   L2 Discovery of electron and nucleus, need for quantum mechanics   Activity. Rutherford backscattering experiment with ping-pong ball alpha particles L3 Wave-particle duality of light Quantum dot research at MIT (PDF)   L4 Wave-particle duality of matter, Schrödinger equation   Demo. Photoelectric effect demonstration L5 Hydrogen atom energy levels   Demo. Viewing the hydrogen atom spectrum L6 Hydrogen atom wavefunctions (orbitals)     L7 p-orbitals     L8 Multielectron atoms and electron configurations     L9 Periodic trends Alkali metals in the body: Na and K versus Li (lithiated 7-up) (PDF) Selected biology-related questions based on Lectures 1-9. 
(PDF) Answer key (PDF) L10 Periodic trends continued; covalent bonds Atomic size: sodium ion channels in neurons (PDF)   L11 Lewis structures Lewis structure examples: 1) Cyanide ion in cassava plants, cigarettes 2) Thionyl chloride for the synthesis of novocaine L12 Exceptions to Lewis structure rules; ionic bonds 1) Free radicals in biology (in DNA damage and essential for life) 2) Lewis structure example: nitric oxide (NO) in vasodilation (and Viagra) L13 Polar covalent bonds; VSEPR theory 1) Water- versus fat-soluble vitamins (comparing folic acid and vitamin A) 2) Molecular shape: importance in enzyme-substrate complexes L14 Molecular orbital theory 2008 Nobel Prize in chemistry: Green Fluorescent Protein (GFP) (PDF) L15 Valence bond theory and hybridization Restriction of rotation around double bonds: application to drug design (PDF)   L16 Determining hybridization in complex molecules; thermochemistry and bond energies / bond enthalpies 1) Hybridization example: ascorbic acid (vitamin C) 2) Thermochemistry of glucose oxidation: harnessing energy from plants L17 Entropy and disorder 1) Hybridization example: identifying molecules that follow the "morphine rule" 2) ATP hydrolysis in the body L18 Free energy and control of spontaneity 1) ATP-coupled reactions in biology 2) Thermodynamics of hydrogen bonding: relevance to DNA replication L19 Chemical equilibrium     L20 Le Chatelier's principle and applications to blood-oxygen levels 1) Maximizing the yield of nitrogen fixation: inspiration from bacteria 2) Le Chatelier's principle and hemoglobin: blood-oxygen levels (PDF - 2.7 MB) Selected biology-related questions based on Lectures 10-20 (PDF) Answer key (PDF) L21 Acid-base equilibrium: Is MIT water safe to drink?   Demo. 
Determining pH of household items using a color indicator from cabbage leaves L22 Chemical and biological buffers L23 Acid-base titrations pH and blood: effects from vitamin B12 deficiency (PDF - 2.4 MB) L24 Balancing oxidation/reduction equations     L25 Electrochemical cells Oxidative metabolism of drugs (PDF) Demo. Oxidation of magnesium (resulting in a glowing block of dry ice) L26 Chemical and biological oxidation/reduction reactions Reduction of vitamin B12 in the body (PDF) Selected biology-related questions based on Lectures 21-26 (PDF) Answer key (PDF) L27 Transition metals and the treatment of lead poisoning 1) Metal chelation in the treatment of lead poisoning 2) Geometric isomers and drugs: i.e. the anti-cancer drug cisplatin L28 Crystal field theory     L29 Metals in biology Inspiration from metalloenzymes for the reduction of greenhouse gases (PDF - 1.3 MB) Activity. Toothpick models: gumdrop d-orbitals, jelly belly metals and ligands L30 Magnetism and spectrochemical theory Demo. Oscillating clock reaction L31 Rate laws Kinetics of glucose oxidation (energy production) in the body (PDF) Activity. Hershey kiss "experiment" on the oxidation of glucose L32 Nuclear chemistry and elementary reactions Medical applications of radioactive decay (technetium-99) (PDF) "Days of Our Halflives" poem L33 Reaction mechanism Reaction mechanism of ozone decomposition (PDF)   L34 Temperature and kinetics   Demo. Liquid nitrogen (glowsticks: slowing the chemiluminescent reaction) L35 Enzyme catalysis Enzymes as the catalysts of life, inhibitors (i.e. HIV protease inhibitors) (PDF)   L36 Biochemistry The methionine synthase case study (chemistry in solution!) (PDF) Selected biology-related questions based on Lectures 27-36 (PDF) Answer key (PDF)
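For the buffer and titration material in L22-L23, the underlying arithmetic is the Henderson-Hasselbalch equation, pH = pKa + log10([base]/[acid]). A quick numerical illustration (the acetate values are hypothetical examples, not taken from the course materials):

```python
import math

# Henderson-Hasselbalch: pH = pKa + log10([conjugate base] / [weak acid]).
def buffer_pH(pKa, conc_base, conc_acid):
    return pKa + math.log10(conc_base / conc_acid)

# Illustrative acetate buffer (pKa ~ 4.76); concentrations in mol/L
print(buffer_pH(4.76, 0.10, 0.10))   # equal parts acid and base -> pH = pKa
print(buffer_pH(4.76, 0.20, 0.10))   # excess base pushes pH above pKa
```

With equal acid and base concentrations the logarithm vanishes and the pH equals the pKa, which is why buffers are chosen with pKa near the target pH.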
Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)
SIGMA 4 (2008), 014, 7 pages      arXiv:0802.0482

Symmetry Transformation in Extended Phase Space: the Harmonic Oscillator in the Husimi Representation
Samira Bahrami a and Sadolah Nasiri b
a) Department of Physics, Zanjan University, Zanjan, Iran
b) Institute for Advanced Studies in Basic Sciences, Iran
Received October 08, 2007, in final form January 23, 2008; Published online February 04, 2008

In a previous work, the concept of quantum potential was generalized to extended phase space (EPS) for a particle in linear and harmonic potentials. It was shown there that, in contrast to Schrödinger quantum mechanics, an appropriate extended canonical transformation yields the Wigner representation of phase-space quantum mechanics, in which the quantum potential is removed from the dynamical equation. In other words, one retains the form invariance of the ordinary Hamilton-Jacobi equation in this representation. The situation is mathematically similar to the disappearance of the centrifugal potential in going from spherical to Cartesian coordinates. Here we show that the Husimi representation is another possible representation in which the quantum potential for the harmonic potential disappears and the modified Hamilton-Jacobi equation reduces to the familiar classical form. This happens when the parameter in the Husimi transformation assumes a specific value corresponding to the Q-function.

Key words: Hamilton-Jacobi equation; quantum potential; Husimi function; extended phase space.

pdf (190 kb)   ps (136 kb)   tex (10 kb)

1. Bohm D., Hiley B.J., Unbroken quantum realism, from microscopic to macroscopic levels, Phys. Rev. Lett. 55 (1985), 2511-2514.
2. Holland P.R., The quantum theory of motion, Cambridge University Press, 1993, 68-69.
3. Takabayashi T., The formulation of quantum mechanics in terms of ensemble in phase space, Progr. Theoret. Phys. 11 (1954), 341-373.
4. 
Muga J.G., Sala R., Snider R.F., Comparison of classical and quantum evolution of phase space distribution functions, Phys. Scripta 47 (1993), 732-739.
5. Brown M.R., The quantum potential: the breakdown of classical symplectic symmetry and the energy of localization and dispersion, quant-ph/9703007.
6. Holland P.R., Quantum back-reaction and the particle law of motion, J. Phys. A: Math. Gen. 39 (2006), 559-564.
7. Shojai F., Shojai A., Constraints algebra and equation of motion in Bohmian interpretation of quantum gravity, Classical Quantum Gravity 21 (2004), 1-9, gr-qc/0409035.
8. Carroll R., Fluctuations, gravity, and the quantum potential, gr-qc/0501045.
9. Nasiri S., Quantum potential and symmetries in extended phase space, SIGMA 2 (2006), 062, 12 pages, quant-ph/0511125.
10. Carroll R., Some fundamental aspects of a quantum potential, quant-ph/0506075.
11. Sobouti Y., Nasiri S., A phase space formulation of quantum state functions, Internat. J. Modern Phys. B 7 (1993), 3255-3272.
12. Nasiri S., Sobouti Y., Taati F., Phase space quantum mechanics - direct, J. Math. Phys. 47 (2006), 092106, 15 pages, quant-ph/0605129.
13. Nasiri S., Khademi S., Bahrami S., Taati F., Generalized distribution functions in extended phase space, in Proceedings QST4, Editor V.K. Dobrev, Heron Press Sofia, 2006, Vol. 2, 820-826.
14. Wigner E., On the quantum correction for thermodynamic equilibrium, Phys. Rev. 40 (1932), 749-759.
15. Lee H.W., Theory and application of the quantum phase space distribution functions, Phys. Rep. 259 (1995), 147-211.
16. de Gosson M., Symplectically covariant Schrödinger equation in phase space, J. Phys. A: Math. Gen. 38 (2005), 9263-9287, math-ph/0505073.
17. Jannussis A., Patargias N., Leodaris A., Phillippakis T., Streclas A., Papatheos V., Some remarks on the nonnegative quantum mechanical distribution functions, Preprint, Department of Theoretical Physics, University of Patras, 1982.
18. 
Husimi K., Some formal properties of the density matrix, Proc. Phys.-Math. Soc. Japan 22 (1940), 264-314.
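As background on the Q-function that the abstract's parameter value corresponds to: the Husimi Q-function is the coherent-state overlap $Q(\alpha)=|\langle\alpha|\psi\rangle|^2/\pi$, and for the harmonic-oscillator ground state it is a simple Gaussian on phase space. A small numerical sketch (assuming $\hbar=\omega=m=1$ and the convention $\alpha=(q+ip)/\sqrt2$):

```python
import numpy as np

# Husimi Q-function of the harmonic-oscillator ground state (hbar = omega = m = 1).
# For |0>, <alpha|0> = exp(-|alpha|^2 / 2), so Q(alpha) = exp(-|alpha|^2) / pi.
def Q_ground(alpha):
    return np.exp(-np.abs(alpha)**2) / np.pi

# Phase-space grid, with the convention alpha = (q + i p) / sqrt(2)
q, p = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))
alpha = (q + 1j * p) / np.sqrt(2)
Q = Q_ground(alpha)

# Unlike the Wigner function, Q is non-negative everywhere, and it is
# normalised with respect to d^2(alpha) = dq dp / 2
dq = q[0, 1] - q[0, 0]
dp = p[1, 0] - p[0, 0]
norm = Q.sum() * dq * dp / 2
print(Q.min() >= 0, norm)
```

The non-negativity of Q is what makes it a genuine phase-space density, in contrast to the Wigner function mentioned in the abstract, which can go negative.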
2014-2016 CATALOGUE

Historically, the discipline of physics is identified as the branch of science that seeks to discover, unify, and apply the most basic laws of nature. Our curriculum introduces students to its principal subfields—electromagnetism, mechanics, thermal physics, optics, and quantum mechanics—and provides the most extensive training in mathematical and analytical methods of any of the sciences. Since this is the foundation upon which all other sciences and engineering are based, the study of physics provides a strong background for students who plan careers in areas such as physics, astrophysics, astronomy, geophysics, oceanography, meteorology, engineering, operations research, teaching, medicine, and law. Because physics is interested in first causes, it has a strong connection to philosophy as well. Increasingly in the modern era, physicists have turned their attention to areas in which their analytical and experimental skills are particularly demanded, exploring such things as nanotechnology, controlled nuclear fusion, the evolution of stars and galaxies, the origins of the universe, the properties of matter at ultra-low temperatures, the creation and characterization of new materials for laser and electronics technologies, biophysics and biomedical engineering, and even the world of finance. PHYS 150 and 160 have a calculus co-requisite and are intended for students majoring in the natural sciences or other students with a strong interest in science. Courses with numbers lower than 150 are particularly suitable for students not majoring in a physical science. Prerequisites for any course may be waived at the discretion of the instructor. Grades in courses comprising the major or the minor must average C- or better. A joint-degree engineering program is offered with Columbia University and The Thayer School of Engineering at Dartmouth. 
Upon completion of three years at Hobart and William Smith Colleges and two years at an engineering school, a student will receive a B.S. or B.E. in engineering from the engineering school and either a B.A. or a B.S. from Hobart or William Smith. Majoring in physics here provides the best preparation for further work in most engineering fields. See “Joint Degree Programs” elsewhere in the Catalogue for details. disciplinary, 12 courses PHYS 150, PHYS 160, PHYS 270, PHYS 285, PHYS 383, MATH 130 Calculus I, MATH 131 Calculus II, and five additional courses in physics at the 200- or 300-level. A course at the 200- or 300-level from another science division department may be substituted for a physics course with the approval of the department chair. disciplinary, 16 courses All of the requirements for the B.A. physics major, plus four additional courses in the sciences. Only those courses which count toward the major in the departments that offer them satisfy this requirement. disciplinary, 6 courses PHYS 150, PHYS 160, PHYS 270, and three additional physics courses. PHYS 110 “Beam Me Up, Einstein”: Physics Through Star Trek Can you really learn physics watching Star Trek? This course says “yes.” Students consider such Star Trek staples as warp drive, cloaking devices, holodecks, and time travel and learn what the principles of physics tell us about these possibilities—and what these possibilities would mean for the principles of physics. Anyone who has ever enjoyed a science fiction book or movie will find that using Star Trek offers an excellent context for learning about a variety of topics in physics, including black holes, antimatter, lasers, and other exotic phenomena. (Offered periodically) PHYS 112 Introduction to Astronomy This course offers a survey of the celestial universe, including planets, stars, galaxies, and assorted other celestial objects which are not yet well understood. 
The Big Bang cosmological model is thoroughly explored, as are the various observational techniques employed to collect astronomical data. (Offered occasionally) PHYS 113 The Solar System and Extra-solar Planets This course is designed to help the student understand the nature and process of science by studying the subject of astronomy. Specifically, this course provides an introduction to the general physical and observational principles necessary to understand the celestial bodies. We will specifically discuss what is known about our Solar System, including the Sun, the rocky and gaseous planets and their moons, and the minor planets and asteroids. The course will culminate in an overview of the discovery and characterization of planets around other stars where we will begin to put our Solar System in the context of other recently discovered exo-solar systems. (Offered annually) PHYS 114 Stars, Galaxies and The Universe  This course provides an introduction to the general physical and observational principles necessary to understand stars, galaxies and the Universe as a whole. We will discuss light, optics and telescopes, properties of stars, black holes, galaxies, and cosmology. The course will culminate in a discussion of the formation of the Universe starting with the Big Bang. (Offered annually) PHYS 115 Astrobiology Astrobiology is the scientific study of the origin and evolution of life in the Universe.  This course examines the origin of life on Earth and the possibility of life elsewhere in the Universe. We will explore the fundamental questions: What is life? How did it arise on Earth?  How does the presence of life change the Earth? How do we know about the early history of life on Earth? What is the potential for life in our solar system? Where else in the Universe might life be found? How do we search for life elsewhere? What might life elsewhere look like? What do we know about the newly discovered habitable planets around other stars? 
(Arens, Hebb, Kendrick, offered annually) PHYS 120 Physics of Dance The course is an exploration of the connection between the art of dance and the science of motion with both lecture/discussion sessions and movement laboratories. Topics include: velocity, acceleration, mass, force, energy, momentum, torque, equilibrium, rotation and angular momentum. "Dance it-Measure it" is the movement laboratory which combines personal experience of movement with scientific measurements and analysis. This is a science lab, not a dance technique course. (Offered periodically) PHYS 140 Principles of Physics This is a one-semester survey course in physics with laboratory, which makes use of algebra and trigonometry, but not calculus. It is designed particularly for Architectural Studies students, for whom it is a required course. It also provides a serious, problem-solving introduction to physics for students not wishing to learn calculus. The following topics are included: mechanics (particularly statics, stress, and strain), sound, and heat. This course satisfies the physics prerequisite for PHYS 160. (Offered annually) PHYS 150 Introductory Physics I This is a calculus-based first course in mechanics and waves with laboratory. Prerequisite: MATH 130 Calculus I (may be taken concurrently). (Offered annually) PHYS 160 Introductory Physics II This course offers a calculus-based first course in electromagnetism and optics with laboratory. Prerequisites: PHYS 150 and MATH 131 Calculus II (may be taken concurrently). (Offered annually) PHYS 210 Introduction to Astrophysics This first course in Astrophysics will add the foundational rigors of physics to the observations of astronomy to generate a more thorough understanding of our universe. 
Topics for the course include stellar dynamics and evolution (star formation, fusion and nucleosynthesis, hydrostatic equilibrium, post-main-sequence evolution, supernovae, white dwarfs, compact objects), galactic formation and evolution, active galaxies, galactic clusters, dark matter, the Big Bang and the evolution of the Universe, and dark energy. PHYS 240 Electronics This course offers a brief introduction to AC circuit theory, followed by consideration of diode and transistor characteristics, simple amplifier and oscillator circuits, operational amplifiers, and IC digital electronics. With laboratory. Prerequisite: PHYS 160. (Offered annually) PHYS 250 Green Energy: Understanding Sustainable Energy Production and Use The climate change crisis has spurred the need for and interest in sustainable energy technologies. In this course we will study the major green energy technologies: efficiency, wind, solar (photovoltaic and thermal), geothermal, current/wave energy, smart grids and decentralized production. The class will study each technology from the basic principles through current research. In parallel, students will work together on a green energy project. Project ideas include: developing a green energy production project on campus, or a campus/Geneva self-sufficiency study. PHYS 260 Waves and Optics Beginning with simple harmonic motion, the course covers coupled oscillators and mechanical waves. Then it explores the Fourier decomposition of oscillatory motion. Next we explore electromagnetic waves and phenomena of scattering, reflection, interference, and diffraction. We conclude with an exploration of modern optical techniques, such as waveguides, interferometers, and stable cavities. PHYS 262 Applied Photonics This course surveys new optical technologies widely used to control light with an emphasis on generation, detection, and imaging. 
These include new techniques in microscopy relevant to biological applications and nanotechnology, applications of lasers in micromanipulation, optical trapping, quantum-dots, and fluorescence imaging of cells and single molecules. Prerequisites: PHYS 160 and MATH 131 Calculus II or permission of the instructor. (Offered occasionally) PHYS 270 Modern Physics This course, which includes a laboratory component, provides a comprehensive introduction to 20th century physics. Topics are drawn from the following: special relativity, early quantum views of matter and light, the Schrödinger wave equation and its applications, atomic physics, masers and lasers, radioactivity and nuclear physics, the band theory of solids, and elementary particles. With laboratory. Prerequisites: PHYS 160 and MATH 131 Calculus II. (Offered annually) PHYS 285 Math Methods This course covers a number of mathematical topics that are widely used by students of science and engineering. It is intended particularly to prepare physics majors for the mathematical demands of 300-level physics courses. Math and chemistry majors also find this course quite helpful. Techniques that are useful in physical science problems are stressed. Topics are generally drawn from: power series, complex variables, matrices and eigenvalues, multiple integrals, Fourier series, Laplace transforms, differential equations and boundary value problems, and vector calculus. Prerequisite: MATH 131 Calculus II. (Offered annually) PHYS 287 Computational Methods in Physics This course explores topics in computational methodologies and programming within physics.  Computers are a ubiquitous tool in physics data acquisition and analysis.  Each semester we will explore a set of topics within this field.  Topics may include the statistics of data analysis, techniques of linear and nonlinear fitting, frequency analysis, time-frequency analysis, signal and image processing.  
Technologies may include data acquisition systems, data analysis environments, and common scientific programming languages.  Prerequisite: PHYS 285. (Offered annually) PHYS 351 Mechanics Starting from the Newtonian viewpoint, this course develops mechanics in the Lagrangian and Hamiltonian formulations. Topics include Newton's laws, energy and momentum, potential functions, oscillations, central forces, dynamics of systems and conservation laws, rigid bodies, rotating coordinate systems, Lagrange's equations, and Hamiltonian mechanics. Advanced topics may include chaotic systems, collision theory, relativistic mechanics, phase space orbits, Liouville's theorem, and dynamics of elastic and dissipative materials.  Prerequisites: PHYS 160 and MATH 131 Calculus II. (Offered alternate years) PHYS 352 Quantum Mechanics This course develops quantum mechanics, primarily in the Schrödinger picture. Topics include the solutions of the Schrödinger equation for simple potentials, measurement theory and operator methods, angular momentum, quantum statistics, perturbation theory and other approximate methods. Applications to such systems as atoms, molecules, nuclei, and solids are considered. Prerequisite: PHYS 270. (Offered alternate years) PHYS 355 Classical and Quantum Information and Computing This course covers the intersection of physics with the study of information. There are two broad areas to this subject. One is the area of overlap with classical physics and the appearance of entropy in the study of computation. The other is the area of overlap with quantum physics, reflected in the explosive growth of the potentially revolutionary area of quantum computing. 
Topics will be drawn from Shannon’s theory of information; reversible and irreversible classical computation; the no-cloning theorem; EPR states and entanglement; Shor’s algorithm and other quantum algorithms; quantum error correction; quantum encryption; theoretical aspects of quantum computing; and physical models for quantum computing. Prerequisite: One 300-level course in Physics or Mathematics. (Offered alternate years) PHYS 361 Electricity and Magnetism This course develops the vector calculus treatment of electric and magnetic fields both in free space and in dielectric and magnetic materials. Topics include vector calculus, electrostatics, Laplace’s equation, dielectrics, magnetostatics, scalar and vector potentials, electrodynamics, and Maxwell’s equations. The course culminates in a treatment of electromagnetic waves. Advanced topics may include conservation laws in electrodynamics, electromagnetic waves in matter, absorption and dispersion, wave guides, relativistic electrodynamics, and Liénard-Wiechert potentials. Prerequisites: PHYS 160 and MATH 131 Calculus II. (Offered alternate years) PHYS 362 Optics A survey of optics that includes geometrical optics, the usual topics of physical optics such as interference and diffraction, and lasers. Prerequisites: PHYS 160 and MATH 131 Calculus II. (Offered alternate years) PHYS 370 Relativity, Spacetime, and Gravity This course covers the ideas and some of the consequences of Einstein’s special and general theories of relativity. Topics include postulates of special relativity, paradoxes in special relativity, geometry of Minkowski space, geometry of curved spacetime, geodesics, exact solutions of the field equations, tests of general relativity, gravitational waves, black holes, and cosmology. Prerequisites: PHYS 270 and PHYS 285. 
(Offered alternate years) PHYS 375 Thermal Physics This course reviews the laws of thermodynamics, their basis in statistical mechanics, and their application to systems of physical interest. Typical applications include magnetism, ideal gases, blackbody radiation, Bose-Einstein condensation, chemical and nuclear reactions, neutron stars, black holes, and phase transitions. Prerequisites: PHYS 160 and MATH 131 Calculus II. (Offered alternate years) PHYS 380 Contemporary Inquiries in Physics This course examines current major lines of development in the understanding of physics. Typical examples include symmetries, superconductivity, superstrings and other attempts at unification, phase transitions, cosmology and the early universe, and non-linear systems and chaotic dynamics. Prerequisites: PHYS 270 and two 300-level physics courses or permission of the instructor. (Offered alternate years) PHYS 381 Topics in Laboratory Physics I This laboratory course offers a series of experiments for students in 200- or 300-level physics courses. Whenever possible, the experiments assigned are related to the field of physics being studied in the corresponding 200- or 300-level courses. PHYS 381 and PHYS 382 together may be substituted for PHYS 383. (0.5 credit; offered occasionally) PHYS 382 Topics in Laboratory Physics II This laboratory course offers a series of experiments for students in 200- or 300-level physics courses similar to PHYS 381 but at a higher level. PHYS 381 and PHYS 382 together may be substituted for PHYS 383. (0.5 credit; offered occasionally) PHYS 383 Advanced Physics Laboratory Advanced laboratory is the capstone laboratory experience in which students perform a wide variety of experiments that cover the major concepts in Modern Physics and Quantum Mechanics including wave-particle duality, NMR, particle decay, time dilation, particle scattering and absorption, and laser dynamics and spectroscopy. 
(Offered annually) PHYS 450 Independent Study PHYS 495 Honors Hobart and William Smith Colleges, Geneva, NY 14456 (315) 781-3000
Let's say we have a finite square potential well like the one below:

finite well

This well has a wavefunction $\psi$ which we piece together from $\psi_I$, $\psi_{II}$ and $\psi_{III}$. I have been playing around and got expressions for them, but they are not the same for the ODD and EVEN solutions, so let's do this only for the ODD ones. ODD solutions: $$ \boxed{\psi_{I}= Ae^{\mathcal{K} x}~~~~~~~~\psi_{II}= - \dfrac{A e^{-\mathcal{K}\tfrac{d}{2}}}{\sin\left( \mathcal{L} \tfrac{d}{2} \right)}\, \sin\left(\mathcal{L} x\right)~~~~~~~~ \psi_{III}=-Ae^{-\mathcal{K} x}} $$ When I applied the boundary conditions to these equations I got a transcendental equation: \begin{align} &\boxed{-\dfrac{\mathcal{L}}{\mathcal{K}} = \tan \left(\mathcal{L} \dfrac{d}{2}\right)} && \mathcal L \equiv \sqrt{\tfrac{2mW}{\hbar^2}} && \mathcal K \equiv \sqrt{\tfrac{2m(W_p-W)}{\hbar^2}} \\ &{\scriptsize\text{transcendental eq.} }\\ &\boxed{-\sqrt{\tfrac{1}{W_p/W-1}} = \tan\left(\tfrac{\sqrt{2mW}}{\hbar} \tfrac{d}{2} \right)}\\ &{\scriptsize\text{transcendental eq. - used to graph} }\end{align} The transcendental equation can be solved graphically by separately plotting the LHS and the RHS and checking where the crossings are. The $x$ coordinates of the crossings represent the possible energies $W$ in the finite potential well. So I can theoretically get values for the possible energies $W$, and once I have these I can calculate $\mathcal L$ and $\mathcal K$. But I am still missing the constant $A$. I would like to plot $\psi_I$, $\psi_{II}$ and $\psi_{III}$, but my constant $A$ is still missing. How can I plot these functions so that the normalisation is applied? After all of your suggestions I decided to work on the specific case of an electron with mass $m_e$ which I put in a finite well. 
So the constants I know are:

\begin{align} d &= 0.5\,\text{nm}\\ m_e &= 9.109\cdot 10^{-31}\,\text{kg}\\ W_p &= 25\,\text{eV}\\ \hbar &= 1.055 \cdot 10^{-34}\,\text{Js} {\scriptsize~\dots\text{well known constant}}\\ 1\,\text{eV} &= 1.602 \cdot 10^{-19}\,\text{J} {\scriptsize~\dots\text{needed to convert from eV to J}} \end{align}

I first used the constants above to draw the graph of the transcendental equation again, and I found 2 possible energies $W$ (I think those aren't quite accurate, but they should do). It looks like it does in any QM book (thanks to @Chris White): (figure: graphical solution of the transcendental equation)

Let's choose just one of the possible energies and try to plot $\psi_I$, $\psi_{II}$ and $\psi_{III}$. I choose the energy equal to $0.17\, W_p$ and calculate the constants $\mathcal K$ and $\mathcal L$:

\begin{align} \mathcal K &= 2.3325888\cdot 10^{10}\\ \mathcal L &= 1.5573994\cdot 10^{10}\\ \end{align}

Now that the picture above looks like the one in a book, I will try to use the constants $\mathcal K$, $\mathcal L$ and $\boxed{A \!=\! 1}$ (as @Chris White suggested) to plot $\psi_I$, $\psi_{II}$ and $\psi_{III}$. Even now the boundary conditions at $-\tfrac{d}{2}$ and $\tfrac{d}{2}$ are not met: (figure: piecewise wavefunctions failing to match at the well edges)

I did calculate my constants quite accurately, but I really can't read the energies (the graphical solutions of the first graph) very accurately. Does anyone have any suggestions on how to meet the boundary conditions?
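The graphical reading of $0.17\,W_p$ can be cross-checked numerically. Below is a standalone Python sketch (mine, not from the thread) that bisects the odd-state condition using the constants above; the bracket $(0.07\,W_p,\ 0.23\,W_p)$ is chosen to stay on one continuous branch of the tangent:

```python
import math

# Constants quoted in the question (SI units)
hbar = 1.055e-34      # J s
me   = 9.109e-31      # kg (electron mass)
eV   = 1.602e-19      # J
d    = 0.5e-9         # m  (well width)
Wp   = 25 * eV        # J  (well depth)

def f(W):
    """Odd-state transcendental equation written as f(W) = 0:
       -sqrt(1/(Wp/W - 1)) - tan(L*d/2), with L = sqrt(2*m*W)/hbar."""
    L = math.sqrt(2 * me * W) / hbar
    return -math.sqrt(1.0 / (Wp / W - 1.0)) - math.tan(L * d / 2)

# Bisect inside the first branch where tan is continuous
# (L*d/2 between pi/2 and pi, i.e. roughly 0.07*Wp < W < 0.23*Wp).
a, b = 0.07 * Wp, 0.23 * Wp           # f(a) > 0, f(b) < 0
for _ in range(100):
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
W = 0.5 * (a + b)

L = math.sqrt(2 * me * W) / hbar
K = math.sqrt(2 * me * (Wp - W)) / hbar
print("W/Wp = %.4f, L = %.4e, K = %.4e" % (W / Wp, L, K))
```

With these constants the root lands near $0.179\,W_p$, and the resulting $\mathcal L$ comes out noticeably smaller than the value quoted above, which is consistent with the suspicion later in the thread that the graph readings were not accurate.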
Here is the GNUPLOT script used to draw the 2nd graph:

set terminal epslatex color colortext size 9cm,5cm
set size 1.5,1.0
set output "potencialna_jama_6.tex"
set style line 1 linetype 1 linewidth 3 linecolor rgb "#FF0055"
set style line 2 linetype 2 linewidth 1 linecolor rgb "#FF0055"
set style line 3 linetype 1 linewidth 3 linecolor rgb "#2C397D"
set style line 4 linetype 2 linewidth 1 linecolor rgb "#2C397D"
set style line 5 linetype 1 linewidth 3 linecolor rgb "#793715"
set style line 6 linetype 2 linewidth 1 linecolor rgb "#793715"
set style line 7 linetype 1 linewidth 3 linecolor rgb "#b1b1b1"
set style line 8 linetype 3 linewidth 1 linecolor rgb "#b1b1b1"
set grid
set samples 7000
set key at graph .70, 0.4
set key samplen 2
set key spacing 0.8
m = 9.109*10**(-31)
d = 0.5*10**(-9)
U = 25 * 1.602*10**(-19)
h = 1.055*10**(-34)
K = 2.3325888*10**10
L = 1.5573994*10**10
A = 1
f(x) = A*exp(K*x)
g(x) = -( A*exp(-L*(d/2)) )/( sin(L*(d/2)) )*sin(L*x)
h(x) = -A*exp(-K*x)
set xrange [-d:d]
set yrange [-8*10**(-2):8*10**(-2)]
set xtics ("$0$" 0, "$\\frac{d}{2}$" (d/2), "$-\\frac{d}{2}$" -(d/2))
set ytics ("$0$" 0)
set xlabel "$x$"
plot [-1.5*d:1.5*d] f(x) ls 1 title "$\\psi_{I}$", g(x) ls 3 title "$\\psi_{II}$", h(x) ls 5 title "$\\psi_{III}$"

Your values $d=1$, $\mathcal{L} = 1.5$ and $\mathcal{K} = 2$ don't actually satisfy the boundary condition! You need to choose the values for a solution. – Michael Brown Apr 2 '13 at 7:27

So do I have to first get the allowed energies $W$ from the graphical solution of the transcendental equation, use them to find the possible $\mathcal L$ and $\mathcal{K}$, and finally calculate $A$ with normalisation? Then finally plot the equation? – 71GA Apr 2 '13 at 7:38

Yes. The boundary conditions on $\psi$ can only be satisfied for values of $W$ that satisfy the quantization condition (the transcendental equation). – Michael Brown Apr 2 '13 at 8:52

Indeed.
The steps are: (1) choose any $d$; (2) find a pair $(\mathcal{L},\mathcal{K})$ that satisfies the transcendental equation; (3) choose any $A$ (perhaps via a normalization condition); (4) plot. – Chris White Apr 2 '13 at 15:56

You have the wrong sign inside the square root in $\mathcal{K}$. Make that change, and you will find exactly 2 bound solutions ($W<W_p$). As it stands currently, the only real solutions for $(\mathcal{L},\mathcal{K})$ correspond to $W>W_p$, but in this case you know the solution in regions I and III will be sines and cosines rather than exponentials, and such solutions cannot be accommodated in the form currently written. – Chris White Apr 2 '13 at 23:28

Wavefunctions are found by solving the time-independent Schrödinger equation, which is simply an eigenvalue problem for a well-behaved operator: $$ \hat{H} \psi = E \psi. $$ As such, we expect the solutions to be determined only up to scaling. Clearly if $\psi_n$ is a solution with eigenvalue $E_n$, then $$ \hat{H} (A \psi_n) = A \hat{H} \psi_n = A E_n \psi_n = E_n (A \psi_n) $$ for any constant $A$, so $\psi$ can always be rescaled. In this sense, there is no physical meaning associated with $A$. To actually choose a value, for instance for plotting, you need some sort of normalization scheme. For square-integrable functions, we often enforce $$ \int \psi^* \psi = 1 $$ in order to bring the wavefunction more in line with the traditional definition of probability (which says the sum of probabilities is $1$, also an arbitrary constant). In your case, $$ \psi(x) = \begin{cases} \psi_\mathrm{I}(x), & x < -\frac{d}{2} \\ \psi_\mathrm{II}(x), & \lvert x \rvert < \frac{d}{2} \\ \psi_\mathrm{III}(x), & x > \frac{d}{2}.
\end{cases} $$ Thus choose $A$ such that $$ \int_{-\infty}^\infty \psi(x)^* \psi(x) \ \mathrm{d}x \equiv \int_{-\infty}^{-d/2} \psi_\mathrm{I}(x)^* \psi_\mathrm{I}(x) \ \mathrm{d}x + \int_{-d/2}^{d/2} \psi_\mathrm{II}(x)^* \psi_\mathrm{II}(x) \ \mathrm{d}x + \int_{d/2}^\infty \psi_\mathrm{III}(x)^* \psi_\mathrm{III}(x) \ \mathrm{d}x $$ is unity. If you happen to be in the regime $E > W_p$, then $\mathcal{K}$ will be imaginary, $\psi_\mathrm{I}$ and $\psi_\mathrm{III}$ will be oscillatory rather than decaying, and the first and third of those integrals will not converge. You could pick an $A$ that conforms to some sort of "normalizing to a delta function," but there are many different variations on this, especially for a split-up domain like this. In that case I would recommend picking an $A$, if you really have to do it, based on some other criterion, such as $\max(\lvert \psi \rvert) = 1$ or something.

I tried to plot this using some random values in gnuplot. It came out like nothing I read about in Griffiths, Beiser... - check my edit. – 71GA Apr 2 '13 at 6:47

That large normalisation integral looks like a challenge :) – 71GA Apr 2 '13 at 22:51

I found the cause myself. There was a mistake in the GNUPLOT script. The line

g(x) = -( A*exp(-L*(d/2)) )/( sin(L*(d/2)) )*sin(L*x)

should have $\mathcal K$ in place of the first L. This was the first mistake, but after I fixed it my graphs were still sloppy, so I redid all the readings of the energies from the graphical solutions of the transcendental equation and recalculated $\mathcal K$ and $\mathcal L$. As it turns out, my graphs now come out perfectly! Here are the images: (figures: corrected wavefunction plots with the boundary conditions met)

Thank you all for the help. The reward goes to @Chris White.

I forgot to mention that this is for $A=1$. I should do the normalisation, but it is a tricky thing and I will leave it for now. – 71GA Apr 5 '13 at 9:24
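The piecewise normalisation integral in the accepted answer can be done in closed form for the odd states. Here is a sketch (my own construction, not from the thread) that uses a $(\mathcal L,\mathcal K)$ pair I computed so that it actually satisfies the transcendental equation, since the pair quoted in the question does not quite satisfy it:

```python
import math

# Parameters for the first odd state. NOTE: this (L, K) pair is an
# assumption of this sketch, chosen to solve -L/K = tan(L*d/2);
# the pair quoted in the question does not (see the comments).
d = 0.5e-9            # m
L = 1.0820e10         # 1/m
K = 2.3205e10         # 1/m

B = -math.exp(-K * d / 2) / math.sin(L * d / 2)   # psi_II prefactor for A = 1

def psi(x):
    """Unnormalised odd-state wavefunction (A = 1)."""
    if x < -d / 2:
        return math.exp(K * x)
    elif x <= d / 2:
        return B * math.sin(L * x)
    return -math.exp(-K * x)

# Closed-form norm: two exponential tails plus the sin^2 integral inside.
tails = 2 * math.exp(-K * d) / (2 * K)
inner = B**2 * (d / 2 - math.sin(L * d) / (2 * L))
A = 1.0 / math.sqrt(tails + inner)

# Brute-force cross-check of the normalisation by trapezoid integration.
lo, hi, n = -d / 2 - 12 / K, d / 2 + 12 / K, 100000
h = (hi - lo) / n
num = sum((A * psi(lo + i * h))**2 for i in range(n + 1)) * h

# Slope continuity at -d/2 holds only because (L, K) solve the
# transcendental equation; value continuity holds by construction of B.
dpsi_I  = K * math.exp(-K * d / 2)
dpsi_II = B * L * math.cos(L * d / 2)
print("A = %.4e, numeric norm = %.6f" % (A, num))
print("slopes at -d/2: %.4e vs %.4e" % (dpsi_I, dpsi_II))
```

The matching slopes are exactly the point of the quantization condition: with the inconsistent pair from the question, the value continuity would still hold (it is built into $B$), but the slopes would not.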
On Particles Mass and the Universons Hypothesis

Jacques Consiglio

In the logic of the Universons assumption, we deduce the nature of the de Broglie wave and a periodic mass variation for particles. We verify consistency with quantum mechanics, in particular the Schrödinger equation. We analyze the hypothesis that elementary particle mass is momentum circulating at light speed. We discover resonance rules acting within elementary particles, leading to a formula governing the quantization of masses. Applying this formula to the electrons, muons, tauons and quarks, we find resonances that match current measurements. We deduce the energy of unknown massless sub-particles at the core of electrons, muons, and tauons. Geometrical constraints inherent to our formula lead to a possible explanation of why there are only three generations of particles. Based on particle geometry, we verify the consistency of the deduced quark structure with QCD and raise the hypothesis that color charge is magnetic. We verify consistency with QCD symmetry and find that P and CP symmetry are broken by the interaction, in agreement with knowledge of the weak force. Our logic leads us to re-interpret the Dirac condition on the magnetic monopole charge, explain why the detection of magnetic monopoles is so difficult and, when they are detected, why the magnetic charge can depart from the Dirac prediction. We deduce a possible root cause of gravitation, resulting in the Schwarzschild metric and the probable non-existence of dark matter.

Full Text: PDF Supp. DOI: http://dx.doi.org/10.5539/apr.v4n2p144

Copyright © Canadian Center of Science and Education
Quantum Microbiology

J. T. Trevors and L. Masson, Quantum Microbiology, Curr. Issues Mol. Biol. 13: 43-50, 2011

A couple of sentences from the abstract to show what the paper is about. "During his famous 1943 lecture series at Trinity College Dublin, the renown physicist Erwin Schrödinger discussed the failure and challenges of interpreting life by classical physics alone and that a new approach, rooted in Quantum principles, must be involved." "In this article we explore the role of quantum events in microbial processes and endeavor to show that after nearly 67 years, Schrödinger was prophetic and visionary in his view of quantum theory and its connection with some of the fundamental mechanisms of life."

I should say that the paper is written quite well and I have enjoyed reading it. The paper urges us to employ quantum mechanics in microbiology, but I am not sure I understand what the authors mean. I am a chemist, and for me a cell is, after all, a small chemical reactor, so let me look at this from the viewpoint of quantum chemistry. More than forty years ago, the professor who taught quantum chemistry at our department of chemistry used to tell us that chemistry is a part of physics: we just have to solve the Schrödinger equation, that's it, all the answers are already there. Yet even now the situation is far from that ideal goal. Chemists learn quantum chemistry, no doubt, but quantum chemistry is just a part of chemistry. By the way, the best way to check whether chemistry is a part of physics is to take a physicist and challenge him/her to develop a new drug or create a new material. Guess what happens. As for the paper, in my view it is useful to look at what molecular simulation is. Ten years ago I took part in developing a course "Molecular Simulation for MST Engineers". For the last ten years, computer power has increased significantly, but I believe that the situation expressed on my slides has not changed that dramatically.
Molecular simulation starts with the Schrödinger equation, either transient or stationary, in the adiabatic approximation (movements of electrons are separated from movements of nuclei). So far, so good. The problem, however, is that there is no good way to solve it directly from first principles even for relatively small molecules, even in the adiabatic approximation. In real life one finds an eclectic mixture of methods, from semi-physical to semi-empirical, that solve not the Schrödinger equation but some simplified form of it. These methods scale much better than, for example, Hartree-Fock + Configuration Interaction, but then the question is how we know how good these methods are. During a seminar presenting a popular semi-empirical software package, there was a good statement: when you use an experimental apparatus, you first have to calibrate it. Similarly, when you use your quantum chemistry code, you first must calibrate it. Chemists are very good at this, but this is exactly the reason why physicists hate chemistry as well as chemists. Even semi-empirical quantum methods scale up to some level only, and beyond that one cannot use them either. Chemists then continue without delay with molecular mechanics, that is, with empirical classical forces between atoms treated as classical balls. On the other hand, the Schrödinger equation is not enough in chemistry, nor in microbiology: the Schrödinger equation is for a temperature of zero kelvin, and there is no life there. To treat systems at room temperature, one needs to include molecular dynamics or statistics by means of the Monte Carlo method. The majority of simulations in molecular dynamics or Monte Carlo are done at the molecular mechanics level, but even here there are limits. I am not sure one can imagine molecular dynamics of a whole cell even at the molecular mechanics level.
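The scale problem can be made concrete with trivial arithmetic: molecular dynamics needs femtosecond timesteps, so one second of simulated time is about 10^15 integration steps. A back-of-the-envelope sketch (the steps-per-day throughput below is purely an illustrative assumption, not a measured number):

```python
# Back-of-the-envelope cost of 1 s of MD at a 1 fs timestep.
timestep  = 1e-15        # s, typical MD integration step
simulated = 1.0          # s of physical time we want to cover
steps = simulated / timestep
print("steps needed: %.0e" % steps)

# Assume (illustrative guess) 1e8 steps per day of wall time
# for a solvated biomolecule on one machine:
steps_per_day = 1e8
print("wall-clock days: %.0e" % (steps / steps_per_day))
```

Whatever the real throughput is on a given machine, the ratio stays astronomically large, which is the point of the remark that follows.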
Then try, for example, to run molecular dynamics for DNA at the molecular mechanics level for 1 s of simulated time and see how much computing time is needed, since you must take small timesteps of a few femtoseconds. The hot buzzword nowadays is "multiscale method", but there are problems, problems, and problems. In my view it would be very nice to extend molecular simulation to biological objects, but the question is how. Well, biologists as newcomers may do it better than chemists. Good luck.

P.S. Quantum microbiology happens to be a trademark of Accelr8.

P.P.S. On LinkedIn Vladimir Teif has posted a link to Vasily V Ogryzko, Erwin Schroedinger, Francis Crick and epigenetic stability, Biology Direct 2008, 3:15.

3 responses to "Quantum Microbiology"

1. As long as my essay in 'Biology Direct' was brought up, I would like to direct your attention to this archive posting, where I suggest what kind of approximations to the 'from the first principles' quantum description we can use in order to describe intracellular processes: Obviously, there are huge technical difficulties ahead, but the potential implications for biological organization and evolution could make these suggestions worthy of consideration.

2. Thanks for the link. When I have time, I will try to look at your paper.

3. Vasiliy, I have written a small text on your paper: Well, it is in Russian, but I guess that this is not a problem for you.
Magnetic field

A magnetic field is a field of force produced by moving electric charges, by electric fields that vary in time, and by the 'intrinsic' magnetic field of elementary particles associated with the spin of the particle. There are two separate but closely related fields to which the name 'magnetic field' can refer: a magnetic B field and a magnetic H field. The magnetic field at any given point is specified by both a direction and a magnitude (or strength); as such it is a vector field. The magnetic field is most commonly defined in terms of the Lorentz force it exerts on moving electric charges. The relationship between the magnetic and electric fields, and the currents and charges that create them, is described by the set of Maxwell's equations. In special relativity, electric and magnetic fields are two interrelated aspects of a single object, called the electromagnetic field tensor; the aspect of the electromagnetic field that is seen as a magnetic field is dependent on the reference frame of the observer. In quantum physics, the electromagnetic field is quantized and electromagnetic interactions result from the exchange of photons. Magnetic fields have had many uses in ancient and modern society. The Earth produces its own magnetic field, which is important in navigation since the north pole of a compass points toward the south pole of Earth's magnetic field, located near the Earth's geographical north. Rotating magnetic fields are utilized in both electric motors and generators. Magnetic forces give information about the charge carriers in a material through the Hall effect. The interaction of magnetic fields in electric devices such as transformers is studied in the discipline of magnetic circuits.
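The Lorentz-force definition mentioned above is simple to state concretely as F = q(E + v x B). A minimal numerical sketch (the charge, velocity and field values are arbitrary illustrations):

```python
# Lorentz force F = q (E + v x B) on a moving charge.
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = 1.602e-19                 # C, one elementary charge
v = (1.0e5, 0.0, 0.0)         # m/s, moving along +x
B = (0.0, 0.0, 1.0)           # T, field along +z
E = (0.0, 0.0, 0.0)           # no electric field

vxB = cross(v, B)
F = tuple(q * (E[i] + vxB[i]) for i in range(3))
print(F)   # force points along -y: perpendicular to both v and B
```

The force being perpendicular to the velocity is why a static magnetic field bends trajectories but does no work on the charge.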
Oskar Klein (1894-1977)

Oskar Klein was the youngest son of Sweden's first rabbi, Gottlieb Klein, who was originally from the Southern Carpathians. Gottlieb Klein received his doctorate from Heidelberg and moved to Sweden in 1883. He evidently instilled an interest in learning in his young son, as Oskar became quite fond of biology at an early age. This interest changed to chemistry around the age of 15, and soon after, in 1910, Svante Arrhenius, at what seems to have been the behest of Gottlieb, invited Oskar to work in his laboratory at the Nobel Institute. Here he took up an interest in solubility, and he published his first paper in 1912, on the solubility of zinc hydroxide in alkalis. This was the very same year that he finished his secondary education. He waited, however, until 1914 to take the university exam. Arrhenius wanted to send Klein to work with Jean-Baptiste Perrin in his laboratory at the University of Paris, but the plan was foiled by the outbreak of World War I. Klein found himself caught up in the tempest and saw military service in 1915 and 1916. After his service concluded, but with the war still raging, he returned to work with Arrhenius. Their work now centred on studying the dielectric constants of alcohols in various solvents. During this particular stay in Stockholm, he met Hendrik A Kramers, who, at the time (1917), was a student of Niels Bohr in Copenhagen. Kramers and Klein met several times during the next few years, both in Stockholm and in Copenhagen, which was to be Klein's next destination. In 1917 Klein received a fellowship to study abroad and, subsequently, arrived in Copenhagen in 1918. Over the course of the next two years he would travel between Stockholm and Copenhagen performing work for both Bohr and Arrhenius, spending the summer of 1919 with Kramers in Copenhagen, and finally returning to Stockholm in 1920. But that was not to be the end of his Copenhagen experience.
In fact, it was merely the beginning. Bohr traveled to Stockholm in 1920 to visit Klein and convinced him to return to Copenhagen once more to work at Bohr's Institute. Klein agreed and began what would prove to be quite a fruitful relationship that eventually would lead him to his first teaching position. Around this time, Bohr was working with Svein Rosseland on the statistical equilibrium of a mixture of atomic and free electrons. At the time, it was believed that electrons colliding with atoms always lost energy. However, Klein, in conjunction with Rosseland, introduced "collisions of the second kind" where the electrons actually gained energy! Klein continued his work on the other side of the 'molecular aisle' by turning his attention to ions. In fact, this led him to his thesis research in which he studied the forces between ions in strong electrolytes using Gibbs' statistical mechanics. The result was a generalized formulation of Brownian motion. He defended his doctorate in 1921 at Stockholm Högskola and was opposed by Erik Ivar Fredholm the mathematical physicist best known for his work on integral equations and spectral theory. After his successful defence, Klein returned to Copenhagen, later assisting Bohr on a trip to Göttingen. Around this time Klein turned to publishing semi-popular writings on physics. His first work in this new arena was a philosophical paper that was a refutation of an objection to relativity theory by Swedish philosophers. Not surprisingly, it was around this time that he began to look for a job. In 1923, Oskar Klein married Gerda Agnete Koch and moved to Ann Arbor, Michigan to take up a post at the University of Michigan, a post he won with no small thanks to his venerable friend Niels Bohr. His first work in Ann Arbor dealt with the anomalous Zeeman effect which was a problem that arose out of the fact that no one at the time understood the behavior of atoms in a magnetic field. 
The classical Zeeman effect was explained, in a nutshell, as the splitting of spectral lines by the magnetic field. The problem was that the classical theory only effectively described atoms with a total electron spin of zero. The difference can be seen in the Hamiltonians of the two. For the time (1923), this was a fairly large problem to tackle, but Klein did not stop there. He went on to work on the interaction of diatomic molecules with precessing electrons, studying the angular momentum within the molecule itself. The following year, in 1924, he taught a course on electromagnetism and lectured on an electric particle in a combined gravitational and electromagnetic field. This was the beginning of his landmark work on a unified field theory. Klein chose to solve the problem by essentially extending his work to a fifth dimension, though his early unification ideas centred around quantum physics as the catalyst. After a time Klein argued less and less that quantum physics could lead to a unified picture, in fact he later abandoned the idea entirely. However, he did see the possibility of unification in five dimensions, which seems to have been present in his initial attempt. At this time, Klein apparently was unaware of the work of Theodor Kaluza. Kaluza, in 1919, sent a paper to Albert Einstein proposing a unification of gravity with Maxwell's theory of light. Einstein initially was uninterested in the paper, but later realized the highly original ideas contained within it and encouraged Kaluza to publish his ideas. In fact the paper was communicated by Einstein himself on 8 December 1921. In 1925, Klein returned to Copenhagen and contracted hepatitis. He was ill for half a year, though he was visited by Heisenberg in July of 1925 and Schrödinger in January of 1926. This was around the time he was finally able to return to work. It was at this time that he finally became aware of Kaluza's work. Wolfgang Pauli communicated this work to him and Klein. 
Klein's adaptation of Kaluza's work had a major difference from the original in that the extra or fifth dimension was curled up into a ball on the order of the Planck length, 10^-33 cm. It is important to note, however, that the extra dimension, though curled up, was still Euclidean in nature. Basically, the fifth coordinate was not observable but was a physical quantity conjugate to the electrical charge. As Kragh explains, Klein attempted to explain the atomicity of electricity as a quantum law. He also attempted to account for the electron and the proton. Klein assumed the fifth dimension to be periodic, with a period on the order of the Planck length. Klein's results were published in Nature in the autumn of 1926 and generated interest from such eminent theorists as Vladimir Fock, Leon Rosenfeld, Louis de Broglie, and Dirk Struik. Unfortunately, despite a lot of initial interest in unification, most physicists eventually went on to more promising and experimentally testable research, leaving Kaluza-Klein theory to be explored by another generation of physicists nearly half a century later. In Klein's own words:- Dirac may well say that my main trouble came from trying to solve too many problems at a time. It was also in 1926 that Klein was appointed docent at Lund University and became, for the next five years, Bohr's closest collaborator on both correspondence and complementarity, and apparently contributed to the development of the uncertainty principle, as Heisenberg recalled:- After several weeks of discussion, which were not devoid of stress, we soon concluded, not least thanks to Oskar Klein's participation, that we really meant the same, and that the uncertainty relations were just a special case of the more general complementarity principle. In fact, 1926 was a banner year for Klein.
In addition to finally recovering from the hepatitis and becoming docent at Lund, it was in this same year that he made his next great theoretical breakthrough. In a paper in which he determined the atomic transition probabilities (prior to Dirac), he introduced the initial form of what would become known as the Klein-Gordon equation. It is interesting to note that this equation appeared exactly as it has been written in David Bohm's 1951 book Quantum Theory but was not called the Klein-Gordon equation there. However, Bethe and Jackiw's Intermediate Quantum Mechanics, originally written in 1964, does refer to the same equation as the Klein-Gordon equation. Klein and Walter Gordon were thus eventually honoured with having the equation named after them, though it seems to have taken over a quarter of a century for them to receive the honour. Oddly enough, Schrödinger himself privately developed a relativistic wave equation from his original wave equation, which, in reality, was not that difficult to do, and did so prior to Klein and Gordon, though he never published his results. The trouble came when the equation did not yield the correct fine structure of the hydrogen atom and when Pauli introduced the concept of spin a year later (1927). The equation turned out to be incompatible with spin and, as a result, is only useful for calculations involving spinless particles. But, nonetheless, it was an important point in quantum theory and, along with his unification theory, was to ensure a lasting legacy for Klein and cement 1926 as a pivotal year in his life. In the years following 1926, Klein turned to teaching and continued his research, though possibly at a reduced pace. Brink [5] quotes a friend and mentor to Klein as having said:- You will now fulfill the words: go and teach the people. Your great pedagogical talents always were one of your strongest qualities.
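For reference, the equation in question, written in modern notation (the biography itself never spells it out), is

```latex
\left(\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}} - \nabla^{2} + \frac{m^{2}c^{2}}{\hbar^{2}}\right)\psi = 0,
```

a relativistic wave equation which, as noted above, turned out to describe spinless particles only.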
I am not of the opinion that finding new laws of nature and indicating new directions is one of your great strengths, although you have always developed a certain ambition in this direction. In 1927, Klein was appointed Lektor in Copenhagen but nonetheless continued his research, working with Pascual Jordan on second quantization in quantum mechanics. In his work with Jordan, he demonstrated the close connection between quantum fields and quantum statistics. It was known that second quantization guarantees that photons obey Bose-Einstein statistics, but Klein showed that second quantization is not confined to free particles only. He and Jordan showed that one can quantize the non-relativistic Schrödinger equation and, in honour of this work, he was the recipient of yet another named mathematical tool, the Jordan-Klein matrices. In subsequent years he collaborated with the Japanese physicist Yoshio Nishina, who was in Copenhagen on an extended research visit, and worked on the problem of Compton scattering of a Dirac electron. Despite the so-called Klein paradox (the positron not yet being understood by physicists), he was able to convince physicists of the soundness of Dirac's relativistic wave equation. His continued work included the quantum mechanics of the second law of thermodynamics and Klein's lemma. In 1930, he was offered Fredholm's position at Stockholm Högskola and finally returned to his native city to take up a post that he held until his retirement in 1962. During the 1930s, Klein helped many refugee physicists who were expelled from Germany and other nations, largely due to their Jewish heritage. Among the many he helped was Walter Gordon, who would later join Klein as beneficiary of the named equation we have just discussed. In 1943, Klein also aided in Bohr's escape from Copenhagen.
During the 1930s Klein also found time to attend conferences, not least the 1938 Warsaw Conference, where he spoke on (almost) non-Abelian gauge theories. This conference included some of the leading theorists of the day, including Sir Arthur Eddington, Eugene Wigner, and others. It was at this conference that Klein suggested that a spin-1 particle mediated beta decay and played a role in weak interactions in a manner similar to the photon in electromagnetism. Klein's hypothesis was yet another crack at a unified field theory, this time an attempt to unify the strong, weak, and electromagnetic forces. The work was not noticed until nearly twenty years later, when it was resurrected by Julian Schwinger in 1957. In the 1940s Klein worked on a wide variety of subjects including superconductivity (with Jens Lindhard in 1945), biochemistry, universal β-decay, general relativity, and stellar evolution. Sometime after 1947 he, and independently Giovanni Puppi, realized that both the electron and the μ-meson were "weak" particles. In the 1950s and 1960s Klein remained active, addressing the 11th Solvay Conference in 1958, developing a new model for cosmology in conjunction with Hannes Alfvén in 1963, and tackling Einstein's general relativity in a paper published in Astrophysica Norvegica in 1964. During his later years, he also became very interested in philosophy, and especially in analogies between science and religion. In addition, he took to writing a few popular books, most of which are out of print. Oskar Klein died in Stockholm, one of the finest theoretical physicists of the twentieth century.
Moustafa Hussein Aly

Optimum Conditions for a High Bit Rate RZ Soliton Train in EDFA with Nonadiabatic Amplification

In this paper, the optimum conditions for propagating an RZ train of solitons down an EDFA at a high bit rate and gain are studied. The suggested model is based on the wave equation of the carrier envelope. A two-energy-level system is used to study the evolution of a train of solitons through the erbium-doped fiber amplifier (EDFA). The induced-polarization term representing the effect of the doping atoms on the propagating electric field is obtained by solving the Maxwell-Bloch equations [1]. After adding the induced polarization to the nonlinear Schrödinger equation (NSE), the split-step Fourier method is used to solve the NSE in the EDFA. At the higher doping level used, the dopants do not respond fast enough, so the induced polarization follows the optical field nonadiabatically. Trains of different soliton widths, time slots and doping levels are used to obtain the optimum conditions for maximum bit rate and EDFA gain.
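The split-step Fourier method mentioned in the abstract alternates between a dispersion step applied in the Fourier domain and a nonlinear step applied in the time domain. The sketch below is mine, not the paper's code: it applies the method to the bare normalised NLSE, omitting the EDFA gain and induced-polarization terms that are specific to the paper's model, and checks that a fundamental soliton keeps its shape:

```python
import numpy as np

# Symmetric split-step Fourier solver for the normalised NLSE
#   i u_z + (1/2) u_tt + |u|^2 u = 0.
# The EDFA gain/dopant-polarization terms of the paper are omitted.
n, T = 1024, 40.0
t = np.linspace(-T / 2, T / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=T / n)

u = 1 / np.cosh(t)               # fundamental soliton: |u| is invariant
dz, steps = 1e-3, 2000           # propagate to z = 2

half_linear = np.exp(-0.5j * k**2 * dz / 2)   # half a dispersion step
for _ in range(steps):
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # dispersion (half)
    u = u * np.exp(1j * np.abs(u)**2 * dz)         # nonlinearity (full)
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # dispersion (half)

print("peak after z=2:", np.abs(u).max())   # stays ~1 for a soliton
```

Including the paper's amplifier physics would amount to extra multiplicative factors in the time-domain step; the Fourier machinery is unchanged, which is exactly why the method is popular for fiber propagation problems.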
Semantics of "Wave"

1. Dec 20, 2007 #1

Hi all, I have two questions related to the use of the word "wave", and I would like to know whether it actually represents a physical wave nature.

#1 The "wave"-function. This is what little I think I know about the wavefunction: it represents certain values about the subject (i.e. electron quantum numbers) and how they evolve with time, and the amplitude squared represents the probability of these values occurring. This of course may be wrong. Now I was wondering how the wavefunction is actually related to a sinusoidal wave. I don't think it means that the subject travels along a wavelike path (could someone confirm this please) - but is the shape of the wavefunction on a graph actually sinusoidal, or Gaussian? As it is related to probability, I would have said it was Gaussian, and this seems confusing, because then it is not exactly a "wave".

#2 De Broglie wavelength. All I know about this is that it implies all matter has a specific wavelength, related to its momentum. I was wondering again how to interpret this. Does it mean that the mass actually "wiggles" along, travelling a sinusoidal path through spacetime (again a yes/no here would be helpful)? Or is it again somewhat like the wavefunction above, related to probabilities? Or is it another way of putting Heisenberg's uncertainty principle? I thought of this possibility after reading the following from http://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality" [Broken]

Any ideas are welcome,

Last edited by a moderator: May 3, 2017

3. Dec 21, 2007 #2

There are a number of 'interpretations' of quantum mechanics (summarised http://en.wikipedia.org/wiki/Interpretation_of_quantum_mechanics#Comparison"), all with differing ideas of what the wave function actually is.
However, for the purposes of this post I will refer to the Copenhagen [non-real wave function] Interpretation since this is generally the most widely accepted (and taught) interpretation, although the 'Many Worlds' interpretation is gaining ground. Roughly speaking, the wave function is just a complex function or a 'mathematical abstraction' or tool used to describe the state of a physical system. In other words, the wave function itself has no physical observables. So, whereas classically the wave function of a vibrating string describes the periodic variation of real physical observables (amplitude etc.) there is no such corresponding observable for a quantum mechanical wave-function. Therefore, the wave function of a particle tells us nothing of how it actually travels. As for the actual shape of the wave function, this very much depends on the system that the wave function is describing and furthermore, the specific state that the system is in. For example, the wave functions of a particle confined to a box (potential well) are generally sinusoidal; however, the wave function of a harmonic oscillator (e.g. diatomic molecule) can be Gaussian. The de Broglie hypothesis states that all particles have a wave-like nature (wave-particle duality). The de Broglie hypothesis does not say anything about the wave function of a particle, only that a particle can be described by a classical wave of angular frequency [itex]\omega = E/\hbar[/itex]. However, the de Broglie hypothesis can be used to find the wave function of a 'free particle' (via the dispersion relation [itex]\omega = k^2\hbar/2m[/itex]), that is a particle that has a non-zero constant probability to be found anywhere in space. As you say, this is related to the Heisenberg Uncertainty Principle, since to apply the dispersion relation we must fix the momentum of the particle and therefore, by HUP, the uncertainty in the position of the particle approaches infinity.
To describe a localised particle, that is a particle which is restricted to some finite region in space, we must make use of quantum wave packets, which are analogous to classical wave packets. To construct a wave packet we must integrate over all possible values of the wave vector k, which can be related back to the momentum of the particle. Since we are integrating over many values of k, we do not have a definite value of momentum to assign the particle and hence, although we have reduced our uncertainty in the position of the particle (by localising it), we have increased the uncertainty in the momentum of the particle. I hope that makes sense and apologise if it's a bit verbose in parts. Last edited by a moderator: May 3, 2017 4. Dec 21, 2007 #3 Thanks for the in depth reply Hootenanny! Good, good. Ok now I am somewhat confused. You say the wavefunction represents the physical state of a system, however this is not a variable...is it not composed of many variables...position, momentum, spin etc? So when you say the shape of the wavefunction can be a certain shape...in what way have you obtained this shape? If it is drawn on a graph, then what are the variables on the two axes? I don't believe you can plot quantum physical state on the y, and time on the x... My next question stems from the answer to the previous one, but if you have a sinusoidal wavefunction...what is oscillating? Let's take a photon for example. When treated as a wave, the sinusoidal nature represents the oscillating EM field. If your particle in a box has a sinusoidal nature, what does this imply? Changing from a particle to an antiparticle (lol I know this isn't true)? Or does it indicate the probability of being at that point (for position in this case, as opposed to the whole quantum state) is changing from zero to a maximum? Again, somewhat confused. And with de Broglie: I feel this is true in the sense that photons have wave-like nature, but it does not mean the photon wiggles along through space.
So I don't think the masses wiggle through space, even though they have wave-like nature. Ok moving further with this idea then, would HUP and duality pretty much be the same thing? If one knows the momentum of something, its position is undefined, thus a wave. If one knows the position, it is a particle (with undefined momentum). So in a sense could not wave-like nature simply represent uncertainties in position? With respect to de Broglie wavelength...the bigger something is (more momentum), the smaller the wavelength...i.e. a more defined position, as one would expect by HUP and intuition. Thanks again for your help, 5. Dec 22, 2007 #4 User Avatar Staff Emeritus Science Advisor Gold Member To determine the wave function for a particular system you must solve the Schrödinger equation (SE) for that particular system. When you solve the SE you will obtain a set of allowed energy states (Eigenvalues) and the wave functions (Eigenfunctions) for that system; it is the quantum numbers (n,l,m etc.) that determine the state, and hence the energy, of the system. In the one-dimensional, time-independent case (the state of the system has no time evolution) the wave function is a function of a single variable (position), [itex]\psi = \psi(x)[/itex]. Now, a wave function that is purely a function of position is known as a probability amplitude(1), and the values of this function are probability amplitudes. It is these values that we plot; in other words, when we say that a wave function is a certain shape (sinusoidal, Gaussian etc.), we mean that when we plot the values of the wave function against position we obtain a certain shape. So the oscillations represent how the probability amplitude varies with position.
For example, the wave function for a particle undergoing one-dimensional simple harmonic oscillations in the ground state is given by; [tex]\psi_0(x) = A_0\exp\left(-\frac{x^2}{2\sigma^2}\right)[/tex] So, if we plot [itex]\psi(x)[/itex] against [itex]x[/itex] we obtain a Gaussian curve (see this http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/imgqua/hoscom2.gif). To reiterate, the wave function doesn't have a physical observable and hence the oscillations of the wave function don't have any physical significance (since the wave function is complex-valued). I hope that answered your first two questions. I think you've got the general idea. However, in general you should note that there will be some uncertainty in both the position and momentum of a particle described by quantum mechanics. For example, as I said in my previous post, we can construct a wave packet by integrating over a range of values ([itex]\Delta k[/itex]) of the wave vector k. A wave packet is characterised by a zero probability amplitude over all space except for a small region [itex]\Delta x[/itex]. According to de Broglie, the spread in the wave vector k results in a spread of momentum [itex]\Delta p_x[/itex]. http://hyperphysics.phy-astr.gsu.edu/hbase/uncer.html#c2 - further reading and excellent pictorial representations. I think you're confusing the issue a little here. The actual value of the momentum of a particle says nothing about the uncertainty in either position or momentum; a smaller wavelength does not mean less uncertainty in position, the uncertainty will remain unchanged. For example, let us take the [1D] free particle solution; [tex]\Psi(x,t) = Ae^{i\left(kx-\omega t\right)}[/tex] And then let us find the probability density; [tex]P = \Psi\cdot\bar{\Psi} = Ae^{i\left(kx-\omega t\right)}\cdot Ae^{-i\left(kx-\omega t\right)}[/tex] [tex]P = A^2[/tex] Hence, the probability density is constant throughout all space and is independent of the wave vector k and hence the momentum.
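The constancy of the free-particle probability density claimed above is easy to check numerically; the following is an illustrative sketch (the specific values of A, k, omega and t are arbitrary, not taken from the post).

```python
import numpy as np

# For the free-particle plane wave Psi = A exp(i(kx - wt)), the probability
# density Psi * conj(Psi) equals A^2 at every point, independent of x, t and k.
A, k, w, t0 = 2.0, 3.0, 1.5, 0.7
x = np.linspace(-10.0, 10.0, 1001)
psi = A * np.exp(1j * (k * x - w * t0))
density = (psi * np.conj(psi)).real
print(np.allclose(density, A**2))   # True: constant over all space
```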
So to conclude, it is the width of the wave packet that determines the uncertainty in position, rather than the wavelength. (1)It should be stressed that the probability amplitude is not equivalent to the probability density. The probability amplitudes are simply the values of the wave function at some position in space and are therefore complex values and as such have no physical observables. Last edited by a moderator: May 3, 2017 6. Dec 22, 2007 #5 Ok thats kind of what I expected about the position and time etc, after reading a bit on probability and the like, but I'm still confused with the sinusoidal wave nature of the wavefunction. When we square the wavefunction, for the case you are referring to, this then gives the probability for finding the particle at that certain position. But how come the probability oscillates from zero to a maximum and back again? For example in the link you gave previously... http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/imgqua/hoscom2.gif I would expect the shape of the bottom graph, but not the graphs above it. How come these sinusoidal ones imply that there is zero chance of finding a particle at some positions and a maximum at others? I would have thought the probability of finding a particle would be shaped like the bottom graph...? Sorry I can't really put my problem into words...I hope you know what I'm asking. Wow ok thats really neat...I totally like thought of that idea myself and you (and the hyperphysics page) just confirmed it! That is really cool!! Ok now after reading the hyperphysics page, and your explanation, I am somewhat confused. In the Hyperphysics page, it shows that summing some waves together (is this to simulate the uncertainties in momentum?) will give an interference pattern - the "wavepacket" - and its width represents the uncertainty of position. However these waves were infinite weren't they, as they represented exact momenta?
Therefore should not the interference pattern repeat itself, and it too be infinite...thus giving infinitely many wavepackets, and thus giving the particle an infinitely undefined position? Last edited by a moderator: May 3, 2017 7. Dec 23, 2007 #6 User Avatar Staff Emeritus Science Advisor Gold Member From what I can gather, you believe that the probability density of a localised particle (i.e. Harmonic Oscillator, particle in a box etc.) should be Gaussian and you can't see why it would be otherwise? Furthermore, you can't understand how, when we find the probability density from a wave function, the probability density oscillates. If I'm wrong, please correct me. To simplify things, let's step away from the harmonic oscillator and stick to a particle in a box. Now classically, if you put a single particle in a sealed box away from any other influences you know what's going to happen. The particle will travel with a uniform velocity until it collides with the wall of the box, in which case it will bounce off the wall (accelerate) and then proceed with uniform motion once again. Therefore, there would be an equal probability of finding the particle at any point in the box. However, if we use quantum mechanics to describe the 'particle in a box' system we find that the system doesn't behave as classically predicted. For a particle in a bound state (localised), the probability density will oscillate as a function of position and there will be points where the probability amplitude vanishes (nodes) and points where it is maximal (anti-nodes). How do we know this? We know this because when we solve the Schrödinger equation for a bound state, the solution we obtain does indeed oscillate. There is no classical explanation for this, nor any intuitive explanation of why this is the case; it is a purely quantum mechanical effect.
There are some 'analogies' I've seen (and indeed, was taught at undergraduate level) but they tend to only confuse the matter further. Let us now take a concrete example so that you can see how we determine the probability density from a wave function. Let us take the example of the one-dimensional case of a particle in an infinite potential well (particle in a box) of width a. In this case the Schrödinger equation has solutions of the form; [tex]\psi_n(x) = \sqrt{\frac{2}{a}}\sin\left\{\frac{n\pi}{a}\left(x+\frac{a}{2}\right)\right\}[/tex] Now we find the probability density; [tex]P_n(x) = \psi_n(x)\cdot\overline{\psi_n}(x) = \psi_n^2(x)[/tex] [tex]P_n(x) = \frac{2}{a}\sin^2\left\{\frac{n\pi}{a}\left(x+\frac{a}{2}\right)\right\}[/tex] So, we have essentially squared the wave function to obtain the probability density, which effectively squares the amplitudes and reflects the portion of the curve below the x-axis in the x-axis. Hence, the probability density still oscillates, but is everywhere non-negative and oscillates at twice the spatial frequency of the wave function. Although you may not be satisfied with the explanation, hopefully now you can understand why (mathematically at least) the probability density of a localised particle oscillates. I apologise for confusing you; I will attempt to qualitatively clarify the idea of wave packets here. Firstly, the reason we construct a wave packet is not to simulate the uncertainty in momentum, we do so to localise the wave function (and hence the particle) into some small region Δx. The resultant uncertainty in momentum is a necessary 'by-product', if you like, of this process. In addition, we can't just 'choose' any old waves to superimpose, the wave packet must be constructed from the eigenfunctions of the system, that is the waves must be solutions of the Schrödinger equation. As for your final point, yes, when the waves 'interfere' they will generate more than one 'wave packet'.
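As a quick numerical check of the probability density quoted above, this sketch (with illustrative values of a and n, not from the post) verifies that P_n integrates to 1 over the well and that the n-th state has n - 1 interior nodes.

```python
import numpy as np

# Particle in an infinite well of width a centred on x = 0:
#   psi_n(x) = sqrt(2/a) sin{ (n pi / a) (x + a/2) },  P_n = psi_n^2.
a, n = 2.0, 3                       # illustrative well width and quantum number
x = np.linspace(-a / 2, a / 2, 200001)
psi = np.sqrt(2.0 / a) * np.sin(n * np.pi / a * (x + a / 2))
P = psi**2                          # probability density P_n(x)
dx = x[1] - x[0]
total = np.sum(P) * dx              # numerical integral of P over the well
s = np.sign(psi[1:-1])              # interior samples (endpoints are zero)
sign_changes = int(np.sum(s[:-1] * s[1:] < 0))
print(total, sign_changes)          # ~1.0 and n - 1 = 2 interior nodes
```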
However, the probability amplitudes of these 'secondary wave packets' will be small compared to the probability amplitude of the 'primary wave packet', analogous to the amplitudes of the diffraction pattern observed for the double slit experiment. Furthermore, the probability amplitudes rapidly decrease away from the center of the wave packet. Take note that wave packets are difficult to accurately explain without going into the mathematics, and the above is a 'rough and ready guide'; but I hope I managed to answer your questions. If you would like to study wave packets more rigorously I can recommend some texts, but beware that the mathematics required is not trivial. I should also mention that the solution for a free particle (post #4) is not a true wave function, since a requirement for a valid wave function is that it should be square integrable(2). Hence, the solution given in post #4 is unphysical and the reason for this relates to my comment earlier; In other words, a free particle cannot have an exactly defined momentum, this implies that there must be some uncertainty in the momentum of the particle and hence some spread in the wave vector k. Therefore, physically acceptable solutions take the form of a wave packet. (2)If a function is square integrable over some interval, then the integral of the square of its absolute value over that interval must be finite. In the case of a free particle the interval in question would be [itex](-\infty,\infty)[/itex]. Last edited by a moderator: May 3, 2017 8. Dec 23, 2007 #7 That is correct Yup, you figured out my problem. I was thinking like your classical example...there should be an equal probability of finding the particle anywhere in the box. I'm glad you pointed out that there is no intuitive explanation for the probability oscillation, because thats what I was looking for. And unfortunately it seems that QM can't explain it, but I'm betting that in experiments QM will predict observations correctly.
Thanks for this...normally I just pass over the maths because it's way over my head, but this I can follow. QM is a mathematical model, so I suppose looking at the mathematics occasionally will probably help me understand it :smile:. However, I have one last question to do with this probability oscillation. With your particle in a box example, you get the oscillating probability density. I must assume the placement of nodes and anti-nodes is independent of your point of reference for position? I.e. the wavefunction will remain in the same place relative to the walls of the box, even if you take the position values from varying reference points - outside the box? Otherwise the wavefunction would move relative to the box, due to moving reference points; which, from the particle's point of view, should have nothing to do with it or its placement of the wavefunction in the box. Just looking over the equation you gave previously, I saw the value a, as the width of the well. Therefore I assume position is measured relative to the box, so I assume my prior question is somewhat irrelevant, as if the reference is the box, then there should be no problem. I see what you are saying about localizing the position by adding the waves, but this must only apply if the waves are finite...i.e. already have a somewhat localised position. Ah, this must be where this comes in: Previously I was thinking these waves that you add together were infinite. Adding simple sine waves like in the Hyperphysics example would not - I don't believe - give wavepackets that diminished, similar to diffraction patterns. There would be some point where they would all be in phase again and the process would repeat itself all over again. However if these waves are not infinite (as they have some certainty in position), then I can see how this could happen.
However, then again, looking over the Schrödinger solution you showed me above, it seems to indicate that the wave is infinite...a normal sine wave, so maybe my train of thought is completely wrong. Apart from this little point on adding the waves, the answer to the following would be yes...thanks a bunch. I suspect that in the real world, one does not actually add the waves together...there is some other mathematical process that gives the wave packets (that are also diminishing). However as you said, the maths would be difficult, in which case I will probably avoid those books you refer to until my maths can keep up. Thanks again, 9. Dec 24, 2007 #8 User Avatar Staff Emeritus Science Advisor Gold Member Indeed, since Physics is based on Mathematics I firmly believe that one can never truly understand a phenomenon until one can follow the Mathematics. One may be able to get the general idea from a qualitative analysis, but there will often be observations that at first seem counter-intuitive; it is only when one follows through the mathematics that it becomes clear. As for QM not being able to explain oscillating probability densities, it can mathematically: it follows directly from the postulates of quantum mechanics. I shall try and answer all of the above in one fell swoop. You are entirely correct, we define our system relative to the box. So in the case of our one-dimensional infinite potential well x=0 is in the middle and at the bottom of the well and the two walls are located at x = +/- (a/2) as shown here; Of course you can define your coordinate system however you like, but a symmetric system usually makes life simpler. Writing the solutions to the Schrödinger equation in that form isn't entirely correct.
In this case, our potential function (V) is defined piecewise thus; [tex]V(x) = \left\{ \begin{array}{cr} \infty & \left|x\right| > a/2 \\ 0 & \left|x\right| \leq a/2 \end{array}\right.[/tex] Which in words means, the potential energy of the system goes to infinity if the distance from the origin is greater than a/2, otherwise, the potential is equal to zero. This restricts our particle to exist in a 'box' of width a. Equally we must define our wave function (ψn) in a similar fashion since we have two distinct cases; the case where we have zero potential, and the case where our potential tends to infinity. Hence, we write the wave function for a particle of mass m in our one-dimensional potential well thus; [tex]\psi_n(x) = \left\{ \begin{array}{cr} 0 & \left|x\right| > a/2 \\ \sqrt{\frac{2}{a}}\sin\left\{\frac{n\pi}{a}\left(x+\frac{a}{2}\right)\right\} & \left|x\right| \leq a/2 \end{array}\right.[/tex] So you see in actual fact, the wave function of a particle in a box is finite, it only exists inside the box. Hopefully that will put your mind at rest in terms of the construction of wave packets and finite wave functions. Incidentally, something you may find interesting is if we consider a finite potential well, that is similar to the case above but where the potential energy function terminates at some finite value. We find that a particle with a kinetic energy that is less than the potential energy of the well has some non-zero probability to be found inside the walls of the potential well! This is forbidden classically and is the basis for Quantum Tunneling. The mathematical techniques used to construct wave packets are called Fourier transforms, which are part of the more general area of Fourier analysis. If you're looking to study Quantum Mechanics at any sort of depth, I would recommend taking courses in Linear Algebra and Analysis, specifically Functional Analysis. I hope I managed to clear everything up for you, if I didn't I'm sure you'll be back. In the meantime have a very merry Christmas. Last edited: Dec 24, 2007 10.
Dec 24, 2007 #9 I suppose I best keep taking mathematics then, in an attempt to further understand QM! Firstly I'm not sure if you're a teacher or not already, but you should consider the profession! Very helpful replies! Ok thanks for going through the simple maths of it, I can now see how the wavepacket is finite, of width a, and although the waves forming it (differing values of n) are infinite sine waves, the wave packet itself is only defined within the potential well. So thats good! However I still do have some questions... = ) #1 Well firstly I think that this means the wavepacket has width a, so this represents the uncertainty in position (as a side question, I'm guessing the "main" packet is not of width a, so the width of the "main" packet does not indicate the uncertainty in position, but rather the width of the whole wave packet does). Anyway this would make classical sense; as the potential well and a get bigger, so does the uncertainty in position. So that's what I am assuming; however my question is: what happens as a tends to infinity - mathematically? I'm guessing, that by HUP, a and thus the uncertainty in position approaching infinity would result in an exactly determined momentum. In the Hyperphysics pages this means a perfect sine wave, so I'm guessing that as the value of a approaches infinity, the n value in the equations becomes negligible, so that all the waves produced as solutions of Schrödinger's equation are the same sine wave (or conversely ones with the same period) so as to produce a final wave packet that is a perfect sine wave, i.e. defined momentum. I wonder if the actual mathematical solution for when a approaches infinity agrees with what I've stated before?? #2 I just noticed the n in the equation. I assume the differing integer values of n give the differing sine waves that are added together to make the resultant wave packet (as referred to in Hyperphysics)? However I was wondering what the value of n actually represents?
I don't believe it is just to represent a general solution to a trigonometric equation...? Yes I'm considering whether to try and get into Cambridge, and take the natural sciences course. However for first year, you can do a course that is something like the mathematics involved in physics instead...and then progress to the natural sciences in 2nd year. I'm betting this would be helpful!! Anyways thanks, and you have a great Christmas too, 11. Dec 24, 2007 #10 User Avatar Staff Emeritus Science Advisor Gold Member Thank you for the kind words :smile: Okay, I'm going to address your second question and hopefully this will make sense of your first question. First and foremost, the solution that I gave in post #6 is not a wave packet, it is a pure sinusoidal wave. The solution to the infinite potential well does not require the use of wave packets and can be obtained trivially by directly solving the Schrödinger equation. It is a pure sinusoidal wave, with no interference or summations. The n in the solution does not refer to any summations or integrations, it simply defines the energy eigenstates of the system. The energy eigenstates for a particular system are the set of eigenvalues (energies) and eigenfunctions (wave functions) that satisfy the [time independent] Schrödinger equation. Perhaps it would have been prudent to mention it sooner, but in general, there isn't just one single solution that satisfies the Schrödinger equation for each system, there can be infinitely many solutions. This set of solutions forms the energy eigenstates for that particular system.
If we now go back to our particle in a box and examine our general solution together with the energy eigenvalues (just considering the case where [itex]\left|x\right| \leq a/2[/itex]); [tex]\psi_n(x) = \sqrt{\frac{2}{a}}\sin\left\{\frac{n\pi}{a}\left(x +\frac{a}{2}\right)\right\} \hspace{5cm} E_n = \frac{h^2}{8ma^2}n^2[/tex] And in this case, [itex]n\in\mathbb{Z}^+[/itex], that is n must be a positive integer. Hence, we can start writing out our energy eigenstates; [tex]\psi_1(x) = \sqrt{\frac{2}{a}}\sin\left\{\frac{\pi}{a}\left(x +\frac{a}{2}\right)\right\} \hspace{5cm} E_1 = \frac{h^2}{8ma^2}[/tex] [tex]\psi_2(x) = \sqrt{\frac{2}{a}}\sin\left\{\frac{2\pi}{a}\left(x +\frac{a}{2}\right)\right\} \hspace{5cm} E_2 = \frac{h^2}{2ma^2}[/tex] And so on. You can see a visual representation of the solutions at http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/imgqua/box1.gif. So rather than n representing combinations of several waveforms into a single solution (wave packet), it represents individual discrete solutions. I hope that makes more sense to you. While we're here we may as well discuss some consequences of the above solutions. Firstly, you should observe that we have quantised energy eigenvalues, the particle is only 'allowed' to have certain energies. For example, the particle can have an energy equivalent to E1 or E2 (or E3,4,5,...), but can't have anything in between. We say the particle has a discrete energy spectrum, which is in stark contrast to classical physics, where the energy spectrum is continuous. Secondly, note that our lowest permitted energy state (i.e. the energy eigenstate corresponding to n=1) is non-zero; this phenomenon is known as zero-point energy and I'm sure you've at least heard of it before. We can understand this phenomenon qualitatively in terms of the HUP, which states that the product of the uncertainties in two complementary measurements must be of the order of [itex]\hbar[/itex].
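The eigenvalue formula above can be put into numbers. This is a sketch for an electron in a 1 nm well (illustrative values; the constants are rounded reference values, not from the thread):

```python
# Energy eigenvalues E_n = h^2 n^2 / (8 m a^2) evaluated for an electron
# in a 1 nm infinite well.
h = 6.626e-34      # Planck constant, J s (rounded)
m_e = 9.109e-31    # electron mass, kg (rounded)
a = 1.0e-9         # well width, m
eV = 1.602e-19     # joules per electron-volt (rounded)

def E(n):
    return h**2 * n**2 / (8.0 * m_e * a**2)

print(E(1) / eV)       # ground state, roughly 0.38 eV
print(E(2) / E(1))     # exactly 4: energies scale as n^2
```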
However, if a particle has zero energy it will be at rest, and therefore it will have a uniquely defined momentum (zero) and position, thus violating HUP. We can take this concept further and say that the particle in the infinite potential well of width a is restricted to [itex]|x|\leq a/2[/itex], and hence has an associated uncertainty in position of [itex]\Delta x \approx a[/itex] (we know that the particle must be somewhere in the well, but we don't know where). Hence, we can write; [tex]\Delta x \cdot \Delta p \approx \hbar \Rightarrow \Delta p \approx \frac{\hbar}{a}\hspace{5cm}(1)[/tex] Furthermore, we know that kinetic energy is related to momentum thus, [itex]E = p^2/2m[/itex], hence we can write; [tex]\Delta E \approx \frac{\hbar^2}{2ma^2} = \frac{h^2}{8\pi^2 ma^2}[/tex] Which is 'qualitatively' in agreement with our first energy eigenvalue E1 (the two differ only by a factor of [itex]\pi^2[/itex]). Note that although this is a very 'rough and ready' analysis, a more formal treatment can show that the associated uncertainty in the energy is in exact agreement with the energy eigenvalues. Furthermore, if we examine equation (1), we find that the uncertainty in momentum is inversely proportional to the width (a) of the well, which intuitively makes sense. If we reduce the width of the well (as a approaches zero), we are increasing the spatial localisation of the particle and hence decreasing the uncertainty in position. Therefore, by HUP we would expect the uncertainty in momentum to increase. Conversely, if we increase the size of the well (a approaches infinity), the particle becomes less localised and behaves more like a free particle, hence the uncertainty in momentum approaches zero. I think that, partially at least, answers your first question. A good friend of mine took natural sciences at Cambridge and he had very good things to say about it. From what he said the course sounded interesting and apparently, after your first year, you have virtually free choice over which modules you take.
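The uncertainty estimate above (Δp ≈ ℏ/a, hence ΔE ≈ ℏ²/2ma²) can be compared numerically with the exact ground-state energy E1 = h²/8ma². This sketch uses illustrative numbers (electron, 1 nm well, rounded constants):

```python
import math

# The estimate Delta E ~ hbar^2 / (2 m a^2) and the exact E_1 = h^2 / (8 m a^2)
# differ only by the dimensionless factor pi^2, so the scaling with the mass m
# and the well width a is identical.
h = 6.626e-34          # Planck constant, J s (rounded)
hbar = h / (2.0 * math.pi)
m = 9.109e-31          # electron mass, kg (rounded)
a = 1.0e-9             # well width, m
dE = hbar**2 / (2.0 * m * a**2)
E1 = h**2 / (8.0 * m * a**2)
print(E1 / dE)         # pi^2, about 9.87
```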
It's good to be interested in Quantum, but I wouldn't worry about understanding it all yet; any Quantum Mechanics you take in your first two years as an undergraduate will in all probability be fairly superficial. That said, there's no harm in getting ahead of the game, especially if you're interested in the subject! Well at the outset I intended that to be quite a concise post, but it seems that it ran away with me a little. I apologise if it seemed a little hard going. Last edited by a moderator: May 3, 2017 12. Dec 24, 2007 #11 Ok these two confused me somewhat...in the first quote I read it as meaning the wave packet was constructed from the various solutions to the Schrödinger equation, which I then thought meant due to the differing values of n. I thought adding these waves together gave the wavepacket. However the second quote seems to go against this, and suggests the solution to the Schrödinger equation is the wavepacket (which in this case is just sinusoidal)? Unless a wavepacket is only defined for each energy eigenstate... i.e. each n value gives a different wavepacket. In this case the solution to the Schrödinger equation is just the sinusoidal wavepacket? Actually thinking about it logically, you can't have a wavepacket for all n values, otherwise when added together it would be a mess, so to speak...as n goes to infinity. So then I'm guessing you can have a wave function to represent one variable (in this case position), however it only applies to certain eigenstates/eigenvalues...i.e. one n value? I am assuming then that one must know the energy eigenstate the particle is in, otherwise one would not know which solution to the Schrödinger equation to apply...? Let's clear this up too...is the wave function the same as the "wave packet", which is the same as a particular solution to the Schrödinger equation? Hmmm Fourier analysis is representing a function as the sum of sinusoidal terms, right?
So in the case of the wave packet - wave function - in the Hyperphysics page, where it referred to adding waves together to get the wave packet, do these waves actually have a physical meaning (like the differing n values I implied, although as you have said, this is wrong), or are they simply the basis functions in the Fourier analysis? Thanks for this...I now know the role of the n value much more clearly! The relation to energy eigenstates is also neat...I assume that again one must know the energy eigenstate the system is in, for the wave function to be of any use?? Am I correct to assume that HUP still applies to energy, as momentum and energy are directly related? However this would then imply one could not know the exact energy eigenstate the system is in, so I am thus making contradictory assumptions!! Anyways quantised energy...that's an interesting proposition. I've heard it before, but thanks for showing me the maths to prove it...well to some degree anyway. Also yes, that answers my first point. You've agreed with me on that one, so that problem's settled! The zero point energy is interesting...I have heard of it briefly before. I think applying this with quantum field theory results in the so-called "vacuum energy". Not sure my classical side really likes the idea. As a completely off topic bit of info, while reading this page from Wikipedia http://en.wikipedia.org/wiki/Vacuum_energy I found the following: It's somewhat reminiscent of aether isn't it? Anyways thanks again for your replies, Last edited by a moderator: May 3, 2017
Oliver Smart (c) O.S. Smart 1996, all rights reserved Non-bonded Interactions As the name implies, non-bonded interactions act between atoms which are not linked by covalent bonds. Like most things this is simple to state but can be confusing to apply in practice! In most approaches, atoms which are involved in a bond angle are also not regarded as having a non-bonded interaction. 1-4 interactions (those between the end atoms involved in a dihedral angle) are sometimes given an additional scaled-down non-bonded interaction. Similarly, the interaction between a metal ion and its liganding atoms is usually regarded as non-bonded and treated by the kind of approach set out here, but is sometimes represented by the bond and bond angle terms of the potential energy function (e.g., in representing the iron atom in the haem group found in globins). Electrostatic interactions As mentioned in the introduction to this section, electromagnetic interactions dominate on the molecular scale and provide the fundamental basis for all the different bonded and non-bonded interactions discussed here. This is clearest in the case of electrostatic interactions, where charges on nuclei and electrons interact according to Coulomb's law: V(r) = q1 q2 / (4 π ε0 εr r), where q1 and q2 are the magnitudes of the charges, r is their separation, ε0 the permittivity of free space and εr the relative dielectric constant of the medium in which the charges are placed (if you do not remember this consult your high school Physics textbook!). The strictly correct way to use the law would be to consider every nucleus and electron separately, plug it into the Schrödinger equation and apply quantum chemical methods to solve the equation for the spatial configuration of nuclei we are interested in. As already mentioned, this is completely impractical for biomolecular systems.
So instead we wish to develop a useful model for the interactions between nuclear centres (commonly called "atoms") without having to explicitly deal with the electrons in a system. The simplest approach is just to consider the formal charges of the protein. Formal charges show whether chemical groups are ionized, i.e., whether an atom or set of atoms has lost or gained an electron. Isolated amino acids (in neutral solution) are zwitterionic - this means that although the molecule has no overall charge it carries both a negatively charged group and a positively charged group: an isolated alanine residue in solution. Note that the two oxygen atoms in a charged carboxylate group are equivalent - the double bond is delocalized across the group. In proteins the individual amino acids are polymerized (in a condensation reaction - releasing water). This results in a peptide backbone which is electrically neutral with the exception of the ends of the chain. In a normal protein the amino end carries a positive charge (-NH3+) and the carboxyl end carries a negative charge (-CO2-). In some cases (e.g., a number of membrane polypeptides) the ends are chemically modified to avoid these charges (for instance by acetylation of the amino end group). Most of the standard amino acids found in proteins have uncharged side chain groups. However, there are a number of basic residues which are positively charged at normal pH (if you do not know the meaning of the pH scale of acidity then consult any text book on physical chemistry): (in this diagram carbon atoms are drawn using black lines and only polar hydrogen atoms are shown in green) In addition histidine is normally charged at neutral pH (the proton normally residing on the delta nitrogen but sometimes on the epsilon).
When the residue is placed in a basic environment it loses a proton and becomes uncharged: There are two standard residues which normally carry a negative charge: glutamic and aspartic acids. Very many proteins bind inorganic ionic species such as metal ions - indeed such ions can play a crucial role in the mechanism of a protein. An example of this is the enzyme xylose isomerase, which has two Mg2+ ions at the active site which coordinate to the substrate, polarizing it and stabilizing the transition state in the reaction.

Salt bridges

As might be expected, a positively charged lysine or arginine residue can form a strong interaction with a negatively charged asp or glu group. In proteins this interaction is referred to as a salt bridge. Click here if you would like to see an example of a salt bridge. In practice salt bridges are relatively rare in proteins, and they normally occur on the surface as opposed to internally. An exception is when an internal salt bridge is involved in the catalytic mechanism of an enzyme, such as in the asp-his-ser triad of serine proteases (a classic example of the structural basis of enzyme activity). The reason for this is that although an internal salt bridge is a strong interaction in comparison to having the isolated residues widely separated in a vacuum, it is normally destabilizing for a protein. This apparent paradox is due to the fact that when considering the effect of an interaction one must consider the difference in the (free) energy between the folded and the unfolded but solvated states. In the unfolded state the residues involved in a salt bridge would be widely separated but each making very favourable interactions with water molecules (for advanced students - there is an entropic contribution to this). These interactions are lost when the same residues are buried in the largely hydrophobic core of the protein.
Similar arguments apply to practically all attempts to elucidate the energetic contributions to protein folding or ligand binding - normally a small overall free energy advantage arises from the balance between large but cancelling contributions. This is one of the problems which make the computational prediction and analysis of protein behaviour so difficult.

Hydrogen bonds

The electrostatic interactions between groups which carry no formal overall electrical charge are of fundamental importance to biomolecular structure. The source of these is that uncharged species can still have a large inherent polarization - the orbitals around the molecule are distributed in such a way that parts of the molecule have fewer electrons and thus carry a positive charge, and other parts have an excess and are therefore negatively charged. Some atoms (O, N and to a lesser extent S) have a tendency to attract electrons (filling the valence shells) and are termed electronegative. Others (notably metallic atoms) have a tendency to lose electrons. In the extreme this tendency causes one atom to lose an electron completely to another - leading to the formation of formally charged species or ions (see previous section). In less extreme conditions electrons are shared between two atoms in a covalent bond but are pulled towards one partner. The classical example of this is the water molecule: [picture of partial charges on water molecule] As oxygen is electronegative it draws the electrons in the bonds it shares with the hydrogen atoms towards it. The hydrogen atoms are left with a net positive charge and the oxygen is negative. In the case of a water molecule the value for the effective charge on each hydrogen atom is quite large - around one third of an electron. Together with the short distance between the oxygen and hydrogen atoms this results in the water molecule having a large dipole moment.
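The claim that roughly a third of an electron on each hydrogen gives water a large dipole moment is easy to check numerically. The sketch below is ours, not from the course notes: it uses the experimental gas-phase geometry, and the specific charge of 0.33 e is an illustrative assumption (close to what common rigid water models use).

```python
import math

# Assumed partial charges (fractions of an electron) and experimental geometry.
Q_H = 0.33          # each hydrogen, in units of e (illustrative value)
Q_O = -2 * Q_H      # oxygen balances the molecule to neutrality
R_OH = 0.9572       # O-H bond length, Angstrom
ANGLE = 104.52      # H-O-H angle, degrees

def water_dipole_debye():
    """Dipole moment of a rigid water model, in debye (1 e*Angstrom = 4.803 D)."""
    half = math.radians(ANGLE) / 2.0
    # Place O at the origin; each H projects R_OH*cos(half) onto the bisector,
    # so only the bisector component of the dipole survives by symmetry.
    mu_e_angstrom = 2.0 * Q_H * R_OH * math.cos(half)
    return mu_e_angstrom * 4.803

print(round(water_dipole_debye(), 2))  # close to the experimental 1.85 D
```

That a back-of-the-envelope charge model lands so near the measured gas-phase dipole is exactly why "around one third of an electron" is the number usually quoted.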
Two water molecules can therefore form a strong electrostatic interaction: [picture of water dimer] This interaction is known as a hydrogen bond (incidentally, the lowest-energy water dimer is not flat: there is an angle of around 23 degrees between the planes of the molecules). The bond is normally around 2.8 Å long (measured from oxygen to oxygen). This length results from the interplay between the electrostatic stabilizing factor and the repulsion between the oxygen atoms as they come closer. Water molecules can form a network of hydrogen bonds. The fact that the hydrogen bond is strong has a number of important consequences: Many side chain groups in proteins can form hydrogen bonds. In addition, interactions can be formed between groups carrying a formal charge and hydrogen bonding atoms. This is normally regarded as an especially strong variant of the hydrogen bond.

Partial Charges

We have seen that electrostatic interactions are of fundamental importance to proteins. We shall now briefly examine the manner in which they are normally treated in computational studies. The most common approach is to place a partial charge at each atomic centre (nucleus). These charges then interact by Coulomb's law. A charge can be a fraction of an electron and can be positive or negative. Charges on adjacent atoms (joined by one or two covalent bonds) are normally made invisible to one another - the interactions between these atoms being dealt with by covalent interactions. Note that the concept of a partial charge is only a convenient abstraction of reality. In practice many electrons and nuclei come together to form a molecule - partial charges give a crude representation of what a neighbouring atom will on average "see" due to this collection.
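The bookkeeping described above - full Coulomb interactions between non-bonded pairs, none between atoms separated by one or two bonds, and a scaled-down 1-4 term - can be sketched as follows. The four-atom "molecule", its charges and distances are invented for illustration; the 1/1.2 scale factor follows AMBER's convention, and the function name is ours.

```python
import itertools

COULOMB_K = 332.06      # kcal*Angstrom/(mol*e^2)
SCALE_14 = 1.0 / 1.2    # AMBER-style 1-4 scaling (an assumption here)

def electrostatic_energy(charges, distances, excluded, pairs_14):
    """Sum k*qi*qj/rij over atom pairs, honouring exclusions and 1-4 scaling.

    distances maps a pair (i, j) with i < j to rij in Angstrom; pairs in
    `excluded` (1-2 and 1-3) are skipped entirely - their physics lives in
    the bond and angle terms of the potential.
    """
    n = len(charges)
    total = 0.0
    for i, j in itertools.combinations(range(n), 2):
        if (i, j) in excluded:
            continue
        scale = SCALE_14 if (i, j) in pairs_14 else 1.0
        total += scale * COULOMB_K * charges[i] * charges[j] / distances[(i, j)]
    return total

# Toy 4-atom chain 0-1-2-3: only the 1-4 pair (0, 3) interacts, scaled down.
q = [0.2, -0.2, 0.2, -0.2]
excl = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}
d = {(0, 3): 3.8}
print(round(electrostatic_energy(q, d, excl, {(0, 3)}), 2))
```

Real force-field codes do the same bookkeeping with neighbour lists rather than explicit pair loops, but the exclusion logic is identical.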
The standard modern way to calculate partial charges is to perform a (reasonably high level) quantum chemical calculation for a small molecule which is representative of the group of interest (e.g., phenol is considered for tyrosine). The electrostatic potential is then calculated from the orbitals obtained, for many points on the molecular surface. A least-squares fitting procedure is then used to produce a set of partial charges which give potential values most consistent with the quantum calculations (Cieplak, P., Cornell, W.D., Bayly, C., Kollman, P.A. (1995) Application of the multimolecule and multiconformational RESP methodology to biopolymers - charge derivation for DNA, RNA and proteins. J. Comp. Chem. 16:1357-1377). Older procedures used methods in which orbital populations are simply split between atoms (Mulliken population analysis). Although much simpler, these charges do not produce a reasonable representation of the electrostatic potential around a molecule - which is usually what is of interest in a simulation. To see the kind of values typical for partial charges look at the picture of a salt bridge. As mentioned previously, using partial charges at nuclear centres is the crudest effective abstraction. To obtain a more accurate representation two approaches are common. The first is to add dipole, quadrupole and higher moments to the nuclear centres (do not worry if these terms are unfamiliar; see: Stone, A.J., Price, S.L. (1988) Some ideas in the theory of intermolecular forces: Anisotropic atom-atom potentials. J. Phys. Chem. 92:3325-3335). The second is to introduce further non-nuclear centres - this is commonly done to represent the anisotropy in the potential caused by lone pairs on oxygen atoms (Cieplak, P., Cornell, W.D., Bayly, C., Kollman, P.A. (1995) Application of the multimolecule and multiconformational RESP methodology to biopolymers - charge derivation for DNA, RNA and proteins. J. Comp. Chem. 16:1357-1377).
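The least-squares step described above is easy to demonstrate with synthetic data. In the toy sketch below the "quantum" potential is generated from known charges and then recovered by an unconstrained linear fit; real RESP additionally constrains the total charge and adds hyperbolic restraints. All names, geometries and numbers are invented for illustration.

```python
import numpy as np

def fit_esp_charges(atom_xyz, grid_xyz, esp_values):
    """Least-squares partial charges (units of e) reproducing esp_values,
    with the potential in units of e/Angstrom (physical constants dropped)."""
    # Design matrix: A[p, a] = 1 / |grid_p - atom_a|, one column per atom.
    d = np.linalg.norm(grid_xyz[:, None, :] - atom_xyz[None, :, :], axis=2)
    A = 1.0 / d
    q, *_ = np.linalg.lstsq(A, esp_values, rcond=None)
    return q

# Sanity check: generate the ESP of known charges at random grid points,
# then recover those charges from the potential alone.
atoms = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
true_q = np.array([-0.4, 0.4])
rng = np.random.default_rng(0)
grid = rng.normal(scale=4.0, size=(200, 3)) + np.array([0.6, 0.0, 0.0])
esp = (1.0 / np.linalg.norm(grid[:, None] - atoms[None], axis=2)) @ true_q
print(np.round(fit_esp_charges(atoms, grid, esp), 3))  # recovers the true charges
```

With noise-free data the recovery is exact; with real quantum-chemical potentials the fit is overdetermined and the residual measures how well point charges can represent the molecule.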
In many respects electrostatic interactions provide the biggest problems in computational studies of protein behaviour. By their nature they are long range and dependent on the properties of the surrounding medium (see discussion of dielectric effects). A simple rule of thumb is that the more highly charged a system, the harder it is to simulate - thus simulations of liquid argon can do a wonderful job, hydrocarbons are fairly easy, water becomes difficult and proteins more so. The limit is reached with nucleic acids like DNA, which in aqueous solution are complex salts (each base pair having a charge of minus two), with counter ions and solvent having important effects on structures. Usually some sort of "fudge" has to be made in simulations to keep DNA stable at all! The normal treatment for partial charges is to assume they are fixed. In practice the electric field caused by other atoms and molecules will polarize an atom, affecting its electron distribution and thus its partial charge. In turn the partial charge produces an electric field which affects neighbouring charges and thus fields. To be able to work out the partial charges a self-consistency cycle is normally used. The process of polarization has an energetic effect. In practice it is difficult to find adequate parameters to treat systems as complex as proteins (work has recently been concentrating on systems such as sodium ions in water). Induction effects can be shown to decay by an r^-6 relation, so they can normally be regarded as implicitly corrected for when the dispersion term is fitted, as discussed in the next section.

Dispersion

You will probably be aware that at low temperatures gases such as argon liquefy. The attractive interactions which cause this are called dispersion. Although they also occur between charged atoms they are usually overwhelmed by the stronger electrostatic terms, and so are normally only of importance for uncharged groups.
To really understand dispersion effects one must turn to 2nd-order perturbation theory in quantum mechanics. You will probably be happy to know that we shall not be doing this! Instead a simple physical picture will be given. Imagine that we have an atom of argon. It can be considered to be like a large spherical jelly with a golf ball embedded at the centre. The golf ball is the nucleus carrying a large positive charge and the jelly represents the clouds of electrons whizzing about this. At a point external to the atom the net average field will be zero, because the positively charged nucleus' field will be exactly balanced by the electron clouds. However, atoms vibrate (even at 0 K), so at any instant the cloud is likely to be slightly off centre. This disparity creates an "instantaneous dipole". Suppose that we have another argon atom close to the first. This atom will see the electric field resulting from the instantaneous dipole. This field will affect the jelly, inducing a dipole. The two dipoles attract one another - producing an attractive interaction. The dispersion interaction can be shown to vary according to the inverse sixth power of the distance between the two atoms: E_disp = -B_ij / r_ij^6. The factor B_ij depends on the nature of the pair of atoms interacting (in particular their polarizability). It is normal to parameterize the dispersion empirically using structural and energetic data from crystals of small molecules. It is not possible to use simple quantum chemical calculations to find parameters. This is because most quantum chemical calculations use the self-consistent field (SCF) approximation. In this each electron is solved for independently, keeping the other orbitals frozen (in a self-consistency cycle). This effectively means that electrons only experience a time-averaged picture of other electrons - so that dispersion cannot come into effect. More advanced methods in quantum chemistry introduce ways to tackle "electron correlation" to avoid this problem.
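The r^-6 decay has a practical consequence worth internalizing: doubling the separation weakens dispersion 64-fold, which is why it behaves almost like a contact interaction. A one-line sketch (the B_ij value here is an invented, order-of-magnitude number, not a fitted parameter):

```python
def dispersion_energy(r, b_ij=1200.0):
    """E = -B_ij / r**6, r in Angstrom; b_ij (kcal*A^6/mol) is illustrative."""
    return -b_ij / r**6

# Doubling the distance weakens the attraction by 2**6 = 64.
print(dispersion_energy(4.0) / dispersion_energy(8.0))
```

This steep fall-off is also what justifies the distance cutoffs routinely applied to van der Waals terms in macromolecular simulations.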
Repulsion terms

When two atoms are brought increasingly close together there is a large energetic cost as the orbitals start to overlap. In the limit that the atomic nuclei were coincident, the electrons of the two atoms would have to share the same orbital system. The Pauli exclusion principle states that no two electrons can share the same state, so that in effect half the electrons of the system would have to go into orbitals with an energy higher than the valence state. For this reason the repulsive core is sometimes termed a "Pauli exclusion interaction". The simplest and oldest way to represent the repulsive core of atoms is by using a "hard sphere" model. In this, atoms have a characteristic radius (the van der Waals radius, see below) and cannot overlap. This approach can be adopted computationally but is more commonly seen in physical models using plastic spheres, such as CPK models. This "all or nothing" approach is useful but rather crude - both solids and liquids are compressible. It also leads to the problem that it has a discontinuous first derivative: when two atoms come slightly too close together they experience an infinite force. More realistic is to represent the energy cost of close approach using a term which varies as r^-12: A_ij / r_ij^12. The repulsive term (it is always positive) drops away dramatically as the distance between the two atoms increases but conversely becomes very large at short distances - providing a "fuzzy" core. This is the approach normally adopted for protein potential energy functions. When more accuracy is required a two-parameter model is normally adopted: A_ij exp(-B_ij r_ij). This term, together with a representation of dispersion by a term in r_ij^-6, is commonly known as the "Buckingham potential". It provides a more realistic representation, particularly at short distances where a term in r^-12 is too steep.
It is not normally used for macromolecular applications as the increase in complexity (introduction of an additional parameter) and in computational cost (it takes a long time to calculate exponentials) is undesirable.

The Lennard-Jones potential and van der Waals Radii

The dispersion and repulsion terms discussed above are commonly grouped together into the Lennard-Jones or 6-12 potential:

    E(r) = A_ij / r^12 - B_ij / r^6

The equation can be rewritten in an equivalent, more instructive form (choosing the case of an interaction between two atoms of the same type):

    E(r) = E* [ (2R*/r)^12 - 2 (2R*/r)^6 ]

The minimum of the function is at r = 2R* and has an energy of minus E*. The distance R* is known as the van der Waals radius of an atom and E* is its van der Waals well depth. Typical values for these parameters (from the AMBER force field) are shown below.

    atom type       van der Waals radius (Å)   van der Waals well depth (kcal/mol)
    C (aliphatic)   1.85                       0.12
    O               1.60                       0.20
    H               1.00                       0.02
    N               1.75                       0.16
    P               2.10                       0.20
    S               2.00                       0.20

It is important to note that the Lennard-Jones interaction between uncharged atoms (such as CH3 groups) is less attractive than the total interaction between polar groups such as oxygens: for the latter, the contribution from electrostatics dominates the L-J terms. In cases where uncharged groups form compact structures, van der Waals energies are often cited as stabilizing the conformation. Although this is partly true, very often the major contribution comes rather from hydrophobic exclusion.

On to next course unit: The effect of solvent and hydrophobic interactions
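Using the AMBER-style parameters from the table above, the R*/E* form of the 6-12 potential can be evaluated directly; a quick numerical scan confirms the stated minimum at r = 2R* with depth -E*. The function name and the scanning grid are our own additions for illustration.

```python
import numpy as np

# AMBER-style parameters from the table: (R* in Angstrom, E* in kcal/mol).
VDW = {"C": (1.85, 0.12), "O": (1.60, 0.20), "H": (1.00, 0.02),
       "N": (1.75, 0.16), "P": (2.10, 0.20), "S": (2.00, 0.20)}

def lj_energy(r, atom="C"):
    """E(r) = E* [ (2R*/r)^12 - 2 (2R*/r)^6 ] for two atoms of the same type."""
    r_star, eps = VDW[atom]
    x = (2.0 * r_star) / r
    return eps * (x**12 - 2.0 * x**6)

# Scan a grid of separations: the minimum sits at r = 2R* with energy -E*.
r = np.linspace(3.0, 6.0, 3001)
e = lj_energy(r, "C")
print(round(float(r[np.argmin(e)]), 2), round(float(e.min()), 2))  # 3.7 -0.12
```

The same function also shows the asymmetry the text emphasizes: the wall inside 2R* rises far faster than the attractive tail decays outside it.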
Second Edition of the Textbook

The Front Cover Picture's Message to Principles of Physical Chemistry

A Comment by Hans Kuhn and Horst-Dieter Försterling

The picture symbolizes the importance of inventing simplifying models to treat complex phenomena. Such models are crucial in developing the modern topics considered in this book, and we want to emphasize their potential in future research and development. Modeling a randomly coiled molecule by a dumb-bell (the cover image) was suggested by Werner Kuhn to Hans Kuhn when he began to work for his doctorate, investigating decoiling in a flowing viscous solvent. HK was fascinated by the model's simplicity and by its great success in theoretically analyzing a broad variety of experiments in quantitative terms. This experience, and his postdoctoral work with Linus Pauling and Niels Bohr, reinforced this fascination with powerful simple models and was decisive for his life's work in research. Horst-Dieter Försterling and Hans Kuhn cooperated for many years in research and teaching, and from this collaboration grew the idea to write a new kind of textbook on physical chemistry - a textbook that would transmit the thinking process that is essential for performing research, a process that HK had learned from his great teachers: their intuitive approach to scientific questions, their ability to discern the essential from the nonessential, and their independent ways of thinking, having in mind the broad scope and coherence of physical chemistry. Another instructive model based on this way of thinking is the particle-in-a-box model. It is used to describe the basic behavior of electrons in molecules. In this book it is a touchstone illustrating the value and the limits of such models. It is fun to see how the box model arose. In Pauling's lab HK was trying to understand the color of polyenes by describing π-electrons as particles in a box, and he was greatly disappointed - it did not work.
Later, when applying the box model to cyanine dyes, he observed quantitative agreement with experiment. The reason he had failed with polyenes: an instability leading to an alternation between single and double bonds. Taking this instability into account, the box model had to be slightly improved, and then the fundamental difference between the π-electron distribution of a polyene and a cyanine was explained. The box model and its improvements developed into a theory of the light absorption of organic dyes. The particular properties of conducting polymers are based on the theoretical relation between bond alternation and equalization. Both items are discussed in this book, showing the power of a simple model. The box model for π-electron systems was approximated, considering only the component along the molecular chain, by the standing waves of a vibrating string. A branched π-electron system was then intuitively handled by the standing waves of a branched vibrating string. In searching for a deeper understanding of this simple model, Niels Bohr, exposed to the problem, gave the splendid advice: "solve the 3-dimensional Schrödinger equation for a branched box". Inventing simplifying models is also the basis of attempts to construct supramolecular machines, stimulated by the revolutionary change in molecular biology. The idea was expressed in the early 1960s that preparative chemistry should have a new goal: fabricating useful molecular machines by synthesizing different kinds of molecules in a planned manner to precisely interlock and interact purposefully. Simple prototypes were realized, as described in the book. This new paradigm in chemistry is developing strongly today under the names supramolecular chemistry, molecular electronics and systems chemistry. The origin of life is understood as an important new topic in physical chemistry, beginning with the search for a basic theoretical understanding of why and how that kind of process can take place.
The way to approach this fundamental problem is described in the book: inventing a sequence of reasonable steps, each driven by a very particular and continually changing environment, leading to systems with a life-like genetic apparatus. Each step constitutes a simple model, as symbolized by the front picture. The emergence of life is closely related to the above-mentioned paradigm of constructing supramolecular machines: the skill of the experimentalist is replaced in life's origin by very particular conditions given by chance in a very particular small location on the prebiotic earth.

The Back Cover to Principles of Physical Chemistry

"This admirable text provides a solid foundation in the fundamentals of physical chemistry including quantum mechanics and statistical mechanics/thermodynamics. The presentation assists the students in developing an intuitive understanding of the subjects as well as skill in quantitative manipulations. Particularly exciting is the treatment of larger molecular systems. With a firm but gentle hand, the student is led to several organized molecular assemblies including supramolecular systems and models of the origin of life. By learning of some of the most productive areas of current chemical research, the student may see the discipline as an active, young science in addition to its many accomplishments of earlier years. This text makes physical chemistry fun and demonstrates why so many find it a stimulating and rewarding profession." Professor Edel Wasserman, President (1999) of the American Chemical Society

Principles of Physical Chemistry takes readers from atoms to increasingly complex molecular assemblies, including natural and artificial supramolecular machines. Principles of Physical Chemistry presents a novel approach to physical chemistry that emphasizes the use of a few fundamental principles to quantitatively describe the nature of molecules and their assemblies.
It begins with atoms and molecules, using the electron-in-a-box model to illustrate the essential features of quantum mechanics and why atoms and molecules exist. Thermodynamics is not introduced in the classical manner, considering the first and second laws as postulates, but is approached by studying assemblies of molecules statistically. The authors proceed to molecular assemblies of increasing complexity, evolving from ideal gases to real gases and solutions, then to macromolecules and supramolecular machines, and ending with the search for the logical conditions and chemical requirements for physicochemical processes leading to life's origin, the emergence of matter that carries information. This text is ideal for both undergraduate and graduate courses in physical chemistry, providing a basis for understanding the nature of chemical processes in biology, chemistry, and engineering. Principles of Physical Chemistry examines several important topics that are often overlooked, yet are critical to a full understanding of the field, including: Throughout the text, actual experimental data are used to help readers understand the practical implications of theoretical developments. Simple physical models and examples are used to explain molecular and supramolecular systems and processes. The CD-ROM packaged with the text offers problems, exercises, interactive Mathcad exercises and data tables with search functions that enable readers to apply their newfound skills and knowledge to solving actual problems. In addition, the CD contains Foundations and Justifications, in which mathematical proofs and derivations are presented.

Contents to Principles of Physical Chemistry:
12:08 AM @HDE226868 I so wanted one with displays on the same surface you draw on, but those puppies start at US$1000. That's too steep for me. Especially just to learn if the tool is right for me. @dmckee I was thinking screen-less, and I was hoping for <$200. It ended up as a question on Hardware Recommendations beta. man I forgot how dark it gets when the weather gets cooler here Mine is an Intuos Medium, and I paid US$187 plus tax in a store. @dmckee There's an upload image button. Or just use imgur. either I should grow up and not be scared of the dark or I should not live alone :p 12:14 AM Gain some weight not sure how that will help People don't fuck with big guys Fancy that. It's been there all along and I just edit it out of my visual field because I never use it. I literally had to mask off sections of the screen before I saw it. Humans are weird. Palm tree physics 101 12:17 AM I've always used a tree and a bike to indicate two frames in relativity. Those are the tablet versions. I also did a surfer, but he's a little battered. Right after I bought the tablet the class I would have used it the most for got given to another teacher in a big rescheduling snafu we had this semester. So I don't have as many examples as I would have expected. I couldn't draw those with a mouse and whatever drawing program I would use. @dmckee Impressive I should get one so I can save all of my proofs and derivations For proofs and stuff like that you don't actually need the pressure sensitivity of an art tablet, and there are some other choices. Androids and windows tablets with styluses. Microsoft surface and Samsung Notes and things like that. Not that I'm getting much out of the pressure sensitivity yet, but I convinced myself the brush tool lets me butcher Japanese calligraphy much like I do in real life.
12:43 AM I'm not buying a whole tablet I have an iPad already 1:37 AM 1 hour later… 2:41 AM @FenderLesPaul GR talk tonight 2:52 AM why D: 3:06 AM @obe what do you want a GR talk i can do a GR talk with you @0celo7 That would be really cool. I have a phone now also. I'm actually in the humanities building now maybe I can steal a whiteboard do a skype video call now that would be epic You should sleep. uh, I don't have class until 9, mom and even then, that's just LA I should be asleep. Though I have data now. 3:12 AM wait don't you start class tomorrow For some reason it begins on monday. Even though other classes begin tomorrow. Reminds me, I had to do volunteer work for 3 days. I'm only on chapter 13. What order should I finish the rest of the book? @0celo7 Cellular data. brb getting shooed away Done being shooed away? yes, in my room now So are we discussing GR or not 3:39 AM Dude I have no GR to discuss, carroll ch3 remember? I think it would be cool to listen to you and FLP discuss. well he's a bum 3:50 AM I never understood how one measures a wave function. How do you do it? you don't Ok. The only property I know it has is $\psi^*\psi$ is the probability of finding it at a point at a particular time. What else characterizes a wave function? @0celo7 like for instance, if I can't measure $\psi$, why is it more fundamental than the probability itself? Like why not just use the probability distribution? 4:12 AM @StanShunpike well for one you can construct many different matter waves (1 particle schrodinger) with the same psi squared or rather, the same $\langle x|\psi\rangle\langle \psi|x\rangle$ @0celo7 that would be pretty cool, actually! @NeuroFuzzy Uh, I hadn't thought of that! That's a good point @StanShunpike On a related note I was actually looking for reviews of this answer physics.stackexchange.com/questions/206269/… @StanShunpike I swear to Master D.J. Trump that we've discussed this before @0celo7 We have.
I just started playing Splinter Cell: Double Agent 4:24 AM What is that? old school stealth games are crazy no tutorial, I'm sneaking into some base or something well night y'all @NeuroFuzzy the issue is that I'm pretty sure using my front-facing laptop cam will show everything backwards I wonder if I can stream footage from my phone's back cam something to figure out tomorrow 4:44 AM @0celo7 or to mirror it! 5:05 AM @0celo7 oh I thought it was Friday night can we do it Friday night if you're free? 5:39 AM @dmckee Which one is Alice and which one is Bob 5:51 AM Q: Why is @_________ deleting his answers citing 'in order to comply to the site policy'? user36790: While I was waiting for answer for my newly posted question, I noticed one question: A confusion regarding an example in The Feynman Lectures; there @user posted an excellent answer; I cherished his answer especially for the amazing pics he used. But there it was written: Answer deleted by __... 6:45 AM there needs to be more fields using Alice and Bob Currently it's just QM, relativity and computer science 7:29 AM @StanShunpike The wavefunction is a mathematical convenience, much more than a real physical object. Let's say that due to the mathematical formulation of quantum theories, hilbert spaces emerge naturally (and therefore wavefunctions, that are associated to quantum states). A state on the other hand encodes all the information of a quantum system about measurements. To operationally "measure"/identify a state is a quite difficult task in my opinion.
In fact, a first problem is the fact that a measurement modifies the state, hence you need to be able to prepare a lot of identical states to test A second problem is that either you are lucky enough and your state is an eigenvector of some operator with multiplicity one (so in principle measuring many times such observable would help you identify unambiguously the state, for you obtain the same measurement over and over, and such value is associated to a single state) or you need to test many observables many times to identify it; in principle, you would need to test all the observables (that are usually infinite) an infinite number of times each, so it goes without saying that it is not an easy task. As far as I know there is people who disputes (in research work, not on forums) the structure of some a priori well-established type of states for example that the state of light produced by lasing is not a coherent state; but a mixed states with certain properties 8:11 AM @yuggib Plenty of QM formalisms don't have wavefunctions Well, maybe not plenty But at least 1 has no equivalent Does stochastic QM have a wavefunction? 8:25 AM I know; but everyone of them has the Gel'fand-Neumark-Segal construction; for the observables are always assumed to form a $*$-algebra of (maybe) non-commutative objects. Therefore, even if the theory does not need wavefunctions (in the broad sense of Hilbert space and vectors), you can always construct such a representation @Slereah not even QM in general, just quantum computation, and then because that's mostly computer science Then again QM tends not to involve people very much :-P Well, it is also used for mixed states in general it may not be the better/most convenient one, but it is always there.
The point is that to not admit wavefunctions you have to radically change the notion of observables Well they are all equivalent to some degree in the end @ChrisWhite I couldn't let this go: faculty get more interaction, but grad students are creepier per capita, so... I'd have to go with grad students 8:28 AM It's not that incredible that you can recover one from the other well, keep in mind that even states that are not pure can be represented as Hilbert space vectors in a suitable GNS construction Well, in principle the meaning of formulating a different theory would be to predict something more than the old theory if else there is no need for the new one, and becomes just a matter of interpretation Well yes but the old theory predicts everything that happens So far not much need for a different thing Can you recover wavefunctions from the quantum logic formalism? I am not so familiar with quantum logic; but indeed you can recover them from quantum set theory I suspect that quantum logic is different though not an expert anyways Neither am I It just seems to be pretty different from most formalism Basically it redefine basic propositional logic but the scope is to define a new logic inspired by the quantum theory, or the contrary? 8:33 AM first one if you change so radically the point of view, it becomes very tough to recover the usual mathematical results @DavidZ haha I'll keep this in mind next time someone asks me to spend time tutoring undergrads that are based on the ZFC theory of first-order logic Apparently quantum logic couldn't do much and to expand it, you have to use quantum filtering In quantum probability, the Belavkin equation, also known as Belavkin-Schrödinger equation, quantum filtering equation, stochastic master equation, is a quantum stochastic differential equation describing the dynamics of a quantum system undergoing observation in continuous time. It was derived and henceforth studied by Viacheslav Belavkin in 1988.
Unlike the Schrödinger equation, which describes the deterministic evolution of the wavefunction of a closed system (without interaction), the Belavkin equation describes the stochastic evolution of a random wavefunction of an open quantum system interacting... Which looks pretty similar to wavefunctions :p @ChrisWhite lol honestly, the undergrads creep on each other way more than anything else like any college 8:36 AM in quantum set theory you have a ZFC transfer principle, so you can borrow (to some extent) ZFC assertions into quantum set theory anyways, occam's razor would suggest that such a radical change is a bit far fetched, given the success of usual quantum theory and ZFC in math if it is just for computational convenience then it may be ok but restricted to that domain Well everything is for computational convenience, in the end ahhaha that may be true long time without JD...I am bored, and I need some divertissement :-D Time has 4 corners 9:46 AM Phew. This could go down as the yuck username for the ages: 10:08 AM 10:41 AM I kinda don't like the whole explanation of Hawking radiation via split pairs of virtual particles It's not that helpful 11:18 AM too accurate "One day Shizuo Kakutani was teaching a class at Yale. He wrote down a lemma on the blackboard and announced that the proof was obvious. One student timidly raised his hand and said that it wasn't obvious to him. Could Kakutani explain? After several moments' thought, Kakutani realized that he could not himself prove the lemma. He apologized, and said that he would report back at their next class meeting. After class, Kakutani went straight to his office. He labored for quite a time and found that he could not prove the pesky lemma. He skipped lunch and went to the library to track down the lemma. After much work, he finally found the original paper. The lemma was stated clearly and succinctly. For the proof, the author had written, 'Exercise for the reader.' The author of this 1941 paper was Kakutani." 
11:34 AM "You've earned the "Nice Question" badge (Question score of 10 or more) for "Highest symmetric non-maximally symmetric spacetime"." @FenderLesPaul yeah but I had time yesterday 12:02 PM @0celo7 : no the most complicated and technical books on the market aren't popscience. But some of the stuff you believe is popscience. @Slereah : the "given" explanation for Hawking radiation is pseudoscience nonsense. Virtual particles are field quanta, not real particles that pop into existence like magic. In addition, there are no negative-energy particles. What there is, is near-infinite gravitational time dilation, which Hawking radiation totally ignores. hush duffield We're talking real science. 12:23 PM Q: Should there be a way to flag comments Matt SWhen experienced users make the first comment on a "bad" question, they are often condescending. This is fair enough, if the question is bad. However, this discourages other, less experienced users with less reputation, from attempting to answer the question. Particularly if the question is an in... @JohnDuffield you do know that's the popsci definition of Hawking radiation? Indeed, Hawking radiation can be done within the framework of AQFT, which does not refer to particles at all. It seems AQ has declared jihad on ISIS again. Not very original. So anyway If photons are for seeing And phonons are for hearing Where are the smellons And the tastons @0celo7 I don't know about Planetscape, but Planescape is very good. 12:39 PM >We're talking real science. [...] 10 minutes later: > If photons are for seeing > And phonons are for hearing > Where are the smellons > And the tastons strange definition of real science :-O @Slereah You forgot about feelons. @ACuriousMind like the cryon; not to be confused with the crayon @ACuriousMind mm. 
Typo @ACuriousMind Planescape is best Particles are easy to remember But then you have the laundry list of pseudo particles Also fuck mesons and hadrons, way too many of them Like half of the PDG book is mesons and hadrons 12:44 PM @Slereah Well, uh, that's what it's there for, isn't it? I suppose But still I want to know more about electrons Not about the p48589c meson That only appeared once in 1963 During a full moon @Slereah The word order is off there :D Shouldn't there be a small number of mesons, really Aren't most of them just superpositions of basic mesons Is the game about particles Wrong conversation Planescape is the best game It is about 12:49 PM What can change the nature of a man? A hot enough woman I didn't want an answer to the question, this is the question that appears over and over in the game's story Basically you are an immortal dude But every time you are killed, you lose some of your memories @0celo7 : How about if I ask a question about how Hawking radiation really works, and you explain it? You can tell us all how quantum fluctuations are immune from gravitational time dilation, and how the black hole isn't really black. And isn't really a black hole. And all the other stuff you've got hard mathematical evidence for. So you kinda have to reconstruct what your life was 12:51 PM @ACuriousMind what are quantum fluctuations @ACuriousMind : isn't there some gameboy website where you can ask questions like this? Isn't that your area of expertise @0celo7 I've never seen an explanation of that phrase that wasn't either nonsense or trivial. @JohnDuffield are you ready to finally back up your electron Dirac belt idea The "best" interpretation of the word "quantum fluctuation" I've found is that there is a standard deviation of observables that is not caused by classical (i.e. statistical/thermal) principles. 
12:53 PM "Quantum funkiness" Like how quantum fluctuation of the stress energy tensor is <T²> - <T>² But that's a trivial consequence of the non-commutativity of observables/the uncertainty principle, so it isn't really mysterious. So you're telling me the vacuum is not a boiling sea of particles popping in and out of experience @0celo7 I think I've also told you that before, so yes^^ @Slereah Well, oddly, not the times when you are killed during the game (this always irked me) :P 12:56 PM @0celo7 : Ask the question. Meanwhile it's like I said. We make electrons in pair production out of electromagnetic waves. We can diffract electrons. We describe electrons with the Dirac equation, which is a wave equation. We know that in atomic orbitals electrons "exist as standing waves", and when we annihilate that electron we get an electromagnetic wave again. And I didn't make up Dirac's belt. A guy called Dirac did that. I didn't make up the wave nature of matter either. @ACuriousMind that was rhetorical @JohnDuffield asking a question about your pet theory is by definition non mainstream I'd probably VTC it myself I know ACM would and he'd enjoy it too @0celo7 Except for your pet's theory, right? @0celo7 : I am. There are no particles popping in and out of existence. That's a lies-to-children non-explanation. Or as Slereah might say, it's popscience crap for kids. 
@ACuriousMind Depends how badly you are killed, I think Plus you can die for real during gameplay Though it is pretty rare 1:01 PM Should I play this game You should Is it not too old I can only remember two occasions where you can die for real Old games are too hard for me And sometimes too boring If you piss off the Lady of Pain and if you piss off the giant smith guy I would say it is pretty good 1:02 PM Spoiler alert She is called the "Lady of Pain" On principle avoid pissing her off For some reason splinter cell double agent is locked at 720p...it's eye cancer @0celo7 It's a different kind of old than Morrowind. The gameplay is not that fun, it's mostly about the story and the world, which is told through giant chunks of unvoiced text. yeah, the fighting system is nothing special Know what else is a great story but a poor game? I have no mouth and I must scream @ACuriousMind I think the morrowind gameplay is sleep inducing 1:04 PM Great story, great atmosphere, reallly poor gameplay How long have you played Morrowind @0celo7 : re asking a question about your pet theory is by definition non mainstream. None of what I said above is my pet theory. It's all mainstream. No doubt you'll be dismissing Dirac like you dismissed Einstein, and generally trashing this website with your trollery. I stopped playing morrowind because the walking speed is too slow Yeah that's kind of a problem of morrowind It starts off pretty slow Early combat is boring You miss most of your hits @0celo7 For that, I would forgive you to just set your speed to 100 or something @ACuriousMind how many mainstream authors think electrons are photons going around a loop? @Slereah 1:05 PM It does get better after a while, though @0celo7 Exactly 0. 
I'm not even sure what that means I seem to have missed all the books that mention that I know plenty of weird theories about electrons, but none of those are that @JohnDuffield sorry, it's not mainstream 1:06 PM Very early atomic physics had the idea that electrons were rings around the atom There was also the whole electron as spacetime defects And I think my trashing is pretty beneficial Electrons as black holes Hm, what else was there which one Only one electron? 1:07 PM Oh yes @0celo7 A variant of Feynman-Wheeler where it is one electron going back and forth through time :D It never got really made into a real theory, but some wondered if there was only one electron in the universe @Slereah : electrons aren't black holes. You can diffract electrons. In atomic orbitals electrons exist as standing waves. Sometimes, it emits a photon, and goes back in time as a positron 1:07 PM Yeah @Slereah you know nothin about electron and black holes Well I didn't say the theory panned out @Slereah : that's Wheeler for you. It was just an idea put forward Well Einstein did put the idea forward of electrons as wormholes @JohnDuffield No, their position probability distribution is the square of something that might be interpreted as a standing wave. Wheeler didn't know the difference between curved spacetime and curved space. If he had, he wouldn't have called them geons. He would have called them... 1:10 PM Can we change John Duffield I think ours broke @ACuriousMind : that's cargo-cult woo. It's quantum field theory. Not quantum point-particle theory. He is on repeat @JohnDuffield That is literally what you obtain from solving the Dirac or Schrödinger equations. I think the notion of "it's a probability amplitude" is about as old as quantum theory itself Older than Dirac certainly And that's quantum mechanics. In quantum field theory, you can't have your standing waves or such, because you don't describe electrons or other things as solutions to the Dirac equation there. 
1:13 PM Well you can have waves still But they are @Slereah I wouldn't call things that depend on field configurations instead of space or spacetime "waves", really :P Well I wouldn't call something that isn't made of water "waves", but here we are! Bah, anything you don't know about you think is non-mainstream, and yet you believe hook line and sinker in woo peddled by popscience quacks which flatly contradicts not just Einstein/Maxwell/Dirac/etc, but the patent blatant evidence of electron diffraction etc. What planet are you guys on? Oh, and have you ever seen this movie? @Slereah No waves in oil for you? 1:16 PM @Slereah No, I meant the delicious stuff you get when smashing olives Do not shake your olive oil please But perhaps olive oil also has sinful connotations for you... FFS. Talk about chatroom trolls. I volunteer to be a moderator. @JohnDuffield What's up? People are allowed to chat here, and that's what this is. 1:26 PM Q: Can jet fuel melt steel beams? Max RuuliCommon sense suggests that steel beams should not yield under burning jet fuel without presence of other substances that produce very high temperatures when burning, such as thermite. So can jet fuel melt steel beams? How timely 1:39 PM I think I was chat banned. It said "room is read only" were you What did I do? did you deny Einstein and the Evidence @0celo7 This time around, I didn't see anything banworthy. But there is one removed message from you, I just can't remember what it said This time around? Are you saying I've been rightfully banned in the past? I have to seriously disagree with that...I think. Although there might have been that one time where some idiot starred something obscene or something. 1:42 PM @0celo7 Well, in the other cases, I at least knew for what you were banned. This time, I have no clue I blame the astronomer for that one. @0celo7 Not in all cases. This is my third ban. @0celo7 Yeah, I'm thinking of that I can't remember what it was. 
I guess it shows how much time on 4chan. I wouldn't even think of flagging something "inappropriate" 1:44 PM Did you talk about the Tits group perhaps Oh ffs it was my lady of pain comment Who the hell flagged that lol what what did you dooo How the hell was that even flag worthy I said "I like my ladies to give me a bit of you know what" I swear to god if I get banned for that again 1:46 PM You know what rhymes with train How do chat bans work I can't believe someone actually flagged that 1:48 PM @0celo7 You're banned from chatting. ban @0celo7 to demonstrate @ACuriousMind thanks Do multiple people have to flag? Ah, how you get them? I think they're either dealt out by hand by a moderator or if flags on your posts are deemed valid (either by a mod or by enough 10k users (2, probably)) @ACuriousMind you seriously thought I wanted to know what "chat ban" means I have plenty of experience Nah, one person flags them, and then all 10k users get a blue thingy where they can look at the flagged post and deem the flag "valid", "invalid" or "not sure" @0celo7 No, I was just messing with you :D 1:50 PM Did you say valid I didn't even see a flag Who on earth said valid?? But I was away from my PC for a while vacuuming, so I probably missed it. That was totally not ban worthy, was it?? I'd not have thought so, but we'll never find out what exactly the thought process here was. 1:56 PM @ACuriousMind well as long as you believe in me 2:18 PM Q: What is the Cleanup badge awarded for? AniketIt is written in the 'Badges' page that the 'Cleanup' badge is awarded for the "first rollback". What does this mean? I could not understand. Can anyone help? @0celo7 tsk, tsk, tsk :P 2:36 PM I need a new avatar. Get one What should it be? I was considering that 2:44 PM I need to figure out if I'm in the Orange or white section I think we're checkering a picture of Einstein dressed as Sherlock Holmes Elementary my dear Do it. 
Who is the shoop master here Maybe @dmckee could draw it on his fancy tablet The redskin logo is cool. Oh I'd probably get banned for something so offensive 2:48 PM how about an ocelot That's what it is right now. The hell I'm paranoid about bans now This is sad Should I flag this as not an answer perhaps : physics.stackexchange.com/questions/190222/… @Slereah It has been flagged as NAA at least twice I think It also already has two delete votes on it, only one more 20k user required. 2:59 PM That is good.
Front. Phys., 15 September 2016
Sec. Interdisciplinary Physics

On the Role of Fluctuations in the Modeling of Complex Systems

Michel Droz1* and Andrzej Pȩkalski2
• 1Department of Theoretical Physics, University of Geneva, Geneva, Switzerland
• 2Department of Physics and Astronomy, Institute of Theoretical Physics, University of Wrocław, Wrocław, Poland

The study of models is ubiquitous in sciences like physics, chemistry, ecology, biology, or sociology. Models are used to explain experimental facts or to make new predictions. For any system, one can distinguish several levels of description. In the simplest, mean-field-like description the dynamics is described in terms of spatially averaged quantities, while in a microscopic approach local properties are taken into account and local fluctuations of the relevant variables are present. The properties predicted by these two approaches may be drastically different. In a large body of research literature concerning complex systems this problem is often overlooked, and simple mean-field-like approximations are used without asking whether the corresponding predictions are robust. The goal of this paper is twofold: first, to illustrate the importance of fluctuations in a self-contained and pedagogical way, by revisiting two different classes of problems where thorough investigations have been conducted (equilibrium and non-equilibrium statistical physics). Second, we present our original research on the dynamics of populations of annual plants which compete among themselves for a single resource (water) through a stochastic dynamics. Depending on the observable considered, the mean-field-like and microscopic approaches agree or totally disagree. There is no general criterion allowing one to decide a priori when the two approaches will agree.

1. Introduction

One generic type of question a scientist has to face is to understand and explain the behavior of a given system found in nature. Problems of this type occur in different fields of physics, chemistry, biology, and ecology, but also in economics and sociology. Often the problem is of an interdisciplinary nature and has a complex character. A frequent approach to such a situation is to introduce a model describing the properties of the system using a set of variables considered to be relevant. However, it is not always obvious which variables are relevant, because there are several levels of description of reality. As Einstein wrote, "Everything should be made as simple as possible, but not simpler" [1]. Thus, one would like to be able to decide which is the simplest, yet acceptable, model before starting a detailed investigation. To illustrate this point, let us consider the case of a simple fluid. We would like to propose a model explaining and making predictions concerning the flow of such a liquid in a particular situation (boundary conditions, external forces…). How should such a fluid be modeled? At the macroscopic level, the relevant observables are the so-called hydrodynamic variables, namely the local density, the velocity field, and the pressure [2]. A simple approach is to write hydrodynamic equations based on the laws of classical mechanics and the conservation laws of the problem [3]. One ends up with partial differential equations (the continuity and Navier-Stokes equations) which can in some simple cases be solved analytically or, more generally, integrated numerically. But at a different level, one could argue that a fluid is nothing but a family of interacting molecules, and that one knows how these molecules interact with each other. This may be a better description of reality than the previous one. 
It is a bottom-up approach, and the problem is then how to extract the properties of the hydrodynamic variables from this microscopic modeling. One way is to develop approximate analytical methods, like the Boltzmann formalism [2]; another way is to integrate numerically the microscopic equations of motion for the interacting molecules and average out to obtain the hydrodynamic fields. However, this could be a tremendous task in view of the complex form of the inter-molecular interactions. This leads us to ask the following question: how important are the details of the molecular interactions in determining the generic form of the equations of motion for the hydrodynamic variables? Or, formulated differently, is it possible to define a simpler model in which the molecules are replaced by fictitious agents interacting in a simple way, but not too simple, which respects the basic conservation laws of the system? This is indeed possible and is realized by the cellular automata or lattice gas approaches now widely used in fluid mechanics [4]. The generic form of the equations of motion for the hydrodynamic variables is not affected by this simplification; however, the values of the transport coefficients, like the viscosity, depend on the particular approximation used. It turns out that the three different approaches sketched above fortunately lead to the same conclusions for the dynamics of our fluid. However, depending on the particular problem one has to solve, the macroscopic approach and the lattice-gas one could have drastically different costs. Sometimes, however, predictions provided by different levels of description can be totally different. Let us consider two limiting cases: the so-called mean-field approximation, where the dynamics is described in terms of spatially averaged quantities, and the microscopic approach, in which the local properties of the system are taken into account. 
These two cases differ by the absence or presence of local fluctuations of the relevant variables. The properties predicted by these two approaches may be drastically different. It is true that mean-field-like approximations are often easy to perform, while microscopic calculations can be very complicated for various reasons, one being the large number of control parameters entering the model. Thus, the cost of a microscopic approach may be tremendously higher than the cost of a mean-field-like approach, and it is tempting to restrict oneself to the simpler approach. However, in some cases the fluctuations play a crucial role. We realized that a large body of research papers, mainly in the biological or ecological fields, are based on simple mean-field-like approximations and that, when discussing with the authors, they often do not understand why fluctuations should enter into their models. Thus, the goal of this paper is twofold: first, to illustrate the importance of fluctuations in a self-contained and pedagogical way, by revisiting two different classes of problems where thorough investigations of the role played by fluctuations have been conducted. The first is equilibrium statistical physics (Section 2). To be explicit, we shall discuss the so-called Ising model, describing the equilibrium phase transition between the paramagnetic and ferromagnetic phases of a spin system [5]. This model is a paradigm of equilibrium statistical mechanics, and more than 40 thousand research papers have been devoted to it. This will illustrate, in a pedagogical way, what effects fluctuations can have, and the possibility of establishing a criterion to decide on the relevance of fluctuations. The second domain we consider is non-equilibrium statistical physics (Section 3), where we shall concentrate on a class of problems having numerous applications in physical chemistry [6], biology [7], and sociology [8], namely reaction-diffusion systems. 
New difficulties related to fluctuations in the initial conditions will be discussed, and new methodological tools introduced to understand the role played by fluctuations. No criterion concerning the validity of mean-field-like approximations can be established. In Section 4, as a new example of the role of the approximation adopted, we present our original research on the dynamics of a population of annual plants which compete among themselves for just one resource (water) through a stochastic dynamics [9]. During each year, they survive or die according to the availability of the resource and their tolerance to lack or surplus of water. At the end of the year, the surviving plants produce seeds which are dispersed in their neighborhood, and all plants die. At the beginning of the next year, the seeds germinate and the yearly cycle starts again. At the crudest level, one considers the average plant density with no spatial dependence; this is the mean-field level. At the opposite end, the dynamics is described by an Individually Based Model (IBM) [10, 11], in which the dynamics of each single plant is followed separately. Which is the best description? Depending on the observable considered, sometimes the two approaches agree, sometimes they totally disagree. It is difficult in this case to formulate a first-principles criterion telling whether the much simpler mean-field approximation is good enough, or whether one has to turn to the IBM. To the best of our knowledge there is no such criterion in the biological literature, nor a detailed study of the problem. Finally, some conclusions are drawn in Section 5.

2. Equilibrium Statistical Physics

Some materials when cooled down exhibit a phase transition from a paramagnetic phase at high temperature to a ferromagnetic phase at low temperature. This transition takes place at a well-defined temperature, the critical or Curie temperature Tc, which depends on the material. 
The low temperature phase is characterized by the presence of a spontaneous magnetization per particle m(T) which depends on the temperature. At zero temperature T = 0, the magnetization is maximal and decreases with increasing T. If m(T) → 0 continuously as T → Tc, one speaks of a "continuous phase transition," while if m(T) → m0 ≠ 0 as T → Tc from below, while m(T) = 0 for T > Tc, one speaks of a "discontinuous transition." The continuous phase transition is then associated with a spontaneous symmetry breaking, and the spontaneous magnetization is called the "order parameter" of the transition. Besides the magnetization m(T), several other physical quantities, like for example the magnetic susceptibility or the specific heat, exhibit near Tc a power-law behavior of the type

X(t) \sim t^{x},    (1)

where t = (T - T_c)/T_c is the so-called reduced temperature and x is the critical exponent associated with the observable X. Let us consider the simplest case, in which the magnetization can be aligned only along a particular direction that we may call z. Thus, the magnetization is a scalar quantity. How can such a system be modeled? A simple way is to suppose that at a mesoscopic level the atoms constituting the material carry a classical spin variable s_i = ±1, where i denotes the position of the spin and corresponds, for example, to a site on a d-dimensional hypercubic lattice. The spins interact among themselves and with a heat bath at temperature T. The equilibrium properties of the system can then be described in the formalism of equilibrium statistical physics, and particularly by the canonical ensemble. The Hamiltonian, describing the energy of the spin system, is

H = -\sum_{i,j} J_{i,j}\, s_i s_j - h \sum_i s_i,    (2)

where the sums run over all the sites 1, …, N of the lattice, and h is an external magnetic field. This model is called the Ising model [12]. In the simplest case, when only nearest neighbors interact, J_{i,j} = J if i and j are nearest neighbors and zero otherwise. 
Moreover, if J > 0 and when h = 0 and T = 0, all the spins are aligned in the same direction. The ground states are then ferromagnetic, and one speaks of ferromagnetic interactions. The physical observables are directly related to the canonical partition function

Z(T,N) = \mathrm{Tr}\,[\exp(-\beta H)],    (3)

where Tr means the sum over the 2^N possible spin configurations of the system and \beta = (k_B T)^{-1}, where k_B is the Boltzmann constant. From Z, all thermodynamic quantities can be obtained. The free energy is

F(T,N) = -k_B T \log Z(T,N),    (4)

the total magnetization is

M(T,h) = \frac{\partial}{\partial(\beta h)} \log Z(T,N)    (5)

and the magnetization per spin in the thermodynamic limit is thus

m(T,h) = \lim_{N\to\infty} N^{-1} M(T,h).    (6)

Finally, the zero-field specific heat is

C(T) = -T\, \frac{\partial^2 f(T)}{\partial T^2},    (7)

where f(T) is the free energy per spin. The difficulty is to obtain an analytic solution for the partition function in the thermodynamic limit N → ∞. For a one-dimensional system, the transfer matrix method allows one to solve this problem easily. As shown by Ising in 1925, in this case the critical temperature is zero, i.e., there is no phase transition [12]. For dimensions larger than 1, the computation of Z is a formidable task [13]. In d = 2, with h = 0 and nearest-neighbor interactions, Onsager in a seminal paper [14] showed that there is a transition at the Tc given by sinh(2J/(k_B T_c)) = 1. Moreover, this transition is continuous and the spontaneous magnetization behaves, near the critical point, as

m(T) = \begin{cases} \left[1 - (\sinh 2\nu)^{-4}\right]^{1/8} \sim |t|^{1/8} & \text{if } t < 0, \\ 0 & \text{if } t > 0, \end{cases}    (8)

where ν = J/(k_B T). The specific heat C(T) has a symmetric logarithmic divergence at the critical point ν_c, i.e.,

C(T) \sim -\log(|\nu - \nu_c|).    (9)

The behavior of the physical quantities in the vicinity of the critical point has the form of power laws, and it is usual to write

m(T) \sim \begin{cases} |t|^{\beta} & \text{if } t < 0, \\ 0 & \text{if } t > 0, \end{cases}    (10)

C(T) \sim |t|^{-\alpha}    (11)

for t → 0. 
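The one-dimensional transfer-matrix solution mentioned above is easy to verify numerically: for a small ring of spins, the partition function of Equation (3) can be evaluated by brute-force enumeration and compared with the transfer-matrix eigenvalues. The following sketch is ours, not from the paper; function names and parameter values are illustrative:

```python
import itertools
import math

def Z_brute(N, beta, J, h=0.0):
    """Brute-force partition function of a 1D Ising ring:
    sum of exp(-beta*H) over all 2^N spin configurations (periodic b.c.)."""
    Z = 0.0
    for s in itertools.product((-1, 1), repeat=N):
        H = -J * sum(s[i] * s[(i + 1) % N] for i in range(N))
        H -= h * sum(s)
        Z += math.exp(-beta * H)
    return Z

def Z_transfer(N, beta, J):
    """Transfer-matrix result for h = 0: Z = lam_+^N + lam_-^N, where
    lam_+ = 2*cosh(beta*J) and lam_- = 2*sinh(beta*J) are the eigenvalues
    of the 2x2 transfer matrix."""
    lam_p = 2.0 * math.cosh(beta * J)
    lam_m = 2.0 * math.sinh(beta * J)
    return lam_p**N + lam_m**N

N, beta, J = 8, 0.7, 1.0
print(Z_brute(N, beta, J), Z_transfer(N, beta, J))
```

For h = 0 the two agree to machine precision; the brute-force sum is of course limited to small N, since it visits all 2^N configurations.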
The exponents β and α are called the critical exponents. (Note that the symbol β has two different meanings: the inverse temperature, as in Equation (3), and the critical exponent of the magnetization, as in Equation (10). This is the standard notation for both quantities and there should be no ambiguity when reading the text.) For d = 2, the exponents have the values α = 0 and β = 1/8, which are in agreement with experimental data. Moreover, the spatial decay of the two-spin correlation function is exponential, with a characteristic length ξ(t) which diverges as a power law of the reduced temperature as T → Tc. There are still no analytical solutions for d > 2. However, different numerical approaches, like the Monte-Carlo method [15], show that there is a continuous transition for all dimensions d ≥ 2 in zero external field. The main reason why it is so difficult to compute the canonical partition function Z(T, N) is the presence of the quadratic terms s_i s_j. A simpler way to model the ferromagnetic interactions among the spins consists in assuming that the spin s_i feels the influence of the neighboring spins through an effective or average field. This is the basic idea of the so-called mean-field or Curie-Weiss theory [16]. Technically it can be realized as follows. Let us write each spin variable s_i as an ensemble average part 〈s_i〉 plus a fluctuating part δs_i. Then the quadratic term can be written as

s_i s_j = \langle s_i \rangle \langle s_j \rangle + \langle s_i \rangle\, \delta s_j + \langle s_j \rangle\, \delta s_i + \delta s_i\, \delta s_j.    (12)

The mean-field approximation consists in neglecting the last term, quadratic in the fluctuations. Moreover, 〈s_i〉 = m by translational invariance. Thus, the initial Ising Hamiltonian becomes very simple, H → H_mf, with

H_{mf} = \frac{1}{2} N z J m^2 - h_{mf} \sum_{i=1}^{N} s_i,    (13)

where z is the coordination number of the lattice (z = 2d for a d-dimensional hypercubic lattice) and

h_{mf} = Jzm + h.    (14)

One is left with a very simple problem, as the mean-field Hamiltonian is a sum of one-spin Hamiltonians. 
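Before carrying the mean-field calculation further, we note that the Monte-Carlo method mentioned above can be sketched as a minimal single-spin-flip Metropolis simulation of the 2-d Ising model. The implementation below is ours; lattice size, temperatures, and sweep counts are arbitrary illustrative choices:

```python
import math
import random

def metropolis_ising2d(L, T, sweeps, seed=0):
    """Metropolis sampling of the 2D Ising model (J = 1, h = 0, k_B = 1)
    on an L x L lattice with periodic boundaries. Returns the average
    magnetization per spin over the second half of the run."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]          # ordered initial condition
    m_acc, n_acc = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb          # energy cost of flipping s_ij
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
        if sweep >= sweeps // 2:                 # discard first half as burn-in
            m_acc += sum(map(sum, spins)) / (L * L)
            n_acc += 1
    return m_acc / n_acc

# Well below and well above the Onsager T_c ≈ 2.269:
print(metropolis_ising2d(16, 1.5, 400), metropolis_ising2d(16, 4.0, 400))
```

Below T_c the magnetization per spin stays close to 1, above T_c it fluctuates around 0. Near T_c this single-spin-flip dynamics suffers from critical slowing down, and cluster algorithms are preferred; the sketch only serves to exhibit the two phases.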
The magnetization per spin obeys the relation

m(T) = \tanh\!\left[(k_B T)^{-1}\,(Jzm(T) + h)\right].    (15)

In zero external field, the spontaneous magnetization is zero above the critical temperature T_c^{mf} = Jz/k_B and

m(T) \sim \begin{cases} |t|^{\beta} & \text{if } t < 0, \\ 0 & \text{if } t > 0, \end{cases}    (16)

where now t = (T - T_c^{mf})/T_c^{mf} and β = 1/2. The specific heat (in zero external field) exhibits a discontinuity at T_c^{mf}:

C(T) = \begin{cases} \tfrac{3}{2} k_B & \text{if } T = \lim_{\epsilon \to 0} T_c^{mf} - \epsilon, \\ 0 & \text{if } T > T_c^{mf}. \end{cases}    (17)

One notes that the predictions of the mean-field approach are quite different from the ones obtained when the trace over all fluctuations has been accounted for. For example, the mean-field approach predicts a phase transition with a finite critical temperature in all dimensions, while we know that in one dimension Tc = 0. In two dimensions, the behavior of the spontaneous magnetization is quite different (different critical exponents), and the specific heat has a logarithmic divergence at Tc in one case and a discontinuity in the other. Thus, clearly, the mean-field model, which does not take fluctuations into account, is too simple a model to describe reality correctly. However, in view of its great simplicity, it would be useful to know whether its predictions are still valid under some conditions. We shall address this problem later. Before coming to this point, we would like to review a different approach which allows one to obtain the mean-field results starting from a microscopic model without neglecting the fluctuations in a brute-force manner. This is the Ising model with long-range interactions [17]. This approach is not well known outside the community of statistical physicists and, as we shall see later, turns out to be quite useful when applied to ecological or biological problems in which the dynamics involves pairs of sites separated by arbitrarily large distances. Let us return to the Ising Hamiltonian (see Equation 2), but now we suppose that J_{i,j} couples all pairs of spins. 
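The self-consistency relation (15) has no closed-form solution in the ordered phase, but it is easily solved by fixed-point iteration. A minimal sketch (ours, with illustrative parameter values; we set k_B = 1):

```python
import math

def mf_magnetization(T, J=1.0, z=4, h=0.0, tol=1e-12, max_iter=10000):
    """Solve the mean-field self-consistency m = tanh[(J*z*m + h)/T]
    (k_B = 1) by damped fixed-point iteration, starting from m = 1."""
    m = 1.0
    for _ in range(max_iter):
        m_new = math.tanh((J * z * m + h) / T)
        if abs(m_new - m) < tol:
            break
        m = 0.5 * m + 0.5 * m_new   # damping stabilizes the iteration
    return m_new

Tc = 4.0  # mean-field critical temperature T_c^{mf} = J*z/k_B for z = 4
print(mf_magnetization(0.5 * Tc))   # ordered phase: m > 0
print(mf_magnetization(2.0 * Tc))   # disordered phase: m -> 0
```

Below T_c^{mf} the iteration converges to the nonzero root of Equation (15); above it, only m = 0 survives, in line with Equation (16).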
The interactions have to be properly normalized to guarantee that the thermodynamic limit exists. Thus, the Hamiltonian reads

$H = -\frac{J_0}{N} \sum_{1 \le i < j \le N} s_i s_j - h \sum_{1 \le i \le N} s_i.$    (18)

As s_i = ±1, the canonical partition function can be written as

$Z(T, N) = \exp(-\nu/2)\, \mathrm{Tr}\, \exp\left[\frac{\nu}{2N}\Big(\sum_{1 \le i \le N} s_i\Big)^2 + b \sum_{1 \le i \le N} s_i\right],$    (19)

where ν = βJ_0 and b = βh. This form still causes problems because all the spins remain coupled by the term quadratic in s_i. However, the spins can be decoupled by using the Hubbard-Stratonovich [18] transformation

$\exp(a^2/2) = (2\pi)^{-1/2} \int_{-\infty}^{\infty} \exp(-x^2/2 + ax)\, dx,$    (20)

with

$a = (\nu/N)^{1/2} \sum_{1 \le i \le N} s_i.$    (21)

Thus, the trace over the 2^N spin configurations decouples, and one ends up with the partition function expressed in terms of a very complicated integral, namely

$Z(T, N) = (2\pi)^{-1/2} \exp(-\nu/2) \int_{-\infty}^{\infty} dx\, \exp(-x^2/2)\, \big[2 \cosh\big(x(\nu/N)^{1/2} + b\big)\big]^N.$    (22)

The computation of this last integral seems hopeless. However, in the thermodynamic limit N → ∞, it can be computed using the saddle-point method. From the value of Z, the free-energy density follows, and thus the magnetization per spin, which obeys the relation

$m(t, h) = \tanh(\nu m + b).$    (23)

The critical temperature in this case is given by ν_c = 1, thus k_B T_c = J_0. We thus recover the results of the mean-field or Curie-Weiss theory [16]. By forcing all the spins to interact, one introduces some rigidity among them, and it is not really surprising that the fluctuations are suppressed. We can now return to the generic question of when neglecting the fluctuations is a reasonable approximation. This criterion is known as the Ginzburg-Landau criterion [19]. The idea leading to this criterion is quite simple. Let us consider a physical quantity such as the specific heat C(T). As we have seen above for the 2-d Ising model, near the critical temperature the mean-field approximation gives for the specific heat C(T) = C_1(T), while the exact theory leads to a logarithmic singularity.
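Returning for a moment to Equations (19) and (22): the Hubbard-Stratonovich decoupling can be verified numerically for a small system, where the trace over the 2^N configurations is still feasible by brute force. The sketch below compares the two expressions; the values of N, ν, b and the integration grid are arbitrary illustrative choices.

```python
import itertools
import math

def z_brute(nu, b, N):
    """Direct trace over the 2^N spin configurations of Equation (19)."""
    total = 0.0
    for s in itertools.product((-1, 1), repeat=N):
        M = sum(s)
        pair_sum = (M * M - N) / 2  # sum over i < j of s_i s_j
        total += math.exp(nu / N * pair_sum + b * M)
    return total

def z_hubbard_stratonovich(nu, b, N, x_max=12.0, n_steps=24000):
    """Midpoint-rule evaluation of the decoupled integral, Equation (22)."""
    dx = 2.0 * x_max / n_steps
    acc = 0.0
    for k in range(n_steps):
        x = -x_max + (k + 0.5) * dx
        acc += math.exp(-x * x / 2) * (2 * math.cosh(x * math.sqrt(nu / N) + b)) ** N
    return math.exp(-nu / 2) * acc * dx / math.sqrt(2 * math.pi)
```

For any small N, the two results agree to high accuracy, confirming that the decoupling is exact and that only the evaluation of the remaining one-dimensional integral requires the saddle-point method.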
Thus C(T) is composed of two parts, C(T) = C_1(T) + C_2(T), where C_2(T) is the contribution coming from the fluctuations. Let us call

$R_{GL} = \frac{C_2(T)}{C_1(T)}$    (24)

the Ginzburg-Landau parameter. If R_GL ≪ 1, the fluctuations are negligible and the mean-field approximation is acceptable. If R_GL ≫ 1, the fluctuations play a very important role. How to compute R_GL? As we have seen above, in the vicinity of the critical point of a continuous phase transition, the correlation length diverges and cooperative phenomena are very strong. This indicates that the properties of the system are insensitive to the microscopic details. Moreover, the transition is associated with a continuous symmetry breaking characterized by an order parameter. These facts led Ginzburg and Landau to formulate a phenomenological model [19] capable of describing a wide class of phase transitions. We shall not describe the Ginzburg-Landau theory in great detail, but only recall the main ideas. The key quantity is the order parameter. For simplicity, we restrict ourselves to a scalar order parameter m(r) which may vary continuously in a d-dimensional space (note, however, that the order parameter could be a vector or a tensor). The Ginzburg-Landau Hamiltonian H_GL is then the spatial integral of a polynomial expression of m(r) containing powers and gradients. The polynomial should reflect the symmetries of the problem. The Ginzburg-Landau partition function is given by a functional integral, over the possible realizations m(r), of the Boltzmann factor exp(−βH_GL). The computation of this functional integral is not possible without some approximations. The crudest approximation consists in retaining the m(r) which minimizes H_GL. This is simply the solution obtained above in the mean-field approximation, m_mf. The next approximation consists in keeping the fluctuations δm(r) = m(r) − m_mf to the lowest order.
This defines the Gaussian model, from which the contribution of the fluctuations to the specific heat, C_2(T), can be computed. Thus, R_GL can be written as

$R_{GL} \sim |\zeta_T\, t|^{(d-4)/2},$    (25)

where ζ_T is the so-called Ginzburg parameter, whose value depends on the system studied. One first notes that the dimension d = d_c = 4, called the upper critical dimension, plays a particular role. For a system in dimension d > 4, the mean-field theory is essentially correct. However, when d < 4 and t is small enough, the fluctuations play a very important role. This defines the critical region, of width given by |ζ_T t| = 1. Inside the critical region the fluctuations are important; outside of it they can be neglected. As t increases when d < 4, the role of the fluctuations decreases and a mean-field approximation is again reasonable. In summary, the fluctuations play a crucial role in many respects. They are responsible for the fact that below d = d_l = 2 there is no ordered phase (d_l is called the lower critical dimension). For d > d_u = 4 the mean-field approximation is essentially correct. For 4 > d > 2, a simple mean-field approach is correct outside the critical region, but incorrect near the critical temperature.

3. Non-Equilibrium Statistical Physics

Reaction-diffusion problems are simple examples of non-equilibrium systems. The understanding of the kinetics of reaction-diffusion problems is an important issue because the potential applications of these ideas in different fields, such as physics, chemistry, biology or sociology, are numerous. For the sake of simplicity, we shall restrict ourselves in this discussion to some simple cases. However, a large literature is devoted to more complicated situations [20, 21].

3.1. Reaction among Two Species with Homogeneous Initial Conditions

Let us consider a simple model in which two species A and B are homogeneously distributed in a d-dimensional box.
The two species diffuse independently and react when they meet to form an inert species: A + B → inert. Given some initial uniform densities ρ_A(0) and ρ_B(0) such that ρ_A(0) = ρ_B(0), what is the long-time behavior of ρ_A(t) = ρ_B(t) = ρ(t)? This situation could, for example, model the recombination of electrons and holes or the annihilation of topological defects in solid-state physics. In the simplest approximation, one could assume that the agents are stirred rapidly, so that the law of mass action [22] can be applied. Thus

$\frac{d\rho_A(t)}{dt} = \frac{d\rho_B(t)}{dt} = -k \rho_A(t) \rho_B(t),$    (26)

where k is the reaction rate. In the long-time limit, the solution is

$\rho(t) \sim \frac{1}{kt}.$    (27)

Thus, ρ(t) decreases with time as a power law, and the dimension does not enter explicitly in this equation. This derivation corresponds to the mean-field approximation and should be valid only when the species are well stirred. If there is no stirring, the species have to find each other by diffusion, and thus one expects that the dynamics will be slower and depend on the dimensionality d of the system. Indeed, taking the diffusion into account, the dynamics of the local concentrations is given by the equations

$\frac{\partial \rho_A(r,t)}{\partial t} = D_A \nabla^2 \rho_A(r,t) - k \rho_A(r,t) \rho_B(r,t),$    (28)

$\frac{\partial \rho_B(r,t)}{\partial t} = D_B \nabla^2 \rho_B(r,t) - k \rho_A(r,t) \rho_B(r,t),$    (29)

where D_A and D_B are respectively the diffusion coefficients of the A and B agents. Then, if D_A = D_B = D, as noticed by Toussaint and Wilczek [23], the local density difference ρ_A(r,t) − ρ_B(r,t) obeys a diffusion equation. As a consequence, central-limit arguments led Toussaint and Wilczek to conclude that the number of agents decays as ρ_A, ρ_B ~ (Dt)^{−d/4}. This behavior can be explained by a simple and elegant heuristic argument [24]. Let us consider a box of volume V = ℓ^d inside the system. At time t = 0, the quantity of species A in this volume fluctuates and is N_A = ρ_A(0)ℓ^d ± [ρ_A(0)ℓ^d]^{1/2}.
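Before continuing the heuristic argument, note that the mean-field prediction (26)–(27) is easy to check directly; a minimal sketch with a forward-Euler integration of the rate equation (step size and time horizon are illustrative choices):

```python
def mass_action_decay(rho0=1.0, k=1.0, dt=1e-3, t_end=1000.0):
    """Forward-Euler integration of d(rho)/dt = -k rho^2 (rho_A = rho_B = rho)."""
    rho, t = rho0, 0.0
    while t < t_end:
        rho -= k * rho * rho * dt
        t += dt
    return rho, t
```

The exact solution is ρ(t) = ρ_0/(1 + kρ_0 t), so kρ(t)t → 1 at long times, independently of dimension, as stated above.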
After a time ℓ²/D, where D is the diffusion constant (we suppose that D_A = D_B = D), the agents in V will have had time to mix completely and react, leaving only the residual fluctuations. The residual amount of A in this domain is of order [ρ_A(0)ℓ^d]^{1/2}, corresponding to a density [ρ_A(0)]^{1/2} ℓ^{−d/2}. The system is then formed by a collection of alternating A-rich and B-rich domains of typical size ℓ ~ (Dt)^{1/2}, and the global density is then ρ(t) ~ [ρ(0)]^{1/2} (Dt)^{−d/4}. This decay with the exponent d/4 (for d ≤ 4) is in agreement with the experimental data and with the predictions of theoretical models taking the fluctuations into account. When d > 4, the slowest decay is given by the mean-field solution. Thus, in general, ρ(t) ~ t^{−z} with z = min(d/4, 1). Here again, an upper critical dimension d_u = 4 enters into the kinetics of the problem. Neglecting the fluctuations when d < 4 leads to completely wrong conclusions. Thus, models which are able to describe the properties correctly should take the fluctuations into account, and it is not an easy task to solve such a model. Two approaches, one numerical and one analytic, are possible. In the first, the agents are put on a regular lattice, and very effective algorithms have been developed to describe the diffusion and the reactions of the agents [25]. In the second, one is interested in universal properties such as the long-time behavior of reaction-diffusion models. A natural starting point is to describe the stochastic dynamics of the agents by a master equation governing the time evolution of the probability P({n}, t) that the system is in a given microstate {n} at time t [26]. Reaction-diffusion systems are characterized by the fact that the quantity of the chemical species is not conserved by the dynamics. The corresponding models can then be written in terms of ladder operators, as shown by Doi [27]. This representation is familiar in quantum mechanics under the name of second quantization.
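The first, numerical, approach can be illustrated with a small Monte-Carlo simulation of A + B → inert on a one-dimensional ring. All sizes and times below are arbitrary illustrative choices; the walkers take lazy steps (−1, 0 or +1) so that particles starting at opposite parities can still meet, which strictly alternating hops would forbid.

```python
import random

def ab_annihilation(L=2000, n_pairs=400, t_max=1600, seed=1, record=(100, 1600)):
    """A + B -> inert on a 1-d ring: random walks plus on-site pair annihilation.

    Returns {time: density of A} at the recorded times.
    """
    rng = random.Random(seed)
    A = [rng.randrange(L) for _ in range(n_pairs)]  # positions of A particles
    B = [rng.randrange(L) for _ in range(n_pairs)]  # positions of B particles
    history = {}
    for t in range(t_max + 1):
        # annihilate one A-B pair on every site occupied by both species
        occ = {}
        for idx, x in enumerate(A):
            occ.setdefault(x, []).append(idx)
        dead_a, dead_b = set(), set()
        for idx, x in enumerate(B):
            if occ.get(x):
                dead_a.add(occ[x].pop())
                dead_b.add(idx)
        A = [x for i, x in enumerate(A) if i not in dead_a]
        B = [x for i, x in enumerate(B) if i not in dead_b]
        if t in record:
            history[t] = len(A) / L
        # lazy random walk: step -1, 0 or +1 with equal probability
        A = [(x + rng.choice((-1, 0, 1))) % L for x in A]
        B = [(x + rng.choice((-1, 0, 1))) % L for x in B]
    return history
```

Comparing the density at two recorded times gives an effective decay exponent that should stay well below the mean-field value z = 1, in line with the segregation into A-rich and B-rich domains described above.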
A model allowing more than one agent per site will be described by bosonic creation and annihilation operators. As a result, the first-order temporal evolution of the master equation can be cast into an imaginary-time Schrödinger equation in which the non-hermitian Hamiltonian is expressed in terms of the creation and annihilation operators [28]. The main reason to introduce this second-quantized representation is to be able to map the problem onto a field theory [29]. Indeed, several powerful tools have been developed to extract the universal properties of a field theory, like the renormalization-group method. Several reviews [29–31] are devoted to these topics and we shall not go into more detail here. In summary, fluctuations play a very important role in the dynamics of such homogeneous reaction-diffusion systems.

3.2. Reaction among Two Species with Inhomogeneous Initial Conditions

Another important class of problems in which two species A and B diffuse and react is the case when, initially, the two species are separated in space. Let us suppose moreover that when the two species meet, they produce a new species according to the reaction A + B → C. The C agents form a reaction-diffusion front. The understanding of the properties of this front is important to explain the pattern formation which can occur in the wake of this moving front [32]. It is, for example, part of the mechanism involved in the formation of Liesegang patterns, and a significant body of research has recently been devoted to this question [6]. In the mean-field spirit, the equations of motion for the local concentrations of the reactants are the ones introduced above (Equations 28 and 29), the novelty being the boundary conditions. Let us assume for simplicity that the different concentrations are functions of x only. Thus one has ρ_A = a_0, ρ_B = 0 for x < 0 and ρ_A = 0, ρ_B = b_0 for x > 0. The production rate of C is simply

$R(x,t) = k \rho_A(x,t) \rho_B(x,t),$    (30)

where k is the reaction rate.
Assuming that D_A = D_B = D, the density difference

$u(x,t) = \rho_A(x,t) - \rho_B(x,t)$    (31)

obeys a diffusion equation, the solution of which, with the corresponding boundary conditions, reads

$u(x,t) = a_0 \left[\frac{1-q}{2} - \frac{1+q}{2}\, \mathrm{erf}\!\left(\frac{x}{2\sqrt{Dt}}\right)\right],$    (32)

where erf(z) is the error function and q = b_0/a_0. One then notices that the width of the depletion zone W_d, defined as the region where ρ_A and ρ_B are significantly smaller than their initial values, scales with time as √t. Moreover, the center of the reaction zone x_f is given by the condition ρ_A(x_f, t) = ρ_B(x_f, t), giving u(x_f, t) = 0. Thus, it follows that x_f = (2D_f t)^{1/2}, where the diffusion constant of the front is determined from erf[(D_f/2D)^{1/2}] = (1 − q)/(1 + q). Substituting ρ_B = ρ_A − u into Equation (28) leads to a non-linear partial differential equation of the form

$\frac{\partial \rho_A(x,t)}{\partial t} = D \frac{\partial^2 \rho_A(x,t)}{\partial x^2} - k \rho_A(x,t)\left[\rho_A(x,t) - u(x,t)\right].$    (33)

One cannot solve this equation in the general case. However, making the assumption (which can be verified a posteriori) that, in the long-time limit, the width of the front w, defined as the second moment of R(x, t), is negligible compared to the depletion zone W_d, one finds [33] that

$w \sim t^{\alpha}, \quad \text{with } \alpha = 1/6.$    (34)

Returning to Equation (33), one finds that ρ_A(x, t) can be written in the scaling form

$\rho_A(x,t) = t^{-\beta/2}\, G\!\left[\frac{x - x_f}{t^{\alpha}}\right],$    (35)

where G is a scaling function fulfilling the appropriate boundary conditions. Thus, the production rate can be written as

$R(x,t) = t^{-\beta}\, F\!\left[\frac{x - x_f}{t^{\alpha}}\right],$    (36)

with exponents satisfying the scaling relation

$\alpha + \frac{\beta}{2} = \frac{1}{2}.$    (37)

It should be stressed that the indices α and β appearing in this paragraph have nothing to do with the critical exponents introduced in the problem of phase transitions discussed above. A natural question arises: how are these results affected by the fluctuations of the two species? The first approach is numerical. Extensive numerical simulations have been done using cellular-automata algorithms [25, 34].
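The mean-field front properties above are straightforward to evaluate with the standard library's error function. The sketch below computes the profile of Equation (32) and solves the condition erf[(D_f/2D)^{1/2}] = (1 − q)/(1 + q) for D_f by bisection; it assumes q ≤ 1 (for q = 1, equal initial densities, the front does not move and D_f = 0).

```python
import math

def u_profile(x, t, a0=1.0, q=0.5, D=1.0):
    """Density difference u(x, t) of Equation (32)."""
    s = math.erf(x / (2.0 * math.sqrt(D * t)))
    return a0 * ((1.0 - q) / 2.0 - (1.0 + q) / 2.0 * s)

def front_diffusion_constant(q, D=1.0, tol=1e-12):
    """Bisection for D_f in erf(sqrt(D_f/(2D))) = (1 - q)/(1 + q), valid for q <= 1."""
    target = (1.0 - q) / (1.0 + q)
    lo, hi = 0.0, 50.0 * D
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(math.sqrt(mid / (2.0 * D))) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

By construction, u vanishes at x_f = (2D_f t)^{1/2} for any t, consistent with the diffusive motion of the reaction-zone center.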
The conclusions are that, for dimensions d ≥ 2, the scaling relations are verified and the values of the exponents are the ones given by the mean-field approximation. For d = 1, the situation is less clear, and the width exponent α turns out to be approximately α(d = 1) ≈ 0.30 ± 0.01 [35] instead of the mean-field value α_mf = 1/6. Clearly, an analytical approach is desirable to answer the above question. This became possible [36, 37] by realizing that the original time-dependent problem can be replaced by studying the reaction front formed quasi-statically by anti-parallel currents of A and B agents. It turns out [37] that dimensional analysis coupled with consistency arguments is enough to show that d = d_u = 2 is the upper critical dimension at and above which the fluctuations can be neglected. This fact is very important for the modeling of the patterns formed in the wake of the front [32]. Moreover, for d = 1 the width exponent is found to be α(d = 1) = 1/4. In conclusion, we have seen that for this relatively simple model of non-equilibrium statistical physics, the fluctuations can play a drastic role, depending on the initial conditions. It is not clear a priori, for a more complicated reaction-diffusion problem, what the best way to model the system is, and whether or not to take the fluctuations into account.

4. Plants Dynamics and Biodiversity

We now turn from reaction-diffusion systems to a biological model that describes the population dynamics of a system of annual plants. We consider a system of annual plants characterized by their tolerance to a shortage or surplus of water, which is the only resource for the plants. It is common in studies of plant physiology to refer to the plants' tolerance to a shortage or surplus of water [38] rather than to their demand for it. The habitat on which the plants live is a square lattice of dimensions L × L with L = 200. Each cell can either be empty or contain one plant.
In the simplest version of the model, each cell receives the same amount of water (rainfall) γ, which is normalized by the plants' demand for it; hence γ is a dimensionless quantity. We assume that there is an optimal amount of water for the plants (relative to their demand for it) and that a shortage or surplus of water has a negative effect on the plants' survival. In order to avoid having a system of clones, we allow for small fluctuations among the plants' tolerances. Therefore, the tolerance of a plant i is

$tl_i = tl\,(1 \pm 0.1 \cdot r_i),$    (38)

where r_i ∈ (0, 1) is a random number taken from a uniform distribution and tl is the average tolerance of the plants. The algorithm defining the dynamics of the plants' evolution is based on our previous work [39, 40] and goes as follows. Initially, a certain number of plants (2000) is put at random positions on the lattice. In a given year, which is our time unit, all plants are randomly selected one by one. The fitness f_i of the chosen plant i at time t is calculated from the formula

$f_i(t) = \frac{\gamma}{tl_i}\left(1 - 0.1 \cdot nn_i(t)\right),$    (39)

where nn_i is the number of plants in the nearest (von Neumann) neighborhood of the plant i. The factor in the parentheses describes interactions among plants: a part of the resource nominally available to the plant i is blocked by the roots of neighboring plants [41]. The factor 0.1 ensures that the blockage is only a small part of that water. The form of the survival probability is not known in biology, hence we took the simplest one responding to general, common-sense requirements, like vanishing when there is no water or peaking at the value corresponding to the plants' demand for it:

$p_i(t) = \begin{cases} f_i(t) & \text{if } f_i(t) \le 1, \\ (f_i(t))^{-2} & \text{if } f_i(t) > 1. \end{cases}$    (40)

Such a form puts similar restrictions on a lack and a surplus of water. A new random number r_i ∈ (0, 1) is generated and, if r_i > p_i, the plant is eliminated.
Otherwise, it can produce seeds, in a number given by

$ns_i(t) = 6 \cdot f_i(t),$    (41)

where 6 is the maximum number of seeds a plant can produce when its needs are completely fulfilled, i.e., when its demand is equal to the supply of water. A larger maximum number of seeds has no effect on the results, since from a cell only one seed is chosen for germination. Taking the maximum number of seeds equal to 3 or less leads to stochastic extinctions. The seeds are dispersed over the 12 nearest cells and the cell on which the plant grows. Reducing this area to, say, the von Neumann neighborhood does not cause any major changes; an increase to the whole lattice is discussed below. Then the plant dies and is removed from the system. Next comes the germination phase. All cells containing at least one seed are visited in a random order. A seed is chosen and put to the germination test, which has a form analogous to Equation (40), except that there is no blockage of water by neighboring seedlings, which have too short roots for that. Hence the seedlings do not interact among themselves. When all cells containing seeds have been checked, the seedlings become adult plants, the remaining seeds are removed (no seed bank) and a new year starts. The description presented above corresponds to the Individual Based Model (IBM), where plants differ in their tolerances, their survival depends on local conditions, and the seeds are spread over a restricted area. In the much simpler Individual Mean Field (IMF) approach, we still deal with individual plants, but all of them have the same tolerance, and instead of Equation (39) we have

$f_{IMF}(t) = \frac{\gamma}{tl}\left(1 - 0.1 \cdot 4 \cdot \varrho(t)\right),$    (42)

where 4 is the number of nearest neighbors on the square lattice and ϱ(t) is the actual density of plants. The next simplification brought by the IMF is spreading the seeds over the whole lattice.
Since all plants now have the same tolerance and the fitness in Equation (42) does not depend on the local environment, all plants have the same fitness f and the index i is omitted. Next, we pick each plant and determine, as before, its survival chance by comparing its survival probability p, from an analog of Equation (40), with a random number r_i. Here comes the difference with the true MFA (see below): despite the fact that all plants have the same value of p, for each of them a different random number is chosen. Hence, with the same probability, some plants survive and some do not. The choice of the form of the survival probability used in Equation (40) is not crucial; it is probably the simplest. As we have shown in Droz and Pȩkalski [42], quite similar results are obtained when the survival probability has the form of a Gaussian. We may also consider a true Mean Field Approach (MFA), where we operate on the total number of plants and all their individual character is completely lost. The fitness of all plants is the same and equal to that in Equation (42). The following steps of the algorithm are however different. The number of seeds produced by all plants, ns, is equal to

$ns(t) = \lfloor n(t) \cdot f \cdot 6 \rfloor,$    (43)

where ⌊x⌋ means the integer part of x, n(t) is the number of plants at time t and 6 is the maximum number of seeds. The seeds are dispersed over the whole lattice, and the number of cells containing at least one seed is equal to

$ce(t) = \min(K, ns(t)),$    (44)

where K is the carrying capacity (number of cells) of the habitat. As in the IBM, seedlings do not compete, hence the number of seedlings which germinate from the ce(t) cells is

$n(t+1) = \lfloor ce(t) \cdot \gamma / tl \rfloor.$    (45)

We may regard this situation, with all local conditions neglected, as an analog of the Ising model with long-range interactions described in the equilibrium-physics section. In Figure 1 we present the number of plants which survived until the end of the simulations, as a function of the rainfall γ.
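Read literally, the MFA of Equations (42)–(45) is a one-line recursion for n(t). The sketch below iterates it under that literal reading (the initial 2000 plants and K = 200² follow the text; the flooring mirrors the integer-part notation):

```python
def mfa_dynamics(gamma, tl=0.8, n0=2000, K=200 * 200, years=150):
    """Iterate the mean-field recursion of Equations (42)-(45)."""
    n = n0
    for _ in range(years):
        f = (gamma / tl) * (1.0 - 0.1 * 4.0 * n / K)  # Equation (42)
        ns = int(n * f * 6)                           # Equation (43), 6 seeds max
        ce = min(K, max(ns, 0))                       # Equation (44)
        n = int(ce * gamma / tl)                      # Equation (45)
    return n
```

For rainfall well below the tolerance the population dies out quickly, while for γ = tl this crude recursion saturates at the carrying capacity, illustrating how coarse the MFA is compared with the stochastic IBM and IMF runs.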
We limited the simulations to 150 years, by which time the plant abundance had reached a stationary state. Shown are the results of the three approaches: IBM, IMF, and MFA. We took the tolerance of the plants equal to 0.8, although for other values the results are quite similar. Apart from some small range of γ, the results from the IBM and IMF methods are quite similar, and it would be impractical to use the more complex IBM approach. The moral of this part of our research is that the dynamics of one type of plant in a homogeneous habitat can be roughly described by the MFA method. If more accurate results are needed, taking into account individual plants, the IMF or IBM approaches are needed.

Figure 1. Number of plants as a function of water supply γ. One species. Homogeneous habitat. IBM, IMF, and MFA cases.

Clearly, the MFA gives quite different results. It is possible, but at a quite large cost, to construct the MFA algorithm also for systems of many plants. Therefore, in the following we shall restrict the investigations to the IBM and IMF cases. Let us now complicate the situation by considering a system of several, say 20, species, which differ only by their tolerances. We assume that the tolerances increase by 0.1 from the lowest value tl(1) equal to 0.4. Introducing different types of plants means that there are cells containing seeds of different plants, and we have to define the way one seed is chosen for germination. We follow here the lottery model of Chesson and Warner [43]: the probability of choosing a seed of a given species is equal to the fraction of such seeds in the cell. All other features of the model remain the same for a system of many types of plants. We introduce at the beginning 500 plants of each species, located at random positions. In the neighborhood we do not distinguish types of plants, hence the factor nn_i means the number of nearest neighbors, irrespective of their type. Similarly, in the IMF the density ϱ means the total density of all types of plants.
Now the central question is not how many plants survived, but how many species are alive at the end of the simulations when the rainfall γ is changed. The results are shown in Figure 2 for the two approaches, IBM and IMF. Averaging is over 500 independent runs, meaning that we start the simulations with the same number of plants, but placed at different positions. As we see, the difference between the two approaches is now quite large, and simplifying the algorithm leads to a reduction of the number of species alive. There is practically no difference between averaging over 100 and over 500 runs.

Figure 2. Number of species, out of 20 types, as a function of water supply γ for the IBM and IMF approaches. Homogeneous habitat. The vertical lines indicate the statistical error bars.

Let us add another feature: the habitat will now be heterogeneous. We assume that the rainfall decreases in a given direction, say along the X-axis, forming a gradient of steepness g. Such a situation is quite often encountered in nature, where plants live on the slopes of a hill [44]. A heterogeneous environment is often regarded as one of the possible sources of maintaining biodiversity [45, 46]. When g = 0 we come back to the homogeneous habitat considered previously. Otherwise, the amount of water for all cells having the coordinate x is equal to

$\gamma(x) = \gamma\left(1 - g\,\frac{x}{L}\right),$    (46)

hence the rainfall decreases with x from its maximum value at x = 1 to its lowest at x = L. How the type of approach used influences the average number of species alive at the end of the simulations is shown in Figure 3 for the case of 20 species and a medium value of the gradient (g = 0.5). The difference between the IBM and IMF methods is quite large, and certainly now choosing the IMF as a tool for studying systems of several plants cannot be recommended.
It should also be noted that while the introduction of a heterogeneous habitat leads to a large increase of the number of surviving species when the IBM method is used, it has a rather weak effect on the IMF data.

Figure 3. Same as above, but for a heterogeneous habitat. Gradient of steepness 0.5. The vertical lines indicate the statistical error bars.

The differences are better seen in a more detailed study of a system of just five species with tolerances 0.6, 0.8, 1.0, 1.2, and 1.4. In Figure 4 we show the time dependence of the abundances of the five types of plants obtained from the IBM method, and in Figure 5 the results from the IMF method. The habitat is heterogeneous with g = 0.5, and the rainfall is equal to γ = 1.0 and 1.5. While coexistence of species is possible, also for extended times, within the IBM approach, the simpler IMF predicts that one species will soon dominate and eliminate all others; hence coexistence is impossible. How it comes about that, in the more sophisticated method, plants with different tolerances can coexist in a heterogeneous habitat is shown in Figures 6, 7. We see that the different types of plants are able to colonize the regions where the living conditions match their tolerance best. In these regions they can eliminate seeds of other types of plants, since they have the largest chance to germinate, then grow up and produce seeds, which, in the case of the IBM, will be dispersed in the neighborhood. This means that in a given cell there will be more seeds of plants better fit to the local conditions, and therefore such seeds will be privileged by the lottery mechanism, forming a positive feedback. When the seeds are dispersed over the whole lattice, some of them fall into a rather hostile environment, and even if one of them is chosen for germination it may not succeed.
As a result, many seeds are lost, and eventually seeds from plants which have tolerances close to, but a bit lower than, the actual value of the rainfall γ will be best off, as seen in Figure 7, since in many parts of the system they can find, if not ideal, at least satisfactory conditions for germination. Due to the dispersal of seeds over the whole lattice, the formation of local clusters of plants of the same type is impossible.

Figure 4. Number of plants of five types as a function of time in a heterogeneous habitat. Left panel γ = 1.0, right panel γ = 1.5. IBM approach. Shown is a restricted time interval.

Figure 5. Same system as in the previous figure, but in the IMF approach.

Figure 6. Density of plants of five types along the gradient. γ = 1.5. IBM approach.

Figure 7. The same system as in the previous figure, but for the IMF.

We have shown that in the very simple case of just one plant species living in a homogeneous habitat, using the IMF or the IBM does not really matter. However, in more complex situations, like many species and/or a heterogeneous environment, the results coming from those two approaches can be vastly different. In particular, the long-term coexistence of species observed in nature cannot be obtained from the simpler IMF approach. The very crude MFA method always gives results differing from both the IMF and the IBM, and is therefore not recommended in more detailed studies.

5. Conclusions

The problem of modeling complex systems is a difficult one, and different levels of modeling are proposed in the literature. The simplest ones use mean-field-like approximations, while more sophisticated ones are spatially extended models which take into account the local fluctuations of the relevant variables. The resolution of these more sophisticated models can be very complicated for several reasons, among which the proliferation of the number of control parameters is a non-trivial one.
An important point is that the predictions of the mean-field models may strongly differ from the ones given by more microscopic models. It would be important to have a criterion allowing us to decide whether a simple mean-field-like model is enough to describe the properties of a given system or not. As we have shown, except in some particular situations, there is no such general criterion. In a large body of research papers this issue is simply ignored and conclusions are based on simple mean-field-like models. This is why in this paper we wanted, first, to illustrate the importance of the fluctuations in a self-contained and pedagogical way, by revisiting two different classes of problems (equilibrium and non-equilibrium statistical mechanics) for which thorough studies of the role played by the fluctuations have been achieved; and second, to apply these ideas to the study of an important question of biodiversity in which mean-field and more microscopic models lead to different predictions.

Author Contributions

MD wrote the Introduction, Equilibrium statistical physics, and Non-equilibrium statistical physics sections. AP wrote the section Plants dynamics and biodiversity. We both wrote the Conclusions. Research in the Plant dynamics part has been made in close cooperation.

Conflict of Interest Statement

References

1. Schilpp PA, Jehle H. Albert Einstein: philosopher-scientist. Am J Phys. (1951) 19:252–3.
2. Van Vliet CM. Equilibrium and Non-equilibrium Statistical Mechanics. Singapore: World Scientific (2008).
3. Landau L, Lifchitz E. Mécanique des Fluides. Mir Edn. Moscow (1971).
4. Chopard B, Droz M. Cellular Automata Modeling of Physical Systems, Vol. 6. Cambridge: Cambridge University Press (2005).
5. McCoy BM, Wu TT. The Two-Dimensional Ising Model. Boston, MA: Courier Corporation (2014).
6. Droz M. Recent theoretical developments on the formation of Liesegang patterns. J Stat Phys. (2000) 101:509–19.
doi: 10.1023/A:1026489416640
7. Fisher RA. The wave of advance of advantageous genes. Ann Hum Genet. (1937) 7:355–69.
8. Droz M. Modeling cooperative behavior in the social sciences. In: Eighth Granada Lectures, AIP Proceedings 779. New York (2005).
9. Droz M, Pȩkalski A. Model of annual plants dynamics with facilitation and competition. J Theor Biol. (2013) 335:1–12. doi: 10.1016/j.jtbi.2013.06.010
10. Macal CM, North MJ. Tutorial on agent-based modeling and simulation. In: Proceedings of the 37th Conference on Winter Simulation. Argonne, IL: Winter Simulation Conference (2005). pp. 2–15.
11. Durrett R, Levin S. The importance of being discrete (and spatial). Theor Popul Biol. (1994) 46:363–94.
12. Ising E. Beitrag zur Theorie des Ferromagnetismus. Z Phys A (1925) 31:253–8.
13. Malarz K, Magdoń-Maksymowicz M, Maksymowicz A, Kawecka-Magiera B, Kułakowski K. New algorithm for the computation of the partition function for the Ising model on a square lattice. Int J Mod Phys C (2003) 14:689–94. doi: 10.1142/S012918310300484X
14. Onsager L. Crystal statistics I. A two-dimensional model with an order-disorder transition. Phys Rev. (1944) 65:117.
15. Binder K. Introduction: Theory and Technical Aspects of Monte Carlo Simulations. Heidelberg: Springer (1986).
16. Curie P. Sur la possibilité d'existence du magnétisme libre. J Phys. (1894) 3:415.
17. Thompson CJ. Mathematical Statistical Mechanics. Princeton, NJ: Princeton University Press (2015).
18. Kleinert H. Hubbard-Stratonovich transformation: successes, failure, and cure. arXiv:1104.5161 (2011).
19. Landau L, Lifchitz E. Physique Statistique. Mir Edn. Moscow (1967).
20. Ben-Avraham D, Havlin S. Diffusion and Reactions in Fractals and Disordered Systems.
Cambridge: Cambridge University Press (2000). Google Scholar 21. Cantrell RS, Cosner C. Spatial Ecology via Reaction-Diffusion Equations. Chichester: John Wiley & Sons (2004). Google Scholar 22. Zel'Dovich YB, Ovchinnikov A. The mass action law and the kinetics of chemical reactions with allowance for thermodynamic fluctuations of the density. Z Eksp Teor Fiz. (1978) 74:1588–98. Google Scholar 23. Toussaint D, Wilczek F. Particle–antiparticle annihilation in diffusive motion. J Chem Phys. (1983) 78:2642–7. Google Scholar 24. Kang K, Redner S. Scaling approach for the kinetics of recombination processes. Phys Rev Lett. (1984) 52:955. Google Scholar 25. Chopard B, Droz M. Cellular automata model for the diffusion equation. J Stat Phys. (1991) 64:859–92. Google Scholar 26. Grassberger P, Scheunert M. Fock-space methods for identical classical objects. Fortschr Phys. (1980) 28:547–78. Google Scholar 27. Doi M. Stochastic theory of diffusion-controlled reaction. J Phys A Math Gen. (1976) 9:1479. Google Scholar 28. Täuber UC, Howard M, Vollmayr-Lee BP. Applications of field-theoretic renormalization group methods to reaction–diffusion problems. J Phys A Math Gen. (2005) 38:R79. doi: 10.1088/0305-4470/38/17/R01 CrossRef Full Text | Google Scholar 29. Cardy JL, Täuber UC. Field theory of branching and annihilating random walks. J Stat Phys. (1998) 90:1–56. Google Scholar 30. Cardy J. Renormalisation group approach to reaction-diffusion problems. arXiv cond-mat/9607163 (1996). Google Scholar 31. Täuber UC. Renormalization group: applications in statistical physics. Nucl Phys B Proc Suppl. (2012) 228:7–34. doi: 10.1016/j.nuclphysbps.2012.06.002 CrossRef Full Text 32. Antal T, Droz M, Magnin J, Rácz Z. Formation of Liesegang patterns: a spinodal decomposition scenario. Phys Rev Lett. (1999) 83:2880. Google Scholar 33. Gálfi L, Rácz Z. Properties of the reaction front in an A+B to C type reaction-diffusion process. Phys Rev A (1988) 38:3151. 
PubMed Abstract | Google Scholar 34. Cornell S, Droz M, Chopard B. Some properties of the diffusion-limited reaction nA + mB to c with homogeneous and inhomogeneous initial conditions. Phys A Stat Mech Appl. (1992) 188:322–36. Google Scholar 35. Cornell S, Droz M, Chopard B. Role of fluctuations for inhomogeneous reaction-diffusion phenomena. Phys Rev A (1991) 44:4826. PubMed Abstract | Google Scholar 36. Ben-Naim E, Redner S. Inhomogeneous two-species annihilation in the steady state. J Phys A Math Gen. (1992) 25:L575. Google Scholar 37. Cornell S, Droz M. Exotic reaction fronts in the steady state. Phys D Nonlinear Phenomena (1997) 103:348–56. Google Scholar 38. Tardieu F. Virtual plants: modelling as a tool for the genomics of tolerance to water deficit. Trends Plant Sci. (2003) 8:9–14. doi: 10.1016/S1360-1385(02)00008-0 PubMed Abstract | CrossRef Full Text | Google Scholar 39. Ka̧cki Z, Pȩkalski A. The impact of competition and litter accumulation on germination success in a model of annual plants. Phys A (2011) 390:2520–30. doi: 10.1016/j.physa.2011.03.014 CrossRef Full Text | Google Scholar 40. Pȩkalski A, Szwabiński J. Dynamics of three types of annual plants competing for water and light. Phys A (2013) 392:710–21. doi: 10.1016/j.physa.2012.09.029 CrossRef Full Text 41. Schenk H. Root competition: beyond resource depletion. J Ecol. (2006) 94:725–39. doi: 10.1111/j.1365-2745.2006.01124.x CrossRef Full Text | Google Scholar 42. Droz M, Pȩkalski A. Species richness in a model with resource gradient. Theor Ecol. (2016). doi: 10.1007/s12080-016-0298-8. [Epub ahead of print]. PubMed Abstract | CrossRef Full Text | Google Scholar 43. Chesson P, Warner R. Environmental variability promotes coexistence in lottery competitive systems. Am Nat. (1981) 117:923–43. Google Scholar 44. Travis J, Brooker R, Clark E, Dytham C. The distribution of positive and negative species interactions across environmental gradients on a dual-lattice model. J Theor Biol. 
Keywords: modeling, different levels of description, mean-field approximation, individually based model (IBM), annual plant dynamics, biodiversity

Citation: Droz M and Pȩkalski A (2016) On the Role of Fluctuations in the Modeling of Complex Systems. Front. Phys. 4:38. doi: 10.3389/fphy.2016.00038

Received: 08 April 2016; Accepted: 19 August 2016; Published: 15 September 2016.

Edited by: Alex Hansen, Norwegian University of Science and Technology, Norway

Reviewed by: Krzysztof Malarz, AGH University of Science and Technology, Poland; Stephen Frederick Strain, University of Memphis, USA

*Correspondence: Michel Droz,
Diabatic transformation

From Citizendium

In quantum chemistry, the solution of a set of coupled nuclear motion Schrödinger equations can be eased by a diabatic transformation. Such a set of coupled equations, describing the motions of nuclei (in a molecule, often nuclear vibrations), arises when the Born-Oppenheimer approximation breaks down. The term diabatic was coined in the 1960s. Around that time shortcomings of the Born-Oppenheimer approximation (also known as the adiabatic approximation) became apparent, and improvements of the adiabatic approximation were put forward under the name diabatic approximation. Linguistically, the term "diabatic" is unfortunate because there is no connection whatsoever to the Greek word diabasis (going through).

Break-down of Born-Oppenheimer approximation

See the article Born-Oppenheimer approximation for more details. The Born-Oppenheimer approximation, mainly designed for quantum mechanical computation of molecular properties but also applicable to the solid state and molecular scattering, consists of two steps. In the first step the nuclei of the system are fixed in a certain constellation and the nuclear kinetic energies are dropped from the problem, i.e., the nuclei are assumed to be at rest. One or more electronic Schrödinger equations are solved, yielding the corresponding (usually the lowest or the lowest few) electronic energies. Changing the nuclear constellation of the system under study sufficiently often and solving the electronic Schrödinger equations over and over again gives the electronic energies as functions of the nuclear coordinates. These functions are known as adiabatic potential energy surfaces. The second step of the original Born-Oppenheimer approximation sets the nuclei in motion.
This step consists of the solution of single (uncoupled) Schrödinger equations for the nuclei. In each of these equations the nuclear kinetic energy is reintroduced and a single adiabatic potential obtained from the first step serves as potential. This simple approximation breaks down when the potential energy surfaces approach—or maybe even intersect—each other. In that case the nuclear motion equations, which are coupled formally by nuclear kinetic energy terms, may no longer be taken to be uncoupled; that is, the off-diagonal nuclear kinetic energy coupling terms may no longer be assumed to be negligibly small. The equations cannot be solved one by one, but the coupled set must be tackled in its entirety. In summary, when the Born-Oppenheimer approximation breaks down because of the presence of close-lying potential energy surfaces, the solution of the nuclear motion problem requires the solution of a coupled set of Schrödinger equations. A diabatic transformation of this set of equations has the purpose of making the equations easier to solve. It is a linear (usually unitary) transformation that minimizes (preferably makes zero) the off-diagonal nuclear kinetic energy terms. The adiabatic potential energy surfaces are combined linearly into a set of diabatic potentials, which include off-diagonal terms—a diabatic transformation is a transformation from an adiabatic representation to a diabatic representation. In the latter representation the nuclear kinetic energy operator is (almost) diagonal, but the equations are still coupled, this time by off-diagonal diabatic potential terms. The advantage of the newly introduced off-diagonal potential terms is that they are significantly easier to estimate numerically than the off-diagonal nuclear kinetic energy terms that appeared before the transformation.
The diabatic potential energy surfaces are smooth, so that low-order Taylor series expansions of the surfaces may be applied without introducing a great loss of the complexity of the original system. Unfortunately, in general a strictly diabatic transformation does not exist: it is not possible to transform the off-diagonal nuclear kinetic energy rigorously to zero. Hence, diabatic potentials generated from mixing linearly multiple electronic energy surfaces are generally not exact. These surfaces are sometimes called pseudo-diabatic potentials, but generally the term is not used unless it is necessary to highlight this subtlety; usually diabatic potentials are synonymous with pseudo-diabatic potentials.

Mathematical formulation

In order to introduce mathematically the diabatic transformation we assume now, for the sake of argument, that only two adiabatic potential energy surfaces (PES), E_1 and E_2, approach each other and that all other surfaces are well separated (do not come close to E_1 or E_2); the argument can be generalized to more surfaces. Let the collection of electronic coordinates be indicated by r, while R indicates dependence on nuclear coordinates. Thus, we assume E_1(R) ≈ E_2(R) with corresponding orthonormal electronic eigenstates χ_1(r;R) and χ_2(r;R). In the absence of magnetic interactions these electronic states, which depend parametrically on the nuclear coordinates, may be taken to be real-valued functions. The nuclear kinetic energy is a sum over nuclei A with mass M_A,

    T_n = -\sum_{A} \sum_{\alpha=x,y,z} \frac{1}{2M_A} \nabla_{A\alpha}^2
    \quad\text{with}\quad
    \nabla_{A\alpha} \equiv \frac{\partial}{\partial R_{A\alpha}}.

(Atomic units are used here and ∇_{Aα} is the α component of the gradient operator with respect to nucleus A, short-hand for a differential.) By applying the Leibniz rule for differentiation, the matrix elements of T_n are (where coordinates are suppressed for clarity reasons):

    \mathrm{T_n}(\mathbf{R})_{k'k} \equiv \langle \chi_{k'} | T_n | \chi_k \rangle_r
    = \delta_{k'k} T_n
    - \sum_{A,\alpha} \frac{1}{M_A} \langle \chi_{k'} | \left(\nabla_{A\alpha} \chi_k\right) \rangle_r \, \nabla_{A\alpha}
    - \sum_{A,\alpha} \frac{1}{2M_A} \langle \chi_{k'} | \left(\nabla_{A\alpha}^2 \chi_k\right) \rangle_r .

The subscript r indicates that the integration inside the braket is over electronic coordinates only. The round brackets indicate the range of differentiation.
Assume that the off-diagonal matrix elements may not be neglected (in agreement with the assumption that only two surfaces approach each other, off-diagonal matrix elements with k, k′ > 2 are negligible, so that only a set of two coupled equations has to be considered). Upon making the expansion

    \Psi(\mathbf{r},\mathbf{R}) = \chi_1(\mathbf{r};\mathbf{R})\,\varphi_1(\mathbf{R}) + \chi_2(\mathbf{r};\mathbf{R})\,\varphi_2(\mathbf{R}),

the two coupled nuclear Schrödinger equations take the form (see the article Born-Oppenheimer approximation)

    \big[T_n + E_k(\mathbf{R})\big]\varphi_k(\mathbf{R})
    + \sum_{k'=1,2} \big(\mathrm{T_n}(\mathbf{R})_{kk'} - \delta_{kk'} T_n\big)\varphi_{k'}(\mathbf{R})
    = E\,\varphi_k(\mathbf{R}), \qquad k=1,2, \qquad (1)

where E is the total (electronic plus nuclear motion) energy of the molecule. In order to remove the problematic off-diagonal kinetic energy terms, two new orthonormal states are defined by a diabatic transformation of the adiabatic states χ_1(r;R) and χ_2(r;R),

    \begin{pmatrix} \tilde\chi_1(\mathbf{r};\mathbf{R}) \\ \tilde\chi_2(\mathbf{r};\mathbf{R}) \end{pmatrix}
    =
    \begin{pmatrix} \cos\gamma(\mathbf{R}) & \sin\gamma(\mathbf{R}) \\ -\sin\gamma(\mathbf{R}) & \cos\gamma(\mathbf{R}) \end{pmatrix}
    \begin{pmatrix} \chi_1(\mathbf{r};\mathbf{R}) \\ \chi_2(\mathbf{r};\mathbf{R}) \end{pmatrix},

where γ(R) is the diabatic angle. Transformation of the matrix of nuclear momentum ⟨χ̃_{k′} | ∇_{Aα} | χ̃_k⟩ for k′, k = 1,2 gives for the diagonal matrix elements

    \langle \tilde\chi_k \,|\, \nabla_{A\alpha}\,\tilde\chi_k \rangle_r = 0 .

These elements are zero because χ̃_k is real and ∇_{Aα} is Hermitian and pure-imaginary. The off-diagonal elements of the momentum operator satisfy

    \langle \tilde\chi_2 \,|\, \nabla_{A\alpha}\,\tilde\chi_1 \rangle_r
    = \nabla_{A\alpha}\gamma(\mathbf{R}) + \langle \chi_2 \,|\, \nabla_{A\alpha}\,\chi_1 \rangle_r .

Assume that a diabatic angle γ(R) exists, such that to a good approximation the right-hand side of the last equation vanishes, i.e.,

    \nabla_{A\alpha}\gamma(\mathbf{R}) = -\,\langle \chi_2 \,|\, \nabla_{A\alpha}\,\chi_1 \rangle_r ,

so that χ̃_1 and χ̃_2 diagonalize the 2 × 2 matrix of the nuclear momentum. By the definition of Felix Smith, χ̃_1 and χ̃_2 are then diabatic states.[1] (Smith was the first to define this concept; earlier the term diabatic was used somewhat loosely by Lichten.[2]) By a small change of notation these differential equations for γ(R) can be rewritten in the following more familiar form, reminiscent of Newton's equations,

    F_{A\alpha}(\mathbf{R}) \equiv -\,\langle \chi_2 \,|\, \nabla_{A\alpha}\,\chi_1 \rangle_r
    = \nabla_{A\alpha} V(\mathbf{R})
    \quad\text{with}\quad V(\mathbf{R}) \equiv \gamma(\mathbf{R}).

It is well-known that these differential equations have a solution (i.e., the "potential" V exists) if and only if the vector field ("force") F_{Aα}(R) is irrotational,

    \frac{\partial F_{A\alpha}(\mathbf{R})}{\partial R_{B\beta}}
    - \frac{\partial F_{B\beta}(\mathbf{R})}{\partial R_{A\alpha}} = 0 .

It can be shown that these conditions are rarely ever satisfied, so that a strictly diabatic transformation rarely ever exists. It is common to use approximate functions leading to pseudo-diabatic states.
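To make the two-state picture concrete, here is a minimal numerical sketch. The linear two-state model and its parameters below are illustrative choices, not taken from the article: a 2 × 2 diabatic potential matrix with diagonal terms that cross at x = 0 and a constant coupling c. Diagonalizing it yields the adiabatic surfaces with their characteristic avoided crossing, and the angle of the diagonalizing rotation plays the role of the diabatic angle γ.

```python
import numpy as np

# Illustrative two-state linear-coupling model (not from the article):
# diagonal diabatic terms +/- k*x cross at x = 0; constant coupling c.
k, c = 1.0, 0.2

def diabatic_matrix(x):
    return np.array([[ k * x,      c],
                     [     c, -k * x]])

def adiabatic_energies(x):
    # Diagonalizing the diabatic matrix gives the adiabatic surfaces,
    # which show an avoided crossing with gap 2|c| at x = 0.
    return np.linalg.eigvalsh(diabatic_matrix(x))

def mixing_angle(x):
    # Rotation angle connecting the two representations:
    # gamma = (1/2) * arctan2(2 V12, V11 - V22)
    V = diabatic_matrix(x)
    return 0.5 * np.arctan2(2 * V[0, 1], V[0, 0] - V[1, 1])

xs = np.linspace(-2, 2, 201)
lower, upper = np.array([adiabatic_energies(x) for x in xs]).T
gap = upper - lower

# The adiabatic gap is smallest at the diabatic crossing point x = 0;
# far from the crossing the two representations nearly coincide.
print(gap.min(), mixing_angle(0.0))
```

At the crossing the mixing angle reaches π/4 (maximal mixing of the two diabatic states), while far away it tends to 0 or π/2, where adiabatic and diabatic states coincide up to sign.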
Under the assumption that the momentum operators are represented exactly by 2 × 2 matrices, which is consistent with neglect of off-diagonal elements other than the (1,2) element, and the assumption of "strict" (not pseudo) diabaticity, it can be shown that

    \big\langle \tilde\chi_{k'} \,\big|\, H \,\big|\, \tilde\chi_k \big\rangle_r
    = \frac{E_1+E_2}{2}\,\delta_{k'k}
    + \frac{E_2-E_1}{2}
      \begin{pmatrix} -\cos 2\gamma & \sin 2\gamma \\ \sin 2\gamma & \cos 2\gamma \end{pmatrix}_{k'k}.

On the basis of the diabatic states the nuclear motion problem [Eq. (1)] takes the following form:

    \left( T_n + \frac{E_1+E_2}{2} - E \right)
    \begin{pmatrix} \tilde\varphi_1 \\ \tilde\varphi_2 \end{pmatrix}
    + \frac{E_2-E_1}{2}
    \begin{pmatrix} -\cos 2\gamma & \sin 2\gamma \\ \sin 2\gamma & \cos 2\gamma \end{pmatrix}
    \begin{pmatrix} \tilde\varphi_1 \\ \tilde\varphi_2 \end{pmatrix} = 0 .

It is important to note that the off-diagonal elements (that appear only in the second term on the left-hand side) depend on the diabatic angle and adiabatic electronic energies only. The adiabatic surfaces E_1(R) and E_2(R) are PESs obtained from clamped nuclei electronic structure calculations, and T_n is the usual nuclear kinetic energy operator defined above. Finding approximations for γ(R) is the remaining problem before a solution of the coupled nuclear Schrödinger equations can be attempted. Much of the current research in quantum chemistry is devoted to this determination. Once γ(R) has been found and the coupled equations have been solved, the final vibronic (vibration—i.e., nuclear motion—plus electronic) wave function in the diabatic approximation is

    \Psi(\mathbf{r},\mathbf{R}) = \tilde\chi_1(\mathbf{r};\mathbf{R})\,\tilde\varphi_1(\mathbf{R}) + \tilde\chi_2(\mathbf{r};\mathbf{R})\,\tilde\varphi_2(\mathbf{R}).

References

1. F. T. Smith, Diabatic and Adiabatic Representations for Atomic Collision Problems, Physical Review, vol. 179, pp. 111–123 (1969). DOI
2. W. Lichten, Resonant Charge Exchange in Atomic Collisions, Physical Review, vol. 131, pp. 229–238 (1963). DOI
A Physicist’s Guide to Machine Learning and Its Opportunities

Kendra Redmond, Editor

The ATLAS collaboration upgrades parts of its detectors in preparation for the LHC upgrade. Image by Maximilien Brice, copyright CERN.

As we browse, drive, watch, and order, data pours into the ether. Our behaviors and preferences are collected and analyzed, and in response, the world changes. The combination of advances in semiconductor computation devices and this new and extremely large influx of data has powered rapid growth in machine learning.

Evgeni Gousev is senior director at Qualcomm Technologies Inc. and chairman of the board of directors of the tinyML Foundation, www.tinyML.org. He has a PhD in solid state physics. Photo courtesy of Gousev.

“The world is becoming more digitized, whether or not we want it,” says Evgeni Gousev, a PhD physicist and senior director at Qualcomm Technologies Inc., a company working toward an internet-of-things reality where billions of devices are intelligently connected. “We are all living in a data-driven world now,” he says. Companies like Facebook (now Meta) are paying unfathomable sums of money to acquire technology startups, often for access to their data. And it’s not just tech companies buying tech startups. The pharmaceutical company Pfizer gave Israel COVID-19 vaccine priority in 2021, in part because Israel agreed to share health data on its citizens. “Data is an integral component of the digital economy,” says Sandeep Giri, a staff project manager at Google and honorary member of Sigma Pi Sigma. With data―and the ability to interpret it―comes power.
That may include economic power and the power to influence public opinion, but it can also include the power to improve access to education and healthcare, the diagnosis and treatment of diseases, car crash survival rates, severe weather predictions, our understanding of the universe, and many other aspects of the human experience. Finding meaning in massive amounts of messy, real-world data is a challenge, but it’s one that physicists are uniquely poised to tackle. We’re in the midst of “a once-in-a-century opportunity for physicists to play a bigger role in society,” says Gousev.

Machine learning

That opportunity lies in the rapidly growing field of machine learning. A subset of artificial intelligence (AI), machine learning is perhaps the most powerful tool we have for making sense of data that isn’t neatly organized or for which we don’t know all the governing rules. Machine learning describes a system in which an algorithm, or set of algorithms, learns from data and adapts―a process with striking parallels to applying physics to the real world. “Machine learning is essentially a system in which rather than building an algorithm or a model from an explicit description of desired behavior, we provide a set of examples that define the desired behavior of the system,” says Chris Rowen, vice president for engineering for Collaboration AI at Cisco. Rowen gives this example: Say you want a program that classifies something as a dog or a cat. You probably don’t want to try to describe what makes a cat a cat or what makes a dog a dog, in algorithmic terms. Instead, machine learning allows you to train a generalized system with a bunch of pictures of cats and dogs.
From these inputs, the system extracts the relevant features of dogs and cats and infers an algorithm that distinguishes between these two classes of inputs across a wide variety of kinds of pictures.1 “Machine learning is really great for cases where you don’t have an algorithm with explicit rules on how to accomplish a certain task,” says Michelle Kuchera, a computational physicist and assistant physics professor at Davidson College. She says that it’s also great for discovery―looking for patterns, outliers, or unexpected behavior in data—and for making fast theoretical predictions. In cases when a prediction would typically take an extremely long time to calculate, you can use machine learning to build a surrogate model that can make much faster calculations.

From toolbox to sandbox

Machine learning has direct applications in physics and astronomy research. As co-PI of the Algorithms for Learning in Physics Applications group at Davidson, Kuchera collaborates with theoretical and experimental physicists to address computational challenges. Machine learning is ideal for overcoming some of these challenges, such as identifying interesting particle interactions among the huge data sets produced at particle accelerators and speeding up time-consuming theoretical predictions. “If you look at the Large Hadron Collider (LHC), or any of the scientific instruments where there’s a lot of fine-tuning that’s all happening in real time with the magnets and so forth, and if you want to control them, it’s great to be able to do that using machine learning. . . . You’re going to infer very complex patterns much more easily,” says Vijay Janapa Reddi, associate professor of engineering and applied science at Harvard University. It’s not just the big particle physics collaborations that use machine learning.
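The learn-from-examples idea Rowen describes can be sketched in a few lines: instead of hand-coding rules for two classes, a model infers a decision boundary from labeled examples. The toy feature clusters and the simple perceptron below are illustrative choices, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled examples: two feature clusters standing in for
# "pictures of cats" and "pictures of dogs".
cats = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))
dogs = rng.normal(loc=[ 2.0,  2.0], scale=0.5, size=(50, 2))
X = np.vstack([cats, dogs])
y = np.array([-1] * 50 + [1] * 50)    # -1 = cat, +1 = dog

# A perceptron: no explicit cat/dog rules, just weight updates
# driven by the labeled examples.
w = np.zeros(2)
b = 0.0
for _ in range(20):                   # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:    # misclassified -> update
            w += yi * xi
            b += yi

predictions = np.sign(X @ w + b)
accuracy = (predictions == y).mean()
print(accuracy)
```

Real image classifiers replace the hand-picked 2D features with representations learned by deep networks, but the training loop follows the same pattern: examples in, decision rule out.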
Scientists are using it to design new materials, find turbulent motion on the sun, uncover anomalies in the US power grid, give robots humanlike sensitivity to touch, and much more. Machine learning isn’t a magic bullet for all situations. “If you have a really solid understanding of the physics and the explicit mathematical rules to accomplish a task that you’re interested in, then that’s the preferred method, unless there’s some challenge with implementing it or it’s taking too long to be reasonable,” Kuchera says. But it’s one more powerful tool in the data analysis toolbox. Applying machine learning to areas outside of physics and astronomy also constitutes a gratifying and fulfilling career for many physicists and astronomers. For his PhD thesis, Sean Grullon studied neutrino fluxes at the IceCube particle detector at the South Pole. He dabbled with machine learning at times, as one of many data analysis techniques. When he graduated and decided to leave academia, machine learning was starting to take off. Grullon jumped in and has been applying machine learning to healthcare-related challenges ever since. He’s now the lead AI scientist at Proscia, a startup that builds tools to help pathologists find better ways to fight cancer. They’re using deep learning, a subset of machine learning that utilizes neural networks, to analyze pathology images for melanoma. Deep learning is particularly powerful for natural language processing and computer vision applications, which are notoriously difficult to do with conventional approaches. “A physics background is really appropriate for the field of deep learning,” Grullon says, in part because of the math background physics requires―most machine learning algorithms reflect different applications of linear algebra—and in part because physicists understand data. Compared to what you might find in a computer science class, data from the real world is messy. 
But physicists are comfortable with error bars, uncertainties, and probabilities. Grullon has found his career path to be gratifying. “I’ve found it rewarding, very interesting, and also very impactful,” he says. Machine learning is “a wonderful, wide-open area,” says Helen Jackson, a PhD nuclear physicist and machine learning researcher. Jackson’s PhD thesis focused on the effects of radiation on high-electron-mobility transistors. Upon graduation she had lots of data analysis and software experience, and while looking for a job, she taught herself machine learning. That opened the door to a position applying deep learning to airport security―using computer vision to detect threats in the cluttered airport environment. Since then she’s worked as a contractor on machine learning applications ranging from position-sensitive detection in computer vision to complex document understanding. In Jackson’s opinion, physicists are primed to work in machine learning. She has found that some companies “actually prefer to hire someone like a physicist or chemist rather than a straight computer science major, because a computer science major knows the mechanics of the code, but we know the underlying application and what this machine learning [system] is supposed to do.” She says the work is a lot of fun, and the applications are “just fascinating.” At Cisco, Rowen leads the team charged with improving the audio and video environment of the WebEx collaborative platform with machine learning and AI. With a bachelor’s degree in physics and a PhD in electrical engineering, he finds the mixture of important societal questions, computer architecture, and fundamental physics in machine learning fascinating. Machine learning deals with computationally hard problems, like what makes up speech, but uses physical systems that you can trace all the way down to electrons, Rowen says. 
“This continuity of understanding from physics on up through the computer architecture questions, the computationally hard algorithm questions, and the application questions surrounding machine learning and neural networks has been so exciting and interesting,” he says.

Opportunities for physicists

Sandeep Giri is a staff project manager at Google, cloud.google.com/tpu, and on the AIP Foundation’s board of trustees. He has a BS in physics and an MS in materials science and engineering. Photo courtesy of Giri.

Gousev earned his PhD in solid state physics and has spent most of his career at IBM and Qualcomm developing new technologies, many involving machine learning. As the AI-based economy comes racing toward us, he sees not just an opportunity but a need for physicists to get involved. “We look at the whole world around us through a different type of lens, through a different type of mindset. We look at connecting dots in the environment, because we’ve been trained to look at the laws of physics and understand how things are connected in the world,” he says. That holistic picture of machine learning ranges from electrical components to program architecture and even ethics. What is the problem? What are possible solutions? Should we even be solving this problem? Who else might utilize this solution? What biases and inequities might emerge if this method is used with other data sets, like data on humans? Sorting through these questions requires a well-equipped, critically thinking, and creative workforce. “We have to prepare students for this new economy, and I strongly believe physics departments have a big opportunity,” Giri says. But taking advantage of that opportunity will require some changes. “There is a disconnect between physics departments and the AI-based economy that is inevitably coming our way,” he says.
After earning bachelor’s degrees in physics and mathematics, Giri was on his way to a PhD in materials science and engineering when he decided to change course and take a job in industry. He’s worked at Qualcomm and then for Google on projects ranging from head-mounted displays to supercomputers. He’s also been an advisor for undergraduate physics education efforts through the American Institute of Physics (the parent organization of Sigma Pi Sigma) and the American Physical Society, and is a board member of the AIP Foundation. Giri says that the tools exist to prepare physics and astronomy students for this new paradigm, but physics departments need to embrace them. Physics departments often leave students feeling intimidated by and unprepared for careers in industry, whether by lack of knowledge or in favor of promoting a more traditional academic degree path. Many young students think the only physics career path is academia, and some choose not to major in physics for this reason. “I believe that a majority of physics majors today don’t only want to learn Newton’s laws or the Schrödinger equation. They want to know ‘What type of skills do I need to solve the problems that bring meaning to me? How can I build a product or service that leaves an impact on the world?’” Giri says. “Physics students would benefit from an awareness of all the technical and nontechnical career paths that exist in the machine learning and AI space,” says Giri. That ranges from software engineering to hardware design, systems engineering, supply chain, operations, product and project management, sales and business development, and beyond. These are all careers that people with a physics background can and do grow into. 
Machine learning is at the intersection of skills, opportunity, and change-the-world capacity, and that’s a huge opportunity for physics and astronomy departments to attract and retain new students―including students from groups that are traditionally underrepresented in physics. For example, in 2020 the TEAM-UP report noted the following key findings during its study of systemic issues that contribute to the underrepresentation of African Americans in physics and astronomy:2

• The connection of physics to activities that improve society or benefit one’s community is especially important to African American students.

• Having multiple pathways into and through the major helps to recruit and retain students who may not have initially considered physics or astronomy as an option.

There is a vast set of existing resources that departments, physics students, and professional physicists can utilize to take advantage of machine learning and its opportunities. Many are free or low cost and don’t require anything but curiosity, a willingness to learn and explore, some logical thinking, and a bit of math―all things every physicist and astronomer has in good measure.

1. To read more about classification algorithms in machine learning, see Sidath Asiri, “Machine Learning Classifiers,” Towards Data Science (blog), June 11, 2018, towardsdatascience.com/machine-learning-classifiers-a5cc4e1b0623.

2. The TEAM-UP report was written by the AIP National Task Force to Elevate African American Representation in Undergraduate Physics & Astronomy (TEAM-UP) in 2020. It’s the result of a two-year investigation into the long-term systemic issues within physics and astronomy that have contributed to the underrepresentation of African Americans in these fields and includes actionable recommendations for reversing the trend.
See TEAM-UP Task Force, The Time Is Now: Systemic Changes to Increase African Americans with Bachelor’s Degrees in Physics and Astronomy (American Institute of Physics, 2020), www.aip.org/diversity-initiatives/team-up-task-force.

Spotlight on TinyML

In its early days, machine learning was done at large-scale data centers, but now the technology has moved into our phones and homes—think Alexa and Siri. There’s so much data that it’s not cost-effective, energy efficient, or at times even practical to move all of this data into the cloud for processing. In the cutting-edge research area of TinyML (tiny machine learning), scientists are running machine learning models on ultra-low-power microcontrollers. They aim to keep the data processing as close as possible to the data, thereby enabling always-on sensors or other devices, more secure networks, and the ability to add features like voice recognition to small devices that can’t be recharged frequently. Learn more at www.tinyML.org.

Get Up to Speed on Machine Learning

There are many widely available, internet-based resources on machine learning. This list is compiled from recommendations given by the physicists interviewed for A Physicist’s Guide to Machine Learning and Its Opportunities. In most cases URLs are not listed, but if you’re interested in machine learning you won’t have any trouble finding them.

Learning Python

• Machine learning is commonly done using Python. Google’s Python Class and Microsoft’s Introduction to Python are good, free online classes.

Blogs and background

• To get a sense of machine learning, its vocabulary, and what’s happening in the field, check out blogs like Google AI, Facebook AI, Berkeley AI Research, and Stanford AI Lab. If what they’re writing about excites you, that’s a good indication you should investigate it further.

• Towards Data Science is another great blog if you’re just getting started.
They have a lot of introductory articles that explain machine learning and deep learning algorithms and how to get started. Setting up your system • Scikit has a package for Python for machine learning with a good overview of machine learning algorithms and how to incorporate them in Python. • Environments like TensorFlow (Google) and PyTorch (Facebook) allow you to quickly build models for whatever kind of data you have. Online courses • Platforms like edX, Coursera, Udemy, and Udacity have free or low-cost Python classes and machine learning classes with projects that you can complete and show a prospective employer. Andrew Ng’s machine learning course out of Stanford is very popular, and it’s free on Coursera. • HarvardX’s Tiny Machine Learning (TinyML) and Google are collaborating on a series of courses focused on TinyML. The courses cover topics from the fundamentals of machine learning to collecting data, designing and optimizing machine learning models, and assessing their outputs. The first three courses are available now on edX, https://tinyml.seas.harvard.edu/courses/. • The Google Cloud AI Platform has tools, videos, and documentation for data science and machine learning, https://developers.google.com/learn/topics/datascience. The following resources may be especially helpful: - Google’s codelab “TensorFlow, Keras and deep learning, without a PhD” - Online learning channel, www.youtube.com/user/googlecloudplatform - Product documentation, https://cloud.google.com/docs • fast.ai has courses, tools, and articles for people interested in getting into machine learning. Getting data • Don’t have data? There are public domain data repositories with data on almost anything you could want, and most machine learning courses will direct you to them. Kaggle has lots of public datasets. 
Resources for teaching
• In support of departments that want to teach their students about machine learning, Harvard has made much of its TinyML content and classroom materials available under an open-source license at https://tinyml.seas.harvard.edu/#courses.
Saturday, May 08, 2021 What did Einstein mean by “spooky action at a distance”? [This is a transcript of the video embedded below.] Quantum mechanics is weird – I am sure you’ve read that somewhere. And why is it weird? Oh, it’s because it’s got that “spooky action at a distance”, doesn’t it? Einstein said that. Yes, that guy again. But what is spooky at a distance? What did Einstein really say? And what does it mean? That’s what we’ll talk about today. The vast majority of sources on the internet claim that Einstein’s “spooky action at a distance” referred to entanglement. Wikipedia for example. And here is an example from Science Magazine. You will also find lots of videos on YouTube that say the same thing: Einstein’s spooky action at a distance was entanglement. But I do not think that’s what Einstein meant. Let’s look at what Einstein actually said. The origin of the phrase “spooky action at a distance” is a letter that Einstein wrote to Max Born in March 1947. In this letter, Einstein explains to Born why he does not believe that quantum mechanics really describes how the world works. He begins by assuring Born that he knows perfectly well that quantum mechanics is very successful: “I understand of course that the statistical formalism which you pioneered captures a significant truth.” But then he goes on to explain his problem. Einstein writes: “I cannot seriously believe [in quantum mechanics] because the theory is incompatible with the requirement that physics should represent reality in space and time without spooky action at a distance...” There it is, the spooky action at a distance. But just exactly what was Einstein referring to? Before we get into this, I have to quickly remind you how quantum mechanics works. In quantum mechanics, everything is described by a complex-valued wave-function usually denoted Psi. 
From the wave-function we calculate probabilities for measurement outcomes, for example the probability to find a particle at a particular place. We do this by taking the absolute square of the wave-function. But we cannot observe the wave-function itself. We only observe the outcome of the measurement. This means most importantly that if we make a measurement for which the outcome was not one hundred percent certain, then we have to suddenly "update" the wave-function. That's because the moment we measure the particle, we know it's either there or it isn't. And this update is instantaneous. It happens at the same time everywhere, seemingly faster than the speed of light. And I think that is what Einstein was worried about, because he had explained that already twenty years earlier, in the discussion of the 1927 Solvay conference. In 1927, Einstein used the following example. Suppose you direct a beam of electrons at a screen with a tiny hole and ask what happens with a single electron. The wave-function of the electron will diffract on the hole, which means it will spread symmetrically into all directions. Then you measure it at a certain distance from the hole. The electron has the same probability to have gone in any direction. But if you measure it, you will suddenly find it in one particular point. Einstein argues: "The interpretation, according to which [the square of the wave-function] expresses the probability that this particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance, which prevents the wave continuously distributed in space from producing an action in two places on the screen." What he is saying is that somehow the wave-function on the left side of the screen must know that the particle was actually detected on the other side of the screen. In 1927, he did not call this action at a distance "spooky" but "peculiar", but I think he was referring to the same thing.
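The update rule just described can be made concrete in a short sketch. The four-position wave-function below and its amplitudes are invented purely for illustration; the point is only the Born rule and the instantaneous update:

```python
import math

# An invented wave-function over four possible positions (complex amplitudes).
psi = [0.5 + 0.5j, 0.5j, 0.5, 0.0]
norm = math.sqrt(sum(abs(c) ** 2 for c in psi))
psi = [c / norm for c in psi]  # normalize so the probabilities sum to 1

# Born rule: the probability to find the particle at position x is |psi(x)|^2.
probs = [abs(c) ** 2 for c in psi]
print(probs)  # ≈ [0.5, 0.25, 0.25, 0.0]

# Measurement update: suppose the particle is detected at position 2.
# The wave-function changes instantaneously, everywhere at once:
# the amplitudes at all other positions drop to zero.
psi = [0.0, 0.0, 1.0, 0.0]
print([abs(c) ** 2 for c in psi])  # now certain: [0.0, 0.0, 1.0, 0.0]
```

Before the measurement, only probabilities can be stated; after it, one outcome is certain, and that jump is the "update" in question.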
However, in Einstein's electron argument it's rather unclear what is acting on what, because there is only one particle. This is why Einstein, together with Podolsky and Rosen, later looked at the measurement for two particles that are entangled, which led to their famous 1935 EPR paper. So this is why entanglement comes in: because you need at least two particles to show that the measurement on one particle can act on the other particle. But entanglement itself is unproblematic. It's just a type of correlation, and correlations can be non-local without there being any "action" at a distance. To see what I mean, forget all about quantum mechanics for a moment. Suppose I have two socks that are identical, except the one is red and the other one blue. I put them in two identical envelopes and ship one to you. The moment you open the envelope and see that your sock is red, you know that my sock is blue. That's because the information about the color in the envelopes is correlated, and this correlation can span over large distances. There isn't any spooky action going on though, because that correlation was created locally. Such correlations exist everywhere and are created all the time. Imagine for example you bounce a ball off a wall and it comes back. That transfers momentum to the wall. You can't see how much, but you know that the total momentum is conserved, so the momentum of the wall is now correlated with that of the ball. Entanglement is a correlation like this, it's just that you can only create it with quantum particles. Suppose you have a particle with total spin zero that decays into two particles that can have spin either plus or minus one. One particle goes left, the other one right. You don't know which particle has which spin, but you know that the total spin is conserved. So either the particle going to the right had spin plus one and the one going left minus one, or the other way round.
According to quantum mechanics, before you have measured one of the particles, both possibilities exist. You can then measure the correlations between the spins of both particles with two detectors on the left and right side. It turns out that the entanglement correlations can in certain circumstances be stronger than non-quantum correlations. That's what makes them so interesting. But there's no spooky action in the correlations themselves. These correlations were created locally. What Einstein worried about instead is that once you measure the particle on one side, the wave-function for the particle on the other side changes. But isn't this the same with the two socks? Before you open the envelope the probability was 50:50, and then when you open it, it jumps to 100:0. But there's no spooky action going on there. It's just that the probability was a statement about what you knew, and not about what really was the case. Really, which sock was in which envelope was already decided at the time I sent them. Yes, that explains the case for the socks. But in quantum mechanics, that explanation does not work. If you think that it really was decided already which spin went into which direction when they were emitted, that will not create sufficiently strong correlations. It's just incompatible with observations. Einstein did not know that. These experiments were done only after he died. But he knew that using entangled states you can demonstrate whether spooky action is real, or not. I will admit that I'm a little defensive of good, old Albert Einstein, because I feel that a lot of people too cheerfully declare that Einstein was wrong about quantum mechanics. But if you read what Einstein actually wrote, he was exceedingly careful in expressing himself, and yet most physicists dismissed his concerns. In April 1948, he repeats his argument to Born.
He writes that a measurement on one part of the wave-function is a "physical intervention" and that "such an intervention cannot immediately influence the physical reality in a distant part of space." Einstein concludes: "For this reason I tend to believe that quantum mechanics is an incomplete and indirect description of reality which will later be replaced by a complete and direct one." So, Einstein did not think that quantum mechanics was wrong. He thought it was incomplete, that something fundamental was missing in it. And in my reading, the term "spooky action at a distance" referred to the measurement update, not to entanglement.
1. "In nineteen 27" "In April nineteen 48" I guess that the transcription was done by software. :-)
2. Hello Sabine, this is the topic that interests me the most. At the moment only two spelling mistakes, or not even that: "here is an example from Science" - a link would be nice, if you have one. "nineteen 47" and the same with 27 looks a bit strange. They are not really errors... Best regards and enjoy the good weather
3. So Dr. Hossenfelder has quantum-entangled all the socks in the Universe somehow, so that half of mine somehow lost their partners. Seriously though, I appreciate the explanation of what S.A.A.A.D actually referred to, since most descriptions refer to entanglement.
4. The point is that the case with a global update of the wave function after registration of a particle does not fundamentally differ from the case of two entangled particles (or socks). Both in the first and in the second case, we are talking about a formal transition from knowledge about the statistical distribution to knowledge about a single event (see my blog). I also think that Einstein understood this affinity of situations, and his statements were implicitly about both the former and the latter cases.
In my opinion, Einstein wanted to show with this remark that Bohr's interpretation, based on understanding the wave function as a property of an individual particle, is contradictory. Einstein, as you know, offered his own interpretation, which was based on the understanding of the wave function as a description of some statistics of particles, and not one particle. Later this interpretation was called the 'ensemble interpretation', and was supported by Blokhintsev in Russia. But Einstein failed to convince most physicists.
5. Sabine, thanks for raising this interesting question. I always thought that Einstein's "spooky" relates to entanglement. But I wonder, if the "update" (later called collapse) of the wave function worried him, then why didn't he appreciate Everett's relative-states interpretation, which avoids the collapse? I understand that entanglement is just "a type of correlation". But isn't the real spooky thing the fact that the two entangled particles behave as if they were one particle, regardless of being theoretically separated light years from each other? In an instant of time the outcome at A fixes the outcome at B. Einstein died 2 years before Everett finished his PhD thesis.
2. Even though Everett avoids the collapse when the observer updates his knowledge, it is not clear whether he also avoids nonlocality. Not avoiding nonlocality is not necessarily a bug, it could also be a feature. After all, quantum mechanics itself is nonlocal. So when Jim Al-Khalili says "... I think more in the direction of de Broglie-Bohm, because metaphysically I find it hard to buy into an infinite number of realities coexisting just because an electron on the other side of the universe decided to go left and right and the universe split into ..." in Fundamental physics and reality | Carlo Rovelli & Jim Al-Khalili, he is not complaining about Everett being just as nonlocal as other interpretations.
Instead, he highlights a case of "the tail wagging the dog", because an insignificant decision of a minuscule electron incredibly far away will split my universe.
3. Scott, you are right, so one can only speculate how Einstein would have thought about MWI.
4. Jakito, I think whether the collapse of the wave function is a serious problem is interpretation dependent. MWI followers and instrumentalists don't have this problem. Zeilinger, e.g., asks how a mathematical rule can collapse. In contrast, however, the EPR results can't be "explained away" and thus represent the nonlocality of quantum mechanics unambiguously. And as I understand it, Einstein hoped to disprove what he understood as "spooky" by means of this Gedankenexperiment. At that time he couldn't imagine that he was wrong.
5. Timm, in many ways I am an instrumentalist. As an instrumentalist, I find it attractive to believe that nonlocal randomness is a feature of QM. But the decision of the electron at the other end of the universe doesn't help to generate randomness where I am. And it could only help me to create nonlocal randomness together with the other end of the universe if it had a significant entanglement with a "qubit" at my place. The rate of branching of worlds from the perspective of MWI is probably orders of magnitude higher than the rate of actual randomness that can be generated where I am. You wrote that Einstein couldn't imagine that he was wrong. Einstein certainly believed that the prediction of QM would be the one actually observed in nature. He probably didn't expect Bell's inequality, but I am not sure whether this is enough to claim that Einstein was wrong. He knew about the riddle of conservation of energy, angular momentum, and linear momentum. He didn't believe in a simple solution to that riddle in the form of particles which explicitly carried those locally preserved properties.
6.
Jakito, regarding MWI I have two main objections: first, it's hard to believe that the wave function is ontic; second, you have to forget the Born rule. Also, that unitarity holds isn't set in stone. So, welcome, dear instrumentalist. I agree that "Einstein couldn't imagine that he was wrong" falls short. He was convinced of local realism and that the EPR nonlocality can be taken as a sign that quantum theory is not complete.
6. Sabine, What a delightful topic! I liked your emphasis on how Einstein first got himself into hot water with the quantum community he helped create with his brilliant insight into the need for photon quantization. (It's also ironic that this was the work, not relativity, for which Einstein got his only Nobel Prize.) My favorite rendition of the incident in which Einstein first shocked the quantum community is on page 111 of the carefully researched but written-as-if-you-were-there book The Age of Entanglement [1] by Louisa Gilder. Einstein's talk immediately followed de Broglie's proposal of pilot waves. As a young Frenchman proposing abject heresy to an audience of mature German physicists, de Broglie had gotten a cold reception indeed. Einstein unexpectedly supported de Broglie by pointing out that Born's wave-collapse re-interpretation of what had previously been wave intensity contained a profound paradox. Einstein noted that "This [Born] interpretation presupposes a very peculiar mechanism of instantaneous action-at-a-distance to prevent the [electron particle] wave from acting on more than one place on the screen." Einstein's example also highlights a sad episode in more recent physics. All waves become chaotic over time. If that chaos is allowed to become infinitely detailed, it leads to the ultraviolet catastrophe, the generation of infinite energy via the emergence of infinitely complex harmonics. Without quantization, this happens to all waves, even classical waves if one ignores friction.
While entertaining, many-worlds was also a colossal waste of time because it ignored Einstein’s point from 1927: Matter waves (electrons) must quantize, just as photons must. Many-worlds is what happens if the observably false ultraviolet catastrophe is applied to matter waves. Thus many-worlds cannot even qualify as a legitimately “quantum” theory. To be precise, it is an entirely classical, conservation-ignoring example of infinite precision math run amok. It’s excellent sci-fi fodder, but it’s not physics. Back to spooky action: The most delightful and intriguing explanation I’ve ever encountered of why “spooky action” is not really spooky was by the remarkable Asher Peres. Peres had an utterly unique way of interpreting reality that was impressive for its self-consistency, if nothing else. Peres, for example, quite rightly castigates Einstein for contradicting Einstein by using the word “instantaneous” several times in the EPR paper. It was, after all, Einstein who had vividly shown that this word has no experimental meaning! Here is Asher Peres’ explanation [2] of why spooky action at a distance is a figment of our imaginations: When Alice measures her spin, the information she gets is localized at her position, and will remain so until she decides to broadcast it. Absolutely nothing happens at Bob’s location. From Bob’s point of view, all spin directions are equally probable, as can be verified experimentally by repeating the experiment many times with a large number of singlets without taking in consideration Alice’s results... For Bob, the state of his particle suddenly changes, not because anything happens to that particle, but because Bob receives information about a distant event. Quantum states are not physical objects: they exist only in our imagination. Every time I read that, I agree enthusiastically with every word Peres said and have no real idea what he means. 
That is, while I understand what Peres is saying intellectually, I cannot think like Peres did. I don't think many people can. He understood reality differently from most of us, and arguably with more self-consistency even than Einstein. What a great mind!
[1] Louisa Gilder, The Age of Entanglement: When Quantum Physics Was Reborn. Vintage Books, 2008.
[2] Asher Peres, Quantum Information and General Relativity. Fortschritte der Physik: Progress of Physics, Wiley Online Library, 2004, 52, 1052-1055. Online:
1. Peres was an antirealist with respect to the meaning of the quantum state. "It is not measurable directly, therefore it's a calculational tool." The real problem is not that it is a calculational tool, but that, as a tool (together with the projection postulate), it is not compatible with General Relativity on microscopic (how do two states interact gravitationally? what happens at the end of black hole evaporation?), mesoscopic (what is the gravitational field of a gravcat?), stellar (where does the Hawking radiation originate?) and Galactic scales (why does MOND work better than LCDM for rotation curves and BTFR?). Sabine had a number of illuminating posts about these issues. If you ignore gravity, there is not much to worry about; unitary evolution plus projection explains nearly everything we observe, and that's as much as an antirealist can hope for.
2. Terry, there is nothing surprising in the fact that the transition from a statistical function (quantum state) to the values of variables (measurement results) takes place in the formalism and in the mind of the researcher, and not in space-time. It would be strange if it were the other way around. After all, the 'quantum state' does not disappear anywhere with measurement. Continuing a series of experiments in uniform conditions, you will receive uniform statistical results. Statistics are not tied to individual cases.
I think that Einstein's failure to convince his young colleagues that he was right was also due to the fact that the new generation of physicists wanted to think that in quantum physics they are dealing with something special that distinguishes this field from the classical one. It was a kind of attempt at revolution not only in physics but also in the philosophical approach. Therefore Einstein's objections may have seemed too retrograde to them. Hence their attempts at a total separation of the 'classical' device and the 'quantum system', leading to confusion that physics has not overcome until now. There are differences between quantum objects and classical ones, but they are not as insurmountable as Bohr imagined.
3. Terry Bollinger, "When Alice measures her spin, the information she gets is localized at her position" Information about what? About Bob's particle, maybe? "Absolutely nothing happens at Bob's location." Right, so the particle already had that state (-h/2 in the paper). "From Bob's point of view, all spin directions are equally probable" So what? Who cares? The EPR argument does not make any assumption about the distant observer (if any). The only relevant observation is that Alice can predict the result of a measurement before that measurement takes place. This can only be explained in two ways: 1. The particle had that state since the time of emission, or 2. The particle acquires that state when its partner was measured. 1 is the hidden-variable explanation, 2 is non-local. So, Einstein's point was that the assumption that QM is complete (the rejection of hidden variables) leads to non-locality. Peres does nothing to challenge EPR. His considerations about Bob's view are irrelevant; it's just a red herring.
4. Fuchs and Peres wrote Quantum Theory Needs No 'Interpretation'. I understand why Fuchs did that, but it is an utter mystery to me why Peres coauthored that paper.
A long time after I read that article, I dived into certain chapters of Peres' book Quantum Theory: Concepts and Methods. I was surprised that Peres can actually be quite reasonable, even though I did not trust everything he said. For example, he says in his book that there is no time-energy uncertainty. I don't fully understand his argument, and I am not sure whether he is right or wrong. But because of that article with Fuchs, the possibility that he might be wrong about the time-energy uncertainty seems very real to me. Without that article, I probably would have tried much harder to understand his argument.
5. Unperformed experiments have no results. Asher Peres wrote this long before he wrote his book Quantum Theory: Concepts and Methods. It is spot on, and highlights a key property of QM. It is actually the title of a paper with only 3 pages, so I just read it. Turns out that paper itself is really good too. In fact, it was so good that I read that other article again, just to be sure that it is really "not great". Indeed, it is disappointing, and represents neither Copenhagen, nor QBism, nor Peres' own antirealism.
7. "Quantum states are not physical objects: they exist only in our imagination." It's interesting why theorists working with information don't consider (at least explicitly don't mention) that information about where to look is in itself a kind of guide, e.g. like where to direct a telescope for an astronomer. Even Galilean relativity only makes sense to one who has either heard of it (and solved a bunch of issues and observed behavior around himself) or re-discovered it after running a lot of experiments on his own, so coming up with a principle. It's true that it works anyway, whether the observer knows of it or not. But it's not true that knowledge of the principle will not change the behavior of the observer. It does change it. I'm not sure how physicists deal with it.
Sloppily looking over von Neumann's text, it seemed that he proved that whenever we make the cut in QM, the calculations lead to similar results. But that is only for the cases where we know what we are looking for. And the information based on which we choose what to look for seems to be neglected. I might be terribly confused here, but it's the question that keeps popping up. It's like a lot of tacit knowledge of what is already known is neglected, but which is made available due to collective knowledge in rotation. Like making measurements on different branches forgetting about the information contained in the root from which they branched (yeah-yeah, Markov's evolution and Bell's causality make situations of measurement equivalent, but that's not the point; it seems that prerequisite information for the settings is neglected). E.g. an aborigine on an island would not benefit if he discovered Newton's Principia (even if he could read it). And in order to benefit from it, he has to at least study physics, i.e. perform some proof of work. But how can that "PoW done by the aborigine" be metricated and included in the measurement (even if it doesn't change it)? Currently it's implicitly included in the education process, peer review, and experimental evidence. But from at least one perspective it seems to be fuzzy (that of course might be a characteristic of such a perspective). PS A fantastic video!
1. Hi Vadim, I reckon you're not giving your indigenous islander enough credit. They're on the same level as the average Alice, Bob, or whoever. The only reason I know slightly more than your aborigine is that I've had the benefit of education including science, and the internet. :)
8. The correlations with entanglement are stronger than the Bertlmann's socks idea. If a spin 0 state decays into two spin states, usually we think of 1/2 spins, say of an electron and positron; then at face value it does seem to fit the socks idea.
However, if the electron enters a region with a magnetic field, and at the opposite side we have a radio-frequency detector, the electron spin resonance is found, as the positron will transition states. This is for the positron far away from the magnetic field. If the socks had color change with temperature, the warming of one so its color changes will have no influence on the other. Realistically I suppose we could think of a photon entering a birefringent crystal with polarization-dependent refraction. This produces two parametrically down-converted entangled photons. If one photon interacts so its polarization rotates, then so will the polarization of the other photon. At the end of it all this is really about quantum phases. The collapse of a wave function is a switch from a superposition of a wave to an entanglement of that state with a needle state. The apparent collapse is then something which lies outside of quantum mechanics, which is a continuous deterministic evolution of a wave.
1. This comment has been removed by the author.
2. (after correcting mistakes) Lawrence, just my one pence. If the electron enters a region with a magnetic field, at the opposite side nothing happens; otherwise we would already have the superluminal connection in action. Thus, in the quantum case, as in the case of socks, if we start to do something with one sock, nothing happens to the second. But with a large number of pairs of socks we can still change the conditions for registering socks at one end, for example instead of all socks start registering only those arriving in an upright position. And then the correlation between the position of the socks in the entangled pairs will 'instantly' change. True, we learn about the change in this correlation only a posteriori, when we compare the information on the properties of the registered socks at both ends. Thus, the instant transfer of information from A to B does not work, and the principles of the theory of relativity are not violated.
3.
Igor, I am afraid you are making the usual error of thinking that large-sample-space statistics recovers quantum strangeness. Classical statistics obey the Bell inequalities, but quantum physics violates them.
4. Lawrence, in addition to the fact that there is some clever way to arrive at a violation of Bell's inequalities by classical means (see my blog), and to make sure that their violation is not a property of exclusively quantum systems, there is another important consideration in favor of the absence of fundamental differences between quanta and classics. As you know, in classical mechanics all events, including, of course, measurements, are distributed in advance and are located in the Minkowski space-time. They 'already are' always, although their time may not have come yet. If we reason in a similar way for quantum systems, and assume that the probabilities of the distribution of measurement results are not true, but only apparent to us due to our ignorance, and all future measurements indeed already exist and are built into the Minkowski space-time, then the question of when the properties of particles are born is resolved. The properties of future particles already exist and are distributed in the Minkowski space-time, yet in the future. For some reason, physicists call this 'superdeterminism', but if you look at it, it is in no way different from Newtonian-Einsteinian determinism.
5. I guess I would be more enticed if this were a published paper, and even more a report on experiments.
6. Lawrence, the results in question have so far been published only in my blog, but very similar results for the discrete case can be read in: Richard David Gill, 'The Triangle Wave Versus the Cosine: How Classical Systems Can Optimally Approximate EPR-B Correlations'.
9. I always enjoy your articles, Sabine, though this may be the first time I've said so, and this is one of the best.
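The disagreement here can be checked numerically. The sketch below is illustrative only: it compares the quantum singlet correlation with one particular local model in which each pair carries a direction fixed at the source (which reproduces exactly the triangle-wave correlation of the Gill paper mentioned above). The detector angles are the standard CHSH choice; the sample size and the seed are arbitrary:

```python
import math, random

def E_quantum(a, b):
    # Correlation of spin measurements on a singlet pair, as predicted
    # by quantum mechanics, for detector angles a and b.
    return -math.cos(a - b)

def E_local(a, b, n=200_000):
    # A local hidden-variable model: each pair carries a direction lam
    # that was fixed at the source, like the color of the socks.
    rng = random.Random(0)  # seeded so the estimate is reproducible
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)
        alice = 1 if math.cos(a - lam) >= 0 else -1
        bob = -1 if math.cos(b - lam) >= 0 else 1  # perfectly anti-correlated
        total += alice * bob
    return total / n

def chsh(E):
    # CHSH combination for the standard choice of detector angles.
    a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

print(chsh(E_quantum))  # 2*sqrt(2), about 2.83: violates the classical bound
print(chsh(E_local))    # close to 2: a local model cannot do significantly better
```

The predetermined-direction model reproduces the perfect anti-correlation at equal angles, yet its CHSH value stays at Bell's bound of 2, while the quantum prediction reaches 2√2. That is the sense in which "decided at the source" correlations are too weak.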
I was beginning to think that I understood quantum entanglement, and even that it was obvious, until I got to the bit where you say "Yes, that explains the case for the socks. But in quantum mechanics, that explanation does not work."
1. Hi Athel, I just wondered if an analogue for socks would be: you don't know if there's a pair of socks or only one until you take your laundry out of the washing machine and put it in a clothes basket. If you leave one in the machine, it's automatically correlated with the one in the basket. You won't find out which state the socks are in - singletons or pairs - until you are in a hurry to go somewhere and you open your sock drawer to find there's only one sock of the pair you wanted.
10. Quantum theory is best understood as a form of perspectivism, not physicalism. Nietzsche introduced the idea of perspectivism: in the final analysis, all we really have is a manifold of interlocking perspectives. For example, consider the following toy model. If human perspectives are finite, represent each possible human perspective by a small non-empty subset of {1,...,n}, where n is a large natural number. Then there are minimal perspectives, but no maximal human perspective. Still, there is an ideal finite perspective which sees everything! If n = infinity, then there is still an ideal infinite perspective which sees everything! (God's-eye view!) If one accepts the standard quantum logic, then one has a manifold of perspectives which cannot, by Gleason's theorem, be embedded into any single perspective! There are now maximal perspectives, but no universal perspective!
1. That's interesting. But I think there is something universal in all those situations, some invariant, no matter what perspective one is in. And more, it seems that information about that invariant is constantly neglected (at best, indicated). I'll reframe it in a simpler space, where it's indicated explicitly. Let's take Kolmogorov's triplet, (Ω, F, P). We've got the space (i.e.
the basis for a perspective), Ω. We've got the measure, P (i.e. the representation or mapping in that perspective). And we've also got F as an event space, initially naively taking it as 'a set of possible sets of elementary events, what can be simpler'. We even have no trouble working with it in discrete spaces. But shifting to continuous spaces, something pops up. Namely, it becomes clear that in order to measure (i.e. produce a representation, compress, etc.), we must impose some rules on our event space. It suddenly becomes obvious that it has to be homogeneous in some sense to set up an adequate measure. So we naturally (for what?) end up with a σ-algebra as F. What happened? We had to set some linking between the space of elementary events and the measure in order that it would make sense. Based on what did we do it? Based on some tacit 'intuitive' knowledge or logic of the observer that was used to close the space for operations. So coming up with a particular configuration for Ω. Without that knowledge it would not work. That's just an example. But in that example the σ-algebra basically encodes some knowledge of the invariant (i.e. a universal enough configuration for any observer; basically the observer - that is, any observer, ant, god's eye, whatever - converges to the σ-algebra in such a space), the relation between the space and the measure. And if one starts looking, in any system there is always something that contains that (usually) implicitly. Now the question is: can it be considered explicitly, or at least indicated universally enough (at least so that we can be aware of it)? A universal Turing machine? But with an included instruction set and metrics on how to use it. It always converges to some form of bootstrapping...
11. Very interesting and intriguing post, thanks. "These experiments were done only after he died." Do you mean here Aspect's experiment on the Bell inequality?
12.
For us non-science people, sometimes the "cookies" are too high on the shelf (as I/we may not fully understand the topic without more of a backstory on the subject). So maybe one day you could explain further in simpler terms, placing the "cookies" on a lower shelf so that we non-science people might more clearly understand. I try to understand but often the concepts are difficult. 13. Oscillating charges from a distant sun send EM energy to earth across a vacuum where nothing empirical (hypothetical, yes) happens. The energy arrives and interacts with certain arrangements of charges (material objects) in a certain definite way. Is this spooky action at a distance? 1. I think the answer is no, because I assume the energy takes time to travel. I think "spooky action at a distance" is instantaneous action at a distance, with no known mechanism of transmittal. See Andrei's comment above. (Which seemed excellent to me.) (Note however that whatever the transmittal mechanism is, it cannot be used for instantaneous communication. Any attempt to force one entangled particle into some coded condition would wind up providing an initial measurement and then breaking the entanglement.) See "Bell's Theorem" for more information. 2. Hello Morris, no, it's not. It's just normal electromagnetic radiation that travels at the speed of light. Spooky action is what you get in the following experiment: A light source emits pairs of photons simultaneously, one photon to the left (A) and one photon to the right (B). If you measure the polarisation of the photons, they are always perpendicular to each other. So, for example, A is vertical and B is horizontal or vice versa. And of course all angles in between are possible. But between A and B you always have 90°. The experiments were carried out in large numbers and very carefully. Some of the places of measurement were 100 km apart. The result is as follows: 1. At the start of the photons, their polarisation is undetermined.
That is why the sock model fails here. 2. If the polarisation is measured at A, the polarisation at B is as required above, without any loss of time. If one imagines that A and B "talk" to each other during the polarisation measurement, then this must take place at over 1000 times the speed of light. This looks like it would contradict the theory of relativity. And that is why we speak of spooky action at a distance. (My opinion: I like this spooky action because it gives us a chance to explain many other experiments. But a theory about it is difficult to develop. So far there is none.) The other explanation is called superdeterminism. Sabine is the expert for that. Many greetings 3. Thanks Stefan. Maybe I should have said that my comment was tongue in cheek. I realize that instantaneous occurrence is what is required to meet the definition. To my mind what I posted above is as spooky as it gets. Add Rutherford's experiment, and nothing else after that is really any more mysterious, or any better a demonstration that we are inherently unable to understand our world. We can know, of course. But why recycle this stuff which is old and seemingly not useful? 14. Terry Bollinger wrote: Personally, I think there is a deeper question residing in the woods of what Peres is proposing. And it is a question that is based on the fact that Alice could have measured the particle for any number of attributes (such as velocity or position, for example) which can display physically observable results that are completely different from each other. So the question is, what is the true nature of the unmeasured quantum realm if indeed its primal state seems to be like some kind of infinitely malleable "cosmic clay," so to speak, that is transformable into whatever it is we wish to see based on the shape and purpose of a measuring device? 15. Good morning Sabine, I hope you are well and sitting comfortably.
I plan to destroy one of the biggest mainstream beliefs within the next 90 seconds: Please take a sheet of A4/letter-size paper (landscape format) to hand and a black or blue pen. Please draw by hand about 25 to 30 small circles (diameter approx. 5 mm) in random order on the sheet. Now please choose 5 of the circles and colour them in. Now please connect these 5 with each other, with continuous lines (10 lines for 5 circles). Now please connect about 10 pairs of the remaining circles with each other with a dashed line (it is best to connect neighbouring circles). Now please connect each coloured circle with a non-coloured neighbouring circle with a dashed line. Now it gets serious: The 5 coloured circles connected to each other form a (network) cluster; they form an object. The remaining circles represent the environment. This can be a crystal, a laboratory, the universe. I call the circles nodes, the connecting lines edges. (Mathematicians call it a graph, others a complex network.) Within an edge, signals are transmitted instantaneously, without time delay. Since the cluster nodes are connected with edges, they know instantaneously what is happening to each other. They form a point-like object, so to speak. And since the object is connected to the environment at various points, it's also arbitrarily large. In other words, I have a model that is small and big at the same time. You may now think: "Ridiculous". Well, I assure you: "It is ridiculous". But it has the following advantages: - It is a mechanistic model for wave-particle duality. - I have the chance to fly through a double slit with such an object... and to see both slits, as well as to get a point-like effect on the detector behind it. - The edges provide the spooky action at a distance.
This can be used to explain - experiments with entangled objects - experiments with Mach-Zehnder interferometers with an object in one arm - delayed-choice experiments - If we look at an object from the point of view of its surroundings, "smallness" must be specially created. The cluster points could be far away from each other on your paper and form a large object. The cluster points could be near to each other and form a small object. With this, one has the chance that the symmetry of special relativity disappears and changes into the gauge symmetry of QCD, and vice versa. Likewise, all theory calculations that need a cut-off get a natural explanation. There are no arbitrarily small distances, at least not by themselves. - A muon in the atmosphere can see this atmosphere and "react" accordingly. - There is a chance that there are only a few stable arrangements for the clusters. One arrangement could then represent an electron and the "opposite" of that could represent the positron. And a completely different arrangement in the cluster could be a photon. This then gives us the chance to describe pair annihilation, i.e. not only the "before" and "after" but the "how". - The same applies to the proton and anti-proton. If one can describe how these annihilate into gamma photons, one has in hand the unification of QED and QCD. - The Heisenberg uncertainty relation can arise naturally. - And so some questions and problems disappear: - the measurement problem - Schrödinger's cat (and dog?), Wigner's friend - Question: "Where is the electron?" Answer: "Everywhere and nowhere". to be continued 16. But after these neat words, all the questions and unsolved problems emerge: How is an object represented? Is it signals that are exchanged from node to node? Or is it just the nodes and edges? Do nodes and/or edges split? If so, how? How does an object move? What is most important? What is the most important question? In the meantime, I have a network in which simple structures emerge.
If I increase the number of edges in the network above a threshold, (very simple) structures appear. And if I reduce the number of edges, the structures disappear. Near the threshold value, the network is undecided for a while. The duration of the "while" looks like a Poisson distribution. In my eyes, the network is a good model to study emergence. Many greetings, have a nice Sunday, and call me if you like the above idea. PS: Of course you won't call me. Why should you? Because of the neat words up there?! No, I don't think so. But if physics is a room, we are in opposite corners. And now you know what my corner looks like. I think that's fair. 1. So, "all the world's a graph, and all the fields and particles merely nodes and edges"? I would think Stephen Wolfram would be sympathetic to ideas like this. However, just pointing out the obvious: even with the toy model you described, it would already not be trivial to get the electron behaviour right when you vary the distance between the slits of a double-slit experiment. You either need to complicate the substructure with more and more nodes and edges, or you'd have to have variable-length edges. That is, if those structures even correspond to spatial dimensions in a straightforward way. Also, to the best of current knowledge, electrons do not have substructure. This of course means that your nodes and edges do not form part of the "physical" (for lack of a better word) structure of the electron but are rather a modelling tool (just like one view on the wave function). Now, that is not inherently a problem, but it seems you would quickly run into a totally intractable description of just a single particle. Most likely, that would also imply intractability for many-particle systems, which in turn would make the formalism useless in practice. 2. Dear G. Bahle, Thank you very much for your kind comment. I agree with and share all your concerns. My difficulties are even more fundamental.
So far I have not succeeded in creating - many objects of the same size - a three-dimensional space with a metric, or more precisely, a three-dimensional space with relativistic symmetry, i.e. a relativistic aether. I am well away from the double-slit experiment. I am not sure that the path I have pointed out is the right one, but I am sure that modern physics is extremely bad. I am sure that all these endless discussions in quantum mechanics have their reason in the use of the wrong mathematics. Half a year ago I had compared quantum mechanics with the Kobayashi Maru test, the well-known "no-win" situation. One can only pass this test successfully if one changes the rules. And that is exactly what I intend to do. Thank you for your kind comment 3. Dear Stefan, in the vein of new ideas (or rather old ones in this case), I just remembered an article from last year, which you might find interesting. It's on Quanta Magazine and deals with intuitionist mathematics. Good luck being Captain Kirk ;) 4. This comment has been removed by the author. 5. I deleted a question that seemed sensible when I asked it, but probably wasn't. 6. Bahle / Mulder / Sabine Hello G. Bahle, I looked at the article in Quanta Magazine and then at two recent publications by Nicolas Gisin, Nature Physics 16, pages 114-116 (2020), with the following thought: "Is a real number an object, or does it arise as a never-ending process?!" - known as intuitionistic mathematics. ==> I like the question, but he does not turn to physics and leaves the real work to others (in this article). At the beginning: "at any time every number contains finite information". At the end: "storytelling is important, e.g. how the moon drives the tides". I fully agree with both. In between, I again miss the physics. My rough approach to everything is emergence. So far I can create objects that are stable and flexible at the same time. I also need a minimum number of nodes and edges, otherwise nothing emerges.
Now I can look to see how much the phase transition depends on the exact properties of the objects, or vice versa, how resistant it is to them, and so on. ("Results in Physics" is having a hard time finding a reviewer. Meanwhile I uploaded it to ResearchGate.) Gerben Mulder, I still think the white socks are brilliant, because as far as I know the polarisation at the start must be undetermined, otherwise this contradicts the experiments. (Possibly superdeterminism provides a way out. Sabine will have to say/explain that.) Thank you for the blog post, the time and effort you put in. Thank you especially for approving my comments, which run contrary to the mainstream and in one essential point also contradict you. Thank you for your patience and broadmindedness. Bee, hugs to you. Have a nice day everyone 17. Giving physical meaning to mathematical tools used to describe Reality is meaningless; only what is observable has physical meaning. When using a wave function to describe a particle's observable behavior, the wave function is a mathematical tool without any real physical meaning; only what can be inferred as observable from it has physical meaning. The idea that Reality should follow a single set of universal physical principles sufficient to fully describe it is really naive. This Galilean/Newtonian ideal is really outdated. Classical Reality is emergent from the Quantum Reality, and Quantum Mechanics is not "incomplete" just because Classical Reality doesn't follow the same principles, almost in the same way that the Real numbers, "emergent" from the Rational numbers, don't follow the same principles. All notions that we have about Reality are classical notions, not only because we are classical objects but because the basic elements used to describe Reality are classical: identity, measurements, space and time. 1. I think the opposite: 1.
The most meaningful and useful contributions made by theorists (in mathematics) are to develop suitable tools describing physical reality, the experiments, with accuracy, precision, exactitude. 2. Pure mathematical abstractions may have no immediate utility, but find a concrete application in physics a few years later; two important examples: Christoffel's work on the invariance of differential forms is a cornerstone of Einstein's masterwork (relativity); Cartan's spinors are another example, since this mathematical tool was elaborated before the birth/formulation of quantum mechanics. 2. It seems that we are talking about different things. Mathematics is a tool used to describe Reality, but that tool is not Reality. Only what is observable has physical meaning, and many times the same observable results can be explained or predicted using non-isomorphic mathematical arguments. While the mathematical ideas used to describe Reality are always changing/evolving, Reality is always the same and accessible to us only via observable facts. 18. "If you think that really it was decided already which spin went into which direction when they were emitted, that will not create sufficiently strong correlations." More please, or have I missed this? 1. Google "Bell's Theorem". "Boojums All The Way Down" by N. David Mermin gives some examples. 19. Hi Sabine, A little suggestion! Instead of mailing just one blue-and-red pair, a more appropriate analogy would be to have, say, 10000 pairs of blue and red socks, put the two socks of each pair into two separate envelopes, and randomly mail those envelopes to two of your friends with time stamps on the envelopes. When they compare the colours of the socks in envelopes with matching time stamps, that will correspond to entanglement!! 1. Kayshap, Sabine, The additional problem of QM is that if the envelopes are opened facing the same way, one will be blue and one red, but if one is turned before opening, then both will be the same colour.
The logical problem Einstein identified and Bell analysed was that if 'A' reversed her envelope to be the same as B's and found red, she dictates that B finds blue. But if she decided NOT to reverse it - then B must find RED! Many experiments at long range, with a 'switch' made at the last instant, have borne this out (using QM's assumptions). This is the 'faster than c' communication Einstein objected to. But it seems a solution satisfying Einstein IS possible, by changing Bohr's original assumptions about the 2nd 'quantum spin' state of conjugate pairs, substituting Maxwell's 2nd orthogonal momenta distributed on the Poincaré sphere (linear and curl). Comprehensible? 2. No Peter! My understanding is that in the singlet case the two sides have exactly opposite polarizations. So if the two measure at the same time, if one is red, the other one has to be blue. That is why I want the envelopes to be time-stamped. There is no way the two can get the same coloured socks. As far as I know there is no way; Einstein could be right. 3. Kayshap, Your understanding of the data is incomplete. We ALSO have red/red & blue/blue cases where the settings are opposite! Einstein's objection was on logic: if A reversed her setting 1 light year away at the last moment, she dictates or REVERSES B's finding! - so requiring 'action-at-a-distance'. But I agree the PAIR really has opposite polarity. The new 'Discrete Field' model (DFM) solution (today accepted for a Nature journal!) uses Bohr's "detector is PART OF the system", but BOTH the (orthogonal inverse) momenta in OAM (linear AND 'curl'), and simple vector addition on interactions! (but on all 3 axes). Einstein WAS then right about QM, but SR was 'incomplete', which he knew and corrected on the same DFM physical basis in 1952 (Relativity Ed. XV, Appdx. V). He wrote that it is "not yet part of scientific thinking". (It was ignored, so it still isn't!)
But I'm sure Sabine may agree most in academia are still not prepared to admit such an update to the 1905 interpretation? 4. Again no, Peter! The singlet state (opposite polarity) is determined by the source. Observers have no control over it. The rest of the matter is a philosophical issue! 5. True BEFORE the 'measurement' interaction, and of course in current belief, but in fact we know light and electrons are REPOLARISED by interactions! As Bell pointed out, we have no access to the PRE-arrival state, only the POST-INTERACTION state! Bohr and von Neumann well knew and stated that detectors modulate findings. That provides the key to the new solution (just passed stage 2 peer review). But it is very new, so unfamiliar. As Lorentz, Feynman etc. & my mentor Dyson said: "We can't advance without some hypothesis that at first looks wrong". 6. Sorry Peter! I see that we have been missing each other's POV! To me (and I believe Sabine) the main point is that the spins (colours of the socks) are correlated since their birth. But your point was that in the actual experiment they are measured along different random directions. But changing the direction of measurement has nothing to do with entanglement. For a single electron or photon it is a special quantum, non-classical property that when it is quantized along one direction and you then measure along another direction, it will be in a superposition of the two states. So in a way you are right that in the actual experiment there are correlations ++, --, +- and -+. Then you can have red/red and blue/blue. But this has nothing to do with entanglement. That is because the two observers choose different axes. I am not sure if Einstein was worried about measurement along different axes. I think he was worried about a + measurement by one observer forcing - on the other in the singlet case along the same axis. So admittedly the socks analogy stops being perfect in the final analysis of the way experiments are done! Maybe that is why Sabine said it won't!!
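[Editor's note: The exchange above turns on the claim that predetermined, "sock-like" spins cannot create sufficiently strong correlations when the two sides measure along different axes. That claim can be checked numerically with a short Monte Carlo sketch. This is purely illustrative and assumes things not in the thread: a specific deterministic hidden-variable rule (Bell's sign-of-cosine example), the standard CHSH angle choices, and photon pairs prepared with parallel polarisations (the perpendicular pairs discussed above just flip the sign of the correlation, E(a,b) = cos 2(a-b) becomes -cos 2(a-b)). The local "sock" model keeps the CHSH combination S at 2 or below, while quantum mechanics reaches 2√2.]

```python
import math
import random

def local_model_correlation(a, b, n=100_000, seed=1):
    """Toy local hidden-variable ('sock') model for photon pairs.

    Each pair carries one shared hidden polarisation angle lam,
    fixed at the source - the colours are decided at emission.
    Each detector answers +1/-1 deterministically from its own
    setting and lam (Bell's sign-of-cosine example rule).
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, math.pi)  # shared hidden variable
        out_a = 1 if math.cos(2 * (a - lam)) >= 0 else -1
        out_b = 1 if math.cos(2 * (b - lam)) >= 0 else -1
        total += out_a * out_b
    return total / n

def quantum_correlation(a, b):
    """Quantum prediction for parallel-polarised photon pairs."""
    return math.cos(2 * (a - b))

def chsh(corr):
    """CHSH combination S for the standard optimal photon angles."""
    a1, a2 = 0.0, math.pi / 4              # Alice: 0 deg, 45 deg
    b1, b2 = math.pi / 8, 3 * math.pi / 8  # Bob: 22.5 deg, 67.5 deg
    return corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2)

s_local = chsh(local_model_correlation)
s_quantum = chsh(quantum_correlation)
print(f"local hidden variables: S = {s_local:.3f}  (bounded by 2)")
print(f"quantum mechanics:      S = {s_quantum:.3f}  (= 2*sqrt(2) ~ 2.828)")
```

Any choice of predetermined answers gives |S| ≤ 2 (Bell's theorem); this particular local rule lands right at the bound, and no local rule can do better, which is the quantitative sense in which the sock model fails.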
7. Not quite. I agree with Sabine, of course, on both initial inverse states AND that those are inadequate to explain the data. I find a key is to use BOTH the (inverse) momenta in OAM: linear, and also (Maxwell's orthogonal) 'CURL', opposite at each pole, zero at the equator. That then implies Dr B's socks were REVERSIBLE! (or each with the other colour as lining). That's where the 'vector addition' at measurement interactions enters the equation. 'Quantum logic' applies. Check out the (top peer-scored 2015) "Red/Green sock trick" essay. Make any sense yet? 20. Hi Sabine If I set a sheet of paper on an incline and let it slide down and collide with some obstacle, it does not surprise me that, on some microscopic level, every atom in that sheet of paper 'knows' that something has happened to the leading edge of the paper. Drawing on this analogy, it should not seem remarkable that the wave function of the electron 'knows' that something (collapse) has occurred somewhere else within it. But, when Einstein offers his electron thought experiment he is actually being very careful NOT to make the same sloppy assumption that I have just made, because he had devised no foolproof experiment that would corroborate that statement. He was just being a very careful scientist. 1. Hi Brad, just pointing out the obvious. It's that every atom in the sheet of paper will only eventually (<- key word) know that something happened to the leading edge. The key point, and the reason why this analogy has nothing to do with the problem here, is that the "knowledge" of the leading edge stopping will only arrive at the trailing edge after a delay, with a maximum speed of c. So, if you had a sheet that's one light-second long, it would take one second or more for the end of the sheet to know the front stopped. 2. Hi, G. Bahle I apologize for being terse; science writing is not the same as storytelling. I know that information travels at a finite speed.
But, in his 1927 correspondence Einstein didn't want to acknowledge that the trailing edge of the wave appeared to know instantly that collapse had occurred, without some corroborating evidence. So he alluded to that strangeness. 3. Hi Brad, it seems I misunderstood your argument. What I wanted to highlight was that Einstein was not worried that the wave function collapses, but that it does so instantaneously, which raises questions in regard to special relativity. Basically, spatially separated events that happen at the same time in a given reference frame are time-separated in others. Or: simultaneity is not frame-invariant (i.e. there is no global clock). Anyway, it's an interesting area of research. You'll also find many posts / videos on this blog that deal with this in one way or another. 21. Back in the 90s I read a beautiful romantic tale of entanglement where one of the two correlated particles could be made to move in a certain way and the other, far away, moved in sympathy; this does not seem to be the case? 22. Is it possible to put it this way: entanglement is the condition that enables "spooky action at a distance"? In other words, the socks in the envelopes may be white as long as the envelopes are closed, but the act of opening one of the envelopes paints the sock in it red or blue with 50% chance. And the other sock becomes, regardless of its distance, instantaneously blue or red. So you can transfer a one-bit number instantaneously to a guy on Mars. Unfortunately the number is random and therefore it does not contain information. 1. Hello Gerben Mulder, yes. I would say it exactly like this. "White socks" was a really good idea. Thanks for that. 23. Sabine wrote: And in my reading, the term "spooky action at a distance" referred to the measurement update, not to entanglement.
I think the reason people say Einstein's comment about spookiness refers to quantum entanglement is because quantum entanglement operationally involves the kind of measurement updates that he described in 1927, even though his original description involved the wave function of just a single particle. In general he was talking about the fact that a measurement in one location affects the wave function in another location. That's the generic peculiar or spooky "action" he has in mind, and in 1927 he described this in terms of a single particle's wave function, whereas in the later EPR scenario the point is made more vividly by considering the wave function of two particles. In both cases he was referring to what could be called entanglement of the distant parts of the wave function. It just so happens that he first considered the entanglement between the distant parts of the wave function of a single particle, and then later considered the entanglement between the distant parts of the wave function of two particles. In both cases I think the "spooky action" refers to the measurement update. (Answer: The correlations may or may not be spooky, but they are not technically an "action", e.g., no conveyance of energy or information.) It may be a matter of semantics whether we apply the word "entanglement" to the parts of a wave function of a single particle, or restrict the use of that term to the parts of a wave function of multiple particles. 1. Amos, I suspect that this is correct, but if you think about it for a moment this makes absolutely no formal sense. Entanglement is a property of the wave-function, it's independent of the update. And entanglement itself is locally created. There is nothing problematic about two particles propagating apart and creating a non-local correlation (as the example with the socks illustrates). 2. Yes but socks do not violate Bell's inequality. 
Einstein did not know about Bell's inequality at the time, but he did understand that QM created a problem with locality. Also, Einstein clearly applied his logic to entanglement with his EPR argument. 3. ppnl, "Yes but socks do not violate Bell's inequality." Who said they did? As Sabine made clear, they can't. They CAN however give some 'relationship', i.e. antiparallel, adequate for the 'entanglement' within the data and any solution. I find a 'black box' mechanism then CAN give Bell's inequalities, if not with Bohr's assumptions. Bell himself insisted one would be found (along with Einstein & many others, including Weinberg, as per his quote in Sabine's book). I think Einstein's 1927 comment was referring to the entanglement of the wavefunction of the electron at various locations on the screen. As you noted, he said that the probabilistic interpretation of the wave function requires a "peculiar mechanism" for preventing an action from occurring at more than one location. In crude terms, if we imagine the wave function as some kind of probability-inducing pixie dust that is sprayed over the entire screen (as needed to account for 2-slit interference, etc.), there needs to be some way of ensuring that this "dust" never results in a particle at more than one location. So the pixie dust (wave function) must be entangled in this sense. I think that was Einstein's point. Feynman famously argued that the 2-slit experiment (a single particle hitting a screen, as in Einstein 1927) is sufficient to exhibit the only mystery of quantum mechanics. His point was that this already shows the entanglement of the wave function, which seems to have been what Einstein had in mind in 1927.
Terminology in the literature varies, but classically we can have two locally correlated things that then become spacelike-separated by normal subluminal transport, whereas I would reserve the phrase "non-local correlation" for a correlation that arises at spacelike-separated events, involving the (assumed independent) selection of measurements performed at those separate locations (e.g., based on information from outside the past light cone of the other measurement). This kind of procedure highlights the entanglement of the wavefunction (of two particles) in a palpable way, but, as Feynman said, it's still the same entanglement of the wave function that is already exhibited in a more primitive way by the single-particle wave function on the screen, and this is what Einstein referred to in 1927. Naturally we set aside superdeterminism in a discussion like this, since that would put everything in a completely different context. I don't think Einstein had superdeterminism in mind when he made his spookiness comments, so that would be a separate topic. 1. Amos, 'one-particle wave function' is just a metaphor for a probability distribution function in many homogeneous experiments with one particle. 'Interference of one particle' is therefore also a metaphor, denoting the probability distribution of detecting one particle in multiple experiments with one particle. No probabilistic function can guarantee the conservation of the number of particles without violating relativistic locality. But the interpretation of quantum mechanics in which the wave function is secondary, and reflects the behavior of real particles in an ensemble of experiments, perfectly guarantees this. Einstein was right. However, he was unable to defend his understanding of quantum theory, perhaps because he did not fully formulate the ensemble interpretation, at whose origins he stood. And one more thing: no special 'superdeterminism' is needed here.
Einstein's usual classical determinism is enough, when all events of both the past and the future, including the results of measurements, are laid out in Minkowski space-time. Thus, the whole world already exists, and it is what it is, and it cannot be different. There is no 'true probability', there is only our ignorance of the future. Motion is illusory; in fact, all events are already built into the 4-dimensional continuum, and only our consciousness moves at a certain speed, which leads to the visible speed limit. No, it isn't. Factually wrong. 25. Currently, the most convincing experiments in the field of "psi" are the "Feeling the Future" experiments of Daryl Bem. 26. I read over my earlier "just an aside" comment about the Many-Worlds Interpretation (MWI or many-worlds) and realized that I covered one critical point a bit too glibly in my haste. Most folks are accustomed to thinking of MWI in terms of "splitting universes" whenever a quantum decision occurs, but that was not the logic by which Everett originally derived his idea. Everett's key idea was much simpler, and its simplicity was part of its appeal: The Schrödinger equation is everything. That is, in Everett's framework, the key to understanding quantum mechanics is to keep elaborating the wave equation and never worry about some annoying and mysterious concept of "collapse" in a wave function. Instead, you permit the wave function to become more complex and detailed as it expands and interacts with other Schrödinger wave functions. Within this increasingly chaotic and detailed wave function, all of those split universes emerge as "signals" encoded as subsets of the increasingly complex overall wave function of the universe (or, in MWI, multiverse). This view can be profoundly appealing for folks who like waves: No collapse, and instead just waves forever, expanding according to beautifully smooth differential equations.
From a wave perspective, this expansion even feels more straightforward rather than bafflingly complex. There is, of course, more to it than that. For one thing, there must be some mechanism by which those "subsets" maintain a coherent view of just one universe for observers within them, rather than seeing the entirety at once. I rather suspect this is at least partially why space-as-entanglement ideas sometimes pop up within the MWI perspective, since the entanglement becomes the method by which the overall and extremely chaotic universal wave function sorts itself out into an expanding and quickly near-infinite number of universe-like subsets. It's all for naught because it disregards the subtle underlying driver of all quantum wave "collapse," a driver that Einstein noticed way back in 1927: the absolute and unforgiving conservation of mass-energy. Einstein did not disagree with Born's probabilistic interpretation of the quantum wave intensity, since even at that time experimentation had demonstrated the Born leap-of-faith to be unnervingly effective at universally describing actual results of quantum experiments. Einstein got his Nobel Prize for recognizing that light comes in quanta - the same quantization that prevents electromagnetic waves from becoming infinitely detailed and thus infinitely energetic: the ultraviolet catastrophe. By elaborating the very similar electron case to an exceptionally tough audience back in 1927, Einstein pointed out that quantum mechanics similarly did not permit the wave functions of electrons to become infinitely detailed and thus infinitely energetic, which would have allowed an electron to show up at multiple locations on the screen, violating conservation of mass-energy. The case of electron waves acquiring infinite detail and mass-energy had no name at the time, but by straightforward analogy would have been called the "infinite mass" catastrophe: the unlimited replication of electrons in violation of mass-energy conservation.
The name we would use for it now is many-worlds. For just that reason, I sincerely suspect that had Einstein lived long enough to encounter MWI, his dismissal of it would have been instantaneous and likely a bit on the derisive side. (Notice how hard I'm trying not to say "he would have broken out howling in laughter.") Einstein knew there was a locality problem with quantum mechanics, but as seen in his dismissal of de Broglie's pilot waves and Bohm's later reincarnation of the same idea in more detail, he never, ever took the cheap paths out of that conundrum. Casually allowing an unquantized universal Schrödinger equation to violate the mass-energy conservation rule he created with his most famous equation would likely have struck him as a cheap path indeed.

1. I agree that to understand Einstein, one should not neglect "a driver that Einstein noticed way back in 1927: the absolute and unforgiving conservation of mass-energy". Bob Doyle (The Information Philosopher) in "My God, He Plays Dice! How Albert Einstein Invented Most Of Quantum Mechanics" defends Einstein. (He also includes analysis and translations of papers by Einstein from 1931, 1933, 1936, and 1948 as part of that defense.) Doyle highlights in various places the importance of conserved quantities for Einstein, and ironically calls them "hidden constants": 'There may be no hidden variables, local or nonlocal. But as we saw in the previous chapter, there are "hidden constants." Hidden in plain sight, they are the "constants of the motion," conserved quantities like energy, momentum, angular momentum, and spin, both electron and photon.'

27. I think you are right that Einstein meant way more than entanglement by "spooky action at a distance". I think a real concern is the speed of gravity with superposition and wave collapse. If you have an electron in superposition, where is the centre of gravity? And then upon detection you now know 100% where the centre of gravity is.
Was that shift instantaneous and faster than the speed of light? That would break causality. Now some would say the centre of gravity is not information, but certainly where the gravitational force creating that centre of gravity is acting on the particle must be. Now we say the gravitational force of that electron is small, so who cares, but it must be there. Or alternatively, is it NOT there, and if so, why? (And could this problem fit in with electrons in atoms: why isn't gravity seemingly acting on them? If gravity is there, and there is aberration, shouldn't they be wobbling and then flying out?) That is on par, I think, with the entanglement concern: when you decide the spin of one particle by observation, how did it communicate with the spin of the other almost instantaneously? I wish Einstein were here so I could ask him where, and what, the gravitational curvature of space-time produced by a particle in superposition would be.

1. If, as I like to think, everything is discrete as Zeno's argument suggested, then certain effects (perhaps including the effect of an electron's mass on its nuclear orbit) don't reach the minimum increment and are therefore zero. Sort of the negation of the ultraviolet catastrophe on a small scale. (The negation of the wobble catastrophe?) I understand experiments have not detected any discrete effect in the Lorentz transformation with high confidence, which puts a very low limit on discreteness there, but it seems to me a discrete universe would in principle work as well (and in fact better, as in the ultraviolet catastrophe) than a continuous one. (I can't rule out some sort of mixture, though.)
Meanwhile, the issue of when gravitational force could cause collapse has been discussed at this site previously and recently, my own opinion being that gravity would not collapse events in which its effect is not relevant, such as the spin of an entangled particle (in most cases), whether a cat is dead or just sleeping, and whether a C60 or larger molecule is diffracted by a vertical grating in a vertical gravity field. (The last case is in accordance with experimental results and perhaps lends some credence to my discreteness/minimum-effect hypothesis also.)

2. JimV, Yes, I think there is, as Sabine put it, a phase change; perhaps a minimum increment of energy density (absolute, i.e. including the zero-point energy). Above it and you are in real position with a centre of gravity on spacetime. Below it and you are in superposition with no centre of gravity on spacetime. Like a rock that sinks in water while paper gets to float. Observation causing collapse, in my mind, is then you putting in the energy to cross that density threshold. I like it because now the very hot and dense gets ionized and goes into superposition while the very small and cold go into superposition. Virtual particles have no gravity. Electrons through the double-slit experiment would have no gravity? But what about an electron in superposition around a nucleus? In trying to work out some math I went to electrons which are in superposition around the nucleus. I need some help though, because I am all over the place now. I went to the Earth-Sun system as the analog: how does it deal with aberration (gravity retardation)? Because the Earth doesn't fly away even though aberration should cause it to (you'd think, due to causality, the Earth would be rotating around a wobble based on the sun's position 8 minutes 20 seconds ago). Well, maybe what is preventing that from happening is an analog which shows gravity isn't applying to the electron. That, uh, did not work out.
Instead, thanks to a poster on here, I read a Feynman lecture that shows that an electron in a field also does not appear to have aberration, because it has been offset by the past constant velocity (with no impact of acceleration) to a pseudo projection point. I thought that was crazy, because if you are at a velocity and accelerate, that means your centre of gravity will end up projected "incorrectly" as if you were at a constant velocity, and vice versa. Well, wait a minute.. I do feel a gravitational force back in inertia. (Sidebar: if this is true, then the centre of gravity for a galaxy will be projected based on the retarded velocity and not the real velocity, and thus, uh, not where you think it should be?) S. Carlip, who argued (1999) that this is what is happening in the Sun-Earth system, notes that this cancellation is due to gravitational radiation, as I think happens with an electron at velocity with electromagnetic radiation in Feynman's lecture. So now I am thinking: well, aberration should send you flying off to space, but giving off radiation should have you crash into the centre, so maybe energy pulling you in = energy pulling you out, and the speed of light is a result of it being the value you need for aberration to cancel out gravitation (making this virtual particle stable and everything else gone?). Right now, what science believes is keeping the electron from "falling in" is, I think, the Heisenberg uncertainty principle, in that you can have a lot of momentum or a lot of location but not both. Is there some equivalence there? So now I am learning Feynman and "Lorentz Transformations of the Fields", but taking a step back.. none of this would prove no gravity, so... : ( . I'm hoping Sabine does some video that can somehow help connect this all.

3. It seems to me that the concept of superposition of spacetime is not generally covariant. That is, it would probably not make sense to Einstein.
Penrose's theory of gravitationally induced collapse is based upon the idea that superpositions of spacetimes violate general covariance. He tries to make sense of it by trying to specify a way that the violation is still consistent with quantum mechanics. I think it fails, essentially because when speaking of general covariance, there is no intrinsic scale.

4. A mistake I made in my above post: my thought is that the speed of light is such that it ensures the force making an electron collapse by loss of energy via electromagnetic radiation is equal to the force of the wobble brought on by electromagnetic aberration trying to throw the charge out... just as, for the Earth-Sun system, the force of the wobble brought on by gravitational aberration appears to be countered by the velocity-dependent offset. I just can't see that being a coincidence; maybe everything else ends up winking out as a virtual particle.

28. Craig, those are good and intriguing questions.

29. I think Einstein meant that correlations in spacetime need a proper mechanism, one that explains them at a deeper level than statistics.

1. Eusa, That's exactly it! And we are never going to find such a mechanism, because we cannot even conceive a "definite and familiar domain of objects" (Gödel's proof) that produces random numbers, let alone all the subtleties of quantum mechanics.

30. Why do we systematically separate (that is, consider as two distinct notions) particle and topology/metric? What if a particle of a given type is a type of topology/geometry? Was this suggestion not the underlying idea implicitly contained in the 1935 ER article? If the geometry is the carpet, why do we represent a particle as a ball rolling on it instead of as a moving deformation of the carpet itself? In that sense, a particle (matter/wave) is not telling the carpet how to deform; it is the deformation and simultaneously a part of the carpet. In that vein too, does it make sense to ask where a particle is?
Sorry for that short speculation; I'm not sure it can help concretely in understanding what a superposition is, except perhaps if one considers that we should think in terms of superposed or interfering geometries.

1. I like that view! Yes. I am still getting to grips with the implications of my latest (amateur) paper from Jan 2021 on antiparticles and the nature of space, but what you have written above fits. My paper brings together two ideas into one whole. (1) Farnes has negative mass causing DM and DE. (2) Chappell has time being dependent on an overall background twist in space in geometric algebra. Geometric algebra can set an overall background of space with a positive twist or a negative twist (either ε = +1 or ε = -1). The universe has, say, ε = +1, which sets the universe's time direction and the direction of the thermodynamic arrow of time. If one treats an elementary particle as a universe, it has its own, independent value of ε. So it can be travelling with or against the universal arrow of time. So a particle can carry its own spacetime with it, very much like carrying its own piece of carpet with it. In the usual analogy (which I normally dislike) a particle/antiparticle can ride on a convex piece or on a concave piece of carpet as appropriate to it. Meeting the Bell correlation is trivial to calculate using backwards-in-time positrons paired with forwards-in-time electrons. Although IMO I have shown very simply that this works, it still remains for me to show why QM also meets the Bell correlation. I suspect that if GA is an alternative way of representing space equivalently to QM, then the possibility of + or - time directions in GA must be somehow embedded in the QM calculations using projection operators and the Pauli matrices. (I have of course followed Susskind's QM calculations in a shut-up-and-calculate fashion.) Just a few points: in QM, if two measurements are instantaneous, which measurement dominates? A or B?
Or could the results fibrillate (joke)? In my model, the positron measurement dominates, as the electron has the polarisation of the partner positron imposed on it by 'entanglement'. Austin Fearnley

2. If two measurements are close enough in time and far enough apart in space, then simple special relativity tells us that which measurement happened first is observer-dependent. Since no physical information or object traveled between A and B, it cannot matter anyway. All you have is a correlation that you cannot even see until information is sent at less-than-light speed from one to the other.

3. Paps57, so in the case of the particle that crosses the grid: it travels with an accompanying wave, and before the particle has crossed the grid, its accompanying wave has already passed through the two openings and interfered with itself. It does not matter through which opening the particle passes; the interference of the two components of the wave decides where the particle goes. With a single opening the solution is different. Could it be so?

31. I love that quote and find it ironic that current evidence suggests it's also appropriate to substitute general relativity for quantum mechanics; yet that thinking is too disruptive for many to consider seriously, even if strong empirical evidence suggests otherwise. We continue putting too much effort into trying to squeeze round balls into the square framework we've adopted. A serious effort should be made to restate the foundations of these theories with assumptions that agree with this part of an Einstein quote you used: "…physics should represent reality in space and time…". The foundations are too obvious, too simple, and too compatible with testable observations to continue being ignored in favor of squeezing adaptations of our current theories into the mathematical framework we're stuck in.
Unless we start over from the bottom up, I believe we'll never have compatibility or definitively answer the nagging inconsistencies we find in our observations.

1. "Unless we start over from the bottom up, I believe we'll never have compatibility or definitively answer the nagging inconsistencies we find in our observations." What nagging inconsistencies in observations do you have in mind?

2. You asked Louis: "What nagging inconsistencies in observations do you have in mind?" Are we really so used to the 'weirdness' of QM that we forget it's a logical inconsistency? Or an (EPR) paradox? John Bell certainly thought "the founding fathers were in fact wrong" ..somewhere (p.171). I certainly agree with Louis. Do you not?

32. Didn't Schrödinger come up with the idea of entanglement to explain the EPR paper? And the EPR paper embodied Einstein's spooky action at a distance. So while entanglement and spooky action at a distance may not be exactly the same thing, they are very much related.

1. Not sure what you mean. The EPR paper uses an entangled state. It isn't *called* an entangled state in the paper. I can't remember off the top of my head who came up with this term. (Other than that the German term is "Verschränkung".)

2. Peter, Einstein didn't see non-locality (spooky action) as a consequence of entanglement per se, but as a consequence of entanglement + the assumption that QM is complete. This quote is, I think, relevant here: the conclusion of the EPR paper is that QM is incomplete. The argument in the EPR paper is based on the existence of entanglement, which enables one to predict the outcome of a distant measurement by performing another measurement locally. The possibility of making such a prediction proves (in the absence of non-locality) that the distant measurement must be predetermined, hence QM must be incomplete, since it cannot account for that predetermination.

3. Actually, I think it was the other way around.
Schrödinger developed the idea of entanglement first, and Einstein tried to explain it as hidden classical states that would complete quantum mechanics.

4. The word "Verschränkung" first appeared in a letter Schrödinger wrote to Einstein in response to the EPR paper. Its translation, "entanglement", is also due to Schrödinger. The EPR paper was not influenced by Schrödinger's development of his idea. Schrödinger's decision to publish his idea, on the other hand, was strongly influenced by the EPR paper (and Bohr's disappointing reaction). Schrödinger concludes his famous cat paper as follows: "Perhaps, the simple procedure of the nonrelativistic theory is only a convenient calculational trick, but it has at the moment attained a tremendous influence over our basic view of Nature." The abstract of another paper on entanglement by Schrödinger from 1936 ends: "It is suggested that these conclusions, unavoidable within the present theory but repugnant to some physicists including the author, are caused by applying non-relativistic quantum mechanics beyond its legitimate range. An alternative possibility is indicated." I interpret this as an active attack on Bohr's obscurantism, which didn't even acknowledge that a unified theory of QM and (special) relativity was not yet available (in 1936). Later papers by Schrödinger get even more explicit in their attacks on Bohr's obscurantism.

33. @Amos points out: "In general [Einstein] was talking about the fact that a measurement in one location affects the wave function in another location." I think this is the core of the problem. It might be, however, that no change (of state) following a measurement is necessary. Encoded in the state are all correlations that come out in measurements. If you measure something in one place (say, the existence of an electron), you measure something related in another place (say, the non-existence of an electron).
These correlations are in the state, and no change of state after a measurement is necessary to make it happen this way. In particular, no collapse of the wave function. There are, of course, equivalence classes of states that offer identical measurement outcomes. For example, a singlet state of two spins where one particle has been measured in the z-direction (result: pointing down), and another state where a single spin has been prepared in the z-direction pointing up. In both cases, a measurement on the (remaining) spin yields a result conforming with a measurement on a spin pointing up in the z-direction. Having, in the first state, measured the first spin does not imply, however, a change of the member state of the equivalence class. Such a change would indeed have space-like consequences (for the most likely non-physical wave function), but the change of state is not necessary to explain the measurement results. So maybe it doesn't happen at all. In fact, no collapse of a wave function has been necessary since the Big Bang. It is merely a matter of convenience to describe states with equivalence-class members that do not require explicitly taking into account all measurements since the beginning of time in order to evaluate (via correlations) the consequences for the remaining measurements.

1. Ok, but this does not address violations of Bell's inequality. It is this violation that makes it look like a measurement here changed a state there.

34. If it wasn't for coherence and superposition, life would not exist. Quantum biology has shown that light energy is transferred between process centers in green plants via superposition of the exciton created by an incoming photon. The exciton exists in a state of superposition, and that charge separation will take all possible paths to the process center, where the photon energy is converted to sugar and oxygen. This transfer of energy is nearly 100% efficient and happens almost instantaneously.
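The singlet example above can be made concrete with a small calculation. The following is a minimal pure-Python sketch (an illustrative example, not taken from any comment): it computes the reduced density matrix of one spin of a singlet and shows that the local description is the maximally mixed state, the same matrix one gets by averaging over the two possible outcomes of a distant measurement. This is the no-signalling sense in which no "collapse" is needed to account for local statistics.

```python
import math

# Singlet state (|01> - |10>)/sqrt(2), amplitudes listed in the basis
# |00>, |01>, |10>, |11> (first index = qubit 1, second = qubit 2).
s = 1.0 / math.sqrt(2.0)
psi = [0.0, s, -s, 0.0]

def reduced_density_qubit1(psi):
    """Partial trace over qubit 2: the 2x2 density matrix describing
    everything locally measurable on qubit 1 alone."""
    rho = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            for k in range(2):  # sum over qubit-2 basis states
                rho[i][j] += psi[2 * i + k] * psi[2 * j + k]
    return rho

rho = reduced_density_qubit1(psi)
# rho is (numerically) [[0.5, 0], [0, 0.5]]: the maximally mixed state.

# If the distant qubit is measured in the z-direction, qubit 1 is left
# "up" or "down" with probability 1/2 each; averaging those two
# possibilities yields the same local matrix, so local statistics alone
# cannot reveal whether the distant measurement has happened.
rho_after_remote_measurement = [[0.5, 0.0], [0.0, 0.5]]
```

Only the correlations, visible once the two measurement records are brought together, distinguish the scenarios, which is the point the comment makes about equivalence classes of states.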
The transfer of energy in living things is something that spooky action does very well.

35. When Sabine says "If you think that really it was decided already which spin went into which direction when they were emitted, that will not create sufficiently strong correlations. It's just incompatible with observations.", what makes it incompatible with observations? Can someone point me to an explicit example of the incompatibility? Watching Alain Aspect's video at shows what must be the QM probability curve vs the classical-mechanics one at 39m15s, but then at 54m34s he shows a plot of experimental results where the violations occur. This video seems to be the most detailed (layperson) explanation that I have come across, or maybe it is too simplistic. Is the probability curve just translated into a "violation" curve by the EPR experiments? Is the big deal about QM entanglement/measurement just that QM does not match CM and the difference maxes out at 22.5 degrees? And if that is the case, is it a big deal that polarizers have this mode of operation? It is just a fact of life, and I see no problem with that. The correlation is made at pair creation and read with polarizers that work the way they do. Or is it that this polarizer operation 'cannot be' or 'is not' explained by QM and we need a better explanation of reality?

1. Peter, I've found your last line correct. But did the Aspect video reveal he had to discard over 95% of his data to make it fit the QM prediction? That's not in his short paper but is in his thesis. Others such as Weihs/Zeilinger, with different kit, then did the same, still not able to find the source of the 'rotational anisotropies'. Some may suggest 'confirmation bias'! Is that fair?

2. Peter Becher, The point is that classical mechanics cannot produce the quantum probabilities without using faster-than-light information transfer. See this video: Peter Jackson, The last line you reference is wrong. QM predicts perfectly the polarizer operation.
Classical mechanics will not allow it at all. It is true that Aspect had discarded a lot of data, because photon detectors were pretty inefficient at the time. But first, that is no longer true. And second, no, it is not confirmation bias. Not fair at all.

3. ppnl, "classical mechanics cannot produce the quantum probabilities without using faster than light information transfer" If by "classical mechanics" you mean Newtonian mechanics with contact forces only (billiard balls), I agree. If you include field theories (classical electromagnetism, general relativity), your claim is unsupported by evidence. In fact, any complete (fundamental) theory that does not include hidden variables must be non-local in order to account for EPR/Bell correlations.

4. Peter Jackson, Aspect was mostly talking about his 1982 experiment, which had better equipment than his earlier one. He did not mention rejecting that much data, but did mention something about the noise levels of detectors. That video is one I watched a while back when looking into EPR experiments. It is great, along with its companion video from 3blue1brown. But both videos go on about how strange/unknown the effect is at 22.5 degrees. So is it just that the math of QM is good at describing the effect at various polarizer angles but cannot explain why a given photon with vertical polarization has an 85% chance of going through a polarizer set at 22.5 degrees? Put another way, QM cannot explain the physical interaction in the polarizer that results in a sine-wave transmission curve. Thanks for your statement "The point is that classical mechanics cannot produce the quantum probabilities without using faster than light information transfer". I just didn't see it having anything to do with faster-than-light effects, but when you put it that way (that in order for CM to get the QM results the polarizers would have to be switching to get the 85% rate), now that makes sense.
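The numbers in this exchange are easy to check. Here is a minimal Python sketch (an illustration, not from any commenter; it assumes the textbook photon-polarization correlation E = cos 2(a-b) and, for the classical side, the standard deterministic hidden-polarization model whose correlation falls off linearly with analyzer angle difference):

```python
import math

def qm_E(a_deg, b_deg):
    """Quantum correlation for polarization-entangled photons."""
    return math.cos(2.0 * math.radians(a_deg - b_deg))

def lhv_E(a_deg, b_deg):
    """Deterministic local hidden-variable model: each photon carries a
    fixed polarization angle and a sign rule; the correlation drops
    linearly with the analyzer angle difference (valid up to 90 deg)."""
    return 1.0 - abs(a_deg - b_deg) / 45.0

def chsh(E):
    """CHSH combination at the standard angles a=0, a'=45, b=22.5, b'=67.5."""
    return E(0, 22.5) - E(0, 67.5) + E(45, 22.5) + E(45, 67.5)

# A vertically polarized photon at a polarizer set to 22.5 deg (Malus's law):
p_pass = math.cos(math.radians(22.5)) ** 2  # about 0.854, the "85%"

S_qm = chsh(qm_E)    # 2*sqrt(2), about 2.83: violates the classical bound
S_lhv = chsh(lhv_E)  # exactly 2: the most this local model can reach
```

The gap between 2 and 2√2 is what "predetermined at emission will not create sufficiently strong correlations" means in practice: at a 22.5° difference, the linear model predicts a correlation of 0.5 where quantum mechanics (and experiment) give about 0.71.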
Again, the only mystery then is that CM and QM 'cannot explain the physical interaction in the polarizer that results in a sine wave curve'. Would that be a correct statement?

5. Peter Becher, I agree it's common belief that "the only mystery is that CM and QM 'cannot explain the physical interaction in the polarizer that results in a sine wave curve'". But I suggest the solution to that also explains the majority of the data that Aspect etc. discarded (Weihs/Zeilinger etc. followed suit). That "rotational anisotropy" was indeed dismissed, as you suggest, but the solution predicts the FULL data set. The impossibility of a classical solution only results when using Bohr's starting assumptions. Bell agreed these were wrong and that a classical solution exists (p.171). Shocking? It will be for most! The paper is imminent. For now, consider the DFM inverse Cos^2 distributions of the 2 (Maxwell) orthogonal (OAM) momenta of the Poincaré sphere.

36. Good morning Sabine, I really find this topic the most interesting, because the experimental setup is simple and the costs are low. The results have been confirmed by different groups in many parts of the world, and there is still no satisfactory solution, which makes it so fascinating. I prefer "spooky remote action". It looks the simplest to me. If one takes "spooky remote action" as the basic assumption for our world, one loses all the beautiful and helpful mathematics (differential calculus). I am aware of this. You favour superdeterminism. I'd like to know what that is and how it works. Maybe you have time for a blog post when you get a chance. After all, it is your "baby". Many greetings and have a nice day

1. Hi Stefan, Dr. Hossenfelder wrote a 'guide for the perplexed': I'm still somewhat perplexed but did find the paper helpful. I would love to see a video in Sabine style on the subject. Have a good one.

2. I probably don't understand this comment, but it seems to be asking for more information from Dr.
Hossenfelder on superdeterminism. I happen to know that Dr. H has written at least one blog post on the subject, and some papers which are available on arXiv. Dr. Tim Palmer has worked with her on this subject and also has some papers on arXiv. Google ("Sabine Hossenfelder superdeterminism") quickly gives me links to these sources, as well as reviews by others. So I wonder if Stefan Freundt has read that material and wants more, or was unaware of it. (What I took from those readings is that superdeterminism, or the lack of statistical independence between detector settings and measurements, at first glance sounds like a conspiracy theory, but actually encompasses a much larger range of possibilities. I see a vague analogy between this and how some people look at the complexity of DNA life in this universe and conclude our specific physical laws are the only way to produce life, and therefore must be fine-tuned. The actual range of possibilities could be much larger, in my opinion.)

3. Hi JimV, After my 'tizzy' about Superdeterminism a few months ago I made a concerted effort to understand the subject as well as I could. I found Superdeterminism to be both reducible to a basic premise and rather hard to accept in its entirety. My take-away is that 'whatever will be, will be', so if one is making a major decision, stressing about it won't change the ultimate outcome, so one may as well remain as calm as possible. (So I'm enrolling in a Bachelor of Music, since I already did anyway, if I 'choose' to then do so.) All other explanations, I'll quote the experts. The question of Superdeterminism vs. the possibilities of life in the Universe is yet another intriguing layer to this.

4. Hello C. Thompson, I have looked at the article. When reading an article, some questions are helpful: Why was the article written? What gap in knowledge is it trying to fill? Is there a derivation? What is the start of the derivation? What are the assumptions? Have I understood this?
If the assumptions are already unclear, the rest will not become clearer. What are the conclusions? What is the result? By the way: derivations and results are most of the time correct. Well-educated people can follow a derivation, and anyone can check results. Checking the assumptions of a theory is much harder, often impossible. (The mainstream is sometimes very good at accepting false premises.) If I am confused about an article, I do just that. If an article is good, the article does that automatically. "Superdeterminism is presently the only known consistent description of nature that is local, deterministic, and can give rise to the observed correlations of quantum mechanics." That answers the first question. That is good. Question: What is superdeterminism? What is the problem? The 2nd page above answers that: "The one unfamiliar property of superdeterminism is the violation of statistical independence." "That is because once we drop Statistical Independence, there is nothing preventing us from developing a consistent local and deterministic theory" I can understand that: in the experiments on entangled objects, there must be no correlation between the measured object and the detector. Or vice versa: if there is a correlation between measurement object and detector, then this correlation can/will also be responsible for our result. Then the article goes into the following things:
- hidden variables
- fine-tuning
- conspiracy
- faster-than-light speed
I didn't understand all that in detail. But I trust Sabine. What I miss is a concrete example or an idea of how object and detector get their correlation. The article says: if they correlate with each other, then everything is fine. And I ask: very nice, but how is this correlation formed? I see a big explanatory gap here. And then there is something else, right at the beginning: Why does a good scientific theory have to be "local"???
(Abstract, 1st sentence.) A theory must be local to justify the use of differential calculus. I think: the mathematics must be appropriate to the problems. The mathematics must fit the problems. And not the other way round: I consider the problems until they fit the available mathematics. Many greetings

5. Hello Sabine, as the diabolical Zorg said in the movie "The Fifth Element": "If you want something done, do it yourself." That's probably the only belief we have in common. That's a pity. It's really too bad, because:
- Your English is better than mine (by a factor of 10..100)
- You are very well connected in the mainstream (by a factor - I don't know such a big number)
- You know more publications and experiments than I do (by a factor of 10..100)
Many greetings

6. Stefan, "And I ask: Very nice, but how is this correlation formed? I see a big explanatory gap here." Scientific theories explain observations. They never explain their axioms. If you see an "explanatory gap", you are confused about what requires explanation to begin with.

7. I found the paper helped me get the gist of the workings of Superdeterminism and how to show the effects experimentally, but the parts that were mathematical went over my head. I need to re-read it to refresh my memory. As far as axioms go, if the author doesn't understand correctly they don't have a leg to stand on, but as far as I could tell, the Guide for the Perplexed is rock-solid; Dr. Hossenfelder is a careful author. My biggest issue with the Guide is I remember the muffins/weight-loss/amputation example better than the passage on retro-causation, but I won't forget the paper due to that. To understand the entire paper I'll need to actually study physics.

8. If the issue is what sort of mechanism could cause the experimental results to seem random yet correlated, I think Dr. Tim Palmer's papers propose a chaos-attractor mechanism. I think Dr.
Hossenfelder does not endorse any specific mechanism, but proposes experiments which might shed some light. (Just my superficial opinions.) I think the main purpose of their joint paper was to explain that a superdeterminism model need not be just a coincidental result of initial conditions at the Big Bang (referred to as a "conspiracy theory" by several other scientists), but might have some mechanism which would allow us to make better predictions (once we understood it). I think it is fair to say that the paper has not convinced many of the conspiracy-theory skeptics (extrapolating from the two or three I know of).

9. Also Stefan, there's a lecture Dr. Hossenfelder gave on Superdeterminism that I just remembered on YouTube. Again, the mathematics went over my head, but it was interesting.

10. @JimV, I can get why it looks like a conspiracy/too fine-tuned. It's weird. The objections I've seen and read (3 or 4 of them) are metaphysical/philosophical in nature and object to the bald premise without taking into consideration anything extra Dr. H added by way of further explanations of how Superdeterminism plays out in life. They ask, 'what about morals? What about fun?!' As if Sabine Hossenfelder is some sort of soulless creature without ethics, which is entirely absurd. I'm yet to encounter a solid scientific argument.

37. Could you please share your opinion: is the theoretical assumption that randomness exists as a natural phenomenon equivalent to a violation of the law of conservation of energy? Every change of an object's energy state (beginning of motion, change of direction, change of speed, cessation of motion, etc.) implies the application of extra energy. Nevertheless, when a particle in a "wave state" terminates its own wave function, this change occurs entirely without cause and ..without the application of extra energy. Does it mean anything other than a violation of the law of conservation of energy (except in the case where the many-worlds interpretation holds)?

38. I am getting a phishing warning for this site from Google. I clicked the feedback link to let them know about the error. I had to laugh at the warning that this site is dangerous, since certain status-quo physicists might feel that way.

1. Quite ironic, seeing that this site is hosted by Google to begin with!

2. Dr. H wants your soul. :-9

39. Without non-local quantum entanglement, the development of multi-celled organisms would not have been possible. This ability to network matter/energy may be central to the weak Anthropic Principle. Quantum entanglement and its associated quantum processes as a fundamental life principle may be what makes both us specifically and life in general possible.

1. I resisted responding to your previous statement about "life", assuming that you meant specifically "life of the sort we know", but now that you say "life in general" I must object (as an alternate opinion). We do not know and cannot know the requirements of "life in general" other than some sort of complex physical laws which allow for computations, but elaborations of cellular-automaton rules have been shown to produce reproducing structures and Turing-complete ability. This implies to me that a) it is difficult to assess in advance whether any set of physical laws is capable of producing self-reproducing organizations which can do computations, and b) there are probably many more such sets than we can imagine. Of course any life we find in this universe will use the physical laws that occur here; that is just a tautology.

40. Hi all, for whatever it's worth, I think the easiest solution to information transfer is that everything is predetermined, and all that any observer and/or measurement will do is bring an uncollapsed/entangled wave, particle, etc. to the state that continues what was set in motion at the Big Bang. (Superdeterminism.)
Previously, in the video/post 'Schrödinger's Cat - Still Not Dead', after looking at superposition on a larger-than-quantum scale, Dr. Hossenfelder asks which of 3 assumptions is wrong: 1. No Superdeterminism. 2. Measurements have definite outcomes. 3. No spooky action at a distance. In the case of Sabine's Random Socks, and for the discussions in the above comments, I think the wrong assumption is 'Spooky action at a distance'. I acknowledge that there are possibly issues with my idea that I may be ignorant of or haven't considered; if so, I welcome them being pointed out. 1. After these discussions I just ask myself (and anyone who is interested in the topic): do we really understand how (by what mechanism) light propagates? Can it be that we have missed something essential explaining what seems to be spooky action at a distance but is perhaps only another mode of propagation? In electricity, the nature of the current (continuous or not) matters, and it has consequences in the mathematical treatment; for example, the introduction of complex numbers (impedance). Can you imagine that something similar could explain what appear to be strange behaviors? 2. Hi Paps57, not only have "we" missed something and do not understand the mechanism of light propagation, the physics community has totally ignored the possibility of any real mechanism. Most just use the math and the shut-up-and-calculate mentality. The only way to get a real explanation is to work with a light medium, like every other wave in nature. 3. Hi Paps57, it had never occurred to me that light needed to be propagated; it just seemed to move along from its own energy. I don't know what else to think. 4. That is the crux of the question, and perhaps a question of semantics. How do you describe the motion between the emitter and the receiver? I presume: a propagation. In empty regions of the universe, through what is it propagating? The answer following from the analysis of the Michelson–Morley experiment: nothing.
So, officially, it propagates through nothing. In a perfect vacuum, the signal is something that, as you notice, carries its own motor with it, without loss of energy, without changing anything in it or around it, but that we, at the end of its travel, can receive and/or perceive! It sounds quite miraculous to me and I would add: the scenario is probably not complete. It can be argued that the signal interferes along the way with some gravitational fields, but this fact does not explain its intrinsic and fantastic property of being permanently self-sufficient. Recall: physics does not recognize the existence of perpetual motion (thermodynamics), even if some specific mathematical circumstances in Riemannian spaces allow it (see Lichnerowicz, 1955). This is why I think we are missing something important. Thank you for your reactions. 5. Paps57, I do not disagree that we are missing something important. Probably true of everything! Q = T3 + 0.5 Yw (the electric charge, Q, is related to weak isospin, T3, and weak hypercharge) (from Wiki). The electric charge is related to weak isospin, and the Higgs field has a property of weak isospin (+0.5 or -0.5). The vacuum is not 'nothing' and I believe that the Higgs field cannot be removed from a vacuum. The electron has a property of weak isospin but the photon does not. I have a naive preon model for elementary particle compositions. A long time ago I suggested that a photon behaved like a boat with two counter-rotating engines. Such a boat travels fast. A (chiral) boat with, say, two left-rotating engines and no rudder would stay approximately in one spot but would still require the seemingly perpetual-motion engines. So, paradoxically (?), in my naive model it requires engine motion to stand still. The engines are at several levels down the chain within elementary particles, and as energy increases with small time intervals, there may be enough energy to give the appearance of perpetual motion. Austin Fearnley 6.
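As a small aside, the charge relation quoted above, Q = T3 + 0.5 Yw, can be sanity-checked with the standard quantum numbers of the left-handed fermion doublets. This quick sketch is my own illustration (values are the textbook ones, using the convention Q = T3 + Y_W/2), not part of the preon model being discussed:

```python
# Electric charge Q from weak isospin T3 and weak hypercharge Y_W:
#   Q = T3 + 0.5 * Y_W   (the relation quoted above)
# Standard quantum numbers for the left-handed doublets.
fermions = [
    # (name,        T3,    Y_W,   expected Q)
    ("neutrino",   +0.5,  -1.0,   0.0),
    ("electron",   -0.5,  -1.0,  -1.0),
    ("up quark",   +0.5,  +1/3,  +2/3),
    ("down quark", -0.5,  +1/3,  -1/3),
]
for name, t3, yw, q_expected in fermions:
    q = t3 + 0.5 * yw
    assert abs(q - q_expected) < 1e-12  # the relation holds for all four
    print(f"{name:10s}: Q = {t3:+.2f} + 0.5*({yw:+.3f}) = {q:+.3f}")
```

The photon, having neither weak isospin nor hypercharge, comes out with Q = 0 in the same bookkeeping, consistent with the comment above.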
@Dr Hossenfelder Could you shed any light on this? Maybe electrons emit photons with enough thrust to propel them? (Although that seems too simplistic to be valid). 7. Empty space isn't nothing. It's empty space. 8. If the space is empty, then isn't there nothing in it? 9. "Empty space" means it's a vacuum state. It doesn't have "nothing" in it, it has quantum fields in the ground state in it. And in any case, as I said, space itself is arguably something and not nothing. 10. So, we have to accept the idea that photons interact along the way with quantum and gravitational fields in the ground state in such a manner that c stays invariant? 11. @Dr. Hossenfelder Thanks. I wasn't sure if that's what you meant. Do you have any enlightenment for the question of how photons travel? 12. C Thompson, We calculate how photons travel either with Maxwell's equations or with QED, depending on whether quantum effects are relevant. I don't understand why people ask "how" questions but, when you give them an answer, they dismiss it because they don't understand the math. 13. Dr. Hossenfelder, Thank you, your answer is appreciated. I'll follow that information up. I don't dismiss answers so much as I am not equipped to understand said maths. I am piecing together some understanding as I go along from online resources and slowly making progress. I guess people want to be spoon-fed easy answers. 14. People will probably show up to defend Aristotle, but my recollection is that in his physics, objects tended to stay at rest unless some force was pushing or pulling them, and Newton corrected that to: objects in motion tend to stay in motion unless something resists that motion, such as friction. I believe Newton, as do I think all engineers who have to calculate things that move, such as turbines. Newton also believed that photons are particles, i.e., objects, which obey his law.
QM tells us that particles also have properties similar to waves, the magnitude of which depends on their size, so baseballs no, photons yes. Still, they also have a particle nature which dominates in some situations. So I don't have a problem with them continuing to move until something absorbs them. Note, however, that they move through different media at different speeds, and c only applies in a vacuum. Perhaps in a vacuum which had no quantum fields (if that were even possible) c would be slightly larger, but we have no way of testing this. (My mental model of this, for what little it is worth, is that media which photons move through cause them to bounce around and travel a longer distance than the straight lines used to calculate their measured speeds. I don't walk in straight lines either, so my pedometers show more distance traveled than a map shows between A and B.) (Another possibility is that they get absorbed and re-emitted in the media, with a slight time gap in between. I myself do not have that problem though, so far.) Anyway, talk of photons needing some motor or rocket to move reminds me of Aristotle, and I refuse to go back to his physics, which would make many of the calculations I have done incorrect. (I apologize for inflicting my mental models on possible readers. Dr. Hossenfelder is of course right that it is the math that is important, but many of us need some sort of story to remember the rules with.) (Dr. Scott Aaronson says that the Many Worlds Interpretation, which most laypeople detest, is actually the story that seems most helpful to the students he has taught QM to. Which doesn't make it right, but somewhat useful.) 15. Louis de Broglie thought light was a double particle, and Andre Michaud has written some papers with that in mind. Michaud has some interesting points but I don't see it as anything final.
At some point a type of space medium (the 'nothing' of empty space) will be recognized as the medium of light waves, and a good explanation of the medium will solve most of these questions. It only makes sense that all waves need a medium; it just needs to be figured out. 16. Thanks for elaborating, JimV. I'm not apologising for inflicting myself upon this blog and I don't think you should either. 17. Thank you @SH for allowing my interventions as "some people", @all for your diverse reactions in a tempered pro-and-contra style, and especially @Peter Becher for moderating the attacks; at the end of the day, I am glad to have learnt that I am in some way an ancient Greek "à la Aristotle". The three-year-old child asks: "Why? Why this? Why is this thing so?" Older people observe attentively long enough and tell themselves: "How? How is it possible that…?" Looking for the mechanism (a misleading word, I agree) explaining the propagation of light belongs to that more mature approach. I introduced that word because, in my own paradigm, there must exist a way to explain how light (or any particle) interferes with the geometry, hence my huge interest in geometric algebra. Due to the courage of former generations (from Spartacus up to the terrific twentieth century) asking the mainstream uncomfortable questions, some people nowadays have reached a social situation (retired, no institution) allowing them to leave the "do the math and shut up" status. And they continue to ask unpleasant questions, testing as deeply as possible the coherence of the paradigm which they are obliged to live with. I know the official story. Objects receive an initial impulse and then travel following a straight line or a geodesic at invariant speed if nothing interacts with them. I understand that stars emit light (a specific object perhaps, not sure) and that that light may meet almost no obstacle in these empty interstellar regions.
I also know that this light may meet electromagnetic and/or gravitational fields deviating it from the initial trajectory. I can decode the math for the calculation of the deviations, and I know that it works relatively well because some people repeatedly did the math before me and measured the agreement with observations. Well: end of the story and nothing else to be discovered? Shut up and go away? Then I do not understand why my taxes are paying so many professional researchers trying to understand what the official story does not yet explain (spooky action at a distance, dark matter, dark energy, absence of antimatter, masses of neutrinos and quarks, electronic gyromagnetic moment, etc.) or why this blog exists. "The oracle at Delphi says nothing; it only hints." (Das Prinzip, Jérôme Ferrari, on Heisenberg's fate). Some people. 18. I have not mentioned 'engines' and 'counter-rotating screws' in my online papers as they are (very) hypothetical features at the sub-preon level. I think of the chiral engines as not disobeying Newton's laws, and they would require a force to stop motion. It is only a means of giving mass to a fermion. A massive particle will not simply depart at near speed c without an external force. But a photon will. I use the engine analogy to imagine how this might happen. This did arise as a question needing an answer in my preon model. If I can build 'two photons' from 'an electron plus a positron' by rearranging (four types of) preons, then there should be some property within the preon arrangements that allows the photon to move at speed c but prevents the electron from doing so. Some arrangements allow c and others do not, hence the counter-rotation idea. Back to entanglement... I followed Susskind's course on entanglement and, in it, he calculated how QM breaks the Bell Inequalities. I am not clear that his proof is sufficient, as it seems not random enough.
He used projection operators to project singlet-entangled particles onto the up direction and also onto the 45-degree diagonal direction. But his singlet is |up, down> - |down, up>, which is already parallel with the 'up' projection target direction. Are there any calculations already done with the general case of using the same projection operators but using a generalised singlet, say |m, opposite_of_m> - |opposite_of_m, m>, where m is a random direction? Austin Fearnley 19. RE: "Are there any calculations already done with the general case of using the same projection operators but using a generalised singlet..." N. David Mermin, in his popular-science book "Boojums All The Way Through", presents a couple of different examples of possible experiments similar to Bell's example (seeming to rule out hidden variables) and gives the parameters of the experimental setups and the calculation results, but does not go through the calculations themselves (unless they are in an appendix which I haven't gotten to--checking ... no, but he does reference some old papers and an old Scientific American article, circa the 1970s). 20. @Paps57, I don't think there have been any attacks as such. We asked questions and got answers from the subject-matter expert. That I don't know Maxwell's equations or anything about QED is for me to sort out. (I'm happy that I actually got an answer from Dr. Hossenfelder at all, given how beneath others' knowledge levels I am.) I took JimV's remarks on Aristotle to refer to how ideas have progressed. As far as I can tell, the blog exists to host discussions like this one, where multiple people can contribute. I hope you do find out more beyond just the official answers, and that you share it on this blog. 21. Paps57, I'm with C Thompson, don't think of questioning as attacks. I like your questions and thoughts. Keep thinking about and asking "HOW"!
(even if Sabine cannot understand why we do it) I have read how de Broglie got hammered by Bohr and others and lost sight of his insights into the possible true underlying workings of "particles". It took him decades to sort of get back to them, but it seems to me that he never went deeper with his ideas on the 'high frequency energy center of a corpuscle'. I don't think he thought much about the aether in his day, since Einstein had disavowed it and de Broglie grew up with that. I just have a deep feeling that a space medium is the key to the underlying nature of the universe. Not sure if I will be able to show it well or prove anything, but who knows. 22. C Thompson, On the 'nothing of space': the mathematical model has all those "quantum fields" for particles. It allows for the accurate calculations of QM. As Don Lincoln mentions in at least one of his videos, the holy grail would be to find a "Single Stuff" that would unite all particles and forces. As I mentioned for Paps57, that would be a space medium. So "space is not empty": it can be empty of 'sensable' stuff (matter, light, etc.), but the space medium would be the 'non-emptiness' of space, the basis for all that sensable stuff, and would be the 'single stuff'. The properties of the space medium will determine the ultimate answer to the title of this blog post. 23. Peter: "Keep thinking about and asking "HOW"! (even if Sabine cannot understand why we do it)" You entirely missed the point. What I said is that I don't understand why people ask how-questions but then ignore the answers that you give them. How does gravity work? Einstein's field equations. That's the best answer we have, and it's a remarkably successful and accurate answer. How do photons propagate? QED. And so on. What all those people with their "how" questions want is a dumbed-down interpretation, which they will then complain doesn't work, which of course it doesn't. Same problem with quantum mechanics. 24. @Dr.
Hossenfelder: I asked you specifically because I hoped you would answer from better knowledge; as I said up-thread, now it's on me to take it from there. Any time you answer my queries I appreciate it. I don't know if Paps57 or anyone else was expecting a 'dumbed-down' interpretation. @Peter Becher: Thanks for your answer too. 25. Sabine, I did get your point, but you seem to miss our point. We ask 'how' to try to find a deeper explanation of how things actually work in nature. You say "How does gravity work? Einstein's field equations. That's the best answer we have". GR is a good mathematical description of how gravity works, and that is fine. I don't know if you think or have said 'the GR curvature of spacetime IS gravity', but many physicists seem to think that. But saying that is like saying a QM description of water IS water; just try to drink a page of math or computer output. A good quantum version of gravity will probably never be had because it is getting further and further from what the actual cause of gravitation is. It would be really good if physicists actually tried to find this fundamental cause, but they are too lost in the math, as some smart person once wrote a book about. 26. Peter, Maybe read the book again, because in it I explain how to find better explanations. It's not by asking nonsense questions and demanding intuitive answers. That's a stereotypical crank approach. And I am not a platonist, and not a realist either. 27. Would anyone like to give an answer to my comment at the top of this thread? 28. I'm not really qualified to have an opinion (although that rarely stops me from giving one) on which of the three axioms is wrong, and unlikely to give one that hasn't already been discussed in the previous blog post where those axioms were presented. However, from what I can tell, most physicists would stake their honor on the choice of superdeterminism being wrong. (I encountered this most recently at Dr. Scott Aaronson's blog.)
The reason (I think) is that QM gives precise mathematical recipes for calculating outcomes of events at the quantum level without assuming SD, and nature appears to agree rather perfectly with those recipes, and nobody (yet) can conceive of any way (except the most fantastic set of coincidences ever) in which SD could produce those results. It is mind-stretching to conceive there might even be a model for that. It is also mind-stretching to realize, as Einstein and others pointed out, that QM results imply that all of reality is not real all of the time, but can become real instantaneously (or close to it) across large distances, when it needs to. However, Bohr and others argued from the very beginning that nature was telling us that, and we needed to accept it. So you can have a) a method that works and implies certain uncomfortable facts; or b) a method which could eliminate the uncomfortable facts if we knew how to implement it, but so far nobody does. Most physicists prefer a), and teach physics accordingly. 29. JimV, Yes, but the choice is not between QM and superdeterminism. Superdeterminism does not imply a denial of QM, only of QM's completeness. If QM is assumed to be complete, one has to deny locality and hence go into conflict with relativity. If one wants locality, superdeterminism is the only choice. The reason most physicists reject superdeterminism is that they do not know or understand the consequences of the EPR and Bell arguments. Very few of them would openly deny locality (Tim Maudlin is one of those few). I can conceive a way. The source and detectors interact electromagnetically, being aggregates of charged particles. The correlations are the effect of those interactions. This is the explanation for any correlation ever observed. Non-local effects were never found. "Reality" is a red herring, in fact. What is at stake is the existence of hidden variables and locality. Locality implies hidden variables.
The rejection of hidden variables implies non-locality. You cannot explain the EPR correlation in a local way by denying "reality". No, we either have a complete/fundamental QM and physics is non-local, or we have superdeterminism. Both options require a reformulation of some accepted physical theories. Non-locality requires a reintroduction of an absolute reference frame and a reformulation of relativity. Superdeterminism would replace QM completely. 't Hooft seems to be doing just that: "Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy". 30. Sabine, It is unfortunate that they are seen as nonsense questions. Little hope for the foundations in that. 31. C Thompson, I suppose I helped steer your question in the wrong direction, but it was mostly a statement. You mention: "Dr Hossenfelder asks which of 3 assumptions is wrong: 1. No Superdeterminism. 2. Measurements have definite outcomes. 3. No spooky action at a distance." You state: "I think the wrong assumption is 'Spooky action at a distance'." So do you think 'spooky action', or 'no spooky action' as assumption 3 states, is wrong? In my opinion 3 is the only one that is correct. 1. Superdeterminism is extremely unlikely. 2. Measurements are not definite but follow the probabilities. If they were definite then #1 might be correct. 3. There is 'no spooky action at a distance'. As Sabine said in this video, "But there’s no spooky action in the correlation themselves. These correlations were created locally". So then there are no spooky actions in the detection, because that is the way the polarizers work. The probabilistic ability of the polarizers to pass photons according to the QM sine-wave function is just how they work. The "spooky-est" action is what is going on in the polarizer for any given photon at any given polarizer/photon angle. And this is where, as I mentioned elsewhere, CM and QM both cannot explain what is going on in the polarizer.
It is also why I feel that the HOW questions (by cranky people like me) are so important. I feel that once the 'how' of the polarizer action is explained, it will be able to be done by a Classical Mechanics solution that matches QM. 32. Mr. Andrei, we have discussed your EM-based superdeterminism before. My guess is that if EM configurations of experimental settings had a strong influence on measurement results, then placing two sets of such equipment next to the active one would vary the experimental results as the settings of the non-functional equipment were varied, but in actual experiments of that sort no effects would be detected. (I don't know if it has been tried.) Secondly, your model, as you have admitted, cannot be calculated, and a model without predictive ability is not much improvement on no model. But as my friend Mario says, if it lets you sleep soundly at night, that is worth something--to you. Currently I am not usually able to sleep soundly myself, but it isn't the lack of a rigorous SD model that bothers me. Lastly, about reality: those are the terms which Dr. N. David Mermin attributes to Einstein in his book, but perhaps he was mistaken, or Einstein was. My impression though was that Dr. Sabine's post which we are commenting on supports that broader interpretation of Einstein's views. In any case, my opinions are not worth a hill of beans, and perhaps the impressions I have of what others in the physics community think are not either. 33. @JimV "It is mind-stretching..." I like this remark. "To be (real, visible, detectable) or not to be, that is the question. (inspired by Hamlet)" and perhaps a possible explanation for the probabilistic behavior of real objects. Just a crazy suggestion. 34. @Peter I agree that: 1. Superdeterminism is unlikely. 2. Measurement outcomes are determined by probabilities. 1. I prefer retrocausality (of antimatter) to superdeterminism. It is easy to show that retrocausality can produce the Bell QM correlation.
(My June 2020 online paper.) 2. This is a difficult one: are the outcomes really random [maybe allowing free will], or are they only apparently random [maybe removing free will]? But I can leave aside the free-will issue here. The important point is that hidden-variable static vectors cannot underlie the Bell experiment measurements. The quantum Randi test rules out a measurement being solely determined by a hidden variable (static vector) and a detector setting. [And I have tried enough computer simulations to believe this.] So the same (= 'same' hidden variable) incoming particle meeting the same [= 'same' detector setting] detector can have different results (so 'no counterfactual definiteness'). Peter wrote: "I feel that once the 'how' of the polarizer action is explained, it will be able to be done by a Classical Mechanics solution that matches QM." I already have a Classical model of an electron that gives the Malus Law results. (See my June 2020 paper online.) The electron in this model has a (varying) hidden variable. Imagine the electron pointing upwards. It is not static but is precessing and nutating so that it is always pointing in the same upwards hemisphere. But it points mostly |up> and less frequently points at 90 deg away from |up>. So if you take |up> polarised electrons and measure them at a 45 deg setting, then more than expected would pass through the 45 deg filter. You might expect 75% to pass the filter (Classical correl = 2 * 0.75 - 1 = 0.5), but in fact 85.3% will pass the filter (2 * 0.853 - 1 = 0.707 = Bell QM correlation), as the hidden variable is more often pointing towards |up> than you would expect. Every |up> electron would look like this, so there is only one hidden variable and it applies to all electrons. The hidden variable is not a tag to be used to identify an individual electron. 3. The only QM proof of the Bell correl that I have seen is flawed (Susskind online lecture).
It finds the value 85.3% but is really just proving the Malus Law. And I can do that with my classical model. But I am trying to find a more general QM proof of the Bell correlation, or to calculate it myself. I like 't Hooft's (superdeterminism?) idea of the particle hidden variable changing instantaneously, or just after a detector setting is chosen, even with rapidly changing settings. [Thanks to JimV for the arxiv reference.] This is also what happens in my retro model. When a positron is measured, the positron travels backwards in time to the source [just as it appears to do in a Feynman diagram] immediately after it is re-polarised by the measurement to take on a new polarisation equal to that of the setting of that detector. This really makes the physics again equivalent to Malus and not a general QM solution for a Bell experiment. And I have a classical model to give agreement with Malus's Law. So 'not spooky'. (I have computer program code online that gives Stern–Gerlach measurement outcomes using this classical model.) The precessing and nutating motion of the electron makes the measurement outcome a variable which is dependent on the time of the measurement, and that gives an apparently random aspect to a measurement. 35. In response to: "I already have a Classical model of an electron that gives the Malus Law results."--Austin Fearnley That sounds like the result of a lot of clever work and research. At first glance, it seems more like a new interpretation of QM (the Fearnley Interpretation) than a classical model, because it incorporates quantum randomness and is not explicitly deterministic, and adds no new predictive ability to QM (that I can see--but my vision is getting worse and worse). I have not tried to apply it to all quantum experiment situations, such as EPR, so I don't know if it qualifies as a complete interpretation.
But as far as I understand it, it seems the randomness of the polar orientation is the key element which would cause it to match other interpretations. 36. @JimV IMO more dogged determination than 'clever work', but thanks for the first-ever appreciative comment on my physics work! JimV wrote: "I have not tried to apply it to all quantum experiment situations, such as EPR, so I don't know if it qualifies as a complete interpretation." I have used computer simulations of my model in a conventional Bell experiment. It cannot provide the QM correlation without the positron acting retrocausally. The retrocausality turns the Bell experiment into a kind of Malus Law experiment. My model works for Malus's Law. The strange thing is that Malus's Law calculations dovetail nicely into the Bell QM correlation calculations in the limited circumstances when the incoming particles are already polarised along one of the detector settings. That's what Malus is about: take a polarised beam of particles (all polarised in the same direction, say zero degrees) and then measure them at a detector setting of 45 degrees. But QM calculations are supposed to cover any random polarisation angles of incoming particles. My model cannot cover this without using retrocausality. I am now trying to see if QM really can do this. The only QM Bell calculations I have seen have the incoming beam as entangled |up, down> - |down, up> particles which are then projected onto the |up> axis. That part of the calculations looks to me more like a Malus situation than a generalised Bell experiment. I expect QM will work in the generalised case, but maybe QM has retrocausality already built into it covertly. My next step will be to try to simulate a Bell experiment testing QM calculations based on random polarisation directions of incoming particles instead of |u,d> - |d,u> directions. Austin Fearnley 37.
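Austin's question about a generalised singlet (whether the QM correlation survives when the state is written along a random direction m rather than along |up>) can actually be checked numerically: the singlet state is rotationally invariant, so the correlation depends only on the relative angle between the two detector settings, not on any preferred axis. The following numpy sketch is my own illustration, not taken from Susskind's lecture or any of the papers mentioned above:

```python
import numpy as np

# Pauli matrices; measurement directions are taken in the x-z plane.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    """Spin measurement operator along angle theta from the z-axis."""
    return np.sin(theta) * sx + np.cos(theta) * sz

# The singlet state (|up,down> - |down,up>) / sqrt(2).
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def correlation(a, b):
    """E(a, b) = <singlet| S(a) (x) S(b) |singlet>."""
    return (singlet.conj() @ np.kron(spin_op(a), spin_op(b)) @ singlet).real

# Rotational invariance: only the relative angle a - b matters,
# so an overall rotation of both detectors changes nothing.
for offset in (0.0, 0.7, 2.3):  # arbitrary overall rotations
    print(round(correlation(offset, offset + np.pi / 4), 4))  # -0.7071 each time

# The numbers quoted in the comments above:
print(round(np.cos(np.pi / 8) ** 2, 4))          # 0.8536 transmission at 45 deg
print(round(2 * np.cos(np.pi / 8) ** 2 - 1, 4))  # 0.7071 = the Bell/QM figure
```

The sign of E is negative because the singlet anti-correlates; its magnitude, 0.707, is the figure quoted in the discussion, and cos²(22.5°) ≈ 0.853 reproduces the 85.3% transmission mentioned above.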
JimV, All matter in the universe interacts electromagnetically, not only the devices used in a particular Bell test. There is nothing special about them; they are just large groups of atoms, like chairs and tables. Earth, Moon, and everything else is involved in this interaction, so placing another small piece of equipment near the experiment is not expected to make any difference. The point of this model is not to make predictions, although it could, in principle. The point is to show that Bell's claim, that classical physics is in conflict with QM, is unsupported. Classical electromagnetism cannot be shown to be in conflict with QM. The fact that we cannot simulate that experiment is not very relevant. We cannot simulate any system with more than 100 particles or so. We cannot simulate a tree or a cat or a pot. But we don't assert that those systems imply some failures of some physical principles. If one makes that claim, the burden is on him to provide evidence for that claim. In a Bell test a photon source we know nothing about (the microscopic conditions at the emission locus are unknown) produces some photons with some polarization. So what? What does this prove? Nothing at all. I am looking forward to seeing someone provide some evidence that such an observation contradicts classical electromagnetism. Until then, I have no reason for concern. 41. When I first became acquainted with the "Spooky action at a distance" conundrum, probably in the '80s, I assumed that the correlation between distant entangled photons or electrons was 100%. Later, I discovered it was only 85%, but still above the 75% predicted by chance (going from an August 1, 2019 article titled "Entangled Quantum Particles Can "Communicate" Through Time"). That article refers to correlations through time rather than through space, but I think the same percentages apply.
So one day it dawned on me that 85% experimental correlation directly linked to an idea that I had developed following a 1996 epiphany which postulates real, very specific, hidden variables as the basis of de Broglie, or matter, waves. I’m positive that I noted this down in one of multiple ‘ideas’ notebooks, but realized it needed to be studied in more detail to determine its viability. 42. You are going to drive me crazy; I explain; I think Einstein uses quantum entanglement to argue that quantum mechanics is incomplete; I said "I believe"; I put an example; two classical bodies with a given mass and speed collide, after the collision I measure the speed of one and immediately know the speed of the other without having to measure it; just doing a little calculation, this would be a kind of classic entanglement, mathematically the two bodies are a unique system after the collision; It seems to me that Einstein does something similar with his quantum particles; How could the final result be known if they are telling us that it is not defined and that it is probabilistic?; If the result can be known, then there is something else that defines it (hidden variables) or there is magic (spooky action); It's what I think he meant. Please, am I wrong? 1. I am not sure this helps, but from N. David Mermin's book, which I have referred to above, If you excite a calcium atom to emit two photons, and select a polarization angle (rotating a polarizing lens which the photon goes through), it is random whether the photon is polarized for that angle or not. An experiment is set up to pass both photons through (different) polarizing lenses, each set randomly (independent of each other) to either -60, 0, or 60 degrees (from the vertical). The settings take place after the photons are emitted but before they reach the lenses. 
The spacing of the lenses and detectors is such that information of one lens' setting would have to travel faster than c to reach the opposite photon before it reached its lens. After thousands of random runs of this experiment, the statistics are that whenever the two random lens angles were identical, both photons had the same result: either both made it through the lens, or neither did. However, over all cases in which the settings differed, the two results agreed only 25% of the time. If the photons each carried identical polarization values for the three possible settings (e.g., pass at -60 degrees, pass at 0 degrees, blocked at 60 degrees, or PPB), the statistical agreement should be at least 33%. Quantum Mechanics predicted the 25%. The >=33% assumes each photon has fixed values for polarizations of the three angles at the moment of creation, and these values are not dependent on the random lens selection. (Just enumerate all the possible pairs of settings, assuming each occurs the same number of times--roughly--over many random trials, and compare to possible random photon pre-settings.) According to Dr. Mermin, Einstein et al. wrote their EPR paper assuming that the QM prediction of 25% (for the above experiment, they had a similar but different experiment in mind) had to be wrong, but did not live long enough to learn the above results, which took place years later. Meanwhile, Bohr wrote a response, basically saying that reality does not have to have the fixed nature which we see ("classical behavior") in our daily lives. One perhaps glib rationalization of the QM result is that if all three polarizations of a photon were "real" at its creation, the photon would be over-determined, in violation of the Uncertainty Principle. (Dr. Popper later objected strenuously to this interpretation--that the UP is in fact a law of reality, but the consensus is against him.) 
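The 25% versus at-least-33% counting above can be verified by brute force. The sketch below (my own code; the encoding of instruction sets as strings like "PPB" is an assumption for illustration) enumerates all eight possible instruction sets and compares the worst case with the quantum prediction cos²(60°):

```python
from itertools import product
import math

# Each photon pair carries a fixed pass (P) / blocked (B) outcome for each
# of the three lens settings -60, 0, +60 degrees (an "instruction set").
settings = [0, 1, 2]

def agreement_fraction(instructions):
    """Fraction of different-setting pairs on which the two photons agree,
    given that both carry the same instruction set."""
    pairs = [(a, b) for a, b in product(settings, repeat=2) if a != b]
    agree = sum(instructions[a] == instructions[b] for a, b in pairs)
    return agree / len(pairs)

fractions = [agreement_fraction(ins) for ins in product("PB", repeat=3)]
print(min(fractions))                   # 1/3: no instruction set goes below 33%
print(math.cos(math.radians(60)) ** 2)  # quantum prediction for mismatched settings: 1/4
```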
My very-under-baked feeling is that reality not being entirely "real" (in Einstein's terms) is perhaps consistent with a universe which ultimately started from nothing. (But no one should care what I think! Science just tells us how things seem to work, not why.) 43. Quite an interesting topic, for sure. I like the example of transferring momentum by bouncing a ball off a wall... not 'spooky' at all. But is there a way to introduce spookiness, say, by bouncing two balls off the wall at the same time? Can the wall be used as an intermediary to entangle the momentum of both balls a little bit? More importantly, is there a way to determine whether the correlation is due to spookiness or local effects...? 1. Hi DJ, Only if both balls were able to hit exactly the same part of the wall at the same time, I imagine. 44. This comment has been removed by the author. 45. People think of space as somewhere so far out there that there is nothing you can bump into, but if you put a very expensive thermometer way out there it would register a temperature, so empty space is at the very least not completely empty. 46. Photons in Spaccceee... It's an unusually cool day in Southern Oregon. As I write this, I have an electric heater turned on about a meter from my feet. Photons are leaving the heater and warming my feet quite nicely. How do they do this? How do they get from the heater to my feet, and why would that short trip be any different from what they do in empty space? You could say they warm the air near the heater and the air travels to my feet on air currents. But even so they still have to get from the heater to the nearby air molecules to warm them up. How do the photons do that? The distance is shorter but it should be clear that the principle is the same as for empty space. So why is it more mysterious how photons travel in space and not any more mysterious how they travel from the heater to warm my feet? 47. 
Hi Steve, IMO: We're used to prosaic everyday stuff like radiant heaters so we don't think about it. Space is mysterious because it's beyond our (non-scientists') ken, first-hand. I didn't really think about how photons travelled at all until Paps57's comment. I just thought of heat transfer in terms of vibrating atoms and energy transfer. To your earlier comment, that makes sense. If there's a measurable temperature, there's at the least some energy. 48. Good points by Steve Bullfox, relative to the space we are in, which contains the Cosmic Microwave Background radiation. However, not every physicist I have read is sure that space was not infinite and empty except for the Big Bang at the Big Bang. In which case there is still space that no material or radiation has reached, and although there is no thermometer there to read it, it has a temperature of absolute zero. (Since virtual particles have to borrow energy from the vacuum it seems to me their effect would average to zero, but of course I could be wrong, and for that matter I don't like the infinite space concept anyway.) (Boy are we off-topic, I almost hope this gets moderated.) 49. While reading several comments above, a knotty problem crossed my mind: Suppose a mafia big shot on earth holds one of two entangled electrons while his companion on Mars holds the other one. The two partners in crime made the following deal: the companion will measure the spin (up or down) at 10:02 pm. If the spin is up he will chop off the head of some hostage, if down he will release the hostage. At 10:00 pm the big shot measures the spin (up or down) of his electron and at 10:02 pm the hostage loses his head. Question: did the big shot cause the death of the hostage? Hint: the distance between the Earth and Mars is 10 light minutes. 50. JimV, I once had a conversation with my fundamentalist Christian brother-in-law about why there is something rather than nothing. He thought the fact that this is the case was proof of God. 
I said to him there might be a lot of nothing somewhere, but there has never been not no nothing nowhere around here. We still talk, but not on that subject. 51. To get a better layman’s understanding of this, I recommend: 52. Ms. Hossenfelder, thank-you for your fantastic videos. Your logic is piercing and your arguments are compelling. Regarding spooky action at a distance, I feel like perhaps there is an unwarranted presumption about "distance". We proclaim that space and time are intricately linked but then all references to "local" rely on the colloquial definition of the word (i.e. spatial distance only). In fact, literal contact between two objects does not generally occur at all, so "local" is not just loosely defined but factually in error. I believe that if we define objects to be in contact which share an interval distance of zero then we can recover a local theory from QM. 53. Does this basically come down to the quantum probabilities communicating with each other so they "know" when to cancel out? So catching one electron couldn't produce another from the probabilities of being found elsewhere? 54. Dr. Hossenfelder, Very interesting subject and very interesting conversation. You wrote that: I think that what worried Einstein was that, at the time of the 1927 Solvay conference, some people were saying that, while the total spin was conserved, the particles themselves didn't "know" which spin they each had. The spins were undetermined - even for the particles themselves - until a measurement was done. Only then would the particles have their spins defined instantaneously regardless of the distance between the two particles. It meant that for two particles separated by a galaxy, the measurement of the spin of one particle "informed" the other particle to adopt the opposite spin, not because the spins were already determined from the start when the particles were in contact but because of the rules of Quantum Mechanics. 
Of course, it was a position that Einstein could not accept (since there was a potential violation of one of the tenets of the theory of relativity). He believed that particles had definite characteristics even when we don't "look" at them, whereas some others thought that only measurements "gave" particles their particular values (like position, momentum, spin, etc), leading to this notion that reality depends on the observer, whereas Einstein believed that reality was independent of our observations. 55. If we live in a universe which creates itself out of nothing, without any outside intervention, then particles and particle properties must be as much the cause, the source, as the effect, the product, of their interactions (with all other particles within their interaction horizon); can we then avoid the conclusion that their communication must be instantaneous? 1. Anton, I don't know, but what I can say is that it is a big "IF". Also, when you speak of "a universe which creates itself out of nothing, without any outside intervention, etc", are we still dealing with physics... or metaphysics? 56. There's nothing spooky if we are living in a simulation (sic). Particles reference their states by pointers. The state is changing at the pointer location, which is then accessed by both particles. 57. To assume that the universe has been created by some outside intervention is metaphysics. The challenge to physicists is to find out how it manages to create itself. I think that the confusion about ‘spooky action at a distance’ originates in our present notion of time, the assumption that (ignoring velocity and gravitational time dilation) it is the same time, that time passes at the same pace everywhere.
Molecular Hamiltonian
In atomic, molecular, and optical physics and quantum chemistry, the molecular Hamiltonian is the Hamiltonian operator representing the energy of the electrons and nuclei in a molecule. This operator and the associated Schrödinger equation play a central role in computational chemistry and physics for computing properties of molecules and aggregates of molecules, such as thermal conductivity, specific heat, electrical conductivity, optical, and magnetic properties, and reactivity. The elementary parts of a molecule are the nuclei, characterized by their atomic numbers, Z, and the electrons, which have negative elementary charge, −e. Their interaction gives a nuclear charge of Z + q, where q = −eN, with N equal to the number of electrons. Electrons and nuclei are, to a very good approximation, point charges and point masses. The molecular Hamiltonian is a sum of several terms: its major terms are the kinetic energies of the electrons and the Coulomb (electrostatic) interactions between the two kinds of charged particles. The Hamiltonian that contains only the kinetic energies of electrons and nuclei, and the Coulomb interactions between them, is known as the Coulomb Hamiltonian. From it are missing a number of small terms, most of which are due to electronic and nuclear spin. Although it is generally assumed that the solution of the time-independent Schrödinger equation associated with the Coulomb Hamiltonian will predict most properties of the molecule, including its shape (three-dimensional structure), calculations based on the full Coulomb Hamiltonian are very rare. The main reason is that its Schrödinger equation is very difficult to solve. Applications are restricted to small systems like the hydrogen molecule. Almost all calculations of molecular wavefunctions are based on the separation of the Coulomb Hamiltonian first devised by Born and Oppenheimer. 
The nuclear kinetic energy terms are omitted from the Coulomb Hamiltonian and one considers the remaining Hamiltonian as a Hamiltonian of electrons only. The stationary nuclei enter the problem only as generators of an electric potential in which the electrons move in a quantum mechanical way. Within this framework the molecular Hamiltonian has been simplified to the so-called clamped nucleus Hamiltonian, also called electronic Hamiltonian, that acts only on functions of the electronic coordinates. Once the Schrödinger equation of the clamped nucleus Hamiltonian has been solved for a sufficient number of constellations of the nuclei, an appropriate eigenvalue (usually the lowest) can be seen as a function of the nuclear coordinates, which leads to a potential energy surface. In practical calculations the surface is usually fitted in terms of some analytic functions. In the second step of the Born–Oppenheimer approximation the part of the full Coulomb Hamiltonian that depends on the electrons is replaced by the potential energy surface. This converts the total molecular Hamiltonian into another Hamiltonian that acts only on the nuclear coordinates. In the case of a breakdown of the Born–Oppenheimer approximation—which occurs when energies of different electronic states are close—the neighboring potential energy surfaces are needed; see this article for more details. The nuclear motion Schrödinger equation can be solved in a space-fixed (laboratory) frame, but then the translational and rotational (external) energies are not accounted for. Only the (internal) atomic vibrations enter the problem. Further, for molecules larger than triatomic ones, it is quite common to introduce the harmonic approximation, which approximates the potential energy surface as a quadratic function of the atomic displacements. This gives the harmonic nuclear motion Hamiltonian. 
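As a toy illustration of the step where the potential energy surface is "fitted in terms of some analytic functions": sample a one-dimensional potential curve pointwise and fit a polynomial in the displacement coordinate. All data and parameters below are invented for the example:

```python
import numpy as np

# "Computed" electronic energies on a grid of bond lengths (bohr);
# the Morse-like shape and every constant here are made up.
R = np.linspace(1.0, 3.0, 21)
Re = 1.4                                     # assumed equilibrium distance
E = 0.17 * (1.0 - np.exp(-(R - Re))) ** 2    # toy potential curve

# Fit a degree-6 polynomial in the displacement (R - Re) and check the residual.
coeffs = np.polyfit(R - Re, E, deg=6)
fit = np.polyval(coeffs, R - Re)
print(np.max(np.abs(fit - E)))   # residual stays small over the sampled grid
```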
Making the harmonic approximation, we can convert the Hamiltonian into a sum of uncoupled one-dimensional harmonic oscillator Hamiltonians. The one-dimensional harmonic oscillator is one of the few systems that allows an exact solution of the Schrödinger equation. Alternatively, the nuclear motion (rovibrational) Schrödinger equation can be solved in a special frame (an Eckart frame) that rotates and translates with the molecule. Formulated with respect to this body-fixed frame the Hamiltonian accounts for rotation, translation and vibration of the nuclei. Since Watson introduced in 1968 an important simplification to this Hamiltonian, it is often referred to as Watson's nuclear motion Hamiltonian, but it is also known as the Eckart Hamiltonian.
Coulomb Hamiltonian[edit]
The algebraic form of many observables—i.e., Hermitian operators representing observable quantities—is obtained by the following quantization rules:
• Write the classical form of the observable in Hamilton form (as a function of momenta p and positions q). Both vectors are expressed with respect to an arbitrary inertial frame, usually referred to as laboratory-frame or space-fixed frame.
• Replace p by -i\hbar\boldsymbol{\nabla} and interpret q as a multiplicative operator. Here \boldsymbol{\nabla} is the nabla operator, a vector operator consisting of first derivatives.
The well-known commutation relations for the p and q operators follow directly from the differentiation rules. Classically the electrons and nuclei in a molecule have kinetic energy of the form p2/(2m) and interact via Coulomb interactions, which are inversely proportional to the distance rij between particle i and j,
r_{ij} \equiv |\mathbf{r}_i -\mathbf{r}_j| = \sqrt{(\mathbf{r}_i -\mathbf{r}_j)\cdot(\mathbf{r}_i -\mathbf{r}_j)} = \sqrt{(x_i-x_j)^2 + (y_i-y_j)^2 + (z_i-z_j)^2 } . 
In this expression ri stands for the coordinate vector of any particle (electron or nucleus), but from here on we will reserve capital R to represent the nuclear coordinate, and lower case r for the electrons of the system. The coordinates can be taken to be expressed with respect to any Cartesian frame centered anywhere in space, because distance, being an inner product, is invariant under rotation of the frame and, being the norm of a difference vector, distance is invariant under translation of the frame as well. By quantizing the classical energy in Hamilton form one obtains a molecular Hamiltonian operator that is often referred to as the Coulomb Hamiltonian. This Hamiltonian is a sum of five terms. They are
1. The kinetic energy operators for each nucleus in the system;
2. The kinetic energy operators for each electron in the system;
3. The potential energy between the electrons and nuclei – the total electron-nucleus Coulombic attraction in the system;
4. The potential energy arising from Coulombic electron-electron repulsions;
5. The potential energy arising from Coulombic nuclei-nuclei repulsions – also known as the nuclear repulsion energy. See electric potential for more details.
1. \hat{T}_n = - \sum_i \frac{\hbar^2}{2 M_i} \nabla^2_{\mathbf{R}_i}
2. \hat{T}_e = - \sum_i \frac{\hbar^2}{2 m_e} \nabla^2_{\mathbf{r}_i}
3. \hat{U}_{en} = - \sum_i \sum_j \frac{Z_i e^2}{4 \pi \epsilon_0 \left | \mathbf{R}_i - \mathbf{r}_j \right | }
4. \hat{U}_{ee} = {1 \over 2} \sum_i \sum_{j \ne i} \frac{e^2}{4 \pi \epsilon_0 \left | \mathbf{r}_i - \mathbf{r}_j \right | } = \sum_i \sum_{j > i} \frac{e^2}{4 \pi \epsilon_0 \left | \mathbf{r}_i - \mathbf{r}_j \right | }
5. \hat{U}_{nn} = {1 \over 2} \sum_i \sum_{j \ne i} \frac{Z_i Z_j e^2}{4 \pi \epsilon_0 \left | \mathbf{R}_i - \mathbf{R}_j \right | } = \sum_i \sum_{j > i} \frac{Z_i Z_j e^2}{4 \pi \epsilon_0 \left | \mathbf{R}_i - \mathbf{R}_j \right | }.
Here Mi is the mass of nucleus i, Zi is the atomic number of nucleus i, and me is the mass of the electron. 
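The three potential terms above are straightforward to evaluate classically for given point-charge positions. A sketch in Hartree atomic units, so that e²/4πε₀ = 1 and lengths are in bohr (the function name and all positions below are hypothetical):

```python
import numpy as np

def coulomb_terms(R, Z, r):
    """Classical Coulomb terms for point charges.
    R: (N,3) nuclear positions, Z: (N,) atomic numbers, r: (n,3) electron positions."""
    U_en = -sum(Z[i] / np.linalg.norm(R[i] - r[j])          # electron-nucleus attraction
                for i in range(len(Z)) for j in range(len(r)))
    U_ee = sum(1.0 / np.linalg.norm(r[i] - r[j])            # electron-electron repulsion
               for i in range(len(r)) for j in range(i + 1, len(r)))
    U_nn = sum(Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])    # nuclear repulsion energy
               for i in range(len(Z)) for j in range(i + 1, len(Z)))
    return U_en, U_ee, U_nn

# Example: two protons 1.4 bohr apart (H2-like), two electrons at made-up positions.
R = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.4]])
Z = np.array([1.0, 1.0])
r = np.array([[0.0, 0.0, 0.2], [0.0, 0.0, 1.2]])
print(coulomb_terms(R, Z, r))
```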
The Laplace operator of particle i is
\nabla^2_{\mathbf{r}_i} \equiv \boldsymbol{\nabla}_{\mathbf{r}_i}\cdot \boldsymbol{\nabla}_{\mathbf{r}_i} = \frac{\partial^2}{\partial x_i^2} + \frac{\partial^2}{\partial y_i^2} + \frac{\partial^2}{\partial z_i^2} .
Since the kinetic energy operator is an inner product, it is invariant under rotation of the Cartesian frame with respect to which xi, yi, and zi are expressed.
Small terms[edit]
In the 1920s, much spectroscopic evidence made it clear that the Coulomb Hamiltonian is missing certain terms. Especially for molecules containing heavier atoms, these terms, although much smaller than kinetic and Coulomb energies, are nonnegligible. These spectroscopic observations led to the introduction of a new degree of freedom for electrons and nuclei, namely spin. This empirical concept was given a theoretical basis by Paul Dirac when he introduced a relativistically correct (Lorentz covariant) form of the one-particle Schrödinger equation. The Dirac equation predicts that spin and spatial motion of a particle interact via spin-orbit coupling. In analogy, spin-other-orbit coupling was introduced. The fact that particle spin has some of the characteristics of a magnetic dipole led to spin-spin coupling. Further terms without a classical counterpart are the Fermi-contact term (interaction of electronic density on a finite size nucleus with the nucleus), and nuclear quadrupole coupling (interaction of a nuclear quadrupole with the gradient of an electric field due to the electrons). Finally, a parity-violating term predicted by the Standard Model must be mentioned. Although it is an extremely small interaction, it has attracted a fair amount of attention in the scientific literature because it gives different energies for the enantiomers in chiral molecules. The remaining part of this article will ignore spin terms and consider the solution of the eigenvalue (time-independent Schrödinger) equation of the Coulomb Hamiltonian. 
The Schrödinger equation of the Coulomb Hamiltonian[edit]
The Coulomb Hamiltonian has a continuous spectrum due to the center of mass (COM) motion of the molecule in homogeneous space. In classical mechanics it is easy to separate off the COM motion of a system of point masses. Classically the motion of the COM is uncoupled from the other motions. The COM moves uniformly (i.e., with constant velocity) through space as if it were a point particle with mass equal to the sum Mtot of the masses of all the particles. In quantum mechanics a free particle has as state function a plane wave function, which is a non-square-integrable function of well-defined momentum. The kinetic energy of this particle can take any positive value. The position of the COM is uniformly probable everywhere, in agreement with the Heisenberg uncertainty principle. By introducing the coordinate vector X of the center of mass as three of the degrees of freedom of the system and eliminating the coordinate vector of one (arbitrary) particle, so that the number of degrees of freedom stays the same, one obtains by a linear transformation a new set of coordinates ti. These coordinates are linear combinations of the old coordinates of all particles (nuclei and electrons). By applying the chain rule one can show that
H = -\frac{\hbar^2}{2M_\textrm{tot}} \nabla^2_{\mathbf{X}} + H' \quad\text{with }\quad H'= -\frac{\hbar^2}{2} \sum_{i=1}^{N_\textrm{tot} -1 } \frac{1}{m_i} \nabla^2_{i} +\frac{\hbar^2}{2 M_\textrm{tot}}\sum_{i,j=1}^{N_\textrm{tot} -1 } \nabla_{i} \cdot \nabla_{j} +V(\mathbf{t}).
The first term of H is the kinetic energy of the COM motion, which can be treated separately since H' does not depend on X. As just stated, its eigenstates are plane waves. The potential V(t) consists of the Coulomb terms expressed in the new coordinates. The first term of H' has the usual appearance of a kinetic energy operator. The second term is known as the mass polarization term. 
The translationally invariant Hamiltonian H' can be shown to be self-adjoint and to be bounded from below. That is, its lowest eigenvalue is real and finite. Although H' is necessarily invariant under permutations of identical particles (since H and the COM kinetic energy are invariant), its invariance is not manifest. Not many actual molecular applications of H' exist; see, however, the seminal work[1] on the hydrogen molecule for an early application. In the great majority of computations of molecular wavefunctions the electronic problem is solved with the clamped nucleus Hamiltonian arising in the first step of the Born–Oppenheimer approximation. See Ref.[2] for a thorough discussion of the mathematical properties of the Coulomb Hamiltonian. That paper also discusses whether one can arrive a priori at the concept of a molecule (as a stable system of electrons and nuclei with a well-defined geometry) from the properties of the Coulomb Hamiltonian alone.
Clamped nucleus Hamiltonian[edit]
The clamped nucleus Hamiltonian describes the energy of the electrons in the electrostatic field of the nuclei, where the nuclei are assumed to be stationary with respect to an inertial frame. The form of the electronic Hamiltonian is
\hat{H}_\mathrm{el} = \hat{T}_e + \hat{U}_{en}+ \hat{U}_{ee}+ \hat{U}_{nn}.
The coordinates of electrons and nuclei are expressed with respect to a frame that moves with the nuclei, so that the nuclei are at rest with respect to this frame. The frame stays parallel to a space-fixed frame. It is an inertial frame because the nuclei are assumed not to be accelerated by external forces or torques. The origin of the frame is arbitrary; it is usually positioned on a central nucleus or in the nuclear center of mass. Sometimes it is stated that the nuclei are "at rest in a space-fixed frame". This statement implies that the nuclei are viewed as classical particles, because a quantum mechanical particle cannot be at rest. 
(It would mean that it had simultaneously zero momentum and well-defined position, which contradicts Heisenberg's uncertainty principle.) Since the nuclear positions are constants, the electronic kinetic energy operator is invariant under translation over any nuclear vector. The Coulomb potential, depending on difference vectors, is invariant as well. In the description of atomic orbitals and the computation of integrals over atomic orbitals this invariance is used by equipping all atoms in the molecule with their own localized frames parallel to the space-fixed frame. As explained in the article on the Born–Oppenheimer approximation, a sufficient number of solutions of the Schrödinger equation of H_\textrm{el} leads to a potential energy surface (PES) V(\mathbf{R}_1, \mathbf{R}_2, \ldots, \mathbf{R}_N). It is assumed that the functional dependence of V on its coordinates is such that
V(\mathbf{R}_1, \mathbf{R}_2, \ldots, \mathbf{R}_N)=V(\mathbf{R}'_1, \mathbf{R}'_2, \ldots, \mathbf{R}'_N)
for \mathbf{R}'_i =\mathbf{R}_i + \mathbf{t} (translation) and \mathbf{R}'_i =\mathbf{R}_i + \frac{\Delta\phi}{|\mathbf{s}|} \; ( \mathbf{s}\times \mathbf{R}_i) (infinitesimal rotation), where t and s are arbitrary vectors and Δφ is an infinitesimal angle, Δφ ≫ (Δφ)². This invariance condition on the PES is automatically fulfilled when the PES is expressed in terms of differences of, and angles between, the Ri, which is usually the case.
Harmonic nuclear motion Hamiltonian[edit]
In the remaining part of this article we assume that the molecule is semi-rigid. In the second step of the BO approximation the nuclear kinetic energy Tn is reintroduced and the Schrödinger equation with Hamiltonian
\hat{H}_\mathrm{nuc} = -\frac{\hbar^2}{2}\sum_{i=1}^N \sum_{\alpha=1}^3 \frac{1}{M_i} \frac{\partial^2}{\partial R_{i\alpha}^2} +V(\mathbf{R}_1,\ldots,\mathbf{R}_N)
is considered. 
One would like to recognize in its solution: the motion of the nuclear center of mass (3 degrees of freedom), the overall rotation of the molecule (3 degrees of freedom), and the nuclear vibrations. In general, this is not possible with the given nuclear kinetic energy, because it does not separate explicitly the 6 external degrees of freedom (overall translation and rotation) from the 3N − 6 internal degrees of freedom. In fact, the kinetic energy operator here is defined with respect to a space-fixed (SF) frame. If we were to move the origin of the SF frame to the nuclear center of mass, then, by application of the chain rule, nuclear mass polarization terms would appear. It is customary to ignore these terms altogether and we will follow this custom. In order to achieve a separation we must distinguish internal and external coordinates, to which end Eckart introduced conditions to be satisfied by the coordinates. We will show how these conditions arise in a natural way from a harmonic analysis in mass-weighted Cartesian coordinates. In order to simplify the expression for the kinetic energy we introduce mass-weighted displacement coordinates
\boldsymbol{\rho}_i \equiv \sqrt{M_i} (\mathbf{R}_i-\mathbf{R}_i^0).
Since
\frac{\partial}{\partial \rho_{i \alpha}} = \frac{1}{\sqrt{M_i}} \frac{\partial}{\partial R_{i \alpha}} ,
the kinetic energy operator becomes
T = -\frac{\hbar^2}{2} \sum_{i=1}^N \sum_{\alpha=1}^3 \frac{\partial^2}{\partial \rho_{i\alpha}^2}. 
If we make a Taylor expansion of V around the equilibrium geometry, V = V_0 + \sum_{i=1}^N \sum_{\alpha=1}^3 \Big(\frac{\partial V}{\partial \rho_{i\alpha}}\Big)_0\; \rho_{i\alpha} + \frac{1}{2} \sum_{i,j=1}^N \sum_{\alpha,\beta=1}^3 \Big( \frac{\partial^2 V}{\partial \rho_{i\alpha}\partial\rho_{j\beta}}\Big)_0 \;\rho_{i\alpha}\rho_{j\beta} + \cdots, and truncate after three terms (the so-called harmonic approximation), we can describe V with only the third term. The term V0 can be absorbed in the energy (gives a new zero of energy). The second term is vanishing because of the equilibrium condition. The remaining term contains the Hessian matrix F of V, which is symmetric and may be diagonalized with an orthogonal 3N × 3N matrix with constant elements: \mathbf{Q} \mathbf{F} \mathbf{Q}^\mathrm{T} = \boldsymbol{\Phi} \quad \mathrm{with}\quad \boldsymbol{\Phi} = \operatorname{diag}(f_1, \dots, f_{3N-6}, 0,\ldots,0). It can be shown from the invariance of V under rotation and translation that six of the eigenvectors of F (last six rows of Q) have eigenvalue zero (are zero-frequency modes). They span the external space. The first 3N − 6 rows of Q are—for molecules in their ground state—eigenvectors with non-zero eigenvalue; they are the internal coordinates and form an orthonormal basis for a (3N - 6)-dimensional subspace of the nuclear configuration space R3N, the internal space. The zero-frequency eigenvectors are orthogonal to the eigenvectors of non-zero frequency. It can be shown that these orthogonalities are in fact the Eckart conditions. The kinetic energy expressed in the internal coordinates is the internal (vibrational) kinetic energy. 
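The diagonalization just described can be sketched on a toy system. For a diatomic restricted to one dimension there is a single zero-frequency mode (translation only; rotation is absent in 1D) and one vibrational mode with eigenvalue k/μ, where μ is the reduced mass (my own example, not from the article; all numbers are arbitrary):

```python
import numpy as np

# V = (1/2) k (x2 - x1)^2 for a 1-D diatomic with made-up masses and force constant.
k = 1.0
m1, m2 = 1.0, 2.0

# Cartesian Hessian of V, then mass-weighting: F_w = M^{-1/2} F M^{-1/2}.
F = k * np.array([[ 1.0, -1.0],
                  [-1.0,  1.0]])
Minv_sqrt = np.diag([1 / np.sqrt(m1), 1 / np.sqrt(m2)])
Fw = Minv_sqrt @ F @ Minv_sqrt

eigvals, Q = np.linalg.eigh(Fw)   # rows/columns of Q are the normal-mode vectors
print(eigvals)                    # one zero mode (translation) and one at k/mu

mu = m1 * m2 / (m1 + m2)
print(np.sqrt(eigvals[-1]), np.sqrt(k / mu))   # harmonic frequency, both ways
```

For a real molecule in 3D the same diagonalization yields six zero-frequency modes (three translations, three rotations) instead of one, matching the text above.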
With the introduction of normal coordinates
q_t \equiv \sum_{i=1}^N\sum_{\alpha=1}^3 \; Q_{t, i\alpha} \rho_{i\alpha},
the vibrational (internal) part of the Hamiltonian for the nuclear motion becomes in the harmonic approximation
\hat{H}_\mathrm{nuc} \approx \frac{1}{2} \sum_{t=1}^{3N-6} \left[-\hbar^2 \frac{\partial^2}{\partial q_{t}^2} + f_t q_t^2 \right] .
The corresponding Schrödinger equation is easily solved; it factorizes into 3N − 6 equations for one-dimensional harmonic oscillators. The main effort in this approximate solution of the nuclear motion Schrödinger equation is the computation of the Hessian F of V and its diagonalization. This approximation to the nuclear motion problem, described in 3N mass-weighted Cartesian coordinates, became standard in quantum chemistry since the days (1980s–1990s) that algorithms for accurate computations of the Hessian F became available. Apart from the harmonic approximation, it has the further deficiency that the external (rotational and translational) motions of the molecule are not accounted for. They are accounted for in a rovibrational Hamiltonian that is sometimes called Watson's Hamiltonian.
Watson's nuclear motion Hamiltonian[edit]
In order to obtain a Hamiltonian for external (translation and rotation) motions coupled to the internal (vibrational) motions, it is common to return at this point to classical mechanics and to formulate the classical kinetic energy corresponding to these motions of the nuclei. Classically it is easy to separate the translational—center of mass—motion from the other motions. However, the separation of the rotational from the vibrational motion is more difficult and is not completely possible. This ro-vibrational separation was first achieved by Eckart[3] in 1935 by imposing what are now known as the Eckart conditions. 
Since the problem is described in a frame (an "Eckart" frame) that rotates with the molecule, and hence is a non-inertial frame, energies associated with the fictitious centrifugal and Coriolis forces appear in the kinetic energy. In general, the classical kinetic energy T defines the metric tensor g = (gij) associated with the curvilinear coordinates s = (si) through
2T = \sum_{ij} g_{ij} \dot{s}_i \dot{s}_j.
The quantization step is the transformation of this classical kinetic energy into a quantum mechanical operator. It is common to follow Podolsky[4] by writing down the Laplace–Beltrami operator in the same (generalized, curvilinear) coordinates s as used for the classical form. The equation for this operator requires the inverse of the metric tensor g and its determinant. Multiplication of the Laplace–Beltrami operator by -\hbar^2 gives the required quantum mechanical kinetic energy operator. When we apply this recipe to Cartesian coordinates, which have unit metric, the same kinetic energy is obtained as by application of the quantization rules. The nuclear motion Hamiltonian was obtained by Wilson and Howard in 1936,[5] who followed this procedure, and was further refined by Darling and Dennison in 1940.[6] It remained the standard until 1968, when Watson[7] was able to simplify it drastically by commuting through the derivatives the determinant of the metric tensor. We will give the ro-vibrational Hamiltonian obtained by Watson, which often is referred to as the Watson Hamiltonian. 
Before we do this we must mention that a derivation of this Hamiltonian is also possible by starting from the Laplace operator in Cartesian form, applying coordinate transformations, and using the chain rule.[8] The Watson Hamiltonian, describing all motions of the N nuclei, is \hat{H} = -\frac{\hbar^2}{2M_\mathrm{tot}} \sum_{\alpha=1}^3 \frac{\partial^2}{\partial X_\alpha^2} +\frac{1}{2} \sum_{\alpha,\beta=1}^3 \mu_{\alpha\beta} (\mathcal{P}_\alpha - \Pi_\alpha)(\mathcal{P}_\beta - \Pi_\beta) +U -\frac{\hbar^2}{2} \sum_{s=1}^{3N-6} \frac{\partial^2}{\partial q_s^2} + V . The first term is the center of mass term \mathbf{X} \equiv \frac{1}{M_\mathrm{tot}} \sum_{i=1}^N M_i \mathbf{R}_i \quad\mathrm{with}\quad M_\mathrm{tot} \equiv \sum_{i=1}^N M_i. The second term is the rotational term, akin to the kinetic energy of the rigid rotor. Here \mathcal{P}_\alpha is the α component of the body-fixed rigid rotor angular momentum operator; see the rigid rotor article for its expression in terms of Euler angles. The operator \Pi_\alpha\, is a component of an operator known as the vibrational angular momentum operator (although it does not satisfy angular momentum commutation relations), \Pi_\alpha = -i\hbar \sum_{s,t=1}^{3N-6} \zeta^{\alpha}_{st} \; q_s \frac{\partial}{\partial q_t} with the Coriolis coupling constant \zeta^{\alpha}_{st} = \sum_{i=1}^N \sum_{\beta,\gamma=1}^3 \epsilon_{\alpha\beta\gamma} Q_{s, i\beta}\,Q_{t,i\gamma} \;\; \mathrm{for}\quad\alpha=1,2,3. Here εαβγ is the Levi-Civita symbol. The terms quadratic in the \mathcal{P}_\alpha are centrifugal terms; those bilinear in \mathcal{P}_\alpha and \Pi_\beta\, are Coriolis terms. The quantities Q_{s,iγ} are the components of the normal coordinates introduced above. Alternatively, normal coordinates may be obtained by application of Wilson's GF method. The 3 × 3 symmetric matrix \boldsymbol{\mu} is called the effective reciprocal inertia tensor.
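The Coriolis coupling constants defined above are a straightforward contraction with the Levi-Civita symbol, and their antisymmetry in the mode indices, ζ^α_{st} = −ζ^α_{ts}, follows directly from ε_{αβγ}. A short sketch (function names and the toy mode matrix are ours, not from the text) computes them and checks that property:

```python
import numpy as np

def levi_civita():
    """The rank-3 Levi-Civita symbol eps[alpha, beta, gamma]."""
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return eps

def coriolis_zeta(Q):
    """zeta[alpha, s, t] = sum_{i, beta, gamma}
       eps[alpha, beta, gamma] * Q[s, i, beta] * Q[t, i, gamma],
       for Q of shape (n_modes, n_atoms, 3)."""
    return np.einsum('abc,sib,tic->ast', levi_civita(), Q, Q)

# Toy normal-mode vectors: 4 modes on 3 atoms (arbitrary numbers).
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 3, 3))
zeta = coriolis_zeta(Q)

# Antisymmetry in the mode indices s, t:
print(np.allclose(zeta, -zeta.transpose(0, 2, 1)))  # True
```

In a real calculation Q would come from diagonalizing the mass-weighted Hessian, but the contraction itself is independent of where Q comes from.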
If all q_s were zero (rigid molecule), the Eckart frame would coincide with a principal axes frame (see rigid rotor) and \boldsymbol{\mu} would be diagonal, with the equilibrium reciprocal moments of inertia on the diagonal. If all q_s were zero, only the kinetic energies of translation and rigid rotation would survive. The potential-like term U is the Watson term: U = -\frac{1}{8} \sum_{\alpha=1}^3 \mu_{\alpha\alpha}, proportional to the trace of the effective reciprocal inertia tensor. The fourth term in the Watson Hamiltonian is the kinetic energy associated with the vibrations of the atoms (nuclei) expressed in normal coordinates q_s, which, as stated above, are given in terms of nuclear displacements ρ by q_s = \sum_{i=1}^N \sum_{\alpha=1}^3 Q_{s, i\alpha} \rho_{i\alpha}\quad\mathrm{for}\quad s=1,\ldots, 3N-6. Finally, V is the unexpanded potential energy, by definition depending on internal coordinates only. In the harmonic approximation it takes the form V \approx \frac{1}{2} \sum_{s=1}^{3N-6} f_s q_s^2. References 1. ^ W. Kołos and L. Wolniewicz (1963). "Nonadiabatic Theory for Diatomic Molecules and Its Application to the Hydrogen Molecule". Rev. Mod. Phys. 35 (3): 473–483. Bibcode:1963RvMP...35..473K. doi:10.1103/RevModPhys.35.473. 2. ^ R. G. Woolley and B. T. Sutcliffe (2003). "P.-O. Löwdin and the Quantum Mechanics of Molecules". In E. J. Brändas and E. S. Kryachko. Fundamental World of Quantum Chemistry 1. Kluwer Academic Publishers. pp. 21–65. 3. ^ Eckart, C. (1935). "Some studies concerning rotating axes and polyatomic molecules". Physical Review 47 (7): 552–558. Bibcode:1935PhRv...47..552E. doi:10.1103/PhysRev.47.552. 4. ^ Podolsky, B. (1928). "Quantum-mechanically correct form of Hamiltonian function for conservative system". Phys. Rev. 32 (5): 812. Bibcode:1928PhRv...32..812P. doi:10.1103/PhysRev.32.812. 5. ^ E. Bright Wilson, Jr. and J. B. Howard (1936). "The Vibration–Rotation Energy Levels of Polyatomic Molecules I.
Mathematical Theory of Semirigid Asymmetrical Top Molecules". J. Chem. Phys. 4 (4): 260–268. doi:10.1063/1.1749833. 6. ^ B. T. Darling and D. M. Dennison (1940). "The water vapor molecule". Phys. Rev. 57 (2): 128–139. Bibcode:1940PhRv...57..128D. doi:10.1103/PhysRev.57.128. 7. ^ Watson, James K. G. (1968). "Simplification of the molecular vibration-rotation hamiltonian". Molecular Physics 15 (5): 479. Bibcode:1968MolPh..15..479W. doi:10.1080/00268976800101381. 8. ^ Biedenharn, L. C.; Louck, J. D. (1981). Angular Momentum in Quantum Physics. Encyclopedia of Mathematics 8. Reading: Addison–Wesley.
Interpretations of quantum mechanics History of interpretations The definition of quantum theorists' terms, such as wavefunctions and matrix mechanics, progressed through many stages. For instance, Erwin Schrödinger originally viewed the electron's wavefunction as its charge density smeared across the field, whereas Max Born reinterpreted it as the electron's probability density distributed across the field. Although the Copenhagen interpretation was originally the most popular, quantum decoherence has gained popularity, and thus the many-worlds interpretation has been gaining acceptance.[1][2] Moreover, the strictly formalist position, shunning interpretation, has been challenged by proposals for falsifiable experiments that might one day distinguish among interpretations, such as by measuring an AI consciousness[3] or via quantum computing.[4] Nature of interpretation More or less, all interpretations of quantum mechanics share two qualities: 1. They interpret a formalism—a set of equations and principles to generate predictions via input of initial conditions 2. They interpret a phenomenology—a set of observations, including those obtained by empirical research and those obtained informally, such as humans' experience of an unequivocal world Two qualities vary among interpretations: 1. Ontology—claims about what things, such as categories and entities, exist in the world 2. Epistemology—claims about the possibility, scope, and means toward relevant knowledge of the world In philosophy of science, the distinction of knowledge versus reality is termed epistemic versus ontic. A general law is a regularity of outcomes (epistemic), whereas a causal mechanism may regulate the outcomes (ontic). A phenomenon can receive an interpretation that is either ontic or epistemic.
For instance, indeterminism may be attributed to limitations of human observation and perception (epistemic), or may be explained as real indeterminism encoded in the universe (ontic). Confusing the epistemic with the ontic, as if one were to presume that a general law actually "governs" outcomes and that the statement of a regularity has the role of a causal mechanism, is a category mistake. In a broad sense, scientific theory can be viewed as offering scientific realism—approximately true description or explanation of the natural world—or might be perceived antirealistically. A realist stance seeks the epistemic and the ontic, whereas an antirealist stance seeks the epistemic but not the ontic. In the first half of the 20th century, antirealism was mainly logical positivism, which sought to exclude unobservable aspects of reality from scientific theory. Since the 1950s, antirealism has been more modest, usually instrumentalism, permitting talk of unobservable aspects but ultimately discarding the very question of realism and posing scientific theory as a tool to help humans make predictions, not to attain metaphysical understanding of the world. The instrumentalist view is carried by the famous quote of David Mermin, "Shut up and calculate", often misattributed to Richard Feynman.[5] Other approaches to resolve conceptual problems introduce new mathematical formalism, and so propose alternative theories with their interpretations. An example is Bohmian mechanics, whose empirical equivalence with the three standard formalisms—Schrödinger's wave mechanics, Heisenberg's matrix mechanics, and Feynman's path integral formalism, themselves all empirically equivalent—is doubtful. Challenges to interpretation Difficulties reflect a number of points about quantum mechanics: 1. Abstract, mathematical nature of quantum field theories 2. Existence of apparently indeterministic and yet irreversible processes 3. Role of the observer in determining outcomes 4.
Classically unexpected correlations between remote objects 5. Complementarity of proffered descriptions 6. Rapidly rising intricacy, far exceeding humans' present calculational capacity, as a system's size increases The mathematical structure of quantum mechanics is based on rather abstract mathematics, like Hilbert space. In classical field theory, a physical property at a given location in the field is readily derived. In Heisenberg's formalism, on the other hand, to derive physical information about a location in the field, one must apply a quantum operation to a quantum state, an elaborate mathematical process.[6] Schrödinger's formalism describes a waveform governing probability of outcomes across a field. Yet how do we find a particle in a specific location when its wavefunction, a mere probability distribution of existence, spans a vast region of space? The act of measurement can interact with the system state in peculiar ways, as found in double-slit experiments. The Copenhagen interpretation holds that the myriad probabilities across a quantum field are unreal, yet that the act of observation/measurement collapses the wavefunction and sets a single possibility to become real. Yet quantum decoherence grants that all the possibilities can be real, and that the act of observation/measurement sets up new subsystems.[7] Quantum entanglement, as illustrated in the EPR paradox, seemingly violates principles of local causality.[8] Complementarity holds that no set of classical physical concepts can simultaneously refer to all properties of a quantum system. For instance, wave description A and particulate description B can each describe quantum system S, but not simultaneously.
Still, complementarity does not usually imply that classical logic is at fault (although Hilary Putnam took such a view in "Is logic empirical?"); rather, the composition of physical properties of S does not obey the rules of classical propositional logic when using propositional connectives (see "Quantum logic"). As is now well known, the "origin of complementarity lies in the non-commutativity of operators" that describe quantum objects (Omnès 1999). Since the intricacy of a quantum system is exponential, it is difficult to derive classical approximations. Instrumentalist interpretation Any modern scientific theory requires at the very least an instrumentalist description that relates the mathematical formalism to experimental practice and prediction. In the case of quantum mechanics, the most common instrumentalist description is an assertion of statistical regularity between state preparation processes and measurement processes. That is, if a measurement of a real-valued quantity is performed many times, each time starting with the same initial conditions, the outcome is a well-defined probability distribution over the real numbers; moreover, quantum mechanics provides a computational instrument to determine statistical properties of this distribution, such as its expectation value. Calculations for measurements performed on a system S postulate a Hilbert space H over the complex numbers. When the system S is prepared in a pure state, it is associated with a vector in H. Measurable quantities are associated with Hermitian operators acting on H: these are referred to as observables. Repeated measurement of an observable A where S is prepared in state ψ yields a distribution of values. The expectation value of this distribution is given by the expression \langle \psi \vert A \vert \psi \rangle.
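The statistical recipe just described is easy to make concrete. A minimal sketch in a toy two-dimensional Hilbert space (the state and observable below are made up for illustration) computes the expectation value ⟨ψ|A|ψ⟩ and the probability of finding the system in a given state |φ⟩ via the rank-1 projector |φ⟩⟨φ|:

```python
import numpy as np

# Prepared pure state |psi> and a Hermitian observable A (toy choices).
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# Expectation value <psi|A|psi>; np.vdot conjugates its first argument.
expectation = np.vdot(psi, A @ psi).real
print(expectation)                         # 0.0 for this psi

# Probability of finding the system in |phi>:
# P = <psi|Pi|psi> = |<phi|psi>|^2 with Pi = |phi><phi|.
phi = np.array([1.0, 0.0])
P = abs(np.vdot(phi, psi)) ** 2
print(P)                                   # 0.5
```

Taking the real part of ⟨ψ|A|ψ⟩ is harmless bookkeeping: for a Hermitian A the expectation value is real up to floating-point noise.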
This mathematical machinery gives a simple, direct way to compute a statistical property of the outcome of an experiment, once it is understood how to associate the initial state with a Hilbert space vector, and the measured quantity with an observable (that is, a specific Hermitian operator). As an example of such a computation, the probability of finding the system in a given state \vert\phi\rangle is given by computing the expectation value of a (rank-1) projection operator \Pi = \vert\phi\rangle \langle \phi \vert. The probability is then the non-negative real number given by P = \langle \psi \vert \Pi \vert \psi \rangle = \vert \langle \phi \vert \psi \rangle \vert ^2. By abuse of language, a bare instrumentalist description could be referred to as an interpretation, although this usage is somewhat misleading since instrumentalism explicitly avoids any explanatory role; that is, it does not attempt to answer the question why. Summary of common interpretations of quantum mechanics Classification adopted by Einstein An interpretation (i.e. a semantic explanation of the formal mathematics of quantum mechanics) can be characterized by its treatment of certain matters addressed by Einstein, such as: • The mathematical formalism M consists of the Hilbert space machinery of ket-vectors, self-adjoint operators acting on the space of ket-vectors, unitary time dependence of the ket-vectors, and measurement operations. In this context a measurement operation is a transformation which turns a ket-vector into a probability distribution (for a formalization of this concept see quantum operations). The crucial aspect of an interpretation is whether the elements of the interpreting structure I are regarded as physically real. Hence the bare instrumentalist view of quantum mechanics outlined in the previous section is not an interpretation at all, for it makes no claims about elements of physical reality.
The current usage of realism and completeness originated in the 1935 paper in which Einstein and others proposed the EPR paradox.[9] In that paper the authors proposed the concepts element of reality and the completeness of a physical theory. They characterised element of reality as a quantity whose value can be predicted with certainty before measuring or otherwise disturbing it, and defined a complete physical theory as one in which every element of physical reality is accounted for by the theory. In a semantic view of interpretation, an interpretation is complete if every element of the interpreting structure is present in the mathematics. Realism is also a property of each of the elements of the mathematics; an element is real if it corresponds to something in the interpreting structure. For example, in some interpretations of quantum mechanics (such as the many-worlds interpretation) the ket vector associated to the system state is said to correspond to an element of physical reality, while in other interpretations it is not. Local realism has two aspects: • The value returned by a measurement corresponds to the value of some function in the state space. In other words, that value is an element of reality; • The effects of measurement have a propagation speed not exceeding some universal limit (e.g. the speed of light). In order for this to make sense, measurement operations in the interpreting structure must be localized. The Copenhagen interpretation The Copenhagen interpretation is the "standard" interpretation of quantum mechanics formulated by Niels Bohr and Werner Heisenberg while collaborating in Copenhagen around 1927. Bohr and Heisenberg extended the probabilistic interpretation of the wavefunction proposed originally by Max Born. The Copenhagen interpretation rejects questions like "where was the particle before I measured its position?" as meaningless.
The measurement process randomly picks out exactly one of the many possibilities allowed for by the state's wave function in a manner consistent with the well-defined probabilities that are assigned to each possible state. According to the interpretation, the interaction of an observer or apparatus that is external to the quantum system is the cause of wave function collapse, thus according to Paul Davies, "reality is in the observations, not in the electron".[10] What collapses in this interpretation is the knowledge of the observer and not an "objective" wavefunction. Many worlds The many-worlds interpretation is an interpretation of quantum mechanics in which a universal wavefunction obeys the same deterministic, reversible laws at all times; in particular there is no (indeterministic and irreversible) wavefunction collapse associated with measurement. The phenomena associated with measurement are claimed to be explained by decoherence, which occurs when states interact with the environment producing entanglement, repeatedly splitting the universe into mutually unobservable alternate histories—distinct universes within a greater multiverse. In this interpretation the wavefunction has objective reality. Consistent histories The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle).
Ensemble interpretation, or statistical interpretation The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. That is, it claims to make the fewest assumptions associated with the standard mathematics. It takes the statistical interpretation of Born to the fullest extent. The interpretation states that the wave function does not apply to an individual system – for example, a single particle – but is an abstract statistical quantity that only applies to an ensemble (a vast multitude) of similarly prepared systems or particles. Probably the most notable supporter of such an interpretation was Einstein: —Einstein in Albert Einstein: Philosopher-Scientist, ed. P.A. Schilpp (Harper & Row, New York) The most prominent current advocate of the ensemble interpretation is Leslie E. Ballentine, professor at Simon Fraser University and author of the graduate-level textbook Quantum Mechanics, A Modern Development. An experiment illustrating the ensemble interpretation is provided in Akira Tonomura's Video clip 1.[11] It is evident from this double-slit experiment with an ensemble of individual electrons that, since the quantum mechanical wave function (absolutely squared) describes the completed interference pattern, it must describe an ensemble. The ensemble interpretation is not popular, and is regarded by some physicists as having been decisively refuted. John Gribbin writes: "There are many difficulties with the idea, but the killer blow was struck when individual quantum entities such as photons were observed behaving in experiments in line with the quantum wave function description. The Ensemble interpretation is now only of historical interest."[12] de Broglie–Bohm theory The de Broglie–Bohm theory of quantum mechanics is a theory by Louis de Broglie, extended later by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction.
The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single space-time, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint. The theory is considered to be a hidden variable theory, and by embracing non-locality it is consistent with the observed violations of Bell's inequality. The measurement problem is resolved, since the particles have definite positions at all times.[13] Collapse is explained as phenomenological.[14] Relational quantum mechanics The essential idea behind relational quantum mechanics, following the precedent of special relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. Consequently, if quantum mechanics is to be a complete theory, relational quantum mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by relational quantum mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above.
Thus the physical content of the theory has to do not with objects themselves, but with the relations between them.[15][16] An independent relational approach to quantum mechanics was developed in analogy with David Bohm's elucidation of special relativity,[17] in which a detection event is regarded as establishing a relationship between the quantized field and the detector. The inherent ambiguity associated with applying Heisenberg's uncertainty principle is subsequently avoided.[18] Elementary cycles The idea at the base of this interpretation is the empirical fact that, as noted by Louis de Broglie with the wave-particle duality, elementary particles have recurrences in time and space determined by their energy and momentum through the Planck constant. This implies that every system in nature can be described in terms of elementary space-time cycles. These recurrences are imposed as semiclassical quantization conditions, similar to the quantization of a particle in a box. The resulting cyclic mechanics is formally equivalent to both the canonical formulation and the Feynman formulation of quantum mechanics;[19] for a review, see [20]. It is an evolution of the Bohr-Sommerfeld quantization or the zitterbewegung, and suggests that quantum mechanics emerges as a statistical description of extremely fast periodic dynamics, as proposed by 't Hooft.[21] The idea has found applications in modern physics, such as a geometrical description of gauge invariance[22] and an interpretation of the Maldacena duality.[23] Transactional interpretation The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory.[24] It describes a quantum interaction in terms of a standing wave formed by the sum of a retarded (forward-in-time) and an advanced (backward-in-time) wave.
The author argues that it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes. Stochastic mechanics An entirely classical derivation and interpretation of Schrödinger's wave equation by analogy with Brownian motion was suggested by Princeton University professor Edward Nelson in 1966.[25] Similar considerations had previously been published, for example by R. Fürth (1933), I. Fényes (1952), and Walter Weizel (1953), and are referenced in Nelson's paper. More recent work on the stochastic interpretation has been done by M. Pavon.[26] An alternative stochastic interpretation was developed by Roumen Tsekov.[27] Objective collapse theories Objective collapse theories differ from the Copenhagen interpretation in regarding both the wavefunction and the process of collapse as ontologically objective. In objective theories, collapse occurs randomly ("spontaneous localization"), or when some physical threshold is reached, with observers having no special role. Thus, they are realistic, indeterministic, no-hidden-variables theories. The mechanism of collapse is not specified by standard quantum mechanics, which needs to be extended if this approach is correct, meaning that objective collapse is more of a theory than an interpretation. Examples include the Ghirardi-Rimini-Weber theory[28] and the Penrose interpretation.[29] von Neumann/Wigner interpretation: consciousness causes the collapse In his treatise The Mathematical Foundations of Quantum Mechanics, John von Neumann deeply analyzed the so-called measurement problem. He concluded that the entire physical universe could be made subject to the Schrödinger equation (the universal wave function).
He also described how measurement could cause a collapse of the wave function.[30] This point of view was prominently expanded on by Eugene Wigner, who argued that human experimenter consciousness (or maybe even dog consciousness) was critical for the collapse, but he later abandoned this interpretation.[31][32] Variations of the von Neumann interpretation include: Subjective reduction research: This principle, that consciousness causes the collapse, is the point of intersection between quantum mechanics and the mind/body problem; researchers are working to detect conscious events correlated with physical events that, according to quantum theory, should involve a wave function collapse, but, thus far, results are inconclusive.[33][34] Participatory anthropic principle (PAP): John Archibald Wheeler's participatory anthropic principle says that consciousness plays some role in bringing the universe into existence.[35] Other physicists have elaborated their own variations of the von Neumann interpretation, including: • Henry P. Stapp (Mindful Universe: Quantum Mechanics and the Participating Observer) • Bruce Rosenblum and Fred Kuttner (Quantum Enigma: Physics Encounters Consciousness) • Amit Goswami (The Self-Aware Universe) Many minds Quantum logic Quantum information theories Quantum informational approaches[36] have attracted growing support.[37][38] They subdivide into two kinds:[39] • Information ontologies, such as J. A. Wheeler's "it from bit". These approaches have been described as a revival of immaterialism[40] • Interpretations where quantum mechanics is said to describe an observer's knowledge of the world, rather than the world itself. This approach has some similarity with Bohr's thinking.[41] Collapse (also known as reduction) is often interpreted as an observer acquiring information from a measurement, rather than as an objective event. These approaches have been appraised as similar to instrumentalism.
The state is not an objective property of an individual system but is that information, obtained from a knowledge of how a system was prepared, which can be used for making predictions about future measurements. ...A quantum mechanical state being a summary of the observer’s information about an individual physical system changes both by dynamical laws, and whenever the observer acquires new information about the system through the process of measurement. The existence of two laws for the evolution of the state vector...becomes problematical only if it is believed that the state vector is an objective property of the system...The “reduction of the wavepacket” does take place in the consciousness of the observer, not because of any unique physical process which takes place there, but only because the state is a construct of the observer and not an objective property of the physical system.[42] Modal interpretations of quantum theory Modal interpretations of quantum mechanics were first conceived of in 1972 by B. van Fraassen, in his paper “A formal approach to the philosophy of science.” However, this term is now used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions:[43] • The Copenhagen variant • Kochen-Dieks-Healey interpretations Time-symmetric theories Several theories have been proposed which modify the equations of quantum mechanics to be symmetric with respect to time reversal.[44][45][46][47][48][49] This creates retrocausality: events in the future can affect ones in the past, exactly as events in the past can affect ones in the future. In these theories, a single measurement cannot fully determine the state of a system (making them a type of hidden variables theory), but given two measurements performed at different times, it is possible to calculate the exact state of the system at all intermediate times.
The collapse of the wavefunction is therefore not a physical change to the system, just a change in our knowledge of it due to the second measurement. Similarly, they explain entanglement as not being a true physical state but just an illusion created by ignoring retrocausality. The point where two particles appear to "become entangled" is simply a point where each particle is being influenced by events that occur to the other particle in the future. Branching space-time theories BST theories resemble the many-worlds interpretation; however, "the main difference is that the BST interpretation takes the branching of history to be a feature of the topology of the set of events with their causal relationships... rather than a consequence of the separate evolution of different components of a state vector."[50] In MWI, it is the wave function that branches, whereas in BST, the space-time topology itself branches. BST has applications to Bell's theorem, quantum computation and quantum gravity. It also has some resemblance to hidden variable theories and the ensemble interpretation: particles in BST have multiple well-defined trajectories at the microscopic level. These can only be treated stochastically at a coarse-grained level, in line with the ensemble interpretation.[50] Other interpretations As well as the mainstream interpretations discussed above, a number of other interpretations have been proposed which have not made a significant scientific impact for whatever reason. These range from proposals by mainstream physicists to the more occult ideas of quantum mysticism. Comparison of interpretations The most common interpretations are summarized in the table below. The values shown in the cells of the table are not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, are themselves at the center of the controversy surrounding the given interpretation.
No experimental evidence exists that distinguishes among these interpretations. To that extent, the physical theory stands, and is consistent with itself and with reality; difficulties arise only when one attempts to "interpret" the theory. Nevertheless, designing experiments which would test the various interpretations is the subject of active research. Most of these interpretations have variants. For example, it is difficult to get a precise definition of the Copenhagen interpretation as it was developed and argued about by many people.
Interpretation | Author(s) | Deterministic? | Wavefunction real? | Unique history? | Hidden variables? | Collapsing wavefunctions? | Observer role? | Local? | Counterfactually definite? | Universal wavefunction?
Ensemble interpretation | Max Born, 1926 | Agnostic | No | Yes | Agnostic | No | No | No | No | No
Copenhagen interpretation | Niels Bohr, Werner Heisenberg, 1927 | No | No [1] | Yes | No | Yes [2] | Causal | No | No | No
de Broglie–Bohm theory | Louis de Broglie, 1927; David Bohm, 1952 | Yes | Yes [3] | Yes [4] | Yes | No | No | No | Yes | Yes
von Neumann interpretation | John von Neumann, 1932; John Archibald Wheeler; Eugene Wigner | No | Yes | Yes | No | Yes | Causal | No | No | Yes
Quantum logic | Garrett Birkhoff, 1936 | Agnostic | Agnostic | Yes [5] | No | No | Interpretational [6] | Agnostic | No | No
Many-worlds interpretation | Hugh Everett, 1957 | Yes | Yes | No | No | No | No | Yes | No | Yes
Popper's interpretation[51] | Karl Popper, 1957[52] | No | Yes | Yes | Yes | No | No | Yes | Yes [13] | No
Time-symmetric theories | Satosi Watanabe, 1955 | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes
Stochastic interpretation | Edward Nelson, 1966 | No | No | Yes | No | No | No | No | No | No
Many-minds interpretation | H. Dieter Zeh, 1970 | Yes | Yes | No | No | No | Interpretational [7] | Yes | No | Yes
Consistent histories | Robert B. Griffiths, 1984 | Agnostic [8] | Agnostic [8] | No | No | No | Interpretational [6] | Yes | No | No
Objective collapse theories | Ghirardi–Rimini–Weber, 1986; Penrose interpretation, 1989 | No | Yes | Yes | No | Yes | No | No | No | No
Transactional interpretation | John G. Cramer, 1986 | No | Yes | Yes | No | Yes [9] | No | No [14] | Yes | No
Relational interpretation | Carlo Rovelli, 1994 | Agnostic | No | Agnostic [10] | No | Yes [11] | Intrinsic [12] | Yes | No | No
• 1 According to Bohr, the concept of a physical state independent of the conditions of its experimental observation does not have a well-defined meaning. According to Heisenberg the wavefunction represents a probability, but not an objective reality itself in space and time.
• 2 According to the Copenhagen interpretation, the wavefunction collapses when a measurement is performed.
• 3 Both particle AND guiding wavefunction are real.
• 4 Unique particle history, but multiple wave histories.
• 5 But quantum logic is more limited in applicability than Coherent Histories.
• 6 Quantum mechanics is regarded as a way of predicting observations, or a theory of measurement.
• 7 Observers separate the universal wavefunction into orthogonal sets of experiences.
• 8 If wavefunction is real then this becomes the many-worlds interpretation. If wavefunction is less than real, but more than just information, then Zurek calls this the "existential interpretation".
• 9 In the TI the collapse of the state vector is interpreted as the completion of the transaction between emitter and absorber.
• 10 Comparing histories between systems in this interpretation has no well-defined meaning.
• 11 Any physical interaction is treated as a collapse event relative to the systems involved, not just macroscopic or conscious observers.
• 12 The state of the system is observer-dependent, i.e., the state is specific to the reference frame of the observer.
• 13 Because Popper holds both CFD and locality to be true, it is disputed whether his interpretation can really be considered an interpretation of quantum mechanics (which is what Popper claimed) or whether it must be considered a modification of quantum mechanics (which is what many physicists claim), and, in the latter case, whether this modification has been empirically refuted or not. Popper exchanged many long letters with Einstein, Bell and others about the issue. • 14 The transactional interpretation is explicitly non-local. • 15 The assumption of intrinsic periodicity is an element of non-locality consistent with relativity, as the periodicity varies in a causal way. 1. ^ Vaidman, L. (2002, March 24). Many-Worlds Interpretation of Quantum Mechanics. Retrieved March 19, 2010, from Stanford Encyclopedia of Philosophy. 2. ^ A controversial poll mentioned in The Physics of Immortality (1994) found that of 72 "leading cosmologists and other quantum field theorists", 58% including Stephen Hawking, Murray Gell-Mann, and Richard Feynman supported a many-worlds interpretation ["Who believes in many-worlds?", accessed online: 24 Jan 2011]. 3. ^ Quantum theory as a universal physical theory, by David Deutsch, International Journal of Theoretical Physics, Vol 24 #1 (1985) 4.
^ Three connections between Everett's interpretation and experiment, in Quantum Concepts of Space and Time, by David Deutsch, Oxford University Press (1986) 5. ^ For a discussion of the provenance of the phrase "shut up and calculate", see [1] 7. ^ Guido Bacciagaluppi, "The role of decoherence in quantum mechanics", The Stanford Encyclopedia of Philosophy (Winter 2012), Edward N. Zalta, ed. 8. ^ La nouvelle cuisine, by John S. Bell, last article of Speakable and Unspeakable in Quantum Mechanics, second edition. 9. ^ A. Einstein, B. Podolsky and N. Rosen, 1935, "Can quantum-mechanical description of physical reality be considered complete?" Phys. Rev. 47: 777. 10. ^ Werner Heisenberg, Physics and Philosophy (PDF). 11. ^ "An experiment illustrating the ensemble interpretation". Retrieved 2011-01-24. 12. ^ John Gribbin. Q is for Quantum. ISBN 978-0684863153. 13. ^ Why Bohm's Theory Solves the Measurement Problem by T. Maudlin, Philosophy of Science 62, pp. 479–483 (September 1995). 14. ^ Bohmian Mechanics as the Foundation of Quantum Mechanics by D. Dürr, N. Zanghì, and S. Goldstein, in Bohmian Mechanics and Quantum Theory: An Appraisal, edited by J. T. Cushing, A. Fine, and S. Goldstein, Boston Studies in the Philosophy of Science 184, 21–44 (Kluwer, 1996). arXiv:quant-ph/9511016 15. ^ "Relational Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Retrieved 2011-01-24. 16. ^ For more information, see Carlo Rovelli (1996). "Relational Quantum Mechanics". International Journal of Theoretical Physics 35 (8): 1637. arXiv:quant-ph/9609002. Bibcode:1996IJTP...35.1637R. doi:10.1007/BF02302261. 17. ^ David Bohm, The Special Theory of Relativity, Benjamin, New York, 1965 18. ^ [2]. For a full account [3], see Q. Zheng and T. Kobayashi, 1996, "Quantum Optics as a Relativistic Theory of Light," Physics Essays 9: 447. Annual Report, Department of Physics, School of Science, University of Tokyo (1992) 240. 22.
^ Donatello Dolce (2012). "Gauge Interaction as Periodicity Modulation". Annals of Physics 327 (6): 1562–1592. arXiv:1110.0315. Bibcode:2012AnPhy.327.1562D. doi:10.1016/j.aop.2012.02.007. 24. ^ "Quantum Nonlocality - Cramer". Retrieved 2011-01-24. 25. ^ Nelson, E. (1966). Derivation of the Schrödinger Equation from Newtonian Mechanics, Phys. Rev. 150, 1079–1085 26. ^ M. Pavon, "Stochastic mechanics and the Feynman integral", J. Math. Phys. 41, 6060–6078 (2000) 27. ^ Roumen Tsekov (2012). "Bohmian Mechanics versus Madelung Quantum Hydrodynamics". Ann. Univ. Sofia, Fac. Phys. SE: 112–119. arXiv:0904.0723. Bibcode:2009arXiv0904.0723T. 28. ^ "Frigg, R. GRW theory" (PDF). Retrieved 2011-01-24. 29. ^ "Review of Penrose's Shadows of the Mind". Retrieved 2011-01-24. 32. ^ Zvi Schreiber (1995). "The Nine Lives of Schrödinger's Cat". arXiv:quant-ph/9501014 [quant-ph]. 33. ^ Dick J. Bierman and Stephen Whitmarsh (2006). Consciousness and Quantum Physics: Empirical Research on the Subjective Reduction of the State Vector. In Jack A. Tuszynski (Ed.), The Emerging Physics of Consciousness, pp. 27–48. 34. ^ C. M. H. Nunn et al. (1994). Collapse of a Quantum Field may Affect Brain Function. Journal of Consciousness Studies 1(1):127–139. 35. ^ "The anthropic universe". 2006-02-18. Retrieved 2011-01-24. 36. ^ "In the beginning was the bit". New Scientist. 2001-02-17. Retrieved 2013-01-25. 39. ^ Information, Immaterialism, Instrumentalism: Old and New in Quantum Information. Christopher G. Timpson 40. ^ Timpson, op. cit.: "Let us call the thought that information might be the basic category from which all else flows informational immaterialism." 41. ^ "Physics concerns what we can say about nature". (Niels Bohr, quoted in Petersen, A. (1963). The philosophy of Niels Bohr. Bulletin of the Atomic Scientists, 19(7):8–14.) 42. ^ Hartle, J. B. (1968).
Quantum mechanics of individual systems. Am. J. Phys., 36(8):704–712. 43. ^ "Modal Interpretations of Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Retrieved 2011-01-24. 45. ^ Aharonov, Y. et al., "Time Symmetry in the Quantum Process of Measurement." Phys. Rev. 134, B1410–B1416 (1964). 50. ^ a b Sharlow, Mark; "What Branching Spacetime Might Do for Physics", p. 2 52. ^ Karl Popper: The Propensity Interpretation of the Calculus of Probability and of the Quantum Theory. Observation and Interpretation. Buttersworth Scientific Publications, Korner & Price (eds.) 1957. pp. 65–70. 53. ^ de Muynck, Willem M. (2002). Foundations of quantum mechanics: an empiricist approach. Kluwer Academic Publishers. ISBN 1-4020-0932-1. Retrieved 2011-01-24.
Higgs mechanism From Wikipedia, the free encyclopedia In particle physics, the Higgs mechanism is essential to explain the generation mechanism of the property "mass", one of the most important properties of almost all elementary particles.[1] According to this theory, particles gain mass by interacting with the so-called Higgs field that permeates all space. More precisely, the Higgs mechanism endows the three gauge bosons Z, W+ and W− with mass. These particles would otherwise be massless; in fact they are very heavy, with masses around 80–91 GeV/c2. More common particles are also endowed with mass by this mechanism, e.g. the electron, or the quark constituents of, e.g., protons. Technically this happens through spontaneous symmetry breaking, where, however, due to the specific form of the symmetry breaking (see below), instead of the usual transverse Nambu–Goldstone bosons a longitudinal boson, named the Higgs boson, appears. The simplest implementation of the mechanism adds an extra Higgs field to the gauge theory. The specific spontaneous breaking of the underlying local symmetry, which is similar to the one appearing in the theory of superconductivity, triggers conversion of the longitudinal field component to the Higgs boson, which interacts with itself and (at least part of) the other fields in the theory, so as to produce mass terms for the above-mentioned three gauge bosons, and also for the above-mentioned fermions (see below). In the Standard Model, the phrase "Higgs mechanism" refers specifically to the generation of masses for the W± and Z weak gauge bosons through electroweak symmetry breaking.[2] The Large Hadron Collider at CERN announced results consistent with the Higgs particle on July 4, 2012 but stressed that further testing is needed to confirm the Standard Model. The mechanism was proposed in 1962 by Philip Warren Anderson.
The relativistic model was developed in 1964 by three independent groups: by Robert Brout and François Englert; by Peter Higgs; and by Gerald Guralnik, C. R. Hagen, and Tom Kibble. The Higgs mechanism is therefore also called the Brout–Englert–Higgs mechanism or Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism,[3] Anderson–Higgs mechanism,[4] Anderson–Higgs-Kibble mechanism,[5] Higgs–Kibble mechanism by Abdus Salam[6] and ABEGHHK'tH mechanism [for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble and 't Hooft] by Peter Higgs.[6] On October 8, 2013, it was announced that Peter Higgs and François Englert share the 2013 Nobel Prize in Physics "for the theoretical discovery of a mechanism that contributes to our understanding of the origin of the mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider".[7] Standard model[edit] The Higgs mechanism was incorporated into modern particle physics by Steven Weinberg and Abdus Salam, and is an essential part of the standard model. In the standard model, at temperatures high enough that electroweak symmetry is unbroken, all elementary particles are massless. At a critical temperature the Higgs field becomes tachyonic, the symmetry is spontaneously broken by condensation, and the W and Z bosons acquire masses. (EWSB, ElectroWeak Symmetry Breaking, is an abbreviation used for this.) Fermions, such as the leptons and quarks in the Standard Model, can also acquire mass as a result of their interaction with the Higgs field, but not in the same way as the gauge bosons. Structure of the Higgs field[edit] In the standard model, the Higgs field is an SU(2) doublet, a complex spinor with four real components (or equivalently with two complex components). Its (weak hypercharge) U(1) charge is 1. That means that it transforms as a spinor under SU(2). 
Under U(1) rotations, it is multiplied by a phase, which thus mixes the real and imaginary parts of the complex spinor into each other—so this is not the same as two complex spinors mixing under U(1) (which would have eight real components between them), but instead is the spinor representation of the group U(2). The Higgs field, through the interactions specified (summarized, represented, or even simulated) by its potential, induces spontaneous breaking of three out of the four generators ("directions") of the gauge group SU(2) × U(1): three out of its four components would ordinarily amount to Goldstone bosons, if they were not coupled to gauge fields. However, after symmetry breaking, these three of the four degrees of freedom in the Higgs field mix with the three W and Z bosons (W+, W− and Z), and are only observable as spin components of these weak bosons, which are now massive; while the one remaining degree of freedom becomes the Higgs boson—a new scalar particle. The photon as the part that remains massless[edit] The gauge group of the electroweak part of the standard model is SU(2) × U(1). The group SU(2) consists of the 2-by-2 unitary matrices of unit determinant: the orthonormal changes of coordinates in a complex two-dimensional vector space. Rotating the coordinates so that the second basis vector points in the direction of the Higgs boson makes the vacuum expectation value of H the spinor (0, v). The generators for rotations about the x, y, and z axes are half the Pauli matrices σx, σy, and σz, so that a rotation of angle θ about the z-axis takes the vacuum to (0, v e^(−iθ/2)). While the Tx and Ty generators mix up the top and bottom components of the spinor, the Tz rotations only multiply each by opposite phases. This phase can be undone by a U(1) rotation of angle θ/2. Consequently, under both an SU(2) Tz-rotation and a U(1) rotation by an amount θ/2, the vacuum is invariant.
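This invariance can be made explicit. With Tz = σz/2 and hypercharge Y = 1 for the doublet (as stated above), the combined generator annihilates the vacuum (a short check in the conventions used here):

```latex
\left(T_z + \tfrac{Y}{2}\right)\begin{pmatrix}0\\ v\end{pmatrix}
= \begin{pmatrix}\tfrac{1}{2}+\tfrac{1}{2} & 0\\ 0 & -\tfrac{1}{2}+\tfrac{1}{2}\end{pmatrix}\begin{pmatrix}0\\ v\end{pmatrix}
= \begin{pmatrix}0\\ 0\end{pmatrix},
```

so the vacuum is uncharged under this combination, while every other combination of generators shifts it.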
This combination of generators Q = Tz + Y/2 defines the unbroken part of the gauge group, where Tz is the generator of rotations around the z-axis in the SU(2) and Y is the hypercharge generator of the U(1). This combination of generators (a z rotation in the SU(2) and a simultaneous U(1) rotation by half the angle) preserves the vacuum, and defines the unbroken gauge group in the standard model, namely the electric charge group. The part of the gauge field in this direction stays massless, and amounts to the physical photon. Consequences for fermions[edit] In spite of the introduction of spontaneous symmetry breaking, the mass terms violate the chiral gauge invariance. For these fields the mass terms should always be replaced by a gauge-invariant "Higgs" mechanism. One possibility is some kind of Yukawa coupling (see below) between the fermion field ψ and the Higgs field Φ, with unknown couplings Gψ, which after symmetry breaking (more precisely: after expansion of the Lagrange density around a suitable ground state) again results in the original mass terms, which are now, however (i.e. by introduction of the Higgs field) written in a gauge-invariant way. The Lagrange density for the Yukawa interaction of a fermion field ψ and the Higgs field Φ is of the form L = ψ̄ iγμ Dμ ψ − Gψ ψ̄ Φ ψ, where again the gauge field A only enters via Dμ (i.e., it is only indirectly visible). The quantities γμ are the Dirac matrices, and Gψ is the already-mentioned Yukawa coupling parameter. The mass generation follows the same principle as above, namely from the existence of a finite expectation value |⟨Φ⟩|, as described above. Again, this is crucial for the existence of the property "mass". History of research[edit] Spontaneous symmetry breaking offered a framework to introduce bosons into relativistic quantum field theories.
However, according to Goldstone's theorem, these bosons should be massless.[8] The only observed particles which could be approximately interpreted as Goldstone bosons were the pions, which Yoichiro Nambu related to chiral symmetry breaking. A similar problem arises with Yang–Mills theory (also known as nonabelian gauge theory), which predicts massless spin-1 gauge bosons. Massless weakly interacting gauge bosons lead to long-range forces, which are only observed for electromagnetism and the corresponding massless photon. Gauge theories of the weak force needed a way to describe massive gauge bosons in order to be consistent. [Photo caption: Five of the six 2010 APS Sakurai Prize winners – (L to R) Tom Kibble, Gerald Guralnik, Carl Richard Hagen, François Englert, and Robert Brout; number six: Peter Higgs, 2009.] The mechanism was proposed in 1962 by Philip Warren Anderson,[9] who discussed its consequences for particle physics but did not work out an explicit relativistic model. The relativistic model was developed in 1964 by three independent groups – Robert Brout and François Englert;[10] Peter Higgs;[11] and Gerald Guralnik, Carl Richard Hagen, and Tom Kibble.[12][13][14] The mechanism is closely analogous to phenomena previously discovered by Yoichiro Nambu involving the "vacuum structure" of quantum fields in superconductivity.[15] A similar but distinct effect (involving an affine realization of what is now recognized as the Higgs field), known as the Stueckelberg mechanism, had previously been studied by Ernst Stueckelberg. These physicists discovered that when a gauge theory is combined with an additional field that spontaneously breaks the symmetry group, the gauge bosons can consistently acquire a nonzero mass. In spite of the large values involved (see below) this permits a gauge theory description of the weak force, which was independently developed by Steven Weinberg and Abdus Salam in 1967. Higgs's original article presenting the model was rejected by Physics Letters.
When revising the article before resubmitting it to Physical Review Letters, he added a sentence at the end,[16] mentioning that it implies the existence of one or more new, massive scalar bosons, which do not form complete representations of the symmetry group; these are the Higgs bosons. The three papers by Brout and Englert; Higgs; and Guralnik, Hagen, and Kibble were each recognized as "milestone letters" by Physical Review Letters in 2008.[17] While each of these seminal papers took similar approaches, the contributions and differences among the 1964 PRL symmetry breaking papers are noteworthy. All six physicists were jointly awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work.[18] Benjamin W. Lee is often credited with first naming the "Higgs-like" mechanism, although there is debate around when this first occurred.[19][20][21] One of the first times the Higgs name appeared in print was in 1972 when Gerardus 't Hooft and Martinus J. G. Veltman referred to it as the "Higgs–Kibble mechanism" in their Nobel winning paper.[22][23] The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. In the nonrelativistic context, this is the Landau model of a charged Bose–Einstein condensate, also known as a superconductor. In the relativistic condensate, the condensate is a scalar field, and is relativistically invariant. Landau model[edit] The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged, or, in field language, when a charged field has a nonzero vacuum expectation value. Interaction with the quantum fluid filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg–Landau theory). This simple model treats superconductivity as a charged Bose–Einstein condensate. Suppose that a superconductor contains bosons with charge q. 
The wavefunction of the bosons can be described by introducing a quantum field, ψ, which obeys the Schrödinger equation as a field equation (in units where the reduced Planck constant, ħ, is set to 1). The operator ψ(x) annihilates a boson at the point x, while its adjoint ψ† creates a new boson at the same point. The wavefunction of the Bose–Einstein condensate is then the expectation value ⟨ψ⟩ of ψ(x), which is a classical function that obeys the same equation. The interpretation of the expectation value is that it is the phase that one should give to a newly created boson so that it will coherently superpose with all the other bosons already in the condensate. The model has the gauge symmetry ψ → e^(iqφ(x)) ψ, A → A + ∇φ. The condensate wave function can be written as ψ = √ρ · e^(iθ), and taking the density of the condensate ρ to be constant, the phase θ behaves as a harmonic oscillator with frequency √(q²ρ²/m). Abelian Higgs mechanism[edit] In order for the phase of the vacuum to define a gauge, the field must have a phase (also referred to as 'to be charged'). In order for a scalar field Φ to have a phase, it must be complex, or (equivalently) it should contain two fields with a symmetry which rotates them into each other. The vector potential changes the phase of the quanta produced by the field when they move from point to point. In terms of fields, it defines how much to rotate the real and imaginary parts of the fields into each other when comparing field values at nearby points. The only renormalizable model where a complex scalar field Φ acquires a nonzero value is the Mexican-hat model, where the field energy has a minimum away from zero. The action for this model is S(φ) = ∫ ½|∂φ|² − λ(|φ|² − Φ²)², which results in the Hamiltonian H(φ) = ∫ ½|∂ₜφ|² + ½|∇φ|² + λ(|φ|² − Φ²)². This potential energy, V(z, Φ) = λ(|z|² − Φ²)²,[24] has a graph which looks like a Mexican hat, which gives the model its name.
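The shape of this potential can be verified symbolically (a sketch using sympy; here r stands for the magnitude |z|, and λ and Φ are treated as positive parameters):

```python
import sympy as sp

# r = |z|; lam and Phi are the positive parameters of the Mexican-hat potential
r, lam, Phi = sp.symbols('r lam Phi', positive=True)
V = lam * (r**2 - Phi**2)**2

# The circle |z| = Phi is a stationary point of zero energy ...
assert V.subs(r, Phi) == 0
assert sp.diff(V, r).subs(r, Phi) == 0

# ... and a genuine minimum: the second derivative there is positive
assert sp.expand(sp.diff(V, r, 2).subs(r, Phi)) == 8 * lam * Phi**2

# The symmetric point z = 0 sits higher, at V = lam * Phi**4
assert V.subs(r, 0) == lam * Phi**4
```

This confirms the statement that follows: the minimum is not at z = 0 but on the circle |z| = Φ.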
In particular, the minimum energy value is not at z = 0, but on the circle of points where the magnitude of z is Φ. [Figure caption: Higgs potential V. For a fixed value of λ the potential is plotted upwards against the real and imaginary parts of Φ. Note the Mexican-hat or champagne-bottle profile at the ground.] Coupling the field to electromagnetism gives the action S(φ, A) = ∫ −¼ F^(μν) F_(μν) + |(∂ − iqA)φ|² − λ(|φ|² − Φ²)². Furthermore, choosing a gauge where the phase of the vacuum is fixed, the potential energy for fluctuations of the vector field is nonzero. So in the abelian Higgs model, the gauge field acquires a mass. To calculate the magnitude of the mass, consider a constant value of the vector potential A in the x direction in the gauge where the condensate has constant phase. This is the same as a sinusoidally varying condensate in the gauge where the vector potential is zero. In the gauge where A is zero, the potential energy density in the condensate is the scalar gradient energy: E = ½ |∂(Φ e^(iqAx))|² = ½ q²Φ²A². This energy is the same as a mass term ½m²A² where m = qΦ. Nonabelian Higgs mechanism[edit] The nonabelian Higgs model has an analogous action, where now the nonabelian field A is contained in the covariant derivative D and in the tensor components F^(μν) and F_(μν) (the relation between A and those components is well known from the Yang–Mills theory). Again, the expectation value of Φ defines a preferred gauge where the vacuum is constant, and fixing this gauge, fluctuations in the gauge field A come with a nonzero energy cost. Affine Higgs mechanism[edit] Ernst Stueckelberg discovered[25] a version of the Higgs mechanism by analyzing the theory of quantum electrodynamics with a massive photon.
Effectively, Stueckelberg's model is a limit of the regular Mexican-hat abelian Higgs model, where the vacuum expectation value H goes to infinity and the charge of the Higgs field goes to zero in such a way that their product stays fixed. The mass of the Higgs boson is proportional to H, so the Higgs boson becomes infinitely massive and decouples, and is not present in the discussion. The vector meson mass, however, equals the product eH, and stays finite. The phase field θ transforms under the gauge group as θ → θ + eα, A → A + α, with covariant derivative Dθ = ∂θ − eA. In order to keep θ fluctuations finite and nonzero in this limit, θ should be rescaled by H, so that its kinetic term in the action stays normalized. The action for the theta field is read off from the Mexican-hat action by substituting φ = H e^(iθ/H); eH is the gauge boson mass. By making a gauge transformation to set θ = 0, the gauge freedom in the action is eliminated, and the action becomes that of a massive vector field: S = ∫ ¼F² + ½m²A². See also[edit] 1. ^ An exception is the "photon", the quantum of "light radiation", which is massless. 3. ^ "Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism on Scholarpedia". Scholarpedia.org. Retrieved 2012-06-16. 4. ^ Liu, G. Z.; Cheng, G. (2002). "Extension of the Anderson-Higgs mechanism". Physical Review B 65 (13). arXiv:cond-mat/0106070. Bibcode:2002PhRvB..65m2513L. doi:10.1103/PhysRevB.65.132513. 5. ^ Matsumoto, H.; Papastamatiou, N. J.; Umezawa, H.; Vitiello, G. (1975). "Dynamical rearrangement in the Anderson-Higgs-Kibble mechanism". Nuclear Physics B 97: 61. doi:10.1016/0550-3213(75)90215-1. 7. ^ "Press release from Royal Swedish Academy of Sciences". 8 October 2013. Retrieved 8 October 2013. 8. ^ "Guralnik, G S; Hagen, C R and Kibble, T W B (1967). Broken Symmetries and the Goldstone Theorem. Advances in Physics, vol. 2". Datafilehost.com. Retrieved 2012-06-16. 9. ^ P. W. Anderson (1962).
"Plasmons, Gauge Invariance, and Mass". Physical Review 130 (1): 439–442. Bibcode:1963PhRv..130..439A. doi:10.1103/PhysRev.130.439.  10. ^ F. Englert and R. Brout (1964). "Broken Symmetry and the Mass of Gauge Vector Mesons". Physical Review Letters 13 (9): 321–323. Bibcode:1964PhRvL..13..321E. doi:10.1103/PhysRevLett.13.321.  11. ^ Peter W. Higgs (1964). "Broken Symmetries and the Masses of Gauge Bosons". Physical Review Letters 13 (16): 508–509. Bibcode:1964PhRvL..13..508H. doi:10.1103/PhysRevLett.13.508.  12. ^ G. S. Guralnik, C. R. Hagen, and T. W. B. Kibble (1964). "Global Conservation Laws and Massless Particles". Physical Review Letters 13 (20): 585–587. Bibcode:1964PhRvL..13..585G. doi:10.1103/PhysRevLett.13.585.  13. ^ Gerald S. Guralnik (2009). "The History of the Guralnik, Hagen and Kibble development of the Theory of Spontaneous Symmetry Breaking and Gauge Particles". International Journal of Modern Physics A24 (14): 2601–2627. arXiv:0907.3466. Bibcode:2009IJMPA..24.2601G. doi:10.1142/S0217751X09045431.  14. ^ History of Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism on Scholarpedia. 15. ^ Nambu, Y (1960). "Quasiparticles and Gauge Invariance in the Theory of Superconductivity". Physical Review 117 (3): 648–663. Bibcode:1960PhRv..117..648N. doi:10.1103/PhysRev.117.648.  16. ^ Higgs, Peter (2007). "Prehistory of the Higgs boson". Comptes Rendus Physique 8 (9): 970–972. Bibcode:2007CRPhy...8..970H. doi:10.1016/j.crhy.2006.12.006  17. ^ "Physical Review Letters – 50th Anniversary Milestone Papers". Prl.aps.org. Retrieved 2012-06-16.  18. ^ "American Physical Society – J. J. Sakurai Prize Winners". Aps.org. Retrieved 2012-06-16.  19. ^ Department of Physics and Astronomy. "Rochester's Hagen Sakurai Prize Announcement". Pas.rochester.edu. Retrieved 2012-06-16.  20. ^ FermiFred (2010-02-15). "C.R. Hagen discusses naming of Higgs Boson in 2010 Sakurai Prize Talk". Youtube.com. Retrieved 2012-06-16.  21. ^ Sample, Ian (2009-05-29). 
"Anything but the God particle by Ian Sample". Guardian. Retrieved 2012-06-16. 22. ^ G. 't Hooft and M. Veltman (1972). "Regularization and Renormalization of Gauge Fields". Nuclear Physics B 44 (1): 189–219. Bibcode:1972NuPhB..44..189T. doi:10.1016/0550-3213(72)90279-9. 23. ^ "Regularization and Renormalization of Gauge Fields by t'Hooft and Veltman (PDF)" (PDF). Retrieved 2012-06-16. 25. ^ Stueckelberg, E. C. G. (1938), "Die Wechselwirkungskräfte in der Elektrodynamik und in der Feldtheorie der Kräfte", Helv. Phys. Acta. 11: 225
What's new in p-Adic Length Scale Hypothesis and Dark Matter Hierarchy Note: Newest contributions are at the top! Year 2007 Are the abundances of heavier elements determined by cold fusion in interstellar medium? According to the standard model, elements not heavier than Li were created in the Big Bang. Heavier elements were produced in stars by nuclear fusion, ended up in interstellar space in supernova explosions, and were gradually enriched in this process. The lithium problem forces one to take this theoretical framework with a grain of salt. The work of Kervran [1] suggests that cold nuclear reactions are occurring with considerable rates, not only in living matter but also in non-organic matter. Kervran indeed proposes that the abundances of elements on Earth and the planets are also to a high degree determined by nuclear transmutations, and discusses some examples. For instance, new mechanisms for the generation of O and Si would change dramatically the existing views about the evolution of planets and the prebiotic evolution of Earth. This inspires the question whether elements heavier than Li could be produced in interstellar space by cold nuclear reactions. In the following I consider a model for this. The basic prediction is that the abundances of heavier elements should not depend on time if interstellar production dominates. The prediction is consistent with recent experimental findings that seriously challenge the standard model. 1. Are heavier nuclei produced in the interstellar space? The TGD based model for cold fusion by plasma electrolysis using heavy water explains many other anomalies: for instance, the H1.5 anomaly of water and the lithium problem of cosmology (the amount of Li is considerably smaller than predicted by Big Bang cosmology, and the explanation is that part of it transforms to dark Li with a larger value of hbar, present in water).
The model allows one to understand the surprisingly detailed discoveries of Kervran about nuclear transmutations in living matter (often by bacteria) with possible slight modifications of the mechanisms proposed by Kervran. If this picture is correct, it would have dramatic technological implications. Cold nuclear reactions could provide not only a new energy technology but also a manner to produce artificially various elements, say metals. The treatment of nuclear wastes might be carried out by inducing cold fissions of radioactive heavy nuclei to stable products by allowing them to collide with dark Lithium nuclei in water so that the Coulomb wall is absent. Amazingly, there are bacteria which can live in the extremely harsh conditions provided by a nuclear reactor, where anything biological should die. Perhaps these bacteria carry out this process in their own body. The model also encourages one to consider a simple model for the generation of heavier elements in the interstellar medium: what is nice is that the basic prediction differentiating this model from the standard model is consistent with the recent experimental findings. The assumptions are the following. 1. Dark nuclei X(3k,n), that is nuclear strings of the form Li(3,n), C(6,n), F(9,n), Mg(12,n), P(15,n), Ar(18,n), etc..., form as fusions of Li strings. n=Z,Z+1 is the most plausible value of n. There is also 4He present, but as a noble gas it need not play an important role in the condensed matter phase (say interstellar dust). The presence of water necessitates that of Li(3,n) if one accepts the proposed model as such. 2. The resulting nuclei are in general stable against spontaneous fission by energy conservation. The binding energy of He(2,2) is however exceptionally high, so that alpha decay can occur in dark nuclear reactions between X(3k,n), allowed by the considerable reduction of the Coulomb wall.
The induced fissions X(3k,n) → X(3k-2,n-2) + He(2,2) produce nuclei with atomic number Z mod 3 = 1 such as Be(4,5), N(7,7), Ne(10,10), Al(13,14), S(16,16), K(19,20),... Similar nuclear reactions make possible a further alpha decay of the Z mod 3 = 1 nuclei to give nuclei with Z mod 3 = 2 such as B(5,6), O(8,8), Na(11,12), Si(14,14), Cl(17,18), Ca(20,20),... so that the most stable isotopes of light nuclei could result in these fissions. 3. The dark nuclear fusions of already existing nuclei can also create heavier nuclei, even beyond Fe. Only the gradual decrease of the binding energy per nucleon for nuclei heavier than Fe poses restrictions on this process. 2. The abundances of nuclei in interstellar space should not depend on time The basic prediction of the TGD inspired model is that the abundances of the nuclei in the interstellar space should not depend on time if the rates are so high that an equilibrium situation is reached rapidly. The hbar increasing phase transformation of the nuclear space-time sheet determines the time scale in which equilibrium sets in. The standard model makes a different prediction: the abundances of the heavier nuclei should gradually increase as the nuclei are repeatedly re-processed in stars and blown out to the interstellar space in supernova explosions. Amazingly, there is empirical support for this highly non-trivial prediction [2]. Quite surprisingly, the 25 measured elemental abundances (elements up to Sn(50,70) (tin) and Pb(82,124) (lead)) of a 12 billion year old galaxy turned out to be very nearly the same as those for the Sun. For instance, the oxygen abundance was 1/3 of that estimated for the Sun. The standard model would predict that the abundances should be .01-.1 times those for the Sun, as measured for stars in our galaxy. The conjecture was that there must be some unknown law guaranteeing that the distribution of stars of various masses is time independent. The alternative conclusion would be that heavier elements are created mostly in interstellar gas and dust. 3.
Could also "ordinary" nuclei consist of protons and negatively charged color bonds? The model would strongly suggest that also ordinary stable nuclei consist of protons with proton and negatively charged color bond behaving effectively like neutron. Note however that I have also consider the possibility that neutron halo consists of protons connected by negatively charged color bonds to main nucleus. The smaller mass of proton would favor it as a fundamental building block of nucleus and negatively charged color bonds would be a natural manner to minimizes Coulomb energy. The fact that neutron does not suffer a beta decay to proton in nuclear environment provided by stable nuclei would also find an explanation. 1. Ordinary shell model of nucleus would make sense in length scales in which proton plus negatively charged color bond looks like neutron. 2. The strictly nucleonic strong nuclear isospin is not vanishing for the ground state nuclei if all nucleons are protons. This assumption of the nuclear string model is crucial for quantum criticality since it implies that binding energies are not changed in the scaling of hbar if the length of the color bonds is not changed. The quarks of charged color bond however give rise to a compensating strong isospin and color bond plus proton behaves in a good approximation like neutron. 3. Beta decays might pose a problem for this model. The electrons resulting in beta decays of this kind nuclei consisting of protons should come from the beta decay of the d-quark neutralizing negatively charged color bond. The nuclei generated in high energy nuclear reactions would presumably contain genuine neutrons and suffer beta decay in which d quark is nucleonic quark. The question is whether how much the rates for these two kinds of beta decays differ and whether existing facts about beta decays could kill the model. [1] C. L. 
Kervran (1972), Biological transmutations, and their applications in chemistry, physics, biology, ecology, medicine, nutrition, agriculture, geology, Swan House Publishing Co. [2] J. Prochaska, J. C. Howk, A. M. Wolfe (2003), The elemental abundance pattern in a galaxy at z = 2.626, Nature 423, 57-59. See also Distant elements of surprise. For details see the chapter Nuclear String Hypothesis. The work of Kanarev and Mizuno about cold fusion in electrolysis The article of Kanarev and Mizuno [1] reports findings supporting the occurrence of cold fusion in NaOH and KOH hydrolysis. The situation is different from standard cold fusion, where heavy water D2O is used instead of H2O. 1. One can understand the cold fusion reactions reported by Mizuno as nuclear reactions in which part of what I call a dark proton string having negatively charged color bonds (essentially a zoomed-up variant of an ordinary nucleus with large Planck constant) suffers a phase transition to ordinary matter and experiences ordinary strong interactions with the nuclei at the cathode. In the simplest model the final state would contain only ordinary nuclear matter. 2. Negatively charged color bonds could correspond to pairs of quark and antiquark, or to pairs of color octet electron and antineutrino having mass of order 1 MeV. Also quantum superpositions of quark and lepton pairs can be considered. Note that TGD predicts that leptons can have colored excitations, and the production of neutral leptopions formed from them explains the anomalous production of electron-positron pairs associated with heavy ion collisions near the Coulomb wall. 3. The so called H1.5O anomaly of [2] can be understood if 1/4 of the protons of water form dark lithium nuclei, or heavier nuclei formed as sequences of these, just as ordinary nuclei are constructed as sequences of 4He and lighter nuclei in the nuclear string model.
The results force one to consider the possibility that nuclear isotopes unstable as ordinary matter can be stable dark matter. In the formation of these sequences the negative electronic charge of the hydrogen atoms goes naturally to the color bonds. The basic interaction would generate a charged quark pair (or a pair of color octet electron and antineutrino, or a quantum superposition of quark and lepton pairs) plus a color octet neutrino. By lepton number conservation each electron pair would give rise to a color singlet particle formed by two color octet neutrinos, defining the analog of a leptobaryon. The di-neutrino would leave the system unless it has a large enough mass. The neutrino mass scale .1 eV gives for the Compton time scale the estimate .1 attoseconds, which would suggest that di-neutrinos do not leak out. Recall that attosecond is the time scale in which the H1.5O behavior prevails. 4. The data of Mizuno require that the protonic strings have a net charge of three units and, by em stability, have neutral color bonds at the ends and negatively charged bonds in between. Dark variants of Li isotopes would be in question. The so called lithium problem of cosmology (the observed abundance of lithium is by a factor 2.5 lower than predicted by standard cosmology [3]) can be resolved if lithium nuclei transform partially to dark lithium nuclei. 5. The biologically important ions K+, Cl-, Ca++ appear at the cathode in plasma electrolysis and would be produced in cold nuclear reactions of the dark Li nuclei of water and Na+. This suggests that cold nuclear reactions occur also in the living cell and produce metabolic energy. There exists evidence for nuclear transmutations in living matter [4]. In particular, Kervran claims that it is very difficult to understand where the Ca in egg shells comes from. The cell membrane would provide the extremely strong electric field perhaps creating the plasma needed for cold nuclear reactions, somewhat like in plasma electrolysis. 6.
The model is consistent with the model for cold fusion of deuterium nuclei [5]. In this case the nuclear reaction would however occur on the "dark side". The absence of He from the reaction products can be understood if the D nuclei in the Pd target are transformed by weak interactions between D and Pd nuclei to their neutral counterparts analogous to di-neutrons. A neutral color bond could transform to a negatively charged one by the exchange of a W+ boson of a scaled version of weak interactions with the range of interaction given by the atomic length scale. Also the exchange of a charged ρ meson of a scaled-down variant of QCD could achieve the same thing. This interaction might be at work also for ordinary nuclei in condensed matter, and ordinary nuclei could contain protons and negatively charged color bonds instead of neutrons. The difference in mass would be very small since the quarks have mass of order MeV. The model leads also to a new understanding of ordinary [6] and plasma electrolysis of water [7], and allows one to identify the hydrogen bond as a dark OH bond. 1. The model for plasma hydrolysis relies on the observation of Kanarev that the energy of OH bonds in water is reduced from about 8 eV to a value around .5 eV, which corresponds to the fundamental metabolic energy quantum resulting in the dropping of a proton from the atomic k=137 space-time sheet, and also to a typical energy of the hydrogen bond. This suggests the possibility that the hydrogen bond is actually a dark OH bond. From the 1/hbar-proportionality of the perturbative contribution of the Coulomb energy for the bond one obtains that the dark bond energy scales as 1/hbar, so that a dark OH bond could be in question. In Kanarev's plasma electrolysis the temperature is between .5-1 eV, and thermal radiation could induce the production of 2H2+O2 by the splitting of the dark OH bonds. One could have hbar=24×hbar0. Also in the ordinary electrolysis the OH bond energy is reduced by a factor of order 2, which suggests that in this case one has hbar=2×hbar0. 2.
The transformation of OH bonds to their dark counterparts requires energy, and this energy would come from dark nuclear reactions. The liberated (dark) photons could kick protons from (dark) atomic space-time sheets to smaller space-time sheets, and remote metabolism would provide the energy for the transformation of the OH bond. The existence of dark hydrogen bonds with energies differing by integer scalings is predicted, and powers of 2 are favored. It is known that at least two kinds of hydrogen bonds, with energies differing by a factor 2, exist in ice [8]. 3. In plasma electrolysis the increase of the input voltage implies a mysterious reduction of the electron current with a simultaneous increase of the size of the plasma region near the cathode. The electronic charge must go somewhere, and the natural place is the negatively charged color bonds connecting dark protons to dark lithium isotopes. The energy liberated in cold nuclear reactions would create plasma by ionizing hydrogen atoms, which in turn would generate more dark protons fused to dark lithium isotopes and increase the rate of energy production by dark nuclear reactions. This means a positive feedback loop analogous to that occurring in ordinary nuclear reactions. The model explains also the burning of salt water discovered by Kanzius [9] as a special case of plasma electrolysis, since the mechanism does not necessitate the presence of an anode, a cathode, or an electron current. 1. The temperature of the flame is estimated to be 1500 C. The temperature in water could be considerably higher, and 1500 C defines a very conservative estimate. Hydrolysis would be preceded by the transformation of OH bonds to hydrogen bonds, and dark nuclear reactions would provide the energy. Again a positive feedback loop should be created.
Dark radio wave photons would transform to microwave photons and, together with nuclear energy production, would keep the water at the temperature corresponding to the energy of .017 eV (for the conservative estimate T=.17 eV in water), so that dark OH bonds would break down thermally. 2. For T=1500 C the energy of the dark OH bond (hydrogen bond) would be very low, around .04 eV for hbar=180×hbar0 and a nominal 8 eV OH bond energy (this is not far from the energy assignable to the membrane resting potential), from the condition that the dark radio wave frequency 13.65 MHz corresponds to the microwave frequency needed to heat water by the rotational excitation of water molecules. 3. Visible light would result as dark protons drop from the k=165 space-time sheet to any larger space-time sheet, or from the k=164 to the k=165 space-time sheet (2 eV radiation). 2 eV photons would explain the yellow color in the flame (not red as I have claimed earlier). The red light present in Kanarev's experiment can also be understood, since there is an entire series E(n) = E×(1-2^(-n)) of energies corresponding to transitions to space-time sheets with increasing p-adic length scale. For k=165, n<6 corresponds to red or infrared light and n>5 to yellow light. 4. There is no detectable or perceivable effect on the hand by the radio wave radiation. The explanation would be that dark hydrogen bonds in cellular water correspond to different values of Planck constant. One should of course check whether the effect is really absent. For more details see the chapter Nuclear String Hypothesis. [1] Cold fusion by plasma electrolysis of water, Ph. M. Kanarev and T. Mizuno (2002). [2] M. Chaplin (2005), Water Structure and Behavior. For 41 anomalies see http://www.lsbu.ac.uk/water/anmlies.html. For the icosahedral clustering see http://www.lsbu.ac.uk/water/clusters.html. J. K. Borchardt (2003), The chemical formula H2O - a misnomer, The Alchemist 8 Aug (2003). R. A.
Cowley (2004), Neutron-scattering experiments and quantum entanglement, Physica B 350 (2004) 243-245. R. Moreh, R. C. Block, Y. Danon, and M. Neumann (2005), Search for anomalous scattering of keV neutrons from H2O-D2O mixtures, Phys. Rev. Lett. 94, 185301. [3] C. Charbonnel and F. Primas (2005), The lithium content of the Galactic Halo stars. See also Lithium. [4] P. Tompkins and C. Bird (1973), The secret life of plants, Harper and Row, New York. [5] Cold fusion is back at the American Chemical Society. See also Cold fusion - hot news again. [6] Electrolysis of water. [7] P. Kanarev (2002), Water is New Source of Energy, Krasnodar. [8] J-C. Li and D.K. Ross (1993), Evidence of Two Kinds of Hydrogen Bonds in Ices, Nature 365, 327-329. [9] Burning salt water. Ultra high energy cosmic rays as super-canonical quanta? Lubos tells about the announcement of the Pierre Auger Collaboration relating to ultrahigh energy cosmic rays. I glue below a popular summary of the findings. Scientists of the Pierre Auger Collaboration announced today (8 Nov. 2007) that active galactic nuclei are the most likely candidate for the source of the highest-energy cosmic rays that hit Earth. Using the Pierre Auger Observatory in Argentina, the largest cosmic-ray observatory in the world, a team of scientists from 17 countries found that the sources of the highest-energy particles are not distributed uniformly across the sky. Instead, the Auger results link the origins of these mysterious particles to the locations of nearby galaxies that have active nuclei in their centers. The results appear in the Nov. 9 issue of the journal Science. Active Galactic Nuclei (AGN) are thought to be powered by supermassive black holes that are devouring large amounts of matter. They have long been considered sites where high-energy particle production might take place. They swallow gas, dust and other matter from their host galaxies and spew out particles and energy.
While most galaxies have black holes at their center, only a fraction of all galaxies have an AGN. The exact mechanism of how AGNs can accelerate particles to energies 100 million times higher than the most powerful particle accelerator on Earth is still a mystery. About a million cosmic ray events have been recorded, and 80 of them correspond to particles with energy above the so called GZK bound, which is .54 × 10^11 GeV. Electromagnetically interacting particles with these energies from distant galaxies should not be able to reach Earth. This would be due to the scattering from the photons of the microwave background. About 20 particles of this kind however come from the direction of distant active galactic nuclei, and the probability that this is an accident is about 1 per cent. Particles having only strong interactions would be in question. The problem is that particles of this kind are not predicted by the standard model (gluons are confined). 1. What does TGD say about the finding? TGD provides an explanation for the new kind of particles. 1. The original TGD based model for the galactic nucleus is a highly tangled cosmic string (in the TGD sense of course, see this). Much later it became clear that also the TGD based model for a blackhole is this kind of string like object near Hagedorn temperature (see this and this). Ultrahigh energy particles could result as decay products of a decaying split cosmic string as an extremely energetic galactic jet. A kind of cosmic fire cracker would be in question. Originally I proposed this decay as an explanation for the gamma ray bursts. It seems that gamma ray bursts however come from thickened cosmic strings having weaker magnetic field and much lower energy density (see this). 2. TGD predicts particles having only strong interactions (see this). I have christened these particles super-canonical quanta.
These p"../articles/ correspond to the vibrational degrees of freedom of partonic 2-surface and are not visible at the quantum field theory limit for which partonic 2-surfaces become points. 2. What super-canonical quanta are? Super-canonical quanta are created by the elements of super-canonical algebra, which creates quantum states besides the super Kac-Moody algebra present also in super string model. Both algebras relate closely to the conformal invariance of light-like 3-surfaces. 1. The elements of super-canonical algebra are in one-one correspondence with the Hamiltonians generating symplectic transformations of δM4+× CP2. Note that the 3-D light-cone boundary is metrically 2-dimensional and possesses degenerate symplectic and Kähler structures so that one can indeed speak about symplectic (canonical) transformations. 2. This algebra is the analog of Kac-Moody algebra with finite-dimensional Lie group replaced with the infinite-dimensional group of symplectic transformations (see this). This should give an idea about how gigantic a symmetry is in question. This is as it should be since these symmetries act as the largest possible symmetry group for the Kähler geometry of the world of classical worlds (WCW) consisting of light-like 3-surfaces in 8-D imbedding space for given values of zero modes (labelling the spaces in the union of infinite-dimensional symmetric spaces). This implies that for the given values of zero modes all points of WCW are metrically equivalent: a generalization of the perfect cosmological principle making theory calculable and guaranteing that WCW metric exists mathematically. Super-canonical generators correspond to gamma matrices of WCW and have the quantum numbers of right handed neutrino (no electro-weak interactions). Note that a geometrization of fermionic statistics is achieved. 3. 
The Hamiltonians and super-Hamiltonians have only color and angular momentum quantum numbers and no electro-weak quantum numbers, so that electro-weak interactions are absent. Super-canonical quanta however interact strongly. 3. Also hadrons contain super-canonical quanta One can say that the TGD based model for hadrons is at the space-time level a kind of combination of QCD and the old-fashioned string model, forgotten when QCD came into fashion and then transformed into the highly unsuccessful but equally fashionable theory of everything. 1. At the quantum level the energy corresponding to the string tension, explaining about 70 per cent of the proton mass, corresponds to super-canonical quanta (see this). Super-canonical quanta allow one to understand hadron masses with a precision better than 1 per cent. 2. Super-canonical degrees of freedom allow one also to solve the spin puzzle of the proton: the average quark spin would be zero, since the same net angular momentum of the hadron can be obtained by coupling quarks of opposite spin to angular momentum eigenstates with different projections to the direction of the quantization axis. 3. If one considers a proton without valence quarks and gluons, one obtains a boson with mass very nearly equal to that of the proton (for the proton the super-canonical binding energy compensates the quark masses with high precision). These kinds of pseudo protons might be created in high energy collisions when the space-time sheets carrying the valence quarks and the super-canonical space-time sheet separate from each other. Super-canonical quanta might be produced in accelerators in this manner, and there is actually experimental support for this from HERA (see this). 4. The exotic particles could correspond to some p-adic copy of hadron physics predicted by TGD and have a very large mass, smaller however than the energy. Mersenne primes M_n = 2^n - 1 define excellent candidates for these copies. Ordinary hadrons correspond to M107.
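The Mersenne scaling invoked here can be checked with a few lines of arithmetic. A minimal sketch, assuming only the p-adic rule that masses scale by 2^((107-k)/2) between copies of hadron physics, the ordinary proton mass, and the GZK figure quoted in this posting:

```python
# Scaled-up proton mass for the conjectured M31 copy of hadron physics,
# obtained from the ordinary (M107) proton by the p-adic scaling rule.
m_p_GeV = 0.938                      # ordinary proton mass
scale = 2 ** ((107 - 31) // 2)       # 2^38, the M107 -> M31 scaling factor
m_p31_GeV = m_p_GeV * scale          # mass of the M31 "proton"

E_GZK_GeV = 0.54e11                  # GZK bound quoted in the text

print(f"scaling factor 2^38 = {scale:.3e}")
print(f"M31 proton mass ~ {m_p31_GeV:.2e} GeV")
print(f"ratio to GZK bound: {m_p31_GeV / E_GZK_GeV:.1f}")
```

The scaled mass comes out a factor of roughly five above the GZK bound, consistent with the claim that such particles would lie above it.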
The protons of M31 hadron physics would have the mass of the proton scaled up by a factor 2^((107-31)/2) = 2^38 ≈ 2.7×10^11, that is a mass of about 2.6×10^11 GeV. The energy should thus be above 2.6 × 10^11 GeV, and is above .54 × 10^11 GeV for the particles above the GZK limit. Even super-canonical quanta associated with a proton of this kind could be in question. Note that the CP2 mass corresponds roughly to about 10^14 proton masses. 5. Ideal blackholes would be very long, highly tangled string like objects, scaled up hadrons, containing only super-canonical quanta. Hence it would not be surprising if they would emit super-canonical quanta. The transformation of supernovas to neutron stars and possibly blackholes would involve the fusion of hadronic strings to longer strings and the eventual annihilation and evaporation of the ordinary matter, so that only super-canonical matter would remain eventually. A wide variety of intermediate states with different values of string tension would be possible, and the ultimate blackhole would correspond to a highly tangled cosmic string. Dark matter would be in question in the sense that the Planck constant could be very large. For more details see the chapter p-Adic Particle Massivation: New Physics. Does Higgs boson appear with two p-adic mass scales? The p-adic mass scale of quarks is in the TGD Universe dynamical, and several mass scales appear already in low energy hadron mass formulas. Also neutrinos seem to correspond to several mass scales, and the large variation of the electron's effective mass in condensed matter might also be partially due to the variation of the p-adic mass scale. The values of the Higgs mass deduced from high precision electro-weak observables converge to two values differing by an order of magnitude (see this and this), and this raises the question whether also the Higgs mass scale could vary and depend on the experimental situation. 1. Higgs mass in standard model In the standard model the Higgs and W boson masses are given by m_H^2 = 2v^2λ = 2μ^2 , m_W^2 = g^2v^2/4 = [e^2/(4sin^2(θ_W))] μ^2/λ .
This gives λ = [πα_em/(2sin^2(θ_W))] (m_H/m_W)^2 . In the standard model one cannot predict the value of m_H. 2. Higgs mass in TGD In the TGD framework one can try to understand the Higgs mass from p-adic thermodynamics as resulting via the same mechanism as fermion masses, so that the value of the parameter λ would follow as a prediction. One must assume that the p-adic temperature equals T_p=1. The natural assumption is that the Higgs can be regarded as a superposition of pairs of fermion and anti-fermion at opposite throats of the wormhole contact. With these assumptions the thermal expectation of the Higgs conformal weight is just the sum of the contributions from both throats, that is two times the average of the conformal weight over quarks and leptons: s_H = 2×<s> = 2×[∑_q s_q + ∑_L s_L]/(N_q+N_L) = 2×∑_(g=0..2) s_mod(g)/3 + (s_L + s_νL + s_U + s_D)/2 = 26 + (5+4+5+8)/2 = 37 . A couple of comments about the formula are in order. 1. The first term - two times the average of the genus dependent modular contribution to the conformal weight - equals 26; it comes from the modular degrees of freedom and does not depend on the charge of the fermion. 2. The contribution of p-adic thermodynamics for the super-conformal generators gives the same contribution for all fermion families and depends on the em charge of the fermion. The values of the thermal conformal weights deduced earlier have been used. Note that only the value s_νL=4 (also s_νL=5 could be considered) is possible if one requires that the conformal weight is an integer. If the standard form of the canonical identification mapping p-adics to reals is used, this must be the case, since otherwise the real mass would be super-heavy. 3. To what p-adic mass scale does the Higgs correspond? The first guess would be that the p-adic length scale associated with the Higgs boson is M89. The second option is p ≈ 2^k, k=97 (restricting k to be prime). If one allows k to be non-prime (these values of k are also realized), one can consider also k=91=7×13.
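The mass estimates for these candidate scales can be reproduced numerically. A minimal sketch, assuming only the conformal weights s_H = 37 derived above and s = 5 for the electron, the electron's p-adic scale k = 127, and the p-adic rule m ∝ s^(1/2) × 2^(-k/2):

```python
from math import sqrt

# p-adic Higgs mass estimates: scale from the electron (k = 127, s = 5)
# to the candidate Higgs scales k = 89, 91, 97 with conformal weight 37.
m_e_GeV = 0.000511        # electron mass
s_H, s_e = 37, 5          # thermal conformal weights (Higgs, electron)

masses = {k: sqrt(s_H / s_e) * 2 ** ((127 - k) / 2) * m_e_GeV
          for k in (89, 91, 97)}

for k, m in masses.items():
    print(f"k = {k}: m_H ~ {m:6.1f} GeV")
```

The three outputs land within a fraction of a per cent of the 727.3, 363.5 and 45.5 GeV figures quoted below (the small residual comes from rounding of the electron mass); note that each unit increase of k lowers the mass by 2^(1/2), so k = 91 gives exactly half the k = 89 mass.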
By scaling from the expression for the electron mass, one obtains the estimates m_H(89) ≈ (37/5)^(1/2)×2^19 m_e ≈ 727.3 GeV , m_H(91) ≈ (37/5)^(1/2)×2^18 m_e ≈ 363.5 GeV , m_H(97) ≈ (37/5)^(1/2)×2^15 m_e ≈ 45.5 GeV. A couple of comments are in order. 1. From the article of Giudice one learns that the latest estimates for the Higgs mass give two widely different values, namely m_H = 31 (+33/-19) GeV and m_H = 420 (+420/-190) GeV. Since the p-adic mass scale of both neutrinos and quarks, and possibly even of the electron, can vary in the TGD framework, one cannot avoid the question whether - depending on the experimental situation - the Higgs could appear in two different mass scales corresponding to k=91 and k=97. 2. The low value of m_H(97) might be consistent with experimental facts, since the couplings of fermions to the Higgs can in the TGD framework be weaker than in the standard model, because the Higgs expectation does not contribute to fermion masses. 4. Unitarity bound and Higgs mass The value of λ is given in the three cases by λ(89) ≈ 4.41 , λ(91) ≈ 1.10 , λ(97) ≈ .2757. Unitarity would thus favor k=97 and k=91, also favored by the high precision data, and k=91 is just at the unitarity bound (λ=1) (here I am perhaps naive!). A possible interpretation is that for M89 the Higgs mass forces λ to break the unitarity bound and that this corresponds to the emergence of an M89 copy of hadron physics. For more details see the chapter Massless Particles and Particle Massivation. Connes tensor product and perturbative expansion in terms of generalized braid diagrams Many steps of progress have occurred in TGD lately. 1. In a given measurement resolution, characterized by the inclusion of HFFs of type II1, the Connes tensor product defines an almost universal M-matrix, apart from the non-uniqueness due to the facts that one has a direct sum of hyper-finite factors of type II1 (a sum over conformal weights at least) and that the included algebra defining the measurement resolution can be represented in a reducible manner.
The S-matrices associated with irreducible factors would be unique in a given measurement resolution, and the non-uniqueness would make possible non-trivial density matrices and thermodynamics. 2. The Higgs vacuum expectation is proportional to the generalized position dependent eigenvalue of the modified Dirac operator, and its minima define naturally number theoretical braids as orbits for the minima of the universal Higgs potential: fusion and decay of braid strands emerge naturally. Thus the old speculation about a generalization of braid diagrams to Feynman-diagram-like objects, which I had already begun to think was too crazy to be true, finds a very natural realization. In the previous posting I explained how generalized braid diagrams emerge naturally as orbits of the minima of Higgs defined as a generalized eigenvalue of the modified Dirac operator. The association of generalized braid diagrams to incoming and outgoing 3-D partonic legs and possibly also to vertices of the generalized Feynman diagrams forces one to ask whether the generalized braid diagrams could give rise to a counterpart of the perturbation theoretical formalism via the functional integral over configuration space degrees of freedom. The question is how the functional integral over configuration space degrees of freedom relates to the generalized braid diagrams. The basic conjecture, motivated also number theoretically, is that radiative corrections in this sense sum up to zero for critical values of the Kähler coupling strength, and that the Kähler function codes radiative corrections to classical physics via the dependence of the scale of the M4 metric on the Planck constant. Cancellation occurs only for critical values of the Kähler coupling strength αK: for general values of αK cancellation would require separate vanishing of each term in the sum and does not occur.
The natural guess is that the finite measurement resolution in the sense of the Connes tensor product can be described as a cutoff on the number of generalized braid diagrams. Suppose that the cutoff due to the finite measurement resolution can be described in terms of inclusions, and that the M-matrix can be expressed as a Connes tensor product. Suppose that the improvement of the measurement resolution means the introduction of zero energy states and corresponding light-like 3-surfaces in shorter time scales, bringing in increasingly complex 3-topologies. This would mean the following. 1. One would not have perturbation theory around a given maximum of the Kähler function but a sum over increasingly complex maxima of the Kähler function. Radiative corrections in the sense of a perturbative functional integral around a given maximum would vanish (so that the expansion in terms of braid topologies would not make sense around a single maximum). Radiative corrections would not vanish in the sense of a sum over 3-topologies obtained by adding radiative corrections as zero energy states in shorter time scales. 2. The Connes tensor product with a given measurement resolution would correspond to a restriction on the number of maxima of the Kähler function labelled by the braid diagrams. For zero energy states in a given time scale the maxima of the Kähler function could be assigned to braids of minimal complexity, with braid vertices interpreted in terms of an addition of radiative corrections. Hence a connection with the QFT type Feynman diagram expansion would be obtained, and the Connes tensor product would have a practical computational realization. 3. The cutoff in the number of topologies (maxima of the Kähler function contributing in a given resolution defining the Connes tensor product) would always be finite, in accordance with the algebraic universality. 4.
The time scale resolution defined by the temporal distance between the tips of the causal diamond defined by the future and past light-cones applies to the addition of zero energy sub-states and one obtains a direct connection with the p-adic length scale evolution of coupling constants since the time scales in question naturally come as negative powers of two. More precisely, p-adic primes near powers of two are very natural since the coupling constant evolution comes in powers of two of the fundamental 2-adic length scale.

There are still some questions. Radiative corrections around a given 3-topology vanish. Could radiative corrections sum up to zero in an ideal measurement resolution also in the 2-D sense so that the initial and final partonic 2-surfaces associated with a partonic 3-surface of minimal complexity would determine the outcome completely? Could the 3-surface of minimal complexity correspond to a trivial diagram so that a free theory would result in accordance with asymptotic freedom as the measurement resolution becomes ideal? The answer to these questions seems to be 'No'. In the p-adic sense the ideal limit would correspond to the limit p → 0 and since only p → 2 is possible in the discrete length scale evolution defined by primes, the limit is not a free theory. This conforms with the view that the CP2 length scale defines the ultimate UV cutoff.

For more details see the chapter Massless States and Particle Massivation.

Number theoretic braids and global view about anti-commutations of induced spinor fields

The anti-commutations of induced spinor fields are reasonably well understood locally. The basic objects are 3-dimensional light-like 3-surfaces. These surfaces can however be seen as random light-like orbits of partonic 2-surfaces, which would thus seem to take the role of fundamental dynamical objects. Conformal invariance in turn seems to make the 2-D partons 1-D objects and number theoretical braids in turn discretize strings.
And it also seems that the strands of the number theoretic braid can in turn be discretized by considering the minima of the Higgs potential in the 3-D sense. Somehow these apparently contradictory views should be unifiable in a more global view about the situation allowing one to understand the reduction of the effective dimension of the system as one goes to short scales. The notions of measurement resolution and number theoretic braid indeed provide the needed insights in this respect.

1. Anti-commutations of the induced spinor fields and number theoretical braids

The understanding of the number theoretic braids in terms of Higgs minima and maxima allows one to gain a global view about anti-commutations. The coordinate patches inside which the Higgs modulus is a monotonically increasing function define a division of the partonic 2-surfaces X2t = X3l ∩ δM4±,t into 2-D patches as a function of the time coordinate of X3l as the light-cone boundary is shifted in the preferred time direction defined by the quantum critical sub-manifold M2×CP2. This induces a similar division of the light-like 3-surfaces X3l into 3-D patches and there is a close analogy with the dynamics of an ordinary 2-D landscape. In both the 2-D and 3-D case one can ask what happens at the common boundaries of the patches. Do the induced spinor fields associated with different patches anti-commute so that they would represent independent dynamical degrees of freedom? This seems to be a natural assumption both in the 2-D and 3-D case and corresponds to the idea that the basic objects are 2- resp. 3-dimensional in the resolution considered, but in a discretized sense due to the finite measurement resolution, which is coded by the patch structure of X3l. A dimensional hierarchy results with the effective dimension of the basic objects increasing as the resolution scale increases when one proceeds from braids to the level of X3l.
If the induced spinor fields associated with different patches anti-commute, the patches indeed define independent fermionic degrees of freedom at braid points and one has effective 2-dimensionality in a discrete sense. In this picture the fundamental stringy curves for X2t correspond to the boundaries of the 2-D patches and the anti-commutation relations for the induced spinor fields can be formulated at these curves. Formally the conformal time evolution scales down the boundaries of these patches. If anti-commutativity holds true at the boundaries of the patches for the spinor fields of neighboring patches, the patches would indeed represent independent degrees of freedom at the stringy level. The cutoff in transversal degrees of freedom for the induced spinor fields means a cutoff n ≤ nmax for the conformal weight assignable to the holomorphic dependence of the induced spinor field on the complex coordinate. The dropping of higher conformal weights should imply the loss of the anti-commutativity of the induced spinor field and its conjugate except at the points of the number theoretical braid. Thus the number theoretic braid should code for the value of nmax: the naive expectation is that for a given stringy curve the number of braid points equals nmax.

2. The decomposition into 3-D patches and QFT description of particle reactions at the level of number theoretic braids

What is the physical meaning of the decomposition of the 3-D light-like surface into patches? It would be very desirable to keep the picture in which the number theoretic braid connects the incoming positive/negative energy state to the partonic 2-surfaces defining the reaction vertices. This is not obvious if X3l decomposes into causally independent patches. One can however argue that although each patch can define its own fermion state it has vanishing net quantum numbers in zero energy ontology, and can be interpreted as an intermediate virtual state for the evolution of the incoming/outgoing partonic state.
Another problem - actually only an apparent problem - has been whether it is possible to have a generalization of the braid dynamics able to describe particle reactions in terms of the fusion and decay of braid strands. For some strange reason I had not realized that number theoretic braids naturally allow fusion and decay. Indeed, the cusp catastrophe is a canonical representation for the fusion process: the cusp region contains two minima (plus a maximum between them) and the complement of the cusp region a single minimum. The crucial control parameter of the cusp catastrophe corresponds to the time parameter of X3l. More concretely, two valleys with a mountain between them fuse to form a single valley as the two real roots of a polynomial become complex conjugate roots. The continuation of the light-like surface to a slicing of X4 by light-like 3-surfaces would give the full cusp catastrophe. In the catastrophe theoretic setting the time parameter of X3l appears as a control variable on which the roots of the polynomial equation defining the minimum of Higgs depend: the dependence would be given by a rational function with rational coefficients. This picture means that particle reactions occur at several levels, which brings to mind a kind of universal mimicry inspired by the Universe as a Universal Computer hypothesis. Particle reactions in the QFT sense correspond to the reactions for the number theoretic braids inside partons. This level seems to be the simplest one to describe mathematically. At the parton level particle reactions correspond to generalized Feynman diagrams obtained by gluing partonic 3-surfaces along their ends at vertices. Particle reactions are realized also at the level of 4-D space-time surfaces. One might hope that this multiple realization could code the dynamics already at the simple level of a single partonic 3-surface.

3.
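The fusion of two valleys into one as two real roots become complex conjugates can be sketched numerically. The sketch below is a standard cusp-catastrophe illustration, not something from the text: the quartic potential V(x) = x^4/4 + a·x^2/2 + b·x and the cubic discriminant test are textbook choices, with the control parameter a playing the role of the time parameter of X3l.

```python
def num_critical_points(a: float, b: float) -> int:
    """Critical points of the cusp potential V(x) = x**4/4 + a*x**2/2 + b*x
    solve x**3 + a*x + b = 0; the sign of the cubic discriminant tells whether
    there are three real roots (two minima plus a maximum) or a single one."""
    disc = -4 * a**3 - 27 * b**2
    return 3 if disc > 0 else 1

# Sweeping the control parameter a: two minima with a mountain between them
# fuse into a single valley as a increases past the cusp line.
for a in (-1.0, -0.1, 0.1, 1.0):
    print(a, num_critical_points(a, b=0.05))
```

For b = 0.05 the transition from three critical points to one happens between a = -1 and a = -0.1, i.e. exactly when the two real roots of the cubic become complex conjugate.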
About 3-D minima of Higgs potential The dominating contribution to the modulus of the Higgs field comes from δ M4+/- distance to the axis R+ defining quantization axis. Hence in scales much larger than CP2 size the geometric picture is quite simple. The orbit for the 2-D minimum of Higgs corresponds to a particle moving in the vicinity of R+ and minimal distances from R+ would certainly give a contribution to the Dirac determinant. Of course also the motion in CP2 degrees of freedom can generate local minima and if this motion is very complex, one expects large number of minima with almost same modulus of eigenvalues coding a lot of information about X3l. It would seem that only the most essential information about surface is coded: the knowledge of minima and maxima of height function indeed provides the most important general coordinate invariant information about landscape. In the rational category where X3l can be characterized by a finite set of rational numbers, this might be enough to deduce the representation of the surface. What if the situation is stationary in the sense that the minimum value of Higgs remains constant for some time interval? Formally the Dirac determinant would become a continuous product having an infinite value. This can be avoided by assuming that the contribution of a continuous range with fixed value of Higgs minimum is given by the contribution of its initial point: this is natural if one thinks the situation information theoretically. Physical intuition suggests that the minima remain constant for the maxima of Kähler function so that the initial partonic 2-surface would determine the entire contribution to the Dirac determinant. For more details see the chapter Massless states and Particle Massivation. Fractional Quantum Hall effect in TGD framework The generalization of the imbedding space discussed in previous posting allows to understand fractional quantum Hall effect (see this and this). 
The formula for the quantized Hall conductance is given by σ = ν×e^2/h, ν = m/n. Series of fractions ν = 1/3, 2/5, 3/7, 4/9, 5/11, 6/13, 7/15..., 2/3, 3/5, 4/7, 5/9, 6/11, 7/13..., 5/3, 8/5, 11/7, 14/9..., 4/3, 7/5, 10/7, 13/9..., 1/5, 2/9, 3/13..., 2/7, 3/11..., 1/7... with odd denominator have been observed, as are also the ν=1/2 and ν=5/2 states with even denominator. The model of Laughlin [Laughlin] cannot explain all aspects of FQHE. The best existing model, proposed originally by Jain [Jain], is based on composite fermions resulting as bound states of an electron and an even number of magnetic flux quanta. Electrons remain integer charged but due to the effective magnetic field electrons appear to have fractional charges. The composite fermion picture predicts all the observed fractions and also their relative intensities and the order in which they appear as the quality of the sample improves. I have considered earlier a possible TGD based model of FQHE not involving the hierarchy of Planck constants. The generalization of the notion of imbedding space suggests the interpretation of these states in terms of fractionized charge and electron number.

1. The easiest manner to understand the observed fractions is by assuming that both M4 and CP2 correspond to covering spaces so that spin, electric charge, and fermion number are quantized. With this assumption the expression for the Planck constant becomes hbar/hbar0 = nb/na and the charge and spin units are equal to 1/nb and 1/na respectively. This gives ν = n×na/nb^2. The values n=2,3,5,7,.. are observed. Planck constant can have arbitrarily large values. There are general arguments stating that also spin is fractionized in FQHE, and for na = k×nb required by the observed values of ν charge fractionization occurs in units of k/nb and forces also spin fractionization. For the factor space option in M4 degrees of freedom one would have ν = n/(na×nb^2).

2.
The appearance of nb=2 would suggest that also Z2 appears as the homotopy group of the covering space: filling fraction 1/2 corresponds in the composite fermion model and also experimentally to the limit of zero magnetic field [Jain]. Also ν=5/2 has been observed.

3. A possible problematic aspect of the TGD based model is the experimental absence of even values of nb except nb=2. A possible explanation is that by some symmetry condition, possibly related to fermionic statistics, k×n/nb must reduce to a rational with an odd denominator for nb>2. In other words, one has k ∝ 2^r, where 2^r is the largest power-of-2 divisor of nb smaller than nb.

4. Large values of nb emerge as B increases. This can be understood from flux quantization. One has eBS = n×hbar = n×(nb/na)×hbar0. The interpretation is that each of the nb sheets contributes n/na units to the flux. As nb increases also the flux increases for a fixed value of na and area S: note that the magnetic field strength remains more or less constant so that a kind of saturation effect for the magnetic field strength would be in question. For na = k×nb one obtains eBS/hbar0 = n/k so that a fractionization of the magnetic flux results and each sheet contributes 1/(k×nb) units to the flux. ν=1/2 corresponds to k=1, nb=2 and to a non-vanishing magnetic flux unlike in the case of the composite fermion model.

5. The understanding of the thermal stability is not trivial. The original FQHE was observed at 80 mK temperature corresponding roughly to a thermal energy of T ≈ 10^-5 eV. For graphene the effect is observed at room temperature. Cyclotron energy for electron (from fe = 6×10^5 Hz at B = .2 Gauss) is of order thermal energy at room temperature in a magnetic field varying in the range 1-10 Tesla. This raises the question why the original FQHE requires so low a temperature?
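The cyclotron figure just quoted is easy to check independently. The sketch below uses only the textbook formula f_c = eB/(2π m_e) and CODATA constant values; it is a numeric cross-check, not part of the original argument.

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg
K_BOLTZMANN = 1.380649e-23     # Boltzmann constant, J/K

def cyclotron_frequency(B_tesla: float) -> float:
    """Electron cyclotron frequency f_c = e*B/(2*pi*m_e) in Hz."""
    return E_CHARGE * B_tesla / (2 * math.pi * M_ELECTRON)

# B = .2 Gauss = 2e-5 Tesla reproduces the quoted fe ~ 6e5 Hz
print(cyclotron_frequency(2e-5))

# thermal energies in eV: 80 mK (~1e-5 eV as quoted) vs. room temperature
print(K_BOLTZMANN * 0.080 / E_CHARGE)
print(K_BOLTZMANN * 300.0 / E_CHARGE)
```

The 80 mK thermal energy indeed comes out near 10^-5 eV, consistent with the figure quoted above.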
The magnetic energy of a flux tube of length L is by flux quantization roughly e^2 B^2 S ≈ Ec(e) me L (hbar0 = c = 1) and exceeds the cyclotron energy roughly by the factor L/Le, Le the electron Compton length, so that thermal stability of magnetic flux quanta is not the explanation. A possible explanation is that since FQHE involves several values of Planck constant, it is a quantum critical phenomenon and is characterized by a critical temperature. The differences of the energies associated with the phase with ordinary Planck constant and the phases with different Planck constant would characterize the transition temperature. Saturation of the magnetic field strength would be energetically favored.

[Laughlin] R. B. Laughlin (1983), Phys. Rev. Lett. 50, 1395.
[Jain] J. K. Jain (1989), Phys. Rev. Lett. 63, 199.

For more details see the chapter Dark Nuclear Physics and Condensed Matter.

Could one demonstrate the existence of large Planck constant photons using ordinary camera or even bare eyes?

If ordinary light sources generate also dark photons with the same energy but with scaled up wavelength, this might have effects detectable with a camera and even with bare eyes. In the following I consider in a rather light-hearted and speculative spirit two possible effects of this kind appearing in both visual perception and in photos. For crackpotters possibly present in the audience I want to make clear that I love to play with ideas to see whether they work or not, and that I am ready to accept some convincing mundane explanation of these effects and would be happy to hear about such explanations. I was not able to find any such explanation from Wikipedia using words like camera, digital camera, lens, aberrations.

Why light from an intense light source seems to decompose into rays?

If one also assumes that ordinary radiation fields decompose in TGD Universe into topological light rays ("massless extremals", MEs) even stronger predictions follow.
If Planck constant equals hbar = q×hbar0, q = na/nb, MEs should possess Zna as an exact discrete symmetry group acting as rotations along the direction of propagation for the induced gauge fields inside the ME. The structure of MEs should somehow realize this symmetry and one possibility is that MEs have a wheel like structure decomposing into radial spokes with angular distance Δφ = 2π/na related by the symmetries in question. This brings strongly to mind a phenomenon which everyone can observe anytime: the light from a bright source decomposes into radial rays as if one were seeing the profile of the light rays emitted in a plane orthogonal to the line connecting the eye and the light source. The effect is especially strong if the eyes are stirred. Could this apparent decomposition into light rays reflect directly the structure of dark MEs and could one deduce the value of na by just counting the number of rays in a camera picture, where the phenomenon turned out to be visible as well? Note that the size of these wheel like MEs would be macroscopic and diffractive effects do not seem to be involved. The simplest assumption is that most of the photons giving rise to the wheel like appearance are transformed to ordinary photons before their detection. The discussions about this led to a little experimentation with a camera at the summer cottage of my friend Samppa Pentikäinen, quite a magician in technical affairs. When I mentioned the decomposition of light from an intense light source into rays at the level of visual percept and wondered whether the same occurs also in a camera, Samppa decided to take photos with a digi camera directed at the Sun. The effect occurred also in this case and might correspond to a decomposition into MEs with various values of na but with the same quantization axis so that the effect is not smoothed out. What was interesting was the presence of some stronger almost vertical "rays" located symmetrically near the vertical axis of the camera.
The shutter mechanism determining the exposure time is based on the opening of the first shutter followed by the closing of a second shutter after the exposure time so that every point of the sensor receives input for an equally long time. The area of the region determining the input is bounded by a vertical line. If macroscopic MEs are involved, the contribution of vertical rays is either nothing or all, unlike that of other rays, and this might somehow explain why their contribution is enhanced. Addition: I learned from Samppa that the shutter mechanism is unnecessary in digi cameras since the time for the reset of the sensors is what matters. Something in the geometry of the camera or in the reset mechanism must select the vertical direction in a preferred position. For instance, the outer "aperture" of the camera had the geometry of a flattened square.

Anomalous diffraction of dark photons

A second prediction is the possibility of diffractive effects in length scales where they should not occur. A good example is the diffraction of light coming from a small aperture of radius d. The diffraction pattern is determined by the Bessel function J1(x), x = kd×sin(θ), k = 2π/λ. There is a strong light spot in the center and light rings around it whose radii increase in size as the distance of the screen from the aperture increases. Dark rings correspond to the zeros of J1(x) at x = xn and the following scaling law for the nodes holds true: sin(θn) = xn λ/(2π d). For very small wavelengths the central spot is almost pointlike and contains most of the light intensity. If photons of visible light correspond to large Planck constant hbar = q×hbar0 transformed to ordinary photons in the detector (say camera film or eye), their wavelength is scaled by q and one has sin(θn) → q×sin(θn). The size of the diffraction pattern for visible light is scaled up by q. This effect might make it possible to detect dark photons with energies of visible photons and possibly present in ordinary light.

1.
What is needed is an intense light source and the Sun is an excellent candidate in this respect. A dark photon beam is also needed and n dark photons with a given visible wavelength λ could result when a dark photon with hbar = n×q×hbar0 decays to n dark photons with the same wavelength but smaller Planck constant hbar = q×hbar0. If this beam enters the camera or eye one has a beam of n dark photons which forms a diffraction pattern producing the camera picture in the decoherence to ordinary photons.

2. In the case of an aperture with the geometry of a circular hole, the first dark ring for ordinary visible photons would be at sin(θ) ≈ (π/36)λ/d. For a distance of r = 2 cm between the sensor plane ("film") and the effective circular hole this would mean a radius of R ≈ r×sin(θ) ≈ 1.7 micrometers for micron wavelength. The actual size of the spots is of order R ≈ 1 mm so that the value of q would be around 1000: q = 2^10 and q = 2^11 belong to the favored values for q.

3. One can imagine also an alternative situation. If the photons responsible for the spot arrive along a single ME, the transversal thickness R of the ME is smaller than the radius of the hole, say of order of the wavelength; the ME itself effectively defines the hole with radius R and the value of sin(θn) does not depend on the value of d for d>R. Even ordinary photons arriving along MEs of this kind could give rise to an anomalous diffraction pattern. Note that the transversal thickness of the ME need not be fixed, however. It however seems that MEs are now macroscopic.

4. A similar effect results as one looks at an intense light source: bright spots appear in the visual field as one closes the eyes. If there is some more mundane explanation (I do not doubt this!), it must apply in both cases and explain also why the spots have a precisely defined color rather than being white.

5. The only mention of diffractive aberration effects I could find is colored rings around, say, disk-like objects, analogous to the colors around the shadow of a disk-like object.
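The scaling law sin(θn) = xn λ/(2π d) and its dark-photon version sin(θn) → q×sin(θn) can be written out as a small sketch. The zeros of J1 are hard-coded (x1 ≈ 3.8317) and the aperture radius d = 1 mm and screen distance r = 2 cm are illustrative choices of mine, not the (unstated) geometry behind the estimate above.

```python
import math

J1_ZEROS = [3.8317, 7.0156, 10.1735]  # first zeros x_n of the Bessel function J1

def dark_ring_radius(n: int, wavelength: float, d: float, r: float, q: int = 1) -> float:
    """Radius r*sin(theta_n) of the n:th dark ring on a screen at distance r
    behind a circular aperture of radius d; q scales the effective wavelength
    for dark photons with hbar = q*hbar0 (valid for small angles)."""
    sin_theta = J1_ZEROS[n - 1] * wavelength / (2 * math.pi * d)
    return r * q * sin_theta

# ordinary micron-wavelength photon, illustrative aperture d = 1 mm, r = 2 cm
R1 = dark_ring_radius(1, 1e-6, 1e-3, 2e-2)
# dark photon with q = 2**10: the whole pattern is scaled up by q
R1_dark = dark_ring_radius(1, 1e-6, 1e-3, 2e-2, q=2**10)
print(R1, R1_dark / R1)
```

Whatever the aperture geometry, the ratio of the dark-photon ring radius to the ordinary one is exactly q, which is how the q ≈ 1000 estimate above is read off from spot sizes.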
The radii of these diffraction rings in this case scale like wavelengths and the distance from the object. The experimentation of Samppa using a digi camera demonstrated the appearance of colored spots in the pictures. If I have understood correctly, the sensors defining the pixels of the picture are in the focal plane and the diffraction for large Planck constant might explain the phenomenon. Since I did not have the idea about the diffractive mechanism in mind, I did not check whether fainter colored rings might surround the bright spot. In any case, the readily testable prediction is that zooming to a bright light source by reducing the size of the aperture should increase the size and number of the colored spots. As a matter of fact, experimentation demonstrated that focusing brought in a large number of these spots but we did not check whether the size was increased.

For details see the chapter Dark Nuclear Physics and Condensed Matter.

Burning salt water with radio waves and large Planck constant

This morning my friend Samuli Penttinen sent an email telling about a strange discovery by engineer John Kanzius: salt water in a test tube irradiated by radio waves at harmonics of a frequency f = 13.56 MHz burns. Temperatures of about 1500 K, which correspond to .15 eV energy, have been reported. One can irradiate also a hand but nothing happens. The original discovery of Kanzius was the finding that radio waves could be used to cure cancer by destroying the cancer cells. The proposal is that this effect might provide a new energy source by liberating chemical energy in an exceptionally effective manner. The power is about 200 W so that the power used could explain the effect if it is absorbed in a resonance-like manner by the salt water. The energies of the photons involved are very small, multiples of 5.6×10^-8 eV, and their effect should be very small since it is difficult to imagine what resonant molecular transition could cause the effect.
This leads to the question whether the radio wave beam could contain a considerable fraction of dark photons for which the Planck constant is larger so that the energy of the photons is much larger. The underlying mechanism would be a phase transition of dark photons with large Planck constant to ordinary photons with shorter wavelength coupling resonantly to some molecular degrees of freedom and inducing the heating. Microwave oven of course comes to mind immediately.

1. The fact that the effects occur at harmonics of the fundamental frequency suggests that rotational states of molecules are in question as in microwave heating. Since the presence of salt is essential, the first candidate for the molecule in question is NaCl but also HCl can be considered. The basic formula for the rotational energies is E(l) = E0×l(l+1), E0 = hbar^2/(2μR^2), μ = m1 m2/(m1+m2). Here R is the molecular radius, which by definition is deduced from the rotational energy spectrum. The energy inducing the transition l → l+1 is ΔE(l) = 2E0×(l+1).

2. By going to Wikipedia, one can find the molecular radii of heteronuclear di-atomic molecules such as NaCl and homonuclear di-atomic molecules such as H2. Using E0(H2) = 8.0×10^-3 eV one obtains by scaling E0(NaCl) = E0(H2)×(μ(H2)/μ(NaCl))×(R(H2)/R(NaCl))^2. The atomic weights are A(H)=1, A(Na)=23, A(Cl)=35.

3. A little calculation gives f(NaCl) = 2E0/h = 14.08 GHz. The ratio to the radio wave frequency is f(NaCl)/f = 1.0386×10^3, to be compared with hbar/hbar0 = 2^10 = 1.024×10^3. The discrepancy is 1 per cent. Thus dark radio wave photons could induce a rotational microwave heating of the sample and the effect could be seen as additional dramatic support for the hierarchy of Planck constants.

There are several questions to be answered.

1. Does this effect occur also for solutions of other molecules and for other solvents than water? This can be tested since the rotational spectra are readily calculable from data which can be found on the net.

2.
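The scaling estimate in point 3 above can be repeated numerically before continuing with the open questions. The bond lengths R(H2) ≈ 0.074 nm and R(NaCl) ≈ 0.236 nm are standard tabulated values that the text does not quote, so the resulting frequency differs slightly from the 14.08 GHz given above; the point is only that the ratio lands near 2^10.

```python
H_PLANCK_EV = 4.135667696e-15  # Planck constant, eV*s

def reduced_mass(m1: float, m2: float) -> float:
    """mu = m1*m2/(m1+m2), in whatever mass units m1 and m2 share."""
    return m1 * m2 / (m1 + m2)

# E0(H2) = 8.0e-3 eV from the text; scale by (mu(H2)/mu(NaCl)) * (R(H2)/R(NaCl))**2
mu_H2 = reduced_mass(1.0, 1.0)        # atomic mass units
mu_NaCl = reduced_mass(23.0, 35.0)
E0_NaCl = 8.0e-3 * (mu_H2 / mu_NaCl) * (0.074 / 0.236) ** 2

f_NaCl = 2 * E0_NaCl / H_PLANCK_EV    # l=0 -> l=1 transition frequency, Hz
print(f_NaCl / 1e9)                    # ~13.7 GHz with these bond lengths
print(f_NaCl / 13.56e6)                # ~1.0e3, to be compared with 2**10 = 1024
```

With these assumed bond lengths the ratio comes out within a couple of per cent of 2^10, in line with the 1 per cent discrepancy quoted above.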
Are the radio wave photons dark or does water - which is a very special kind of liquid - induce the transformation of ordinary radio wave photons to dark photons by fusing 2^10 radio wave massless extremals (MEs) to a single ME? Does this transformation occur for all frequencies? This kind of transformation might play a key role in transforming ordinary EEG photons to dark photons and partially explain the special role of water in living systems.

3. Why does the radiation not induce spontaneous combustion of living matter, which contains salt? And why do cancer cells seem to burn: is the salt concentration higher inside them? As a matter of fact, there are reports about spontaneous human combustion. One might hope that there is a mechanism inhibiting this since otherwise the military would soon be developing new horror weapons unless it is doing this already now. Is it that most of the salt is ionized to Na+ and Cl- ions so that spontaneous combustion can be avoided? And how does this relate to the sensation of spontaneous burning - a very painful sensation that some part of the body is burning?

4. Is the heating solely due to rotational excitations? It might be that also a "dropping" of ions to larger space-time sheets is induced by the process and liberates zero point kinetic energy. The dropping of a proton from the k=137 (k=139) atomic space-time sheet liberates about .5 eV (0.125 eV). The measured temperature corresponds to the energy .15 eV. This dropping is an essential element of remote metabolism and provides universal metabolic energy quanta. It is also involved with TGD based models of "free energy" phenomena. No perpetuum mobile is predicted since there must be a mechanism driving the dropped ions back to the original space-time sheets.
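The pair of quanta quoted in point 4, .5 eV for k=137 and 0.125 eV for k=139, is consistent with zero point kinetic energy scaling as 1/L(k)^2 with L(k) ∝ 2^(k/2). The sketch below only encodes that scaling; the k=137 reference value .5 eV is taken from the text and the functional form is my reading of the p-adic length scale hypothesis.

```python
def zero_point_energy(k: int, E_ref: float = 0.5, k_ref: int = 137) -> float:
    """Zero point kinetic energy (eV) liberated when a proton drops from the
    space-time sheet labelled by k, assuming E ~ 1/L(k)**2 with L(k) ~ 2**(k/2),
    normalized to the quoted 0.5 eV at k = 137."""
    return E_ref * 2 ** (k_ref - k)

print(zero_point_energy(137))  # 0.5 eV
print(zero_point_energy(139))  # 0.125 eV, as quoted in the text
```

Each step k → k+2 thus divides the liberated energy by four, which is exactly the ratio between the two quoted quanta.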
Recall that one of the empirical motivations for the hierarchy of Planck constants came from the observed quantum like effects of ELF em fields at EEG frequencies on vertebrate brain and also from the correlation of EEG with brain function and contents of consciousness, difficult to understand since the energies of EEG photons are ridiculously small and should be masked by thermal noise. In the TGD based model of EEG (actually a fractal hierarchy of EEGs) the values hbar/hbar0 = 2^(k×11), k=1,2,3,..., of Planck constant are in a preferred role. More generally, powers of two of a given value of Planck constant are preferred, which is also in accordance with the p-adic length scale hypothesis.

For details see the chapter Dark Nuclear Physics and Condensed Matter.

Blackhole production at LHC and replacement of ordinary blackholes with super-canonical blackholes

Tommaso Dorigo has an interesting posting about blackhole production at LHC. I have never taken this idea seriously but in a well-defined sense TGD predicts blackholes associated with super-canonical gravitons with a strong gravitational constant defined by the hadronic string tension. The proposal is that super-canonical blackholes have already been seen in Hera, RHIC, and the strange cosmic ray events (see the previous posting). Ordinary blackholes are naturally replaced with super-canonical blackholes in the TGD framework, which would mean a profound difference between TGD and string models. Super-canonical black-holes are dark matter in the sense that they have no electro-weak interactions and they could have Planck constant larger than the ordinary one so that the value of αsK=1/4 is reduced. The condition that αK has the same value for the super-canonical phase as it has for ordinary gauge boson space-time sheets gives hbar = 26×hbar0. With this assumption the size of the baryonic super-canonical blackholes would be 46 fm, the size of a big nucleus, and would define the fundamental length scale of nuclear physics.

1.
RHIC and super-canonical blackholes

In high energy collisions of nuclei at RHIC the formation of super-canonical blackholes via the fusion of nucleonic space-time sheets would give rise to what has been christened a color glass condensate. Baryonic super-canonical blackholes of M107 hadron physics would have mass 934.2 MeV, very near to the proton mass. The mass of their M89 counterparts would be 512 times higher, about 478 GeV. The "ionization energy" for the Pomeron, the structure formed by valence quarks connected by color bonds separating from the space-time sheet of the super-canonical blackhole in the production process, corresponds to the total quark mass and is about 170 MeV for the ordinary proton and 87 GeV for the M89 proton. This kind of picture about blackhole formation expected to occur at LHC differs from the stringy picture since a fusion of the hadronic mini blackholes to a larger blackhole is in question. An interesting question is whether the ultrahigh energy cosmic rays having energies larger than the GZK cut-off (see the previous posting) are baryons which have lost their valence quarks in a collision with a hadron and therefore have no interactions with the microwave background so that they are able to propagate over long distances.

2. Ordinary blackholes as super-canonical blackholes

In neutron stars the hadronic space-time sheets could form a gigantic super-canonical blackhole and ordinary blackholes would be naturally replaced with super-canonical blackholes in the TGD framework (only a small part of the blackhole interior metric is representable as an induced metric).

1. Hawking-Bekenstein blackhole entropy would be replaced with its p-adic counterpart given by Sp = (M/m(CP2))^2×log(p), where m(CP2) is the CP2 mass, which is roughly 10^-4 times Planck mass. M corresponds to the contribution of p-adic thermodynamics to the mass. This contribution is extremely small for gauge bosons but for fermions and super-canonical particles it gives the entire mass.

2.
If the p-adic length scale hypothesis p ≈ 2^k holds true, one obtains Sp = k×log(2)×(M/m(CP2))^2, m(CP2) = hbar/R, R the "radius" of CP2, where hbar corresponds to the standard value hbar0 for all values of Planck constant.

3. The Hawking-Bekenstein area law gives in the case of a Schwarzschild blackhole S = hbar×A/(4G) = hbar×πG M^2. For the p-adic variant of the law Planck mass is replaced with CP2 mass and k×log(2) ≈ log(p) appears as an additional factor. The area law is obtained in the case of elementary particles if k is prime and the wormhole throats have M4 radius given by the p-adic length scale Lk = k^(1/2)×R(CP2), which is exponentially smaller than Lp. For macroscopic super-canonical black-holes a modified area law results if the radius of the large wormhole throat equals the Schwarzschild radius. Schwarzschild radius is indeed natural: I have shown that a simple deformation of the Schwarzschild exterior metric to a metric representing a rotating star transforms the Schwarzschild horizon to a light-like 3-surface at which the signature of the induced metric is transformed from Minkowskian to Euclidean (see this).

4. The formula for the gravitational Planck constant appearing in the Bohr quantization of planetary orbits and characterizing the gravitational field body mediating the gravitational interaction between masses M and m (see this) reads as hbargr/hbar0 = GMm/v0, where v0 = 2^-11 is the preferred value of v0. One could argue that the value of the gravitational Planck constant is such that the Compton length hbargr/M of the black-hole equals its Schwarzschild radius. This would give hbargr/hbar0 = GM^2/v0, v0 = 1/2. This is a natural generalization of Nottale's formula to gravitational self-interactions. The requirement that hbargr is a ratio of ruler-and-compass integers expressible as a product of distinct Fermat primes (only four of them are known) and a power of 2 would quantize the mass spectrum of the black hole.
Even without this constraint M^2 is integer valued using the p-adic mass squared unit, and if the p-adic length scale hypothesis holds true this unit is in an excellent approximation a power of two.

5. The gravitational collapse of a star would correspond to a process in which the initial value of v_0, say v_0 = 2^-11, increases in a stepwise manner to some value v_0 ≤ 1/2. For a supernova with solar mass and a radius of 9 km the final value of v_0 would be v_0 = 1/6. The star could have an onion-like structure with the largest values of v_0 at the core. Powers of two would be favored values of v_0. If the formula holds true also for the Sun, one obtains 1/v_0 = 3×17×2^13 with 10 per cent error.

6. Blackhole evaporation could be seen as a means for the super-canonical blackhole to get rid of its electro-weak charges and fermion numbers (except right-handed neutrino number) as the antiparticles of the emitted particles annihilate with the particles inside the super-canonical blackhole. This kind of minimally interacting state is a natural final state of a star. An ideal super-canonical blackhole would have only angular momentum and right-handed neutrino number.

7. In TGD light-like partonic 3-surfaces are the fundamental objects and the space-time interior defines only the classical correlates of quantum physics. The space-time sheet containing the highly entangled cosmic string might be separated from the environment by a wormhole contact with the size of the blackhole horizon. This looks the most plausible option, but one can of course ask whether the large partonic 3-surface defining the horizon of the blackhole actually contains all super-canonical particles, so that the super-canonical blackhole would be a single gigantic super-canonical parton. The interior of the super-canonical blackhole would be a space-like region of space-time, perhaps resulting as a large deformation of a CP2 type vacuum extremal.
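The quoted v_0 values are easy to sanity-check numerically. The condition "Compton length hbar_gr/M equals the radius R" translates, with hbar_gr = GM^2/v_0, to v_0 = GM/(R c^2); a minimal sketch using approximate SI constants:

```python
# Sanity check of the v_0 values quoted above, assuming the condition
# "Compton length hbar_gr/M = radius R", i.e. v_0 = GM/(R c^2).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

def v0(radius_m, mass_kg=M_sun):
    return G * mass_kg / (radius_m * c**2)

# Supernova remnant of solar mass with radius 9 km: v_0 ≈ 1/6
v0_sn = v0(9e3)
print(v0_sn, 1/6)    # ~0.164 vs 0.1667

# Sun itself (R ≈ 6.96e8 m): 1/v_0 ≈ 3*17*2^13 within roughly 10 per cent
inv_v0_sun = 1 / v0(6.96e8)
print(inv_v0_sun, 3 * 17 * 2**13)
```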
A blackhole-sized wormhole contact would define a gauge boson like variant of the blackhole connecting two space-time sheets and getting its mass through the Higgs mechanism. A good guess is that these states are extremely light.

Pomeron, valence quarks, and super-canonical dark matter

The recent developments in the understanding of the hadron mass spectrum involve the realization that the hadronic k=107 space-time sheet is a carrier of super-canonical bosons (and possibly their super-counterparts with the quantum numbers of the right-handed neutrino) (see this). The model leads to amazingly simple and accurate mass formulas for hadrons. Most of the baryonic momentum is carried by super-canonical quanta: valence quarks correspond in the proton to a relatively small fraction of the total mass, about 170 MeV. The counterparts of string excitations correspond to super-canonical many-particle states, and the additivity of conformal weight, proportional to mass squared, implies a stringy mass formula and a generalization of the Regge trajectory picture. The hadronic string tension is predicted correctly. The model also provides a solution to the proton spin puzzle.

In this framework valence quarks would correspond to a color singlet state formed by space-time sheets connected by color flux tubes, having no Regge trajectories and carrying a relatively small fraction of the baryonic momentum. This kind of structure, known as the Pomeron, was the anomalous part of the hadronic string model. Valence quarks would thus correspond to the Pomeron.

1. Experimental evidence for Pomeron

The Pomeron was originally introduced to describe hadronic diffractive scattering as the exchange of the Pomeron Regge trajectory [1]. No hadrons belonging to the Pomeron trajectory were however found, and with the advent of QCD the Pomeron was almost forgotten. The Pomeron has recently experienced a reincarnation [2,3,4]. In HERA e-p collisions the proton scatters essentially elastically, whereas jets in the direction of the incoming virtual photon emitted by the electron are observed.
These events can be understood by assuming that the proton emits a color singlet particle carrying a small fraction of the proton's momentum. This particle in turn collides with the virtual photon (antiproton), whereas the proton scatters essentially elastically. The identification of the color singlet particle as the Pomeron looks natural since Pomeron emission describes nicely the diffractive scattering of hadrons. Analogous hard diffractive scattering events in pX diffractive scattering with X = anti-p [3] or X = p [4] have also been observed. What happens is that the proton scatters essentially elastically and the emitted Pomeron collides with X and suffers hard scattering, so that large rapidity gap jets in the direction of X are observed. These results suggest that the Pomeron is real and consists of ordinary partons.

2. Pomeron as the color bonded structure formed by valence quarks

In the TGD framework the natural identification of the Pomeron is as the color bonded structure formed by the valence quarks. The lightness and electro-weak neutrality of the Pomeron support the view that the photon strips the valence quarks from the Pomeron, which continues its flight more or less unperturbed. Instead of an actual topological evaporation, the bonds connecting the valence quarks to the hadronic space-time sheet could be stretched during the collision with the photon. The large value α_K = 1/4 for super-canonical matter suggests that the criterion for a phase transition increasing the value of Planck constant (this) and leading to a phase where α_K ∝ 1/hbar is reduced could be satisfied. For α_K to remain invariant, hbar_0 → 26×hbar_0 would be required. In this case the size of the hadronic space-time sheet, the "color field body of the hadron", would be 26×L(107) = 46 fm, roughly the size of the heaviest nuclei. Note that the sizes of the electromagnetic field bodies of the current quarks u and d with masses of order a few MeV are not much smaller than the Compton length of the electron.
This would mean that super-canonical bosons would represent dark matter in a well-defined sense, and Pomeron exchange would represent a temporary separation of ordinary and dark matter. Note however that the fact that super-canonical bosons have no electro-weak interactions implies their dark matter character even for the ordinary value of Planck constant: this could be taken as an objection against the dark matter hierarchy. My own interpretation is that super-canonical matter is dark matter in the strongest sense of the word, whereas ordinary matter in the large hbar phase is only apparently dark matter because the standard interactions do not reveal themselves in the expected manner.

3. Astrophysical counterpart of Pomeron events

Pomeron events have a direct analogy in astrophysical length scales. I have commented about this already earlier. In the collision of two galaxies the dark and visible matter parts of the colliding galaxies have been found by the Chandra X-ray Observatory to separate. Imagine a collision between two galaxies. The ordinary matter in them collides and gets interlocked due to the mutual gravitational attraction. Dark matter, however, just keeps its momentum and keeps going, leaving behind the colliding galaxies. This kind of event has been detected by the Chandra X-ray Observatory using an ingenious manner to detect dark matter: collisions of ordinary matter produce a lot of X-rays, and the dark matter outside the galaxies acts as a gravitational lens.

4. Super-canonical bosons and anomalies of hadron physics

Super-canonical bosons suggest a solution to several other anomalies related to hadron physics. The spin puzzle of the proton has already been discussed in previous postings.
The events observed a couple of years ago at RHIC (see this) suggest the creation of a blackhole-like state in the collision of heavy nuclei and inspire the notion of a color glass condensate of gluons, whose natural identification in the TGD framework would be in terms of a fusion of hadronic space-time sheets containing super-canonical matter, materialized also from the collision energy. The blackhole states would be blackholes of strong gravitation, with the gravitational constant determined by the hadronic string tension and gravitons identifiable as J=2 super-canonical bosons. The topological condensation of mesonic and baryonic Pomerons created from the collision energy on the condensate would be analogous to the sucking of ordinary matter by a real blackhole. Note that also real blackholes would be dense enough for the formation of a condensate of super-canonical bosons, but probably with a much larger value of Planck constant. Neutron stars could contain a hadronic super-canonical condensate.

In the collision, valence quarks connected together by color bonds to form separate units would evaporate from their hadronic space-time sheets just like in the collisions producing the Pomeron. The strange features of the events related to the collisions of high energy cosmic rays with hadrons of the atmosphere (the particles in question are hadron-like, but the penetration length is anomalously long and the rate for the production of hadrons increases as one approaches the surface of Earth) could also be understood in terms of the same general mechanism.

5. Fashions and physics

The story of the Pomeron is a good example of the destructive effect of reductionism, fashions, and career constructivism in present-day theoretical physics. More than thirty years ago we had the hadronic string model providing a satisfactory qualitative view about the non-perturbative aspects of hadron physics. The Pomeron was the anomaly.
Then came QCD, and both the hadronic string model and the Pomeron were forgotten, and low energy hadron physics became the anomaly. No one asked whether valence quarks might relate to the Pomeron and whether the stringy aspects could represent something which does not reduce to QCD. To have some use for strings it was decided that the superstring model describes not only gravitation but actually everything, and now we are in a situation in which people are wasting their time with an AdS/CFT duality based model in which N=4 super-symmetric theory is declared to describe hadrons. This theory does not contain even quarks, only superpartners of gluons, and the conclusions are based on the study of the limit in which one has an infinite number of quark colors. The science historians of the future will certainly identify the last thirty years as the weirdest period in theoretical physics.

For the revised p-adic mass calculations of hadron masses see the chapters p-Adic mass calculations: hadron masses and p-Adic mass calculations: New Physics of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

[1] N. M. Queen, G. Violini (1974), Dispersion Theory in High Energy Physics, The Macmillan Press Limited.
[2] M. Derrick et al (1993), Phys. Lett. B 315, p. 481.
[3] A. Brandt et al (1992), Phys. Lett. B 297, p. 417.
[4] A. M. Smith et al (1985), Phys. Lett. B 163, p. 267.

Does the spin of hadron correlate with its super-canonical boson content?

The revision of the hadronic mass calculations is still producing pleasant surprises. The explicit comparison of the super-canonical conformal weights associated with spin 0 and spin 1 states on one hand and spin 1/2 and spin 3/2 states on the other hand (see this) demonstrates that the difference between these states could be understood in terms of the super-canonical particle contents of the states by introducing only a single additional negative conformal weight s_c describing color Coulombic binding.
s_c is constant for baryons (s_c = -4) and in the case of mesons non-vanishing only for pions (s_c = -5) and kaons (s_c = -12). This leads to an excellent prediction for the masses also in the meson sector, since pseudoscalar mesons heavier than the kaon are not Goldstone boson like states in this model. Deviations of predicted and actual masses are typically below one per cent, and second order contributions can explain the discrepancy. There is also consistency with the string bounds from the top quark mass.

The correlation of the spin of the quark system with the particle content of the super-canonical sector increases dramatically the predictive power of the model if the allowed conformal weights of super-canonical bosons are assumed to be identical with those of U type quarks and thus given by (5, 6, 58) for the three generations. One can even consider the possibility that also exotic hadrons with a different super-canonical particle content exist: this means a natural generalization of the notion of Regge trajectories. The next task would be to predict the correlation of hadron spin with super-canonical particle content in the case of long-lived hadrons.

For the revised p-adic mass calculations of hadron masses see the revised chapter p-Adic mass calculations: hadron masses.

Revised p-adic calculations of hadronic masses

The progress in the understanding of the Kähler coupling strength led to a considerable increase in the understanding of hadronic masses. I list those points which are of special importance for the revised model.

1. Higgs contribution to fermion masses is negligible

There are good reasons to believe that the Higgs expectation value for the fermionic space-time sheets is vanishing although fermions couple to Higgs. Thus p-adic thermodynamics would explain fermion masses completely.
This, together with the fact that the prediction of the model for the top quark mass is consistent with the most recent limits on it, fixes the CP2 mass scale with a high accuracy to the maximal one, obtained if the second order contribution to electron's p-adic mass squared vanishes. This is a very strong constraint on the model.

2. The p-adic length scale of quark is dynamical

The assumption about the presence of scaled up variants of light quarks in light hadrons is not new. It leads to a surprisingly successful model for pseudoscalar meson masses using only quark masses and the assumption that mass squared is additive for quarks with the same p-adic length scale and that mass is additive for quarks labelled by different primes p. This conforms with the idea that pseudoscalar mesons are Goldstone bosons in the sense that the color Coulombic and magnetic contributions to the mass cancel each other. Also the mass differences between hadrons containing different numbers of strange and heavy quarks can be understood if s, b and c quarks appear as several scaled up versions.

This hypothesis yields a surprisingly good fit for meson masses, but for some mesons the predicted mass is slightly too high. A reduction of the CP2 mass scale to cure the situation is not possible since the top quark mass would become too low. In the case of diagonal mesons, for which quarks correspond to the same p-adic prime, the quark contribution to mass squared can be reduced by ordinary color interactions, and in the case of non-diagonal mesons one can require that the quark contribution is not larger than the meson mass.

3. Super-canonical bosons at the hadronic space-time sheet can explain the constant contribution to baryonic masses

Quarks explain only a small fraction of the baryon mass, and there is an additional contribution which in a good approximation does not depend on the baryon. This contribution should correspond to the non-perturbative aspects of QCD.
A possible identification of this contribution is in terms of the super-canonical gluons predicted by TGD. The baryonic space-time sheet with k=107 would contain a many-particle state of super-canonical gluons with a net conformal weight of 16 units. This leads to a model of baryon masses in which the masses are predicted with an accuracy better than 1 per cent. Super-canonical gluons also provide a possible solution to the spin puzzle of the proton.

One ends up also with the prediction α_K = 1/4 at the hadronic space-time sheet. The hadronic string model provides a phenomenological description of the non-perturbative aspects of QCD, and a connection with the hadronic string model indeed emerges. The hadronic string tension is predicted correctly from the additivity of mass squared for J=2 bound states of super-canonical quanta.

If the topological mixing for super-canonical bosons is equal to that for U type quarks, then a 3-particle state formed by two super-canonical quanta from the first generation and one quantum from the second generation would define the baryonic ground state with 16 units of conformal weight. In the case of mesons, the pion could contain a super-canonical boson of the first generation preventing the large negative contribution of the color magnetic spin-spin interaction from making the pion a tachyon. For heavier mesons a super-canonical boson need not be assumed. The preferred role of the pion would relate to the fact that its mass scale is below the QCD Λ.

4. Description of color magnetic spin-spin splitting in terms of conformal weight

What remains to be understood are the contributions of the color Coulombic and magnetic interactions to the mass squared. There are contributions coming from both ordinary gluons and super-canonical gluons, and the latter is expected to dominate due to the large value of the color coupling strength.
The conformal weight replaces energy as the basic variable, but the group theoretical structure of the color magnetic contribution to the conformal weight associated with the hadronic space-time sheet (k=107) is the same as in the case of energy. The predictions for the masses of mesons are not as good as for baryons, and one might criticize the application of the format of perturbative QCD in an essentially non-perturbative situation.

The comparison of the super-canonical conformal weights associated with spin 0 and spin 1 states and spin 1/2 and spin 3/2 states shows that the different masses of these states could be understood in terms of the super-canonical particle contents of the states correlating with the total quark spin. The resulting model allows excellent predictions also for the meson masses and implies that only pion and kaon can be regarded as Goldstone boson like states. The model based on spin-spin splittings is consistent with this model.

To sum up, the model provides an excellent understanding of baryon and meson masses. This success is highly non-trivial, since the fit involves only the integers characterizing the p-adic length scales of quarks and the integers characterizing the color magnetic spin-spin splitting, plus p-adic thermodynamics and topological mixing for super-canonical gluons. The next challenge would be to predict the correlation of hadron spin with super-canonical particle content in the case of long-lived hadrons.

A connection with hadronic string model

In the previous posting I described the realization that the so called super-canonical degrees of freedom (the super Kac-Moody algebra associated with the symplectic (canonical) transformations of M^4_± × CP2, the light-cone boundary in a loose terminology) are responsible for the non-perturbative aspects of hadron physics.
One can say that the notion of the hadronic space-time sheet characterized by the Mersenne prime M_107 and responsible for the non-perturbative aspects of hadron physics finds a precise quantitative theoretical articulation in terms of the super-canonical symmetry. Note that besides bosonic generators also the super-counterparts of the bosonic generators carrying the quantum numbers of the right-handed neutrino are present and could give rise to super-counterparts of hadrons. It might not be easy to distinguish them from ordinary hadrons.

1. Quantitative support for the role of super-canonical algebra

Quantitative calculations for hadron masses (still in progress) support this picture, and one can predict correctly the previously unidentified large contribution to the masses of spin 1/2 baryons in terms of a bound state of g=1 (genus) super-canonical gluons with a color binding conformal weight of 2 units reducing the net conformal weight of the 2-gluon state from 18 to 16. An alternative picture is that super-canonical gluons suffer the same topological mixing as U type quarks, so that the conformal weights are (5, 6, 58). In this case the ground state could contain two super-canonical gluons of the first generation and one of the second generation (5+5+6=16).

I thought first that in the case of mesons this contribution might not be present. There could however be a single super-canonical boson present inside pion and rho meson with conformal weight 5 (!) and it would prevent the color magnetic binding conformal weight from making the pion a tachyon. The special role of the π-ρ system would be due to the fact that the pion mass is below the QCD Λ. If no mixing occurs, g=0 gluons would define the analog of the gluonic component of the parton sea, bringing in an additional color interaction besides the one mediated by ordinary gluons and having a very strong color coupling strength α_K = 1/4.
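The counting behind 5+5+6=16 can be made explicit. A small enumeration sketch, assuming 3-particle states built from the mixed conformal weights (5, 6, 58), shows that this is in fact the only 3-boson combination with net conformal weight 16:

```python
from itertools import combinations_with_replacement

# Conformal weights assumed for the three super-canonical boson generations,
# taken equal to those of U type quarks after topological mixing.
weights = (5, 6, 58)

# Enumerate all 3-particle states and pick those with net conformal weight 16,
# the value attributed to the baryonic ground state in the text.
states = [c for c in combinations_with_replacement(weights, 3) if sum(c) == 16]
print(states)   # [(5, 5, 6)] -- two first-generation quanta plus one second-generation
```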
This contribution is compensated by the color magnetic spin-spin splitting and the color Coulombic energy in the case of pseudoscalars, in accordance with the idea that pseudoscalars are Goldstone bosons apart from the contribution of quarks to the mass of the meson. Quite generally, one can say that the super-canonical sector adds to the theory the non-perturbative aspects of hadron physics which become important at low energies. This contribution is something which QCD cannot yield in any circumstances, since the color group has a geometric meaning in TGD, being represented as color rotations of CP2.

2. Hadronic strings and super-canonical algebra

The hadronic string model provides a phenomenological description of the non-perturbative aspects of hadron physics, and TGD was born both as a resolution of the energy problem of general relativity and as a generalization of the hadronic string model. Hence one can ask whether something resembling the hadronic string model might emerge from the super-canonical sector. TGD allows string like objects, but the fundamental string tension is gigantic, roughly a factor 10^-8 of that defined by the Planck constant. An extremely rich spectrum of vacuum extremals is predicted, and the expectation motivated by the p-adic length scale hypothesis is that vacuum extremals deformed to non-vacuum extremals give rise to a hierarchy of string like objects with string tension T ∝ 1/L_p^2, L_p the p-adic length scale. The p-adic length scale hypothesis states that primes p ≈ 2^k are preferred. Also a hierarchy of QCD like physics is predicted. The challenge has been the identification of the quantum counterpart of this picture, and p-adic physics leads naturally to it.

1. The fundamental mass formula of the string model relates the mass squared and angular momentum of the stringy state. It has the form M^2 = M_0^2·J, M_0^2 ≈ 0.9 GeV^2. A more general formula is M^2 = k·n.

2.
This kind of formula results from the additivity of the conformal weight (and thus mass squared) for systems characterized by the same value of the p-adic prime if one constructs a many-particle state from g=1 super-canonical bosons with a thermal mass squared M^2 = M_0^2·n, M_0^2 = n_0·m_107^2. The angular momentum of the building blocks has some spectrum fixed by the Virasoro conditions. If the basic building block has angular momentum J_0 and mass squared M_0^2, one obtains M^2 = M_0^2·J, k = M_0^2, J = n·J_0. The values of n are even in the old fashioned string model for a Regge trajectory with a fixed parity. J_0 = 2 implies the same result, so that the basic unit might be called a "strong graviton". Of course, also J=0 states with the same mass are expected to be there and are actually required by the explanation of the spin puzzle of the proton.

3. The g=1 super-canonical gluon has mass squared M_0^2 = 9·m_107^2. The bound states of super-canonical bosons with net mass squared M_0^2 = 16·m_107^2 are responsible for the ground state mass of baryons in the model predicting baryon masses with a few per cent accuracy. The value of M_0^2 is 0.88 GeV^2, to be compared with its nominal value 0.9 GeV^2, so that also the hadronic string tension is predicted correctly!

This picture also allows one to consider a possible mechanism explaining the spin puzzle of the proton, and I have already earlier considered an explanation in terms of super-canonical spin (see this), assuming that the state is a superposition of an ordinary (J=0, J_q=1/2) state and a (J=2, J_q=3/2) state in which the super-canonical bound state has spin 2.

To sum up, combining these results with the earlier ones one can say that besides elementary particle masses all basic parameters of hadron physics are predicted correctly from the p-adic length scale hypothesis plus simple number theoretical considerations involving only integer arithmetics. This is quite an impressive result.
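The string tension claim is easy to check numerically. A minimal sketch, assuming m_107 ≈ 233.55 MeV (the p-adic mass unit quoted elsewhere in the text; the exact quoted value 0.88 GeV^2 depends on the rounding of m_107):

```python
import math

# m(107) ≈ 233.55 MeV, the p-adic mass unit of the k=107 hadronic space-time sheet.
m_107 = 0.23355          # GeV

# Baryonic ground state: net conformal weight 16 => M_0^2 = 16 * m_107^2
M0_sq = 16 * m_107**2
print(M0_sq)             # ~0.87 GeV^2, close to the nominal string tension 0.9 GeV^2

# Regge trajectory masses M^2 = M_0^2 * J for the even values J = 2, 4, 6
for J in (2, 4, 6):
    print(J, math.sqrt(M0_sq * J))   # masses in GeV
```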
In my humble opinion, it would be high time for the string people and other colleagues to realize that they have already lost the boat badly, and the situation worsens if they refuse to meet the reality described so elegantly by TGD. There is an enormous amount of work to be carried out, and the early bird gets the worm;-).

Progress in the understanding of baryon masses

In the previous posting I explained the progress made in the understanding of mesonic masses, basically due to the realization how the Chern-Simons coupling k determines the Kähler coupling strength and p-adic temperature, discussed in a still earlier posting. Today I took a more precise look at the baryonic masses.

In the case of scalar mesons quarks give the dominating contribution to the meson mass. This is not true for spin 1/2 baryons, and the dominating contribution must have some other origin. The identification of this contribution has remained a challenge for years. The realization of a simple numerical coincidence related to the p-adic mass squared unit led to an identification of this contribution in terms of states created by purely bosonic generators of the super-canonical algebra, having as a space-time correlate CP2 type vacuum extremals topologically condensed at the k=107 space-time sheet (or having this space-time sheet as field body). Proton and neutron masses are predicted with 0.5 per cent accuracy and the Δ-N mass splitting with 0.2 per cent accuracy. A further outcome is a possible solution to the spin puzzle of the proton.

1. Does the k=107 hadronic space-time sheet give the large contribution to baryon mass?

In the sigma model for baryons the dominating contribution to the mass of the baryon results as a vacuum expectation value of the scalar field, and mesons are analogous to Goldstone bosons whose masses are basically due to the masses of light quarks. This would suggest that the k=107 gluonic/hadronic space-time sheet gives a large contribution to the mass squared of the baryon.
p-Adic thermodynamics allows one to expect that the contribution to the mass squared is in a good approximation of the form Δm^2 = n·m^2(107), where m^2(107) is the minimum possible p-adic mass squared and n a positive integer. One has m(107) = 2^10·m(127) = 2^10·m_e/5^{1/2} = 233.55 MeV for Y_e = 0 favored by the top quark mass.

1. n=11 predicts (m(n), m(p)) = (944.5, 939.3) MeV; the actual masses are (m(n), m(p)) = (939.6, 938.3) MeV. Coulombic repulsion between u quarks could reduce the p-n difference to a realistic value.

2. The Λ-n mass splitting would be 184.7 MeV for k(s)=111, to be compared with the real difference which is 176.0 MeV. Note however that the color magnetic spin-spin splitting requires that the ground state mass squared is larger than 11·m_0^2(107).

2. What is responsible for the large ground state mass of the baryon?

The observations made above do not leave much room for alternative models. The basic problem is the identification of the large contribution to the mass squared coming from the hadronic space-time sheet with k=107. This contribution could have the energy of color fields as a space-time correlate.

1. The assignment of the energy to the vacuum expectation value of the sigma boson does not look very promising, since the very existence of the sigma boson is questionable and it does not relate naturally to classical color gauge fields. More generally, since no gauge symmetry breaking is involved, the counterpart of the Higgs mechanism as a development of a coherent state of scalar bosons does not look like a plausible idea.

2. One can however consider the possibility of a Bose-Einstein condensate or of a more general many-particle state of massive bosons possibly carrying color quantum numbers. A many-boson state of exotic bosons at the k=107 space-time sheet having net mass squared m^2 = n·m_0^2(107), n = ∑_i n_i, could explain the baryonic ground state mass. Note that the possible values of n_i are predicted by p-adic thermodynamics with T_p = 1.

3.
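The mass unit arithmetic above can be sanity-checked. A sketch assuming m(127) ≈ m_e/5^{1/2}, the standard p-adic electron mass relation (the small residual difference from the quoted 233.55 MeV reflects the Y_e = 0 second order correction):

```python
# Check of the p-adic mass unit arithmetic quoted above:
# m(107) = 2^10 * m(127), with m(127) ≈ m_e / 5^(1/2).
m_e = 0.510999        # electron mass in MeV
m_127 = m_e / 5**0.5
m_107 = 2**10 * m_127
print(m_107)          # ≈ 234 MeV, vs the quoted 233.55 MeV for Y_e = 0
```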
Glueballs cannot be in question

Glueballs (see this and this) define the first candidate for the exotic boson in question. There are however several objections against this idea.

1. QCD predicts that the lightest glueballs, consisting of two gluons, have J^PC = 0^++ and 2^++ and have mass about 1650 MeV. If one takes QCD seriously, one must exclude this option. One can also argue that light glueballs should have been observed long ago, and wonder why their Bose-Einstein condensate is not associated with mesons.

2. There are also theoretical objections in the TGD framework.
• Can one really apply p-adic thermodynamics to the bound states of gluons? Even if this is possible, can one assume the p-adic temperature T_p = 1 for them if it is quite generally T_p = 1/26 for gauge bosons consisting of fermion-antifermion pairs (see this)?
• Baryons are fermions, and one can argue that they must correspond to a single space-time sheet rather than a pair of positive and negative energy space-time sheets required by the glueball Bose-Einstein condensate realized as wormhole contacts connecting these space-time sheets.

4. Do exotic colored bosons give rise to the ground state mass of baryon?

The objections listed above lead to an identification of the bosons responsible for the ground state mass which looks much more promising.

1. TGD predicts exotic bosons, which can be regarded as super-conformal partners of fermions, created by the purely bosonic part of the super-canonical algebra, whose generators belong to representations of the color group and the 3-D rotation group but have vanishing electro-weak quantum numbers. Their spin is analogous to orbital angular momentum, whereas the spin of ordinary gauge bosons reduces to fermionic spin. Thus an additional bonus is a possible solution to the spin puzzle of the proton.

2.
Exotic bosons are single-sheeted structures, meaning that they correspond to a single wormhole throat associated with a CP2 type vacuum extremal, and would thus be absent in the meson sector as required. T_p = 1 would characterize these bosons by super-conformal symmetry. The only contribution to the mass would come from the genus, and the g=0 state would be massless, so that these bosons cannot condense on the ground state unless they suffer topological mixing with higher genera and become massive in this manner. The g=1 glueball would have mass squared 9·m_0^2(k), which is smaller than 11·m_0^2. For a ground state containing two g=1 exotic bosons one would have ground state mass squared 18·m_0^2, corresponding to (m(n), m(p)) = (1160.8, 1155.6) MeV. Electromagnetic Coulomb interaction energy can reduce the p-n mass splitting to a realistic value.

3. Color magnetic spin-spin splitting for baryons gives a test for this hypothesis. The splitting of the conformal weight is by group theoretic arguments of the same general form as that of the color magnetic energy and given by (m^2(N), m^2(Δ)) = (18·m_0^2 − X, 18·m_0^2 + X) in the absence of topological mixing. n=11 for the nucleon mass implies X=7 and m(Δ) = 5·m_0(107) = 1338 MeV, to be compared with the actual mass m(Δ) = 1232 MeV. The prediction is too large by about 8.6 per cent. If one allows topological mixing one can have m^2 = 8·m_0^2 instead of 9·m_0^2. This gives m(Δ) = 1240 MeV, so that the error is only 0.6 per cent. The mass of the topologically mixed exotic boson would be 660.6 MeV and equals m_0(104). Amusingly, k=104 happens to correspond to the inverse of α_K for gauge bosons.

4. In the simplest situation a two-particle state of these exotic bosons could be responsible for the ground state mass of the baryon.
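The coincidence between the mixed boson mass and m_0(104) is pure p-adic scaling arithmetic: sqrt(8)·m_0(107) and m_0(107)·2^((107-104)/2) are the same number. A quick check, assuming m_0(107) = 233.55 MeV:

```python
# Check of the topologically mixed exotic boson mass quoted above:
# m^2 = 8 * m_0^2(107) should give 660.6 MeV and coincide with m_0(104).
m_107 = 233.55                         # MeV, p-adic mass unit for k=107
m_mixed = 8**0.5 * m_107               # sqrt(8) * m_0(107)
m_104 = m_107 * 2**((107 - 104) / 2)   # p-adic scaling: mass scales by 2^(Δk/2)
print(m_mixed, m_104)                  # both ≈ 660.6 MeV
```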
Also the baryonic spin puzzle, caused by the fact that quarks give only a small contribution to the spin of baryons, could find a natural solution, since these bosons could give to the spin of the baryon an angular momentum like contribution having nothing to do with the angular momentum of quarks.

5. The large value of the Kähler coupling strength α_K = 1/4 would characterize the hadronic space-time sheet, as opposed to α_K = 1/104 assignable to the gauge boson space-time sheets. This would make the color gauge coupling characterizing their interactions strong. This would be a precise articulation of what the generation of the hadronic space-time sheet in the phase transition to a non-perturbative phase of QCD really means.

6. The identification would also lead to a physical interpretation of super(-conformal) symmetries. It must be emphasized that the super-canonical generators do not create ordinary fermions, so that ordinary gauge bosons need not have super-conformal partners. One can of course imagine that also ordinary gauge bosons could have super-partners obtained by assuming that one wormhole throat (or both of them) is purely bosonic. If both wormhole throats are purely bosonic, the Higgs mechanism would leave the state essentially massless unless p-adic thermal stability allows T_p = 1. Color confinement could be responsible for the stability. For super-partners having fermion number the Higgs mechanism would make this kind of state massive unless the quantum numbers are those of a right-handed neutrino.

7. The importance of the result is that it becomes possible to derive general mass formulas also for the baryons of scaled up copies of QCD possibly associated with various Mersenne primes and Gaussian Mersennes. In particular, the mass formulas for "electro-baryons" and "muon-baryons" can be deduced (see this).

For more details about p-adic mass calculations of elementary particle masses see the chapter Massless particles and particle massivation.
The chapter p-Adic mass calculations: hadron masses describes the model for hadronic masses. The chapter p-Adic mass calculations: New Physics explains the new view about Kähler coupling strength. The model for hadron masses revisited The blog of Tommaso Dorigo contains two postings which served as a partial stimulus to reconsider the model of hadron masses. The first posting is The top quark mass measured from its production rate and tells about a new high precision determination of top quark mass reducing its value to the most probable value 169.1 GeV in the allowed interval 164.7-175.5 GeV. The second posting Rumsfeld hadrons tells about the "crackpottish" finding that the mass of the Bc meson is in an excellent approximation the average of the masses of the Ψ and Υ mesons. The TGD based model for hadron masses makes it possible to understand this finding. 1. Motivations There were several motivations for looking again at the p-adic mass calculations for quarks and hadrons. 1. If one takes seriously the prediction that the p-adic temperature is Tp=1 for fermions and Tp=1/26 for gauge bosons as suggested by the considerations of the blog posting (see also this), and accepts the picture about fermions as topologically condensed CP2 type vacuum extremals with a single light-like wormhole throat and gauge bosons and Higgs boson as wormhole contacts with two light-like wormhole throats and connecting space-time sheets with opposite time orientation and energy, one is led to the conclusion that although fermions can couple to Higgs, the Higgs vacuum expectation value must vanish for fermions. One must check whether it is indeed possible to understand the fermion masses from p-adic thermodynamics without the Higgs contribution. This turns out to be the case. This also means that the coupling of fermions to Higgs can be arbitrarily small, which could explain why Higgs has not been detected. 2. There have been some problems in understanding top quark mass in TGD framework.
Depending on the selection of the p-adic prime p≈2^k characterizing the top quark, the mass is too high or too low by about 15-20 per cent. This problem had a trivial resolution: it was due to a calculational error caused by the inclusion of only the topological contribution depending on the genus of the partonic 2-surface. The positive surprise was that the maximal value for CP2 mass corresponding to the vanishing of the second order correction to electron mass and the maximal value of the second order contribution to top mass predicts exactly the recent best value 169.1 GeV of top mass. This in turn makes it possible to clean up uncertainties in the model of hadron masses. 2. The model for hadron masses The basic assumptions in the model of hadron masses are the following. 1. Quarks are characterized by two kinds of masses: current quark masses assignable to free quarks and constituent quark masses assignable to bound state quarks (see this). This can be understood if the integer kq characterizing the p-adic length scale of the quark is different for free quarks and bound quarks so that bound state quarks are much heavier than free quarks. A further generalization is that the value of k can depend on the hadron. This leads to an elegant model explaining meson and baryon masses within a few per cent. The model becomes more precise from the fixing of the CP2 mass scale from top mass (note that top quark is always free since toponium does not exist). This predicts several copies of various quarks and there is evidence for three copies of top corresponding to the values kt=95,94,93. Also current quarks u and d can correspond to several values of k. 2. The mass formula for the lowest mesons is extremely simple. If the quarks are characterized by the same p-adic prime, their conformal weights and thus mass squared values are additive: m^2(B) = m^2(q1) + m^2(q2). If the p-adic primes labelling the quarks are different, masses are additive: m(B) = m(q1) + m(q2). This formula generalizes in an obvious manner to the case of baryons.
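The two additivity rules can be checked against the Bc co-incidence discussed below; a minimal numerical sketch (the rule m(k+1)=m(k)/2^(1/2) per unit of k is the p-adic length scale scaling, and the Υ mass 9460.3 MeV is standard input not quoted in the text):

```python
import math

# Measured masses in MeV; m(Psi) is quoted in the text, m(Upsilon) is standard input.
m_psi, m_upsilon = 3096.9, 9460.3

# Same-prime additivity for c-cbar and b-bbar: m(Psi) = sqrt(2)*m(c,k_Psi), etc.
m_c = m_psi / math.sqrt(2)
m_b = m_upsilon / math.sqrt(2)

# In Bc both k values are shifted up by one unit, scaling each quark mass by
# 1/sqrt(2), and for different p-adic primes the quark masses are additive:
m_Bc = (m_c + m_b) / math.sqrt(2)  # equals [m(Psi) + m(Upsilon)]/2
```

The result reproduces the "average of Ψ and Υ" value 6278.6 MeV, within the CDF error bars on M(Bc)=6276.5+/-4.8 MeV.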
Thus apart from effects like color magnetic spin-spin splitting, describable p-adically for diagonal mesons and in terms of color magnetic interaction energy in the case of non-diagonal mesons, the basic effect of binding is the modification of the integer k labelling the quark. 3. The formula produces the masses of mesons and also baryons with a few per cent accuracy. There are however some exceptions. 1. The mass of the η' meson becomes slightly too large. In the case of η' a negative color binding conformal weight can reduce the mass. Also mixing with a two gluon gluonium can save the situation. 2. Some light non-diagonal mesons such as K mesons also have a slightly too large mass. In this case negative color binding energy can save the situation. 3. An example of how the mesonic mass formulas work The mass formulas make it possible to understand why the "crackpottish" mass formula for Bc holds true. The mass of the Bc meson (bound state of b and c quark and antiquark) has been measured with high precision by CDF (see the blog posting by Tommaso Dorigo) and is found to be M(Bc)=6276.5+/- 4.8 MeV. Dorigo notices that there is a strange "crackpottian" co-incidence involved. Take the masses of the fundamental mesons made of c anti-c (Ψ) and b anti-b (Υ), add them, and divide by two. The value of the mass turns out to be 6278.6 MeV, less than one part per mille away from the Bc mass! The general p-adic mass formulas and the dependence of kq on the hadron explain the co-incidence. The mass of Bc is given as m(Bc)= m(c,kc(Bc))+ m(b,kb(Bc)), whereas the masses of Ψ and Υ are given by m(Ψ)= 2^(1/2)m(c,kΨ) and m(Υ)= 2^(1/2)m(b,kΥ). Assuming kc(Bc)= kc(Ψ) and kb(Bc)= kb(Υ) would give m(Bc)= 2^(-1/2)[m(Ψ)+m(Υ)], which is by a factor 2^(1/2) higher than the prediction of the "crackpot" formula. kc(Bc)= kc(Ψ)+1 and kb(Bc)= kb(Υ)+1 however give the correct result. As such the formula makes sense but the one part per mille accuracy must be an accident in TGD framework. 1. The predictions for Ψ and Υ masses are too small by 2 resp.
5 per cent in the model assuming no effective scaling down of CP2 mass. 2. The formula makes sense if the quarks are effectively free inside hadrons and the only effect of the binding is the change of the mass scale of the quark. This makes sense if the contribution of the color interactions, in particular color magnetic spin-spin splitting, to the heavy meson masses is small enough. Ψ and ηc have spins 1 and 0 and their masses differ by 3.7 per cent (m(ηc)=2980 MeV and m(Ψ)= 3096.9 MeV) so that color magnetic spin-spin splitting is measured using per cent as a natural unit. Does the quantization of Kähler coupling strength reduce to the quantization of Chern-Simons coupling at partonic level? Kähler coupling strength associated with Kähler action (Maxwell action for the induced Kähler form) is the only coupling constant parameter in quantum TGD, and its value (or values) is in principle fixed by the condition of quantum criticality since Kähler coupling strength is completely analogous to a critical temperature. Quantum TGD at the parton level reduces to almost topological QFT for light-like 3-surfaces. This almost TQFT involves Abelian Chern-Simons action for the induced Kähler form. This raises the question whether the integer valued quantization of the Chern-Simons coupling k could predict the values of the Kähler coupling strength. I considered this kind of possibility already more than 15 years ago, but only the reading of the introduction of the recent paper by Witten about his new approach to 3-D quantum gravity led to the discovery of a childishly simple argument that the inverse of the Kähler coupling strength could indeed be proportional to the integer valued Chern-Simons coupling k: 1/αK=4k if all factors are correct. k=26 is forced by the comparison with some physical input. Also the p-adic temperature could be identified as Tp=1/k. 1.
Quantization of Chern-Simons coupling strength For Chern-Simons action the quantization of the coupling constant guaranteeing so called holomorphic factorization is implied by the integer valuedness of the Chern-Simons coupling strength k. As Witten explains, this follows from the quantization of the first Chern class for closed 4-manifolds plus the requirement that the phase defined by Chern-Simons action equals 1 for a boundaryless 4-manifold obtained by gluing together two 4-manifolds along their boundaries. As explained by Witten in his paper, one can also consider an "anyonic" situation in which k has spectrum Z/n^2 for an n-fold covering of the gauge group, and in the dark matter sector one can consider this kind of quantization. 2. Formula for Kähler coupling strength The quantization argument for k seems to generalize to the case of TGD. What is clear is that this quantization should closely relate to the quantization of the Kähler coupling strength appearing in the 4-D Kähler action defining the Kähler function for the world of classical worlds and conjectured to result as a Dirac determinant. The conjecture has been that gK^2 has only a single value. With some physical input one can make educated guesses about this value. The connection with the quantization of the Chern-Simons coupling would however suggest a spectrum of values. This spectrum is easy to guess. 1. The U(1) counterpart of Chern-Simons action is obtained as the analog of the "instanton" density obtained from Maxwell action by replacing J∧*J with J∧J. This looks natural since for the self dual J associated with CP2 type extremals Maxwell action reduces to instanton density and therefore to a Chern-Simons term. Also the interpretation as Chern-Simons action associated with the classical SU(3) color gauge field defined by the Killing vector fields of CP2 and having Abelian holonomy is possible. Note however that the instanton density is multiplied by the imaginary unit in the action exponential of the path integral.
One should find a justification for this "Wick rotation" not changing the value of the coupling strength, and later this kind of justification will be proposed. 2. The Wick rotation argument suggests the correspondence k/4π = 1/(4gK^2) between the Chern-Simons coupling strength and the Kähler coupling strength gK appearing in the 4-D Kähler action. This would give gK^2 = π/k. The spectrum of 1/αK would be integer valued. The result is very nice from the point of view of the number theoretic vision since the powers of αK appearing in perturbative expansions would be rational numbers (ironically, radiative corrections might vanish but this might happen only for these rational values of αK!). 3. It is interesting to compare the prediction with the experimental constraints on the value of αK. The basic empirical input is that the electroweak U(1) coupling strength reduces to the Kähler coupling at the electron length scale (see this). This gives 1/αK = 1/αU(1)(M127) ≈ 104.1867, which corresponds to k=26.0467. k=26 would give 1/αK = 104: the deviation would be only .2 per cent and one would obtain an exact prediction for αU(1)(M127)! This would explain why the inverse of the fine structure constant is so near to 137 but not quite. Amusingly, k=26 is the critical space-time dimension of the bosonic string model. Also the conjectured formula for the gravitational constant in terms of αK and the p-adic prime p involves all primes smaller than 26 (see this). 4. Note however that if k is allowed to have values in Z/n^2, the strongest possible coupling strength is scaled to n^2/4 if hbar is not scaled: already for n=2 the resulting perturbative expansion might fail to converge. In the scalings of hbar associated with M4 degrees of freedom hbar however scales as 1/n^2 so that the spectrum of αK would remain invariant. 3. Justification for Wick rotation It is not too difficult to believe the formula 1/αK = qk with q some rational number.
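Before turning to that justification, the numerology of point 3 in the previous subsection is easy to reproduce (the measured value 104.1867 is the one quoted above):

```python
# Empirical input: 1/alpha_U(1)(M_127) = 104.1867 at the electron length scale.
inv_alpha_measured = 104.1867

# Proposed spectrum 1/alpha_K = 4k with integer Chern-Simons coupling k:
k_fitted = inv_alpha_measured / 4   # ~26.047, close to the integer 26
inv_alpha_k26 = 4 * 26              # = 104 for k = 26

# Deviation of the k=26 prediction from the measured value, in per cent:
deviation = 100.0 * (inv_alpha_measured - inv_alpha_k26) / inv_alpha_measured
```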
q=4 however requires a justification for the Wick rotation bringing the imaginary unit to the Chern-Simons action exponential but lacking from the Kähler function exponential. In this kind of situation one might hope that an additional symmetry might come to the rescue. The guess is that the number theoretic vision could justify this symmetry. 1. To see what this symmetry might be, consider the generalization of the Montonen-Olive duality obtained by combining the theta angle and gauge coupling into a single complex number via the formula τ = θ/2π + i4π/g^2. What this means in the present case is that for CP2 type vacuum extremals (see this) Kähler action and instanton term reduce by self duality to the Kähler action obtained by the replacement of 1/g^2 with -iτ/4π. The first duality τ→τ+1 corresponds to the periodicity of the theta angle. The second duality τ→-1/τ corresponds to the generalization of the Montonen-Olive duality α→ 1/α. These dualities are definitely not symmetries of the theory in the present case. 2. Despite the failure of the dualities, it is interesting to write the formula for τ in the case of Chern-Simons theory assuming gK^2=π/k with k>0 holding true for Kac-Moody representations. What one obtains is τ = 4k(1-i). The allowed values of τ are integer spaced along a line whose direction angle corresponds to the phase exp(i2π/n), n=4. The transformations τ→ τ+ 4(1-i) generate a dynamical symmetry and as Lorentz transformations define a subgroup of the group E2 leaving invariant a light-like momentum (this brings in mind quantum criticality!). One should understand why this line is so special. 3. This formula conforms with the number theoretic vision suggesting that the allowed values of τ belong to an integer spaced lattice. Indeed, if one requires that the phase angles are proportional to vectors with rational components, then only phase angles associated with orthogonal triangles with short sides having integer valued lengths m and n are possible.
An additional condition is that the phase angles correspond to roots of unity! This leaves only m=n and m=-n>0 into consideration so that one would have τ= n(1-i), n>0, from k>0. 4. Notice that the theta angle is a multiple of 8kπ so that a trivial strong CP breaking results and no QCD axion is needed (this if one takes seriously the equivalence of Kähler action to the classical color YM action). 4. Is p-adicization needed and possible only in 3-D sense? The action of a CP2 type extremal is given as S = π/(8αK) = kπ/2. Therefore the exponent of Kähler action appearing in the vacuum functional would be exp(kπ), known to be transcendental (a power of Gelfond's constant exp(π)). Also its powers are transcendental. If one wants to p-adicize also in the 4-D sense, this raises a problem. Before considering this problem, consider first the 4-D p-adicization more generally. 1. The definition of Kähler action and Kähler function in the p-adic case can be obtained only by algebraic continuation from the real case since no satisfactory definition of the p-adic definite integral exists. These difficulties are even more serious at the level of the configuration space unless algebraic continuation makes it possible to reduce everything to the real context. If TGD is an integrable theory in the sense that the functional integral over 3-surfaces reduces to calculable functional integrals around the maxima of Kähler function, one might dream of achieving the algebraic continuation of real formulas. Note however that for lightlike 3-surfaces the restriction to a category of algebraic surfaces is essential for the re-interpretation of real equations of a 3-surface as p-adic equations. It is far from clear whether also the preferred extremals of Kähler action have this property. 2. Is 4-D p-adicization really needed? The extension of light-like partonic 3-surfaces to 4-D space-time surfaces brings in the classical dynamical variables necessary for quantum measurement theory. p-Adic physics defines correlates for cognition and intentionality.
One can argue that these are not quantum measured in the conventional sense so that 4-D p-adic space-time sheets would not be needed at all. The p-adic variant of the exponent of Chern-Simons action can make sense using a finite-dimensional algebraic extension defined by q=exp(i2π/n) and restricting the allowed lightlike partonic 3-surfaces so that the exponent of the Chern-Simons form belongs to this extension of p-adic numbers. This restriction is very natural from the point of view of the dark matter hierarchy involving extensions of p-adics by the quantum phase q. If one remains optimistic and wants to p-adicize also in the 4-D sense, the transcendental value of the vacuum functional for CP2 type vacuum extremals poses a problem (not the only one since the p-adic norm of the exponent of Kähler action can become completely unpredictable). 1. One can also consider extending p-adic numbers by introducing exp(π) and its powers and possibly also π. This would make the extension of p-adics infinite-dimensional, which does not conform with the basic ideas about cognition. Note that e^p is not a p-adic transcendental so that the extension of p-adics by powers of e is finite-dimensional, and if p-adics are first extended by powers of π then the further extension by exp(π) is p-dimensional. 2. A more tricky manner to overcome the problem posed by the CP2 extremals is to notice that CP2 type extremals are necessarily deformed and contain a hole corresponding to the lightlike 3-surface or several of them. This would reduce the value of Kähler action and one could argue that the allowed p-adic deformations are such that the exponent of Kähler action is a p-adic number in a finite extension of p-adics. This option does not look promising. 5. Is the p-adic temperature proportional to the Kähler coupling strength? Kähler coupling strength would have the same spectrum as the p-adic temperature Tp apart from a multiplicative factor. The identification Tp=1/k is indeed very natural since also gK^2 is a temperature like parameter.
The simplest guess is Tp = 1/k. Also gauge coupling strengths are expected to be proportional to gK^2 and thus to 1/k apart from a factor characterizing p-adic coupling constant evolution. That all basic parameters of the theory would have simple expressions in terms of k would be very nice from the point of view of quantum classical correspondence. If the U(1) coupling constant strength at the electron length scale equals αK=1/104, this would give Tp ≈ 1/26. This means that photon, graviton, and gluons would be massless in an excellent approximation for say p=M89, which characterizes electroweak gauge bosons receiving their masses from their coupling to the Higgs boson. For fermions one has Tp=1 so that fermionic lightlike wormhole throats would correspond to the strongest possible coupling strength αK=1/4, whereas gauge bosons identified as pairs of light-like wormhole throats associated with wormhole contacts would correspond to αK=1/104. Perhaps Tp=1/26 is the highest p-adic temperature at which gauge boson wormhole contacts are stable against splitting to a fermion-antifermion pair. Fermions and possible exotic bosons created by bosonic generators of the super-canonical algebra would correspond to a single wormhole throat and could also naturally correspond to the maximal value of the p-adic temperature since there is nothing to which they can decay. A fascinating problem is whether k=26 defines an internally consistent conformal field theory and whether there is something very special in it. Also the thermal stability argument for gauge bosons should be checked. What could go wrong with this picture? The different values for the fermionic and bosonic αK make sense only if the 4-D space-time sheets associated with fermions and bosons can be regarded as disjoint space-time regions.
Gauge bosons correspond to wormhole contacts (deformed pieces of CP2 type extremals) connecting positive and negative energy space-time sheets, whereas fermions would correspond to a deformed CP2 type extremal glued to a single space-time sheet having either positive or negative energy. These space-time sheets should make contact only in the interaction vertices of the generalized Feynman diagrams, where partonic 3-surfaces are glued together along their ends. If this gluing together occurs only in these vertices, fermionic and bosonic space-time sheets are disjoint. For stringy diagrams this picture would fail. To sum up, the resulting overall vision seems to be internally consistent and is consistent with generalized Feynman diagrammatics, predicts exactly the spectrum of αK, makes it possible to identify the inverse of the p-adic temperature with k and to understand the differences between fermionic and bosonic massivation, and reduces Wick rotation to a number theoretic symmetry. One might hope that the additional objections (to be found sooner or later!) could make it possible to develop a more detailed picture. For more details see the chapter p-Adic mass calculations: New Physics. Dark matter hierarchy corresponds to a hierarchy of quantum critical systems in modular degrees of freedom Dark matter hierarchy corresponds to a hierarchy of conformal symmetries Zn of partonic 2-surfaces with genus g≥ 1 such that factors of n define subgroups of conformal symmetries of Zn. By the decomposition Zn=∏p|n Zp, where p|n tells that p divides n, this hierarchy corresponds to a hierarchy of increasingly quantum critical systems in modular degrees of freedom. For a given prime p one has a sub-hierarchy Zp, Zp^2=Zp× Zp, etc. such that the moduli at the (n+1):th level are contained by those at the n:th level. In a similar manner the moduli of Zn are sub-moduli for each prime factor of n.
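The prime decomposition organizing this hierarchy, and the ruler-and-compass integers singled out below (a power of 2 times distinct Fermat primes), are elementary to compute; an illustrative sketch:

```python
def prime_levels(n):
    """Factor n = prod p^a into primes with multiplicities; each prime p
    contributes a sub-hierarchy Z_p, Z_p^2, ..., with a levels."""
    levels, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            levels[p] = levels.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        levels[n] = levels.get(n, 0) + 1
    return levels

def is_ruler_and_compass(n):
    """True if n is a power of 2 times a product of distinct Fermat primes."""
    while n % 2 == 0:
        n //= 2
    for p in (3, 5, 17, 257, 65537):
        if n % p == 0:
            n //= p
            if n % p == 0:  # repeated Fermat prime factors are excluded
                return False
    return n == 1
```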
This mapping of integers to quantum critical systems conforms nicely with the general vision that biological evolution corresponds to the increase of quantum criticality as Planck constant increases. The group of conformal symmetries could also be a non-commutative discrete group having Zn as a subgroup. This inspires a very short-lived conjecture that only the discrete subgroups of SU(2) allowed by Jones inclusions are possible as conformal symmetries of Riemann surfaces having g≥ 1. Besides Zn one could have the tetrahedral and icosahedral groups plus the cyclic groups Z2n with reflection added, but not Z2n+1 nor the symmetry group of the cube. The conjecture is wrong. Consider the orbit of a subgroup of the rotation group on the standard sphere of E3, put a handle at one point of the orbit such that it is invariant under rotations around the axis going through the point, and apply the elements of the subgroup. You obtain a Riemann surface having the subgroup as its isometries. Hence all subgroups of SU(2) can act as conformal symmetries. The number theoretically simple ruler-and-compass integers having as factors only first powers of Fermat primes and a power of 2 would define a physically preferred sub-hierarchy of quantum criticality for which subsequent levels would correspond to powers of 2: a connection with the p-adic length scale hypothesis suggests itself. Spherical topology is exceptional since in this case the space of conformal moduli is trivial and the conformal symmetries correspond to the entire SL(2,C). This would suggest that only the fermions of the lowest generation corresponding to the spherical topology are maximally quantum critical. This brings in mind Jones inclusions for which the defining subgroup equals SU(2) and the Jones index equals M/N = 4. In this case all discrete subgroups of SU(2) label the inclusions. These inclusions would correspond to the fiber space CP2→ CP2/U(2) consisting of geodesic spheres of CP2.
In this case the discrete subgroup might correspond to a selection of a subgroup of SU(2)⊂ SU(3) acting non-trivially on the geodesic sphere. Cosmic strings X2× Y2 ⊂ M4×CP2 having geodesic spheres of CP2 as their ends could correspond to this phase dominating the very early cosmology. For more details see the chapter Construction of Elementary Particle Vacuum Functionals. Elementary particle vacuum functionals for dark matter and why fermions can have only three families One of the open questions is how the dark matter hierarchy reflects itself in the properties of elementary particles. The basic questions are how the quantum phase q=exp(i2π/n) makes itself visible in the solution spectrum of the modified Dirac operator D and how elementary particle vacuum functionals depend on q. Considerable understanding of these questions emerged recently. One can generalize modular invariance to fractional modular invariance for Riemann surfaces possessing Zn symmetry and perform a similar generalization for theta functions and elementary particle vacuum functionals. In particular, without any further assumptions n=2 dark fermions have only three families. The existence of a space-time correlate for fermionic 2-valuedness suggests that fermions quite generally correspond to even values of n, so that this result would hold quite generally. Elementary bosons (actually exotic particles) would correspond to n=1, and more generally odd values of n, and could also have higher families. Cold fusion - in news again Cold fusion, whose history begins from the announcement of Fleischmann and Pons in 1989, is gradually making its way through the thick walls of arrogant dogmatism and prejudices, and - expressing it less diplomatically - of collective academic stupidity. The name of Frank Gordon is associated with the breakthrough experiment. Congratulations to the pioneers. There are popular articles in Nature and New Scientist.
Unfortunately these articles are not accessible to everyone, including me. The article Cold Fusion - Extraordinary Evidence, Cold fusion is real should however be available to anyone. A few weeks ago I revised the earlier model of cold fusion. The model explains nicely the selection rules of cold fusion and also the observed transmutations in terms of exotic states of nuclei for which the color bonds connecting A≤4 nuclei to the nuclear string can also be charged. This makes possible a neutral variant of the deuteron nucleus, which makes it possible to overcome the Coulomb wall. It seems that the emission of highly energetic charged particles, which cannot be due to chemical reactions and could emerge from cold fusion, has been demonstrated beyond doubt by Frank Gordon's team using detectors known as CR-39 plastics of the size scale of a coin, used already earlier in hot fusion research. The method is both cheap and simple. The idea is that travelling charged particles shatter the bonds of the plastic's polymers, leaving pits or tracks in the plastic. Under the conditions claimed to make cold fusion possible (1 deuterium per 1 Pd nucleus, which in the TGD based model makes possible the phase transition of D to its neutral variant by the emission of an exotic dark W boson with interaction range of the order of the atomic radius) tracks and pits appear in the detector during a short period of time. For details see the new chapter Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". The older model is discussed in the chapter TGD and Nuclear Physics. De-coherence and the differential topology of nuclear reactions I have already described the basic ideas of the nuclear string model in the previous summaries.
The nuclear string model allows a topological description of nuclear decays in terms of closed string diagrams and it is interesting to look at what characteristic predictions follow without going to detailed quantitative modelling of stringy collisions, possibly using some variant of string models. In the de-coherence process explaining giant resonances, eye-glass type singularities of the closed nuclear string appear and make possible nuclear decays as decays of a closed string to closed strings. 1. At the level of 4He sub-strings the simplest singularities correspond to 4→ 3+1 and 4→ 2+2 eye-glass singularities. The first one corresponds to the low energy GR and the second to one of the higher energy GRs. They can naturally lead to decays in which a nucleon or deuteron is emitted in the decay process. The singularities 4→ 2+1+1 resp. 4→ 1+1+1+1 correspond to eye-glasses with three resp. four lenses and mean the decay of 4He to a deuteron and two nucleons resp. 4 nucleons. The prediction is that the emission of a deuteron requires a considerably larger excitation energy than the emission of a single nucleon. For GR at the level of A=3 nuclei analogous considerations apply. Taking into account the possible tunnelling of the nuclear strings from the nuclear space-time sheet modifies this simple picture. 2. For GR in the scale of entire nuclei the corresponding singular configurations typically make possible the emission of an alpha particle. Considerably smaller collision energies should be able to induce the emission of alpha particles than the emission of nucleons if only stringy excitations matter. The excitation energy needed for the emission of an alpha particle is predicted to increase with A since the number n of 4He nuclei increases with A. For instance, for Z=N=2n nuclei n→ n-1+1 would require the excitation energy (2n-1)Ec=(A/2-1)Ec, Ec≈ .2 MeV. The tunnelling of the alpha particle from the nuclear space-time sheet can modify the situation.
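Ignoring the tunnelling caveat, the threshold rule above gives a linear growth with A that is easy to tabulate (Ec ≈ .2 MeV as in the text; the A values chosen are illustrative):

```python
E_C = 0.2  # MeV, as quoted in the text

def alpha_emission_threshold(A):
    """Excitation energy (A/2 - 1)*Ec for the n -> (n-1)+1 de-coherence
    of a Z = N = 2n nucleus with A = 4n, as stated in the text."""
    return (A / 2 - 1) * E_C

# The threshold grows linearly with A:
thresholds = {A: alpha_emission_threshold(A) for A in (8, 16, 40)}
```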
The decay process allows a differential topological description. Quite generally, in the de-coherence process n→ (n-k)+k the color magnetic flux through the closed string must be reduced from n units to n-k units through the first closed string and to k units through the second one. The reduction of the color magnetic fluxes means the reduction of the total color binding energy from n^2 Ec to ((n-k)^2+k^2)Ec, and the kinetic energy of the colliding nucleons should provide this energy. Faraday's law, which is essentially a differential topological statement, requires the presence of a time dependent color electric field making possible the reduction of the color magnetic fluxes. The holonomy group of the classical color gauge field G^A_αβ is always Abelian in the TGD framework, being proportional to H^A J_αβ, where H^A are color Hamiltonians and J_αβ is the induced Kähler form. Hence it should be possible to treat the situation in terms of the induced Kähler field alone. Obviously, the change of the Kähler (color) electric flux in the reaction corresponds to the change of the Kähler (color) magnetic flux. The change of color electric flux occurs naturally in a collision situation involving changing induced gauge fields. For more details see the chapter Nuclear String Hypothesis. Strong force as scaled and dark electro-weak force? The fiddling with the nuclear string model has led to the following conclusions. 1. A strong isospin dependent nuclear force, which does not reduce to the color force, is necessary in order to eliminate polyneutron and polyproton states (see this). This force contributes practically nothing to the energies of bound states. This can be understood as being due to the cancellation of the isospin scalar and vector parts of this force for them. Only strong isospin singlets and their composites with the isospin doublet (n,p) are allowed for A≤4 nuclei serving as building bricks of the nuclear strings.
Only effective polyneutron states are allowed and they are strong isospin singlets or doublets containing charged color bonds. 2. The force could act in the length scale of nuclear space-time sheets: the k=113 nuclear p-adic length scale is a good candidate for this length scale. One must however be cautious: the contribution to the energy of nuclei is so small that the length scale could be much longer and perhaps the same as in the case of exotic color bonds. Color bonds connecting nuclei correspond to a much longer p-adic length scale and appear in three p-adically scaled up variants corresponding to A<4 nuclei, A=4 nuclei and A>4 nuclei. 3. The prediction of exotic deuterons with vanishing nuclear em charge leads to a simplification of the earlier model of cold fusion explaining its basic selection rules elegantly but requires a scaled variant of the electro-weak force in the length scale of the atom (see this and this). What is then this mysterious strong force? And how abundant are these copies of the color and electro-weak forces actually? Is there some unifying principle telling which of them are realized? From the foregoing plus the TGD inspired model for quantum biology involving also dark and scaled variants of the electro-weak and color forces it is becoming more and more obvious that scaled up variants of both QCD and electro-weak physics appear on various space-time sheets of the TGD Universe. This raises the following questions. 1. Could the isospin dependent strong force between nucleons be nothing but a p-adically scaled up (with respect to length scale) version of the electro-weak interactions in the p-adic length scale defined by Mersenne prime M89, with the new length scale assigned to gluons and characterized by Mersenne prime M107?! Strong force would be electro-weak force but in the length scale of the hadron! Or possibly in the length scale of the nucleus (keff=107+6=113) if a dark variant of the strong force with h = nh0 = 2^3 h0 is in question! 2.
Why shouldn't there be a scaled up variant of the electro-weak force also in the p-adic length scale of the nuclear color flux tubes? 3. Could it be that all Mersenne primes and also other preferred p-adic primes correspond to entire standard model physics including also gravitation? Could there be a kind of natural selection which selects the p-adic survivors, as proposed long time ago? Positive answers to the last questions would clean the air and have quite a strong unifying power in the rather speculative and very-many-sheeted TGD Universe. 1. The prediction for new QCD type physics at M89 would get additional support. Perhaps also LHC provides it within the next half decade. 2. Electro-weak physics for Mersenne prime M127 assigned to electron and exotic quarks and color excited leptons would be predicted. This would predict the exotic quarks appearing in nuclear string model and conform with the 15 year old leptohadron hypothesis (leptohadrons result as bound states of colored excitations of leptons, see this and also this). M127 dark weak physics would also make possible the phase transition transforming ordinary deuterium in Pd target to exotic deuterium with vanishing nuclear charge. The most obvious objection against this unifying vision is that hadrons decay only according to the electro-weak physics corresponding to M89. If they would decay according to M107 weak physics, the decay rates would be much much faster since the mass scale of electro-weak bosons would be reduced by a factor 2^-9 (this would give an increase of decay rates by a factor 2^36 from the propagator of the weak boson). This is however not a problem if the strong force is a dark variant with, say, n=8 corresponding to the nuclear length scale. This crazy conjecture might work if one accepts the dark Bohr rules! For more details see the chapter TGD and Nuclear Physics and the new chapter Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". 
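The decay-rate estimate above is simple power counting: the decay amplitude contains the weak boson propagator 1/m2, so the rate scales as 1/m4, and reducing the boson mass scale by 2^-9 enhances the rate by (2^9)^4 = 2^36. A minimal arithmetic check (the factors are those quoted in the text; the 1/m4 scaling is standard propagator power counting):

```python
# Power-counting check: rate ~ |amplitude|^2 ~ (1/m^2)^2 = 1/m^4
mass_reduction = 2**-9  # weak boson mass scale reduced by 2^-9 (from the text)
rate_enhancement = (1 / mass_reduction)**4  # rate scales as 1/m^4
print(rate_enhancement == 2**36)  # True: a 2^-9 mass reduction gives a 2^36 rate increase
```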
MiniBooNE and LSND are consistent with each other in TGD Universe The MiniBooNE group has published its first findings concerning neutrino oscillations in the mass range studied in the LSND experiments. For the results see the press release, the guest posting of Dr. Heather Ray in Cosmic Variance, and the more technical article A Search for Electron Neutrino in Δ m2=1 eV2 scale by the MiniBooNE group. 1. The motivation for MiniBooNE Neutrino oscillations are not well-understood. Three experiments, LSND, atmospheric neutrinos, and solar neutrinos, show oscillations but in widely different mass regions (1 eV2, 3×10^-3 eV2, and 8×10^-5 eV2). This is the problem. In TGD framework the explanation would be that neutrinos can appear in several p-adically scaled up variants with different mass scales and therefore different scales for the differences Δm2 for neutrino masses, so that one should not try to explain the results of these experiments using a single neutrino mass scale. TGD is however not mainstream physics, so that colleagues stubbornly try to put all feet in the same shoe (Dear feet, I am sorry for this: I can assure that I have done my best to tell the colleagues but they do not want to listen;-)). One can of course understand the stubbornness of colleagues. In the single-sheeted space-time where colleagues still prefer to live it is very difficult to imagine that the neutrino mass scale would depend on neutrino energy (on the space-time sheet at which topological condensation occurs, using TGD language) since neutrinos interact so extremely weakly with matter. The best known attempt to assign a single mass scale to all neutrinos has been based on the use of so called sterile neutrinos which do not have electro-weak couplings. This approach is an ad hoc trick and rather ugly mathematically. 2. The result of MiniBooNE experiment The purpose of the MiniBooNE experiment was to check whether the LSND result Δm2=1 eV2 is genuine. 
The group used a muon neutrino beam and looked whether transformations of muonic neutrinos to electron neutrinos occur in the mass squared region considered. No such transitions were found, but there was evidence for transformations at low neutrino energies. What at first looks like an over-diplomatic formulation of the result was given rather than a direct refutation of the LSND results. 3. LSND and MiniBooNE are consistent in TGD Universe The inhabitant of the many-sheeted space-time would not regard the previous statement as a mere diplomatic use of language. It is quite possible that neutrinos studied in MiniBooNE have suffered topological condensation at a different space-time sheet than those in LSND if they are in a different energy range. To see whether this is the case let us look more carefully at the experimental arrangements. 1. In the LSND experiment an 800 MeV proton beam entered a water target and the muon neutrinos resulted from the decay of the produced pions. Muonic neutrinos had energies in the 60-200 MeV range. This one can learn from the article Evidence for νμ→νe oscillations from LSND. 2. In the MiniBooNE experiment an 8 GeV proton beam entered a Beryllium target and muon neutrinos resulted from the decay of the produced pions and kaons. The resulting muonic neutrinos had energies in the range 300-1500 MeV, to be compared with 60-200 MeV! This is it! This one can learn from the article A Search for Electron Neutrino in Δ m2=1 eV2 scale by the MiniBooNE group. Let us try to make this more explicit. 1. Neutrino energy ranges are quite different so that the experiments need not be directly comparable. The mixing obeys the analog of the Schrödinger equation for a free particle with energy replaced with Δm2/E, where E is neutrino energy. Mixing probability as a function of distance L from the source of muon neutrinos is in the 2-component model given by P = sin2(θ)sin2(1.27Δm2L/E). The characteristic length scale for mixing is L = E/Δm2. 
If L is sufficiently small, the mixing is fifty-fifty already before the muon neutrinos enter the system where the measurement is carried out, and no energy dependent mixing is detected in the length scale resolution used. If L is considerably longer than the size of the measuring system, no mixing is observed either. Therefore the result can be understood if Δm2 is much larger or much smaller than E/L, where L is the size of the measuring system and E is the typical neutrino energy. 2. The MiniBooNE experiment found evidence for the appearance of electron neutrinos at low neutrino energies (below 500 MeV), which means direct support for the LSND findings and for the dependence of the neutrino mass scale on its energy relative to the rest system defined by the space-time sheet of the laboratory. 3. Uncertainty Principle inspires the guess Lp ∝ 1/E implying mp ∝ E. Here E is the energy of the neutrino with respect to the rest system defined by the space-time sheet of the laboratory. Solar neutrinos indeed have the lowest energy (below 20 MeV) and the lowest value of Δm2. However, atmospheric neutrinos have energies starting from a few hundreds of MeV and Δm2 is by a factor of order 10 higher. This suggests that the growth of Δm2 with E2 is slower than linear. It is perhaps not the energy alone which matters but the space-time sheet at which neutrinos topologically condense. MiniBooNE neutrinos above 500 MeV would topologically condense at space-time sheets for which the p-adic mass scale is higher than in the LSND experiments and one would have Δm2 >> 1 eV2, implying maximal mixing in a length scale much shorter than the size of the experimental apparatus. 4. One could also argue that topological condensation occurs in condensed matter and that no topological condensation occurs for high enough neutrino energies so that neutrinos remain massless. 
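The two-flavor mixing formula quoted above is easy to evaluate numerically. Below is a small sketch, not from the text: it uses the standard convention P = sin^2(2θ) sin^2(1.27 Δm2[eV2] L[m]/E[MeV]) (the text writes sin2(θ); the conventional amplitude factor is sin^2(2θ)), and the baselines and energies are illustrative assumptions chosen only to show how LSND-like and MiniBooNE-like energies probe different oscillation lengths at fixed Δm2 = 1 eV2.

```python
import math

def p_oscillation(dm2_ev2, L_m, E_mev, sin2_2theta=1.0):
    """Two-flavor oscillation probability, standard convention:
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in meters, E in MeV."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_m / E_mev) ** 2

dm2 = 1.0  # eV^2, the LSND-scale mass-squared difference from the text
# Illustrative baseline/energy choices (assumptions, not taken from the text):
p_lsnd = p_oscillation(dm2, L_m=30.0, E_mev=100.0)   # LSND-like: 60-200 MeV neutrinos
p_mini = p_oscillation(dm2, L_m=540.0, E_mev=800.0)  # MiniBooNE-like: 300-1500 MeV
print(f"LSND-like P ≈ {p_lsnd:.3f}, MiniBooNE-like P ≈ {p_mini:.3f}")
```

The point of the sketch is only the scaling: for fixed Δm2 the phase grows with L/E, so experiments with different energies and baselines sample different parts of the oscillation curve.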
One can even consider the possibility that the p-adic length scale Lp is proportional to E/m02, where m0 is proportional to the mass scale associated with non-relativistic neutrinos. The p-adic mass scale would obey mp ∝ m02/E so that the characteristic mixing length would be by a factor of order 100 longer in the MiniBooNE experiment than in LSND. To sum up, in TGD Universe LSND and MiniBooNE are consistent and provide additional support for the dependence of neutrino mass scale on neutrino energy. About the phase transition transforming ordinary deuterium to exotic deuterium in cold fusion I have already told about a model of cold fusion based on the nuclear string model predicting ordinary nuclei to have exotic charge states. In particular, the deuterium nucleus possesses a neutral exotic state which would make it possible to overcome the Coulomb wall and make cold fusion possible. 1. The phase transition The exotic deuterium at the surface of the Pd target seems to form patches (for a detailed summary see TGD and Nuclear Physics). This suggests that a condensed matter phase transition involving also nuclei is involved. A possible mechanism giving rise to this kind of phase would be a local phase transition in the Pd target involving both D and Pd. In the above reference it was suggested that deuterium nuclei transform in this phase transition to "ordinary" di-neutrons connected by a charged color bond to Pd nuclei. In the recent case the di-neutron could be replaced by neutral D. The phase transition transforming a neutral color bond to a negatively charged one would certainly involve the emission of a W+ boson, which must be exotic in the sense that its Compton length is of order atomic size so that it could be treated as a massless particle and the rate for the process would be of the same order of magnitude as for electro-magnetic processes. One can imagine two options. 1. 
Exotic W+ boson emission generates a positively charged color bond between the Pd nucleus and the exotic deuteron, as in the previous model. 2. The exchange of exotic W+ bosons between ordinary D nuclei and Pd induces the transformation Z→Z+1, inducing an alchemic phase transition Pd→Ag. The most abundant Pd isotopes with A=105 and 106 would transform to a state of the same mass but chemically equivalent with the two lightest long-lived Ag isotopes. 106Ag is unstable against β+ decay to Pd and 105Ag transforms to Pd via electron capture. For 106Ag (105Ag) the rest energy is 4 MeV (2.2 MeV) higher than for 106Pd (105Pd), which suggests that the resulting silver cannot be genuine. This phase transition need not be favored energetically since the energy loaded into the electrolyte could induce it. The energies should (and could in the recent scenario) correspond to energies typical for condensed matter physics. The densities of Ag and Pd are 10.49 g/cm3 and 12.023 g/cm3, so that the phase transition would expand the volume by a factor 1.146, that is linear dimensions by a factor 1.0465. The porous character of Pd would allow this. The needed critical packing fraction for Pd would guarantee one D nucleus per one Pd nucleus with a sufficient accuracy. 2. Exotic weak bosons seem to be necessary The proposed phase transition cannot proceed via the exchange of the ordinary W bosons. Rather, W bosons having Compton length of order atomic size are needed. These W bosons could correspond to a scaled up variant of ordinary W bosons having smaller mass, perhaps even of the order of electron mass. They could also be dark in the sense that the Planck constant for them would have the value h=nh0, implying scaling up of their Compton size by n. For n ≈ 2^48 the Compton length of the ordinary W boson would be of the order of atomic size so that for interactions below this length scale weak bosons would be effectively massless. A p-adically scaled up copy of weak physics with a large value of Planck constant could be in question. 
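The expansion factor quoted for the Pd→Ag transition can be checked directly from the two densities: since the mass numbers A=105, 106 are unchanged, the volume per atom scales as the inverse density, and the linear expansion is the cube root of the volume ratio. A quick arithmetic check (densities from the text):

```python
rho_pd = 12.023  # g/cm^3, density of Pd (from the text)
rho_ag = 10.49   # g/cm^3, density of Ag (from the text)

volume_ratio = rho_pd / rho_ag          # at fixed mass per atom, volume ~ 1/density
linear_ratio = volume_ratio ** (1 / 3)  # linear expansion = cube root of volume ratio

print(f"volume: {volume_ratio:.4f}, linear: {linear_ratio:.4f}")
# volume ≈ 1.1461, linear ≈ 1.0465 — the 1.0465 quoted in the text is the linear factor
```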
For instance, W bosons could correspond to the nuclear p-adic length scale L(k=113) and n=2^11. Nuclear strings and cold fusion The option assuming that a strong isospin dependent force acts on the nuclear space-time sheet and binds pn pairs to singlets, such that the strong binding energy is very nearly zero in the singlet state by the cancellation of scalar and vector contributions, is the most promising variant of the nuclear string model. It predicts the existence of exotic di-, tri-, and tetra-neutron like particles and even negatively charged exotics obtained from 2H, 3H, 3He, and 4He by adding a negatively charged color bond. For instance, 3H extends to a multiplet with em charges 4,3,2,1,0,-1,-2. Heavy nuclei with neutron excess could actually be such nuclei. The exotic states are stable under beta decay for m(π)<me. The simplest neutral exotic nucleus corresponds to exotic deuteron with a single negatively charged color bond. Using this as a target it would be possible to achieve cold fusion since the Coulomb wall would be absent. The empirical evidence for cold fusion thus supports the prediction of exotic charged states. 1. Signatures of cold fusion In the following the consideration is restricted to cold fusion in which two deuterium nuclei react strongly since this is the basic reaction type studied. In hot fusion there are three reaction types: 1. D+D → 4He+γ (23.8 MeV) 2. D+D → 3He+n 3. D+D → 3H+p. The rate for the process 1) predicted by standard nuclear physics is suppressed relative to the processes 2) and 3) by a factor of more than 10^3. The reason is that the emission of the gamma ray involves the relatively weak electromagnetic interaction whereas the latter two processes are strong. The most obvious objection against cold fusion is that the Coulomb wall between the nuclei makes the mentioned processes extremely improbable at room temperature. Of course, this alone implies that one should not apply the rules of hot fusion to cold fusion. 
Cold fusion indeed differs from hot fusion in several other aspects. 1. No gamma rays are seen. 2. The flux of energetic neutrons is much lower than expected on the basis of the heat production rate and by interpolating hot fusion physics to the recent case. These signatures can also be (and have been!) used to claim that no real fusion process occurs. Cold fusion has also other features, which serve as valuable constraints for the model building. 1. Cold fusion is not a bulk phenomenon. It seems that fusion occurs most effectively in nano-particles of Pd, and the development of the required nano-technology has made it possible to produce fusion energy in a controlled manner. Concerning applications this is good news since there is no fear that the process could run out of control. 2. The ratio x of D atoms to Pd atoms in the Pd particle must lie in the critical range [.85,.90] for the production of 4He to occur. This explains the poor repeatability of the earlier experiments and also the fact that fusion occurred sporadically. 3. Also transmutations of Pd nuclei are observed. Below is a list of questions that any theory of cold fusion should be able to answer. 1. Why is cold fusion not a bulk phenomenon? 2. Why does cold fusion of the light nuclei seem to occur only above the critical value x ≈ .85 of D concentration? 3. How are fusing nuclei able to effectively circumvent the Coulomb wall? 4. How is the energy transferred from nuclear degrees of freedom to much longer condensed matter degrees of freedom? 5. Why are gamma rays not produced, why is the flux of high energy neutrons so low, and why does the production of 4He dominate (also some tritium is produced)? 6. How are nuclear transmutations possible? Could exotic deuterium make cold fusion possible? One model of cold fusion has already been discussed in TGD framework. The basic idea is that only the neutrons of incoming and target nuclei can interact strongly, that is their space-time sheets can fuse. 
One might hope that neutral deuterium having a single negatively charged color bond could allow one to realize this mechanism. 1. Suppose that part of the target deuterium in the Pd catalyst corresponds to exotic deuterium with neutral nuclei, so that cold fusion would occur between neutral D in the target and charged incoming D, and the Coulomb wall in the nuclear scale would be absent. A possible mechanism giving rise to this kind of phase would be a local phase transition in the Pd target possibly involving dark matter hierarchy. 2. The exotic variant of the ordinary D + D reaction yields final states in which 4He, 3He and 3H are replaced with their exotic counterparts with charge lowered by one unit. In particular, exotic 3H is neutral and there is no Coulomb wall hindering its fusion with Pd nuclei, so that nuclear transmutations can occur. Why the neutron and gamma fluxes are low might be understood if for some reason only exotic 3H is produced, that is the production of charged final state nuclei is suppressed. The explanation relies on the Coulomb wall at the nucleon level. 1. The initial state contains one charged and one neutral color bond and the final state three (A=3) or four (A=4) color bonds. Additional neutral color bonds must be created in the reaction (one for the production of A=3 final states and two for the A=4 final state). The process involves the creation of neutral fermion pairs. The emission of one exotic gluon per bond decaying to a neutral pair is necessary to achieve this. This requires that nucleon space-time sheets fuse together. Exotic D certainly belongs to the final state nucleus since the charged color bond is not expected to be split in the process. 2. The process necessarily involves a temporary fusion of nucleon space-time sheets. One can understand the selection rules if only neutron space-time sheets can fuse appreciably, so that only 3H would be produced. Here the Coulomb wall at the nucleon level should enter into the game. 3. 
Protonic space-time sheets always have the same positive sign of charge so that there is a Coulomb wall between them. This explains why the reactions producing exotic 4He do not occur appreciably. If the quark/antiquark at the neutron end of the color bond of ordinary D has positive charge, there is Coulomb attraction between the proton and the corresponding negatively charged quark. Thus energy minimization implies that the neutron space-time sheet of ordinary D has positive net charge and Coulomb repulsion prevents it from fusing with the proton space-time sheet of target D. The desired selection rules would thus be due to the Coulomb wall at the nucleon level. Why di-neutron does not exist? As previous postings (see this and this) should make clear, the nuclear string model works amazingly well. There is however an objection against the model. This is the experimental absence of a stable n-n bound state analogous to deuteron, favored by the lacking Coulomb repulsion and the attractive electromagnetic spin-spin interaction in the spin 1 state. The same applies to tri-neutron states and possibly also to the tetra-neutron state. There has however been speculation about the existence of di-neutron and poly-neutron states. One can consider a simple explanation for the absence of genuine poly-neutrons. 1. The formation of negatively charged bonds with neutrons replaced by protons would minimize both nuclear mass and Coulomb energy, although the binding energy per nucleon would be reduced and the increase of neutron number in heavy nuclei would be only apparent. As found, this could also explain why heavy nuclei become unstable. 2. The strongest hypothesis is that mass minimization forces protons and negatively charged color bonds to serve as the basic building bricks of all nuclei. If this were the case, deuteron would be a di-proton having a negatively charged color bond. The total binding energy would be only 2.222 − 1.293 = 0.929 MeV. 
Di-neutron would be impossible for this option since only one color bond can be present in this state. 3. The small mass difference m(3He)−m(3H) = .018 MeV would have a natural interpretation as Coulomb interaction energy. Tri-neutron would be allowed. Alpha particle would consist of four protons and two negatively charged color bonds, and the actual binding energy per nucleon would be by (mn−mp)/2 smaller than believed. Tetra-neutron would also consist of four protons and the binding energy per nucleon would be smaller by mn−mp than what obtains from the previous estimate. Beta decays would be basically beta decays of exotic quarks associated with color bonds. Does this model work? I performed the calculations for the binding energies by assuming that ordinary nuclei have protons and neutral and negatively charged color bonds as building bricks. 1. The resulting picture is not satisfactory. The model with ordinary neutrons and protons and color bonds works excellently if one assumes that standard isospin dependent strong interaction is present at nuclear space-time sheets besides the color interaction mediated by much longer color magnetic flux tubes. This fits nicely with the visualization of the nucleus as a kind of plant such that the nuclear space-time sheet serves as a "seed" from which the long color flux tubes emanate from nucleons and return back. 2. For pn states, which are singlets with respect to strong isospin, this contribution to energy turns out to be surprisingly small, of order .1 MeV: this explains why the fit without this contribution was so good. One can obtain a complete fit for A≤4 nuclei by simple fractal scaling arguments from that for A>4 nuclei by adding this contribution. 3. If the isospin dependent strong contribution is much larger in non-singlet states (expressible in terms of isospin Casimirs) one can understand the experimental absence of poly-neutrons in the standard sense of the word. 
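The 0.929 MeV figure quoted above is just the ordinary deuteron binding energy reduced by the neutron-proton mass difference: if the deuteron is really a di-proton plus a negatively charged color bond, its mass deficit relative to p + n shrinks by mn − mp. A minimal check with the values used in the text (deuteron binding 2.222 MeV, mn − mp = 1.293 MeV):

```python
B_deuteron = 2.222  # MeV, deuteron binding energy relative to p + n (from the text)
dm_np = 1.293       # MeV, neutron-proton mass difference (from the text)

# If deuteron = di-proton + negative color bond, the binding inferred from the
# measured deuteron mass is reduced by the n-p mass difference:
B_effective = B_deuteron - dm_np
print(f"{B_effective:.3f} MeV")  # 0.929 MeV, as quoted in the text
```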
Since color bonds can carry em charges (0,1,-1), exotic nuclear states are however predicted. For instance, 3H with 3 color bonds in principle extends to a multiplet with charges running from +4 to -2. This seems to be an unavoidable prediction of TGD. Still about nuclear string hypothesis The nuclear string model has evolved dramatically during the last week or two and now allows one to understand the nuclear binding energies of both A>4 nuclei and A≤4 nuclei in terms of three fractal variants of QCD. The model also explains giant resonances and so called pygmy resonances in terms of de-coherence of Bose-Einstein condensates of exotic pion like color bosons to sub-condensates. In its simplicity the model is comparable to the Bohr model of atom, and I cannot avoid the impression that the tragedy of theoretical nuclear physics was that it was born much before anyone knew about the notion of fractality. For these reasons a second posting about these ideas involving some repetition is in order. 1. Background Nuclear string hypothesis is one of the most dramatic almost-predictions of TGD. The hypothesis in its original form assumes that nucleons inside the nucleus organize to closed nuclear strings with neighboring nucleons of the string connected by exotic meson bonds consisting of a color magnetic flux tube with quark and anti-quark at its ends. The lengths of the flux tubes correspond to the p-adic length scale of electron and therefore the mass scale of the exotic mesons is around 1 MeV, in accordance with the general scale of nuclear binding energies. The long lengths of the em flux tubes increase the distance between nucleons and reduce Coulomb repulsion. A fractally scaled up variant of ordinary QCD with respect to p-adic length scale would be in question, and the usual wisdom about ordinary pions and other mesons as the origin of nuclear force would be simply wrong in TGD framework, as the large mass scale of the ordinary pion indeed suggests. 
The presence of exotic light mesons in nuclei has been proposed also by Chris Illert based on evidence for charge fractionization effects in nuclear decays. 2. A>4 nuclei as nuclear strings consisting of A<4 nuclei During the last weeks a more refined version of the nuclear string hypothesis has evolved. 1. The first refinement of the hypothesis is that 4He nuclei and A<4 nuclei and possibly also nucleons appear as basic building blocks of nuclear strings instead of nucleons, and these blocks in turn can be regarded as strings of nucleons. The large number of stable lightest isotopes of form A=4n supports the hypothesis that the number of 4He nuclei is maximal. Even the weak decay characteristics might be reduced to those for A<4 nuclei using this hypothesis. 2. One can understand the behavior of nuclear binding energies surprisingly well from the assumptions that the total strong binding energy associated with A≤4 building blocks is additive for nuclear strings and that the addition of neutrons tends to reduce the Coulombic energy per string length by increasing the length of the nuclear string, implying increased binding energy and stabilization of the nucleus. 3. In TGD framework tetra-neutron is interpreted as a variant of alpha particle obtained by replacing two meson-like stringy bonds connecting neighboring nucleons of the nuclear string with their negatively charged variants. For heavier nuclei tetra-neutron is needed as an additional building brick and the local maxima of binding energy EB per nucleon as function of neutron number are consistent with the presence of tetra-neutrons. The additivity of magic numbers 2, 8, 20, 28, 50, 82, 126 predicted by nuclear string hypothesis is also consistent with experimental facts and new magic numbers are predicted. 3. 
Bose-Einstein condensation of color bonds as a mechanism of nuclear binding The attempt to understand the variation of the nuclear binding energy and its maximum for Fe leads to a quantitative model of nuclei lighter than Fe as color bound Bose-Einstein condensates of 4He nuclei or rather, of pion like colored states associated with color flux tubes connecting 4He nuclei. 1. The crucial element of the model is that the color contribution to the binding energy is proportional to n2, where n is the number of color bonds. Fermi statistics explains the reduction of EB for the nuclei heavier than Fe. Detailed estimate favors harmonic oscillator model over free nucleon model with oscillator strength having interpretation in terms of string tension. 2. Fractal scaling argument allows one to understand 4He and lighter nuclei as strings formed from nucleons with nucleons bound together by color bonds. Three fractally scaled variants of QCD corresponding to A>4 nuclei, A=4 nuclei and A<4 nuclei are thus involved. The binding energies of also lighter nuclei are predicted surprisingly accurately by applying simple p-adic scaling to the parameters of the model for the electromagnetic and color binding energies in heavier nuclei. 4. Giant dipole resonance as de-coherence of Bose-Einstein condensate of color bonds Giant (dipole) resonances and so called pygmy resonances interpreted in terms of de-coherence of the Bose-Einstein condensates associated with A≤4 nuclei and with the nuclear string formed from A≤4 nuclei provide a unique test for the model. The key observation is that the splitting of the Bose-Einstein condensate to pieces costs a precisely defined energy due to the n2 dependence of the total binding energy. 1. For 4He de-coherence the model predicts a singlet line at 12.74 MeV and a triplet (25.48, 27.30, 29.12) MeV at ≈ 27 MeV spanning a 4 MeV wide range, which is of the same order as the width of the giant dipole resonance for nuclei with full shells. 2. 
The de-coherence at the level of the nuclear string predicts 1 MeV wide bands 1.4 MeV above the basic lines. Bands decompose to lines with precisely predicted energies. Also these contribute to the width. The predictions are in a surprisingly good agreement with experimental values. The so called pygmy resonance appearing in neutron rich nuclei can be understood as a de-coherence for A=3 nuclei. A doublet (7.520, 8.4600) MeV at ≈ 8 MeV is predicted. At least the prediction for the position is correct. I am grateful to Elio Conte for discussions which stimulated a more detailed consideration of the nuclear string model. Experimental evidence for colored muons One of the basic deviations of TGD from standard model is the prediction of colored excitations of quarks and leptons. The reason is that color is not a spin like quantum number but a partial wave in CP2 degrees of freedom and thus angular momentum like. Accordingly new scaled variants of QCD are predicted. As a matter of fact, dark matter hierarchy and p-adic length scale hierarchy populate the many-sheeted Universe with fractal variants of standard model physics. In the blog of Lubos there were comments about a new particle. The finding has been published in Phys. Rev. D 74 and Phys. Rev. Lett. 98. The mass of the new particle, which is either scalar or pseudoscalar, is 214.4 MeV whereas muon mass is 105.6 MeV. The mass is about 1.5 per cent higher than two times muon mass. The proposed interpretation is as a light Higgs. I do not immediately resonate with this interpretation although p-adically scaled up variants of also Higgs bosons live happily in the fractal Universe of TGD. Decades ago anomalous production of electron-positron pairs in heavy ion nuclear collisions just above the Coulomb wall was discovered, with the mass of the pseudoscalar resonance slightly above 2me. All this has of course been forgotten since it is just boring low energy phenomenology on which brave brane theorists do not waste their precious time;-). 
This should however put bells ringing. The TGD explanation is in terms of exotic pions consisting of colored variants of ordinary electrons predicted by TGD. I of course predicted that also muon and tau would give rise to a scaled variant of QCD type theory. Karmen anomaly gave indications that the muonic variant of this QCD is there. Just now I am working with the nuclear string model where a scaled variant of QCD for exotic quarks in the p-adic length scale of electron is responsible for the binding of 4He nuclei to nuclear strings. One cannot exclude the possibility that the fermion and antifermion at the ends of the color flux tubes connecting nucleons are actually colored leptons, although the working hypothesis is that they are exotic quark and antiquark. One can of course also turn the argument around: could it be that lepto-pions are "leptonuclei", that is bound states of ordinary leptons bound by color flux tubes for a QCD in a length scale considerably shorter than the p-adic length scale of lepton. This QCD binds 4He nuclei to tangled nuclear strings. Two other scaled variants of QCD bind nucleons to 4He and lighter nuclei. The model is extremely simple and quantitatively amazingly successful. For instance, the latest discovery is that the energies of giant dipole resonances can be predicted and first inspection shows that they come out correctly. For more details about the lepto-hadron hypothesis see the chapter The Recent Status of Lepto-Hadron Hypothesis. For the recent state of the nuclear string model see the new chapter Further progress in Nuclear String Hypothesis. 
Further progress related to nuclear string hypothesis Nuclear string hypothesis leads to rather detailed predictions and allows one to understand the behavior of nuclear binding energies surprisingly well from the assumptions that the total strong binding energy is additive for nuclear strings and that the addition of neutrons tends to reduce the Coulombic energy per string length by increasing the length of the nuclear string, implying increased binding energy and stabilization of the nucleus. Perhaps also the weak decay characteristics could be understood in a simple manner by assuming that the stable nuclei lighter than Ca contain the maximum number of alpha particles plus a minimum number of lighter isotopes. The large number of stable lightest isotopes of form A=4n supports this hypothesis. In TGD framework tetra-neutron is interpreted as a variant of alpha particle obtained by replacing two meson-like stringy bonds connecting neighboring nucleons of the nuclear string with their negatively charged variants (see this). For heavier nuclei tetra-neutron is needed as an additional building brick and the local maxima of binding energy EB per nucleon as function of neutron number are consistent with the presence of tetra-neutrons. The additivity of magic numbers 2, 8, 20, 28, 50, 82, 126 predicted by nuclear string hypothesis is also consistent with experimental facts and new magic numbers are predicted and there is evidence for them. The attempt to understand the variation of the nuclear binding energy and its maximum for Fe leads to a quantitative model of nuclei lighter than Fe as color bound Bose-Einstein condensates of 4He nuclei or rather, of color flux tubes defining meson-like structures connecting them. Fermi statistics explains the reduction of EB for the nuclei heavier than Fe. Detailed estimate favors harmonic oscillator model over free nucleon model with oscillator strength having interpretation in terms of string tension. 
Fractal scaling argument allows one to understand 4He and lighter nuclei as analogous states formed from nucleons, and the binding energies are predicted quite satisfactorily. Giant dipole resonance, interpreted as a de-coherence of the Bose-Einstein condensate into pieces, provides a unique test for the model, and precise predictions for binding energies follow. For more details see the chapter TGD and Nuclear Physics and the new chapter Further Progress in Nuclear String Hypothesis. Could also gauge bosons correspond to wormhole contacts? 1. Option I: Only Higgs as a wormhole contact 2. Option II: All elementary bosons as wormhole contacts The difference would naturally relate to the different time orientations of the wormhole throats and make itself manifest via the definition of the light-like operator o = x^k γ_k appearing in the generalized eigenvalue equation for the modified Dirac operator (see this and this). For the first throat o^k would correspond to a light-like tangent vector t^k of the partonic 3-surface and for the second throat to its M4 dual t^k_d in a preferred rest system in M4 (implied by the basic construction of quantum TGD). What is nice is that this picture resolves the question of whether t^k or t^k_d should appear in the modified Dirac operator. 2.2 Phase conjugate states and matter-antimatter asymmetry 3. Graviton and other stringy states 4. Spectrum of non-stringy states The general bosonic wave-function would be expressible as a matrix M_{g1,g2}, and ordinary gauge bosons would correspond to a diagonal matrix M_{g1,g2} proportional to δ_{g1,g2}, as required by the absence of neutral flavor changing currents (say gluons transforming quark genera to each other). 8 new gauge bosons are predicted if one allows all 3×3 matrices with complex entries orthonormalized with respect to trace, meaning an additional dynamical SU(3) symmetry. Ordinary gauge bosons would be SU(3) singlets in this sense. The existing bounds on flavor changing neutral currents give bounds on the masses of the boson octet. 
The 2-throat character of bosons should relate to the low value T = 1/n << 1 for the p-adic temperature of gauge bosons, as contrasted to T = 1 for fermions. 5. Higgs mechanism The finite range of interaction characterized by the gauge boson mass should correlate with the finite range for the free propagation of wormhole contacts representing bosons along the corresponding ME. The finite range would result from the emission of Higgs-like wormhole contacts from the gauge boson like wormhole contact, leading to the generation of coherent states of neutral Higgs particles. The emission would also induce non-rectilinearity of the ME as a correlate for the recoil in the emission of Higgs. For more details see either the chapter Construction of Elementary Particle Vacuum Functionals or the chapter Massless states and Particle Massivation. Can one deduce the Yukawa couplings of Higgs from the anomalous ratio H/Z0(b pair):H/Z0(tau pair)? Generalizing the simple argument of Conway one therefore has Of course, it might turn out that a fake Higgs is in question. What is however important is that the deviation of the Yukawa couplings allowed by TGD for Higgs from those predicted by the standard model could manifest itself in the ratio of Z0→ b-bbar and Z0→ τ-τbar excesses. Indications for Higgs with mass of 160 GeV Has Higgs been detected? TGD picture about Higgs briefly 2. The slow rate for the production of Higgs could also allow the presence of Higgs at much lower mass and explain why Higgs has not been detected in the mass range mH < 114 GeV. Interestingly, around 1990 a 2σ evidence for Higgs with mass about 100 GeV was reported, and one might wonder whether there might be a genuine Higgs there after all. For more details see the chapter Massless states and Particle Massivation.
Microlasers and ray chaos A hitchhiker's guide to dielectric cavities* Contents of this page: Light's growing weight You don't need a great deal of imagination to foresee an increasing significance of lightwave technology in data processing and telecommunications. Here are some arguments in favor of light: Miniaturization of electronic circuits leads to increased resistances and hence larger dissipation. Photons don't suffer from losses to the same degree because their interaction is much weaker than that of electrons. The bandwidths available for signal transmission are a few hundred kHz on copper cables, versus roughly a THz in a typical glass fiber - even now it is feasible to carry half a million telephone conversations over a single glass fiber. Photons are the method of choice for massively parallel data processing and storage. A more specific example of how microphotonics can make an impact is described in this PDF-article describing my field of work in the photonics industry from May 2000 until August 2001. The material system discussed there is Indium Phosphide, a semiconductor compound. Other material systems for microphotonics can be found among polymers, glasses, porous media - to name a few. At the heart of these developments is the availability of small but efficient lasers which deliver the required intense and coherent light. If you have any doubts that the laser is one of the twentieth century's most important achievements in science and technology, please read about the impact and history of laser light at this new website. For an amusing but also informative glimpse of laser physics, see the Britney Spears Guide to Semiconductor Physics. Wikipedia is a good source of information and links on laser physics. Microlaser design All of us (physicists) have probably been "exposed" to the He-Ne laser in some graduate student lab. But of course the most ubiquitous lasers are by now the semiconductor diode lasers. 
Both of these incarnations rely on the parallel-mirror configuration to provide the feedback that makes laser action possible. This type of resonator is also known from the Fabry-Perot interferometer. Trapping light with interference One common way of making especially good parallel mirrors is to use Bragg reflection at multiple layers of dielectric films. See, e.g., the Wikipedia entry on "Vertical-Cavity Surface Emitting Lasers". The Bragg principle is based on the interference between the partial waves reflected at successive layers of a dielectric stack: within the stop band the reflections add up constructively, so transmission through the stack is suppressed. As a rather logical continuation of the same principle, one has progressed to photonic crystals which employ the Bragg principle in more than one spatial direction and can in principle be used to make extremely small photonic cavities. The price one pays is that one needs many periods of the artificial crystal lattice in order to obtain high reflectivities, so that the total size of the structure ends up being much larger than the cavity itself. Higher and higher reflectivities are required, on the other hand, if one wants to make a laser out of such a microcavity. The simple reason is that a small cavity can host only a small amount of amplifying material, and therefore it becomes more difficult for amplification to win over the losses in a microcavity laser. Whispering-gallery resonators - trapping without interference In solid-state laser materials, it is often possible to realize the mirrors simply by exploiting total internal reflection at the interface between the high-index solid and the surrounding medium (e.g., air). In contrast to the Bragg principle, this confinement mechanism for light is to lowest order frequency-independent and can therefore be called a classical effect - it can be described without explicit use of the wave nature of light, by using Fermat's principle. 
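The layer-by-layer interference described above can be made quantitative with the standard characteristic-matrix method of thin-film optics. The sketch below is a minimal illustration (function name and the index values n_hi = 2.3, n_lo = 1.45 are my own plausible choices, not taken from any reference on this page): it computes the normal-incidence reflectance of a quarter-wave stack at its design wavelength, and shows how rapidly the reflectance approaches unity as layer pairs are added.

```python
import numpy as np

def quarter_wave_stack_reflectance(n_hi, n_lo, n_pairs, n_in=1.0, n_sub=1.5):
    """Reflectance of a quarter-wave Bragg stack at its design wavelength.

    Characteristic (transfer) matrix method at normal incidence; each
    layer is a quarter wave thick, so its phase thickness is pi/2.
    """
    m = np.eye(2, dtype=complex)
    delta = np.pi / 2  # quarter-wave optical thickness
    for _ in range(n_pairs):
        for n in (n_hi, n_lo):
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            m = m @ layer
    # Standard thin-film formula: [B, C] = M . [1, n_sub], r = (n0 B - C)/(n0 B + C)
    b, c = m @ np.array([1.0, n_sub])
    r = (n_in * b - c) / (n_in * b + c)
    return abs(r) ** 2

# Reflectance grows rapidly with the number of layer pairs:
for pairs in (2, 5, 10):
    print(pairs, quarter_wave_stack_reflectance(2.3, 1.45, pairs))
```

This is also the quantitative face of the trade-off mentioned in the text: each added period multiplies the index contrast's effect, so high reflectivity always costs many lattice periods.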
This is good because it means that a device based on this confinement mechanism will in principle be able to work over a very broad range of wavelengths - in stark contrast to photonic crystals. Nevertheless, one can use total internal reflection to make three-dimensionally confined resonators with high frequency selectivity (or "finesse"), provided one can force wavefronts inside the cavity to interfere with themselves. This is achieved with the "whispering-gallery" resonator which is at the heart of the lowest-threshold lasers made so far. This low threshold becomes possible as a consequence of the small size that can be achieved with these resonators. They are essentially circular disks in which the light circulates around close to the dielectric interface. Such modes are especially low in losses. Whispering-gallery waves: To illustrate the whispering-gallery effect, the movie shows a cross sectional view of a curved interface (black circle) between glass and air, with a circulating wave radiating in all directions. The color represents the electric field, and in the first animation the field inside the resonator is only slightly higher than outside. This is not a good resonator because it is very "lossy". In the second movie, the wavelength is about 4 times shorter than above. In this case, the field outside the resonator is much weaker than inside it, meaning that we are confining the light much better. In both animations, the wave fronts look slanted, especially on the outside. Comparing the two clips, you will notice, however, that the wave fronts right at the circular interface are perfectly radial in the bottom image. This is what makes the two scenarios different: the straight wave fronts at the interface correspond to grazing propagation along the curved boundary. There is still a wave emanating from the cavity at the bottom, but its amplitude relative to that at the interface is now much smaller. 
Observe also the central region of the dielectric circle, which is essentially field-free. The intensity is highly concentrated near the surface. Even in the more strongly confined case, shown here, the wave penetrates slightly into the surrounding medium. In reflection off a straight dielectric interface, this penetration is known to go along with the Goos-Hänchen effect, a lateral displacement of the scattered beam. A calculation of the analogous effect in reflection off a curved interface can be done starting from the circular geometry. Since the Goos-Hänchen effect can be incorporated into a ray model, it improves semiclassical calculations for non-circular cavities. A detailed introduction to the Goos-Hänchen effect and our relevant work is presented on a separate page. To find out more about the spiral patterns shown in these movies, read about wavefronts in open systems. Semiconductors are far from being the only application of the whispering-gallery mechanism. The first laser resonators in the submillimeter size regime were made of liquid droplets containing a lasing organic dye. The highest-quality optical microresonators have been achieved using fused-silica spheres (i.e., glass). Although these materials have a refractive index closer to unity than a semiconductor, they still support whispering-gallery modes. In that context, they are often called morphology-dependent resonances (MDRs). Both the semiconductor and the droplet realizations of the whispering gallery are illustrated on the cover of Optical Processes in Microcavities, edited by R.K.Chang and A.J.Campillo (World Scientific Publishers, 1996). The lasing droplets are seen on the left side, and a "thumbtack" microlaser with its rotationally symmetric calculated emission pattern appears in the main panel. This book contains 11 chapters on important experimental and theoretical aspects of dielectric microcavities. 
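In the ray picture, the grazing confinement just described has a one-line explanation: in a circular billiard the angle of incidence χ is the same at every bounce, so a ray launched with sin χ > 1/n stays totally internally reflected forever. A minimal sketch of this ray map (the function name and numbers are mine, chosen only for illustration; n = 1.45 is roughly fused silica):

```python
import math

def circle_billiard(s0, chi, n_bounces):
    """Specular ray map in the unit circle.

    s is the polar angle of the bounce point; chi is the angle of
    incidence measured from the surface normal. In a circle chi is
    conserved, and each bounce advances the bounce point by pi - 2*chi.
    """
    s = s0
    points = []
    for _ in range(n_bounces):
        s = (s + math.pi - 2 * chi) % (2 * math.pi)
        points.append(s)
    return points

n_glass = 1.45
chi_crit = math.asin(1.0 / n_glass)  # critical angle for total internal reflection
chi = 1.4  # grazing incidence, chi > chi_crit: a whispering-gallery orbit
orbit = circle_billiard(0.0, chi, 200)
print(f"critical angle {chi_crit:.3f} rad; ray stays trapped: {chi > chi_crit}")
```

Deforming the boundary destroys the conservation of χ, which is exactly why deformation becomes a lever on resonance lifetimes later on this page.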
Chapter 11 represents the status of our work as of summer 1995: "Chaotic Light: a theory of asymmetric cavity resonators", J.U.Nöckel and A.D.Stone PDF - (warning: large files) Don't be square! The question that arises naturally in lasing microdroplets is: how strongly can a dielectric resonator be deformed before whispering-gallery modes cease to exist, or become degraded by leakage? The intuitive answer is, "the rounder, the better". However, even shapes with sharp corners can sustain modes that have every right to be called whispering-gallery phenomena. In fact, these types of whispering-gallery modes cannot be understood purely on the basis of ray optics. This is discussed in our work on hexagonal nanoporous microlasers. Intriguingly, hexagonal zinc oxide nanocrystals have recently become the smallest resonators sustaining whispering-gallery type modes ever observed. Being round is not a prerequisite for whispering-gallery action. So there is a huge space of possible shapes (practically from circle to square) that could possibly be considered as whispering-gallery type resonators. If we had a choice, what should the ideal shape be? This clearly depends on the application context, but in any case it would be desirable to have some design rules. In the following, we begin to discuss some design issues, and point out how our work in particular aims to provide the design rules just mentioned, based on approximate methods such as the ray picture. Stable and unstable resonators Other mirror arrangements provide different advantages. In particular, there has been a considerable body of work employing concave or convex mirrors. E.g., two concave mirrors separated by less than the sum of their radii of curvature make a stable resonator in which light rays undergo focussing while being multiply reflected between the mirrors. Light can then be coupled out by making one of the mirrors slightly transparent. 
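The stability criterion invoked here is usually written in terms of the resonator g-parameters, g_i = 1 - L/R_i, with stability iff 0 <= g1*g2 <= 1. A quick sanity check of the "less/more than the added radii" rule (a generic sketch of the textbook paraxial criterion, not code from any of the cited works):

```python
def is_stable(L, R1, R2):
    """Paraxial stability of a two-mirror resonator.

    g_i = 1 - L/R_i (R > 0 for concave mirrors facing the cavity);
    the resonator is stable iff 0 <= g1 * g2 <= 1.
    """
    g1 = 1 - L / R1
    g2 = 1 - L / R2
    return 0 <= g1 * g2 <= 1

# Two concave mirrors, R = 1 m each:
print(is_stable(1.5, 1.0, 1.0))  # L < R1 + R2: stable
print(is_stable(2.5, 1.0, 1.0))  # L > R1 + R2: unstable
print(is_stable(1.0, 1.0, 1.0))  # confocal: marginally stable
```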
When the output coupling is small, the theoretical treatment of such a laser can often be performed by neglecting the leakage and hence assuming the existence of some orthogonal set of modal eigenfunctions. If one wants to avoid the use of partially transparent mirrors (which need to have very low losses for high-power applications), one alternative design is the unstable resonator containing defocussing elements [see the exhaustive textbook by A.E.Siegman, Lasers (University Science Books, Mill Valley, CA, 1986)]. E.g., two concave mirrors separated by more than the sum of their radii of curvature cause rays to diverge out from the optical axis after several reflections. Outcoupling occurs when the light spills over the edge of one of the mirrors (which hence need not be partially transparent themselves). Such unstable lasers differ from stable resonators in their mode structure: A set of well-defined bound modes is not available for the expansion of the laser field, because they all couple to the outside. Therefore, it has been necessary to use quasibound states in the calculations. Lasers are fundamentally open systems, so a description in terms of quasibound states seems only natural. These states are, however, not as familiar a tool as the usual square-integrable eigenfunctions one knows from bound systems. Their properties are still a topic of current research. Important work on such "quasi-normal modes" has also been carried out by Kenneth Young, Pui-Tang Leung and co-workers. The central problem from the point of view of laser physics is this: In order to define photons in the first place, we expect to have at our disposal a set of normal modes for which we then write the creation and annihilation operators. But metastable states are not eigenstates of a Hermitian differential operator, because they represent energy escaping to infinity. Therefore, familiar procedures involving expansions in normal modes run into problems. 
Nevertheless, their use makes a lot of sense when discussing the emission properties of individual metastable states, such as their frequency shifts as a result of a perturbation in the resonator's shape or dielectric constant. Or - just to mention a really far-out example: metastable states find application in the study of gravitational waves emitted from a black hole [P.T.Leung et al., Phys.Rev.Lett. 78, 2894 (1997)]. Chaotic resonators As an extension of the unstable-resonator idea, one can think of two concave mirrors in a defocussing setup combined with some lateral (sideways) guiding of the light between the mirrors. A naive reasoning could be this: We want lasing from light spilling out near one of the mirrors, but we don't want the escape angle with the optical axis to be too large, hoping thereby to improve the spatial mode pattern (focussing). So we put additional mirrors along the open sides joining the mirrors. Now combine this idea with the use of dielectric interfaces as (partially transparent) mirrors, and one is led quite directly to consider the so-called stadium resonator (or a generalization thereof). Here is an illustration of the stadium shape and of how it scatters an incident ray: It is taken from J.H.Jensen, J.Opt.Soc.Am.A 10 (1993). Remark on previous work: Jensen seems to have been the first to attack the ray-wave duality for a stadium-shaped dielectric resonator, in particular taking into account the inevitable ray-splitting into reflected and transmitted portions that occurs at the sharp dielectric interface of the chaotic resonator (thanks to R.K. Chang and A. Poon for pointing out the reference). However, he did not consider the long-lived resonances that such a cavity could support, which are a prerequisite for lasing. Instead, Jensen's paper gives a quasiclassical analysis of the rainbow-peaks for this structure. For more on rainbows, see this Atmospheric Optics web site. 
Ray splitting has received renewed interest in recent years (in my own ray optics simulations, it is taken into account as well - it becomes essential in high-index materials). We are not the only ones to consider chaotic dielectric resonators. However, we were the first (to my knowledge) to seriously apply chaos analysis to the emission properties of quasibound states in dielectric resonators, see "Q spoiling and directionality in deformed ring cavities", J.U.Nöckel, A.D.Stone and R.K.Chang, Optics Letters 19, 1693 (1994). This is a theory paper in which we address the consequences of emerging ray chaos for the lifetimes and emission directionality of deformed dielectric resonators. The first experiment in which the correspondence between emission anisotropy and chaotic structure in the classical ray dynamics was successfully applied to dielectric microlasers is "Ray chaos and Q-spoiling in lasing droplets", A.Mekis, J.U.Nöckel, G.Chen, A.D.Stone and R.K.Chang, Phys.Rev.Lett. 75, 2682 (1995). In this paper, we studied lasing microdroplets with a nonspherical shape, which leads to a strongly anisotropic light output along the droplet surface. The total-intensity profile was imaged and compared with a ray model, yielding an explanation for the observed features. To arrive at the idea of using a chaotic resonator cavity, one can either start from the unstable-resonator concept as described above, or  from the whispering-gallery design. We came from the latter direction. The argument leading to an oval dielectric resonator is simply that a circular whispering-gallery cavity does not have a preferred emission direction, owing to its rotational symmetry. In addition, one wishes to have a parameter with which the resonance lifetimes of the cavity can be controlled. This is achieved by deforming its shape. Confocal resonators Inbetween stable and unstable resonators, there is another useful mirror configuration, called confocal. 
It has the advantage of creating a focussing effect inside the resonator, which in turn amounts to producing a smaller effective mode volume for the laser. Instead of the whole volume between the mirrors, it is possible to utilize only a smaller volume around the coinciding focal points of the mirrors. The ray pattern that forms in a confocal arrangement of two concave mirrors can sometimes take on the shape of a bowtie (depending on the shape of the mirrors). This well-known configuration is found in etalons but also in lasers. The simplest confocal cavity would consist of two circle segments with a common focus. A less trivial example is the case of two confocal paraboloids, i.e., surfaces of revolution generated by opposing parabolas that share their focal point: [Figures: "dome" and "plot"] The right-hand picture shows two bowtie rays going through the focus. There are many other ray paths that never go through the focus, but they form caustics which are reminiscent of this basic shape. For a study of this type of (three-dimensional) mirror configuration, see my work with Izo Abram's group at CNET, "Mode structure and ray dynamics of a parabolic dome microcavity". The manuscript is available online. Microresonators such as this can find application in quantum electrodynamics because they allow one to modify the rate of spontaneous emission of atoms or quantum dots interacting with the electromagnetic field. To that end, one has to go to small mode volumes. But the cavity volume isn't necessarily what counts. With a focused ray pattern as in the confocal resonator, the light field is especially strong in only certain portions of the resonator, notably the focal point in the center. And that is where the desired strong coupling between the light and the active medium occurs. Bowtie laser Now we put all of the above together, but for the price of one... The microcylinder laser shown here is not circular, but not a stadium shape, either. 
The stadium has fully chaotic ray dynamics; the circle has no chaos at all. This oval shape has a mixed phase space. As a by-product of the transition to chaos which takes place with increasing deformation, a bowtie-shaped ray path is born that does not exist below a certain eccentricity. This pattern combines internal and external focussing, and its lifetime is long enough for lasing because the rays hit the surface close to the critical angle for total internal reflection. This is the world's most powerful microlaser to date. To understand why this very desirable intensity distribution arises in the smooth oval shape we chose here, but not in the circle or the stadium, one has to use methods of classical nonlinear dynamics. This is explained in our article, "High power directional emission from lasers with chaotic resonators", C.Gmachl, F.Capasso, E.E.Narimanov, J.U.Nöckel, A.D.Stone, J.Faist, D.Sivco and A.Cho, Science 280, 1556 (1998) PDF, cond-mat/9806183. In this paper, the oval-resonator concept is combined with a very innovative laser material that turns out to be particularly compatible with a disk-shaped resonator geometry: the quantum cascade laser. This active material consists of a semiconductor heterostructure in which an electrical current leads to the emission of photons. But in contrast to more conventional quantum-well diode lasers, the optical transitions responsible for the creation of the photons take place exclusively within the nanostructured conduction band (between quantum well subbands). Electron-hole recombination across the valence band (the usual mechanism) is not involved here, leading to various advantages. F.Capasso and J.Faist are among the winners of the 1998 Rank Prize for the invention of the quantum cascade laser. 
The basic ideas of our work are illustrated on picture pages starting with a gallery of magazine covers and continuing with a special type of shape called the Robnik billiard (also known as the dipole shape or limacon billiard). How to learn more: What is chaos? And what in the world is quantum chaos? Chaos is not just chaos We are talking here about deterministic chaos. The term refers to the fact that even simple classical systems governed by simple equations such as Newton's laws can exhibit highly irregular motion that defies long-term predictions. One example of such a simple physical system is the double pendulum; as the following animation shows, the two degrees of freedom represented by the two angles θ and ψ are coupled, and this leads to a non-periodic, unpredictable-looking combined motion: In Optics, there is a slight confusion of terminology about the concept of chaos, because it is traditionally found (in quantum optics) when people want to describe the statistical properties of a photon source. "Chaotic light" in that context has a much shallower meaning - it just means "random" thermal distribution of photons as it is found in blackbody radiation. Chaos in the deterministic sense already has a place in optics as well, but again we have to make a distinction from our work. In multimode lasing one can look at the temporal and/or spatial evolution of the laser emission and finds that the signal can become very irregular. By mapping this behavior onto an artificial (usually many-dimensional) space, e.g. by a so-called time-delay embedding, one then sometimes finds that the system follows a trajectory on a "chaotic attractor". That's a type of structure one finds in dissipative nonlinear classical systems. This is what people have studied in nonlinear optics for a long time now. There are many lists of chaos-science links; see for example the Wikipedia article on this subject. 
For more on the relation between our work and the more traditional nonlinear optics, see below. Chaos in billiards In the classical ray picture for our microresonators, the fact that boundaries are penetrable does not (to lowest order in the wavelength) affect the shape of the trajectories, and hence our internal ray dynamics is that of a non-dissipative, closed system. The optical resonator in the ray picture is a realization of what mathematicians call a billiard. See this short article for an entertaining introduction to billiards. Only non-chaotic billiards are shown there: the circle and the ellipse (note that this math definition of a billiard doesn't conform with what we know from the local pub). But generic oval billiards display chaotic dynamics. To take the step into the world of chaotic billiards, follow this link to the polygonal and stadium billiard (among others). If you have any further questions about chaos, you may well find an answer at this informative FAQ site maintained by Jim Meiss. Further information, including a host of graphics and animations, is also available from the chaos group at the University of Maryland. Quantum Chaos Quantum chaos sounds like a contradiction in terms because linear wave equations such as the Schrödinger equation do not exhibit the sensitivity to initial conditions that gives rise to chaos. Nonetheless, classical mechanics is just a limiting case of quantum mechanics, just as ray optics is the limit of wave optics for short wavelengths. So one should expect "signatures of chaos" in the wave solutions. To find and understand these, semiclassical methods are indispensable. One of the pioneers of quantum chaos, Martin C. Gutzwiller, has written a beautiful introduction to this field in Scientific American. See in particular the third figure describing the central place of quantum chaos in our understanding of quantum mechanics. 
An important lesson here is: Playing around with the simple standard systems, such as harmonic oscillators, we barely scratch the surface of what the classical-quantum transition really entails. If we want to go beyond pedestrian descriptions of this transition, classically chaotic systems are where the action is! This also holds for much-discussed fundamental topics such as "decoherence"; see the example of the periodically "kicked" cesium atom. As a by-product, quantum chaos has brought together an arsenal of powerful techniques. My first chance to study these was a graduate course at Yale taught by Prof. Gutzwiller in 1993/94; he also accompanied my thesis work on chaotic optical cavities through discussions and as a reader at dissertation time. As it turns out, many of the intrinsic emission properties of dielectric optical resonators have a classical origin. The significance of this for quantum chaos is that comparisons between the ray model and numerical solutions of the wave equations uncover corrections to the ray model. Alternatively, one can also discover such wave corrections by comparing the ray predictions to an actual experiment. We follow both approaches. Such wave corrections become especially interesting when the underlying classical dynamics is partially chaotic, as is the case in the asymmetric dielectric resonators. In that setting, two major new effects arise: dynamical localization and dynamical tunneling. In dielectric cavities, the effect of such phenomena on resonance lifetimes and emission directionality, and of course on resonance frequencies, can be studied. Emission directionality is in itself a completely new question to investigate from the viewpoint of quantum chaos: when decay occurs, e.g., in nuclear physics or chemistry, any anisotropy of the individual process is averaged out in the observation of an ensemble - but microlasers can be looked at individually, and from various directions. 
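The kicked systems mentioned above are usually modeled classically by the Chirikov standard map, which exhibits the hallmark of deterministic chaos, exponential sensitivity to initial conditions, in two lines of algebra. A generic sketch (K = 5 puts the map deep in the chaotic regime; all the specific numbers are illustrative choices of mine):

```python
import math

def standard_map(x, p, K, n_steps):
    """Chirikov standard map, the textbook model of a periodically kicked rotor."""
    traj = []
    for _ in range(n_steps):
        p = (p + K * math.sin(x)) % (2 * math.pi)
        x = (x + p) % (2 * math.pi)
        traj.append((x, p))
    return traj

# Sensitivity to initial conditions at strong kicking:
a = standard_map(1.0, 0.5, 5.0, 30)
b = standard_map(1.0 + 1e-9, 0.5, 5.0, 30)
print("separation after 30 kicks:", abs(a[-1][0] - b[-1][0]))
```

A nanometer-scale difference in the starting angle grows to order one within a few dozen kicks, which is exactly the behavior that has no direct counterpart in the linear Schrödinger evolution and makes "signatures of chaos" a subtle question.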
If they are bounded only by a dielectric interface, the emission pattern is determined by the phase-space structure. This is an important focus of my work: the short-wavelength asymptotics of systems that are chaotic and open. What this means is illustrated in a slightly different example on a picture page describing the annular billiard. There, we studied the relation between resonance lifetimes and dynamical tunneling (since it involves tunneling into a chaotic portion of phase space, it is also called "chaos-assisted tunneling"). Is quantum chaos just a mathematical-conceptual game without relevance for experiments? Our work has been among the first to propose actual applications of quantum chaos phenomena, and to my knowledge the two patents I co-authored were the very first to rely on such phenomena. Nonlinear dynamics Chaos, belonging to the field of nonlinear dynamics, is known to laser physicists in another guise as well: pattern formation, in particular vortices and vortex lattices, due to the nonlinearity of the lasing medium, has been studied much longer than our type of chaotic phenomena which rely on the boundary effects. Of course, there can be a cross-over from one regime to the other, e.g. from nonlinear vortices to linear vortices which in a circular resonator are encountered as whispering-gallery modes. What I'm discussing above is chaos in the linear wave equation. This phenomenon often dominates the physics, especially near the lasing threshold. At higher powers the nonlinearity of the medium itself becomes more important. This is something we had earlier addressed in an invited conference contribution , and also commented on in a book chapter titled "2-d Microcavities: Theory and Experiments" . Last significant revision: 09/09/04. This page represents a compilation of information relevant to our work on microlaser resonators. Naturally, it cannot claim to be complete in any way. 
However, I felt it appropriate to provide some context because the questions we are discussing are at the interface between two fields of study that traditionally haven't had much overlap: micro-optics and quantum chaos. These fields have more in common than meets the eye. But that by no means implies that one community cannot learn from the other... Since this is a NET DOCUMENT, I am trying to refer mostly to other documents that are available online, instead of citing things printed on dead trees. But if you have something you'd like me to include, feel free to let me know. Related information is found on the following web pages: This page © Copyright Jens Uwe Nöckel, 2002-2004 Last modified: Sat Jun 22 09:20:46 PDT 2013
Symmetry, Integrability and Geometry: Methods and Applications (SIGMA) SIGMA 2 (2006), 064, 4 pages nlin.SI/0408027 On a 'Mysterious' Case of a Quadratic Hamiltonian Sergei Sakovich Institute of Physics, National Academy of Sciences, 220072 Minsk, Belarus Received June 02, 2006, in final form July 18, 2006; Published online July 28, 2006 We show that one of the five cases of a quadratic Hamiltonian, which were recently selected by Sokolov and Wolf who used the Kovalevskaya-Lyapunov test, fails to pass the Painlevé test for integrability. Key words: Hamiltonian system; nonintegrability; singularity analysis. pdf (138 kb)   ps (107 kb)   tex (7 kb) 1. Sokolov V.V., Wolf T., Integrable quadratic classical Hamiltonians on so(4) and so(3,1), J. Phys. A: Math. Gen., 2006, V.39, 1915-1926, nlin.SI/0405066. 2. Ablowitz M.J., Ramani A., Segur H., A connection between nonlinear evolution equations and ordinary differential equations of P-type. I, J. Math. Phys., 1980, V.21, 715-721. 3. Ramani A., Grammaticos B., Bountis T., The Painlevé property and singularity analysis of integrable and non-integrable systems, Phys. Rep., 1989, V.180, 159-245. 4. Tsiganov A.V., Goremykin O.V., Integrable systems on so(4) related with XXX spin chains with boundaries, J. Phys. A: Math. Gen., 2004, V.37, 4843-4849, nlin.SI/0310049. 5. Sokolov V.V., On a class of quadratic Hamiltonians on so(4), Dokl. Akad. Nauk, 2004, V.394, 602-605 (in Russian). 6. Ramani A., Dorizzi B., Grammaticos B., Painlevé conjecture revisited, Phys. Rev. Lett., 1982, V.49, 1539-1541. 7. Grammaticos B., Dorizzi B., Ramani A., Integrability of Hamiltonians with third- and fourth-degree polynomial potentials, J. Math. Phys., 1983, V.24, 2289-2295. 8. Ablowitz M.J., Clarkson P.A., Solitons, nonlinear evolution equations and inverse scattering, Cambridge, Cambridge University Press, 1991. 9. 
Sakovich S.Yu., Tsuchida T., Symmetrically coupled higher-order nonlinear Schrödinger equations: singularity analysis and integrability, J. Phys. A: Math. Gen., 2000, V.33, 7217-7226, nlin.SI/0006004. 10. Sakovich S.Yu., Tsuchida T., Coupled higher-order nonlinear Schrödinger equations: a new integrable case via the singularity analysis, nlin.SI/0002023. Previous article   Next article   Contents of Volume 2 (2006)
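The Painlevé test invoked in the abstract can be illustrated on a toy example. The sketch below is an illustration only, not the Sokolov-Wolf Hamiltonian system from the paper: it runs the first two steps of the test (dominant balance, then resonances) for the ODE w'' = 6w², whose solutions have movable double poles.

```python
import sympy as sp

x, p, a0, r, beta = sp.symbols('x p a0 r beta')

# Step 1: dominant balance. Substitute w ~ a0*(x - x0)^p (set x0 = 0)
# into w'' = 6 w^2 and match the most singular terms.
w = a0 * x**p
balance = sp.Eq(sp.diff(w, x, 2), 6 * w**2)

# Exponents must match: p - 2 = 2p  =>  p = -2 (a movable double pole)
p_val = sp.solve(sp.Eq(p - 2, 2 * p), p)[0]
a0_val = sp.solve(balance.subs(p, p_val), a0)   # leading coefficient: 0 or 1

# Step 2: resonances. Perturb w = x^-2 + beta*x^(r-2) and keep the
# terms linear in beta; their vanishing gives the resonance polynomial.
wp = x**(-2) + beta * x**(r - 2)
pert = sp.expand(sp.diff(wp, x, 2) - 6 * wp**2)
lin = pert.coeff(beta, 1)
r_vals = sp.solve(sp.simplify(lin / x**(r - 4)), r)

print(p_val, a0_val, r_vals)   # -2, [0, 1], resonances [-1, 6]
```

The resonance polynomial here is (r - 2)(r - 3) - 12 = (r + 1)(r - 6); both resonances are integers, consistent with the Painlevé property of this toy equation. For a nonintegrable case, as in the paper, non-integer resonances (or incompatibility conditions at integer resonances) signal failure of the test.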
We're making a video presentation on the topic of eigenvectors and eigenvalues. Unfortunately we have only reached the theoretical part of the discussion. Any comments on practical applications would be appreciated.

closed as no longer relevant by Andy Putman, Mark Meckes, Benoît Kloeckner, Mark Sapir, Tom Church Feb 3 '12 at 10:26

en.wikipedia.org/wiki/… –  Daniel Moskovich Sep 29 '10 at 11:48

I'd say that physics was pretty much an application of eigenvalues and eigenvectors. :-) In particular, normal modes (en.wikipedia.org/wiki/Normal_modes) of oscillation for a system with $n$ degrees of freedom come down to finding eigenvalues/vectors of an $n$-by-$n$ matrix. –  Robin Chapman Sep 29 '10 at 11:50

Please add more context: Who is your intended audience, and what scientific background can be assumed? What form is the presentation (e.g., video lecture like OpenCourseWare, animated demo like the Geometry Center, interpretive dance, etc.)? –  S. Carnahan Sep 29 '10 at 12:00

10 Answers

The problem of ranking the outcomes of a search engine like Google is solved in terms of an invariant measure on the net, seen as a Markov chain. Finding the invariant measure requires the spectral analysis of the associated matrix.

I would comment on Pietro's answer, but I don't have enough reputation; for a marvelously-titled explanation of Google's PageRank, see "The $25,000,000,000 Eigenvector".

Google's PageRank system is most likely the most canonical example; however, others include:

- Dynamical systems: If you are able to express a model in terms of a matrix acting on vectors, you can look at the iterations and ask what occurs. This can be done to model the life cycle of some species in an environment (bacteria on a petri dish, wolf/sheep interaction, the Fibonacci sequence as the spread of a population of bunnies, etc.).
These examples are fairly small, but you can certainly have massive systems to model, and if your matrix is diagonalizable, the iterations of this map correspond to iterations of a diagonal matrix (very easy to do!) instead of the standard $m^{2}$ operations to multiply out an $m\times m$ matrix. Think about a $1\,000\,000 \times 1\,000\,000$ matrix $M$ where you're looking at whether a certain species will die out (i.e., iterating $M^{n}$ and checking as $n\to\infty$; quite the time saver!).

- Graph theory: As an undergrad, one of my summer research projects looked into special graphs called (3,6)-fullerenes. We found that, looking at the adjacency matrix of the graph, one could pick 3 well-chosen eigenvalues and their corresponding eigenvectors and generate nice 3d plots of the graphs, whereas other choices would produce degenerate images involving some twisted 2d surface.

- Differential equations: One can use eigenvalues and eigenvectors to express the solutions to certain differential equations, which is one of the main reasons the theory was developed in the first place!

I would highly recommend reading the Wikipedia article, as it covers many more examples than any one reply here will likely contain, with examples along the way! (Schrödinger equation, molecular orbitals, geology and glaciology, factor analysis, vibration analysis, eigenfaces, tensor of inertia, stress tensor, eigenvalues of a graph)

All of quantum mechanics is based on the notion of eigenvectors and eigenvalues. Observables are represented by Hermitian operators $Q$, their determinate states are eigenvectors of $Q$, and a measurement of the observable can only yield an eigenvalue of the corresponding operator $Q$. If you measure an observable in the state $\psi$ of a system and find as a result the eigenvalue $a$, the state of the system just after the measurement will be the normed projection of $\psi$ onto the eigenvector associated to $a$. And so on and so forth.
Of course quantum physics is not mathematically trivial: the arena is infinite-dimensional Hilbert space (or more complicated functional-analytic structures like Gelfand triples), operators are not bounded, etc. However, in the extremely fast-growing field of quantum computing, the algebra is mostly limited to finite-dimensional spaces and their operators. Finally, let me mention that Frank Wilczek, a winner of the 2004 Nobel Prize in Physics, has interestingly reminisced that as a student he found quantum mechanics easier than classical mechanics because of its nice axiomatization alluded to above.

For visual appeal, you should look into the area of pendulums. There is a good demonstration with swinging bottles, I recall, and this does depend on eigenvalues that are nearly equal. Do a Web search on "coupled pendulums".

Principal Component Analysis is a way of identifying patterns in data, and expressing the data in such a way as to highlight their similarities and differences. It is very difficult to visualize data in high-dimensional space, but PCA can be used there to analyze the data. From the data set a covariance matrix is formed, and then the eigenvalues and eigenvectors of that covariance matrix are found. These eigenvalues and eigenvectors can then be compared to figure out the contribution of a particular feature in the data set. Thus PCA can be successfully applied to reduce the dimension of the data.

In telecommunications, the so-called "beam-forming" algorithm in the case of multiple antennas requires calculation of eigenvectors.

I think the book *Spectra of Graphs: Theory and Applications* by Dragoš M. Cvetković, Michael Doob and Horst Sachs is a very good source for practical applications of eigenvalues and eigenvectors.
In communication theory, coding theory and cryptography, the minimum distance of a code is a very important parameter in decoding, and also in code-based cryptography (for example the McEliece cryptosystem). It is interesting that the second largest eigenvalue of the graph related to a code can determine a good lower bound for the minimum distance of the code.

Another interesting application is rigid body rotation theory. No matter how complicated an object looks, there is always (at least) one set of three mutually orthogonal directions around which it can rotate perfectly without precession. Maybe not something you can base a whole lecture on, but it's a nice remark.
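The PageRank answers above boil down to the power method: repeatedly applying the (damped) link matrix to a probability vector converges to its dominant eigenvector, the one with eigenvalue 1. A minimal sketch in NumPy, where the 4-page link matrix is made up for illustration:

```python
import numpy as np

# Column-stochastic link matrix of a made-up 4-page web:
# entry (i, j) is the probability of following a link from page j to page i.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.5, 0.5, 0.0]])

d = 0.85                                   # the usual PageRank damping factor
n = P.shape[0]
G = d * P + (1 - d) / n * np.ones((n, n))  # "Google matrix": still column-stochastic

r = np.ones(n) / n                         # start from the uniform distribution
for _ in range(200):
    r = G @ r                              # power iteration

# r is now (numerically) a fixed point: G r = r, i.e. an eigenvector of G
# with eigenvalue 1, normalised so its entries sum to 1.
print(np.allclose(G @ r, r), round(r.sum(), 6))   # True 1.0
```

The same loop applied to a covariance matrix (with renormalisation of r at each step) yields the leading eigenvector, i.e. the first principal component in the PCA answer above.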
L^2 in spherical coordinates.

1. Sep 29, 2006 #1

I am trying to calculate L^2 in spherical coordinates. L^2 is the square of L, the angular momentum operator. I know L in spherical coordinates. This L in spherical coordinates has only 2 components: one in the direction of the theta unit vector and one in the direction of the phi unit vector. I get the correct result for L^2 by substituting cartesian values for the theta and phi unit vectors in L, and then squaring and adding the components. I do not get the correct result by simply squaring and adding the theta and phi components of L directly. Why not? Surely if this were a classical vector whose components are scalars rather than operators, I could find its norm squared either way, couldn't I?

3. Sep 30, 2006 #2

Before I can answer your question, I'll need to see how you've got L in spherical co-ordinates.

4. Sep 30, 2006 #3

L = - i * h * (r x nabla) = - i * h * ( u_phi * d/dtheta - u_theta/sin(theta) * d/dphi )

where h = hbar, nabla = grad operator, u_phi and u_theta = phi and theta unit vectors, x = vector product. Substituting cartesian values

u_theta = (cos(theta)*cos(phi), cos(theta)*sin(phi), -sin(theta))
u_phi = (-sin(phi), cos(phi), 0)

and squaring and adding the components gives the desired result for L^2:

L^2 = -h^2 * (1/sin(theta)^2 * d^2/dphi^2 + 1/sin(theta) * d/dtheta (sin(theta) * d/dtheta))

Simply squaring and adding the components of L does not seem to give this result.

5. Sep 30, 2006 #4

Oh I see. As far as I'm aware, the reasoning behind this is not entirely obvious. In classical Hamiltonian mechanics, the physics of a system with N degrees of freedom can be formulated in terms of 2N variables. Traditionally, these are the position and conjugate momentum in the various dimensions. However, there are classes of variables called canonical variables, and any of these can be used to do Hamiltonian mechanics.
In going to spherical co-ordinates, you are suggesting using [itex]r, \theta, \phi[/itex] and the associated derivatives (for the momenta). The reason all this is important is that the prescription for going from classical to quantum mechanics is to promote the Poisson brackets to commutators, and the functions of position and momenta to functions of the associated operators. I think it boils down to which derivative operators we need for the momentum operators to be canonical variables (and I'm guessing that the extra [itex]\sin(\theta)\mbox{'s}[/itex] appear because of that).

6. Sep 30, 2006 #5

Thanks for your response. I can't say I completely understand it though. In quantum mechanics, as it is being taught to me, we never used classical Hamiltonian mechanics (except for the Hamiltonian in the Schrödinger equation). Rather, we converted from classical mechanics to quantum mechanics by replacing the momentum with -i * h * nabla and the energy by i * h * d/dt.

7. Oct 1, 2006 #6

I haven't done the proof, but the issue may be that the derivatives of the basis vectors are not zero. For example, d/dphi u_theta = cos(theta) u_phi.

8. Oct 2, 2006 #7

Yes, I see, you are correct. How dumb of me to not see that. Purely out of curiosity: is it even possible to do this in spherical coordinates directly? I mean, using the correct derivatives when "squaring" the components will give me another vector operator, but the end result (L^2) is an operator whose result is a scalar (rather than a vector).

9. Oct 2, 2006 #8

NB: throughout, I use the standard spherical co-ordinate transformation.

Hmm. I'm sure that's not enough to explain it.
The Lagrangian in spherical polars is given by:

[tex]L = \frac{1}{2}m(\dot{r}^2 + r^2\dot{\theta}^2 + r^2\sin^2{\theta}\,\dot{\phi}^2)[/tex]

This gives the momenta as:

[tex]p_r = \partial L / \partial \dot{r} = m\dot{r}[/tex]

[tex]p_\theta = \partial L / \partial \dot{\theta} = mr^2\dot{\theta}[/tex]

[tex]p_\phi = \partial L / \partial \dot{\phi} = mr^2 \sin^2{\theta}\,\dot{\phi}[/tex]

The reason this is important is because:

[tex][\hat{q}_i,\hat{p}_j] = i\hbar\{q_i,p_j\} = i\hbar\delta_{ij}[/tex]

where it is understood that q, p are the variables the problem is formulated in, [itex]\hat{q}, \hat{p}[/itex] are the associated position and momentum operators, and the curly brackets are Poisson brackets. What this means is that if we are to do our problem in a new set of variables, we must find what the momentum corresponds to, and then replace those with the operators [itex]-i\hbar\partial / \partial q_i[/itex]. So:

[tex]\hat{p}_r = -i\hbar\partial / \partial r[/tex]

[tex]\hat{p}_\theta = -i\hbar\partial / \partial \theta[/tex]

[tex]\hat{p}_\phi = -i\hbar\partial / \partial \phi[/tex]

In spherical polars, the cartesian components of angular momentum are given by:

[tex]L_x=-p_\theta \sin{\phi}\cos{\phi}\cos{\theta}-\frac{p_\phi}{\sin{\theta}}[/tex]

[tex]L_y=-p_\theta \sin{\phi}\cos{\phi}\sin{\theta}-\frac{p_\phi\cos{\theta}}{\sin^2{\theta}}[/tex]

[tex]L_z=p_\theta \sin^2{\theta}[/tex]

where [itex]p_\theta, p_\phi[/itex] are the canonical momenta of the [itex]\theta, \phi[/itex] variables. This was obtained by rewriting the cartesian components of L (i.e. [itex]L_x = yp_z - zp_y = my\dot{z}-mz\dot{y}[/itex]) in spherical polars. Now by doing our quantisation (i.e.
replacing classical variables with their corresponding operators, whose commutators correspond to the classical Poisson brackets):

[tex]\hat{L}_x = -i\hbar\left(\sin{\phi}\cos{\phi}\cos{\theta}\frac{\partial}{\partial \theta}-\frac{1}{\sin{\theta}}\frac{\partial}{\partial \phi}\right)[/tex]

[tex]\hat{L}_y = -i\hbar\left(\sin{\phi}\cos{\phi}\sin{\theta}\frac{\partial}{\partial \theta}-\frac{\cos{\theta}}{\sin^2{\theta}}\frac{\partial}{\partial \phi}\right)[/tex]

[tex]\hat{L}_z = -i\hbar \sin^2{\theta}\frac{\partial}{\partial \theta}[/tex]

All that remains is to square these operators (remembering that they apply to functions on the right-hand side; this ensures that the product/Leibniz rule is applied accordingly) and add them up to see what [itex]\hat{L}^2[/itex] looks like in spherical polars. I'm not 100% sure I'm on the right track here, but as far as I know I haven't made any mistakes. If I had the inclination/time to square those operators and sum them, I might have found out...

EDIT: lots of edits to get the [itex]\LaTeX[/itex] right.

Last edited: Oct 2, 2006
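Post #6's observation (the spherical unit vectors have nonzero derivatives, so the phi-derivatives act on the trig coefficients too) can be checked symbolically. The sketch below uses SymPy and the standard textbook spherical-coordinate forms of the Cartesian operators L_x, L_y, L_z with hbar = 1 (an assumption for brevity; these are not the expressions derived in post #8), and verifies that their squares sum to the L^2 operator quoted in post #3.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
f = sp.Function('f')(theta, phi)

# Standard textbook Cartesian angular-momentum operators in spherical
# coordinates (hbar = 1). Note each operator also differentiates the
# trig coefficients of whatever it acts on: that is post #6's point.
def Lx(g):
    return sp.I * (sp.sin(phi) * sp.diff(g, theta)
                   + sp.cos(phi) * sp.cot(theta) * sp.diff(g, phi))

def Ly(g):
    return sp.I * (-sp.cos(phi) * sp.diff(g, theta)
                   + sp.sin(phi) * sp.cot(theta) * sp.diff(g, phi))

def Lz(g):
    return -sp.I * sp.diff(g, phi)

# L^2 f = (Lx^2 + Ly^2 + Lz^2) f, applying each operator twice
L2f = Lx(Lx(f)) + Ly(Ly(f)) + Lz(Lz(f))

# The operator quoted in post #3:
# L^2 = -(1/sin^2(t) d^2/dphi^2 + 1/sin(t) d/dt (sin(t) d/dt)), hbar = 1
expected = -(sp.diff(f, phi, 2) / sp.sin(theta)**2
             + sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / sp.sin(theta))

print(sp.simplify(L2f - expected))   # prints 0: the square-sum is L^2
```

Squaring the theta and phi components of L naively, without letting the derivatives act on the coefficients, does not reproduce this result, which is exactly the discrepancy raised in the first post.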
Wednesday, 26 November 2014

The Radiating Atom 2: Those Damn Quantum Jumps

If we are going to have to put up with those damn quantum jumps, I am sorry I ever had anything to do with quantum theory.

Schrödinger formulated the Schrödinger equation as the foundation of quantum mechanics in 1926, but his equation was then hijacked by Bohr, Born and Heisenberg, who gave it a meaning as statistics of discrete energy quanta, which Schrödinger could not accept and which forced him out of business. Schrödinger returned to the subject in 1952 in his article Are There Quantum Jumps?, seeking to resurrect quantum mechanics as a wave mechanics of resonances without any need of particles and discrete energy quanta or light quanta (photons). Schrödinger's view was presented in the previous post. Consider interference resonance in a superposition (a linear combination with real, say, coefficients $c_1$ and $c_2$):

• $\psi (x,t) = c_1\psi_1(x,t)+c_2\psi_2(x,t)$

• $ih\frac{\partial\psi_j}{\partial t} + H\psi_j = 0$ for $j=1,2$, with $\psi_j(x,t)=e^{i\nu_jt}\phi_j(x)$, where $H\phi_1=E_1\phi_1$ and $H\phi_2=E_2\phi_2$ with $E_1=h\nu_1$ and $E_2=h\nu_2$, and $H$ is the Hamiltonian operator acting with respect to a space coordinate $x$; thus $\phi_1$ and $\phi_2$ are (real, say) eigenfunctions of the Hamiltonian with eigenvalues $E_1$ and $E_2$ and corresponding frequencies $\nu_1$ and $\nu_2$ (with $\nu_2 > \nu_1$).

With

• $\rho (x,t) = \vert\psi (x,t)\vert^2 = \psi (x,t)\overline{\psi (x,t)}$

as a measure of electronic charge distribution, direct computation shows that

• $\rho (x,t) = c_1^2\phi_1(x)^2+c_2^2\phi_2(x)^2 + 2c_1c_2\phi_1(x)\phi_2(x)\cos((\nu_2 -\nu_1)t)$.

We see that if either $c_1=0$ or $c_2=0$, then the electronic charge distribution $\rho$ is constant in time and thus does not generate any electromagnetic radiation. An atom in a simple eigenstate such as the ground state does not radiate.
On the other hand, in a real superposition with $c_1c_2 > 0$, the electronic charge varies in time with frequency $\nu_2-\nu_1$, and thus generates electromagnetic radiation according to the Abraham-Lorentz law or Larmor formula, which states that radiation power is proportional to the square of charge acceleration. This means that an electron in a true superposition of two eigenstates of different frequencies must radiate, and thus needs external forcing to persist. This is what happens in emission/absorption spectrography, with a hot/cold gas emitting/absorbing light of specific frequencies. This phenomenon of interference in superposition is the (sincere and true Schrödinger) rationale for the Planck-Einstein relation

• $h\nu = E$       with $E=h\nu_2 - h\nu_1$,

which Bohr-Heisenberg-Born instead viewed as a difference in "energy" between two states, with $h\nu$ a so-called "quantum of energy" supposedly being emitted/absorbed when an electron "jumps" between two eigenstates.

Schrödinger's main point is that there is no need to introduce any concept of "energy quanta" and electron "jumps" to give the relation $h\nu = E = h\nu_2 -h\nu_1$ a meaning, because its (sincere and true) meaning is that the frequency $\nu$ emitted from a superposition is simply equal to the difference $\nu_2 -\nu_1$, that is, a beat frequency. This is highly remarkable and gives strong support to Schrödinger's view. But without energy quanta the quantum mechanics of Bohr-Heisenberg-Born has no meaning, and that is why Schrödinger left the field in dismay.

It remains to continue from where Schrödinger ended in 1952 (or 1927). My idea is then to extend the analysis in Mathematical Physics of Blackbody Radiation (proving Planck's radiation law using finite precision wave mechanics without the statistics of energy quanta used by Planck in his proof) to atomic physics, following the (Vedanta) spirit of Schrödinger.
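The beat-frequency reading of $h\nu = h\nu_2 - h\nu_1$ is easy to test numerically. The sketch below is an illustration with two particle-in-a-box eigenstates; the box potential, units ($\hbar = m = 1$) and coefficients are assumptions for the demo, not taken from the post. It confirms that the charge density $\vert\psi\vert^2$ of the superposition oscillates exactly at the difference frequency $E_2 - E_1$.

```python
import numpy as np

# Particle-in-a-box eigenstates on [0, 1], with hbar = m = 1 so that
# E_n = (n*pi)^2 / 2. The box is an assumed toy system for illustration.
def phi(n, x):
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

E1, E2 = (np.pi**2) / 2, (2 * np.pi)**2 / 2
c1, c2 = 0.6, 0.8                        # real coefficients, c1^2 + c2^2 = 1
x0 = 0.3                                 # fixed observation point
t = np.linspace(0, 4, 2000)

# Superposition psi(x0, t) and charge density rho = |psi|^2
psi = (c1 * phi(1, x0) * np.exp(-1j * E1 * t)
       + c2 * phi(2, x0) * np.exp(-1j * E2 * t))
rho = np.abs(psi)**2

# The post's formula: constant terms plus a beat at frequency E2 - E1
rho_beat = ((c1 * phi(1, x0))**2 + (c2 * phi(2, x0))**2
            + 2 * c1 * c2 * phi(1, x0) * phi(2, x0) * np.cos((E2 - E1) * t))

print(np.max(np.abs(rho - rho_beat)))    # machine-precision zero
```

In a pure eigenstate (c1 = 0 or c2 = 0) the cosine term vanishes and rho is constant in time, matching the statement above that an atom in a simple eigenstate does not radiate.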