# Maximize the number of subarrays with XOR as zero in C++

We are given an array Arr[] containing integer values. The goal is to find the maximum number of subarrays whose XOR can be made 0. The bits of the elements of any subarray can be swapped any number of times.

Note − 1 <= Arr[i] <= 10^18

In order to make any subarray’s XOR 0 by swapping bits, two conditions have to be met −

• The total number of set bits in the range left to right is even.
• Twice the largest set-bit count of any single element in the range is at most the total set-bit count of the range (2 * max <= sum).

Let us see various input output scenarios for this −

In − Arr[] = { 1, 2, 5, 4 }

Out −
Subarrays satisfying only 1st condition : 4
Subarrays satisfying both conditions : 3

In − Arr[] = { 3, 7, 2, 9 }

Out −
Subarrays satisfying only 1st condition : 6
Subarrays satisfying both conditions : 3

## Approach used in the below program is as follows −

In this approach we first count the subarrays satisfying condition 1 using a prefix-parity trick, then subtract the ones that fail condition 2.

• Take the input array Arr[] and calculate its length.
• Function removeSubarr() returns the count of subarrays that satisfy condition 1 but not condition 2.
• Take the initial count as 0.
• Iterate over the array using a for loop, maintaining variables sum and maxVal.
• Use an inner for loop over a window of at most 60 elements, as beyond 60 elements condition 2 can never fail (every element contributes at least one set bit, and no element up to 10^18 has more than 60).
• Add each element to sum and keep the maximum seen so far in maxVal.
• If sum is even and 2 * maxVal > sum then increment count, as condition 2 is not met.
• At the end of both loops return count.
• Function findSubarrays() takes the input array and its length and returns the count of subarrays satisfying both conditions mentioned above.
• Take a prefix array to calculate the count of subarrays that follow condition 1 only.
• Traverse the array using a for loop and replace each element with __builtin_popcountll(arr1[i]), the number of set bits in it.
• Populate the prefix array using a for loop: prefix[0] = arr1[0], and prefix[i] = prefix[i] + prefix[i - 1] for every element except the first.
• Count the odd and even values in the prefix array; increment evencount once more to account for the empty prefix, whose sum 0 is even.
• Set tmp1 = ( oddcount * (oddcount - 1) ) / 2 and tmp2 = ( evencount * (evencount - 1) ) / 2 and result as the sum of both; any two prefixes of equal parity bound a subarray with an even number of set bits.
• Result is now the count of subarrays satisfying condition 1 only. Print it.
• Now update result with result = result - removeSubarr(arr1, len1).
• Now result contains the count of subarrays satisfying both conditions. Print it again.

## Example

```cpp
#include <bits/stdc++.h>
using namespace std;

// Function to count subarrays that satisfy condition 1 but not condition 2.
// By the time it is called, arr[] holds set-bit counts, not the original values.
int removeSubarr(int arr[], int len){
   int count = 0;
   for (int i = 0; i < len; i++){
      int sum = 0;
      int maxVal = 0;
      // Beyond a window of 60 elements condition 2 can never fail
      for (int j = i; j < min(len, i + 60); j++){
         sum = sum + arr[j];
         maxVal = arr[j] > maxVal ? arr[j] : maxVal;
         if (sum % 2 == 0 && 2 * maxVal > sum){
            count++;
         }
      }
   }
   return count;
}

int findSubarrays(int arr1[], int len1){
   int prefix[len1];
   int oddcount, evencount;
   int result;
   // Replace each element with its number of set bits
   for (int i = 0; i < len1; i++){
      arr1[i] = __builtin_popcountll(arr1[i]);
   }
   // Prefix sums of the set-bit counts
   for (int i = 0; i < len1; i++){
      prefix[i] = arr1[i];
      if (i != 0){
         prefix[i] = prefix[i] + prefix[i - 1];
      }
   }
   oddcount = evencount = 0;
   for (int i = 0; i < len1; i++){
      if (prefix[i] % 2 == 0){
         evencount = evencount + 1;
      } else {
         oddcount = oddcount + 1;
      }
   }
   evencount++; // the empty prefix (sum 0) counts as even
   int tmp1 = (oddcount * (oddcount - 1)) / 2;
   int tmp2 = (evencount * (evencount - 1)) / 2;
   result = tmp1 + tmp2;
   cout << "Subarrays satisfying only 1st condition : " << result << endl;
   cout << "Subarrays satisfying both condition : ";
   result = result - removeSubarr(arr1, len1);
   return result;
}

int main(){
   // Note: for values approaching the stated bound of 10^18,
   // Arr[] should be declared long long rather than int
   int Arr[] = { 1, 2, 5, 4 };
   int length = sizeof(Arr) / sizeof(Arr[0]);
   cout << findSubarrays(Arr, length);
   return 0;
}
```

## Output

If we run the above code it will generate the following output −

Subarrays satisfying only 1st condition : 4
Subarrays satisfying both condition : 3

Published on 22-Oct-2021 08:52:03
## Accepted Submissions

### Revisiting Spatial Invariance with Low-Rank Local Connectivity
Elsayed, Gamaleldin F*; Ramachandran, Prajit; Shlens, Jonathon; Kornblith, Simon

Convolutional neural networks are among the most successful architectures in deep learning. This success is at least partially attributable to the efficacy of spatial invariance as an inductive bias. Locally connected layers, which differ from convolutional layers only in their lack of spatial invariance, usually perform poorly in practice. However, these observations still leave open the possibility that some degree of relaxation of spatial invariance may yield a better inductive bias than either convolution or local connectivity. To test this hypothesis, we design a method to relax the spatial invariance of a network layer in a controlled manner. In particular, we create a low-rank locally connected (LRLC) layer, where the kernel applied at each position is constructed as a linear combination of basis kernels with spatially varying combining weights. By varying the number of basis kernels, we can control the degree of relaxation of spatial invariance. In experiments with small convolutional networks, we find that relaxing spatial invariance improves classification accuracy over both convolution and locally connected layers across MNIST, CIFAR-10, and CelebA datasets. These results suggest that spatial invariance may be an overly restrictive inductive bias.

### Neural Additive Models: Interpretable Machine Learning with Neural Nets
Agarwal, Rishabh*; Frosst, Nicholas; Zhang, Xuezhou; Caruana, Rich; Hinton, Geoffrey

The accuracy of deep neural networks (DNNs) comes at the cost of intelligibility: it is usually unclear how they make their decisions. This hinders their applicability to high stakes decision-making domains such as healthcare.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks that each attend to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output. Our experiments on regression and classification datasets show that NAMs are more accurate than widely used intelligible models such as logistic regression and shallow decision trees. They perform similarly to existing state-of-the-art generalized additive models in accuracy, but can be more easily applied to real-world problems.

### Bandit-based Monte Carlo Optimization for Nearest Neighbors
Bagaria, Vivek; Baharav, Tavor Z*; Kamath, Govinda; Tse, David

The celebrated Monte Carlo method estimates an expensive-to-compute quantity by random sampling. Bandit-based Monte Carlo optimization (BMO) is a general technique for computing the minimum of many such expensive-to-compute quantities by adaptive random sampling. The technique converts an optimization problem into a statistical estimation problem which is then solved via multi-armed bandits. We apply this technique to solve the important problem of high-dimensional k-nearest neighbors. We show that this technique allows us to develop an algorithm which can confer significant gains on real datasets over both exact computation (up to 100x in number of operations and 30x in wall-clock time) and state-of-the-art algorithms such as K-graph, NGT, and LSH. We provide theoretical guarantees and show that under regularity assumptions the complexity of this algorithm scales logarithmically with the dimension of the data rather than linearly as in exact computation.
### LassoNet: A Neural Network with Feature Sparsity
Lemhadri, Ismael*; Tibshirani, Rob; Ruan, Feng

Much work has been done recently to make neural networks more interpretable, and one obvious approach is to arrange for the network to use only a subset of the available features. In linear models, Lasso (or L1-regularized) regression assigns zero weights to the most irrelevant or redundant features, and is widely used in data science. However, the Lasso only applies to linear models. Here we introduce LassoNet, a neural network framework with global feature selection. Our approach enforces a hierarchy: specifically, a feature can participate in a hidden unit only if its linear representative is active. Unlike other approaches to feature selection for neural nets, our method uses a modified objective function with constraints, and so integrates feature selection with the parameter learning directly. As a result, it delivers an entire regularization path of solutions with a range of feature sparsity. In systematic experiments, LassoNet significantly outperforms state-of-the-art methods for feature selection and regression. The LassoNet method uses projected proximal gradient descent, and generalizes directly to deep networks. It can be implemented by adding just a few lines of code to a standard neural network.

### Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks
Ruiz, Nataniel*; Bargal, Sarah; Sclaroff, Stan

In this work, we develop efficient disruptions of black-box image translation deepfake generation systems. We are the first to demonstrate black-box deepfake generation disruption by presenting image translation formulations of attacks initially proposed for classification models. Nevertheless, a naive adaptation of classification black-box attacks results in a prohibitive number of queries for image translation systems in the real world.
We present a frustratingly simple yet highly effective algorithm, Leaking Universal Perturbations (LUP), that significantly reduces the number of queries needed to attack an image. LUP consists of two phases: (1) a short leaking phase where we attack the network using traditional black-box attacks and gather information on successful attacks on a small dataset, and (2) an exploitation phase where we leverage said information to subsequently attack the network with improved efficiency. Our attack reduces the total number of queries necessary to attack GANimation and StarGAN by 30%.

### What is being transferred in transfer learning?
Neyshabur, Behnam; Sedghi, Hanie*; Zhang, Chiyuan

One desired capability for machines is the ability to transfer their understanding of one domain to another domain where data is (usually) scarce. Despite ample adaptation of transfer learning in many deep learning applications, we yet do not understand what enables a successful transfer and which part of the network is responsible for that. In this paper, we provide new tools and analysis to address these fundamental questions. Through a series of analyses on transferring to block-shuffled images, we separate the effect of feature reuse from learning high-level statistics of data and show that some benefit of transfer learning comes from the latter. We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.

### VP-FO: A Variable Projection Method for Training Neural Networks
Sahiner, Arda*; Pauly, John; Pilanci, Mert

We propose a new optimization method for training neural networks for regression problems, built upon the success of the Variable Projection method for separable non-linear least squares problems.
This Variable Projection approach eliminates the final-layer weights of a network by observing that the optimal values of these weights can be solved for in closed form when the weights of the remaining layers are considered fixed. We propose minimizing the Variable Projection loss with first-order optimization methods, which allows for scalability at any network depth, and can easily be incorporated into existing neural network training pipelines. We extensively demonstrate the effectiveness of our approach for training neural networks, in both training time and performance, for applications such as image auto-encoders.

### Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization
Cao, Kaidi*; Chen, Yining; Lu, Junwei; Arechiga, Nikos; Gaidon, Adrien; Ma, Tengyu

Real-world large-scale datasets are heteroskedastic and imbalanced -- labels have varying levels of uncertainty and label distributions are long-tailed. Heteroskedasticity and imbalance challenge deep learning algorithms due to the difficulty of distinguishing among mislabeled, ambiguous, and rare examples. Addressing heteroskedasticity and imbalance simultaneously is under-explored. We propose a data-dependent regularization technique for heteroskedastic datasets that regularizes different regions of the input space differently. Inspired by the theoretical derivation of the optimal regularization strength in a one-dimensional nonparametric classification setting, our approach adaptively regularizes the data points in higher-uncertainty, lower-density regions more heavily. We test our method on several benchmark tasks, including a real-world heteroskedastic and imbalanced dataset, WebVision. Our experiments corroborate our theory and demonstrate a significant improvement over other methods in noise-robust deep learning.
### siVAE: interpreting latent dimensions within variational autoencoders
Choi, Yongin*; Quon, Gerald

Interpretation of variational autoencoders (VAE) to measure contributions of input features to latent dimensions remains challenging because feature contributions are implicit in the trained parameters and choice of architecture of the VAE. Here we propose a scalable, interpretable variational autoencoder (siVAE), a Bayesian extension of VAEs that is interpretable by design: it learns feature embeddings that guide the interpretation of the sample embeddings, in a manner analogous to factor loadings of factor analysis. siVAE is as powerful and nearly as fast to train as the standard VAE, but achieves full interpretability of the latent dimensions, as well as all hidden layers of the decoder. We also introduce a new interpretability measure, feature awareness, that captures which features each layer of the siVAE model focuses on reconstructing well for each input sample. Training siVAE on a dataset exceeding 1 million samples and 28,000 samples is between 12 and 2,292 times faster than applying existing feature attribution methods to a trained VAE.

### Learning to grow: control of materials self-assembly using evolutionary reinforcement learning
Whitelam, Stephen*

We show that neural networks trained by evolutionary reinforcement learning can enact efficient molecular self-assembly protocols. Presented with molecular simulation trajectories, networks learn to change temperature and chemical potential in order to promote the assembly of desired structures or choose between competing polymorphs. In the first case, networks reproduce in a qualitative sense the results of previously-known protocols, but faster and with higher fidelity; in the second case they identify strategies previously unknown, from which we can extract physical insight.
Networks that take as input the elapsed time of the simulation or microscopic information from the system are both effective, the latter more so. The evolutionary scheme we have used is simple to implement and can be applied to a broad range of examples of experimental self-assembly, whether or not one can monitor the experiment as it proceeds. Our results have been achieved with no human input beyond the specification of which order parameter to promote, pointing the way to the design of synthesis protocols by artificial intelligence.

### Neural Anisotropy Directions
Ortiz-Jimenez, Guillermo*; Modas, Apostolos; Moosavi-Dezfooli, Seyed-Mohsen; Frossard, Pascal

In this work, we analyze the role of the network architecture in shaping the inductive bias of deep classifiers. To that end, we start by focusing on a very simple problem, i.e., classifying a class of linearly separable distributions, and show that, depending on the direction of the discriminative feature of the distribution, many state-of-the-art deep convolutional neural networks (CNNs) have a surprisingly hard time solving this simple task. We then define as neural anisotropy directions (NADs) the vectors that encapsulate the directional inductive bias of an architecture. These vectors, which are specific for each architecture and hence act as a signature, encode the preference of a network to separate the input data based on some particular features. We provide an efficient method to identify NADs for several CNN architectures and thus reveal their directional inductive biases. Furthermore, we show that, for the CIFAR-10 dataset, NADs characterize features used by CNNs to discriminate between different classes.

### Self-supervised Learning for Deep Models in Recommendations
Yao, Tiansheng*; Yi, Xinyang; Cheng, Zhiyuan; Hong, Lichan; Chi, Ed H.; Yu, Felix

Large scale neural recommender models play a critical role in modern search and recommendation systems.
With millions to billions of items to choose from, the quality of learned query and item representations is crucial to recommendation quality. Inspired by the recent success in self-supervised representation learning research in both computer vision and natural language understanding, we propose a multi-task self-supervised learning (SSL) framework for sparse neural models in recommendations. Furthermore, we propose two highly generalizable SSL tasks: (i) Feature Masking (FM) and (ii) Feature Dropout (FD) within the proposed framework. We evaluate our framework using two large-scale datasets with ~500M and 1B training examples respectively. Our results demonstrate that the proposed framework outperforms baseline models and state-of-the-art spread-out regularization techniques in the context of retrieval. The SSL framework shows larger improvement with less supervision compared to the counterparts.

### Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization
Matsuhima, Tatsuya*; Furuta, Hiroki; Matsuo, Yutaka; Nachum, Ofir; Gu, Shixiang

Most reinforcement learning (RL) algorithms assume online access to the environment, in which one may readily interleave updates to the policy with experience collection using that policy. However, in many real-world applications such as health, education, dialogue agents, and robotics, the cost or potential risk of deploying a new data-collection policy is high, to the point that it can become prohibitive to update the data-collection policy more than a few times during learning. With this view, we propose a novel concept of deployment efficiency, measuring the number of distinct data-collection policies that are used during policy learning. We observe that naïvely applying existing model-free offline RL algorithms recursively does not lead to a practical deployment-efficient and sample-efficient algorithm.
We then propose a novel model-based algorithm, Behavior-Regularized Model-ENsemble (BREMEN), which is able to achieve impressive deployment efficiency while maintaining the same or better sample efficiency, learning successful policies from scratch on simulated robotic environments with only 5-10 deployments, compared to typical values of hundreds to millions in standard RL baselines.

### Learning Multi-granular Quantized Embeddings for Large-Vocab Categorical Features in Recommender Systems
Kang, Wangcheng*; Cheng, Zhiyuan; Chen, Ting; Yi, Xinyang; Lin, Dong; Hong, Lichan; Chi, Ed H.

Recommender system models often represent various sparse features like users, items, and categorical features via one-hot embeddings. However, a large vocabulary inevitably leads to a gigantic embedding table, creating two severe problems: (i) making model serving intractable in resource-constrained environments; (ii) causing overfitting problems. We seek to learn highly compact embeddings for large-vocab sparse features in recommender systems (recsys). First, we show that the novel Differentiable Product Quantization (DPQ) approach can generalize to recsys problems. In addition, to better handle the power-law data distribution commonly seen in recsys, we propose a Multi-Granular Quantized Embeddings (MGQE) technique which learns more compact embeddings for infrequent items. Preliminary experiments on three recommendation tasks and two datasets show that we can achieve on par or better performance, with only ∼20% of the original model size.

### Temperature check: theory and practice for training models with softmax-cross-entropy losses
Agarwala, Atish*; Schoenholz, Samuel S; Pennington, Jeffrey; Dauphin, Yann

The softmax-cross-entropy loss function is a principled way of modeling probability distributions that has become ubiquitous in deep learning.
While its lone hyperparameter, the temperature, is commonly set to one or regarded as a way to tune the model's confidence after training, less is known about how the temperature impacts training dynamics or generalization performance. In this work, we develop a theory of early learning for models trained with softmax-cross-entropy loss and show that the learning dynamics depend crucially on the inverse temperature $\beta$ as well as the initial training set logit magnitude $||\beta z||_{F}$. Empirically, we find that generalization performance depends strongly on the temperature even though the model's final confidence does not. It follows that the addition of $\beta$ as a tunable hyperparameter is key to maximizing model performance, which we demonstrate by showing that optimizing $\beta$ increases the performance of ResNet-50 trained on ImageNet. Together these results underscore the importance of tuning the softmax temperature and provide qualitative guidance in performing this tuning.

### Autofocused oracles for design
Fannjiang, Clara*; Listgarten, Jennifer

Data-driven design is making headway into a number of application areas, including protein, small-molecule, and materials engineering. The design goal is to construct an object with desired properties, such as a material that exhibits superconductivity at higher temperatures than previously ever observed. To that end, costly experimental measurements are being replaced with calls to a high-capacity regression model trained on labeled data, which can be leveraged in an in silico search for promising design candidates. However, the design goal necessitates moving into regions of the input space beyond where such models were trained. Therefore, one can ask: should the regression model be altered as the design algorithm explores the input space, in the absence of new data acquisition? Herein, we answer this question in the affirmative.
In particular, we (i) formalize the data-driven design problem as a non-zero-sum game, (ii) leverage this formalism to develop a strategy for retraining the regression model as the design algorithm proceeds---what we refer to as autofocusing the model, and (iii) demonstrate the promise of autofocusing empirically. A full paper detailing our work can be found at: https://arxiv.org/abs/2006.08052.

### Provably Efficient Policy Optimization via Thompson Sampling
Ishfaq, Haque*; Yang, Zhuoran; Lupu, Andrei; Islam, Riashat; Liu, Lewis; Precup, Doina; Wang, Zhaoran

Policy Optimization (PO) methods with function approximation are one of the most popular classes of Reinforcement Learning (RL) algorithms. Despite their popularity, it largely remains a challenge to design provably efficient policy optimization algorithms. In particular, it still remains elusive how to design a provably efficient policy optimization algorithm using a Thompson sampling (Thompson, 1933) based exploration strategy. This paper presents a provably efficient policy optimization algorithm that incorporates exploration using Thompson sampling. We prove that, in an episodic linear MDP setting, our algorithm, Thompson Sampling for Policy Optimization (TSPO), achieves $\tilde{\mathcal{O}}(d^{3/2} H^{2} \sqrt{T})$ worst-case (frequentist) regret, where $H$ is the length of each episode, $T$ is the total number of steps and $d$ is the number of features. Finally, we empirically evaluate TSPO and show that it is competitive with state-of-the-art baselines.

### Robustness Analysis of Deep Learning via Implicit Models
Tsai, Alicia Y.*; El Ghaoui, Laurent

Despite the success of deep neural networks (DNNs), it is well-known that they can fail significantly in the presence of adversarial perturbations. Starting with Szegedy et al., a large number of works have demonstrated that state-of-the-art DNNs are vulnerable to adversarial samples.
The vulnerability of DNNs has motivated the study of building models that are robust to such perturbations. However, many defense strategies are later shown to be ineffective. Although a large number of research works have been devoted to improving the robustness of DNNs and to our understanding of their behaviors, many fundamental questions about their vulnerabilities remain unclear. In this work, we introduce the implicit model and formalize its well-posedness properties theoretically. We analyze the robustness of DNNs via the lens of the implicit model and define its sensitivity matrix, which relates perturbations in inputs to those in outputs. Empirically, we show how the sensitivity matrix can be used to generate adversarial attacks effectively on the MNIST and CIFAR-10 datasets.

### Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration
Dai, Hanjun*; Singh, Rishabh; Dai, Bo; Sutton, Charles; Schuurmans, Dale

Discrete structures play an important role in applications like program language modeling and software engineering. Current approaches to predicting complex structures typically consider autoregressive models for their tractability, with some sacrifice in flexibility. Energy-based models (EBMs) on the other hand offer a more flexible and thus more powerful approach to modeling such distributions, but require partition function estimation. In this paper we propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data, where parameter gradients are estimated using a learned sampler that mimics local search. We show that the energy function and sampler can be trained efficiently via a new variational form of power iteration, achieving a better trade-off between flexibility and tractability. Experimentally, we show that learning local search leads to significant improvements in challenging application domains.
Most notably, we present an energy model guided fuzzer for software testing that achieves comparable performance to well-engineered fuzzing engines like libFuzzer.

### Distributed Sketching Methods for Privacy Preserving Regression
Bartan, Burak*; Pilanci, Mert

In this work, we study distributed sketching methods for large scale regression problems. We leverage multiple randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems. We derive novel approximation guarantees for classical sketching methods and analyze the accuracy of parameter averaging for distributed sketches. We consider random matrices including Gaussian, randomized Hadamard, uniform sampling and leverage score sampling in the distributed setting. Moreover, we propose a hybrid approach combining sampling and fast random projections for better computational efficiency. We illustrate the performance of distributed sketches in a serverless computing platform with large scale experiments.

### Exact posteriors of wide Bayesian neural networks
Hron, Jiri*; Bahri, Yasaman; Novak, Roman; Pennington, Jeffrey; Sohl-Dickstein, Jascha

Recent work has shown that the prior over functions induced by a deep Bayesian neural network (BNN) behaves as a Gaussian process (GP) if the width of all layers is large. However, many BNN applications are concerned with the BNN function space posterior. While some empirical evidence of the posterior convergence was provided in the original works of Neal (1996) and Matthews et al. (2018), it is limited to small datasets or architectures due to the notorious difficulty of obtaining and verifying exactness of BNN posterior approximations. We provide the missing proof that the exact BNN posterior converges (weakly) to the one induced by the GP limit of the prior. For empirical validation, we generate samples from the exact finite BNN posterior on a small dataset via rejection sampling.
### Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Leavitt, Matthew L*; Morcos, Ari S

The properties of individual neurons are often analyzed in order to understand the biological and artificial neural networks in which they're embedded. Class selectivity—typically defined as how different a neuron's responses are across different classes of stimuli or data samples—is commonly used for this purpose. However, it remains an open question whether it is necessary and/or sufficient for deep neural networks (DNNs) to learn class selectivity in individual units. We investigated the causal impact of class selectivity on network function by directly regularizing for or against class selectivity. Using this regularizer to reduce class selectivity across units in convolutional neural networks increased test accuracy by over 2% in ResNet18 trained on Tiny ImageNet. In ResNet20 trained on CIFAR10 we could reduce class selectivity by a factor of 2.5 with no impact on test accuracy, and reduce it nearly to zero with only a small (~2%) drop in test accuracy. In contrast, regularizing to increase class selectivity had rapid and disastrous effects on test accuracy across all models and datasets. These results indicate that class selectivity in individual units is neither sufficient nor strictly necessary, and can even impair DNN performance. They also encourage caution when focusing on the properties of single units as representative of the mechanisms by which DNNs function.

### GANs for Continuous Path Keyboard Input Modeling
Mehra, Akash*; Bellegarda, Jerome; Bapat, Ojas; Lal, Partha; Wang, Xin

Continuous path keyboard input has higher inherent ambiguity than standard tapping, because the path trace may exhibit not only local overshoots/undershoots (as in tapping) but also, depending on the user, substantial mid-path excursions.
Deploying a robust solution thus requires a large amount of high-quality training data, which is difficult to collect/annotate. In this work, we address this challenge by using GANs to augment our training corpus with user-realistic synthetic data. Experiments show that, even though GAN-generated data does not capture all the characteristics of real user data, it still provides a substantial boost in accuracy at a 5:1 GAN-to-real ratio. GANs therefore inject more robustness in the model through greatly increased word coverage and path diversity.

### Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning
Khadka, Shauharda; Guez Aflalo, Estelle; Marder, Mattias; Ben-David, Avrech; Miret, Santiago; Tang, Hanlin; Mannor, Shie; Hazan, Tamir; Majumdar, Somdeb*

As modern neural networks have grown to billions of parameters, meeting tight latency budgets has become increasingly challenging. Solutions like compression and pruning modify the underlying network. We present Evolutionary Graph RL (EGRL), a complementary approach that optimizes how tensors are mapped to on-chip memory while keeping the network untouched. Since different memory components trade off capacity for bandwidth differently, a sub-optimal mapping can result in high latency. We train and validate EGRL directly on the Intel NNP-I chip for inference. EGRL outperforms policy-gradient, evolutionary search and dynamic programming baselines on ResNet-50, ResNet-101 and BERT, achieving 28-78% speed-up over the native compiler.

### Interpretable Planning-Aware Representations for Multi-Agent Trajectory Forecasting
Ivanovic, Boris*; Elhafsi, Amine; Rosman, Guy; Gaidon, Adrien; Pavone, Marco

Reasoning about human motion is an important prerequisite to safe and socially-aware robotic navigation. As a result, multi-agent behavior prediction has become a core component of modern human-robot interactive systems, such as self-driving cars.
In particular, one of the main uses of behavior prediction in autonomous systems is to inform ego-robot motion planning and control. A unifying theme among most human motion prediction approaches is that they produce trajectories (or distributions thereof) for each agent in a scene; an intuitive output representation that matches common evaluation metrics. However, a majority of planning and control algorithms reason about system dynamics rather than future agent tracklets, which can hinder their integration. Towards this end, we investigate Mixtures of Linear Time-Varying Systems as an output representation for trajectory forecasting that is more amenable to downstream planning and control use. Our approach leverages successful ideas from prior probabilistic trajectory forecasting works to learn dynamical system representations that are well-studied in the planning and control literature. We consider an intuitive two-agent interaction scenario to illustrate how our method works and motivate further evaluation on large-scale autonomous driving datasets as well as real-world hardware. Architecture Compression Ashok, Anubhav* In this paper we propose a novel approach to model compression termed Architecture Compression. Instead of operating on the weight or filter space of the network like classical model compression methods, our approach operates on the architecture space. A 1-D CNN encoder-decoder is trained to learn a mapping from discrete architecture space to a continuous embedding and back. Additionally, this embedding is jointly trained to regress accuracy and parameter count in order to incorporate information about the architecture's effectiveness on the dataset. During the compression phase, we first encode the network and then perform gradient descent in continuous space to optimize a compression objective function that maximizes accuracy and minimizes parameter count. 
The final continuous feature is then mapped to a discrete architecture using the decoder. We demonstrate the merits of this approach on visual recognition datasets such as CIFAR-10, CIFAR-100, Fashion-MNIST and SVHN and achieve a greater than 20x compression on CIFAR-10. Simultaneous Learning of the Inputs and Parameters in Neural Collaborative Filtering Raziperchikolaei, Ramin*; Li, Tianyu; Chung, Young Joo User and item representations have a significant impact on the prediction performance of neural network-based collaborative filtering systems. Previous works fix the input to the user/item interaction vectors and/or IDs and train neural networks to learn the representations. We argue that this strategy adversely affects the quality of the representations since the similarities in the users’ tastes might not be reflected in the input space. We show that there is an implicit embedding matrix in the first fully connected layer which takes the user/item interaction vectors as the input. The role of the non-zero elements of the input vectors is to choose and combine a subset of the embedding vectors. To learn better representations, instead of fixing the input and only relying on neural network structure, we propose to learn the value of the non-zero elements of the input jointly with the neural network parameters. Our experiments on two MovieLens datasets and two real-world datasets show that our method outperforms the state-of-the-art methods. Neural Representations in Hybrid Recommender Systems: Prediction versus Regularization Raziperchikolaei, Ramin*; Li, Tianyu; Chung, Young Joo Autoencoder-based hybrid recommender systems have become popular recently because of their ability to learn user and item representations by reconstructing various information sources, including users' feedback on items (e.g., ratings) and side information of users and items (e.g., users' occupation and items' title).
However, existing systems still use representations learned by matrix factorization (MF) to predict the rating, while using representations learned by neural networks as the regularizer. In our work, we define the neural representation for prediction (NRP) framework and apply it to the autoencoder-based recommendation systems. We theoretically analyze how our objective function is related to the previous MF and autoencoder-based methods and explain what it means to use neural representations as the regularizer. We also apply the NRP framework to a direct neural network structure which predicts the ratings without reconstructing the user and item information. Our experimental results confirm that neural representations are better for prediction than regularization and show that the NRP framework, combined with the direct neural network structure, outperforms the state-of-the-art methods in the prediction task. Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics Ramasesh, Vinay V*; Dyer, Ethan; Raghu, Maithra Catastrophic forgetting is a central obstacle to continual learning. Many methods have been proposed to overcome this problem, but fully mitigating forgetting is likely hindered by a lack of understanding of the phenomenon’s fundamental properties. For example, how does catastrophic forgetting affect the hidden representations of neural networks? Are there underlying principles common to methods that mitigate forgetting? How is catastrophic forgetting affected by (semantic) similarities between sequential tasks? And what are good benchmark tasks that capture the essence of how catastrophic forgetting naturally arises in practice? This paper begins to provide answers to these and other questions. Meta-Learning Requires Meta-Augmentation Rajendran, Janarthanan*; Irpan, Alex; Jang, Eric In several areas of machine learning, data augmentation is critical to achieving state-of-the-art generalization performance.
Examples include computer vision, speech recognition, and natural language processing. It is natural to suspect that data augmentation can play an equally important role in helping meta-learners. In this work, we present a unified framework for meta-data augmentation and an information theoretic view on how it prevents overfitting. Under this framework, we interpret existing augmentation strategies and propose modifications to handle overfitting. We show the importance of meta-augmentation on current benchmarks and meta-learning algorithms and demonstrate that meta-augmentation produces large complementary benefits to recently proposed meta-regularization techniques. Automated Utterance Generation Parikh, Soham*; Tiwari, Mitul; Vohra, Quaizar Conversational AI assistants are becoming popular and question-answering is an important part of any conversational assistant. Using relevant utterances as features in question-answering has been shown to improve both the precision and recall for retrieving the right answer by a conversational assistant. Hence, utterance generation has become an important problem with the goal of generating relevant utterances (sentences or phrases) from a knowledge base article that consists of a title and a description. However, generating good utterances usually requires a lot of manual effort, creating the need for automated utterance generation. In this paper, we propose an utterance generation system which 1) uses extractive summarization to extract important sentences from the description, 2) uses multiple paraphrasing techniques to generate a diverse set of paraphrases of the title and summary sentences, and 3) selects good candidate paraphrases with the help of a novel candidate selection algorithm. Uncovering Task Clusters in Multi-Task Reinforcement Learning Kumar, Varun; Rakelly, Kate; Majumdar, Somdeb* Multi-task learning refers to the approach of learning several distinct tasks using a shared representation.
Such a strategy can be beneficial if the tasks share common structure: in this case, training a model on each individual task would be unnecessarily inefficient, as it would involve learning the common structure repeatedly. By contrast, a shared representation only needs to learn the structure a single time, following which it can be transferred to other tasks. This approach, when paired with deep neural networks, has proven effective in domains such as computer vision and natural language processing. Results in reinforcement learning, however, have been mixed, with multi-task reinforcement learning sometimes proving to be less sample efficient than independent single-task models. We investigate multi-task reinforcement learning in a recently published benchmark, Meta-World MT10. We suggest a method to reduce conflicts in multi-task reinforcement learning by dividing the task space into clusters of related tasks, and show that this method results in improved performance compared to prior work. ECLIPSE: An Extreme-Scale Linear Program Solver for Web-Applications Basu, Kinjal; Ghoting, Amol; Pan, Yao*; Keerthi, S. Sathiya; Mazumder, Rahul Web applications (involving many millions of users and items) based on machine learning often involve global constraints (e.g., budget limits of advertisers) that need to be satisfied during deployment (inference). This problem can usually be formulated as a Linear Program (LP) involving billions to trillions of decision variables and constraints. Despite the appeal of an LP formulation, solving problems at such scales is well beyond the capabilities of existing LP solvers. Often, ad-hoc decomposition rules are used to approximately solve these LPs, which have limited optimality guarantees and lead to sub-optimal performance in practice. In this work, we propose a distributed solver that solves the LP problems at scale. We propose a gradient-based algorithm on the smoothed dual of the LP with computational guarantees.
The main workhorses of our algorithm are distributed matrix-vector multiplications (with load balancing) and efficient projection operations on distributed machines. Experiments on real-world data show that our proposed LP solver, ECLIPSE, can even solve problems with 10^12 decision variables within a few hours -- well beyond the capabilities of current generic LP solvers. Deep Ensembles: a loss landscape perspective Hu, Huiyi*; Fort, Stanislav; Lakshminarayanan, Balaji Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable variational Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode in prediction space, even though they often deviate significantly in weight space.
Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods. Finally, we evaluate the relative effects of ensembling, subspace based methods and ensembles of subspace based methods, and the experimental results validate our hypothesis. CoCon: Cooperative-Contrastive Learning Rai, Nishant*; Adeli, Ehsan; Lee, Kuan-Hui; Gaidon, Adrien; Niebles, Juan Carlos Labeling videos at scale is impractical. Consequently, self-supervised visual representation learning is key for efficient video analysis. Recent success in learning image representations suggests that contrastive learning is a promising framework to tackle this challenge. However, when applied to real-world videos, contrastive learning may unknowingly lead to separation of instances that contain semantically similar events. In our work, we introduce a cooperative variant of contrastive learning to address this issue. We use data-driven sampling to leverage implicit relationships between multiple input video views, whether observed (e.g. RGB) or inferred (e.g. flow, segmentation masks, poses). We experimentally evaluate our representations on the downstream task of action recognition. Our method sets a new state of the art on standard benchmarks (UCF101, HMDB51, Kinetics400). Furthermore, qualitative experiments illustrate that our models can capture higher-order class relationships. We will release code and models. Curriculum and Decentralized Learning in Google Research Football Domitrz, Witalis; Mikula, Maciej; Opała, Zuzanna*; Pacek, Mikołaj; Rychlicki, Mateusz; Sieniawski, Mateusz; Staniszewski, Konrad; Michalewski, Henryk; Miłoś, Piotr; Osiński, Błażej B We make a study of various curricula in the game of football (aka soccer). We aim to understand various methodological and architectural choices.
Concentrating on football has the advantage of interpretability, but we believe that observations with regard to decentralized learning are applicable more generally. Learning Mixed-Integer Convex Optimization Strategies for Robot Planning and Control Cauligi, Abhishek*; Culbertson, Preston; Stellato, Bartolomeo; Schwager, Mac; Pavone, Marco Mixed-integer convex programming (MICP) has seen significant algorithmic and hardware improvements with several orders of magnitude solve time speedups compared to 25 years ago. Despite these advances, MICP has rarely been applied to real-world robotic control because the solution times are still too slow for online applications. In this work, we extend the machine learning optimizer (MLOPT) framework to solve MICPs arising in robotics at very high speed. MLOPT encodes the combinatorial part of the optimal solution into a strategy. Using data collected from offline problem solutions, we train a multiclass classifier to predict the optimal strategy given problem-specific parameters such as states or obstacles. Compared to existing approaches, we use task-specific strategies and prune redundant ones to significantly reduce the number of classes the predictor has to select from, thereby greatly improving scalability. Given the predicted strategy, the control task becomes a small convex optimization problem that we can solve in milliseconds. Numerical experiments on a free-flying space robot and task-oriented grasps show that our method provides not only 1 to 2 orders of magnitude speedups compared to state-of-the-art solvers but also performance close to the globally optimal MICP solution. Entity Skeletons as Intermediate Representations for Visual Storytelling Chandu, Khyathi Raghavi* We are enveloped by stories of visual interpretations in our everyday lives. Story narration often comprises two stages: forming a central mind map of entities and weaving a story around them.
In this paper, we address these two stages of introducing the right entities at seemingly reasonable junctures and also referring to them coherently in the context of visual storytelling. The building blocks of this, also known as the entity skeleton, are entity chains including nominal and coreference expressions. We establish a strong baseline for skeleton informed generation and propose a glocal hierarchical attention model that attends to the skeleton both at the sentence (local) and the story (global) levels. We observe that our proposed models outperform the baseline in terms of the automatic evaluation metric METEOR. We also conduct a human evaluation, which shows that visual stories generated by our model are preferred 82% of the time. Exact Polynomial-time Convex Optimization Formulations for Two-Layer ReLU Networks Pilanci, Mert; Ergen, Tolga* We develop exact representations of two-layer neural networks with rectified linear units in terms of a single convex program with number of variables polynomial in the number of training samples and number of hidden neurons. Active Online Domain Adaptation Chen, Yining*; Luo, Haipeng; Ma, Tengyu; Zhang, Chicheng Online machine learning systems need to adapt to domain shifts. Meanwhile, acquiring labels at every timestep is expensive. We propose a surprisingly simple algorithm that adaptively balances its regret and its number of label queries in settings where the data streams are from a mixture of hidden domains. For online linear regression with oblivious adversaries, we provide a tight tradeoff that depends on the durations and dimensionalities of the hidden domains. Our algorithm can adaptively deal with interleaving spans of inputs from different domains. We also generalize our results to non-linear regression for hypothesis classes with bounded eluder dimension and adaptive adversaries.
Experiments on synthetic and realistic datasets demonstrate that our algorithm achieves lower regret than uniform queries and greedy queries with equal labeling budget. DisARM: An Antithetic Gradient Estimator for Binary Latent Variables Dong, Zhe*; Mnih, Andriy; Tucker, George Training models with discrete latent variables is challenging due to the difficulty of estimating the gradients accurately. Much of the recent progress has been achieved by taking advantage of continuous relaxations of the system, which are not always available or even possible. The Augment-REINFORCE-Merge (ARM) estimator (Yin and Zhou, 2019) provides an alternative that, instead of relaxation, uses continuous augmentation. Applying antithetic sampling over the augmenting variables yields a relatively low-variance and unbiased estimator applicable to any model with binary latent variables. However, while antithetic sampling reduces variance, the augmentation process increases variance. We show that ARM can be improved by analytically integrating out the randomness introduced by the augmentation process, guaranteeing substantial variance reduction. Our estimator, \emph{DisARM}, is simple to implement and has the same computational cost as ARM. We evaluate DisARM on several generative modeling benchmarks and show that it consistently outperforms ARM and a strong independent sample baseline in terms of both variance and log-likelihood. Boosted Sparse Oblique Decision Trees Gabidolla, Magzhan*; Zharmagambetov, Arman S; Carreira-Perpinan, Miguel A Boosted decision trees are widely used machine learning algorithms, achieving state-of-the-art performance in many domains with little effort on hyperparameter tuning. Though much work on boosting has focused on the theoretical properties and empirical variations, there has been little progress on the tree learning procedure itself. 
To this day, boosting algorithms employ regular axis-aligned trees as base learners optimized by CART-style greedy top-down induction. These trees are known to be highly suboptimal due to their greedy nature, and they are not well-suited to model the correlation of features due to their axis-aligned partition. In fact, these suboptimality characteristics are commonly believed to be beneficial because of the weak learning criterion in boosting. In this work we consider boosting better optimized sparse oblique decision trees trained with the recently proposed Tree Alternating Optimization (TAO). TAO generally finds much better approximate optima than CART-type algorithms due to the ability to monotonically decrease a desired objective function over a decision tree. Our extensive experimental results demonstrate that boosted sparse oblique TAO trees improve upon CART trees by a large margin, and achieve better test error than other popular tree ensembles such as gradient boosting (XGBoost) and random forests. Moreover, the resulting TAO ensembles require a far smaller number of trees. A flexible, extensible software framework for model compression based on the LC algorithm Idelbayev, Yerlan*; Carreira-Perpinan, Miguel A We propose a software framework based on the ideas of the Learning-Compression (LC) algorithm that allows a user to compress a neural network or other machine learning model using different compression schemes with minimal effort. Currently, the supported compressions include pruning, quantization, low-rank methods (including automatically learning the layer ranks), and combinations of those, and the user can choose different compression types for different parts of a neural network.
The library is written in Python and PyTorch and available online at https://github.com/UCMerced-ML/LC-model-compression. Safety Aware Reinforcement Learning (SARL) Miret, Santiago*; Wainwright, Carroll; Majumdar, Somdeb As reinforcement learning agents become more and more integrated into complex, real-world environments, designing for safety becomes more and more important. We specifically focus on scenarios where the agent can cause undesired side effects that may be linked with performing the primary task. The interdependence of side effects with the primary task makes it difficult to define hard constraints for the agent without sacrificing task performance. In order to address this challenge, we propose a novel virtual agent embedded co-training framework (SARL). SARL includes a primary reward-based actor and a virtual agent that assesses side effect impacts and influences the behavior of the reward-based actor via loss regularization. The actor loss is regularized with a proper distance metric measuring the difference in action probabilities of both agents. As such, in addition to optimizing for the task objective, the actor also aims to minimize the disagreement between itself and the safety agent. We apply SARL to tasks and environments in the SafeLife suite, which can generate complex tasks in dynamic environments, and construct performance vs side-effect Pareto fronts. Preliminary results indicate that SARL is competitive with a reward-based penalty method, which punishes side effects directly on the reward function, while also providing zero-shot generalization of the safety agent across different environments. This zero-shot generalization suggests that through SARL we can obtain a more flexible notion of side effects that is useful for a variety of settings.
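The SARL loss regularization described above can be sketched concretely. This is an illustrative reconstruction, not the authors' implementation: the function names, the choice of KL divergence as the distance metric over action probabilities, and the `beta` weight are all assumptions.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete action distributions."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def sarl_actor_loss(task_loss, actor_probs, safety_probs, beta=0.1):
    """Task objective plus a disagreement penalty between the reward-based
    actor and the virtual safety agent (hypothetical loss shape)."""
    return task_loss + beta * kl(actor_probs, safety_probs)
```

When the actor's and safety agent's action distributions agree, the penalty vanishes and only the task loss remains; as they diverge, the regularizer pulls the actor back toward the safety agent's behavior.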
Hamming Space Locality Preserving Neural Hashing for Similarity Search Idelson, Daphna* We propose a novel method for learning to map a large-scale dataset in the feature representation space to binary hash codes in the Hamming space, for fast and efficient approximate nearest-neighbor similarity search. Our method is composed of a simple neural network and a novel training scheme that aims to preserve the locality relations between the original data points. We achieve distance preservation from the original cosine space to the new Hamming space by introducing a loss function that translates the relational similarities in both spaces to probability distributions - and optimizes the KL divergence between them. We also introduce a simple data sampling method by representing the database with randomly generated proxies, used as reference points to query points from the training set. Experimenting with three publicly available standard ANN benchmarks, we demonstrate significant improvement over other binary hashing methods, achieving an improvement of 7% to 17%. As opposed to other methods, we show high performance in both low (64 bits) and high (768 bits) dimensional bit representation, offering increased accuracy when resources are available and flexibility in choice of ANN strategy. What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation Feldman, Vitaly*; Zhang, Chiyuan Deep learning algorithms are well-known to have a propensity for fitting the training data very well and often fit even outliers and mislabeled data points. Such fitting requires memorization of training data labels, a phenomenon that has attracted significant research interest but has not been given a compelling explanation so far. A recent work of Feldman (2020) proposes a theoretical explanation for this phenomenon based on a combination of two insights.
First, natural image and data distributions are (informally) known to be long-tailed, that is, to have a significant fraction of rare and atypical examples. Second, in a simple theoretical model, such memorization is necessary for achieving close-to-optimal generalization error when the data distribution is long-tailed. However, no direct empirical evidence for this explanation or even an approach for obtaining such evidence was given. In this work we design experiments to test the key ideas in this theory. The experiments require estimation of the influence of each training example on the accuracy at each test example as well as memorization values of training examples. Estimating these quantities directly is computationally prohibitive but we show that closely-related subsampled influence and memorization values can be estimated much more efficiently. Our experiments demonstrate the significant benefits of memorization for generalization on several standard benchmarks. They also provide quantitative and visually compelling evidence for the theory put forth in (Feldman 2020). Can Neural Networks Learn Non-Verbal Reasoning? Zhang, Chiyuan*; Raghu, Maithra; Bengio, Samy Neural networks have demonstrated excellent capabilities in learning generalizable pattern-matching --- the ability to identify simple properties of the training data and utilize these properties to correctly process unseen (test) instances. These results raise fundamental questions on the reasoning capabilities of neural networks and how they generalize. Can neural networks learn more sophisticated reasoning? Are there insights on how they generalize in pattern matching and sophisticated reasoning settings? In this paper, we introduce a visual reasoning task to help investigate these questions. Learning to reason by learning on rationales Piękos, Piotr*; Michalewski, Henryk; Malinowski, Mateusz For centuries humans have been codifying observed natural or social phenomena in some abstract language.
Such a language, which we call mathematics, is at the core of not only modern science but also everyday activity. In this work, we look into the basic algebraic formulations used to solve some real concrete problems, like "how much money do I need to spend to buy 2 apples, knowing each costs 2 pounds?". We teach students to solve such math word problems very early and universally in our education system. The teacher asks a question about some real problem and expects not only answers but also an understanding of the rationale behind them: consecutive precise steps that lead to the answer. In this work we are motivated by the same learning process and incorporate rationales during training of a language model. We also show that through learning to understand the order of steps in rationales, we can improve the overall performance of our model. Modality-Agnostic Attention Fusion for visual search with text feedback Dodds, Eric M*; Culpepper, Jack; Herdade, Simao; Zhang, Yang; Boakye, Kofi A Image retrieval with natural language feedback offers the promise of catalog search based on fine-grained visual features that go beyond objects and binary attributes, facilitating real-world applications such as e-commerce. Our Modality-Agnostic Attention Fusion (MAAF) model combines image and text features and outperforms existing approaches on two visual search with modifying phrase datasets, Fashion IQ and CSS. We also introduce two new challenging benchmarks adapted from Birds-to-Words and Spot-the-Diff, which provide new settings with rich language inputs, and we show that our approach without modification outperforms strong baselines. To better understand our model, we conduct detailed ablations on Fashion IQ and provide visualizations of the surprising phenomenon of words "avoiding attending" to the image region they refer to.
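The modality-agnostic fusion idea can be illustrated with a minimal sketch: image-region features and word-token features are treated as one joint token sequence and mixed by a single shared attention pass. This is a toy reconstruction, not MAAF's actual architecture; the learned projections, multi-head structure, and pooling are omitted, and the name `fuse` is hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(image_tokens, text_tokens):
    """Concatenate image regions and words into one sequence and apply a
    single scaled dot-product self-attention pass (projections omitted)."""
    x = np.concatenate([image_tokens, text_tokens], axis=0)  # (n_img + n_txt, d)
    attn = softmax(x @ x.T / np.sqrt(x.shape[1]))            # (n, n) mixing weights
    return attn @ x                                          # fused token features
```

Because both modalities live in the same sequence, every attention weight is computed the same way regardless of whether a token came from the image or the text, which is the "modality-agnostic" aspect.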
MUFASA: Multimodal Fusion Architecture Search for Electronic Health Records Xu, Zhen*; So, David; Dai, Andrew M Deep learning models trained on electronic health records (EHR) have demonstrated their potential to improve healthcare quality in a variety of areas, such as predicting diagnoses, reducing healthcare costs and personalizing medicine. However, most model architectures that are commonly employed were originally developed for academic unimodal machine learning datasets, such as ImageNet or WMT. In contrast, EHR data is multimodal, containing sparse and irregular longitudinal features with a mix of structured and unstructured data. Such complex data often requires specific modeling for each modality and a good strategy to fuse different representations to reach peak performance. To address this, we propose MUltimodal Fusion Architecture SeArch (MUFASA), the first multimodal Neural Architecture Search (NAS) method for EHR data. Specifically, we reformulate the NAS objective to simultaneously search for several architectures, jointly optimizing multimodal fusion strategies and per-modality model architectures together. We demonstrate empirically that our MUFASA method outperforms established unimodal evolutionary NAS on Medical Information Mart for Intensive Care (MIMIC-III) EHR data with comparable computation costs. What’s more, our experimental results show that MUFASA produces models that outperform the Transformer, and its NAS variant, the Evolved Transformer, on public EHR data. Compared with these baselines on MIMIC CCS diagnosis code prediction, our discovered models improve top-5 recall from 0.88 to 0.91, and demonstrate the ability to generalize to other EHR tasks. Studying our top architecture in depth, we provide empirical evidence that MUFASA’s improvements are derived from its ability to optimize custom modeling for varying input modalities and find effective fusion strategies. 
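MUFASA's key reformulation, searching the fusion strategy and the per-modality architectures jointly rather than separately, can be illustrated with a deliberately tiny stand-in. MUFASA itself uses evolutionary NAS; the random search, the operation names, and the fitness function below are hypothetical simplifications.

```python
import random

# toy search spaces: one fusion strategy for the model, one cell per modality
FUSION_OPS = ["early_concat", "late_sum", "gated"]
CELL_OPS = ["conv", "lstm", "attention"]

def sample_candidate(modalities, rng):
    """Jointly sample a fusion strategy and a cell type for each modality."""
    return {
        "fusion": rng.choice(FUSION_OPS),
        "cells": {m: rng.choice(CELL_OPS) for m in modalities},
    }

def random_search(fitness, modalities, n=60, seed=0):
    """Keep the best jointly-sampled candidate under a user-supplied fitness."""
    rng = random.Random(seed)
    best, best_fit = None, float("-inf")
    for _ in range(n):
        cand = sample_candidate(modalities, rng)
        f = fitness(cand)
        if f > best_fit:
            best, best_fit = cand, f
    return best
```

The point of the joint sampling is that a fusion strategy is never scored in isolation: each evaluation sees a complete (fusion, per-modality cells) configuration, so interactions between the two are reflected in the fitness.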
VirAAL: Virtual Adversarial Active Learning Senay, Gregory*; Youbi Idrissi, Badr; Marine Haziza This paper presents VirAAL, an Active Learning framework based on Virtual Adversarial Training (VAT), a semi-supervised approach that regularizes the model through Local Distributional Smoothness (LDS). VirAAL aims to reduce the effort of annotation in Natural Language Understanding (NLU). Adversarial perturbations are added to the inputs, making the posterior distribution more consistent. Therefore, entropy-based Active Learning (AL) becomes robust by querying more informative samples without requiring additional components. VirAAL is an inexpensive method in terms of AL computation with a positive impact on data sampling. Furthermore, VirAAL decreases annotations in AL by up to 80%. Beyond Supervision for Monocular Depth Estimation Guizilini, Vitor*; Ambruș, Rareș A; Li, Jie; Pillai, Sudeep; Gaidon, Adrien Self-supervised learning enables training predictive models on arbitrarily large amounts of unlabeled data. One of the most successful examples of self-supervised learning is monocular depth estimation, which relies on strong geometric priors to learn from raw monocular image sequences in a structure-from-motion setting. In this work, we present recent breakthroughs in self-supervised monocular depth estimation that establish a new state of the art on standard benchmarks, reaching parity with fully supervised methods. Our contributions center on a novel neural network architecture, PackNet, that is specifically designed for large-scale self-supervised learning on high resolution videos. We also discuss semi-supervised training extensions that can effectively combine the self-supervised objective with partial supervision, whether from very sparse Lidar scans, velocity information, or pretrained segmentation models, while keeping inference monocular. Finally, we introduce a new, diverse, and challenging benchmark: Dense Depth for Automated Driving (DDAD).
DDAD contains diverse scenes collected using a fleet of autonomous vehicles across the US and Japan. Thanks to long-range Lidar sensors, we expand standard metrics to include (a) evaluation at longer ranges of up to 200m, to properly measure how performance degrades with distance; and (b) fine-grained labels in the validation and test frames that enable per-category and per-instance metrics, thus overcoming the current limitation of uniform per-pixel depth evaluations.

Synthetic Health Data for Fostering Reproducibility of Private Research Studies
Bhanot, Karan*; Dash, Saloni; Yale, Andrew; Guyon, Isabelle; Erickson, John; Bennett, Kristin
The inability to share private health data can severely stifle research and has led to the reproducibility crisis in biomedical research. Recent synthetic data generation methods provide an attractive alternative for making data available for research and education purposes without violating privacy. In this paper, we discuss our novel HealthGAN model that produces high quality synthetic health data and demonstrate its effectiveness by reproducing research studies. To preserve privacy, HealthGAN synthetic data can be released when research papers are published. Approaches can be developed on synthetic data and then evaluated on real data inside secure environments, enabling novel method generation.

A Synthetic Data Petri Dish for Studying Mode Collapse in GANs
Mangalam, Karttikeya*; Garg, Rohin
In this extended abstract, we present a simple yet powerful data generation procedure for studying mode collapse in GANs. We describe a computationally efficient way to obtain visualizable high dimensional data using normalizing flows. We also train GANs (Table 1) on different proposed dataset Levels and find mode collapse to occur even in the most robust GAN formulations.
We also use the inversion quality of our proposed transformation to visualize both the high dimensional generated samples in a 2D space and the learnt discriminator's distribution as a heatmap. Such 2D visualizations are ill-defined with other dimensionality reduction methods such as PCA or t-SNE when applied to natural images, since those methods suffer from approximations, depend strongly on visualization hyperparameters, and are computationally expensive. We believe our proposed procedure will serve as a petri dish for studying mode collapse in controlled settings and for better understanding the failure modes of proposed robust formulations, thereby propelling research in generative algorithms further.

Whitening and second order optimization both destroy information about the dataset, and can make generalization impossible
Wadia, Neha*; Duckworth, Daniel; Schoenholz, Samuel S; Dyer, Ethan; Sohl-Dickstein, Jascha
We argue that both data whitening and second order optimization can harm or entirely prevent generalization. In general, model training can harness information contained in the sample-sample second moment matrix of the dataset. We show that for models with a fully connected first layer, the information contained in this matrix is the only information which can be used to generalize. Models trained using whitened data, or with certain second order optimization schemes, have less access to this information; in the high dimensional regime, the training procedure has no access to this information, producing models that either generalize poorly or not at all. We experimentally verify the predicted harmful effects of data whitening and second order optimization on generalization. We further show experimentally that generalization continues to be harmed even when theoretical requirements are relaxed.
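The role of second-moment information is easy to see numerically. The sketch below (my own illustration, not the authors' code) ZCA-whitens some correlated data and checks that its second-moment matrix becomes exactly the identity; the structure the paper says training can exploit is gone after whitening.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated features
X -= X.mean(axis=0)                                      # center the data

# Second moment matrix and its inverse square root (ZCA whitening transform).
C = X.T @ X / len(X)
vals, vecs = np.linalg.eigh(C)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T

# After whitening, the second moment matrix is the identity:
# any dataset structure it carried is no longer visible to the model.
Xw = X @ W
Cw = Xw.T @ Xw / len(Xw)
```

Here the whitened covariance `Cw` equals the identity up to floating-point error, whereas the original `C` carries the correlation structure.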
A Deep Learning Pipeline for Patient Diagnosis Prediction Using Electronic Health Records
Paudel, Bibek*; Shrestha, Yash Raj; Franz, Leopold H
Augmentation of disease diagnosis and decision-making in health care with machine learning algorithms is gaining much impetus in recent years. In particular, in the current epidemiological situation caused by the COVID-19 pandemic, swift and accurate prediction of disease diagnosis with machine learning algorithms could facilitate identification and care of vulnerable clusters of the population, such as those having multi-morbidity conditions. In order to build a useful disease diagnosis prediction system, advances in both data representation and machine learning architectures are imperative. First, with respect to data collection and representation, we face severe problems due to the multitude of formats and lack of coherency prevalent in Electronic Health Records (EHRs). This hinders the extraction of valuable information contained in EHRs. Currently, no universal global data standard has been established. As a useful solution, we develop and publish a Python package to transform public health datasets into an easy-to-access universal format. This transformation to an international health data format enables researchers to easily combine EHR datasets with clinical datasets of diverse formats. Second, machine learning algorithms that predict multiple disease diagnosis categories simultaneously remain underdeveloped. We propose two novel model architectures in this regard: first, DeepObserver, which uses structured numerical data to predict the diagnosis categories, and second, ClinicalBERT_Multi, which incorporates the rich information available in clinical notes via natural language processing methods and also provides interpretable visualizations to medical practitioners. We show that both models can predict multiple diagnoses simultaneously with high accuracy.
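The multi-label setup shared by both models can be sketched as an independent sigmoid per diagnosis category, so several diagnoses can be flagged for one patient at once. This is a minimal illustration with made-up dimensions, not the DeepObserver or ClinicalBERT_Multi architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: 20 numeric EHR features, 5 diagnosis categories.
W = rng.normal(scale=0.1, size=(20, 5))
b = np.zeros(5)

def predict_diagnoses(x, threshold=0.5):
    """Multi-label prediction: one independent sigmoid per category,
    unlike a softmax, which would force exactly one diagnosis."""
    probs = 1.0 / (1.0 + np.exp(-(x @ W + b)))
    return probs, probs >= threshold

x = rng.normal(size=(20,))
probs, flags = predict_diagnoses(x)
```

Training such a head uses a binary cross-entropy term per category; the design choice to avoid softmax is exactly what lets a multi-morbidity patient receive several positive labels.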
Ads Clickthrough Rate Prediction Models For Multi-Datasource Tasks
Wang, Erzhuo*
Clickthrough rate prediction in online advertisement is a challenging machine learning problem that involves multiple objectives and multiple data sources. For example, at Pinterest we serve both shopping and standard Ads products, where each product has its own unique characteristics of creatives and user behavior patterns. In this work, we address this problem by adopting a multi-task deep neural network model that jointly learns the distinct distributions of the data from various sources simultaneously. To tackle the multi-data-source problem, we propose a shared-bottom, multi-tower model architecture. The multi-tower structure can effectively isolate the interference between the distinct data distributions of different sources, while the shared-bottom layers enable us to learn lower-level common signals. Furthermore, we make use of contextual signals on top of the neural networks to calibrate the predictions, so that confidence in the inferred likelihood is well established. In addition, an automatic machine learning framework is leveraged to handle feature extraction and feature transforms algorithmically, saving the cost of manual feature engineering. The multi-tower model yields better offline evaluation results on both data sources than a single-tower structure, as well as than learning the sources with separate models. The integrated solution realizes a significant CTR gain over vanilla multilayer perceptron neural network models in online A/B testing.

Adversarial Learning for Debiasing Knowledge Base Embeddings
Paudel, Bibek*; Arduini, Mario; Shrestha, Yash Raj; Zhang, Ce; Pirovano, Federico; Noci, Lorenzo
Knowledge Graphs (KG) are gaining increasing attention in both academia and industry. Despite their diverse benefits, recent research has identified social and cultural biases embedded in the representations learned from KGs.
Such biases can have detrimental consequences for different population and minority groups as applications of KGs begin to intersect and interact with social spheres. This paper describes our work-in-progress, which aims at identifying and mitigating such biases in Knowledge Graph (KG) embeddings. We explore gender bias in KG embeddings, and a careful examination of popular KGE algorithms suggests that a sensitive attribute like the gender of a person can be predicted from the embedding. This implies that such biases in popular KGs are captured by the structural properties of the embedding. As a preliminary solution to debiasing KGs, we introduce a novel framework to filter out the sensitive attribute information from the KG embeddings, which we call FAN (Filtering Adversarial Network). We also suggest the applicability of FAN for debiasing other network embeddings, which could be explored in future work.

Meta Attention Networks: Meta Learning Attention to Modulate Information Between Sparsely Interacting Recurrent Modules
Madan, Kanika*; Ke, Nan Rosemary; Goyal, Anirudh; Bengio, Yoshua
Decomposing knowledge into interchangeable pieces promises a generalization advantage when, at some level of representation, the learner is likely to be faced with situations requiring novel combinations of existing pieces of knowledge or computation. We hypothesize that such a decomposition of knowledge is particularly relevant for higher levels of representation, as we see this at work in human cognition and natural language in the form of systematicity, or systematic generalization. To study these ideas, we propose a particular training framework in which we assume that the pieces of knowledge an agent needs, as well as its reward function, are stationary and can be re-used across tasks and changes in distribution.
As the learner is confronted with variations in experiences, the attention mechanism selects which modules should be adapted, and the parameters of those selected modules are adapted fast, while the parameters of the attention mechanisms are updated slowly as meta-parameters. We find that both the meta-learning and the modular aspects of the proposed system greatly help achieve faster learning in reinforcement learning experiments involving navigation in a partially observed gridworld.

Batch Reinforcement Learning Through Continuation Method
Guo, Yijie*; Chen, Minmin; Lee, Honglak; Chi, Ed H.
Many real-world applications of reinforcement learning (RL) require the agent to learn from a fixed set of trajectories, without collecting new interactions. Policy optimization under this setting is extremely challenging due to the distribution shift. In this work, we propose a simple yet effective policy-based approach to batch RL using global optimization methods known as continuation, i.e., by constraining the Kullback-Leibler (KL) divergence between the learned policy and the behavior policy that generates the fixed trajectories, and continuously relaxing the constraint. We theoretically show that policy gradient with KL divergence regularization converges significantly faster than vanilla policy gradient in the tabular setting, even with the exact gradient. We empirically verify that our method benefits not only from the faster convergence, but also from reduced noise in the gradient estimate under the batch RL setting with function approximation. We present results on continuous control tasks and tasks with discrete actions to demonstrate the efficacy of our proposed method.
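The KL-regularized objective at the heart of the continuation idea can be sketched for a single discrete-action state. The function below is an illustrative surrogate of my own (in the paper the constraint is annealed over full trajectories and the policies are state-conditional); it only shows that adding the KL penalty pulls the objective down whenever the learned policy strays from the behavior policy, and that relaxing `lam` toward 0 recovers the plain policy-gradient term.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def regularized_objective(logits, behavior, actions, advantages, lam):
    """Batch-RL surrogate: policy-gradient term minus lam * KL(pi || behavior).
    The continuation method maximizes this repeatedly while relaxing lam."""
    pi = np.exp(logits - logits.max())
    pi = pi / pi.sum()                       # softmax over 3 actions
    pg_term = float(np.mean(np.log(pi[actions]) * advantages))
    return pg_term - lam * kl(pi, behavior)

logits = np.array([2.0, 0.5, -1.0])          # learned policy parameters
behavior = np.array([0.4, 0.4, 0.2])         # policy that logged the batch
actions = np.array([0, 0, 1])                # actions seen in the batch
advantages = np.array([1.0, 0.5, -0.2])

loose = regularized_objective(logits, behavior, actions, advantages, lam=0.0)
tight = regularized_objective(logits, behavior, actions, advantages, lam=1.0)
```

Since the KL term is non-negative, the constrained objective is never larger than the unconstrained one, which is what makes the gradual relaxation a continuation scheme.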
Rotation-Invariant Gait Identification with Quaternion Convolutional Neural Networks
Jing, Bowen*; Prabhu, Vinay Uday; Gu, Angela; Whaley, John
CNN-based accelerometric gait identification systems suffer a catastrophic drop in test accuracy when they encounter new device orientations unobserved during enrollment. In this paper we target this problem by introducing an SO(3)-equivariant quaternion convolutional kernel inside the CNN and disseminate some initial promising results.

Attention-Sampling Graph Convolutional Networks
Lippoldt, Franziska; Lavin, Alexander*
A principal advantage of Graph Convolutional Networks (GCN) lies in the ability to cope with irregular data, which we evaluate in the image domain by inspecting both graph downsampling methods and network accuracy with respect to edge connections. We specifically investigate the effects of distance-based vs feature-attention downsampling, and suggest a method of generalizing pixel-wise attention to the graph setting to better represent distributions and irregularity. Our analysis is especially important for pathological images for carcinoma prediction: due to image size and over-represented cell-graphs, downsampling is naturally required, and simplifying graph assumptions may misrepresent the cellular structures. With principled downsampling within GCN, we find that graph analysis of cells reveals possible stages of carcinoma development.

Energy-based View of Retrosynthesis
Sun, Ruoxi*; Dai, Hanjun; Li, Li; Kearnes, Steven; Dai, Bo
Retrosynthesis---the process of identifying a set of reactants to synthesize a target molecule---is of vital importance to material design and drug discovery. Existing machine learning approaches based on language models and graph neural networks have achieved encouraging results. In this paper, we propose a framework that unifies sequence- and graph-based methods as energy-based models (EBMs) with different energy functions.
This unified perspective provides critical insights about EBM variants through a comprehensive assessment of performance. Additionally, we present a novel "dual" variant within the framework that performs consistent training over Bayesian forward- and backward-prediction by constraining the agreement between the two directions. This model improves state-of-the-art performance by 9.6% for template-free approaches where the reaction type is unknown.

Neural Interventional GRU-ODEs
Zhou, Helen*; Xue, Yuan; Dai, Andrew M
Data is often generated as a continually accumulating byproduct of existing systems. This data can be observational, interventional, or a mixture of the two. In hospitals, for example, diagnostic measurements may be taken as needed, and treatments may be administered and recorded over a finite period of time. Leveraging recent advances in continuous-time modeling, we propose Neural Interventional GRU-ODEs (NIGO) to model passive observations alongside active interventions which happen at irregular timepoints. In this model, observations provide information about the underlying state of the system, whereas interventions drive changes in the underlying state. Our model seeks to capture the influence of interventions on the latent state, while also learning the dynamics of the system. Experiments are done on a simulated pendulum dataset with gravity interventions.

See, Hear, Explore: Curiosity via Audio-Visual Association
Dean, Victoria*; Tulsiani, Shubham; Gupta, Abhinav
Exploration is one of the core challenges in reinforcement learning. A common formulation of curiosity-driven exploration uses the difference between the real future and the future predicted by a learned model. However, predicting the future is an inherently difficult task which can be ill-posed in the face of stochasticity. In this paper, we introduce an alternative form of curiosity that rewards novel associations between different senses.
Our approach exploits multiple modalities to provide a stronger signal for more efficient exploration. Our method is inspired by the fact that, for humans, both sight and sound play a critical role in exploration. We present results on Habitat (a photorealistic navigation simulator), showing the benefits of using an audio-visual association model for intrinsically guiding learning agents in the absence of external rewards.

TSGLR: an Adaptive Thompson Sampling for the Switching Multi-Armed Bandit Problem
Alami, Reda*; Azizi, Oussama
The stochastic multi-armed bandit problem has been widely studied under the stationarity assumption. However, in real-world problems and industrial applications, this assumption is often unrealistic because the distributions of rewards may change over time. In this paper, we consider the piecewise-i.i.d. non-stationary stochastic multi-armed bandit problem with unknown change-points, and we focus on the change-of-mean setup. To solve the latter, we propose a Thompson Sampling strategy equipped with a change-point detector based on a well-tuned non-parametric Generalized Likelihood Ratio test (GLR). We call the resulting strategy Thompson Sampling-GLR (TSGLR). Analytically, in the context of regret minimization for the global switching setting, our proposal achieves a $\mathcal{O}\left( K_T\log T\right)$ regret upper-bound, where $K_T$ is the overall number of change-points up to the horizon $T$. This contradicts the lower bound in $\Omega(\sqrt{T})$. This result mainly comes from the order-optimal detection delay of the GLR test for sub-Gaussian distributions and its well-controlled false alarm rate. Experimentally, we demonstrate that TSGLR outperforms state-of-the-art non-stationary stochastic bandits on synthetic Bernoulli rewards as well as on the Yahoo! User Click Log Dataset.
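The overall loop (Thompson Sampling plus a global restart when a change is flagged) can be sketched for Bernoulli rewards. The GLR test itself is the involved part and is replaced here by a plug-in `changed` flag, so this is an illustration of the restart mechanism only, not the authors' detector.

```python
import numpy as np

rng = np.random.default_rng(2)

class RestartingTS:
    """Bernoulli Thompson Sampling that wipes its Beta posteriors when a
    change-point is flagged (the global-switching restart). A real system
    would set `changed` from a change-point test such as GLR."""

    def __init__(self, n_arms):
        self.n_arms = n_arms
        self.reset()

    def reset(self):
        # Uniform Beta(1, 1) priors on every arm's mean reward.
        self.alpha = np.ones(self.n_arms)
        self.beta = np.ones(self.n_arms)

    def select(self):
        # Sample a mean estimate per arm; play the arm with the largest draw.
        return int(np.argmax(rng.beta(self.alpha, self.beta)))

    def update(self, arm, reward, changed=False):
        if changed:        # forget everything learned before the change
            self.reset()
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward
```

A usage sketch: call `select()`, observe a 0/1 reward, then `update(arm, reward, changed=detector_fired)` each round; after a restart the posteriors are uniform again, so exploration resumes.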
Learning Invariant Representations for Reinforcement Learning without Reconstruction
Zhang, Amy; McAllister, Rowan*; Calandra, Roberto; Gal, Yarin; Levine, Sergey
We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction. Our goal is to learn representations that both provide for effective downstream control and invariance to task-irrelevant details. Bisimulation metrics quantify behavioral similarity between states in continuous MDPs, and we propose using them to learn robust latent representations which encode only the task-relevant information from observations. Our method trains encoders such that distances in latent space equal bisimulation distances in state space. We demonstrate the effectiveness of our method at disregarding task-irrelevant information using modified visual MuJoCo tasks, where the background is replaced with moving distractors and natural videos, while achieving SOTA performance.

ChemBERTa: Utilizing Transformer-Based Attention for Understanding Chemistry
Chithrananda, Seyone*; Ramsundar, Bharath
Despite the success of pre-training methods in NLP and computer vision, machine-learning-based pre-training methods remain incredibly scarce and ineffective for applications to chemistry. Many previous graph-based molecular property prediction models, which map molecules to a sparse discrete space known as a molecular fingerprint or numerical representation, have yet to see a strong boost in generalizability or prediction accuracy through pre-training techniques. To solve this, we present ChemBERTa, a RoBERTa-like transformer model that learns molecular fingerprints through semi-supervised pre-training of the sequence-to-sequence language model, using masked-language modelling of a large corpus of 250,000 SMILES strings, a well-known text representation of molecules.
We train the model over 15 epochs, obtaining a mean masked-LM likelihood loss of 0.285. After pre-training, we fine-tune ChemBERTa by benchmarking its performance on Tox21, a multi-task dataset for predicting the toxicities of molecules through various biochemical pathways. We also demonstrate the promise of visualizing the attention mechanism in ChemBERTa for the interpretability of chemical features in a molecule, and evaluate the performance of our model. Our models have been made openly available through Hugging Face's model hub, with over 12,000 downloads, and we provide a tutorial for running masked language modelling, attention visualization, and binary classification experiments with ChemBERTa in the DeepChem library.

Gradient Descent on Unstable Dynamical Systems
Nar, Kamil*; Xue, Yuan; Dai, Andrew M
When training the parameters of a linear dynamical model, the gradient descent algorithm is likely to fail to converge if the squared-error loss is used as the training loss function. Restricting the parameter space to a smaller subset and running the gradient descent algorithm within this subset can allow learning stable dynamical systems, but this strategy does not work for unstable systems. In this work, we show that observations taken at different times from the system to be learned influence the dynamics of the gradient descent algorithm to substantially different degrees. We introduce a time-weighted logarithmic loss function to fix this imbalance and demonstrate its effectiveness in learning unstable systems.

Towards Learning Robots Which Adapt On The Fly
Julian, Ryan C*; Swanson, Benjamin; Sukhatme, Gaurav; Levine, Sergey; Finn, Chelsea; Hausman, Karol
One of the great promises of robot learning systems is that they will be able to learn from their mistakes and continuously adapt to ever-changing environments.
Despite this potential, most robot learning systems today are deployed as fixed policies and are not adapted after deployment. Can we efficiently adapt previously learned behaviors to new environments, objects and percepts in the real world? We present a method and empirical evidence towards a robot learning framework that facilitates continuous adaptation. In particular, we demonstrate how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning, including changes in background, object shape and appearance, lighting conditions, and robot morphology. Further, this adaptation uses less than 0.2% of the data necessary to learn the task from scratch. We find that the simple approach of fine-tuning pre-trained policies leads to substantial performance gains over the course of fine-tuning, and that pre-training via RL is essential: training from scratch or adapting from supervised ImageNet features are both unsuccessful with such small amounts of data. We also find that these positive results hold in a limited continual learning setting, in which we repeatedly fine-tune a single lineage of policies using data from a succession of new tasks. Our empirical conclusions are consistently supported by experiments on simulated manipulation tasks, and by 52 unique fine-tuning experiments on a real robotic grasping system pre-trained on 580,000 grasps.

## Call-for-Submissions

Please submit your proposals via CMT in the form of an abstract as a 2-page PDF in the NeurIPS style by 11:59:59 PM PDT, June 25th, 2020. References can be included on a third page. Note: submissions are not blind-reviewed, so please include authors' names and affiliations in the submissions. Acceptable material includes work which has already been submitted or published, preliminary results and controversial findings.
We do not intend to publish paper proceedings; only abstracts will be shared through an online repository. Our primary goal is to foster discussion! For examples of previously accepted talks, please watch the paper presentations from BayLearn 2019 or review the complete list of accepted submissions. For examples of abstracts that have been selected in the past, please see the schedule of talks from BayLearn 2018; that page has videos of the talks and links to PDFs of the abstracts for each of the selected talks.
# Nonequilibrium stationary states of harmonic chains with bulk noises

http://hdl.handle.net/10138/29106

#### Citation

Bernardin, C., Kannan, V., Lebowitz, J. L. & Lukkarinen, J. 2011, 'Nonequilibrium stationary states of harmonic chains with bulk noises', European Physical Journal B. Condensed Matter and Complex Systems, vol. 84, pp. 685-689. https://doi.org/10.1140/epjb/e2011-20746-0

Title: Nonequilibrium stationary states of harmonic chains with bulk noises
Author: Bernardin, Cedric; Kannan, Venkateshan; Lebowitz, Joel L.; Lukkarinen, Jani
Contributor organization: Department of Mathematics and Statistics; Mathematical physics
Date: 2011
Language: eng
Number of pages: 5
Belongs to series: European Physical Journal B. Condensed Matter and Complex Systems
ISSN: 1434-6028
DOI: https://doi.org/10.1140/epjb/e2011-20746-0
URI: http://hdl.handle.net/10138/29106
Abstract: We consider a chain composed of $N$ coupled harmonic oscillators in contact with heat baths at temperature $T_\ell$ and $T_r$ at sites 1 and $N$ respectively. The oscillators are also subjected to non-momentum-conserving bulk stochastic noises. These make the heat conductivity satisfy Fourier's law. Here we describe some new results about the hydrodynamical equations for typical macroscopic energy and displacement profiles, as well as their fluctuations and large deviations, in two simple models of this type.
Subject: 114 Physical sciences
Peer reviewed: Yes
Usage restriction: restrictedAccess
### Easy Mandrill inbound email and webhook handling with Rails

(blogarhythm ~ Psycho Monkey - Joe Satriani)

Mandrill is the transactional email service by the same folks who do MailChimp, and I've been pretty impressed with it. For SMTP mail delivery it just works great, but where it really shines is inbound mail handling and the range of event triggers you can feed into your application as webhooks (for example, to notify on email link clicks or bounces). The API is very nice to use, but in a Rails application it's best to keep all the crufty details encapsulated and hidden away, right? That's what the mandrill-rails gem aims to do - make supporting Mandrill webhooks and inbound email as easy and Rails-native as possible. I recently added some new methods to mandrill-rails to provide explicit support for inbound mail attachments (in the 0.0.3 version of the gem).

With the mandrill-rails gem installed, we simply define the routes to our webhook receiver (in this example an 'inbox' controller):

```ruby
resource :inbox, :controller => 'inbox', :only => [:show, :create]
```

And then in the controller we provide handler implementations for any of the 9 event types we wish to consume. Here's how we might get started handling inbound email, including pulling out the attachments:

```ruby
class InboxController < ApplicationController
  include Mandrill::Rails::WebHookProcessor

  # Defines our handler for the "inbound" event type.
  # This gets called for every inbound event sent from Mandrill.
  def handle_inbound(event_payload)
    # ... do something with the event_payload here,
    # or stuff it on a background queue for later ...
    if attachments = event_payload.attachments.presence
      # yes, we have at least 1 attachment. Let's examine the first:
      a1 = attachments.first
      a1.name    # => e.g. 'sample.pdf'
      a1.type    # => e.g. 'application/pdf'
      a1.content # => the raw content provided by Mandrill,
                 #    base64-encoded if not plain text
                 #    e.g. 'JVBERi0xLjMKJcTl8uXrp/Og0MTGCjQgMCBvY ... (etc)'
      a1.decoded_content # => the content decoded by Mandrill::Rails,
                         #    ready to be written as a File or whatever
                         #    e.g. '%PDF-1.3\n%\xC4\xE5 ... (etc)'
    end
  end
end
```

That's nice and easy, yes? See the Mandrill::Rails Cookbook for more tips. If you love playing with transactional mail and haven't tried Mandrill yet, it's well worth a look!

### Designing for Interesting Moments

(blogarhythm ~ Moments Not Words - F.I.B)

Some deep thinking and analysis of how to design for interesting and effective interactions..

### 2013: Time for web development to have its VB3 moment

(blogarhythm ~ Come Around Again - JET)

And that's a compliment! Wow. This year we mark the 20th anniversary of the Visual Basic 3.0 launch way back in 1993. It's easy to forget the pivotal role it played in revolutionizing how we built software. No matter what you think of Microsoft, one can't deny the impact it had at the time. Along with other products such as PowerBuilder and Borland Delphi, we started to see long-promised advances in software development (as pioneered by Smalltalk) become mainstream reality:

• finally, Rapid Application Development that really was rapid
• simplicity that put the development of non-trivial applications within the realm of the average computer user. It made simple things simple and complex things possible (to borrow from Alan Kay)
• development environments that finally did the obvious: want to build a graphical user interface? Then build it graphically (i.e. WYSIWYG), and build a complete client or client-server app from a single IDE.
• an event-driven programming model that explicitly linked code to the user-facing triggers and views (like buttons and tables)
• perhaps the first mainstream example of a viable software component reuse mechanism (improved and rebranded many times over time: ActiveX, COM, .NET)

In its day, Visual Basic 3.0 was variously lauded (by non-programmers who could finally make the app they always wanted) and loathed (by IT professionals shocked at the prospect of ceding control to the great unwashed). Interestingly, Visual Basic succeeded *despite* the language (BASIC, probably the most widely derided language of all time. Or perhaps it shares that crown with COBOL). The party didn't last long however, as by the late 90's the internet had fundamentally changed the rules of the game. VB, PowerBuilder and the like suffered from an implicit assumption of a client-server architecture, and were not prepared for a webified world. They didn't (all) disappear of course, with Visual Basic in particular finding a significant role as Microsoft's mainstream server-side language, and it lives on in Visual Studio. Yet it lost its revolutionary edge, and had to be content to simply fit in as an "also can do in this language" alternative.

### Web Development - a case of one step back and one step forward?

You would think that over the past 20 years, web development would have been able to leap far ahead of what was best practice in client-server computing at the time. We have certainly come a long way since then, and many advances in practice and technology have become de rigueur. Here are some examples that would not have been considered normal by any stretch in 1993:

• Reliance on open standard protocols at every tier: from client to server, server to database and messaging systems
• Global, well-known repositories of shared, reusable code (Github, Rubygems .. and let's not forget grand-daddy CPAN)
• Version control. There is no argument.
• Automated testing tools and continuous integration.
• Open source is mainstream, and even preferred in many contexts.

Yet it is also salutary to reflect on some of the great innovations we saw back in 1993 that have yet to be re-invented and re-imagined successfully for the web. I am thinking in particular of the radical productivity that was possible with the event-driven, WYSIWYG GUI programming model. It certainly hasn't gone away (take Xcode for example). But why is that not the leading way of building for the web today? After all, the web is graphical and event-driven. A perfect fit, one would think. It has perhaps been the very success of the internet, and the rapid unconstrained innovation it has enabled, that has in turn inhibited major advances in web development. Those that have come close (such as Adobe Flash) have ultimately failed primarily because they did not embrace the open standards of the web. And others, like Microsoft Visual Studio and Oracle JDeveloper, have remained locked in proprietary silos. On the whole, we still work at levels of abstraction that are no higher, and many times lower, than those embodied by the best tools of 1993. It is, after all, very difficult to build abstractions over a foundation that is in constant flux. And with highly productive languages and frameworks at our disposal (like Ruby/Rails), it makes complete sense for many - myself included - to actively spurn graphical IDEs for the immense flexibility we get in return for working at the coding coalface.

### The Tide is Turning

Once the wild west of hackety scripts and rampant browser incompatibilities, the building blocks of the web have been coalescing. HTML5, CSS3 and leading browser rendering engines are more stable, consistent and reliable than ever. JavaScript is now considered a serious language, and the community has embraced higher-level APIs like jQuery and RIA frameworks such as ember.js and backbone.js.
Web design patterns are more widely understood than ever, with kits like Bootstrap putting reusable good practice in the hands of novices. On the backend, our technology stacks are mature and battle-tested (LAMP, Rails). And we have an array of cloud-ready, open source solutions for just about every back-end infrastructure need you can imagine: from big data (Hadoop, MongoDB ..) to messaging (RabbitMQ, ØMQ ..) and more.

My sense is that in the past couple of years we have been edging towards the next leap forward. Our current plateau is now well consolidated. Yet despite efforts such as Codecademy to open up software coding to all, web development remains as complex as ever. To do it well, you really need to master a dizzying array of technologies and standards.

### Time for Web Development to Level Up

What does the next level offer? We don't know yet, but I'd suggest the following as some of the critical concerns for next gen web development:

• a unified development experience: the ability to build a full-stack application as one, without the need for large conceptual and technological leaps from presentation, to business logic, to infrastructure concerns.
• implicit support for distributed event handling: a conventional mechanism for events raised on a client or server to be consumed by another client or server.
• event-driven GUI development: draw a web page as you want it to be presented, hook up events and data sources.
• it is mobile: more than just responsive web design. Explicit support for presenting appropriately on the full range of desktop, tablet and mobile devices.
• distributed data synchronisation: whether data is used live on a web page, stored for HTML5 offline use, or synchronized with a native mobile application, our tools know how to distribute and synchronize updates.
• (ideally) let's not have to go back to square one and re-invent our immense investments in standard libraries and reusable code (like the extensive collection of Ruby gems)

Do we have the perfect solution yet? No. But we are starting to see enticing inklings of what the future may look like. Perhaps one of the most compelling and complete visions is that provided by the Meteor project. It is very close.

Will Meteor streak ahead to gain massive mind-share and traction? Or will an established platform like Rails take another giant step forward? Or is there something else in the wings we don't know about yet? It will be an interesting year. And if the signs are to be trusted, I expect we'll look back on 2013 as a tipping point in web development - its VB3 moment.

Do you think we're in for such a radical shift? Or heading in a different direction altogether? Or will inertia simply carry the status quo... I'd love to hear what others think!

### How to make an eBook (blogarhythm ~ Land of a Thousand Words - Scissor Sisters)

So eBook sales have surpassed hardcover for the first time, and it is no surprise that the rise of the tablets is the main driver. There's something quite comfortable about having a nice digital bundle of information at your fingertips, like warm buttered toast.

With relatively open standards and the ubiquity of ereaders, the ebook has become ideal packaging for all manner of information, from training manuals to open source project documentation. Or even that book that apparently 81% of us believe we have inside.

So how do you make an ebook? My first thought on searching for options is that we are pretty spoiled for choice. But there are important caveats to note, like how Apple iBooks Author can only publish in full fidelity to the iTunes Book Store.
And we can't get far before needing to study up on the various formats out there: EPUB is widely used, but not without its criticisms and edge-cases, especially when trying to push the boundaries with multimedia and social aspects; the Kindle and other ereaders expect Mobi; and the Kindle Fire introduced the KF8 format.

The good news is that producing pretty standard EPUB, Mobi, PDF and HTML variants of your book can be done very easily with a range of commercial and non-commercial tools. It's even possible to build an EPUB by hand with just a text editor if you are game.

I started to experiment with some open source toolchains to see just how well they work in practice. Personally, I'm liking the simplicity of using pandoc to build a range of output formats from markdown source files. My experiments are in the eBook Toolchains project on github if you'd like to examine the various approaches I've tried.

Have you tried something that's not there? I'd love to hear about it - comment below or better yet, send me a pull-request on the github project with your examples added!
{}
# Prime ideal decomposition in quadratic field extensions

Once you have the character $\chi$ of a quadratic field extension and the corresponding modulus $N$, it is easy to see which prime ideals split, ramify and are inert by looking at their remainder $\bmod N$. But how do you determine what the ideal actually splits (or ramifies) into? Is this hard in general, or is there an algorithm one can use?

For example, in $\mathbb{Q}(\sqrt{3})$, I know that $p=2,3$ are the ramified primes, and $p$ is split if $p\equiv 1,11 \pmod{12}$ and inert if $p\equiv 5,7 \pmod{12}$. But, without running through all possible combinations of $a$ and $b$, how would I find the specific $a,b\in \mathbb{N}$ such that $2=(a+b\sqrt{3})(a-b\sqrt{3})$? If there is no general algorithm, would you be able to give an explanation for the case of $\mathbb{Q}(\sqrt{3})$?

• In general there need not be such $a,b$, not even in the split case, because the prime ideals may not be principal. A general result when $N\not\equiv1\pmod4$ is that if $p\equiv m^2\pmod{N}$, then the ideals $\frak{p}_1=(p,m+\sqrt{N})$ and $\frak{p}_2=(p,m-\sqrt{N})$ are the prime factors. But it is possible that neither is generated by a single element of the form $a+b\sqrt{N}$. Similarly in other cases. I suspect that algorithms are known for those $N$ where we know the class number to be equal to one. – Jyrki Lahtonen Feb 25 '13 at 8:26
• @JyrkiLahtonen Your comment fits into an answer. Might it be made one, for the convenience of the readers? Thanks in any case in advance. – awllower Feb 25 '13 at 14:53
• I appreciate the vote of confidence, but I would like to be able to point at an algorithm for finding those generators (when they exist). Something like a Pell equation will pop out, and those have algorithms (IIRC based on continued fractions), but I'm not familiar with those. – Jyrki Lahtonen Feb 25 '13 at 14:56

Let $\omega\in \mathbb{C}$ be the root of a monic irreducible $f\in \mathbb{Z}[X]$ of degree $2$.
Let $K=\mathbb{Q}(\omega)$, $R=\mathcal{O}_K$. Suppose the rational prime $p\in \mathbb{Z}$ splits in $R$, so that $p$ doesn't divide the discriminant of $f$. Then $f$ factors mod $p$ as $f=(X-r)(X-s)$ for some $r\neq s$. So, by the Chinese Remainder Theorem, $$R/(p)\cong \frac{\mathbb{Z}[X]}{(p,f)}\cong \mathbb{F}_p[X]/(f) \cong \frac{\mathbb{F}_p[X]}{(X-r)}\times \frac{\mathbb{F}_p[X]}{(X-s)},$$ which is a product of fields. Let $\varphi$ be the isomorphism from the rhs to the lhs.

Since $s-r$ is non-zero mod $p$, it has an inverse $u$ mod $p$. So, $u(X-r)-u(X-s)\equiv 1$ mod $p$, which means that $u(\omega-r)-u(\omega-s)\equiv 1$ mod $pR$. So, $\varphi$ takes the two prime ideals $(1)\times 0$ and $0\times (1)$ of the lhs of the above equation to the prime ideals $(-u(\omega-r) \mod{pR})=(\omega-r \mod {pR})$ and $(u(\omega-s) \mod{pR})=(\omega-s \mod {pR})$. Hence, the two prime ideals in $R$ containing $p$ are $$(p,\omega-r)\quad\text{and}\quad(p,\omega-s).$$

Now, suppose $p$ ramifies in $R$. Then $f$ factors mod $p$ as $(X-r)^2$ for some $r$, and so $$R/(p)\cong \frac{\mathbb{F}_p[X]}{((X-r)^2)}.$$ But the rhs is $(X-r)$-primary, so its only prime ideal is the image of $(X-r)$. Hence, the only prime in $R$ containing $p$ is $(p,\omega-r)$.

For any prime $p\in\mathbb{Z}$, either $p$ is inert, or $p$ is not inert and the primes in $R$ containing $p$ are exactly the ideals $(p,\omega-r)$, where $r$ is a root of $f$ modulo $p$.
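For the concrete case $\mathbb{Q}(\sqrt{3})$ raised in the question (class number one, so the prime ideals are principal), a generator can be found by a brute-force search for an element of norm $\pm p$. This is only an illustrative sketch, not the continued-fraction/Pell machinery alluded to in the comments; the function name and search bound are made up:

```python
def find_generator(p, d=3, bound=200):
    """Search for a, b >= 0 with |a^2 - d*b^2| == p, i.e. an element
    a + b*sqrt(d) of norm +-p in Z[sqrt(d)].  Returns None if nothing
    is found within the bound (e.g. when no such generator exists)."""
    for a in range(bound):
        for b in range(bound):
            if abs(a * a - d * b * b) == p:
                return a, b
    return None

# 2 is ramified in Q(sqrt(3)): 1 + sqrt(3) has norm -2,
# and (2) = (1 + sqrt(3))^2 as ideals, since 2 + sqrt(3) is a unit.
print(find_generator(2))    # (1, 1)
# 11 and 13 are split (11, 13 are 11, 1 mod 12):
print(find_generator(11))   # (1, 2): norm of 1 + 2*sqrt(3) is -11
print(find_generator(13))   # (4, 1): norm of 4 + sqrt(3) is 13
```

Note the sign: $\mathbb{Z}[\sqrt{3}]$ has no unit of norm $-1$ (since $x^2\equiv -1\pmod 3$ is unsolvable), so a norm of $-p$ cannot be adjusted away; e.g. $2=(\sqrt 3+1)(\sqrt 3-1)$ rather than $(1+\sqrt 3)(1-\sqrt 3)$.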
{}
# In the force method of analysis of indeterminate trusses, if the truss is indeterminate to degree one, the change in length of the redundant member due to unit force is found by using which formula? Here A is the cross-sectional area, I the moment of inertia, n the force in the member due to unit load application, N the force in the member due to the actual load, and E the modulus of elasticity.

This question was previously asked in MPSC AE CE Mains 2017 Official (Paper 1)

1. $$\sum \frac{{nNL}}{{EI}}$$
2. $$n\sum \frac{NL}{AE}$$
3. $$\sum \frac{nNL}{AE}$$
4. $$\sum \frac{NL}{AE}$$

Option 3 : $$\sum \frac{nNL}{AE}$$

## Detailed Solution

Explanation:

From the virtual work principle, we know that

External virtual work = Internal virtual work

When the external virtual loads are multiplied by the real displacements, we get the external virtual work. If a unit virtual load produces internal forces equal to $n_i$ in the various members and the real deformation of member $i$ is $dl_i$, then the internal virtual work done is $w = \sum n_i \times dl_i$.

Thus, if the virtual load at any point is 1 and the displacement at that point due to external effects is Δ, then

$$dl_i = \frac{{{N_i}{L_i}}}{{{A_i}{E_i}}} = \text{change in length of any member due to external load}$$

$$\Delta = \mathop \sum \limits_{i = 1}^n {n_i} \times \frac{{{N_i}{L_i}}}{{{A_i}{E_i}}}$$
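The selected formula reduces to a one-line sum over the members. A minimal sketch in Python with made-up member data (the forces, lengths, areas and E below are illustrative, not part of the question):

```python
def truss_deflection(members):
    """Unit-load (virtual work) deflection: delta = sum(n*N*L / (A*E)),
    where n is the member force from the unit load and N the member
    force from the actual load."""
    return sum(n * N * L / (A * E) for n, N, L, A, E in members)

# Hypothetical two-member truss (SI units: N, m, m^2, Pa).
members = [
    (1.0, 10e3, 2.0, 1e-4, 200e9),   # n, N, L, A, E
    (0.5, 5e3, 1.0, 1e-4, 200e9),
]
print(truss_deflection(members))  # approximately 0.001125 (metres)
```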
{}
# In a shader, why does substituting a variable with the expression producing it cause different behaviour?

I have this correctly-working OpenGL shader producing Perlin noise:

    float left = lerp(fade(v), downleft, topleft);
    float right = lerp(fade(v), downright, topright);
    float result = lerp(fade(u), left, right);

Then I tried plugging the definitions of right and left into result:

    float result = lerp(fade(u),

Surprisingly, this behaves completely differently, giving visible edges in my Perlin noise. Below are both results. My whole 30-line shader is here. What is the difference between those?

• Some trivia notes -- your lerp() is equivalent to the built-in function mix(). And, it may be handy to put your shader into ShaderToy, as a convenient way for others to easily run/debug it. (That all said, I'm so far mystified why the substitution would do what it seems to be doing.) Oct 23 '15 at 18:47
• I bet that you're taking coordinates from each render in one of them and screen in the other. Nov 1 '15 at 10:43
• @Lolums can you rephrase? I did not quite get you. Nov 1 '15 at 11:20
• @hungry91 ignore that comment, it's invalid. I'll try to answer the question later though when I get some time :) Nov 1 '15 at 14:53
• I'm unable to recreate this. Are you sure you didn't accidentally plug left in twice or something? Nov 24 '15 at 22:00
{}
# Elastic collision momentum transfer

#### j1979p

Consider a collision between a large and small body (golf club/hockey stick/baseball bat and a ball, for example). How is it possible to calculate how much the momentum of parts of the body removed from the collision contact point affects the momentum of the smaller body before it accelerates away? Many people think that since the contact duration is so small (as little as half a millisecond in some cases), only the part of the mass in the vicinity of the collision matters (i.e. the effective mass). However, how do you calculate this? Is it something to do with the speed of propagation of elastic stress waves?

#### K^2

An elastic collision conserves energy. When a club/bat hits a ball, it loses some of its momentum and some of its angular momentum. Both are carried away by the linear motion of the ball. (The ball can also have angular momentum, but it can usually be neglected at this stage.) So you have 3 unknowns after the collision: linear momentum of the ball, linear momentum of the bat, angular momentum of the bat; and 3 equations: conservation of momentum, conservation of angular momentum, and conservation of energy. If you write out the before/after condition on all 3 equations, and then solve them as a system for your 3 unknowns, you will find the correct momentum transfer.

Notice that momentum transfer technically has 3 degrees of freedom, but in a brief collision, momentum is only transferred in the direction that is normal to the surface of the ball at the impact point. This is what lets you reduce the problem to 3 unknowns instead of 9. (3 DOF for two linear motions and for one rotation.)

This approach does neglect a few factors. When a baseball bat hits a baseball, for example, the bat vibrates quite violently. Typically, you can model that as energy loss in a collision, id est, assume it's not perfectly elastic, which is usually true.
But even assuming a perfectly elastic collision, you'll be surprised how accurate you can get, especially for something like the golf ball.

If any of this is still hazy, feel free to make up an example problem, and somebody here can probably walk you through solving it. Make sure you provide initial velocities for the ball and whatever you swing at it, as well as masses of the ball, and either the actual value or a way to calculate the moment of inertia for the bat/club.

#### j1979p

> This approach does neglect a few factors. When a baseball bat hits a baseball, for example, the bat vibrates quite violently. Typically, you can model that as energy loss in a collision, id est, assume it's not perfectly elastic, which is usually true. But even assuming perfectly elastic collision, you'll be surprised how accurate you can get, especially for something like the golf ball.

True, but the vibrations occur long after the ball has left the bat and have no effect on the ball. I am more interested in the momentum/energy that is transferred to the ball.

> If any of this is still hazy, feel free to make up an example problem, and somebody here can probably walk you through solving it. Make sure you provide initial velocities for the ball and whatever you swing at it, as well as masses of the ball, and either the actual value or a way to calculate moment of inertia for the bat/club.

OK, if you need numbers, take:

shaft mass: 120 g
shaft material: steel
initial ball velocity: 0
ball mass: 45 g
clubhead speed at impact: 45 m/s
coefficient of restitution between club and ball: 0.8

Now, the usual answer is to ignore the shaft and say momentum (of clubhead) before = 9000 g m/s, etc. However, we must calculate the angular momentum of the shaft too (probably a differential equation) and then try to find out how much of this angular momentum is transferred in the contact time of only 0.5 milliseconds! The latter is the part of the problem I am most interested in.
And I think more variables need to be brought in for this (e.g. stress wave velocity in steel).

#### K^2

I'll need the length of the shaft and initial pivot point as well. And can I assume that impact happens in a direction perpendicular to the arm of rotation? Vibrations do impact the collision, though. It has to do with the fact that the club isn't a true rigid body.

#### j1979p

Take the length of the shaft as 1 m for simplicity. The pivot point is at the top (l = 0) of this shaft, again for simplicity. Yes, impact is perpendicular to the arm of rotation. I understand vibrations affect slightly how the ball comes off the clubface, but not very much, since the majority of the vibrational energy is dissipated after the ball is gone.

#### K^2

Alright. Center of mass.
$$R = \frac{M_{shaft}\frac{L}{2} + M_{head}L}{M_{shaft} + M_{head}}$$
$$I_{head} = M_{head}(L-R)^2$$
$$I_{shaft} = \frac{M_{shaft}L^2}{12} + M_{shaft}\left(R-\frac{L}{2}\right)^2$$
$$I = I_{head} + I_{shaft}$$

v - CoM velocity of club. ω - angular velocity of club. u - CoM velocity of ball. M = Mhead + Mshaft - mass of club. m - mass of ball.
$$v_i = v_{head}\frac{R}{L}$$
$$\omega_i = \frac{v_{head}}{L}$$

Conservation equations. Energy.
$$\frac{M v_i^2}{2} + \frac{I \omega_i^2}{2} = \frac{M v_f^2}{2} + \frac{I \omega_f^2}{2} + \frac{m u_f^2}{2}$$
Momentum.
$$M v_i = M v_f + m u_f$$
Angular momentum.
$$I \omega_i = I \omega_f + m u_f L$$

Rewriting for vf and ωf.
$$v_f = \frac{M v_i - m u_f}{M}$$
$$\omega_f = \frac{I \omega_i - m L u_f}{I}$$

And going back to energy conservation with substitutions.
$$\frac{M v_i^2}{2} + \frac{I \omega_i^2}{2} = \frac{1}{2}M \left(\frac{M v_i - m u_f}{M}\right)^2 + \frac{1}{2}I \left(\frac{I \omega_i - m L u_f}{I}\right)^2 + \frac{m u_f^2}{2}$$

Collecting terms.
$$\frac{1}{2}\left( \frac{m^2}{M} + \frac{m^2 L^2}{I} + m \right)u_f^2 - (v_i m + \omega_i m L)u_f = 0$$

Note the constant term cancellation. This is a very good thing, since one of the two solutions is automatically uf = 0.
The initial condition must satisfy the same conservation equations as the final one. If not, we messed up somewhere. Excluding the uf = 0 solution, final form.
$$\frac{1}{2}\left( \frac{m^2}{M} + \frac{m^2 L^2}{I} + m \right)u_f = v_i m + \omega_i m L$$

Numbers time. (I just realized that I completely forgot to put in the coefficient of restitution. So this collision just became perfectly elastic. The coefficient of restitution can be put into the energy conservation equation, and it makes a royal mess. The constant term doesn't go away, so you have to solve a quadratic equation and get two solutions for uf. Only one of these will make sense, though. I'm pretty sure the wrong solution will be imaginary, but I don't want to bother with actually going through it.)

R = 0.8125 m
Ishaft = 0.02171875 kg m²
I = 0.02875 kg m²
vi = 36.5625 m/s
ωi = 45 s⁻¹
uf = 91.12 m/s

Seems reasonable, and dimensions work out, so that's always nice. Feel free to try this with the restitution coefficient in place. It shouldn't change the answer too much.

#### j1979p

Thanks K^2, you put a lot of work into that problem but it seems a bit familiar to me. It seems you have simplified the problem (as many do) by not taking into account the fact that not all of the energy or momentum transfer that you have calculated to have been transferred to the ball will actually have been transferred. I'm sure the clubhead contribution you have calculated is pretty accurate, but in the case of the shaft, some of the angular momentum of the shaft simply cannot have had time in the 0.5 milliseconds to have transferred anything to the ball! The reason is that the shaft is so long. I would love to know how much of the shaft might have contributed, and what determines this? It is a part of collision physics that is so important (not just in sport) because most colliding objects are not perfectly rigid, and so only the "effective" mass close to the point of collision should be used in momentum calculations.
Yet despite its apparent importance, I cannot find anything on the topic on the internet!
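K^2's closing relation can be checked numerically. One caveat: the head mass is never stated in the thread, so 0.2 kg is assumed here, inferred from the quoted clubhead momentum (9000 g m/s at 45 m/s); treat this as a sketch of the algebra, not a definitive answer:

```python
# Numeric check of K^2's rigid-body collision algebra.
# ASSUMPTION: head mass 0.2 kg, inferred from the quoted clubhead
# momentum (9000 g m/s / 45 m/s); it is not stated in the thread.
M_shaft, M_head, L = 0.120, 0.200, 1.0
m, v_head = 0.045, 45.0          # ball mass (kg), clubhead speed (m/s)

M = M_shaft + M_head
R = (M_shaft * L / 2 + M_head * L) / M            # centre of mass of club
I_head = M_head * (L - R) ** 2
I_shaft = M_shaft * L ** 2 / 12 + M_shaft * (R - L / 2) ** 2
I = I_head + I_shaft

v_i = v_head * R / L                              # CoM velocity of club
w_i = v_head / L                                  # angular velocity of club

# Ball speed from the closing linear relation (perfectly elastic case).
u_f = (m * v_i + m * w_i * L) / (0.5 * (m ** 2 / M + m ** 2 * L ** 2 / I + m))

# Back-substitute and measure the residual of all three conservation laws.
v_f = (M * v_i - m * u_f) / M
w_f = (I * w_i - m * L * u_f) / I
p_err = M * v_i - (M * v_f + m * u_f)
L_err = I * w_i - (I * w_f + m * u_f * L)
E_err = (0.5 * M * v_i ** 2 + 0.5 * I * w_i ** 2) - (
    0.5 * M * v_f ** 2 + 0.5 * I * w_f ** 2 + 0.5 * m * u_f ** 2)

print(R, I, v_i, w_i)   # approx. 0.8125, 0.02875, 36.5625, 45.0 as in the thread
```

The three conservation residuals vanish to machine precision for the computed u_f, which verifies the algebraic steps; restitution and shaft flexibility are still ignored, as the thread discusses.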
{}
After evaluating Null Company's manufacturing process, management decides to establish standards of 3 hours of direct labor per unit of product and $16.00 per hour for the labor rate. During October, the company uses 19,200 hours of direct labor at a $311,040 total cost to produce 6,600 units of product. In November, the company uses 23,000 hours of direct labor at a $374,900 total cost to produce 7,000 units of product.

(1) Compute the direct labor rate variance, the direct labor efficiency variance, and the total direct labor cost variance for each of these two months.
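The requested variances follow from the standard cost-accounting formulas: rate variance = (actual rate − standard rate) × actual hours, efficiency variance = (actual hours − standard hours) × standard rate, and the total variance is their sum. A quick sketch (positive = unfavorable, negative = favorable):

```python
# Integer arithmetic keeps the variance computation exact:
#   rate variance       = actual cost - standard rate * actual hours
#   efficiency variance = standard rate * (actual hours - standard hours)
#   total cost variance = rate variance + efficiency variance
SR, HRS_PER_UNIT = 16, 3   # standard rate ($/h) and standard hours per unit

def labor_variances(actual_hours, actual_cost, units):
    standard_hours = HRS_PER_UNIT * units
    rate = actual_cost - SR * actual_hours
    efficiency = SR * (actual_hours - standard_hours)
    return rate, efficiency, rate + efficiency

print(labor_variances(19200, 311040, 6600))  # (3840, -9600, -5760)
print(labor_variances(23000, 374900, 7000))  # (6900, 32000, 38900)
```

So October shows a $3,840 unfavorable rate variance offset by a $9,600 favorable efficiency variance ($5,760 favorable in total), while November is unfavorable on both counts ($38,900 in total).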
{}
# What is the formula for the surface area of a triangular prism?

A: The formula for the surface area of a triangular prism is SA = bh + (s1 + s2 + s3)H. In this formula, "b" is the triangle base, "h" is the triangle height, "s1," "s2" and "s3" are the three triangle sides, and "H" is the length of the prism.

A triangular prism can be thought of as two triangles and three quadrilaterals, so finding the total surface area involves finding the area of each of the five shapes and adding them together. The area of a triangle is one half of the base multiplied by the height. The area of a quadrilateral is the base multiplied by the height.

## Similar Questions

• A: The faces of a triangular prism are parallelograms, while the two bases are triangles. This gives the prism three faces. In a regular prism, the faces are rectangles. However, some prisms lean to one side and are oblique.
• A: The formula for calculating the total surface area of a pyramid is S = (1/2)Pl + B. The surface area of a pyramid is the total sum of the lateral area combined with the area of the base.
• A: The formula is the length of the prism times the area of the trapezoid, which is one-half times (a + b) times the height; the area is also called the cross-sectional area. "a" and "b" are the two bases of the trapezoid. The bases are the sides that run parallel to one another, which means they never touch.
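As a quick sketch, the formula translates directly into code; the example uses a 3-4-5 right triangle (the two legs serve as base and height) and a prism length of 10:

```python
def triangular_prism_surface_area(b, h, s1, s2, s3, H):
    """SA = b*h + (s1 + s2 + s3)*H: two triangular ends (2 * b*h/2)
    plus three rectangular faces of length H."""
    return b * h + (s1 + s2 + s3) * H

# 3-4-5 right-triangle ends, prism length 10: 12 + 12*10
print(triangular_prism_surface_area(3, 4, 3, 4, 5, 10))  # 132
```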
{}
# Tag Info

24

Disclaimer: I'm by no means knowledgeable in this field and I haven't read the papers or books I mention below. I found these by digging in the literature and hope these pointers are useful. The answer to your question is no. The first example was given by R.H. Bing, The Cartesian Product of a Certain Nonmanifold and a Line is $E^4$, Ann. of ...

22

There is exactly one way in which one can convince oneself that a statement is not obvious: try to prove it and look at your attempts very, very critically. If you think you can come up with a proof of the curve theorem, edit it into the answer and we can help you dissect it :) Later. Asaf observes that it may be the case that you are referring to ...

17

The Jordan Curve theorem is actually pretty easy to prove if you assume the curve is smooth or piecewise linear. The difficulty arises when you try to handle the general case. This includes nowhere-differentiable curves like the boundary of the Koch snowflake, and even wilder curves which can't even be drawn by hand, like Mariano says. It's kind of a magic ...

16

I think perhaps the biggest thing that blinds one's intuition is that when one imagines embedding a circle in the plane, it's very easy to "lose the plot" and instead imagine embedding a disc in the plane, with the circle on the boundary. So not only do you see immediately the inside and outside, but you also see the Schoenflies theorem -- that a circle in ...

12

The $E_8$ manifold is pretty easy to construct. We may describe it by the following diagram: The meaning of the diagram is as follows. For each dot, we take the disk bundle over $S^2$ with Euler number $2$. This gives us eight $4$-manifolds with boundary. Now we plumb together each disk bundle as indicated by each edge. The result of the plumbing is a ...

12

Stillwell's book Classical Topology and Combinatorial Group Theory is a good first place to start to get a feel for the techniques of geometric topology.
If you want to get your feet wet in the world of $4$-manifolds, there's a great book called The Wild World of $4$-manifolds by Scorpan which could serve as a source of further papers for you to look at. For ...

11

In general, the tangent bundle of a smooth $n$-manifold $M$ is classified by a (homotopy class of) map $\phi:M\rightarrow BO(n)$. The manifold $M$ is orientable iff there is a lift of this map to $BSO(n)$. For $n=2$, and $M$ orientable, we see that the tangent bundle to $M$ is classified by a map $\phi:M\rightarrow BSO(2) = BS^1 = \mathbb{C}P^\infty$. ...

11

No for $n=2$. Consider a disk with two circular holes (red in the picture below) versus a disk inside an annulus (blue) versus three disks (green). In all cases the boundary consists of three disjoint circles.

11

You can't do it. Suppose $G$ were an $n$-dimensional manifold which is a topological group. Recall that an orientation of a topological manifold $M$ is a consistent choice of generator for $H_n(M,M\setminus\{x\})\cong \mathbb Z$ for each $x\in M$. But the left-multiplication homeomorphisms $\ell_g\colon G\to G$, $x\mapsto gx$ give canonical isomorphisms from ...

10

There's the path fibration $\Omega B \to PB \to B$, where for basepoint $* \in B$, $PB = \{\gamma:[0,1]\to B \ |\ \gamma(0) = *\}$ is called the path space of $B$, and $\Omega B = \{\gamma:[0,1]\to B \ |\ \gamma(0) = \gamma(1) = *\}$ is the loop space of $B$. The map $p:PB \to B$ is the endpoint map $\gamma \mapsto \gamma(1)$. I'm not quite sure what ...

10

This answer is meant to complement Jim's. Guillemin and Pollack's Differential Topology text is a great start that's not too specialized in any particular direction. Once you've got some basic algebraic topology background, you can start to link up a lot of basic notions via Guillemin and Pollack (Poincaré duality, intersection theory). A lot of ...
10

As I said in the comments above, this has a positive solution due to a recent preprint of Lars Louder, which can be found here. The paper proves that surface groups have a "single Nielsen equivalence class of generating $2g$-tuples". I'll explain what this means, and then I will explain why this solves the problem. (In the comments below, Lars Louder has ...

10

Trying to use $\chi = 2 - 2g$ to describe things that aren't closed orientable surfaces is missing the point, I think. In my opinion one should think of the Euler characteristic of a compact space as a homotopy-invariant refinement of the cardinality of a finite set; see this blog post. A closed disk is contractible, so has Euler characteristic $1$, and ...

10

The connected sum of two disks is an annulus. If you think of an annulus as being a hole, then I suppose a disk is half a hole.

10

A vector bundle on $S^2$ can be constructed by gluing two trivial vector bundles over $S^2_+$ and $S^2_-$, the closed hemispheres. This is called the clutching construction; see, for example, Husemoller's book. The «gluing instructions» are a map from the equator, a circle $S^1$, to $\mathrm{GL}_n(\mathbb R)$, and the result depends only on the homotopy ...

8

What you get when you do that quotient is a space homeomorphic to $\mathbf P^3(\mathbb R)$. In particular, it does not have a boundary. A way to see this is to remember that $\mathbf P^3(\mathbb R)$ is more usually built as the quotient space of a $3$-sphere $S^3\subseteq\mathbb R^4$ by identifying antipodal points, and noticing that when you do this, the ...

8

The intuition is basically that $4$ dimensions is large enough that $4$-manifolds exhibit great variety (one example: there are $4$-manifolds with arbitrary finitely-generated fundamental groups), but small enough that the high-dimensional-manifold surgery theory apparatus doesn't apply to smooth manifolds, only topological manifolds. The key idea for the ...
8

For your first question, it is true that fundamental groups of closed hyperbolic manifolds cannot contain copies of $\mathbb{Z}^2$. However, a knot complement is not a closed manifold! The hyperbolic structure on the knot complement will be a complete hyperbolic manifold with finite volume, but with a cusp. The fundamental group of the cusp is ...

8

Here are explicit equations for nonsmoothable manifolds (all of which admit triangulations). I do not know if these are the "easiest", but they are surely much more explicit than a description of the E8-manifolds, which are constructed as the result of some infinite, and very implicit, process (Freedman's work). Consider the homogeneous equation $$z_1^5 + ...$$

7

Proposition 1 answers the "revised" question and Proposition 2 the original one. For completeness we give a self-contained proof of Proposition 2. Proposition 1: Let $n\in\mathbb{N}$ and let $p(x)=\alpha x^2+\beta x +\gamma$ be a polynomial with real coefficients where $\alpha >0$, such that $p'(n)> 0$ and $p'(n+1)<1$. Then for $k\in \mathbb{Z}$, ...

7

Yes. Your space is homeomorphic to the standard unit sphere $S^2\subseteq\mathbb R^3$ with three open disks removed. These disks can be chosen so that the rotation of $\mathbb R^3$ by $\frac{2\pi}3$ around some axis cyclically permutes them. This rotation, however, has two fixed points, where the axis of rotation intersects $S^2$. Compose it with a ...

7

Not always. Consider in $\mathbb{R}$: $$A = \mathbb{Z}, \quad B = \left\{ n + \frac{1}{n} : n \geqslant 2 \right\}$$ so $\frac{1}{n} \in A + B$ but $0 \not\in A+B$.

7

The fibrations $O(n-1)\to O(n) \to S^{n-1}$, $U(n-1)\to U(n) \to S^{2n-1}$, and $Sp(n-1)\to Sp(n) \to S^{4n-1}$ from Bott periodicity are fairly important. Also mapping tori are fiber bundles. For example, the complement of a fibered knot in $S^3$.

7

There are many more 3-manifolds that arise as the boundary of compact contractible 4-manifolds than just those, including some Brieskorn spheres like $\Sigma(2,3,5)$.
The keyword you want is Mazur manifold. It is my impression that, and I would be surprised if it weren't true, there is not much known about what 3-manifolds arise as the boundary of a Mazur ...

7

A handle attachment is the process of gluing a copy of $D^k\times D^{n-k}$ to $\partial X$. A (normal) framing gives a recipe for performing such a gluing, by specifying (up to ambient isotopy) a collar of $\partial D^{k}\times \{0\}$ in $X$. Gompf-Stipsicz express this data as: an embedding $\varphi_0\colon\, S^{k-1}\to\partial X$ with trivial normal ...

7

The answer depends on what you mean by a solid genus-2 handlebody, and the trouble is that this is ambiguous and requires some interpretation. Had you asked about "the complement of the solid genus-2 handlebody" then I would assume you meant this and the answer would be yes, as in the answer of @QuangHoang. But since you asked about "the complement of a ...

7

Consider the topologist's sine curve $$y = \sin \bigg(\frac{1}{x}\bigg),\ 1\geq x>0$$ together with the interval $\{(0, t): |t|\leq 1\}$ and a curve joining this interval with the graph. This is simply connected but is not contractible. You may find the proof of noncontractibility in http://math.ucr.edu/~res/math205B-2012/polishcircle.pdf

7

Here's an alternate proof which doesn't use invariance of domain. It also gives a slightly stronger result. Theorem: Let $M^n$ be compact without boundary. Then there is no immersion $f:M\rightarrow \mathbb{R}^n$. Proof (sketch): Assume for a contradiction there is such an $f$. Since $M$ is compact, $f$ is a closed map, that is, it maps closed sets to ...

7

The general case is very much like the 2-dimensional case, it just takes time to process the picture, to see how you could do the same constructions in the higher-dimensional case. A punctured $S^1 \times S^1$ looks like a wedge of two circles, but fattened up a little bit. Precisely, around each circle you have an annulus neighbourhood. To immerse the ...
6

Poincaré's conjecture follows from Perelman's proof of the Thurston Elliptization Conjecture. To put it simply, Thurston's Geometrization Conjecture claims that if you have a closed prime orientable $3$-manifold then you can cut it along a suitable collection of embedded tori so that each of the pieces you are left with can be endowed with a "nice" geometry. ...
{}
# Category Archives: Abstracts

## A theory of Lower Semicontinuity for Integral Functionals with Linear Growth and u-dependence, Giles Shaw (Reading & Cambridge)

Variational problems with linear growth arise naturally in the Calculus of Variations from the study of singular perturbation problems associated to a large number of physical and mathematical applications. These problems must be posed over the class of functions of bounded variation, and their analysis is significantly more involved than that which is…

## A variational problem from micromagnetics with a nonlocal term, Roger Moser (Bath)

We study a model for transition layers, called Néel walls, in thin films of ferromagnetic materials. The magnetisation is represented by a map from a line to the unit circle in this model, and there is an energy functional consisting of an Allen-Cahn type term and a nonlocal term penalising a fractional Sobolev norm of…

## Coherent motion for interacting particles: waves in the Frenkel-Kontorova chain, Johannes Zimmer (Bath)

In 1939, Frenkel and Kontorova proposed a model for the motion of a dislocation (an imperfection in a crystal). The model is simple: a chain of atoms following Newton's equation of motion. The atoms interact with their nearest neighbours via a harmonic spring and are exposed to a periodic (non-convex) on-site potential. Despite the simplicity, the model has…

## Entropic gradient flow formulation for non-linear diffusion, Marios Stamatakis (Bath)

Nonlinear diffusion ∂tρ = ∆(Φ(ρ)) is considered as the hydrodynamic limit of the zero-range process. It is shown that for suitable choices of Φ, a metric can be defined with respect to which the non-linear diffusion is the gradient flow of the thermodynamic entropy of the zero-range process.
Hence we call this metric the… Read More »

## On the L-p approximation of L-infinity minimisation problems: Theory and Numerics, Tristan Pryer (Reading)

In this talk we will present a methodology for the approximation of solutions to problems arising from the calculus of variations in L-infinity. We make use of L-p approximations and present a variety of theoretical and numerical results to this end.

## A quasilinear boundary value problem involving Sobolev’s exponent, Carlo Mercuri (Swansea)

We will discuss a p-Laplacian problem involving a nonlinearity of critical growth. Although the problem is variational, standard variational techniques do not directly apply because of the possible lack of compactness of the minimising sequences, due to the combined effect of dilations and translations. I will present a recent result in collaboration with B.… Read More »

## Singular limits of nonlinear elliptic and parabolic systems, Elaine Crooks (Swansea)

Large-interaction limits of certain systems of elliptic and parabolic PDE, such as, for instance, population systems with large competition, both provide a powerful mathematical tool that can be exploited to obtain information about systems that are otherwise difficult to analyse, and correspond to important biological and physical phenomena such as spatial segregation, phase separation, or… Read More »

## Homogenisation for mean field games, Nicolas Dirr (Cardiff University)

Mean field games were introduced by J.-M. Lasry and P.-L. Lions as an effective model for very many competing rational agents. They are a system of Hamilton-Jacobi equations and Kolmogorov-Fokker-Planck equations. One of the challenges is the fact that these two types of equations have different “natural” notions of generalized solutions.
We investigate dynamical mean field… Read More »

## Some recent results on Optimal Transport and Density Functional Theory, Augusto Gerolin (Bath)

We want to present some new results related to Multi-marginal Optimal Transport Theory for Coulomb and repulsive harmonic costs. In particular, we are inquiring about the existence of Monge-type solutions in the multi-marginal case. We intend to give a brief survey on the general theory and discuss some of its main issues. Most of the… Read More »

## Vectorial Calculus of Variations in $L^\infty$ and generalised solutions for fully nonlinear PDE systems, Nikos Katzourakis (Reading)

Calculus of Variations in $L^\infty$ has a long history, the scalar case of which was initiated by G. Aronsson in the 1960s and has been under active research ever since. Aronsson’s motivation to study this problem was related to the optimisation of Lipschitz Extensions of functions. Mathematically, minimising the supremum is very challenging because the equations are… Read More »
{}
# Unknown Random Variable

Hi all, This is perhaps a non-standard question. I am building a model, and there is an unknown parameter Z. I have placed a prior over Z \sim N(0, 1). The model has several other parameters, \theta. Usually, inference would compute p(Z, \theta | \mathcal{D}). However, even though statistically \mathcal{D} might provide evidence about the value of Z, I want to force the posterior to be the same as the prior; alternatively, I want to compute p(\theta | \mathcal{D}, Z \sim N(0, 1)). The reason that I want to do this is that the model is mis-specified, and so I don’t think the evidence should be used to choose Z. If I build my model as usual, the posterior on Z will not be equal to the prior because \mathcal{D} does provide evidence. The most naive way of doing this would be to sample from p(Z), compute p(\theta | \mathcal{D}, Z) and average them. This would require many runs of NUTS, and so I was wondering if there was a way to do this using PyMC3? I think this amounts to somehow removing Z from the computational graph, but I’m not sure how to do this. Alternatively: p(\theta | \mathcal{D}) = \int p(\theta, Z | \mathcal{D}) dZ = \int p(\theta| \mathcal{D}, Z) p(Z | \mathcal{D}) dZ. I set p(Z | \mathcal{D}) = p(Z), i.e., I want to discard the evidence that \mathcal{D} provides about Z. Therefore, p(\theta | \mathcal{D}) = \int p(\theta | \mathcal{D}, Z) p(Z) dZ, which can be computed by sampling Z from the prior, performing NUTS sampling for that value, and then averaging across a bunch of runs. I just want to do this with a single NUTS run. Thanks!

Perhaps this is trivial and not very useful, but if you run NUTS and come up with samples of both \theta and Z, then if you simply ignore the samples of Z, the resulting samples of \theta will be drawn from the marginal distribution p(\theta \vert D) and will not be specific to any single value of Z.
This marginal distribution will be affected by the choice of priors via the integration you listed, however, so using a N(0,10) prior will give differing results from N(0,1). Also, note that sampling Z, fixing its value in the model, and running NUTS for each of these sampled values should give you the same results as just running NUTS once. If you do the former, you’re mixing MCMC and vanilla Monte Carlo but you won’t be changing anything fundamental about the target distribution. Edit - not right.

Hi Chris, Thanks for the input. You are right that ignoring Z amounts to having samples from p(\theta | \mathcal{D}). However, this marginalises over the posterior of Z, rather than the prior over Z, so this isn’t the integral that I actually want to perform. I don’t believe that the sampling procedure I suggested and running MCMC should give the same results, because of the above issue.

Interesting. You’re right, but I’m not quite sure how to get what you want. For anyone else as confused as me, you can reproduce the desired result with the following code and see that the resulting distributions are different.

```python
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt

prior_draws = np.random.randn(100)
traces = []
z_observation = 2.0

# Scheme 1: fix Z (here, mu) at a prior draw, sample theta, repeat
for mu in prior_draws:
    with pm.Model() as individual_model:
        y = pm.Normal('y', mu=mu)
        z = pm.Normal('z', mu=y, observed=z_observation)
        traces.append(pm.sample(progressbar=False))

individual_trace = np.concatenate([trace['y'].ravel() for trace in traces])

# Scheme 2: ordinary joint inference over both variables
with pm.Model() as combined_model:
    x = pm.Normal('x')
    y = pm.Normal('y', mu=x)
    z = pm.Normal('z', mu=y, observed=z_observation)
    combined_trace = pm.sample(draws=20000)

plt.hist(individual_trace, bins=100, density=True, alpha=0.5)
plt.hist(combined_trace['y'], bins=100, density=True, alpha=0.5)
```
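To see concretely why the two marginals differ, here is a dependency-free sketch of the same Gaussian toy model (x plays the role of Z, y the role of \theta, and z = 2 is observed). Because everything is Gaussian here, the conditional posterior p(y | z, x) = N((x + z)/2, 1/2) and the ordinary posterior marginal p(y | z) = N(2z/3, 2/3) are available in closed form, so plain Monte Carlo stands in for NUTS; the specific numbers are properties of this toy setup only, not anything PyMC3-specific.

```python
import random
import statistics

random.seed(0)
z_obs = 2.0
n_draws = 200_000

# Scheme 1 (what the question asks for): draw Z = x from its PRIOR, then
# sample theta = y from the conditional posterior p(y | z, x) = N((x+z)/2, 1/2).
prior_fixed = []
for _ in range(n_draws):
    x = random.gauss(0.0, 1.0)                       # Z kept at its prior
    y = random.gauss((x + z_obs) / 2.0, 0.5 ** 0.5)  # theta | data, fixed Z
    prior_fixed.append(y)

# Scheme 2 (ordinary inference): marginalise over p(Z | data) instead.
# For this toy model that gives p(y | z) = N(2*z/3, 2/3) exactly.
ordinary_mean = 2.0 * z_obs / 3.0

print(statistics.mean(prior_fixed))  # close to 1.0
print(ordinary_mean)                 # 1.333..., a visibly different marginal
```

The two schemes land on different means (1 versus 4/3), which is exactly the gap between marginalising over the prior and over the posterior of Z.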
{}
# Problem: How many of the 10 atoms in the following cumulene are coplanar?
{}
# Comments in Dart Programming

Comments are annotations in source code that are ignored by the compiler. They are used in a scenario where you want to attach a note to your code or a section of code so that when you visit it later, you can recall it easily. Comment statements are ignored during the execution of the program. There are multiple types of comments in Dart; mainly these are −

• Single-line comments
• Multi-line comments
• Documentation comments

We will explore all the above comment types in this article.

## Single-line comments

Single-line comments make use of // (double forward slash). They extend up to the next new line character.

### Syntax

```dart
// this is a comment
```

### Example

```dart
void main(){
   // var x = 10;
   print("Hello World");
}
```

### Output

```
Hello World
```

## Multi-line comments

A multi-line comment makes use of /* */. Everything put between the opening /* and the closing */ will be ignored by the compiler.

### Syntax

```dart
/*
Inside comment
Also inside comment
*/
```

### Example

```dart
void main(){
   /*
   multi
   line
   comment
   */
   print("Hello World");
}
```

### Output

```
Hello World
```

## Documentation comments

A documentation comment starts with /// and is used in cases where we are generating reference documentation for a project or software package.

### Syntax

```dart
/// This
/// is
/// a
/// documentation
/// comment
```

Updated on 21-May-2021 12:25:35
{}
## Monthly Archives: July 2015

### Complex Numbers for you

If $iz^{3}+z^{2}-z+i=0$, then $|z|$ equals

(a) 4 (b) 3 (c) 2 (d) 1.

Solution. We can write the given equation as $z^{3}+\frac{1}{i}z^{2}-\frac{1}{i}z+1=0$, or $z^{3}-iz^{2}+iz-i^{2}=0$

$\Longrightarrow z^{2}(z-i)+i(z-i)=0$

$\Longrightarrow (z^{2}+i)(z-i)=0 \Longrightarrow z^{2}=-i$ or $z=i$

$\Longrightarrow |z|^{2}=|-i|$ or $|z|=|i|$

$\Longrightarrow |z|^{2}=1$ or $|z|=1$

$\Longrightarrow |z|=1$

More later, Nalin Pithwa

### River Crossing 1 — Farm Produce

Alcuin of Northumbria, aka Flaccus Albinus Alcuinus or Ealhwine, was a scholar, a clergyman and a poet. He lived in the eighth century and rose to be a leading figure at the court of the emperor Charlemagne. He included this puzzle in a letter to the emperor, as an example of ‘subtlety in Arithmetick for your enjoyment’. It still has mathematical significance, as I will eventually explain. It goes like this. A farmer is taking a wolf, a goat, and a basket of cabbages to market, and he comes to a river where there is a small boat. He can fit only one item out of the three into the boat with him at any time. He can’t leave the wolf with the goat, or the goat with the cabbages, for reasons that should be obvious. Fortunately, the wolf detests cabbages. How does the farmer transport all three items across the river? Have fun! Nalin Pithwa

### More complex stuff

Problem. If $z_{1}, z_{2}, \ldots , z_{n}$ lie on the circle $|z|=2$, then the value of $E=|z_{1}+z_{2}+\ldots+z_{n}|-4|\frac{1}{z_{1}}+\frac{1}{z_{2}}+\ldots+\frac{1}{z_{n}}|$ is

(a) 0 (b) n (c) -n (d) none of these.

Solution.
As $z_{1},z_{2},\ldots, z_{n}$ lie on the circle $|z|=2$,

$|z_{i}|=2 \Longrightarrow |z_{i}|^{2}=4 \Longrightarrow z_{i}\overline{z_{i}}=4$ for $i=1,2,3, \ldots, n$

Thus, $\frac{1}{z_{i}}=\frac{\overline{z_{i}}}{4}$ for $i=1, 2, 3, \ldots, n$

Hence, $E=|z_{1}+z_{2}+\ldots+z_{n}|-4|\frac{\overline{z_{1}}}{4}+\frac{\overline{z_{2}}}{4}+\ldots+\frac{\overline{z_{n}}}{4}|$, which in turn equals $|z_{1}+z_{2}+\ldots+z_{n}|-|\overline{z_{1}}+\overline{z_{2}}+\ldots+\overline{z_{n}}|$, that is, $|z_{1}+z_{2}+\ldots+z_{n}|-|\overline{z_{1}+z_{2}+\ldots+z_{n}}|=0$ (since $|z|=|\overline{z}|$).

More later, Nalin Pithwa

### Rabbits in the Hat

The Great Whodunni, a stage magician, placed his top hat on the table. ‘In this hat are two rabbits,’ he announced. ‘Each of them is either black or white, with equal probability. I am now going to convince you, with the aid of my lovely assistant Grumpelina, that I can deduce their colours without looking inside the hat!’ He turned to his assistant, and extracted a black rabbit from her costume. ‘Please place this rabbit in the hat.’ She did. Whodunni now turned to the audience. ‘Before Grumpelina added the third rabbit, there were four equally likely combinations of rabbits.’ He wrote a list on a small blackboard: BB, BW, WB and WW. ‘Each combination is equally likely — the probability is 1/4. But then I added a black rabbit. So the possibilities are BBB, BWB, BBW and BWW — again, each with probability 1/4. ‘Suppose — I won’t do it, this is hypothetical — suppose I were to pull a rabbit from the hat. What is the probability that it is black? If the rabbits are BBB, that probability is 1. If BWB or BBW, it is 2/3. If BWW, it is 1/3. So the overall probability of pulling out a black rabbit is $\frac{1}{4} \times 1 + \frac{1}{4} \times \frac{2}{3}+\frac{1}{4} \times \frac{2}{3}+\frac{1}{4} \times \frac{1}{3}$ which is exactly 2/3. ‘But.
If there are three rabbits in a hat, of which exactly r are black and the rest white, the probability of extracting a black rabbit is r/3. Therefore, $r=2$, so there are two black rabbits in the hat.’ He reached into the hat and pulled out a black rabbit. ‘Since I added this black rabbit, the original pair must have been one black and one white!’ The Great Whodunni bowed to tumultuous applause. Then, he pulled out two rabbits from the hat — one pale lilac and the other shocking pink. It seems evident that you can’t deduce the contents of a hat without finding out what’s inside. Adding the extra rabbit and then removing it again (was it the same black rabbit? Do we care?) is a clever piece of misdirection. But why is the calculation wrong? More later, Nalin Pithwa

### Shaggy Cat Story

No cat has eight tails. One cat has one tail. Adding: one cat has nine tails. 🙂 🙂 🙂 More later, Nalin Pithwa

### De Moivre’s Theorem application

Question: If $f_{r}(\alpha)=(\cos{\frac{\alpha}{r^{2}}}+i\sin{\frac{\alpha}{r^{2}}}) \times (\cos{\frac{2\alpha}{r^{2}}}+i\sin{\frac{2\alpha}{r^{2}}}) \ldots (\cos{\frac{\alpha}{r}}+i\sin{\frac{\alpha}{r}})$, then $\lim_{n \rightarrow \infty}f_{n}(\pi)$ equals

(a) -1 (b) 1 (c) -i (d) i

Solution. Using De Moivre’s theorem, $f_{r}(\alpha)=e^{i\frac{\alpha}{r^{2}}}e^{i\frac{2\alpha}{r^{2}}}\ldots e^{i\frac{\alpha}{r}}$, which in turn equals $e^{(i \frac{\alpha}{r^{2}})(1+2+\ldots+r)}=e^{(i\frac{\alpha}{r^{2}})(\frac{r(r+1)}{2})}=e^{i(\frac{\alpha}{2})(1+\frac{1}{r})}$

Hence, $\lim_{n \rightarrow \infty}f_{n}(\pi)=\lim_{n \rightarrow \infty}e^{(i)(\frac{\pi}{2})(1+\frac{1}{n})}=e^{i(\frac{\pi}{2})}=\cos{\frac{\pi}{2}}+i\sin{\frac{\pi}{2}}=i$.

More complex stuff to be continued in the next blog (pun intended) 🙂 Nalin Pithwa

### A complex equation

Find the number of solutions of the equation $z^{3}+\overline{z}=0$.

Solution. Given that $z^{3}+\overline{z}=0$. Hence, $z^{3}=-\overline{z}$.
Taking moduli, $|z|^{3} =|-\overline{z}| \Longrightarrow |z|^{3}=|z|$. Hence, we get $|z|(|z|-1)(|z|+1)=0 \Longrightarrow |z|=0$ or $|z|=1$ (since $|z|+1>0$). If $|z|=1$, we get $|z|^{2}=1 \Longrightarrow z.\overline{z}=1$. Thus, $z^{3}+\overline{z}=0 \Longrightarrow z^{3}+1/z=0$. Thus, $z^{4}+1=0 \Longrightarrow z^{4}=\cos{\pi}+i\sin{\pi}$, that is, $z=\cos{\frac{2k+1}{4}}\pi+i\sin{\frac{2k+1}{4}}\pi$ for $k=0,1,2,3$. Therefore, the given equation has five solutions.

### Shaggy Dog Story

Brave Sir Lunchalot was travelling through foreign parts. Suddenly, there was a flash of lightning and a deafening crack of thunder, and the rain started bucketing down. Fearing rust, he headed for the nearest shelter, Duke Ethelfred’s castle. He arrived to find the Duke’s wife, Lady Gingerbere, weeping piteously. Sir Lunchalot liked attractive young ladies, and for a brief moment he noticed a distinct glint through Gingerbere’s tears. Ethelfred was very old and frail, he observed… Only one thing, he vowed, would deter him from a secret tryst with the Lady — the one thing in all the world that he could not stand. Puns. Having greeted the Duke, Lunchalot enquired why Gingerbere was so sad. “It is my uncle Elpus,” she explained. “He died yesterday.” “Permit me to offer my sincerest condolences,” said Lunchalot. “That is not why I weep so… so piteously, sir knight,” replied Gingerbere. “My cousins Gord, Evan and Liddell are unable to fulfill the terms of uncle’s will.” “Why ever not?” “It seems that Lord Elpus invested the entire family fortune in a rare breed of giant riding-dogs. He owned 17 of them.” Lunchalot had never heard of a riding-dog, but he did not wish to display his ignorance in front of such a lithesome lady. But this fear, it appeared, could be set to rest, for she said, “Although I have heard much of these animals, I myself have never set eyes on one.” “They are no fit sight for a fair lady,” said Ethelfred firmly.
“And, the terms of the will —?” Lunchalot asked, to divert the direction of the conversation. “Ah, Lord Elpus left everything to the three sons. He decreed that Gord should receive half the dogs, Evan one third, and Liddell one ninth.” “Mmm. Could be messy.” “No dog is to be subdivided, good knight.” Lunchalot stiffened at the phrase good knight, but decided it had been uttered innocently and was not a pathetic attempt at humour. “Well —” Lunchalot began. “Pah, ’tis a puzzle as ancient as yonder hills!” said Ethelfred scathingly. “All you have to do is take one of your riding-dogs over to the castle. Then, there are 18 of the damn things!” “Yes, my husband, I understand the numerology, but —” “So, the first son gets half that, which is 9; the second gets one third, which is 6; the third son gets one ninth, which is 2. That makes 17 altogether, and our own dog can be taken back here!” “Yes, my husband, but we have no one here who is manly enough to ride such a dog.” Sir Lunchalot seized his opportunity. “Sire, I will ride your dog!” The look of admiration in Gingerbere’s eye showed him how shrewd his gallant gesture had been. “Very well,” said Ethelfred. “I will summon my houndsman and he will bring the animal to the courtyard, where we shall meet them.” They waited in an archway as the rain continued to fall. When the dog was led into the courtyard, Lunchalot’s jaw dropped so far that it was a good job he had his helmet on. The animal was twice the size of an elephant, with thick striped fur, claws like broadswords, blazing red eyes the size of Lunchalot’s shield, huge floppy ears dangling to the ground, and a tail like a pig’s — only with more twists and covered in sharp spines. Rain cascaded off its coat in waterfalls. The smell was indescribable. Perched improbably on its back was a saddle. Gingerbere seemed even more shocked than he by the sight of this terrible monstrosity. However, Sir Lunchalot was undaunted. Nothing could daunt his confidence.
Nothing could prevent a secret tryst with the lady, once he returned astride the giant hound, the will executed in full. Except… Well, as it happened, Sir Lunchalot did not ride the monstrous dog to Lord Elpus’s castle, and for all he knows the will has still not been executed. Instead, he leaped on his horse and rode off angrily into the stormy darkness, mortally offended, leaving Gingerbere to suffer the pangs of unrequited lust. It wasn’t Ethelfred’s dodgy arithmetic — it was what the Lady had said to her husband in a stage whisper. What did she say? 🙂 🙂 🙂 More later, Nalin Pithwa

### More trigonometry practice

Problem. Let ABC be a triangle and $h_{a}$ be the altitude through A. Prove that $(b+c)^{2} \geq a^{2}+4(h_{a})^{2}$. (As usual, a, b, c denote the sides BC, CA, AB respectively.)

Proof. The given inequality is equivalent to $(b+c)^{2}-a^{2} \geq 4(h_{a})^{2}=\frac{16\triangle^{2}}{a^{2}}$, where $\triangle$ is the area of triangle ABC. Using the identity $16\triangle^{2}=[(b+c)^{2}-a^{2}][a^{2}-(b-c)^{2}]$, we see that the inequality to be proved is $a^{2}-(b-c)^{2} \leq a^{2}$ (here we use $a<b+c$), which is true. Observe that equality holds iff $b=c$. QED. More later, Nalin Pithwa

### A cute problem in Trigonometry or pure plane geometry

Problem. In a triangle ABC, $\angle A$ is twice $\angle B$. Show that $a^{2}=b(b+c)$. (In fact, the converse is also true. Prove it!)

Proof. Method I. You can use plane geometry also. This is left to you as an exercise. Method II. You can use trigonometry also. We may use the sine rule for a triangle to dispose of both the implications simultaneously.

$A=2B \Longleftrightarrow A-B=B \Longleftrightarrow \sin{(A-B)}=\sin{B} \Longleftrightarrow \sin{(A-B)}\sin{(A+B)}=\sin{B}\sin{C}$ (since $\sin{(A+B)}=\sin{C}$) $\Longleftrightarrow \sin^{2}{A}-\sin^{2}{B}=\sin{B}\sin{C} \Longleftrightarrow (2R\sin{A})^{2}-(2R\sin{B})^{2}=(2R\sin{B})(2R\sin{C}) \Longleftrightarrow a^{2}-b^{2}=bc \Longleftrightarrow a^{2}=b(b+c)$
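The biconditional above can be spot-checked numerically. The sketch below picks an arbitrary angle $B$ (an assumption for illustration; any $B$ with $3B < \pi$ works), sets $A = 2B$, builds the sides from the sine rule with circumdiameter $2R = 1$, and confirms $a^{2} = b(b+c)$ to floating-point accuracy.

```python
import math

B = math.radians(40.0)   # arbitrary choice of angle B
A = 2 * B                # the hypothesis A = 2B
C = math.pi - A - B      # angles of a triangle sum to pi

# sine rule: a = 2R sin A, etc.; take 2R = 1
a, b, c = math.sin(A), math.sin(B), math.sin(C)

print(abs(a**2 - b * (b + c)) < 1e-12)  # True
```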
{}
# Solving a particular system of differential equations The problem I'm trying to solve is this: $X'(t) \in \mathbb{R}^3 \,, \, \omega = (\omega_1,\omega_2,\omega_3)$ Find the general solution for $$X'(t) = \omega \times X(t)$$ After doing the cross product and rearranging a bit I got to $$\begin{pmatrix} x_1' (t) \\ x_2' (t) \\ x_3' (t) \end{pmatrix} = \begin{pmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 &-\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{pmatrix} \begin{pmatrix} x_1 (t) \\ x_2 (t) \\ x_3 (t) \end{pmatrix}$$ Then I looked for the eigenvalues of the matrix, which are 0, $\sqrt{-\omega_1 ^2 -\omega_2 ^2 -\omega_3 ^2}$ and $-\sqrt{-\omega_1 ^2 -\omega_2 ^2 -\omega_3 ^2}$. Once I have the eigenvectors, I know how to proceed to find the general solution, but finding the eigenvectors of the second and third eigenvalues lead me to really weird looking stuff, which makes me think that there's another way of solving this. I've been thinking about this all day, and I can't think of anything else, any help would be greatly appreciated. • One thing which often helps is to write your initial $\vec{\omega}$ in spherical coordinates; that should allow some trig identities when solving for the eigenvectors. Another thing would be to rotate your system so that the $z$-axis is parallel to $\vec{\omega}.$ – Semiclassical Jul 29 '14 at 20:05 • Another thing you can notice is that the triple product identity implies $$\omega\cdot \dot{X}(t)=\omega\cdot (\omega\times X)=X\cdot(\omega\times\omega)=0.$$ So the velocity vector is always perpendicular to $\vec{\omega}$. – Semiclassical Jul 29 '14 at 20:19 • Cylindrical coordinates with the $z$ axis in the $\omega$ direction are better in this case. 
– Robert Israel Jul 29 '14 at 21:17 • I've been trying to work it out, but if I understood correctly what you said, replacing $\omega$ with $(r\sin(\theta), r\cos(\theta),z)$ (since the equality holds for some $\theta$, r) didn't really do much to simplify the system and let me find the eigenvectors. Maybe I didn't understand correctly what you meant by using spherical / cylindrical coordinates. – John Williams Jul 31 '14 at 16:24 Let $A$ and $X(t)$ be the two matrices $$A = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 &-\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix} \quad\text{ and }\quad X(t) = \begin{bmatrix} x_1 (t) \\ x_2 (t) \\ x_3 (t) \end{bmatrix}$$ and $\omega = \sqrt{\omega_1^2 + \omega_2^2 + \omega_3^2} \ge 0$. When $A$ is constant, the ODE for $X(t)$ $$\frac{d}{dt}X(t) = A X(t)$$ has a solution of the form $$X(t) = e^{tA} X(0)\quad\text{ where }\quad e^{tA} = \sum_{k=0}^\infty \frac{t^k}{k!} A^k\tag{*1}$$ When $A$ is a skew symmetric matrix, it satisfies an interesting identity: $$A^3 = -\omega^2 A$$ A consequence of this is that for any even function $f(z)$ whose power series expansion converges rapidly enough, you will find $$f(A)A = f(i\omega)A\quad\text{ and }\quad f(A)A^2 = f(i\omega)A^2$$ Applying this to $e^{tA}$, one can convert the series into a quadratic polynomial in $A$! \begin{align} e^{tA} &= I_n + \left(\sum_{k=0}^\infty\frac{t^{2k+1}}{(2k+1)!}(A^2)^k\right)A + \left(\sum_{k=0}^\infty\frac{t^{2k+2}}{(2k+2)!}(A^2)^k\right)A^2\\ &= I_n + \left(\sum_{k=0}^\infty\frac{t^{2k+1}}{(2k+1)!}(i\omega)^{2k}\right)A + \left(\sum_{k=0}^\infty\frac{t^{2k+2}}{(2k+2)!}(i\omega)^{2k}\right)A^2\\ &= I_n + \frac{\sin(t\omega)}{\omega} A + \frac{1-\cos(t\omega)}{\omega^2} A^2 \end{align}\tag{*2} Please note that during practical calculation, there is no need to manipulate power series explicitly like this.
Instead, one can formally manipulate $(*2)$ as follows $$e^{tA} = I_n + \frac{\sinh(tA)}{A} A + \frac{\cosh(tA) - I_n}{A^2} A^2 = I_n + \frac{\sin(t\omega)}{\omega} A + \frac{1-\cos(t\omega)}{\omega^2} A^2$$ Please keep in mind that in the above manipulation, an expression like $\displaystyle\;\frac{\sinh(tA)}{A}\;$ doesn't mean compute $\sinh(tA)$ by a power series expansion and then divide it by $A$. Instead, it means evaluate the power series associated with $\displaystyle\;\frac{\sinh(tz)}{z}$ "at" $z = A$. Finally, let us rephrase the solution $(*1)$ in terms of vectors. Let $\hat{e}_i, i = 1,2,3$ be the canonical basis of $\mathbb{R}^3$ as a vector space. Let $\vec{\omega}$ and $\vec{X}(t)$ be the two vectors $$\vec{\omega} = \omega_1 \hat{e}_1 + \omega_2 \hat{e}_2 + \omega_3 \hat{e}_3 \quad\text{ and }\quad \vec{X}(t) = x_1(t) \hat{e}_1 + x_2(t) \hat{e}_2 + x_3(t) \hat{e}_3$$ Let $\hat{\omega}$ be the unit vector $\displaystyle\;\frac{\vec{\omega}}{\omega}\;$. As mentioned in the question, application of $A$ to the column vector $X(t)$ is equivalent to a cross product between $\vec{\omega}$ and $\vec{X}(t)$: $$A X(t)\quad\leftrightarrow\quad \vec{\omega} \times \vec{X}(t)$$ If we substitute $(*2)$ into $(*1)$ and employ this type of correspondence between matrices and vectors.
We will obtain $$\vec{X}(t) = \vec{X}(0) + \sin(t\omega)\big( \hat{\omega} \times \vec{X}(0) \big) + (1 - \cos(t\omega)) \big( \hat{\omega} \times ( \hat{\omega} \times \vec{X}(0)) \big)$$ If we split $\vec{X}(0)$ into two orthogonal vectors, one parallel and another perpendicular to $\hat{\omega}$: $$\vec{X}(0) = \vec{X}_{\parallel}(0) + \vec{X}_{\perp}(0) \quad\text{ where }\quad \begin{cases} \vec{X}_{\parallel}(0) &= ( \vec{X}(0) \cdot \hat{\omega} ) \hat{\omega}\\ \vec{X}_{\perp}(0) &= -\hat{\omega} \times ( \hat{\omega} \times \vec{X}(0) ) \end{cases}$$ we can re-express $\vec{X}(t)$ in a much more informative form: $$\vec{X}(t) = \underbrace{\vec{X}_{\parallel}(0)}_{\vec{X}_{\parallel}(t)} + \underbrace{\cos(t\omega)\vec{X}_{\perp}(0) + \sin(t\omega)\big(\hat{\omega} \times \vec{X}_{\perp}(0) \big)}_{\vec{X}_{\perp}(t)}$$ As one can see, the parallel part of $\vec{X}(t)$ remains constant while the perpendicular part of $\vec{X}(t)$ rotates around the axis in the direction of $\hat{\omega}$ in a circle. • More generally, the characteristic polynomial $P(t)$ of an $n \times n$ skew symmetric matrix has only terms in odd powers of $t$ if $n$ is odd, and only even powers of $t$ if $n$ is even. – Robert Israel Jul 29 '14 at 21:07 • Dividing by $A$, or using $A^{-1}$, as in the equation $e^{tA} = I_n + \frac{\sinh(tA)}{A}A + \ldots$ is problematic when $A$ is a $3 \times 3$ (or $n \times n$ for $n$ odd) matrix, since such matrices always have $0$ as an eigenvalue and hence are singular! – Robert Lewis Jul 30 '14 at 4:30 • @RobertLewis $\frac{\sinh(tA)}{A}$ here doesn't mean compute $\sinh(tA)$ by a power series expansion and then divide the result by $A$. Instead, it refers to a function whose power series expansion has the form $\frac{\sinh(tz)}{z}$, and you evaluate the corresponding power series expansion in $A$. – achille hui Jul 30 '14 at 5:24 • OK. OK. OK. Fifteen characters.
– Robert Lewis Jul 30 '14 at 6:18 Recall that $\omega \times X(t)$ is orthogonal to both $\omega$ and $X(t)$. Therefore, $X'(t)$ is orthogonal to $X(t)$ and to $\omega$. In particular, $$\frac{d}{dt}(X(t)\cdot X(t)) = 2X'(t)\cdot X(t) = 0.$$ That means that $|X(t)|$ is constant in $t$. Similarly $\frac{d}{dt}(X(t)\cdot\omega)=0$, which keeps $X(t)\cdot\omega$ constant in time. So the motion of $X(t)$ remains in the plane $$X(t)\cdot\omega = X(0)\cdot\omega,$$ and the motion is in a circle because $|X(t)|=|X(0)|$ for all $t$. To be concrete about the representation, subtract from $X(0)$ its projection onto the line through the origin with direction vector $\omega$; what remains is the component of $X(0)$ perpendicular to $\omega$: $$P=X(0)-\frac{X(0)\cdot \omega}{\omega\cdot\omega}\omega.$$ The derivative $X'(t)=\omega\times X(t)$ means that $X(t)$ rotates in a circular direction in the plane perpendicular to $\omega$; the direction is that of your fingers (according to the right-hand rule) when your thumb points along $\omega$. That's why it's best to choose the second orthogonal vector in the plane with normal $\omega$ to be $$Q = \frac{1}{|\omega|}\omega \times P.$$ Then $|Q|=|P|$ and $$X(t)= \frac{X(0)\cdot\omega}{\omega\cdot\omega}\omega+\cos(\alpha t)P+\sin(\alpha t)Q$$ for some constant $\alpha$. The constant $\alpha$ is positive and it is determined by the length of the derivative vector: $$|X'(0)|=\alpha|Q| \mbox{ and } |X'(0)| = |\omega\times X(0)|=|\omega\times P|= |\omega||Q|.$$ Now check the above solution where $\alpha=|\omega|$: \begin{align} X'(t) & = |\omega|(-\sin(|\omega|t)P+\cos(|\omega|t)Q),\\ \omega \times X(t) & = \omega\times\left[ \frac{X(0)\cdot\omega}{\omega\cdot\omega}\omega+\cos(|\omega| t)P+\sin(|\omega| t)Q \right] \\ & = |\omega|\left(\frac{1}{|\omega|}\omega\right)\times\left\{\cos(|\omega|t)P+\sin(|\omega|t)Q\right\} \\ & = |\omega|\left\{\cos(|\omega|t)Q-\sin(|\omega|t)P\right\} \end{align} So the equation is satisfied. Plus, by design, $X(0)$ is correct.
• This is probably a stupid question, what do you mean by $X(t) \cdot X(t)$? I assume the dot means inner product but I haven't yet defined it, nor proved that it works well with differentiation. It seems like this problem is way more complex than I expected it to be. Thanks a lot for your help. – John Williams Jul 31 '14 at 16:41 • Hi, I used $X'(t)$ because it's easier to type than putting the dot above the symbol. The prime notation indicates derivative in general, whereas the dot is usually restricted to a time derivative. When I write $A\cdot B$, I do mean the dot product. You can easily write this out in component form, differentiate and see that you get $\frac{d}{dt}(X\cdot X)=\frac{dX}{dt}\cdot X + X\cdot\frac{dX}{dt}$ where $\frac{dX}{dt}$ is the vector obtained from $X$ by differentiating each of the components. So, most of these operators you can work out fairly easily, though it may take a little thought. – DisintegratingByParts Jul 31 '14 at 17:16
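As a numerical sanity check of the closed form $e^{tA} = I + \frac{\sin(t\omega)}{\omega}A + \frac{1-\cos(t\omega)}{\omega^{2}}A^{2}$ derived in the first answer, the sketch below compares it against a truncated power series for $e^{tA}$, using plain Python lists so nothing beyond the standard library is needed; the particular $\omega$ and $t$ are arbitrary choices for illustration.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def mscale(c, A):
    return [[c * A[i][j] for j in range(3)] for i in range(3)]

def identity():
    return [[float(i == j) for j in range(3)] for i in range(3)]

def expm_series(M, terms=40):
    # truncated power series sum_k M^k / k!
    out, term = identity(), identity()
    for k in range(1, terms):
        term = mscale(1.0 / k, matmul(term, M))
        out = madd(out, term)
    return out

w1, w2, w3 = 1.0, 2.0, 2.0   # arbitrary omega vector
A = [[0.0, -w3, w2], [w3, 0.0, -w1], [-w2, w1, 0.0]]   # skew-symmetric cross-product matrix
omega = math.sqrt(w1**2 + w2**2 + w3**2)
t = 0.7

closed = madd(identity(),
              madd(mscale(math.sin(t * omega) / omega, A),
                   mscale((1.0 - math.cos(t * omega)) / omega**2, matmul(A, A))))
series = expm_series(mscale(t, A))
err = max(abs(closed[i][j] - series[i][j]) for i in range(3) for j in range(3))
print(err < 1e-10)  # True: the quadratic-in-A formula matches e^{tA}
```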
{}
# Topology Seminar

Speaker: Peter Feller (ETH-Zurich)

Topic: Braids, quasimorphisms, and slice-Bennequin inequalities.

Abstract: The writhe of a braid ($=$ # pos crossings $-$ # neg crossings) and the fractional Dehn twist coefficient of a braid (a rational number that measures "how much the braid twists") are the two most prominent examples of what is known as a quasimorphism (a map that fails to be a group homomorphism by at most a bounded amount) from Artin's braid group on $n$-strands to the reals. We consider characterizing properties for such quasimorphisms and talk about relations to the study of knot concordance. For the latter, we consider inequalities for quasimorphisms modelled after the so-called slice-Bennequin inequality: $\operatorname{writhe}(B) \leq 2g_4(K) - 1 + n$ for all $n$-stranded braids $B$ with closure a knot $K$. Based on work in progress.

Event Date: February 16, 2021 - 2:30pm to 3:30pm
Location: Online
Calendar Category: Seminar
Seminar Category: Geometry and Topology
{}
Q: What is the other name of a Continuous Time Unit Impulse Function?

A) Dirac delta function
B) Unit function
C) Area function
D) Direct delta function
{}
# How to Interpret and Calculate the Indefinite Integral

Integrals are mainly split into two categories: definite and indefinite integrals. The indefinite integral is the same as the antiderivative. A challenge in working with indefinite integrals is that you find the expression only up to an unknown constant term. The symbol $C$ added to the end of an integral represents this unknown constant of integration. To determine $C$ you need an extra constraint, such as the value of the antiderivative at one point; substitution and equation solving then give $C$.

Theory

### The Indefinite Integral

$$\int f(x)\,dx = F(x) + C, \qquad F'(x) = f(x)$$

Here, $f(x)$ is called the integrand and $C$ is called the constant of integration.

Example 1

Compute the integral $\int \ln x + e^x + x^3 \, dx$

$$\int \ln x + e^x + x^3 \, dx = x\ln x - x + e^x + \tfrac{1}{4}x^4 + C$$

Example 2

Compute the integral $\int 3\cos(3x) - 4\sin(2x)\, dx$

$$\begin{aligned}
\int 3\cos(3x) - 4\sin(2x)\, dx &= 3\cdot\tfrac{1}{3}\sin(3x) + 4\cdot\tfrac{1}{2}\cos(2x) + C \\
&= \sin(3x) + 2\cos(2x) + C
\end{aligned}$$

Example 3

Compute the integral $\int \sin(2x) + 3\cos(x) - e^{5x}\, dx$

$$\int \sin(2x) + 3\cos(x) - e^{5x}\, dx = -\tfrac{1}{2}\cos(2x) + 3\sin(x) - \tfrac{1}{5}e^{5x} + C$$
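The introduction notes that $C$ is pinned down only by an extra constraint. As a small worked case (this example is mine, not from the original): suppose $F(x)=\int 2x\,dx$ and we require $F(1)=5$. Then

$$F(x) = x^2 + C, \qquad F(1) = 1 + C = 5 \quad\Rightarrow\quad C = 4,$$

so the particular antiderivative is $F(x) = x^2 + 4$.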
{}
Periodogram classes

Defines the Periodogram class and associated tools.

Classes

Periodogram(frequency, power[, nyquist, …]) — Class to represent a power spectrum, i.e.
{}
# Fisher’s test

By Data Tricks, 28 July 2020

### What is Fisher’s test?

Fisher’s test is a good alternative to a chi-square test of independence when the expected value in any one of the cells in a contingency table is less than 5.

One of the main differences between a chi-square test of independence and Fisher’s test is that the p-value in a chi-square test is an approximation which tends towards the exact value as the sample size goes towards infinity. In Fisher’s test the p-value is exact, not an approximation, which is why it is sometimes called Fisher’s exact test.

### Example in R

Let’s create some nominal data:

```r
set.seed(150)
data <- data.frame(sampleA = sample(c("Positive","Positive","Negative"), 30, replace = TRUE),
                   sampleB = sample(c("Positive","Positive","Negative"), 30, replace = TRUE))
frequencies <- table(data$sampleA, data$sampleB)
```

Looking at the contingency table, we have one cell with a count less than 5:

```
           Negative Positive
  Negative        3        9
  Positive        9        9
```

Perform Fisher’s test using the `fisher.test` function:

```r
test <- fisher.test(x = data$sampleA, y = data$sampleB)
```

Analyse the result:

```
> test

	Fisher's Exact Test for Count Data

data:  data$sampleA and data$sampleB
p-value = 0.2599
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 0.04497001 2.03461029
sample estimates:
odds ratio 
 0.3458827 
```

#### p-value

The p-value is 0.26, which is above the 5% significance level, and therefore the null hypothesis cannot be rejected.

## Is Fisher’s the right test?

Use our interactive tool to help you choose the right statistical test or read our article on how to choose the right statistical test.
{}
## Fourier series - Fourier series formulas

### Fourier series

A Fourier series is an expansion of a periodic function f(x) as an infinite sum of sines and cosines. It decomposes any periodic function or periodic signal into the sum of a set of simple oscillating functions, namely sines and cosines.

### Fourier series formula

$$f(x) = \frac 12 a_0 + \sum_{n=1}^\infty a_n \cos nx + \sum_{n=1}^\infty b_n \sin nx$$

where

$$a_0= \frac {1}{\pi} \int\limits_{-\pi}^{\pi} f(x)\,dx$$

$$a_n= \frac {1}{\pi} \int\limits_{-\pi}^{\pi} f(x) \cos nx \ dx$$

$$b_n= \frac {1}{\pi} \int\limits_{-\pi}^{\pi} f(x) \sin nx \ dx$$

### Example:

Expand the function f(x) = e^(kx) in the interval [−π, π] using a Fourier series.

### Solution:

$$f(x) = \frac 12 a_0 + \sum_{n=1}^\infty a_n \cos nx + \sum_{n=1}^\infty b_n \sin nx$$

Here,

$$a_0= \frac {1}{\pi} \int\limits_{-\pi}^{\pi} e^{kx} dx = \frac {2}{k\pi} \sinh k\pi$$

$$a_n= \frac {1}{\pi} \int\limits_{-\pi}^{\pi} e^{kx} \cos nx \ dx = \frac {1}{\pi (k^2+n^2)} \Big[ e^{kx} \big( k \cos nx + n \sin nx \big) \Big]_{-\pi}^{\pi}$$

$$a_n= \frac {k \cos n\pi}{\pi (k^2+n^2)} \left( e^{k\pi} - e^{-k\pi} \right) = \frac{2k (-1)^n \sinh k\pi}{\pi (k^2+n^2)}$$

$$b_n= \frac {1}{\pi} \int\limits_{-\pi}^{\pi} e^{kx} \sin nx \ dx = \frac {1}{\pi (k^2+n^2)} \Big[ e^{kx} \big( k \sin nx - n \cos nx \big) \Big]_{-\pi}^{\pi}$$

$$b_n= -\frac {n \cos n\pi}{\pi (k^2+n^2)} \left( e^{k\pi} - e^{-k\pi} \right) = -\frac{2n (-1)^n \sinh k\pi}{\pi (k^2+n^2)}$$

Putting all the above values into the equation, we get the expansion of the function:

$$f(x) = e^{kx} = \frac {2 \sinh k\pi}{\pi} \left\{ \frac {1}{2k} - k \left[ \frac {\cos x}{k^2+1^2}- \frac {\cos 2x}{k^2+2^2} + \frac {\cos 3x}{k^2+3^2}- \cdots \right]+ \left[ \frac {\sin x}{k^2+1^2}- \frac {2\sin 2x}{k^2+2^2} + \frac {3\sin 3x}{k^2+3^2}- \cdots \right] \right\}$$
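The expansion above can be sanity-checked numerically. A minimal sketch (the function name and parameters are mine, not from the original): sum the series term by term, using the general term $(-1)^n (k\cos nx - n\sin nx)/(k^2+n^2)$ inside the common factor $2\sinh(k\pi)/\pi$, and compare with $e^{kx}$ at an interior point.

```python
import math

def fourier_exp_partial(k, x, terms):
    """Partial Fourier sum for f(x) = e^(kx) on [-pi, pi],
    using the coefficients a_n, b_n derived in the text."""
    s = 1.0 / (2.0 * k)  # the a_0/2 contribution, inside the common factor
    for n in range(1, terms + 1):
        sign = (-1) ** n
        s += sign * (k * math.cos(n * x) - n * math.sin(n * x)) / (k**2 + n**2)
    return (2.0 * math.sinh(k * math.pi) / math.pi) * s

# convergence is slow (coefficients decay like 1/n), so take many terms
approx = fourier_exp_partial(k=1.0, x=0.5, terms=20000)
print(approx, math.exp(0.5))  # should agree closely at an interior point
```

Agreement improves as `terms` grows; near the endpoints ±π the series converges to the average of the jump, so the check is done at an interior point.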
{}
SERVING THE QUANTITATIVE FINANCE COMMUNITY

Cuchulainn
Posts: 59434
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

### Re: exp(5) = $e^5$

What I'm really saying I suppose: I did not come down in the last shower.

ISayMoo
Posts: 1898
Joined: September 30th, 2015, 8:30 pm

### Re: exp(5) = $e^5$

I think that you won't get a response unless you challenge them publicly.

> What I'm really saying I suppose: I did not come down in the last shower.

I hope that's not an answer you'd give in a live seminar

Cuchulainn

### Re: exp(5) = $e^5$

> > What I'm really saying I suppose: I did not come down in the last shower.
>
> I hope that's not an answer you'd give in a live seminar

Never had a reason to. In speakers, honesty is the best policy. Don't dig yourself into a hole.

ISayMoo

### Re: exp(5) = $e^5$

Sure. What does it have to do with the question whether your answer was relevant? The paper was discussing properties of minimas, not optimisation.

Cuchulainn

### Re: exp(5) = $e^5$

> The paper was discussing properties of minimas, not optimisation.

Not at all; it's about DL-PDE (see the DL-PDE thread). Besides, what is your Venn diagram for the above? I fear my questions will remain unanswered..

Cuchulainn

### Re: exp(5) = $e^5$

Prove that $e$ is the number that maximises $\sqrt[x]{x}$ for $x > 0$.

ppauper
Posts: 70239
Joined: November 15th, 2001, 1:29 pm

### Re: exp(5) = $e^5$

> Prove that $e$ is the number that maximises $\sqrt[x]{x}$ for $x > 0$.

$x^{1/x}=\exp\left[\frac{\log x}{x}\right]$, so maximize $\frac{\log x}{x}$

Cuchulainn

### Re: exp(5) = $e^5$

> $x^{1/x}=\exp\left[\frac{\log x}{x}\right]$, so maximize $\frac{\log x}{x}$

And if we continue with $\log x/x < 1$ we conclude $\sqrt[x]{x} < e$ for any $x \geq 0$. Compare with Steiner's proof in Crelle's journal (overkill?) http://www2.washjeff.edu/users/mwolterm ... rie/89.pdf

BTW $\pi(x) = x/\log x$ is the (conjectured) number of primes that do not exceed x. Collector, are you there?

// The Gozilla number cruncher minimises $-\sqrt[x]{x}$ as 2.7182818..

ppauper

### Re: exp(5) = $e^5$

> Compare with Steiner's proof in Crelle's journal (overkill?) http://www2.washjeff.edu/users/mwolterm ... rie/89.pdf

he does seem to have made a mountain out of a molehill.

Cuchulainn

### Re: exp(5) = $e^5$

The only solution left is to solve this problem using (a hand-crafted?) ML algorithm? Can I just look at the diagram and give the answer?

$e^x$ is difficult with standard GD (bad answers) but $e^{-x}$ is better. But we see how the learning rate is important. (using sigmoid here)

Cuchulainn

### Re: exp(5) = $e^5$

cntd

ExSan
Posts: 4547
Joined: April 12th, 2003, 10:40 am

### Re: exp(5) = $e^5$

π⁴ + π⁵ ≈ e⁶ to seven significant figures. Ref Analysis Fact @AnalysisFact

Cuchulainn

### Re: exp(5) = $e^5$

> π⁴ + π⁵ ≈ e⁶ to seven significant figures. Ref Analysis Fact @AnalysisFact

"Das ist nicht nur nicht richtig; es ist nicht einmal falsch!" ("That is not only not right; it is not even wrong!")

ppauper

### Re: exp(5) = $e^5$

> "Das ist nicht nur nicht richtig; es ist nicht einmal falsch!"

it's true but not necessarily useful. You want $e^{5}$ and this is a formula for $e^{6}$. Computing $\pi^4$ and $\pi^5$ may be difficult as well. On my machine $\pi^4+\pi^5 = 403.4287761$ and $e^6 = 403.4287935$

Cuchulainn

### Re: exp(5) = $e^5$

Compute $e^{\pi}$ to 2 decimal places with pencil and paper. (Gelfond's constant)

A follow-on from ExSan's post and ansatz (big conjecture but in the right direction) is whether $\pi$ and $e$ are algebraically independent? i.e. is there a polynomial relation $a_{n}\pi^{n} + a_{n-1}\pi^{n-1} + \cdots + a_{0}\pi^{0} = e^5$ or $a_{n}e^{n} + a_{n-1}e^{n-1} + \cdots + a_{0}e^{0} = \pi$ where $a_{j}, j = 0,\dots,n$ are algebraic numbers?
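The thread's numeric aside ("The Gozilla number cruncher minimises $-\sqrt[x]{x}$ as 2.7182818..") is easy to reproduce. A minimal sketch, not part of the thread (plain grid refinement; the function name and bracket are my choices): locate the maximiser of $x^{1/x}$ and check it lands on $e$.

```python
import math

def argmax_x_root_x(lo=1.0, hi=5.0, iters=60):
    """Locate the maximiser of f(x) = x**(1/x) by repeatedly
    refining a bracket around the best point of a coarse grid."""
    f = lambda x: x ** (1.0 / x)
    for _ in range(iters):
        xs = [lo + (hi - lo) * i / 10 for i in range(11)]
        best = max(xs, key=f)
        step = (hi - lo) / 10
        lo, hi = best - step, best + step  # shrink the bracket around the best point
    return best

x_star = argmax_x_root_x()
print(x_star)  # close to e = 2.718281828...
```

Since $x^{1/x} = \exp(\log x / x)$ is unimodal on $(1, \infty)$ with its maximum at $x = e$ (as ppauper's reduction shows), the bracket always contains the maximiser and the refinement converges.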
{}
About the existence of tamely ramified extensions

I'm trying to understand the proof of the existence of tamely ramified extensions. For this, the theorem from my book says:

Let $$K$$ be a complete field with respect to a discrete valuation, and let $$\Omega/K$$ be a totally ramified extension of degree $$n=e(\Omega/K)$$. Let the characteristic of the residue field $$k$$ be $$p>0$$ and suppose that $$n=n_{0}p^{l}$$ with $$(n_{0},p)=1$$. Then there exists a unique extension $$V$$ of $$K$$ with $$K\subset V\subset \Omega$$ such that $$[V:K]=n_{0}$$. Moreover, $$V=K(\sqrt[n_{0}]\pi)$$ where $$\pi$$ is an element of $$K$$ such that $$\mathfrak{p}_{K}=\pi\mathcal{O}_{K}$$.

I understood the proof except for one fact, which might be obvious, but I can't see why it happens. I'll write how the proof begins:

Since $$\Omega/K$$ is totally ramified, $$\omega=k$$ (the residue field of $$\Omega$$, which is $$\omega$$, coincides with the residue field of $$K$$), and if $$G_{K}=\langle |\pi|\rangle, G_{\Omega}=\langle |\Pi|\rangle$$ then $$\Omega=K(\Pi)$$ and $$\mathcal{O}_{\Omega}=\mathcal{O}_{K}+\mathcal{O}_{K}\Pi+\cdots+\mathcal{O}_{K}\Pi^{e-1},$$ with $$e=n=n_{0}p^{l}$$. (All these facts were established in previous theorems.) It follows that $$\Pi^{n_{0}p^{l}}=\pi U$$ with $$U\in\mathcal{O}_{\Omega}$$ satisfying $$|U|=1$$. (This fact was proved in a previous theorem.)

Now here comes the part which I don't understand:

Since $$\omega=k$$ we may write $$U=uZ$$ where $$u\in K$$ satisfies $$|u|=1$$ and $$Z\in\mathcal{O}_{\Omega}$$ satisfies $$|Z-1|<1$$.

In an attempt to understand this I have the following: think of $$\overline{U}$$ in $$k$$; then $$\overline{U}=\overline{u}$$ with $$u\in\mathcal{O}_{K}$$. Also $$\overline{u}=\overline{u}\cdot\overline{1}$$; now we view this equality in $$\omega$$, and if we lift both sides we have the desired result. Is my last argument correct? Any hint for obtaining the result? I think this is the easiest fact in the proof, but it is the only one which I couldn't understand.

As a remark, $$G_{K}$$ is the value group of $$|-|$$. Thanks

Yes, you're right. Since $$K$$ and $$\Omega$$ have the same residue field, thinking of $$\bar{U}$$ in $$k$$, we have $$\bar{U} = \bar{u}$$ for some $$u\in \mathcal{O}_K$$. Let $$Z = U/u \in \mathcal{O}_\Omega$$; then $$\bar{Z} = \bar{1}$$, hence $$|Z-1|<1$$.
{}
## anonymous 4 years ago

Y= -16x^2+190+0. A= B= C= graph when done.

1. anonymous: I think you have to do the quadratic formula
2. anonymous: umm i forgot what that is.... the b= something right?
3. anonymous: in a quadratic equation, the term with x^2 is A, the term with x is B, the constant term is C
4. anonymous: yes, -b + or - sqrt(b^2-4ac), all over 2a
5. anonymous: okay thx
6. anonymous: in this case you can solve for your variables a b c
7. anonymous: $b = -b \pm \frac{ \sqrt{b^2 - 4ac} }{ 2 }$
8. anonymous: jazy its 2a
9. anonymous: @jazy
10. anonymous: I thought I put that. But yes, over 2a. (:
11. anonymous: $b = -b \pm \frac{\sqrt{b^2 - 4ac}} { 2a }$
12. anonymous: uughh this is sooo harrddd
13. anonymous: ax^2 + bx + c; -16x^2 + 190 + 0; what is a, b, and c?
14. anonymous: -16 190 0
15. anonymous: oh. i got -380
16. anonymous: how do you get -380
17. anonymous: you have the steps down, i think you know what you are doing
18. hartnn: also 'x' term is missing..
19. anonymous: oh snap i didnt do the bottom hold on
20. anonymous: so... id plug in the answer i got from the eqation as x?
21. anonymous: i got 0.95
22. anonymous: it right @jazy
23. anonymous: I think you had it right up to the -380. You have to divide by 2(-16) or -32. What is -380 / -32?
24. anonymous: you have it
25. anonymous: yea thats when i got 11.87
26. anonymous: then i pluged it in to the equation
27. anonymous: right(: what was the 0.95 for though?
28. anonymous: y= -16(11.87^2)............. the 11.87 is substituted for the x
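The arithmetic in the thread (taking a = -16, b = 190, c = 0, i.e. reading the middle term as 190x, as the participants do once hartnn points out the missing x) can be checked with a short script; this sketch is mine, not part of the original exchange:

```python
import math

a, b, c = -16.0, 190.0, 0.0        # assumes the intended equation is y = -16x^2 + 190x
disc = b**2 - 4 * a * c            # discriminant: 190^2 - 0 = 36100, sqrt = 190
roots = ((-b + math.sqrt(disc)) / (2 * a),   # (-190 + 190) / -32
         (-b - math.sqrt(disc)) / (2 * a))   # (-190 - 190) / -32 = -380 / -32
print(roots)  # roots are 0 and 11.875, matching the 11.87 found in the thread
```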
{}
Number of ways to distribute N balls into K boxes with a special condition

Revision en1, by van_persie9, 2019-08-25 00:14:57

Recently I came across a question in which we had to find the number of ways to distribute N balls into 3 boxes such that the number of boxes holding the maximum number of balls is exactly 1. For example

• if N=2, the answer is 3: {2,0,0}, {0,2,0}, {0,0,2}
• if N=3, the answer is 9: {3,0,0}, {0,3,0}, {0,0,3}, {3,2,1}, {3,1,2}, {1,3,2}, {2,3,1}, {1,2,3}, {2,1,3}

Now, since the number of boxes here is only 3, I was able to solve the question by observation and basic maths (arithmetic progressions). My query is how to solve the above question if the number of boxes is K?
{}
# $\cos(x)$ and $\arccos(x)$ couple limit

Find the value of the following limit: $$\lim_{n\to\infty} \frac {\cos 1 \cdot \arccos \frac{1}{n}+\cos\frac {1}{2} \cdot \arccos \frac{1}{(n-1)}+ \cdots +\cos \frac{1}{n} \cdot \arccos{1}}{n}$$

- Are you limiting $\frac{\cos(1)\arccos(\frac{1}{n})+\cos(\frac{1}{2})\arccos(\frac{1}{n-1})+\cdots+\cos(\frac{1}{n})\arccos(1)}{n}$ ? –  Nancy Rutkowskie May 25 '12 at 17:49
- @Nancy R: i posted the limit above. Yes. –  Chris's sis the artist May 25 '12 at 17:54
- @Chris My first thought was to treat the sum in the numerator as a series and try to sum it up to, say, $k$, then bound the given sequence from above by the sequence $\frac{k}{n}$ and from below by the constant function $0$. But I can't see immediately if it's the shortest way to evaluate that limit or if some extra difficulties wouldn't appear. –  data May 25 '12 at 17:59
- @m.woj: thanks for your comment –  Chris's sis the artist May 25 '12 at 18:06

## 3 Answers

What follows is a little hand-wavy, and I wish I had a more rigorous demonstration, but the post is too big for a comment. $$\begin{eqnarray} \frac{1}{n} \sum_{k=1}^n \cos\left(\frac{1}{k}\right) \arccos\left( \frac{1}{n+1-k} \right) &=& \frac{1}{n} \sum_{k=1}^n \left( 1 - 2 \sin^2\left(\frac{1}{2 k}\right) \right) \left( \frac{\pi}{2} - \arcsin\left( \frac{1}{n+1-k} \right) \right) \end{eqnarray}$$ We can now split the sum in two parts, $1 \leqslant k \leqslant \lfloor\frac{n}{2}\rfloor$ and $\lfloor\frac{n}{2}\rfloor < k \leqslant n$. In each of these parts either $\sin$ or $\arcsin$ will be small, and the limiting value will be $\frac{\pi}{2}$:

```
In[28]:= Table[
  N[1/n Sum[Cos[1/k] ArcCos[1/(n - k + 1)], {k, 1, n}], 50],
  {n, {5000, 10000, 100000}}] // N

Out[28]= {1.56861, 1.56963, 1.57066}
```

More rigorously: $$\frac{1}{n} \sum_{k=1}^n \left( 1 - 2 \sin^2\left(\frac{1}{2 k}\right) \right) \left( \frac{\pi}{2} - \arcsin\left( \frac{1}{n+1-k} \right) \right) = \frac{\pi}{2} - \frac{\pi}{n} \sum_{k=1}^n \sin^2 \frac{1}{2k} - \frac{1}{n} \sum_{k=1}^n \arcsin\frac{1}{k} + \frac{2}{n} \sum_{k=1}^n \sin^2\left( \frac{1}{2k}\right) \arcsin\left(\frac{1}{n+1-k}\right)$$ (in the second sum we reindexed $k \mapsto n+1-k$). Now: $$0 \leqslant \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n \sin^2\left(\frac{1}{2k}\right) \leqslant \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n \frac{1}{4k^2} = 0$$ $$0 \leqslant \lim_{n\to \infty} \frac{1}{n} \sum_{k=1}^n \arcsin\left(\frac{1}{k}\right) \leqslant \lim_{n \to \infty} \frac{\pi}{2n} \sum_{k=1}^n \frac{1}{k} \leqslant \lim_{n \to \infty} \frac{\pi}{2 n} \left(1+\ln n\right) = 0$$ $$0 \leqslant \lim_{n \to \infty} \frac{2}{n} \sum_{k=1}^n \sin^2 \left( \frac{1}{2k} \right) \arcsin\left(\frac{1}{n+1-k} \right) \leqslant \lim_{n \to \infty} \frac{2}{n} \sum_{k=1}^n \frac{1}{4k^2} \cdot \frac{\pi}{2} = \lim_{n \to \infty} \frac{\pi}{4n} \sum_{k=1}^n \frac{1}{k^2} = 0$$

- could you provide some more details pls about that splitting? I'm surrounded by mist because maybe it's something i don't catch yet. –  Chris's sis the artist May 25 '12 at 19:12
- thanks for your solution. It's a bit ugly but i need to get used to these not-so-nice approaches.
–  Chris's sis the artist May 25 '12 at 20:17

A more convenient way to state the sequence is: $$\frac{\sum_{k=1}^n\cos\frac{1}{k}\arccos\frac{1}{n-k+1}}{n}$$ Note that for $0\leq x\leq 1$ we have $1-x\leq\cos x\leq 1$ and $\frac{\pi}{2}-\frac{\pi}{2}x\leq\arccos x\leq \frac{\pi}{2}-x$. Therefore, we have $$\tfrac{\pi}{2}(1-\tfrac{1}{k})(1-\tfrac{1}{n-k+1})\leq\cos\frac{1}{k}\arccos\frac{1}{n-k+1}\leq\frac{\pi}{2}-\frac{1}{n-k+1}.$$ Thus, a lower bound for the limit is given by the sequence $$\frac{\sum_{k=1}^n\frac{\pi}{2}(1-\tfrac{1}{k})(1-\tfrac{1}{n-k+1})}{n}=\frac{\pi}{2}-\frac{\pi}{2}\cdot\frac{\sum_{k=1}^n\frac{(n-k+1)+k-1}{k(n-k+1)}}{n}=\frac{\pi}{2}-\frac{\pi}{2}\sum_{k=1}^n\frac{1}{k(n-k+1)}$$ The terms in the latter sum can be rewritten as $$\frac{1}{k(n+1)}+\frac{1}{(n-k+1)(n+1)}$$ so we get the sums $$\frac{1}{n+1}\sum_{k=1}^n\frac{1}{k}\qquad\text{and}\qquad\frac{1}{n+1}\sum_{k=1}^n\frac{1}{n-k+1}.$$ Sasha indicated a proof in the comments that these sums tend to zero. Similarly, for the upper bound we have $$\frac{\sum_{k=1}^n\left(\frac{\pi}{2}-\tfrac{1}{n-k+1}\right)}{n}=\frac{\pi}{2}-\frac{\sum_{k=1}^n\frac{1}{n-k+1}}{n}\to\frac{\pi}{2}$$ This gives an upper bound of $\frac{\pi}{2}$. This finishes the proof that the limit is $\frac{\pi}{2}$.

Note: Another proof that $\frac{1}{n}\sum_{k=1}^n\frac{1}{k}\to 0$. Using Cauchy-Schwarz, we see that $$\sum_{k=1}^n\frac{1}{k}\leq \sqrt{n}\sqrt{\textstyle\sum_{k=1}^n\tfrac{1}{k^2}}$$ Therefore we get $$\frac{1}{n}\sum_{k=1}^n\frac{1}{k}\leq\sqrt{\frac{\textstyle\sum_{k=1}^n\tfrac{1}{k^2}}{n}}$$ In the square root on the right hand side, the numerator converges, so the whole tends to zero.

- +1 This is exactly the way to go. You can now show that $\lim_{n\to \infty} \frac{1}{n} \sum_{k=1}^n \frac{1}{n+1-k} = \lim_{n\to \infty} \frac{1}{n} \sum_{k=1}^n \frac{1}{k} = \lim_{n\to \infty} \frac{1}{n} H_n = 0$, and similarly for the sum of the lower bound. –  Sasha May 25 '12 at 19:42
- This is becoming a community proof now. Thanks @Sasha!
–  Egbert May 25 '12 at 20:34
- @Egbert: thanks for your interesting proof. –  Chris's sis the artist May 25 '12 at 20:48
- Great proof Egbert! I'm trying to come up with one involving integration... –  Pedro Tamaroff May 25 '12 at 20:50
- ...but it seems fruitless. –  Pedro Tamaroff May 25 '12 at 20:54

This is too long for a comment, so I am posting it as an answer, although it doesn't completely resolve the question. I shall prove that $\frac{\pi}{2}$ is an upper bound.

Lemma. For all $n\in\Bbb N$, we have $\frac{1}{n} \sum\limits_{k=1}^n \cos\left(\frac{1}{k}\right) \arccos\left( \frac{1}{n+1-k} \right) \leq \frac{\pi}{2}$.

Proof. Since both functions are decreasing, we have: $$\frac{1}{n} \sum_{k=1}^n \cos\left(\frac{1}{k}\right) \arccos\left( \frac{1}{n+1-k} \right) \leq \frac{1}{n}\cdot n\cos(0)\arccos(0)=\frac{\pi}{2},$$ which shows the desired inequality. $\square$

As Sasha mentions above this is probably also the value of the limit. As Chris notes in the comment below this answer, it is possible to prove that $\frac{\pi}2$ is also a lower bound by a simple application of the AM-GM inequality and the Cesaro-Stolz theorem, completing the proof.

- @Chris: That's a great idea! There may be some complications caused by the fact that $\arccos1=0$ but we can simply throw that last term away and write $\frac1n=\frac1{n-1}\frac{n-1}n$, allowing us to use AM-GM on the remaining $n-1$ terms, which are positive. –  Dejan Govc May 26 '12 at 17:52
- @Chris: I just noticed the rearrangement inequality didn't really add anything. How silly of me. (I have edited the answer accordingly.) –  Dejan Govc May 27 '12 at 12:15
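The claimed value $\frac{\pi}{2} \approx 1.5708$ is easy to check numerically; this is a plain Python sketch of the same computation the Mathematica snippet in the first answer performs (the function name is mine):

```python
import math

def average(n):
    """Compute (1/n) * sum_{k=1}^{n} cos(1/k) * arccos(1/(n+1-k))."""
    return sum(math.cos(1 / k) * math.acos(1 / (n + 1 - k))
               for k in range(1, n + 1)) / n

print(average(100000))  # approaches pi/2 = 1.5708... from below
```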
{}
Article Contents

# A semidiscrete Galerkin scheme for backward stochastic parabolic differential equations

• In this paper, we present a numerical scheme to solve the initial-boundary value problem for backward stochastic partial differential equations of parabolic type. Based on the Galerkin method, we approximate the original equation by a family of backward stochastic differential equations (BSDEs, for short), and then solve these BSDEs by time discretization. Combining the truncation with respect to the spatial variable and the backward Euler method on the time variable, we obtain the global $L^2$ error estimate.

Mathematics Subject Classification: Primary: 60H15, 65M60; Secondary: 65C30.
{}
# Is true * true = true in Separation Logic?

I am trying to show that the following inference is unsound in terms of Separation Logic: $$(p_0 \implies p_1) \implies ((p_0 * q) \implies (p_1 * q))$$ I came up with the following values for a counterexample: $$p_0 = true$$, $$p_1 = x \mapsto 1$$, $$q = true$$. The heap contains just one mapping, $$x \mapsto 1$$. The idea is that $$(p_1 * q)$$ cannot be true because $$p_1$$ already describes the whole heap and there is nothing left for $$q$$. I am not sure, however, why $$true * true$$ would hold if there is only one element in the heap. Shouldn't it be false, as those two $$true$$s cannot claim that single heap element at the same time?
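Under the standard semantics, $$h \models P * Q$$ holds iff $$h$$ splits into two disjoint sub-heaps $$h_1 \uplus h_2$$ with $$h_1 \models P$$ and $$h_2 \models Q$$; crucially, the empty sub-heap is an allowed part of the split, and $$true$$ holds on every heap, including the empty one. A tiny finite-heap check of that definition (my own sketch; the predicate encodings are assumptions, not from the question):

```python
from itertools import combinations

def splits(heap):
    """All ways to split a finite heap (a dict) into two disjoint sub-heaps,
    including the splits where one side is the empty heap."""
    keys = list(heap)
    for r in range(len(keys) + 1):
        for left in combinations(keys, r):
            h1 = {k: heap[k] for k in left}
            h2 = {k: heap[k] for k in keys if k not in left}
            yield h1, h2

def sep_conj(p, q, heap):
    """h |= p * q  iff some disjoint split satisfies p and q."""
    return any(p(h1) and q(h2) for h1, h2 in splits(heap))

true = lambda h: True                  # holds on every heap, even the empty one
points_x1 = lambda h: h == {"x": 1}    # x |-> 1 as an exact ("precise") heap

heap = {"x": 1}
print(sep_conj(true, true, heap))        # True: split as ({x: 1}, {})
print(sep_conj(points_x1, true, heap))   # True: q is satisfied by the empty sub-heap
```

So the two $$true$$s need not both claim the single cell: one side of the split can take the whole heap and the other the empty heap.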
{}
## Jul 2, 2016

### C Programming #80: string literal concatenation

This article details a very tiny feature of the C programming language: string literal concatenation. As we know, a constant string (string literal) is put in double quotes in C. If two string literals are placed side by side, with or without white-space between them, they are concatenated. Let us explore this with an example.

```c
#include <stdio.h>

int main()
{
    char *p = "Hello " "World";  /* adjacent literals are joined at compile time */
    printf("p is %s\n", p);
    return 0;
}
```

```
p is Hello World
```

In the above program there are two string constants, "Hello " and "World", which are placed side by side. C concatenates them, ignoring any white-space between the string constants. This comes in very handy when a long string needs to be spread across several lines.
{}
# American Institute of Mathematical Sciences

December 2019, 39(12): 7265-7290. doi: 10.3934/dcds.2019303

## Sharp large time behaviour in $N$-dimensional Fisher-KPP equations

1 Institut de Mathématiques de Toulouse; UMR 5219, Université de Toulouse; CNRS, Université Toulouse Ⅲ, 118 route de Narbonne, 31062 Toulouse, France
2 Centre d'Analyse et de Mathématique Sociales; UMR 8557, Paris Sciences et Lettres; CNRS, EHESS, 54 Bv. Raspail, 75006 Paris, France
3 Institut de Mathématiques de Toulouse; UMR 5219, Université de Toulouse; CNRS, INSA Toulouse, 135 av. Rangueil, 31077 Toulouse, France

* Corresponding author

Dedicated to L. Caffarelli, as a sign of friendship, admiration and respect

Received February 2019. Revised August 2019. Published September 2019.

Fund Project: The first and second authors are supported by the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 321186 - ReaDi - "Reaction-Diffusion Equations, Propagation and Modelling". The third author is supported by the ANR project NONLOCAL ANR-14-CE25-0013.

We study the large time behaviour of the Fisher-KPP equation $\partial_t u = \Delta u + u - u^2$ in spatial dimension $N$, when the initial datum is compactly supported. We prove the existence of a Lipschitz function $s^\infty$ of the unit sphere, such that $u(t, x)$ approaches, as $t$ goes to infinity, the function $$U_{c_*}\bigg(|x|-c_*t + \frac{N+2}{c_*} \ln t + s^\infty\Big(\frac{x}{|x|}\Big)\bigg),$$ where $U_{c_*}$ is the 1D travelling front with minimal speed $c_* = 2$. This extends an earlier result of Gärtner.

Citation: Jean-Michel Roquejoffre, Luca Rossi, Violaine Roussier-Michon. Sharp large time behaviour in $N$-dimensional Fisher-KPP equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12) : 7265-7290. doi: 10.3934/dcds.2019303
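For context on the abstract above: the profile $U_{c_*}$ is the classical one-dimensional Fisher-KPP travelling front. Substituting the travelling-wave ansatz $u(t,x) = U(x - ct)$ into $\partial_t u = \Delta u + u - u^2$ in one space dimension gives the profile equation (a standard fact, stated here only as background):

```latex
% Travelling-wave ansatz u(t,x) = U(x - ct) in one space dimension:
% u_t = -c U' and u_xx = U'', so the PDE reduces to
\[
  U'' + c\,U' + U - U^2 = 0, \qquad U(-\infty) = 1, \quad U(+\infty) = 0.
\]
% Monotone fronts of this type exist exactly for c >= c_* = 2,
% and U_{c_*} is the profile at the minimal speed.
```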
# Mean symbol latex

List of LaTeX mathematical symbols (from OeisWiki). All the predefined mathematical symbols from the TeX package are listed below; more symbols are available from extra packages. The more unusual symbols are not defined in base LaTeX (NFSS) and require \usepackage{amssymb}.

Greek and Hebrew letters (symbol, command): α \alpha, κ \kappa, ψ \psi, ϝ \digamma, ∆ \Delta, Θ \Theta, β \beta, λ \lambda, ρ \rho, ε \varepsilon, Γ \Gamma, Υ \Upsilon, …

From TeX - LaTeX Stack Exchange: in math it is better to use a symbol instead of two words, and for an average it is better to use \bar than \overline.

A LaTeX symbol is a character, or a backslash followed by a symbol name, that is rendered by LaTeX. Some symbols have required parameters that contain text rendered inside the given symbol, such as \sqrt. Different classes of mathematical symbols are characterized by different formatting (for example, variables are italicized, but operators are not) and different spacing. The mathematics mode in LaTeX is very flexible and powerful; much more can be done with it, such as subscripts and superscripts.

### List of LaTeX mathematical symbols - OeisWiki

• Finding other symbols. External resources for finding less commonly used symbols: Detexify is an online application which lets you draw the symbol you'd like and shows you the code for it; MathJax (an AJAX library that renders TeX on the web) maintains a list of supported commands; and there is The Comprehensive LaTeX Symbol List.
• Guide. This list is organized by symbol type and is intended to facilitate finding an unfamiliar symbol by its visual appearance.
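The advice above (prefer \bar to \overline for an average) can be sketched in a minimal document:

```latex
\documentclass{article}
\begin{document}
% \bar puts a short bar over a single symbol: the usual sample-mean notation
$\bar{x}$

% \overline stretches over its whole argument, so it suits longer
% expressions but looks heavy over a single letter
$\overline{x}$, $\overline{x + y}$
\end{document}
```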
For a related list organized by mathematical topic, see the list of mathematical symbols by subject. That list also includes LaTeX and HTML markup, and Unicode code points, for each symbol.

• Input encoding, diacritics and special characters: here "special characters" means all symbols other than the lowercase letters a-z, the uppercase letters A-Z, the figures 0-9, and English punctuation marks. Some languages need a dedicated input system to ease document writing.
• Detexify can greatly simplify the search for the command for a specific symbol; another option is to look in The Comprehensive LaTeX Symbol List. Greek letters are commonly used in mathematics, and they are very easy to type in math mode.
• LaTeX symbols cheat sheet: an online LaTeX editor that's easy to use. No installation, real-time collaboration, version control, hundreds of LaTeX templates, and more.

The Comprehensive LaTeX Symbol List (Scott Pakin, 25 June 2020) lists 14,599 symbols and the corresponding LaTeX commands that produce them. Some of these symbols are guaranteed to be available in every LaTeX2e system; others require extra fonts.

Average symbol in LaTeX? (Google Groups thread, 8 messages.) Moritz Beller: "Hello, I figured out quite a bad approximation of the average symbol with $\varnothing x$. Is this just the arithmetic mean of x, or does average mean something different here? Is $\varnothing$ common in some particular field?"

percent sign (LaTeX symbol): the character sequence \% generates a percent (%) sign.
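The percent-sign rule quoted above, together with %'s usual role as a comment character, looks like this in practice:

```latex
% Anything after an unescaped % is a comment and produces no output.
Sales grew by 5\% last year.  % \% prints a literal percent sign
```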
See also: %, the command used to denote a comment (after a %, the rest of the line is ignored).

I was wondering about the use of the LaTeX symbols \implies ($\implies$) and \therefore ($\therefore$). From the naming, I think \therefore and \implies are redundant, but I can't find a symbol for \suchthat; at university, we used $\therefore$ as a shortcut for "such that".

### Average symbol for showing a math - LaTeX Stack Exchange

1. Since January 2003 it has been possible to use TeX commands for mathematical formulas on Wikipedia. All mathematical markup must be placed between the two math tags; physical line breaks are ignored. (Translated from the Italian help page, which mirrors meta:Help:Formula.)
2. The Comprehensive LaTeX Symbol List, Scott Pakin.
3. How can I write a ° (degree) symbol in LaTeX? The \degree command is provided by the gensymb package, so adding \usepackage{gensymb} to your preamble should enable the command. Another alternative is the \textdegree command, which is provided by the textcomp package.
4. Probability and statistics symbols table and definitions: expectation, variance, standard deviation, distribution, probability function, conditional probability, covariance, correlation.
5. There are commands to put a bar or a tilde over a symbol in math mode in LaTeX. Sometimes the output doesn't come out the way some of us might expect or want; fortunately, there are alternative commands that do the same task differently that we can try.
6.
It doesn't mean anything actually; it's just another character in the macro name. It's just not allowed in normal LaTeX mode, only inside packages (or, more generally, after invoking \makeatletter). - Konrad Rudolph

The following list of mathematical symbols by subject features a selection of the most common symbols used in modern mathematical notation within formulas, grouped by mathematical topic. As it is virtually impossible to list all the symbols ever used in mathematics, only those which occur often in mathematics or mathematics education are included.

Translingual: μ (mu), the lower-case 12th letter of the modern Greek alphabet, denotes the population mean (statistics), the coefficient of friction (physics), magnetic permeability (physics), the muon (physics), and, in dated usage, the micron or micrometre.

### LaTeX symbol - LaTeX Wiki (Fandom)

According to Wikipedia, a glossary is an alphabetical list of terms in a particular domain of knowledge with the definitions for those terms. It doesn't come as a surprise that there are several LaTeX packages that assist with the generation of glossaries, among them the nomencl package, the glossary package, and the glossaries package.

Latex (the material) is a stable dispersion of polymer microparticles in water. Latexes are found in nature, but synthetic latexes are common as well. Latex as found in nature is a milky fluid found in 10% of all flowering plants (angiosperms): a complex emulsion of proteins, alkaloids, starches, sugars, oils, tannins, resins, and gums that coagulates on exposure to air.

Wikipedia:LaTeX symbols. This is an information page.
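The statistical reading of μ noted above (the population mean) is typeset with \mu in math mode; a small illustration:

```latex
% \mu: population mean of N values, written in display math mode
\[
  \mu = \frac{1}{N} \sum_{i=1}^{N} x_i
\]
```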
It is not one of Wikipedia's policies or guidelines; it describes some aspects of Wikipedia's norms, customs, technicalities, or practices, and may reflect varying levels of consensus and vetting.

The square root of a number can never be negative by definition; the root of a quadratic equation, however, can be either positive or negative. The solution to the equation $$x^2=4$$ is given by $$x = \pm 2$$. The symbol $$\pm$$ is written using the code \pm in LaTeX.

### Mathematical expressions - Overleaf, Online LaTeX Editor

Latex definition: a milky liquid in certain plants, such as milkweeds, euphorbias, poppies, or the plants yielding India rubber, that coagulates on exposure to air.

1 Introduction. Welcome to the Comprehensive LaTeX Symbol List! This document strives to be your primary source of LaTeX symbol information: font samples, LaTeX commands, packages, usage details, caveats - everything needed to put thousands of different symbols at your disposal.

Special symbols in LaTeX: the LaTeX language has a wide variety of special symbols for which markup commands have already been defined, ranging from accents and Greek letters to exotic mathematical operators. These symbols are grouped together more or less according to function.

Sum-class symbols, or accumulation symbols, are symbols whose sub- and superscripts appear directly below and above the symbol rather than beside it: \sum is one of these symbols, whereas \Sigma is not. (The terminology is from the AMS-LaTeX documentation.)

### Symbols - Art of Problem Solving

• For more symbols, you can use LaTeX markup by setting the Interpreter property to 'latex'. Use dollar symbols around the text.
For example: title('$\hat{\psi}$', 'Interpreter', 'latex'). If you are using the legend function in R2018a or earlier, you must specify the labels as a cell array to distinguish the labels from the name-value pairs.
• You can also insert symbols and special characters in Excel: most spreadsheets are full of numbers and some include text, but symbols and special characters open up further possibilities.
• Set symbols of set theory and probability, with name and definition: set, subset, union, intersection, element, cardinality, empty set, natural/real/complex number sets.
• "LaTeX complains the two commands are foreign. Also, the two symbols are set at equal size, which is not what I wanted; Artelius's answer matches my needs perfectly." (forum comment)
• ISO 15223-1, Medical Devices - Symbols To Be Used with Medical Device Labels, Labeling, and Information to be Supplied, clause 5.4.5: Item Contains or Has a Presence of Natural Rubber Latex.
• LaTeX arrows. LaTeX provides a huge number of different arrow symbols. Arrows are used within the math environment; to use one in text, put the arrow command between two $ signs, e.g. $\uparrow$ gives an up arrow in text.
• Integer and sum limits. In inline math mode, the lower and upper limits of an integral/sum/product are placed to the right of the symbol, and similarly for limit expressions.
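A small sketch of the arrow commands mentioned above, both inline and in display math:

```latex
% Arrow commands work only in math mode; wrap them in $...$ inside text.
An up arrow in text: $\uparrow$.
\[
  \uparrow \quad \downarrow \quad \leftarrow \quad \rightarrow \quad \Longrightarrow
\]
```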
If you want the limits of an integral/sum/product to be specified above and below the symbol in inline math mode, use the \limits command before the limits specification.

### List of mathematical symbols - Wikipedia

Geometry symbols (symbol, name, meaning, example):
• ∠ angle: formed by two rays (∠ABC = 30°)
• measured angle (ABC = 30°)
• spherical angle (AOB = 30°)
• ∟ right angle: = 90° (α = 90°)
• ° degree: 1 turn = 360° (α = 60°)
• deg degree: 1 turn = 360deg (α = 60deg)
• ′ prime: arcminute, 1° = 60′ (α = 60°59′)
• ″ double prime: arcsecond, 1′ = 60″

LaTeX is especially useful for documents that contain mathematical symbols and related formulae. Here is a list of LaTeX math symbols: the degree symbol, the does-not-equal sign, the greater-than-or-equal-to and less-than-or-equal-to signs, the omega symbol, the approximately symbol, etc.

Define latex: n., pl. latices or latexes. 1. The colorless or milky sap of certain plants, such as the poinsettia or milkweed, that coagulates on exposure to air.

Easy-to-use symbol, keyword, package, style, and formatting reference for the LaTeX scientific publishing markup language; hundreds of macros are documented and categorized.

The LaTeX command \overline will draw a line over the bracketed text; this is a nice command to use when writing a repeating decimal.

Number sets such as the natural numbers or the complex numbers are not provided by default by LaTeX. That doesn't mean LaTeX doesn't know those sets, or more importantly their symbols: there are two packages which provide the same set of symbols.

After looking for a builtin expectation symbol in LaTeX, and coming up with none, I've defined one.
Just add "% Expectation symbol" and \DeclareMathOperator*{\E}{\mathbb{E}} to your LaTeX preamble and you're done. You'll also need to add \usepackage{amsmath}, or in LyX to tick "Use AMS math package" under Document->Settings->Math Options.

\equiv produces the ≡ symbol.

If you \usepackage{amssymb}, the \blacksquare command will typeset a solid black square and the \square command a hollow square. The ulsy package has a few versions of the lightning bolt for contradictions: \blitza, \blitzb, ..., \blitze; just drop \usepackage{ulsy} into the preamble of your document. Finally, as others have pointed out, the Comprehensive LaTeX Symbol List is a good reference.

Natural rubber latex is used in the manufacture of various FDA-regulated products, such as condoms and medical gloves.

Average and mean are two names for the same thing; it can be denoted by the Greek letter μ (mu) or by x̄ (x-bar).

LaTeX has packages which automatically generate dummy text in your document with just a few lines of code. One such package is lipsum, which has access to 150 paragraphs of "Lorem ipsum" dummy text.

Learn the LaTeX commands to display the Greek alphabet.
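Putting the expectation-operator definition above into a complete, compilable preamble might look like this (the operator name \E is the thread's own choice; \mathbb additionally needs amssymb):

```latex
\documentclass{article}
\usepackage{amsmath}   % required by \DeclareMathOperator
\usepackage{amssymb}   % provides \mathbb
% Expectation symbol; the starred form places subscripts underneath
% the operator in display mode, like \lim or \max
\DeclareMathOperator*{\E}{\mathbb{E}}
\begin{document}
\[
  \E_{x \sim p} [f(x)] = \int f(x)\, p(x) \,\mathrm{d}x
\]
\end{document}
```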
A rendered preview of all letters is shown alongside all commands in a table.

To control the distance between the symbol or abbreviation and the explaining text, use the optional distance argument: \printnomenclature[5em]. To change the name of the list, use \renewcommand{\nomname}{List of Symbols}. Similar to a glossary or bibliography, the document is typeset once (latex); next, the nomenclature is generated with makeindex.

In logic, a set of symbols is commonly used to express logical representation. The following table lists many common symbols together with their name, pronunciation, and the related field of mathematics; additionally, the third column contains an informal definition, the fourth a short example, and the fifth and sixth the Unicode location and name for use in HTML documents.

The equals sign or equality sign, =, is a mathematical symbol used to indicate equality in some well-defined sense. It was invented in 1557 by Robert Recorde. In an equation, the equals sign is placed between two expressions that have the same value, or for which one studies the conditions under which they have the same value. In Unicode and ASCII it has the code point 3D.

### LaTeX/Special Characters - Wikibooks, open books for an open world

All communication happens through symbols. Math symbols take the form of words, sounds, gestures, ideas or visual images, used to convey ideas and beliefs. For example, a red octagon may be a symbol for STOP; on a map, a blue line might represent a river; numerals are symbols for numbers; alphabetic letters may be symbols.

The ™ and ® symbols are only valid in the countries where trademark ownership and protection of a mark is granted to the one who first uses it (and not to the one who first registers it). An example of a mark that uses ™ with its logo is Starbucks; ® denotes a registered trademark.

### LaTeX/Mathematics - Wikibooks, open books for an open world

• An online LaTeX editor that's easy to use.
Collaborate in real time, without installation, with version control, hundreds of LaTeX templates, and more.
• Bowers et al. (1997, Appendix 4) offer an excellent overview of the composition rules for symbols of actuarial functions. In a nutshell, a principal symbol, say S, is combined with auxiliary symbols positioned in subscript or in superscript, to the left or to the right.
• LaTeX is a typesetting language used for technical documents. It is a free software package created in 1985 by the American computer scientist Leslie Lamport as an addition to the TeX typesetting system, intended to make it easier to produce general-purpose books and documents with TeX.

latex2exp is an R package that parses and converts LaTeX math formulas to R's plotmath expressions. Plotmath expressions are used to enter mathematical formulas and symbols to be rendered as text, axis labels, etc., throughout R's plotting system.

For the sake of simplicity, LaTeX separates the tasks of typesetting mathematics and typesetting normal text. This is achieved by the use of two operating modes, paragraph mode and math mode. There is also a third mode called LR mode; however, this is rarely used by beginners and is usually implicitly entered with other commands.

The Roman/Latin alphabet is used when working with a sample, and the Greek alphabet when working with a population. Thus: S, s or SD for the standard deviation of a sample, and σ (lower-case sigma) for the standard deviation of a population.
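The Latin-versus-Greek convention above (sample statistics versus population parameters) rendered in LaTeX, assuming amsmath is loaded for \text:

```latex
% Latin letters for sample statistics, Greek letters for population parameters
\[
  \bar{x},\ s \quad \text{(sample mean, standard deviation)}
  \qquad
  \mu,\ \sigma \quad \text{(population mean, standard deviation)}
\]
```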
The default differential symbol d, used in \differential and \derivative, can be switched to an italic form by including the option italicdiff in the preamble: \usepackage[italicdiff]{physics}. Thus \dd gives d; \dd x gives dx with no spacing (not recommended); \dd{x} gives dx with automatic spacing based on neighbours; \dd[3]{x} gives d³x (optional power).

Dictionary usage examples for "symbol" in the general sense: 'As a symbol of Britain's musical worth, it's a bit of an embarrassment.' 'Gill sees David as a symbol of an age, a city, a man and an artist.'

The symbol # is known as the number sign, hash, or (in North American usage) pound sign. It has historically been used for a wide range of purposes, including the designation of an ordinal number and as a ligatured abbreviation for pounds avoirdupois, having been derived from the now-rare ℔. Since 2007 it has been widely used to introduce metadata tags on social media.

The LaTeX syntax: when using LaTeX, you write a plain text file which describes the document's structure and presentation. LaTeX converts this source text, combined with markup, into a typeset document. By analogy, web pages work in a similar way: HTML describes the document, which is then rendered into on-screen output with different colours, fonts, sizes, etc.

LaTeX is great in that it can display all those strange math symbols for you. Summation is a common symbol and it is really useful to know how to display it in LaTeX. There are two ways: compressed to fit onto one line (useful when printing long equations or proofs) or in a larger, more readable format.

The Comprehensive LaTeX Symbol List currently showcases symbols from 214 separate typefaces.
The same directory that contains this README file also contains SYMLIST (an ASCII list of symbols that appear in the symbols list) and prebuilt versions of the symbol list for both A4 and U.S. Letter sized paper.

Typing Succ or Prec just returns LaTeX symbol tables. I'm using them in mathematical programming, but in practice just replacing them by the corresponding $\le$, $\ge$, etc. What do they mean more generally?

It doesn't mean anything in Python: $ marks the beginning and end of maths in a LaTeX string, so $\sin x+1$ and $\cos x^2+1$ are typeset as formulas. For a basic tutorial see MathJax, which uses LaTeX syntax.

Except for people familiar with LaTeX, this is often unfamiliar territory. In this post I'll show you, with examples, how to write equations in Jupyter notebook's markdown.

I got tired of hunting down color codes and syntax, and saw that there were a surprising number of searches for "latex color", whence the solution seemed obvious.

Complex number symbols in LaTeX: there may be some disambiguation benefit in using the double-struck italic letters from Unicode's Letterlike Symbols block, U+2148 ⅈ and U+2149 ⅉ. At the very least, I've found it useful to use those characters in my source code.

Most important, this post shows the basics of math symbols in LaTeX. As Wikipedia notes, one of the greatest motivating forces for Donald Knuth when he began developing the original TeX system was to create something that allowed simple construction of mathematical formulas, while looking professional when printed.

LaTeX forum ⇒ Math & Science ⇒ Mathematical Expectation symbol. Information and discussion about LaTeX's math and science related features (e.g. formulas, graphs).
LaTeX math formulas: TeX provides almost any mathematical symbol you're likely to need. The commands for generating them can be used only in math mode. For example, if you include $\pi$ in your source, you will get the symbol π in your output.

LaTeX forum ⇒ Math & Science ⇒ arc over symbols.

### symbol-table - Overleaf, Online LaTeX Editor

• Mathematical symbols are used to perform various operations and make it easier to refer to mathematical quantities. It is interesting to note that mathematics is completely based on numbers and symbols; the symbols not only refer to different quantities but also represent the relationship between two quantities.
• To use additional special characters, such as integral and summation symbols, you can use LaTeX markup instead. This example shows how to insert Greek letters, superscripts, and annotations into chart text and explains other available TeX options.
• "It could mean several different things depending on context, so you might want to be more specific than 'a paper'." (Tobias Kildetoft) "It usually means much greater than; but as Tobias says, precisely what that means will depend on the context." (Christopher)

This wikiHow teaches you how to insert the x-bar statistical symbol into a Microsoft Word document. Open Microsoft Word; you'll find it in the Microsoft Office area of your Start menu.

Press Alt with the appropriate letter. For example, to type ⊂, ⊆ or ⊄, hold Alt and press C one, two or three times. Stop the mouse over each button to learn its keyboard shortcut. Shift + click a button to insert its upper-case form. Alt + click a button to copy a single character to the clipboard.
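The \gg relation discussed in the comments above, with its counterpart \ll, is typeset as follows:

```latex
% \ll "much less than" and \gg "much greater than", base LaTeX math mode
\[
  0 < \varepsilon \ll 1, \qquad N \gg 1
\]
```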
You can select text and press Ctrl + C to copy it to your document.

Before it became the standard symbol for e-mail, the @ symbol was typically used to indicate the cost or weight of something. For example, if you bought five oranges for $1.25 each, you might write it as "5 oranges @ $1.25 ea." It is still used in this manner on a variety of forms and invoices around the world.

### Average symbol in LaTeX? - Google Groups

• Analyse and paint the TLatex formula. It is called twice: first to calculate the size of each portion of the formula, then to paint the formula. When analysis finds an operator or separator, it calls itself recursively to analyse the operator's arguments; when the argument is an atom (normal text), it calculates its size and returns it as the result.
• Some welding symbols look complicated, but broken down they are quite simple: illustrations of the pre-weld joint seen side-on, as through a cross-section. Each symbol is explained individually, with its weld profile alongside it.
• Probability symbols and explanations: below you'll find a list of probability symbols, e.g. P(A), the probability function.
• LaTeX is a powerful tool to typeset math. Embed formulas in your text by surrounding them with dollar signs $; the equation environment typesets one formula; the align environment aligns formulas at the ampersand & symbol; single formulas must be separated with two backslashes \\; use the matrix environment to typeset matrices; scale parentheses with \left( \right) automatically.
• The simplest answer: \not\in. The packages txfonts, pxfonts, kpfonts, mathdesign and others provide the \notin command, not just mathabx. (If you want to keep the document in the default Computer/Latin Modern, you could load one of these and then load lmodern right afterwards.)
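The \not\in versus \notin point above in a one-line comparison (base LaTeX defines \notin; the packages named above supply a dedicated glyph):

```latex
$x \notin A$    % predefined negated-membership command
$x \not\in A$   % composed with the generic \not prefix; usually prints the same
```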
(If you want to keep the document in the default Computer/Latin Modern, you could load one of these and then load lmodern right afterwards. • Symbol Language Translator Make coded messages! Generate Random Sentence. Send. Use my translator to convert English text into symbols! EX: Hello world! = Ó´¬¬ø. • All symbols, whether explicitly entered using Symbol or not, have head Symbol. x _Symbol can be used as a pattern to represent any symbol. The string name in Symbol [ name ] must be an appropriate name for a symbol. It can contain any letters, letter ‐ like forms, or digits, but cannot start with a digit Mathematical symbols and signs of basic math, algebra, geometry, statistics, logic, set theory, calculus and analysi See the LaTeX WikiBook for more information (especially the section on mathematics). Common Symbols. Below we give a partial list of commonly used mathematical symbols. Most other symbols can be inferred from these examples. See the LaTeX WikiBook (Mathematics) and the Detexify App to find any symbol you can think of Using lists in LaTeX is pretty straightforward and doesn't require you do add any additional packages. For unordered lists, LaTeX provides the itemize environment and for ordered lists there is the enumerate environment. The elements within both environments have to be declared beginning with the \item command Board index LaTeX Fonts & Character Sets Ask a question LaTeX Community Announcements Community talk Comments & Wishes New Members LaTeX Text Formatting Graphics, Figures & Tables Math & Science Fonts & Character Sets Page Layout Document Classes General LaTeX's Friends BibTeX, biblatex and biber MakeIndex, Nomenclature, Glossaries and Acronyms Conversion Tools Viewers for PDF, PS, and DV Type Mathematical Symbols Shortcut Keys Symbols Used In Mathematics Mathematics Symbols Chart Ingleses, Idiomas Math Symbols And Meanings Symbol Table... 
This symbol shall be accompanied by the name and address of the authorized representative in the European Community. Latex Free, ISO 7000:2019-2725, "Not made with natural rubber latex": consists of the ISO symbol "Contains or presence of natural rubber latex" with the negation symbol from IEC 80416-3, to indicate "Not Made with Natural Rubber Latex".

### List of Greek letters and math symbols - Overleaf, Online LaTeX Editor

Symbols are used in all branches of math to represent a formula or procedure, express a condition, or denote a constant. The four basic operations are denoted by the following symbols: + implies addition, - implies subtraction, × implies multiplication, and / implies division.

Symbol glossary definitions (symbol, standard reference, standard title, explanatory text):
• Manufacturer - EN 980, Clause 5.12, Symbols for use in the labelling of medical devices: indicates the medical device manufacturer.
• ISO 15223-1, Clause 5.1.1, Medical devices - Symbols to be used with medical device labels, labelling and information to be supplied.

Writing LaTeX code: special characters are another type of command. They don't define any formatting or structure; they print non-standard characters, or characters which usually mean something else, e.g. \LaTeX, \textbackslash, \%. Note: % is a special character reserved for comments (after a %, the rest of the line is ignored by the compiler).

Mathematical annotation in R: if the text argument to one of the text-drawing functions (text, mtext, axis, legend) in R is an expression, the argument is interpreted as a mathematical expression and the output is formatted according to TeX-like rules.
Expressions can also be used for titles, subtitles and x- and y-axis labels (but not for axis labels on persp plots) Different plotting symbols are available in R. The graphical argument used to specify point shapes is pch. Plotting symbols. The different points symbols commonly used in R are shown in the figure below : The function used to generate this figure is provided at the end of this document. pch = 0,square ### percent sign (LaTeX symbol) LaTeX Wiki Fando ALT 128 - ALT 255 produces special characters and symbols from Code Page 437 that are composed of extended characters which include international text or accented letters (diacritics), some Greek letters, line-drawing (box-drawing) symbols, mathematical symbols and miscellaneous symbols. Windows ALT codes based on Windows Code Page 125 If A is a vector, then mean(A) returns the mean of the elements.. If A is a matrix, then mean(A) returns a row vector containing the mean of each column.. If A is a multidimensional array, then mean(A) operates along the first array dimension whose size does not equal 1, treating the elements as vectors. This dimension becomes 1 while the sizes of all other dimensions remain the same Table: Text-Mode Accents Table: National Symbols Table: Miscellaneous Symbols Table: Math-Mode Accents Table: Greek Letters (Math Mode) Table: Binary Operations (Math Mode) Table: Relations (Math Mode) Table: Variable-Sized Symbols (Math Mode) Table: Delimiters (Math Mode) Table: Function Names (Math Mode) Table: Arrows (Math Mode) Table: Miscellaneous Symbols (Math Mode Degree symbol is °.Sometimes students or those who deal with mathematics, physics or various kinds of calculations may need to type a degree sign, but we do not have one directly on our keyboard.Degree symbol can be used in case if we're dealing with angles, or when we need to operate with temperature and use Celsius degree. 
It is also a common coordinate degree sign ### LaTeX symbols for therefore and suchthat - Mathematics Mathematical and scientific symbols. Common pronunciations (in British English - Gimson,1981) of mathematical and scientific symbols are given in the list below. (all the pages in this section need a unicode font installed - e.g. Arial Unicode MS, Doulos SIL Unicode, Lucida Sans Unicode - see: The International Phonetic Alphabet in Unicode. • Ponte dentale costo croazia. • Mormorio bisacquino. • Frasi sul dare per scontata una persona. • Noce moscata droga. • Picture made of many pictures. • Robusti e forti. • Intertrigo inguinal femme. • Naples botanical garden. • Chateau de goulaine a vendre. • Signaler un individu. • Gru cenerina italia. • Palloni in spugna vendita. • Arsenale venezia mappa. • Come fare linee dritte sul muro. • La scimmia yoga ashtanga. • Heron tower aquarium. • Annuncianimali. • Marco belinelli vita privata. • Memory game. • Tesina sulla fotografia mappa concettuale. • Comment enlever un copyright sur une photo. • Boulanger electromenager. • Cuccioli regalo venezia. • Immigrazione e schiavitù. • Messico pericoloso per turisti. • Esercizi di acquerello. • Winamp italiano. • Stemma della pace. • Vera davich. • Disegni da colorare gratis di barbie. • Materassi americani sealy. • String art schemi. • Progetto costruzione strumenti musicali. • Tec 9 kaufen. • Hit parade dischi più venduti. • Dove vendere scarpe usate. • Capacità autobetoniera calcestruzzo. • French bulldog puppy. • Zanichelli scienze scuola media. • Princesse sofia en arabe.
{}
Hardy's paradox is a thought experiment in quantum mechanics devised by Lucien Hardy[1][2] in 1992–93 in which a particle and its antiparticle may interact without annihilating each other. Experiments[3][4] using the technique of weak measurement[5] have studied an interaction of polarized photons and have demonstrated that the phenomenon does occur. However, the consequence of these experiments is only that past events can be inferred after their occurrence as a probabilistic wave collapse. These weak measurements are considered to be an observation themselves, and therefore part of the causation of wave collapse, making the objective results only a probabilistic function rather than a fixed reality. However, a careful analysis of the experiment shows that Hardy's paradox only proves that a local hidden-variable theory cannot exist, as there cannot be a theory that assumes that the system meets the states of reality regardless of the interaction with the measuring apparatus. This confirms that a quantum theory, to be consistent with the experiments, must be non-local (in the sense of Bell) and contextual.

## Setup description and the results

Setup for Hardy's thought experiment

The basic building blocks of Hardy's thought experiment are two Mach–Zehnder interferometers for quantum particles and antiparticles. We will describe the case using electrons and positrons. Each interferometer consists of bent paths and two beam splitters (labeled BS1 and BS2 in the accompanying diagram) and is tuned so that, when operating individually, particles always exit to the same particle detector (the ones labeled "c" in the diagram – "c" is for "constructive interference" and "d" is for "destructive interference").
For example, for the right-hand side interferometer, when operating alone, entering electrons (labeled e−) become a quantum superposition of electrons taking the path v− and electrons taking path w− (in the diagram, the latter part of the w− path is labeled u−), but these constructively interfere and thus always exit in arm c−: ${\displaystyle |e^{-}\rangle \to {\frac {|v^{-}\rangle +i|w^{-}\rangle }{\sqrt {2}}}\to i|c^{-}\rangle .}$ Similarly, positrons (labeled e+) are always detected at c+. In the actual experiment the interferometers are arranged so that part of their paths overlap as shown in the diagram. If the amplitude for the particle in one arm, say w+, were to be obstructed by a second particle in w− that collides with it, only the v+ amplitude would reach the second beam splitter, and would split into arms c+ and d+ with equal amplitude. The detection of a particle in d+ would thus indicate the presence of the obstructing particle, but without an annihilation taking place. For this reason, this scheme was named interaction-free measurement. If (classically speaking) both the electron and the positron take the w paths in their respective interferometers, they will annihilate to produce two gamma rays: ${\displaystyle |w^{+}\rangle |w^{-}\rangle \to |\gamma \rangle |\gamma \rangle }$. There is a 1 in 4 chance of this happening.
We can express the state of the system, before the final beam splitters, as ${\displaystyle {\frac {1}{2}}\left(|v^{+}\rangle |v^{-}\rangle +i|v^{+}\rangle |w^{-}\rangle +i|w^{+}\rangle |v^{-}\rangle -|\gamma \rangle |\gamma \rangle \right).}$ Since the c detectors click for ${\displaystyle {\frac {|v\rangle +i|w\rangle }{\sqrt {2}}}}$ and the d detectors for ${\displaystyle {\frac {|v\rangle -i|w\rangle }{\sqrt {2}}}}$, this becomes: ${\displaystyle |e^{+}e^{-}\rangle \to {\frac {1}{4}}\left(3|c^{+}\rangle |c^{-}\rangle +|c^{+}\rangle |d^{-}\rangle +|d^{+}\rangle |c^{-}\rangle -|d^{+}\rangle |d^{-}\rangle -2|\gamma \rangle |\gamma \rangle \right).}$ Since the probabilities are the squares of the absolute values of these amplitudes, this means a 9 in 16 chance of each particle being detected in its respective c detector, a 1 in 16 chance for one particle being detected in its c detector and the other in its d detector, or for both being detected in their d detectors, and a 4 in 16 (1 in 4) chance that the electron and positron annihilate so neither is detected. Notice that a detection in both d detectors is represented by ${\displaystyle {\frac {|v^{+}\rangle -i|w^{+}\rangle }{\sqrt {2}}}{\frac {|v^{-}\rangle -i|w^{-}\rangle }{\sqrt {2}}}={\frac {1}{2}}\left(|v^{+}\rangle |v^{-}\rangle -i|v^{+}\rangle |w^{-}\rangle -i|w^{+}\rangle |v^{-}\rangle -|w^{+}\rangle |w^{-}\rangle \right).}$ This is not orthogonal to the expression above for the state before the final beam splitters. The scalar product between them is 1/4, showing that there is a 1 in 16 chance of this happening, paradoxically. The situation can be analyzed in terms of two simultaneous interaction-free measurements: from the point of view of the interferometer on the left, a click at d+ implies the presence of the obstructing electron in u−. Similarly, for the interferometer on the right, a click at d− implies the presence of the positron in u+.
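The amplitude bookkeeping above can be checked numerically by applying the final beam-splitter transformation |v⟩ → (|c⟩ + |d⟩)/√2, |w⟩ → −i(|c⟩ − |d⟩)/√2 (the inverse of the detector states quoted above) to each particle of the pre-beam-splitter state. A short Python sketch (the dictionary representation is our own):

```python
import numpy as np
from itertools import product

# Beam-splitter map per particle, inverted from |c> = (|v>+i|w>)/sqrt(2)
# and |d> = (|v>-i|w>)/sqrt(2).
BS = {
    "v": {"c": 1 / np.sqrt(2), "d": 1 / np.sqrt(2)},
    "w": {"c": -1j / np.sqrt(2), "d": 1j / np.sqrt(2)},
}

# State before the final beam splitters, keyed as (positron, electron) modes:
# (1/2)(|v+ v-> + i|v+ w-> + i|w+ v-> - |gamma gamma>)
state = {("v", "v"): 0.5, ("v", "w"): 0.5j, ("w", "v"): 0.5j, ("g", "g"): -0.5}

out = {}
for (p, m), amp in state.items():
    if (p, m) == ("g", "g"):                # annihilation term is unaffected
        out[("g", "g")] = out.get(("g", "g"), 0) + amp
        continue
    for dp, dm in product("cd", repeat=2):  # beam splitter on each particle
        out[(dp, dm)] = out.get((dp, dm), 0) + amp * BS[p][dp] * BS[m][dm]

probs = {k: abs(v) ** 2 for k, v in out.items()}  # detection probabilities
```

This reproduces the 9/16, 1/16, 1/16, 1/16, and 4/16 probabilities quoted in the text, including the paradoxical d+d− coincidence at 1/16.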
Indeed, every time a click is recorded at d+ (or d−), the other particle is found in u− (or u+, respectively). If we assume the particles are independent (described by local hidden variables), we conclude that they can never emerge simultaneously in d+ and d−. This would imply that they were in u+ and u−, which cannot occur because of the annihilation process. A paradox then arises because sometimes the particles do emerge simultaneously at d+ and d− (with probability p = 1/16). Quantum mechanically, the ${\displaystyle |d^{+}\rangle |d^{-}\rangle }$ term arises, in fact, from the nonmaximally entangled nature of the state just before the final beam splitters. A paper by Yakir Aharonov and colleagues in 2001[6] pointed out that the number of electrons or positrons in each branch is theoretically observable, and is 0 in the w branches and 1 in the v branches. And yet, the number of electron-positron pairs in any combination is also observable, and is not given by the product of the single-particle values. So we find that the number of ww pairs (both particles in their w path) is 0, each wv pair is 1, and the number in the vv combination is −1! They proposed a way that this could be observed physically by temporarily trapping the electron and the positron in the v paths in boxes and noting the effect of their mutual electrostatic attraction. They stated that one would actually find a repulsion between the boxes. In 2009 Jeff Lundeen and Aephraim Steinberg published work[3] in which they set up a "Hardy's paradox" system using photons. A 405 nm laser goes through a barium borate crystal to produce pairs of 810 nm photons, with polarizations orthogonal to each other. These then hit a beam splitter, which sends photons back to the barium borate crystal with 50% probability. The 405 nm pumping beam also bounces off a mirror and comes back to the barium borate. If both the 810 nm photons come back to the crystal, they are annihilated by interaction with the returning pump beam.
In any case, the beam of photons that make it through the crystal and the beam of photons that pass through the beam splitter are both separated into "vertically polarized" and "horizontally polarized" beams, which correspond to the "electrons" and the "positrons" of Hardy's scheme. The two "electron" beams (the photons with one kind of polarization) are united at a beam splitter and go to one or two detectors, and the same for the "positrons" (the other photons). Classically, no photons should be detected at what the authors call the "dark ports" because if they take both directions from the first beam splitter, they will interfere with themselves, whereas if they take only one path, then one cannot detect them both at the dark ports because of the paradox. By introducing a 20° rotation in polarization and using half-wave plates on certain beams, and then measuring coincidence rates at the detectors, they were able to make weak measurements that allowed them to calculate the "occupation" of different arms (paths) and combinations. As predicted by Aharonov and colleagues, they found a negative value for the combination in which both photons take the outer (no-annihilation) route. The results were not exactly as predicted and they attribute this to imperfect switching (annihilation) and interaction-free measurements. ## References 1. ^ Hardy, Lucien (1992). "Quantum mechanics, local realistic theories, and Lorentz-invariant realistic theories". Physical Review Letters 68 (20): 2981–2984. Bibcode:1992PhRvL..68.2981H. doi:10.1103/PhysRevLett.68.2981. PMID 10045577. 2. ^ Hardy, Lucien (1993). "Nonlocality for two particles without inequalities for almost all entangled states". Physical Review Letters 71 (11): 1665–1668. Bibcode:1993PhRvL..71.1665H. doi:10.1103/PhysRevLett.71.1665. PMID 10054467. 3. ^ a b Lundeen, J. S.; Steinberg, A. M. (2009). "Experimental Joint Weak Measurement on a Photon Pair as a Probe of Hardy's Paradox". 
Physical Review Letters 102 (2): 020404. arXiv:0810.4229. Bibcode:2009PhRvL.102b0404L. doi:10.1103/PhysRevLett.102.020404. 4. ^ Yokota, K.; Yamamoto, T.; Koashi, M.; Imoto, N. (2009). "Direct observation of Hardy's paradox by joint weak measurement with an entangled photon pair". New Journal of Physics 11 (3): 033011. arXiv:0811.1625. Bibcode:2009NJPh...11c3011Y. doi:10.1088/1367-2630/11/3/033011. 5. ^ Aharonov, Y.; Albert, D. Z.; Vaidman, L. (1988). "How the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100". Physical Review Letters. 6. ^
# Synchronization of complex human networks

## Abstract

The synchronization of human networks is essential for our civilization and understanding its dynamics is important to many aspects of our lives. Human ensembles were investigated, but in noisy environments and with limited control over the network parameters which govern the network dynamics. Specifically, research has focused predominantly on all-to-all coupling, whereas current social networks and human interactions are often based on complex coupling configurations. Here, we study the synchronization between violin players in complex networks with full and accurate control over the network connectivity, coupling strength, and delay. We show that the players can tune their playing period and delete connections by ignoring frustrating signals, to find a stable solution. These additional degrees of freedom enable new strategies and yield better solutions than are possible within current models such as the Kuramoto model. Our results may influence numerous fields, including traffic management, epidemic control, and stock market dynamics.

## Introduction

The synchronization of coupled ensembles appears in numerous fields, including biology1,2,3, astronomy4, psychology5,6, optics7,8,9, economics10, and politics; at different size scales, from the synchronization of planets4 to the synchronization of subatomic particles11; and in different time-scales, from slow-moving mechanical structures12,13 to coupled ultrafast lasers14,15. Synchronization is crucial for the life of all living species on our planet1,2, from the cellular level16,17,18 to the crowd synchrony of large groups19. In particular, the synchronization of human networks is essential for our civilization20,21,22 and can impact the physical and mental well-being of individuals in groups5,6.
Understanding the motivations, behavior, and basic parameters that govern the dynamics of human networks is important for many aspects of our lives, including stock market dynamics10, traffic management23, epidemic control24, and investigating the decision-making processes in different types of groups25,26,27,28,29. Additionally, studying the dynamics of human networks will help predict the consequences of introducing artificial intelligence into our highly connected world, where each node in a computer network will have complex decision-making ability30,31. Human ensembles and crowd synchrony19 have been investigated in recent years. Synchronized brokers in the stock market were found to earn more money10, the synchronization of crowd attention was shown to be a basic survival mechanism32,33, pedestrians walking on the London Millennium bridge synchronized their footsteps through the bridge vibrations to form macroscopic oscillations of the bridge above a critical number12, the collective movement of concert audiences showed vortexes and gas-like states34,35, the synchronized movements of dancers differ from those of nondancers36,37, music players are following each other according to their musical instrument38,39,40, and an audience clapping hands shows both synchronization and period doubling41,42. Synchronization in the broader sense of coordinating decision-making between humans on complex networks has also been studied43,44. However, all these seminal studies had limited control over the network parameters, namely, the connectivity of the network, coupling strength, and delay between individuals, and were subject to noisy environments. In particular, these studies focused mostly on all-to-all coupling, whereas current social networks and human interactions are often based on complex coupling configurations. 
To date, there are no studies of synchronization of rhythmic behavior of humans in complex networks, for example, one-dimensional, two-dimensional, scale-free, or small-world connectivity in a controlled environment45,46,47. Additionally, the influence of changing the coupling strength or the delay between two individuals is critical for the dynamics of the network48,49,50 and has not been studied in human networks thus far. We study the synchronization between professional violin players in complex human networks with full and accurate control over the network connectivity, coupling strength of each connection, and delay between players. We set 16 isolated electric violin players to repeatedly play a musical phrase. We collect the output from each violin and control the input to each player via noise cancellation headphones. The players cannot see or hear each other apart from what is heard in their headphones. All the players start playing the first phrase with the help of an external rhythmical beat, to verify that they all start with the same playing period and phase. The rhythmical beat is stopped after the first phrase, and the only instruction to the players is to try to synchronize their rhythm to what they hear in their headphones. A picture of the experimental setup is shown in Fig. 1, and the musical phrase is shown in the inset. We establish different network connectivities and introduce delayed coupling between the players while monitoring the phase, playing period, volume, and frequency of each player with a mixing system. Our system is the first for investigating human networks with full and accurate control over the network parameters, including, the connectivity, the coupling strength, and the delay of each connection. In addition, this is the first system where the parameters of the network can be changed in a controlled manner in real time, enabling the study of dynamical human networks. 
Our results reveal that the usual models for coupled networks such as the Kuramoto model51,52,53,54 cannot always be applied to human networks. We found that the players can change their playing period3,41,42,55 and can delete connections by completely ignoring frustrating signals56 to find a stable solution to the coupled network. These additional degrees of freedom enable new strategies and yield better solutions than are possible within the simple Kuramoto model. To analyze the dynamics of a human network and the influence of different parameters on its global behavior, we extended the Kuramoto model to take into account these important abilities of the human mind, which have been neglected thus far.

## Results

### Coupled violin players without delay

In our first experiment, we set the coupling between the players to zero, causing the players to hear only themselves. We measure the time it takes for each player to play the musical phrase and denote this time as the playing period of the player, Ti(t). In Fig. 2a, we show the phase of each player as a function of time, where blue denotes the beginning of the musical phrase and yellow denotes the end. In Fig. 2b, we show the playing period of all the players and the standard deviation of their period as a function of time. The opening phrase, accompanied by an external rhythmical beat, verified that all the players start with the same playing period; after the first phrase, the beat stopped, and the playing period of each player drifts towards the player's natural one. The playing periods of the players spread apart as a function of time, reflecting that the players cannot hear or see each other. Then, we introduce coupling between the different players with our mixing system. The coupling strength is defined as the ratio of the volume of the coupled violin to the volume of the player's own violin, while keeping the total volume that each player hears constant.
The volume level is monitored to make sure it stays within the linear response range of human hearing57. We compare two configurations for the players, a one-dimensional open chain, which is a network with the lowest possible connectivity, and an all-to-all coupling, which is a network with the highest possible connectivity. In each configuration, we start with a coupling strength of 0.5 and reduce it linearly to zero over a period of 4 min. We measure the in-phase order parameter in the network as a function of the coupling strength and present the results in Fig. 2c. The in-phase order parameter is calculated by $$\langle \cos ({\varphi }_{i}-{\varphi }_{j})\rangle$$, where φi is the phase of the ith player, φj is the phase of its coupled neighbor, and we average over all connections. Similar to other networks, the order parameter of the all-to-all configuration remains high at lower coupling strengths compared to the one-dimensional configuration. (The order parameter does not reach zero since the playing time is limited to 4 min to keep the players focused.)

### Two coupled violin players with delay

Next, we set the coupling strength to 0.5, which is strong enough to ensure synchronization, as shown by Fig. 2c. Then, we impose a delay on the coupling between the players, starting from zero delay and increasing it linearly, according to d(t) = 0.0332t, where d is the delay and t is time, so after 120 s the delay equals 4 s, which is the starting playing period of the musical phrase. The delay prevents the players from synchronizing with each other, which leads them to shift from an in-phase synchronization to other states of synchronization58. We demonstrate these states of synchronization by examining the synchronization of two coupled violin players as a function of the delay, schematically shown in Fig. 3. In Fig. 3a, we present the phase of each player in the musical phrase by a color code as a function of time in one representative measurement.
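Both order parameters used in the analysis, the in-phase ⟨cos(φi − φj)⟩ defined above and its out-of-phase counterpart ⟨sin(φi − φj)⟩, are averages over the network's connections. A minimal sketch (the function name and the edge-list representation are illustrative assumptions):

```python
import numpy as np

def order_parameters(phases, edges):
    """Return the in-phase <cos(phi_i - phi_j)> and out-of-phase
    <sin(phi_i - phi_j)> order parameters, averaged over the listed
    connections (i, j) of the network."""
    diff = np.array([phases[i] - phases[j] for i, j in edges])
    return float(np.cos(diff).mean()), float(np.sin(diff).mean())
```

For a fully in-phase ensemble the first value is 1, and it decays toward 0 as the phases decorrelate.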
We determine that player i is following player j once they have the same playing period, namely Ti(t) = Tj(t), and their relative phase, during at least one musical phrase, satisfies: $${\varphi }_{j}-{\varphi }_{i}=2\pi \frac{d(t)}{{T}_{i}(t)}.$$ (1) When Eq. (1) is satisfied, player i is playing in synchrony with player j as player j sounds in player i's headphones. If Eq. (1) is not satisfied, even if the relative phase between them is constant in time, they are not following each other. This can occur when both players are following a third player while ignoring each other. In Fig. 3b, we show the averaged period of all the players as a function of the delay and time together with the out-of-phase order parameter, $$\langle \sin ({\varphi }_{i}-{\varphi }_{j})\rangle$$. The results reveal three states of synchronization: initially, the delay is zero, so the two players are perfectly synchronized in phase. With the introduction of the delay, they increase their playing period (play slower) to keep the delay small relative to the duration of each note. This state is emphasized on the left side of Fig. 3a and is indicated by the increased playing period, presented in Fig. 3b. This effect was also observed when playing over the Internet with a small delay59. When the delay is further increased, the players cannot maintain an in-phase synchronization state, as one of them starts to ignore the other and returns to its original playing period. In our case, player #1 ignores player #2, while player #2 still follows player #1, which is emphasized in the middle part of Fig. 3a. When the delay is increased to approximately half of the period, an out-of-phase synchronization emerges that satisfies both players, since φj − φi = φi − φj = 2πd(t)/Ti,j(t), so they are following each other. In this out-of-phase synchronization58, when player i is at the middle of the musical phrase, player j is at the beginning or the end of the phrase, and vice versa.
This state is highly stable; therefore, when the delay is further increased, the players increase their playing period to ensure that the delay is always half the playing period. This is shown in Fig. 3b, where the out-of-phase order parameter is presented by the red curve. Once this order parameter approaches unity, it stays there, and the playing period increases linearly with the delay. This is also observed in the checkerboard pattern on the right side of Fig. 3a. To verify that the delay is changing slowly enough, we measure the coupled violin players when the delay is changed at half the rate, according to d(t) = 0.0166t, obtaining similar results. This indicates that, although the delay is constantly changing, the rate of change is slow enough that at each point in time the network can be considered quasistatic. In such a system, the players are not aware of the fact that the delay is changing and react only to its current value.

### Even number of coupled violin players

When increasing the number of coupled violin players to 4, 6, or 8, as shown in Fig. 4a, d, and g, they follow the same behavior as the delay is increased: we first observe an in-phase synchronization with an increase in the playing period; next, each player spontaneously decides to ignore one of its inputs. In this stage, we observe two states of synchronization, a vortex state or an arrowhead state. If all the players ignore the same side and follow the other side, they create a vortex state of synchronization where the phase increases monotonically, as seen in Fig. 4h, while if some players choose to follow the player on one side and other players choose to follow the player on the other side, they create an arrowhead-shaped state of synchronization, as seen in Fig. 4e.
Finally, when the delay reaches approximately half of the average playing period, a stable and highly ordered state of out-of-phase synchronization emerges, as evidenced by the checkerboard pattern emphasized at the right side of Fig. 4b, e, and h, together with the linear increase in the average playing period as a function of the delay and the out-of-phase order parameter, which approaches unity, as seen in Fig. 4c, f, and i. These results are identical whether the players are organized in open- or closed-chain configurations. In the case of eight players, we also observe that the players are divided into two clusters, players 1–3 and players 4–7, while player 8 is somewhere between them60. The second cluster finds the out-of-phase synchronization state faster compared to the first cluster, so the dynamic of the second cluster is shown in Fig. 4i.

### Odd number of coupled violin players

The total accumulated phase for an even number of violin players in a state of out-of-phase synchronization is an even integer multiplied by π, and is therefore consistent with the periodic boundary conditions of the loop. For odd numbers of violin players, this is not the case, and therefore the state of out-of-phase synchronization is no longer a stable solution61,62,63,64. In such cases, the players spontaneously choose to ignore one of the connections, which breaks the chain and forms an open chain where the out-of-phase synchronization state is possible. Thus, the players change the connectivity of the configuration into one with a stable solution. In Fig. 5, we present the results for three and five coupled violin players. When the delay is low, the players remain in an in-phase synchronization, as shown on the left side of Fig. 5a, c, while increasing the playing period, as shown in Fig. 5b, d. When we increase the delay, the players choose either a vortex state, as shown in Fig. 5a, or an arrowhead state, as shown in Fig. 5c.
When the delay reaches half of the playing period, the players prefer the state of out-of-phase synchronization while ignoring one of the connections, as shown on the right side of Fig. 5a, c. When this state is achieved, it is highly stable, as seen by the out-of-phase order parameter shown in Fig. 5b, d, calculated for open-chain connectivity. When we increase the delay further, the players increase their playing period, keeping it twice the delay, to maintain the out-of-phase synchronization state, as shown in Fig. 5b, d, similar to the dynamics of configurations with an even number of players. For nine or more coupled players, the violin players can find an approximate out-of-phase synchronization state without breaking the connection by shifting each player by 2π/9 in addition to the out-of-phase synchronization. The combination of an out-of-phase state with a vortex state is shown on the right side of Fig. 6a. We evaluate the out-of-phase order parameter, which reaches 0.9 instead of unity due to this vortex, as shown in Fig. 6b. Nevertheless, this state is as stable as the regular out-of-phase states, as evidenced by the increasing playing period as a function of the delay while keeping the order parameter at 0.9. Here, similar to the case of eight violin players, the players are divided into two clusters, where one cluster found the out-of-phase synchronization state faster than the other60. In Fig. 6b, we show both clusters, where the playing period of players 1–3 and 9 is denoted by the yellow dots and the playing period of players 4–8 is denoted by the blue dots. We see that when both clusters found the state of out-of-phase synchronization, they converged into a single cluster.

### Two-dimensional lattice configurations

Finally, we measure the synchronization of the players when arranging them in square and triangular lattice configurations while increasing the delay.
During the experiment, we monitor the relative phase between each pair of players and determine whether they are coupled, similarly to the method described for the one-dimensional configurations and according to Eq. (1). The results are shown in Fig. 7, where the measured results of the square lattice are shown in Fig. 7a and the measured results of the triangular lattice are shown in Fig. 7b. When the delay is low, the players of the square lattice configuration are synchronized in phase, and when we increase the delay, they create vortex states until reaching the state of out-of-phase synchronization, which is a stable solution for the square lattice configuration. In the triangular configuration, the players start with in-phase synchronization, and when we increase the delay, they cannot find a stable solution61,65, so they ignore some of the connections and reduce the connectivity of the network to one based on square motifs or open chains. A reduced network that is based on square motifs or open chains follows the same dynamics as any chain with an even number of players and thus can find the highly stable state of out-of-phase synchronization. This result is shown by the reduced network on the right side of Fig. 7b. When repeating the experiment, the players converge to a different solution every time, as shown in Fig. 7c–e, presenting solutions that include rings of four and six players and the breaking of the network into smaller coupled clusters. Once the players find a stable solution, they tend to stay in it, while in some rare cases they switch from one stable solution to another.

### Numerical models

To develop a model for coupled human networks, we extend the simple Kuramoto model for coupled oscillators51,52,53,54 to include broad-bandwidth oscillators and the ability of each oscillator to ignore some of the connections.
We start by simulating coupled violin players with ring-like connectivity according to:

$$\frac{\partial \varphi_i}{\partial t} = \omega_i + \kappa \sum_j \sin\left(\varphi_j(t-\Delta t) - \varphi_i(t)\right),$$ (2)

where φi is the phase of the ith violin player, ωi is the eigenfrequency of the player, κ = 0.2 is the coupling strength, and Δt is the delay between the players. We simulate the dynamics of different numbers of violin players and study the phase of each player compared to the others. We randomly choose the eigenfrequencies between ω = 0.25 and 0.3 Hz with a uniform distribution, corresponding to a playing period of 3.3–4 s. We set the delay as a function of time according to d(t) = 0.0332t, so after 120 s the delay reaches 4 s. Representative results of four coupled players are shown in Fig. 8a. At first, the players are coupled in phase, and as the delay increases, the playing period likewise increases until a state of out-of-phase synchronization is achieved. This is also shown by the out-of-phase order parameter, which approaches unity at a delay of ~2 s. However, since the oscillators are narrow-band, they cannot shift their playing period by more than 15%. Therefore, the players cannot maintain the out-of-phase state of synchronization when the delay is further increased. Indeed, at a delay of ~3 s, the players leave this state and return to the state of in-phase synchronization. These results do not agree with the measured results, where the players adjust their playing period by up to a factor of 3 to maintain the state of out-of-phase synchronization. We assume that humans have broad bandwidth, which enables them to change their playing period over a wide range3,66. To include this broad bandwidth of humans in the model, we added an imaginary parameter to Eq.
(2) as follows:

$$\frac{\partial \varphi_i}{\partial t} = \omega_i + \kappa \sum_j \sin\left(\varphi_j(t-\Delta t) - \varphi_i(t)\right) + \eta,$$ (3)

where η is the bandwidth factor. This parameter serves as an imaginary frequency leading to exponential decay in time. Therefore, it lowers the Q-factor of the cavity and increases the bandwidth. We repeated the simulations with η = 1i and present the results in Fig. 8b. These results are in better agreement with the measured results for an even number of players than the simple Kuramoto model. The players find the out-of-phase synchronization state and maintain it by changing their playing period linearly with the delay. This is also evident from the order parameter, which remains close to unity. For odd numbers of players, the Kuramoto model failed to reproduce the measured results and showed only vortex states of synchronization61,65. Representative results for three coupled violin players are shown in Fig. 9, where Fig. 9a shows three coupled players with the regular Kuramoto model, and Fig. 9b shows the same players with the broad-bandwidth Kuramoto model. Indeed, the players do not find the out-of-phase synchronization state, which is frustrated for three coupled players, as evidenced by the order parameter, which does not exceed 0.8. Therefore, we extend the model to include the ability to delete contradicting connections. For any player with contradicting inputs, we replace the sum in Eq. (3) with a single term from one neighbor. Representative results are shown in Fig. 9c. Here, we see that although the number of players is odd, the players find the out-of-phase synchronization state by ignoring one of the links. In this case, they ignore the connection between player 1 and player 3. This extended model agrees with the measured results for odd numbers of coupled violin players.
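As an illustrative sketch (not the authors' code), the delayed Kuramoto dynamics of Eq. (2) and a strategy-based choice of which neighbor to keep can be simulated as below. The integrator, step size, integration time, and the names `simulate_ring`, `keep_one`, and `omega_range` are our own assumptions; the bandwidth term η of Eq. (3) is omitted here.

```python
import math
import random

def simulate_ring(n, kappa=0.2, delay=0.0, dt=0.01, t_end=300.0,
                  omega_range=(0.25, 0.30), seed=0):
    """Euler integration of Eq. (2): delayed Kuramoto oscillators on a ring.

    A minimal sketch of the model described in the text; the step size
    and integration scheme are our own choices, not the paper's.
    """
    rng = random.Random(seed)
    omega = [2 * math.pi * rng.uniform(*omega_range) for _ in range(n)]
    lag = max(int(round(delay / dt)), 0)
    phi = [[rng.uniform(0, 2 * math.pi) for _ in range(n)]]
    for t in range(1, int(t_end / dt)):
        past = phi[max(t - 1 - lag, 0)]   # neighbor phases, seen with delay
        cur = phi[-1]
        phi.append([
            cur[i] + dt * (omega[i] + kappa * (
                math.sin(past[(i - 1) % n] - cur[i])
                + math.sin(past[(i + 1) % n] - cur[i])))
            for i in range(n)
        ])
    return phi

def in_phase_order(phases):
    """Kuramoto order parameter |<exp(i*phi)>|: 1 means in-phase sync."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

def keep_one(neighbors, my_phase, my_period, strategy, rng=random):
    """Index of the single neighbor to keep when inputs contradict.

    `neighbors` is a list of (phase, period) pairs; the three strategies
    mirror the ones compared in the text.
    """
    if strategy == "period":
        # keep the neighbor whose playing period is closest to our own
        return min(range(len(neighbors)),
                   key=lambda j: abs(neighbors[j][1] - my_period))
    if strategy == "phase":
        # keep the neighbor whose phase is closest to our own (mod 2*pi)
        def dist(j):
            d = (neighbors[j][0] - my_phase) % (2 * math.pi)
            return min(d, 2 * math.pi - d)
        return min(range(len(neighbors)), key=dist)
    if strategy == "random":
        return rng.randrange(len(neighbors))
    raise ValueError(strategy)
```

With identical eigenfrequencies and no delay, a three-player ring converges to in-phase synchronization (order parameter near 1), consistent with the stability of the in-phase state for an odd ring without delay.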
We compare three different strategies for choosing which connections to keep when a player encounters contradicting inputs from several coupled neighbors: keeping a similar playing period, keeping a similar phase, or choosing at random. In the first strategy, the player follows the coupled player whose playing period is closest to its own. In the second, the player follows the coupled player whose phase is closest to its own. In the random strategy, the player randomly chooses which player to keep and which to delete, regardless of their phase or playing period. We simulate the dynamics of a triangular network of coupled players when we start with zero delay and linearly increase it. With all three strategies, the system finds an out-of-phase synchronization state by deleting connections and reducing the network connectivity to one based on motifs with an even number of players. We present typical reduced networks in Fig. 10 following each of the three strategies. These calculated results reveal that all three strategies lead to the same dynamics. As long as each player can delete connections, the network changes its connectivity until finding a stable out-of-phase synchronization state. Therefore, the specific strategy each player uses for choosing which inputs to follow has no role in the macroscopic network dynamics of coupled violin players.

## Discussion

To conclude, we investigate the synchronization of rhythmic behavior of humans in networks with different types of connectivity, where all the parameters of the networks are under control. We measure the phase and synchronization of coupled violin players in different network configurations and when introducing delay between the coupled players. We discover that human networks differ from previously studied networks in the ability of each player to adjust its playing period and to change the network connectivity by ignoring a coupled player, effectively deleting the connection.
This ability serves as a unique and efficient mechanism to remove frustrating signals that hinder synchronization. When we couple an even number of players on a ring, the players find a stable out-of-phase synchronization state and tune their playing period accordingly as the delay increases. When we couple an odd number of players on a ring, the players change their connectivity and then adjust their playing period. We conclude that the ability of humans to identify conflicts in inputs and to adjust their response accordingly, which is well known56, leads to unique dynamics when they are situated in networks. This research may impact numerous fields, including economics, decision-making research, epidemic spreading, information transfer modeling, traffic control, and more.

## Methods

### Experimental setup

We set 16 isolated electric violin players to repeatedly play a musical phrase. The players play on Armando VL-D810 electric violins. We collect the output from each violin into the Focusrite Clarett OctoPre sound system and control it with MAX/MSP software. The players cannot see or hear each other apart from what is heard in their noise-cancellation headphones, Shure SE-215, which are connected to the output of the sound system. During the experiment, we record all 16 players with the 16-channel sound system.

### Composing the musical phrase

The notes in the musical phrase were chosen with several considerations in mind. First, it is important that different notes do not repeat, making it easier for players to recognize where their coupled players are located in the musical phrase. Second, to simplify the analysis by preventing mixing with overtones, we keep the entire musical phrase in the same octave. Finally, we aim for a cyclic musical phrase without a clear beginning; therefore, a simple arpeggio is not suitable. Nevertheless, we repeat all the experiments with other musical phrases and obtain similar results to verify our findings.
### Data analysis

The output file is analyzed off-line in Matlab by Fourier transforming the signal in a moving window of 100 ms, which allows us to identify the different notes and the timing of each note, in addition to performing a manual consistency check. Next, we calculate the playing period of each player and its location within the musical phrase, which gives the player's phase. By comparing the phases of two coupled players, we determine if they are following each other, if one is ignoring the other, or if both of them are ignoring each other. We determine that a connection between two players is maintained when the phase difference between them is equal to the delay over the playing period, according to Eq. (1). We performed three full experimental sessions on three different dates and two more partial experimental sessions on two other dates. During each session, we repeat every configuration up to 4 times. Since we have 16 players, we repeat the same configuration with different players during the same run. Therefore, the configurations of 2, 3, 4, 5, and 6 players are repeated 8, 5, 4, 2, and 2 times, respectively, during the same run with different violin players. In each experiment, we usually find faulty configurations that cannot be used due to earphone malfunctions, software problems, or players who did not understand the instructions and ignored what they heard in their earphones. It is easy to identify these faulty configurations, since a player plays without synchronization even when the coupling strength is high and there is no delay.

### Participants' consent

All players signed a participant consent form to take part in the research and agreed to the use of all the data and pictures.

### Third-party images or previously published figures

We confirm that our manuscript does not contain any third-party images or any previously published figures.
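The moving-window Fourier analysis and the connection criterion can be sketched as below. Eq. (1) itself is not reproduced in this excerpt, so `is_coupled` encodes only the text's description (phase difference equal to the delay over the playing period); the sampling rate, window length, tolerance, and all function names are illustrative assumptions, not the authors' code.

```python
import math

def dominant_freq(window, fs):
    """Dominant frequency (Hz) of one window via a plain DFT.

    A toy stand-in for the moving-window Fourier analysis used to
    identify notes; a real analysis would use an FFT.
    """
    n = len(window)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):            # skip DC, stay below Nyquist
        re = sum(s * math.cos(2 * math.pi * k * t / n)
                 for t, s in enumerate(window))
        im = sum(s * math.sin(2 * math.pi * k * t / n)
                 for t, s in enumerate(window))
        if re * re + im * im > best_p:
            best_k, best_p = k, re * re + im * im
    return best_k * fs / n

def note_track(signal, fs, window_s=0.1):
    """Dominant frequency in each consecutive 100 ms window."""
    w = int(window_s * fs)
    return [dominant_freq(signal[i:i + w], fs)
            for i in range(0, len(signal) - w + 1, w)]

def is_coupled(phase_diff, delay, period, tol=0.05):
    """Decide whether one player is following another.

    Following the text's description of Eq. (1): the connection is
    maintained when the phase difference (in cycles) matches the delay
    expressed as a fraction of the playing period. `tol` (in cycles)
    is an arbitrary tolerance of our own choosing.
    """
    expected = (delay / period) % 1.0
    mismatch = abs((phase_diff - expected + 0.5) % 1.0 - 0.5)
    return mismatch < tol
```

On a synthetic signal that switches from one note to another, `note_track` recovers the note in each window, which is the information needed to compute the playing period and phase of each player.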
## Data availability

The datasets generated during the experiments, the analyzed data generated during the current study, and the data generated by the numerical simulations are available online at https://figshare.com/projects/Synchronization_of_complex_human_networks/81590.

## Code availability

The code for analyzing the data, the numerical simulation code, and the code for performing the experiments are available online at https://figshare.com/projects/Synchronization_of_complex_human_networks/81590.

## References

1. Sumpter, D. J. The principles of collective animal behaviour. Philos. Trans. R. Soc. Lond. Ser. B 361, 5–22 (2006).
2. Conradt, L. & List, C. Group decisions in humans and animals: a survey. Philos. Trans. R. Soc. Lond. Ser. B 364, 719–742 (2009).
3. Ott, E. & Antonsen Jr, T. M. Frequency and phase synchronization in large groups: low dimensional description of synchronized clapping, firefly flashing, and cricket chirping. Chaos 27, 051101 (2017).
4. Strogatz, S. Sync: The Emerging Science of Spontaneous Order (Penguin UK, 2004).
5. Wasserman, S. & Faust, K. Social Network Analysis: Methods and Applications, Vol. 8 (Cambridge Univ. Press, 1994).
6. Morris, M. E. Social networks as health feedback displays. IEEE Internet Comput. 9, 29–37 (2005).
7. Roy, R., Murphy Jr, T., Maier, T., Gills, Z. & Hunt, E. Dynamical control of a chaotic laser: experimental stabilization of a globally coupled system. Phys. Rev. Lett. 68, 1259 (1992).
8. DeShazer, D. J., Breban, R., Ott, E. & Roy, R. Detecting phase synchronization in a chaotic laser array. Phys. Rev. Lett. 87, 044101 (2001).
9. Fridman, M., Nixon, M., Davidson, N. & Friesem, A. A. Passive phase locking of 25 fiber lasers. Opt. Lett. 35, 1434–1436 (2010).
10. Saavedra, S., Hagerty, K. & Uzzi, B. Synchronicity, instant messaging, and performance among financial traders. Proc. Natl Acad. Sci. USA 108, 5296–5301 (2011).
11. Strogatz, S. H.
Sync: How Order Emerges From Chaos in the Universe, Nature, and Daily Life (Hachette UK, 2012).
12. Strogatz, S. H., Abrams, D. M., McRobie, A., Eckhardt, B. & Ott, E. Theoretical mechanics: crowd synchrony on the millennium bridge. Nature 438, 43 (2005).
13. Arane, T., Musalem, A. K. & Fridman, M. Coupling between two singing wineglasses. Am. J. Phys. 77, 1066–1067 (2009).
14. Schibli, T. et al. Attosecond active synchronization of passively mode-locked lasers by balanced cross correlation. Opt. Lett. 28, 947–949 (2003).
15. Fridman, M., Pugatch, R., Nixon, M., Friesem, A. A. & Davidson, N. Measuring maximal eigenvalue distribution of Wishart random matrices with coupled lasers. Phys. Rev. E 85, 020101 (2012).
16. Davis, P. K., Ho, A. & Dowdy, S. F. Biological methods for cell-cycle synchronization of mammalian cells. Biotechniques 30, 1322–1331 (2001).
17. Oleskin, A. Network structures in biological systems. Biol. Bull. Rev. 4, 47–70 (2014).
18. Petkoski, S., Palva, J. M. & Jirsa, V. K. Phase-lags in large scale brain synchronization: methodological considerations and in-silico analysis. PLoS Comput. Biol. 14, e1006160 (2018).
19. Buhl, J. et al. From disorder to order in marching locusts. Science 312, 1402–1406 (2006).
20. Javarone, M. A. & Marinazzo, D. Evolutionary dynamics of group formation. PLoS ONE 12, e0187960 (2017).
21. Werner, B. & Mcnamara, D. E. Dynamics of coupled human-landscape systems. Geomorphology 91, 393–407 (2007).
22. Krause, J., Ruxton, G. D. & Krause, S. Swarm intelligence in animals and humans. Trends Ecol. Evol. 25, 28–34 (2010).
23. Li, X.-G., Gao, Z.-Y., Li, K.-P. & Zhao, X.-M. Relationship between microscopic dynamics in traffic flow and complexity in networks. Phys. Rev. E 76, 016110 (2007).
24. Porfiri, M., Stilwell, D. J. & Bollt, E. M. Synchronization in random weighted directed networks. IEEE Trans. Circuits Syst. I Regul. Pap. 55, 3170–3177 (2008).
25. Sumpter, D.
J., Zabzina, N. & Nicolis, S. C. Six predictions about the decision making of animal and human groups. Manag. Decis. Econ. 33, 295–309 (2012).
26. Smaldino, P. E. & Richerson, P. J. The origins of options. Front. Neurosci. 6, 50 (2012).
27. Conradt, L. & Roper, T. J. Consensus decision making in animals. Trends Ecol. Evol. 20, 449–456 (2005).
28. Sueur, C. & Pele, M. Social network and decision-making in primates: a report on Franco-Japanese research collaborations. Primates 57, 327–332 (2016).
29. Becker, J., Brackbill, D. & Centola, D. Network dynamics of social influence in the wisdom of crowds. Proc. Natl Acad. Sci. USA 114, E5070–E5076 (2017).
30. Russell, S. J. & Norvig, P. Artificial Intelligence: A Modern Approach (Pearson Education Limited, Malaysia, 2016).
31. Krogh, A. & Vedelsby, J. Neural network ensembles, cross validation, and active learning. Adv. Neural Inf. Process. Syst. 7, 231–238 (1995).
32. Gallup, A. C. et al. Visual attention and the acquisition of information in human crowds. Proc. Natl Acad. Sci. USA 109, 7245–7250 (2012).
33. Sun, Z., Yu, W., Zhou, J. & Shen, M. Perceiving crowd attention: gaze following in human crowds with conflicting cues. Atten. Percept. Psychophys. 79, 1039–1049 (2017).
34. Silverberg, J. L., Bierbaum, M., Sethna, J. P. & Cohen, I. Collective motion of humans in mosh and circle pits at heavy metal concerts. Phys. Rev. Lett. 110, 228701 (2013).
35. Méndez-Valderrama, J. F., Kinkhabwala, Y. A., Silver, J., Cohen, I. & Arias, T. Density-functional fluctuation theory of crowds. Nat. Commun. 9, 3538 (2018).
36. Miura, A., Kudo, K., Ohtsuki, T. & Kanehisa, H. Coordination modes in sensorimotor synchronization of whole-body movement: a study of street dancers and non-dancers. Hum. Mov. Sci. 30, 1260–1271 (2011).
37. Boker, S. M., Covey, E. S., Tiberio, S. S. & Deboeck, P. R.
Synchronization in dancing is not winner-takes-all: ambiguity persists in spatiotemporal symmetry between dancers. In Proc. North American Association for Computational, Social, and Organizational Science (Notre Dame, IN, 2005).
38. Timmers, R., Endo, S., Bradbury, A. & Wing, A. M. Synchronization and leadership in string quartet performance: a case study of auditory and visual cues. Front. Psychol. 5, 645 (2014).
39. Goebl, W. & Palmer, C. Synchronization of timing and motion among performing musicians. Music Percept. Interdiscip. J. 26, 427–438 (2009).
40. Wing, A. M., Endo, S., Bradbury, A. & Vorberg, D. Optimal feedback correction in string quartet synchronization. J. R. Soc. Interface 11, 20131125 (2014).
41. Néda, Z., Ravasz, E., Brechet, Y., Vicsek, T. & Barabási, A.-L. Self-organizing processes: the sound of many hands clapping. Nature 403, 849 (2000).
42. Néda, Z., Ravasz, E., Vicsek, T., Brechet, Y. & Barabási, A.-L. Physics of the rhythmic applause. Phys. Rev. E 61, 6987 (2000).
43. Judd, S., Kearns, M. & Vorobeychik, Y. Behavioral dynamics and influence in networked coloring and consensus. Proc. Natl Acad. Sci. USA 107, 14978–14982 (2010).
44. Kearns, M., Suri, S. & Montfort, N. An experimental study of the coloring problem on human subject networks. Science 313, 824–827 (2006).
45. Strogatz, S. H. Exploring complex networks. Nature 410, 268 (2001).
46. Strogatz, S. H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (CRC Press, 2018).
47. Alderisio, F., Fiore, G., Salesse, R. N., Bardy, B. G. & di Bernardo, M. Interaction patterns and individual dynamics shape the way we move in synchrony. Sci. Rep. 7, 6846 (2017).
48. Cohen, A. B. et al. Dynamic synchronization of a time-evolving optical network of chaotic oscillators. Chaos 20, 043142 (2010).
49. Sorrentino, F. & Ott, E. Using synchronism of chaos for adaptive learning of time-evolving network topology.
Phys. Rev. E 79, 016201 (2009).
50. Sorrentino, F., Barlev, G., Cohen, A. B. & Ott, E. The stability of adaptive synchronization of chaotic systems. Chaos 20, 013103 (2010).
51. Kuramoto, Y. & Nishikawa, I. Statistical macrodynamics of large dynamical systems. Case of a phase transition in oscillator communities. J. Stat. Phys. 49, 569–605 (1987).
52. Kuramoto, Y. Chemical Oscillations, Waves, and Turbulence, Vol. 19 (Springer Science & Business Media, 2012).
53. Strogatz, S. H. From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Phys. D 143, 1–20 (2000).
54. Acebrón, J. A., Bonilla, L. L., Vicente, C. J. P., Ritort, F. & Spigler, R. The Kuramoto model: a simple paradigm for synchronization phenomena. Rev. Mod. Phys. 77, 137 (2005).
55. Taylor, D., Ott, E. & Restrepo, J. G. Spontaneous synchronization of coupled oscillator systems with frequency adaptation. Phys. Rev. E 81, 046214 (2010).
56. Botvinick, M. M., Cohen, J. D. & Carter, C. S. Conflict monitoring and anterior cingulate cortex: an update. Trends Cogn. Sci. 8, 539–546 (2004).
57. Council, N. R. et al. Hearing Loss: Determining Eligibility for Social Security Benefits (National Academies Press, 2004).
58. Tradonsky, C. et al. Conversion of out-of-phase to in-phase order in coupled laser arrays with second harmonics. Photonics Res. 3, 77–81 (2015).
59. Chafe, C., Caceres, J.-P. & Gurevich, M. Effect of temporal separation on synchronization in rhythmic performance. Perception 39, 982–992 (2010).
60. Sorrentino, F., Pecora, L. M., Hagerstrom, A. M., Murphy, T. E. & Roy, R. Complete characterization of the stability of cluster synchronization in complex dynamical networks. Sci. Adv. 2, e1501737 (2016).
61. Nixon, M. et al. Synchronized cluster formation in coupled laser networks. Phys. Rev. Lett. 106, 223901 (2011).
62. D'Huys, O., Vicente, R., Erneux, T., Danckaert, J. & Fischer, I.
Synchronization properties of network motifs: influence of coupling delay and symmetry. Chaos 18, 037116 (2008).
63. Takamatsu, A. et al. Spatiotemporal symmetry in rings of coupled biological oscillators of Physarum plasmodial slime mold. Phys. Rev. Lett. 87, 078102 (2001).
64. Pal, V. et al. Phase locking of even and odd number of lasers on a ring geometry: effects of topological-charge. Opt. Express 23, 13041–13050 (2015).
65. Nixon, M., Ronen, E., Friesem, A. A. & Davidson, N. Observing geometric frustration with thousands of coupled lasers. Phys. Rev. Lett. 110, 184102 (2013).
66. Wang, W. & Ghosh, B. K. Stability analysis on Kuramoto model of coupled oscillators. IFAC Proc. Vol. 41, 514–518 (2008).

## Acknowledgements

We thank the Joseph Fetter Museum of Nanotechnology in the Institute of Nanotechnology and Advanced Materials at the Bar-Ilan University for supporting this research.

## Author information

### Contributions

S.S. analyzed the results and composed the musical phrase, A.W. coordinated the violin players, I.S. organized the location for the experiments, H.D. helped in coordinating all the participants and equipment, E.S. wrote the computer code for running the experiment, designed and installed the sound system, and performed the experiments, D.W. supervised the musical part of the research, N.D. helped in designing the experiment, analyzing the results, and writing the manuscript, and M.F. conceived the idea, supervised all the experiments, analyzed the results, and wrote the manuscript.

### Corresponding author

Correspondence to Moti Fridman.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Peer review information

Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Reprints and Permissions

Shahal, S., Wurzberg, A., Sibony, I. et al. Synchronization of complex human networks. Nat Commun 11, 3854 (2020). https://doi.org/10.1038/s41467-020-17540-7
dim1_data integrates a function which is specified numerically at four or more points, over the whole of its specified range, using third-order finite difference formulae with error estimates, according to a method due to Gill and Miller (1972).

For full information please refer to the NAG Library document for d01ga https://www.nag.com/numeric/nl/nagdoc_28.7/flhtml/d01/d01gaf.html

Parameters

x : float, array-like, shape (n)
The values of the independent variable, i.e., the x_i, for i = 1, …, n; they must be distinct and in either ascending or descending order.

y : float, array-like, shape (n)
The values of the dependent variable y_i at the points x_i, for i = 1, …, n.

Returns

ans : float
The estimated value of the integral.

er : float
An estimate of the uncertainty in ans.

Raises

NagValueError
Raised when a constraint on the input is violated: n must be at least 4, and the values of x must be distinct and in either ascending or descending order.

Notes

dim1_data evaluates the definite integral of y with respect to x, from x_1 to x_n, where the function y is specified at the n points x_1, …, x_n, which should be all distinct, and in either ascending or descending order. The integral between successive points is calculated by a four-point finite difference formula centred on the interval concerned, except in the case of the first and last intervals, where four-point forward and backward difference formulae respectively are employed. If n is less than 4, the function fails. An approximation to the truncation error is integrated and added to the result. It is also returned separately to give an estimate of the uncertainty in the result. The method is due to Gill and Miller (1972).

References

Gill, P E and Miller, G F, 1972, An algorithm for the integration of unequally spaced data, Comput. J. (15), 80–83
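The idea behind the Gill–Miller formulae can be sketched in a simplified form: over each interval, integrate the cubic (Lagrange) interpolant through four consecutive points. The sketch below is not the NAG implementation — `integrate_data` is our own illustrative name, and it omits the routine's error estimate `er`:

```python
def integrate_data(x, y):
    """Integrate tabulated data by integrating, over each interval,
    the cubic Lagrange interpolant through four consecutive points,
    in the spirit of Gill and Miller (1972). Unlike the NAG routine,
    no truncation-error estimate is produced."""
    n = len(x)
    if n < 4:
        raise ValueError("need at least four points")
    total = 0.0
    for j in range(n - 1):
        # four consecutive points bracketing the interval [x[j], x[j+1]],
        # clamped at the ends of the data (forward/backward at the edges)
        k = min(max(j - 1, 0), n - 4)
        xs, ys = x[k:k + 4], y[k:k + 4]
        a, b = x[j], x[j + 1]
        for i in range(4):
            # expand the i-th Lagrange basis polynomial into monomials
            poly = [1.0]                      # ascending-power coefficients
            denom = 1.0
            for m in range(4):
                if m == i:
                    continue
                denom *= xs[i] - xs[m]
                new = [0.0] + poly            # multiply poly by (t - xs[m])
                for p, c in enumerate(poly):
                    new[p] -= xs[m] * c
                poly = new
            # integrate the basis polynomial exactly over [a, b]
            seg = sum(c * (b ** (p + 1) - a ** (p + 1)) / (p + 1)
                      for p, c in enumerate(poly))
            total += ys[i] * seg / denom
    return total
```

Because a cubic interpolant is exact for any polynomial of degree at most 3, integrating y = x² at unequally spaced points recovers 1/3 to near machine precision.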
# zbMATH — the first resource for mathematics

Ideals of noncommutative $$DR\ell$$-monoids. (English) Zbl 1081.06017

Summary: In this paper, we introduce the concept of an ideal of a noncommutative dually residuated lattice-ordered monoid, and we show that congruence relations and certain ideals are in a one-to-one correspondence.

##### MSC:

06F05 Ordered semigroups and monoids
06D35 MV-algebras

##### References:

[1] A. Di Nola, G. Georgescu and A. Iorgulescu: Pseudo BL-algebras: Part I. Mult. Val. Logic 8 (2002), 673-714. · Zbl 1028.06007
[2] A. Dvurečenskij: On pseudo MV-algebras. Soft Comp. 5 (2001), 347-354. · Zbl 0998.06010 · doi:10.1007/s005000100136
[3] A. Dvurečenskij: Pseudo MV-algebras are intervals in ℓ-groups. J. Austral. Math. Soc. 72 (2002), 427-445. · Zbl 1027.06014 · doi:10.1017/S1446788700036806
[4] G. Georgescu and A. Iorgulescu: Pseudo MV-algebras. Mult. Val. Logic 6 (2001), 95-135. · Zbl 1014.06008
[5] G. Grätzer: General Lattice Theory. Birkhäuser-Verlag, Basel-Boston-Berlin, 1998.
[6] I. Chajda: Congruence kernels in weakly regular varieties. Southeast Asian Bull. Math. 24 (2000), 15-18. · Zbl 0988.08002 · doi:10.1007/s10012-000-0015-8
[7] I. Chajda, R. Halaš and J. Rachůnek: Ideals and congruences in generalized MV-algebras. Demonstratio Math. 33 (2000), 213-222.
[8] T. Kovář: A general theory of dually residuated lattice ordered monoids. PhD. Thesis. Palacký Univ. Olomouc, 1996.
[9] J. Rachůnek: Prime ideals in autometrized algebras. Czechoslovak Math. J. 112 (1987), 65-69. · Zbl 0692.06007
[10] J. Rachůnek: A non-commutative generalization of MV-algebras. Czechoslovak Math. J. 52 (2002), 255-273. · Zbl 1012.06012 · doi:10.1023/A:1021766309509
[11] K. L. N. Swamy: Dually residuated lattice ordered semigroups I. Math. Ann. 159 (1965), 105-114. · Zbl 0135.04203 · doi:10.1007/BF01360284
[12] K. L. N. Swamy: Dually residuated lattice ordered semigroups III. Math. Ann. 167 (1966), 71-74.
· Zbl 0158.02601 · doi:10.1007/BF01361218
### HyperPlonk: Plonk with Linear-Time Prover and High-Degree Custom Gates

##### Abstract

Plonk is a widely used succinct non-interactive proof system that uses univariate polynomial commitments. Plonk is quite flexible: it supports circuits with low-degree "custom" gates as well as circuits with lookup gates (a lookup gate ensures that its input is contained in a predefined table). For large circuits, the bottleneck in generating a Plonk proof is the need for computing a large FFT. We present HyperPlonk, an adaptation of Plonk to the boolean hypercube, using multilinear polynomial commitments. HyperPlonk retains the flexibility of Plonk but provides several additional benefits. First, it avoids the need for an FFT during proof generation. Second, and more importantly, it supports custom gates of much higher degree than Plonk without harming the running time of the prover. Both of these can dramatically speed up the prover's running time. Since HyperPlonk relies on multilinear polynomial commitments, we revisit two elegant constructions: one from Orion and one from Virgo. We show how to reduce the Orion opening proof size to less than 10 kb (an almost factor-1000 improvement) and show how to make the Virgo FRI-based opening proof simpler and shorter.

Note: 1. Added a new permutation PIOP for small fields (Sec 3.6). 2. Updated experiments and evaluations (Sec 6). 3. Revised the unrolled and optimized HyperPlonk (Appendix C).

Category: Public-key cryptography

Publication info: Preprint.
Keywords: zero-knowledge proofs, sumcheck, plonk, polynomial commitment scheme

Contact author(s): binyi @ espressosys com, benedikt @ cs stanford edu, dabo @ cs stanford edu, zhangzhenfei @ gmail com

History: 2023-01-09: revised

Short URL: https://ia.cr/2022/1355

License: CC BY

BibTeX

@misc{cryptoeprint:2022/1355,
  author = {Binyi Chen and Benedikt Bünz and Dan Boneh and Zhenfei Zhang},
  title = {HyperPlonk: Plonk with Linear-Time Prover and High-Degree Custom Gates},
  howpublished = {Cryptology ePrint Archive, Paper 2022/1355},
  year = {2022},
  note = {\url{https://eprint.iacr.org/2022/1355}},
  url = {https://eprint.iacr.org/2022/1355}
}
### Better writing, editing, and thinking through the power of line breaks ##### 2015/05/13 With *LaTeX and markup languages in general, the (partial) separation of content from format allows us to write in ways that are 1. Conceptually powerful; and 2. Excellently structured for editing (which is when the best writing most often happens). In this post, I'll provide some of what I find to be good practices for writing *LaTeX documents. #### Writing is coding: A framing first: written text is code—an algorithm intended to elicit specific thinking (which I'll say includes emotion, a fundamental kind of thinking) or a variety of possible thoughts in a reader's mind. With computer programs, we have the clean distinction of human readable/editable code and resultant compiled binaries. For written text, and perhaps surprisingly given we're now talking about people communicating with each other and not machines, being human editable and being human readable are also not exactly the same thing. #### Linebreakfulness: A simple, vital element of writing enabled by *LaTeX is that we can start each new sentence or phrase on a separate line. (This paragraph provides an example, even if we're in textwrapping html-space.) The benefits are that it's then: 1. easy to move sentences and phrases around (Emacs and other powerful text editors make this a pleasure); and 2. reflective of the actual structure of the thought process that goes into writing. Phrases are a natural base unit, so breaking at commas and semi-colons makes sense, and long phrases should have carriage returns applied liberally. When line breaks are used well, sentences and phrases are clearly rendered as the core material of a text. Note: Emacs and presumably other editors can be extended to make line breaks occur automatically, and to repack paragraphs with line breaks after each phrase-ending element. 
My PhD advisor Dan Rothman pointed out the blocking by sentences idea and, over time, I've found many kinds of *LaTeX structures can be laid out in ways I find better for writing, rewriting, and, inextricably, thinking. I'll go through an example for equations and then add a few examples of other environments and elements.

#### Equations:

Here's an initial form of an equation for the Jensen-Shannon divergence from one of our papers on Google Books:

$$D_{JS,i}(P||Q) = -m_i\log m_i + \frac12\left(p_i\log p_i+q_i\log q_i\right).$$

Here's the output which is in decent shape:

$$D_{JS,i}(P||Q) = -m_i\log m_i + \frac12\left(p_i\log p_i+q_i\log q_i\right).$$

The LaTeX code is compact, does the job, but is difficult to read and edit. Let's help ourselves (the machines will be fine) and step through some improvements. First, we need to separate the environment, indent the equation, and add a label for potential referencing:

$$
  D_{JS,i}(P||Q) = -m_i\log m_i + \frac{1}{2}\left(p_i\log p_i+q_i\log q_i\right).
  \label{JSequation}
$$

I like to add the label at the end of environments that use them (figures, tables, etc.). I've also added curly braces to the \frac command; \frac{1}{2} is clearer and allows for more complicated arguments. As for sentences, we can deploy line breaks to leave the equation both easier to read and edit. Here's a simple start:

$$
  D_{JS,i}(P||Q) =
  - m_i \log m_i
  + \frac{1}{2}\left(p_i \log p_i + q_i \log q_i \right).
  \label{eq:googlebooks.JSequation}
$$

The main pieces of the equation (blob = blob + blob) now have their own lines. But we can do more and break the equation across lines into its smallest functional units. We'll do these things:

• Give equalities and operations their own line.
• As for the equation environment, place enclosing bracket structures on separate lines, and allow the editor to indent things nicely.

$$
  D_{\textrm{JS},i}
  (P\,||\,Q)
  =
  -
  m_i
  \log_{2}
  m_i
  +
  \frac{1}{2}
  \left(
    p_i \log_{2} p_i
    +
    q_i \log_{2} q_i
  \right).
\label{eq:googlebooks.JSequation}$$ The output has changed in just a few small ways: $$D_{\textrm{JS},i} (P\,||\,Q) = - m_i \log_{2} m_i + \frac{1}{2} \left( p_i \log_{2} p_i + q_i \log_{2} q_i \right).$$ Both reading and editing are now simple. A few notes: • As for sentences, we can easily move functional units around by cutting lines or sets of lines. If we wanted to swap the order of $p_i \log_{2} p_i$ and $q_i \log_{2} q_i$, we would just cut and paste lines (some C-k, C-y action). • I've kept the form $p_i \log_{2} p_i$ together as this is a conceptually clear element for entropy. • $D_{\textrm{JS},i}$ and $(P\,||\,Q)$ are on separate lines to make future editing easier, and we've given the $P$ and $Q$ some breathing room with the small space "\,". • We've also converted $JS$ to $\textrm{JS}$ so that this subscript is set in normal text rather than math text. • Last: we've made the $\log$ into $\log_{2}$ to be clear. Again, even when a single term is a subscript or an argument it's best to use curly braces for clarity and future editing. • If we use mildly complex expressions more than a few times, it's a good idea to turn them into a command. We may find we have a general structure that could take in arguments as well. So for example we could replace D_{\textrm{JS},i}(P\,||\,Q) with a command \DJS{P}{Q} with \newcommand{\DJS}[2]{ D_{\textrm{JS},i} (#1\,||\,#2) } in the preamble (I like to have a separate settings file; more on this elsewhere). • Along the way, I created a richer reference description for the label. As a rule, I use this format: \label{eq:papertag.tag} \label{fig:papertag.tag} \label{tab:papertag.tag} \label{sec:papertag.tag} \label{subsec:papertag.tag} where papertag gives a semantically reasonable pointer to the paper. 
Having this extra level of identification is useful in various ways including (1) being able to search for a certain kind of reference (e.g., just figures), and (2) when combining documents to form, for example, an edited volume or thesis. All right. Here's a selection of example formats, including a few more equations: #### More Equations: In Fig.~\ref{fig:updownrfn_network02}A, we show an example of a probabilistic response function, the tent map, which is defined as $T_r(x) = rx$ for $0 \le x \le \frac{1}{2}$ and $r(1-x)$ for $\frac{1}{2} \le x \le 1.$ While breakable, the ranges for $x$ make for reasonable phrases so they both stay intact on a single line. Here's the output: In Fig. 1A, we show an example of a probabilistic response function, the tent map, which is defined as $T_r(x) = rx$ for $0 \le x \le \frac{1}{2}$ and $r(1-x)$ for $\frac{1}{2} \le x \le 1.$ From my course Beamerized Principles of Complex Systems, part of a calculation for Herbert Simon's Rich-gets-Richer model: Preamble (included in a separate settings file): \newcommand{\avg}[1]{\left\langle#1\right\rangle} \newcommand{\simonalpha}{\rho} Calculation: $$\avg{N_{k,t+1} - N_{k,t}} = (1-\simonalpha) \left( (k-1)\frac{N_{k-1,t}}{t} - k\frac{N_{k,t}}{t} \right)$$ becomes $$n_k(t+1)-n_k(t) = (1-\simonalpha) \left( (k-1)\frac{n_{k-1}(t)}{t} - k\frac{n_{k}(t)}{t} \right)$$ #### Figures and Tables: Here's a draft example figure environment, one spanning two columns in our PNAS paper on the positivity of human language (Fig. 3). Fairly simple: centre the figure and then give the caption plenty of linebreakage. The long figure name and labels are no problem to handle and mitigate the possibility of overlap later on (note the paper tag mlhap). Giving figures long names (lumping tags together) makes finding them later on (if and when one's memory fails) much simpler (using, for example, locate). Table environments can be laid out in the same way, with some attention paid to tabular environments. 
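As an illustrative sketch of the same layout idea for tables (the contents here are invented for the example, not taken from the paper), one row per source line with aligned column separators:

```latex
\begin{table}
  \centering
  \begin{tabular}{lrr}
    Corpus       & Words & Year \\
    \hline
    Twitter      & 5000  & 2014 \\
    Google Books & 5000  & 2012 \\
  \end{tabular}
  \caption{
    Example layout only:
    each row sits on its own source line,
    and the caption gets the same phrase-per-line treatment.
  }
  \label{tab:example.layout}
\end{table}
```

Swapping rows or columns is then a matter of moving whole lines or aligned fields, exactly as for sentences.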
Some good practices for structuring work directories will appear elsewhere. \begin{figure*} \centering \includegraphics[width=\textwidth]{fighappinessdist_jellyfish_words_havg_multilanguage_example001_noname.pdf} \caption{ Examples of how word happiness varies little with usage frequency. Above each plot is a histogram of average happiness $h_{\rm avg}$ for the 5000 most frequently used words in the given corpus, matching Fig.~\ref{fig:mlhap.happinessdist_comparison}. Each point locates a word by its rank $r$ and average happiness $h_{\textrm{avg}}$, and we show some regularly spaced example words. The descending gray curves of these jellyfish plots indicate deciles for windows of 500 words of contiguous usage rank, showing that the overall histogram's form is roughly maintained at all scales. The `kkkkkk...' words represent laughter in Brazilian Portuguese, in the manner of `hahaha...'. See Fig.~\ref{fig:mlhap.jellyfish_translated} for an English translation, Figs.~\ref{fig:mlhap.happinessdist_jellyfish_words_havg_multilanguage001_table1}--\ref{fig:mlhap.happinessdist_jellyfish_words_havg_multilanguage001_table4} for all corpora, and Figs.~\ref{fig:mlhap.happinessdist_jellyfish_words_hstd_multilanguage001_table1}--\ref{fig:mlhap.happinessdist_jellyfish_words_hstd_multilanguage001_table4} for the equivalent plots for the standard deviation of word happiness scores. } \label{fig:mlhap.jellyfish} \end{figure*} #### Okay, that's enough: Nutshell: line breaks are unexpectedly good friends. Using them well with sophisticated markup languages will enable faster and (hopefully) better writing and editing.
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp016969z401v
Title: Limits of Bayesianism
Authors: Zhang, Xueyin (Snow)
Advisors: Elga, Adam; Lederman, Harvey
Contributors: Philosophy Department
Subjects: Philosophy
Issue Date: 2021
Publisher: Princeton, NJ : Princeton University
Abstract: Two central questions in epistemology are: What should rational agents believe? How should they revise their beliefs in light of new information? Bayesianism offers a unified answer to both. Roughly, the view consists of two claims:
Probabilistic Coherence: Rational agents have degrees of belief (credences) that can be represented by a real-valued probability function over an algebra of propositions.
Conditionalization: Rational agents update their credences by conditionalizing on their total evidence.
The view appears simple and powerful. It explains, for instance, why it seems irrational to be more confident that there are aliens on Mars than that there are aliens, or to change one's mind about the existence of Martians after learning that snow is white, if one judges the two to be unconnected ex ante. The former violates the axioms of probability, while the latter violates the rule of conditionalization. This dissertation examines the limits of Bayesianism from within its own theoretical confines. I look at three independent but related questions:
1. Should rational agents have degrees of belief that are countably additive?
2. Should rational agents refrain from having definite degrees of belief given certain propositions?
3. How should rational agents plan to update in response to new information in general?
I show that, in each case, the naive Bayesian answer conflicts with some intuitively plausible principles that many Bayesians endorse on independent grounds. The upshot is not purely negative. 
I argue that these conflicts shed interesting light on general epistemological questions, including the role of infinitary idealization, the nature of evidence and the value of rationality.
URI: http://arks.princeton.edu/ark:/88435/dsp016969z401v
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Philosophy
Files in This Item: Zhang_princeton_0181D_13874.pdf (886.65 kB, Adobe PDF)
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.
overlap {bayestestR} R Documentation
## Overlap Coefficient
### Description
A method to calculate the overlap coefficient between two empirical distributions (that can be used as a measure of similarity between two samples).
### Usage
overlap( x, y, method_density = "kernel", method_auc = "trapezoid", precision = 2^10, extend = TRUE, extend_scale = 0.1, ... )
### Arguments
x Vector of x values.
y Vector of y values.
method_density Density estimation method. See estimate_density().
method_auc Area Under the Curve (AUC) estimation method. See area_under_curve().
precision Number of points of density data. See the n parameter in density.
extend Extend the range of the x axis by a factor of extend_scale.
extend_scale Ratio of range by which to extend the x axis. A value of 0.1 means that the x axis will be extended by 1/10 of the range of the data.
... Currently not used.
### Examples
library(bayestestR)
x <- distribution_normal(1000, 2, 0.5)
y <- distribution_normal(1000, 0, 1)
overlap(x, y)
plot(overlap(x, y))
[Package bayestestR version 0.13.0 Index]
ON THE κ-REGULAR SEQUENCES AND THE GENERALIZATION OF F-MODULES
Abstract: For a given ideal I of a Noetherian ring R and an arbitrary integer κ ≥ −1, we apply the concept of κ-regular sequences and the notion of κ-depth to give some results on modules called κ-Cohen Macaulay modules, which, in the local case, are exactly the κ-modules (a generalization of f-modules). Meanwhile, we give an expression of local cohomology with respect to any κ-regular sequence in I, in a particular case. We prove that the dimension of the homology modules of the Koszul complex with respect to any κ-regular sequence is at most κ. Therefore the homology modules of the Koszul complex with respect to any filter regular sequence have finite length.
Keywords: κ-regular M-sequences; κ-depth; κ-ht; local cohomology modules; κ-Cohen Macaulay modules; f-modules; κ-modules; Koszul complexes
Language: English
# Access DB (.accdb) crashes the app when using OleDbConnection.Open()
I recently tried to use an Access database from C# code inside a little Revit plugin, but it crashes when I call OleDbConnection.Open(). Here are my snippets:

CPFMainModelView mainModelView;
static readonly string connectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Users\ThomasLECUPPRE(Letit\source\LIB_MainDB.accdb";

public UserDBManager(CPFMainModelView cmmw)
{
    mainModelView = cmmw;
    try
    {
        RetrieveprojectList();
    }
    catch (Exception ex)
    {
        mainModelView.Texte = $"{ex.Message}\n\n{ex.StackTrace}\n\n{ex.InnerException}\n\n{ex.Data}";
    }
}

public void RetrieveprojectList()
{
    using (OleDbConnection con = new OleDbConnection(connectionString))
    {
        con.Open();
        OleDbCommand command = new OleDbCommand("SELECT Ref FROM FolderCategory", con);
        OleDbDataReader reader = command.ExecuteReader();
        while (reader.Read())
        {
            mainModelView.Texte += $"\n{reader["Ref"]}";
        }
    }
}

Here is a view of my tiny db in Access. What did I miss?
C# – OleDbConnection.Open() causing a crash
Simple C# connection to .accdb file
Before using "Provider=Microsoft.ACE.OLEDB.12.0" and building the solution for x86 only, I was using "Provider=Microsoft.Jet.OLEDB.4.0" and building the solution for Any CPU, but this thread (in French, sorry) suggested doing it another way. Thank you for the help.
### >Solution :
Install the missing dependency (SO post). On modern Windows this driver isn't available by default anymore, but you can download it as the Microsoft Access Database Engine 2010 Redistributable on the MS site. If your app is 32-bit, be sure to download and install the 32-bit variant because, to my knowledge, the 32- and 64-bit variants cannot coexist. Depending on how your app locates its db driver, that might be all that's needed. However, if you use a UDL file there's one extra step – you need to edit that file. 
Unfortunately, on a 64-bit machine the wizard used to edit UDL files is 64-bit by default; it won't see the JET driver and will just slap whatever driver it finds first into the UDL file. There are 2 ways to solve this issue: 1. Start the 32-bit UDL wizard like this: C:\Windows\syswow64\rundll32.exe "C:\Program Files (x86)\Common Files\System\Ole DB\oledb32.dll",OpenDSLFile C:\path\to\your.udl. Note that I could use this technique on a Win7 64 Pro, but it didn't work on a Server 2008R2 (could be my mistake, just mentioning). 2. Open the UDL file in Notepad or another text editor; it should more or less have this format:

[oledb]
; Everything after this line is an OLE DB initstring
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Path\To\The\database.mdb;Persist Security Info=False

That should allow your app to start correctly.
# GR, time-like energy, and photon-path bending? 1. Dec 7, 2005 ### cefarix Does time-like curvature of spacetime cause light to bend, for example, around the sun? Or is the gravitational bending of light only due to the space-like curvature? So far, I understand the concepts as: Time-like curvature is caused by the mc^2 part of E, while space-like curvature is caused by the pc part of E. So the sun should have a huge curvature time-wise, but very little space-wise (perhaps only due to its rotation?). Since photons have zero rest mass, they only have the space-like component of their energy (just the pc part), and so should only be bent by space-like curvature. So a perfectly still mass should not bend light? I think I'm wrong here somewhere. Do photons still have a time-like component to their energy despite having zero rest mass? Need some help here...confused... 2. Dec 7, 2005 From reading the "Sun Bending Light" thread I gather that photons do have a time-like component to their energy, but I cannot explain why, sorry =/. 3. Dec 7, 2005 Staff Emeritus The momentum-energy tensor $$T_{\mu\nu}$$ is responsible for spacetime curvature. Both time and space are curved together. The tensor has 16 components and is symmetric, so that its top row and first column are the same. These carry the time ($$T_{00}$$) and space ($$T_{0i} = T_{i0}, i = 1,2,3$$) components of the momentum-energy four-vector. The other components describe spatial stress of energy-momentum. Einstein's field equations start out with one nonlinear partial differential equation for each component of T. Because of the symmetry of T and other symmetries in Riemannian geometry, the 16 equations contain only 10 independent ones. These 10 equations are simultaneous; you have to solve them as a set. You cannot say this piece or that is responsible for space curvature or time curvature. Spacetime curvature is caused by the whole tensor. 4. 
Dec 7, 2005 ### pervect Staff Emeritus When people talk about the deflection of light being twice that of the Newtonian value due to spatial curvature, they are not talking about either the Einstein or the Riemann curvature tensors. They are talking about something else, the Christoffel symbols. If you write the equation for a body falling directly into a massive body in a Schwarzschild metric, you get a differential equation (the geodesic equation of motion) that looks like this. $$\frac{d^2 r}{d \tau^2} + \Gamma^r{}_{tt} \left( \frac{dt}{d\tau} \right) ^2 + \Gamma^r{}_{rr} \left( \frac{dr}{d\tau} \right)^2 = 0$$ (This is the simplest case; there are similar terms due to $\Gamma^r{}_{\theta \theta}$ and $\Gamma^r{}_{\phi \phi}$ in the more general expression.) If the velocity is much lower than 'c', $dt/d\tau$ is essentially one, and $dr/d\tau$ is << 1. In this case, the acceleration of an object is essentially constant and independent of its velocity. We can therefore identify the Christoffel symbol $\Gamma^r{}_{tt}$ with radial gravitational acceleration in the Newtonian limit. This is no longer the case when $dr/d\tau$ becomes of an order of magnitude near unity - the second term in the differential equation of motion becomes important. The first Christoffel symbol involves only two time subscripts, the second Christoffel symbol involves only spatial subscripts. Because the name "Christoffel symbol" scares people, sometimes they are talked about as curvatures, though strictly speaking they are not. Note that the first symbol has only time-like subscripts, which is why it is sometimes very loosely called a "time curvature", and the second symbol has only spatial subscripts. Last edited: Dec 7, 2005
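For concreteness, here is a sketch of those two Christoffel symbols for the exterior Schwarzschild metric (my addition, not part of the thread; standard textbook values in units G = c = 1):

```latex
% Exterior Schwarzschild metric, units G = c = 1:
\Gamma^r{}_{tt} = \frac{M}{r^2}\left(1 - \frac{2M}{r}\right),
\qquad
\Gamma^r{}_{rr} = -\frac{M}{r^2}\left(1 - \frac{2M}{r}\right)^{-1}.
% In the weak-field, low-velocity limit (r >> 2M, dt/dtau ~ 1),
% the geodesic equation above reduces to d^2r/dtau^2 ~ -M/r^2,
% recovering the Newtonian radial acceleration.
```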
# POINTMEE - Editorial Tester: Istvan Nagy Editorialist: Aman Dwivedi # DIFFICULTY: Easy-Medium # PREREQUISITES: Maths, Observation # PROBLEM: You are given N distinct points in a 2-D plane, numbered 1 through N. The X-coordinates of the points are X_1,X_2,…,X_N respectively and the Y-coordinates are Y_1,Y_2,…,Y_N respectively. Your goal is to make the locations of all the N points equal to each other. To achieve this, you can do some operations. In one operation, you choose an index i(1\leq i\leq N) and a real number K and change the location of the i^{th} point in one of the following ways: • Set (X_i,Y_i)=(X_i+K,Y_i) • Set (X_i,Y_i)=(X_i,Y_i+K) • Set (X_i,Y_i)=(X_i+K,Y_i+K) • Set (X_i,Y_i)=(X_i+K,Y_i−K) Find the minimum number of operations to achieve the goal. # QUICK EXPLANATION: The only points that can yield the minimum number of operations when every other given point has coincided with them are the ones to which a pair of points can be made equal in 1 operation each. We find all such points that can potentially be the meeting points leading to minimum operations. For each such point A, we calculate the total operations needed to make the locations of all N points equal to A and pick the minimum amongst these as the answer. # EXPLANATION: We can observe that a point is allowed to move in 8 directions with respect to the co-ordinate axis, namely {0\degree, 45\degree, 90\degree, 135\degree, 180\degree, 225\degree, 270\degree, 315\degree}. Consider any pair of given points: we can deduce from the given operations that making the locations of these 2 points the same can always be done in at most 1 operation per point, as follows: • If the points both lie on a line inclined to the axes at 45\degree in any of the 4 quadrants, then a single operation moving one of the points along this line to meet the other would suffice. • If the points don't lie on such a line, then bringing them to the same location would need: • 1 operation if they lie on a horizontal or vertical line, i.e. 
moving one point along the horizontal or vertical line to the other as is allowed by the given operations. • 2 operations otherwise, i.e. moving both points along the allowed directions, towards each other until they collide at the same point. The maximum possible value of the minimum number of operations to bring all points to the same location would be 2*N. Proof Let A be considered as the meeting point and N the total number of points: If N_1 points need a single operation to coincide with A and N_2 points need 2 operations, the minimum number of operations will be N_1+2*N_2 where N_1+N_2=N, or N_1+N_2+1=N in case one point is already positioned at A. A single point can reach any other point in at most 2 operations, thus even if all points need 2 operations to reach an arbitrarily selected meeting point, it will be achieved in a total of 2*N operations, 2 for each point. We need to look for points where we can achieve a value less than 2*N. Selecting points which can be reached by a pair of points in at most 1 operation each will give us total operations less than 2*N (as stated at the beginning, such a point always exists for every pair of points). Proof Consider a pair of points that initially need 2 operations each to reach the meeting point. If we now change the meeting point to a location which this pair of points can reach in 1 operation each, we will be left with 2*N-2*2+2*1 total operations, which is better than the initial 2*N. Explaining how this approach is the optimal one Since considering one of the 2 points in the pair itself as the potential meeting point also reduces the final answer, I will explain how these cases are also covered by the mentioned approach and needn't be dealt with separately. Consider 2 points Y and Z. If we take one of the 2 points, say Y itself, as the final location, our maximum value of minimum operations needed to make all points meet would change to 2*N-2. There are 3 possibilities arising as follows: 1. 
No other number of operations is reduced except the ones for Y, which changed from 2 to 0 (since no 2 points are in the same location, no operations except those for Y itself can be 0). 2. Operations needed for Y are 0 and the number of operations for one other point, say X, is reduced from 2 to 1. 3. More than one operation apart from those for Y itself is reduced from 2 to 1; let us assume X and W have changed from 2 operations to 1. The first case is similar to meeting at a location which needs 1 operation from each point Y and Z (as mentioned earlier such a point always exists); both give a minimum number of operations equal to 2*N-2. The second case is the same as finding X as a potential meeting point while looking for locations which need Y and X to meet in at most 1 operation each (as X is 1 operation away from Y, as stated in case 2). In case 3 as well, the point Y is discoverable similarly to case 2. Thus the approach we are going to follow covers all potential points that can be chosen as meeting points to minimize the operations. We have already established that it is possible to equate locations of 2 points (say {a, b}, {c, d}) in at most 2 operations: we can move a point along horizontal, vertical and diagonal lines on the axis, thus we pick a line inclined in one of these directions (namely {0\degree, 45\degree, 90\degree, 135\degree}) for {a, b} to move along, and we pick one of the remaining 3 directions for {c, d} to move along. This way each of them only moves along at most 1 of the allowed lines on the axis until it meets the other, thus giving us the location of their meeting with no more than 1 operation each. Once we choose 2 points, the above mentioned locations of meeting can be found using the intersection of line1_{x} - denoting the line through {a, b} inclined at x\degree to the horizontal axis - and line2_{y} - denoting the line through {c, d} inclined at y\degree to the horizontal axis, where x\neq y. 
This yields the following pairs of intersecting lines: (line1_{0\degree}, line2_{45\degree}), (line1_{0\degree}, line2_{90\degree}), (line1_{0\degree}, line2_{135\degree}), (line1_{45\degree}, line2_{90\degree}), (line1_{45\degree}, line2_{135\degree}) and (line1_{90\degree}, line2_{135\degree}). Once we have these potential meeting points, we will traverse through them and for each point P we calculate the sum of operations needed for each of the given points to coincide with P. The minimum of these sums would be our answer. # TIME COMPLEXITY: O(N\times N\times N) per test case, because there can be N\times N potential meeting points and for each of these we will find the sum of operations for N points. # SOLUTIONS: Setter #include<bits/stdc++.h> using namespace std; template <class T> ostream &operator << (ostream &os, const vector<T> &p) { os << "["; for (auto&it : p) os << it << " "; return os << "]";} template <class S, class T> ostream &operator << (ostream &os, const pair<S, T> &p) { return os << "(" << p.first << "," << p.second << ")";} #ifndef ONLINE_JUDGE #define deb(...) dbs(#__VA_ARGS__, __VA_ARGS__) template <class T> void dbs(string str, T t) { cerr << str << ":" << t << "\n";} template<class T, class...S> void dbs(string str, T t, S... 
s) { int idx = str.find(','); cerr << str.substr(0, idx) << ":" << t << ","; dbs(str.substr(idx + 1), s...);} #else #define deb(...){} #endif #define ld double #define int long long long long readInt(long long l, long long r, char endd) { long long x = 0; int cnt = 0; int fi = -1; bool is_neg = false; while (true) { // deb(x); char g = getchar(); if (g == '-') { assert(fi == -1); is_neg = true; continue; } if ('0' <= g && g <= '9') { x *= 10; x += g - '0'; if (cnt == 0) { fi = g - '0'; } cnt++; assert(fi != 0 || cnt == 1); assert(fi != 0 || is_neg == false); assert(!(cnt > 19 || ( cnt == 19 && fi > 1) )); } else if (g == endd) { if (is_neg) { x = -x; } assert(l <= x && x <= r); return x; } else { deb(l, r, x, cnt); assert(false); } } } string readString(int l, int r, char endd) { string ret = ""; int cnt = 0; while (true) { char g = getchar(); assert(g != -1); if (g == endd) { break; } cnt++; ret += g; } assert(l <= cnt && cnt <= r); return ret; } long long readIntSp(long long l, long long r) { return readInt(l, r, ' '); } long long readIntLn(long long l, long long r) { return readInt(l, r, '\n'); } string readStringLn(int l, int r) { return readString(l, r, '\n'); } string readStringSp(int l, int r) { return readString(l, r, ' '); } #define fi first #define se second #define tiii tuple<int, int, int> #define mt make_tuple const int maxv = 2e18; int num(int a, int b) { return a + rand() % (b - a + 1); } tiii cal (int& a, int& b, int& c, int& p, int& q, int& r) { //program that reads a, b, c, p, q and r. Let ax + by + c = 0 and px + qy + r = 0 be the equations // of two lines. Print their point of intersection. 
int den = a * q - b * p; int x = maxv, y = 0; if (den != 0) { x = b * r - q * c; y = p * c - a * r; } return {x, y, den}; } void solve(int tc) { int n; cin >> n; vector<pair<int, int>> a(n); map<int, int> diff1, diff2, X, Y; map<pair<int, int>, int> eq; for (int i = 0; i < n; i++) { cin >> a[i].first; } for (int i = 0; i < n; i++) { cin >> a[i].second; eq[ {a[i].first, a[i].second}]++; // counts no of points haveing same (X, Y) diff1[a[i].first - a[i].second]++; // counts no of points haveing same (X - Y) diff2[a[i].first + a[i].second]++; // counts no of points haveing same (X + Y) X[a[i].first]++; // counts no of points haveing same X Y[a[i].second]++; // counts no of points haveing same Y } auto fun = [&](tiii T) { int x = get<0>(T), y = get<1>(T), d = get<2>(T); int cur = 2 * n; if (x == maxv)return cur; if (x % d == 0)cur -= X[x / d]; if (y % d == 0)cur -= Y[y / d]; if ((x - y) % d == 0)cur -= diff1[(x - y) / d]; if ((x + y) % d == 0)cur -= diff2[(x + y) / d]; if (x % d == 0 && y % d == 0)cur += 2 * eq[ {x / d, y / d}]; return cur; }; int ans = 2 * n; for (int i = 0; i < n; i++) { for (int j = i + 1; j < n; j++) { int a1 = 1; int b1 = 0; int c1 = -a[i].fi; tiii p; int a2, b2, c2; a2 = 0; b2 = 1; c2 = -a[j].se; p = cal(a1, b1, c1, a2, b2, c2); ans = min(ans, fun(p)); a2 = 1; b2 = -1; c2 = -a[j].fi + a[j].se; p = cal(a1, b1, c1, a2, b2, c2); ans = min(ans, fun(p)); a2 = 1; b2 = 1; c2 = -a[j].fi - a[j].se; p = cal(a1, b1, c1, a2, b2, c2); ans = min(ans, fun(p)); a1 = 0; b1 = 1; c1 = -a[i].se; a2 = 1; b2 = -1; c2 = -a[j].fi + a[j].se; p = cal(a1, b1, c1, a2, b2, c2); ans = min(ans, fun(p)); a2 = 1; b2 = 1; c2 = -a[j].fi - a[j].se; p = cal(a1, b1, c1, a2, b2, c2); ans = min(ans, fun(p)); a1 = 1; b1 = -1; c1 = -a[i].fi + a[i].se; a2 = 1; b2 = 1; c2 = -a[j].fi - a[j].se; p = cal(a1, b1, c1, a2, b2, c2); ans = min(ans, fun(p)); } } cout << ans << '\n'; } signed main() { ios_base :: sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; for (int i = 1; i <= t; 
i++) solve(i); return 0; } Tester #include <iostream> #include <cassert> #include <vector> #include <set> #include <map> #include <algorithm> #include <random> #ifdef HOME #include <windows.h> #endif #define all(x) (x).begin(), (x).end() #define rall(x) (x).rbegin(), (x).rend() #define forn(i, n) for (int i = 0; i < (int)(n); ++i) #define for1(i, n) for (int i = 1; i <= (int)(n); ++i) #define ford(i, n) for (int i = (int)(n) - 1; i >= 0; --i) #define fore(i, a, b) for (int i = (int)(a); i <= (int)(b); ++i) template<class T> bool umin(T& a, T b) { return a > b ? (a = b, true) : false; } template<class T> bool umax(T& a, T b) { return a < b ? (a = b, true) : false; } using namespace std; long long readInt(long long l, long long r, char endd) { long long x = 0; int cnt = 0; int fi = -1; bool is_neg = false; while (true) { char g = getchar(); if (g == '-') { assert(fi == -1); is_neg = true; continue; } if ('0' <= g && g <= '9') { x *= 10; x += g - '0'; if (cnt == 0) { fi = g - '0'; } cnt++; assert(fi != 0 || cnt == 1); assert(fi != 0 || is_neg == false); assert(!(cnt > 19 || (cnt == 19 && fi > 1))); } else if (g == endd) { assert(cnt > 0); if (is_neg) { x = -x; } assert(l <= x && x <= r); return x; } else { //assert(false); } } } string readString(int l, int r, char endd) { string ret = ""; int cnt = 0; while (true) { char g = getchar(); assert(g != -1); if (g == endd) { break; } cnt++; ret += g; } assert(l <= cnt && cnt <= r); return ret; } long long readIntSp(long long l, long long r) { return readInt(l, r, ' '); } long long readIntLn(long long l, long long r) { return readInt(l, r, '\n'); } string readStringLn(int l, int r) { return readString(l, r, '\n'); } string readStringSp(int l, int r) { return readString(l, r, ' '); } struct MT { int ctr; int val; int type; bool operator<(const MT& other) const { return tie(ctr, val, type) < tie(other.ctr, other.val, other.type); } bool operator==(const MT& other) const { return tie(ctr, val, type) == tie(other.ctr, 
other.val, other.type); } static pair<int, int> intersection(const MT& a, const MT& b) { assert(a.type != b.type); if (a.type == 0) { switch (b.type) { case 1: return { a.val, b.val }; case 2: return { a.val, b.val - a.val }; case 3: return { a.val, b.val + a.val }; default: assert(false); } } if (a.type == 1) { switch (b.type) { case 0: return { b.val, a.val }; case 2: return { b.val - a.val, a.val}; case 3: return { a.val + b.val, a.val}; default: assert(false); } } if (a.type == 2) { switch (b.type) { case 0: return { b.val, a.val - b.val}; case 1: return { a.val - b.val, b.val }; case 3: return { (a.val + b.val) / 2, a.val - (a.val + b.val) / 2}; default: assert(false); } } if (a.type == 3) { switch (b.type) { case 0: return { b.val, a.val + b.val }; case 1: return { a.val + b.val, b.val }; case 2: return { (a.val + b.val) / 2, b.val - (a.val + b.val) / 2 }; default: assert(false); } } return { 0, 0 }; } }; int main(int argc, char** argv) { #ifdef HOME if (IsDebuggerPresent()) { freopen("../POINTMEE-1.in", "rb", stdin); //freopen("../in.txt", "rb", stdin); freopen("../out.txt", "wb", stdout); } #endif int T = readIntLn(1, 300); int sumN = 0; forn(tc, T) { int N = readIntLn(2, 200); sumN += N; vector<pair<int, int> > w(N); int idx = 1; for (auto& wi : w) { if (idx == N) { wi.first = readIntLn(-1'000'000'000, 1'000'000'000); } else { wi.first = readIntSp(-1'000'000'000, 1'000'000'000); } ++idx; } idx = 1; for (auto& wi : w) { if (idx == N) { wi.second = readIntLn(-1'000'000'000, 1'000'000'000); } else { wi.second = readIntSp(-1'000'000'000, 1'000'000'000); } ++idx; } sort(w.begin(), w.end()); w.erase(unique(w.begin(), w.end()), w.end()); assert(w.size() == N); int res = 2 * N; map<int, int> x, y, xpy, xsy; set<pair<int, int> > s; vector<MT> wmt; for (auto& wi : w) { wi.first *= 2; wi.second *= 2; s.insert(wi); x[wi.first]++; y[wi.second]++; xpy[wi.first + wi.second]++; xsy[wi.first - wi.second]++; } for(const auto xv : x) { wmt.push_back({ xv.second, xv.first, 0 
}); } for (const auto yv : y) { wmt.push_back({ yv.second, yv.first, 1 }); } for (const auto xpyv : xpy) { wmt.push_back({ xpyv.second, xpyv.first, 2 }); } for (const auto xsyv : xsy) { wmt.push_back({ xsyv.second, xsyv.first, 3 }); } sort(wmt.begin(), wmt.end()); reverse(wmt.begin(), wmt.end()); //check all val // bool all1 = true; // set<int> sVal; // forn(i, wmt.size()) // { // if (sVal.count(wmt[i].val)) // all1 = false; // sVal.insert(wmt[i].val); // } // if (all1) // { // printf("%d\n", 2 * N - 2); // continue; // } forn(i, wmt.size()) { const auto wi = wmt[i]; fore(j, i+1, wmt.size()-1) { const auto wj = wmt[j]; if(wi.type == wj.type) continue; int bb = 2 * N - (wi.ctr + 3*wj.ctr); //if(bb >= res) // break; const auto wij = MT::intersection(wi, wj); int xx = wij.first; int yy = wij.second; int xppy = wij.first + wij.second; int xssy = wij.first - wij.second; int curr = 2 * (N+s.count(wij)) - x[xx] - y[yy] - xpy[xppy] - xsy[xssy]; if (curr < res) res = curr; } } printf("%d\n", res); } assert(sumN <= 600); assert(getchar() == -1); return 0; } Editorialist #include<bits/stdc++.h> using namespace std; #define int long long void solve() { int n, ans=0, temp=0; cin>>n; int a[n], b[n]; for(int i=0; i<n; i++) { cin>>a[i]; a[i]*=2; } for(int i=0; i<n; i++) { cin>>b[i]; b[i]*=2; } vector<int> x, y; for(int i=0; i<n; i++) { for(int j=0; j<n; j++) { if(i==j) continue; int a1=a[i], b1=b[i], c=a[j], d=b[j]; x.pb_(a1); y.pb_(d); x.pb_(c); y.pb_(b1+c-a1); x.pb_(c); y.pb_(b1-c+a1); x.pb_(a1+b1-d); y.pb_(d); x.pb_(a1-b1+d); y.pb_(d); x.pb_((a1+b1+c-d)/2); y.pb_((b1+a1-c+d)/2); // if you were to run the loop from i+1, //you'd have to insert the following into x and y too: // x.pb_(c); y.pb_(b1); // x.pb_(a1); y.pb_(d+a1-c); // x.pb_(a1); y.pb_(d-a1+c); // x.pb_(c+b1-d); y.pb_(b1); // x.pb_(c-b1+d); y.pb_(b1); // x.pb_((a1-b1+c+d)/2); y.pb_((b1-a1+c+d)/2); } } for(int i=0; i<x.size(); i++) { temp=0; int top=y[i], right=x[i]; for(int j=0; j<n; j++) { if(right==a[j] && top==b[j]) 
temp+=0; else if(right==a[j] || top==b[j]) temp++; else if(abs(right-a[j])==abs(top-b[j])) temp++; else temp+=2; } if(i==0) ans=temp; else ans=min(temp, ans); } cout<<ans<<endl; } int32_t main() { int t=1; cin>>t; while(t--) { solve(); } }

Cool problem, but lots of casework.

Can anyone please 🙏 tell me what's wrong with my code? Out of 8 test cases it passes 7 but fails 1. I have spent a lot of time on this question but am unable to find the mistake. https://www.codechef.com/viewsolution/51025010

TEST_CASE 1 4 2 3 5 10 7 6 0 5 CORRECT_OUTPUT 4

I have fixed that. It was due to a silly bracket mistake, but I am still not getting the correct answer. https://www.codechef.com/viewsolution/51025952

TEST_CASE 1 4 1 5 7 10 6 10 7 4 CORRECT_OUTPUT 4

How is the case of choosing point P as the final point handled in the solution?

Thank you for helping me out. Finally I found the correct answer and my mistakes.

slope of line AB is 135 and not 45

Yeah, I never mentioned it's 45. The angle I mentioned is not measured from the positive x-axis. Consider A and C together: suppose we take the case when the line through A goes at 135 degrees and the line through C goes at 0 degrees. Then the 2 lines will meet somewhere on the extension of AB, and this is going to take 1 move. We can bring B to this meeting point in 1 move and D in 1 or 2 moves (depending on the location of D), therefore taking 4 moves overall.

"The only points that can yield minimum operations when every other given point has coincided with them are the ones to which a pair of points can be made equal to in 1 operation each." Can anyone explain this?

@onlyerror For any pair of points in the given set, we know that to move a point to any point which lies on any of the 8 possible lines, we need only one operation (think about it).
If we find the intersection of these lines for any two points, then even if none of the other points can be moved to this intersection coordinate in one operation, we still have two points which can be; so in this case the number of required operations will be at most 2 * (N - 2) + 2, where N is the total number of points. Now that we have the intersection point, we find the sum of all the operations required to move our points to it, which can reach a maximum value of 2 * (N - 2) + 2.

My approach is different, although the underlying principle is the same. Unfortunately, the 2nd TC got WA (the rest of the TCs got AC). Can someone help me find the bug? In case you are wondering where I came up with the formula used in the solution below, refer to this article. import java.util.*; class Two_D_Point_Meeting { static Point[] points = new Point[600]; static Scanner sc = new Scanner(System.in); static class Eqn { // datatype to store equation of a line of form (y - y1) = m * (x - x1) double x; // x1 double y; // y1 int m; // slope public Eqn(double x, double y, int m) { this.x = x; this.y = y; this.m = m; } } static class Point { // datatype to store coordinate double x; // x coordinate of point double y; // y coordinate of point public Point(double x, double y) { this.x = x; this.y = y; } } public static void main(String[] args) throws java.lang.Exception { int t = sc.nextInt(); while (t-- > 0) { int n = sc.nextInt(); inputPoints(n); // takes input and converts it into points int minOperations = Integer.MAX_VALUE; int[] slopes = new int[] { 1, -1, 0, Integer.MAX_VALUE }; // slopes of all the possible lines, // Integer.MAX_VALUE to denote infinite slope for (int i = 0; i < n; i++) { // loop 1 to choose a point Eqn[] eqns1 = new Eqn[4]; for (int k = 0; k < 4; k++) { // generate eqn of all the 4 possible lines passing through current point eqns1[k] = eqnGenerator(points[i], slopes[k]); } for (int j = 0; j < n; j++) { for (int k = 0; k < 4; k++) { for (int m = 0; m
< 4; m++) { Eqn temp = eqnGenerator(points[j], slopes[m]); Point intersectionPoint = intersection(eqns1[k], temp); int operationsRequired = 0; if (intersectionPoint == null) { continue; } for (int l = 0; l < n; l++) { operationsRequired += minOperationsRequired(points[l], intersectionPoint); } minOperations = Math.min(minOperations, operationsRequired); } } } } System.out.println(minOperations); } } static int minOperationsRequired(Point p1, Point p2) { // return minimum number of operations required to move point // 1 to point 2 if (p1.x == p2.x && p1.y == p2.y) return 0; else if (p1.x == p2.x || p1.y == p2.y) return 1; return Math.abs(p1.x - p2.x) == Math.abs(p1.y - p2.y) ? 1 : 2; } static Point intersection(Eqn eqn1, Eqn eqn2) { // returns the intersection point of two lines double x1 = eqn1.x; double y1 = eqn1.y; double m1 = eqn1.m; double x2 = eqn2.x; double y2 = eqn2.y; double m2 = eqn2.m; if (m1 == m2) { // lines are parallel (no intersection or infinite intersections) return null; } else if (m1 == 0) { return new Point((y1 - y2) / m2 + x2, y1); } else if (m2 == 0) { return new Point((y2 - y1) / m1 + x1, y2); } else if (m1 == Integer.MAX_VALUE) { return new Point(x1, m2 * (x1 - x2) + y2); } else if (m2 == Integer.MAX_VALUE) { return new Point(x2, m1 * (x2 - x1) + y1); } double x = ((m2 * x2 - m1 * x1) - (y2 - y1)) / (m2 - m1); double y = (m2 * y1 - m1 * y2 + m1 * m2 * (x2 - x1)) / (m2 - m1); return new Point(x, y); } static void inputPoints(int n) { // function to take input int[] x = new int[n]; for (int i = 0; i < n; i++) { x[i] = sc.nextInt(); } int[] y = new int[n]; for (int i = 0; i < n; i++) { y[i] = sc.nextInt(); } for (int i = 0; i < n; i++) { points[i] = new Point(x[i], y[i]); } } static Eqn eqnGenerator(Point p, int m) { // fancy way of generating eqn of any line when one point and its slope is // given return new Eqn(p.x, p.y, m); } }

Why have we multiplied the input array by 2 in the Editorialist solution?

So that you don't have to deal with decimal values.

My approach was similar, but by organising it differently the complexity is O(N * N * logN): Solution: 50640895 | CodeChef. I consider 2 possible solutions. 1. For each point P, find every other point which lies on the same X, Y or diagonal. Note the maximum number of such other points, C. Then, to get all points to the same place, we keep 1 point fixed, move all the C points once and all the other points twice. 2. For each point, create 4 lines, one in each of the allowed directions. Find all intersections of these lines, which requires 6 * N * N operations. Sort the intersections by position, and count the maximum number of distinct lines D meeting at any intersection. Each of these D lines corresponds to a point, so move each of these D points once and all other points twice. Choose the smaller of these two solutions.

Can anyone help me understand this point? https://www.codechef.com/viewsolution/50940759 Why is it giving WA if I don't consider the case of the mean point? I took that case as an extra, but it is a required condition. I'm not getting the explanation.
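The per-point move cost used throughout this discussion (see `minOperationsRequired` in the Java solution above: 0 if the point already coincides with the target, 1 if it shares a row, column, or diagonal, 2 otherwise) can be sketched in Python; the helper names are my own:

```python
def move_cost(p, q):
    """Minimum moves to bring point p onto point q, where one move
    slides a point anywhere along its row, column, or either diagonal."""
    (x1, y1), (x2, y2) = p, q
    if (x1, y1) == (x2, y2):
        return 0  # already there
    if x1 == x2 or y1 == y2 or abs(x1 - x2) == abs(y1 - y2):
        return 1  # one straight or diagonal slide suffices
    return 2  # first move onto a shared line, then slide

def total_cost(points, target):
    # Total operations needed to gather every point at `target`.
    return sum(move_cost(p, target) for p in points)
```

Minimising `total_cost` over the candidate meeting points (intersections of the 8 lines through pairs of points) is exactly what the solutions above do.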
# Mathematics - Exponential (Euler's number)

## 1 - About

$e \approx 2.71828$ is the scientific constant known as the exponential, or Euler's number. The number e is an important mathematical constant that is the base of the natural logarithm. It is the limit of $(1 + \frac{1}{n})^n$ as n approaches infinity.

## 3 - Property

e raised to any power is positive.

### 3.1 - Log

ln is the inverse function:

$$\ln(e^x)=x$$

Whenever you see exponentials, the first thing you want to do is take logs.

## 4 - Documentation / Reference

mathematics/exponential.txt · Last modified: 2017/09/13 16:04 by gerardnico
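Both the limit definition and the log/exp inverse relation can be checked numerically:

```python
import math

# e as the limit of (1 + 1/n)^n for growing n
for n in (10, 1000, 100_000):
    approx = (1 + 1 / n) ** n
    print(n, approx)          # approaches 2.71828...

# ln is the inverse of exp: ln(e^x) = x
x = 3.5
print(math.log(math.exp(x)))
```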
OGS [tag] components Sets the number of components of the process variable. It is one for scalar variables and greater than one for vectorial (or theoretically even tensorial) process variables. • Data type: int
## Questions and dependency in intuitionistic logic. (English) Zbl 1453.03025 This paper shows how the inquisitive logic and dependence logic which allow for a logical analysis of questions and dependencies between propositions can be developed on an intuitionistic basis. As a result, the intuitionistic inquisitive logic (InqI) is introduced, which deals not only with intuitionistic statements, but also with questions and formulas that express dependencies. To this effect, the authors develop a kind of Kripke semantics for intuitionistic logic based on the notion of support at a team, rather than on the notion of truth at a possible world. Namely, having a standard intuitionistic Kripke model $$M = \langle W, R, V \rangle$$, a team in $$M$$ is defined as a set of worlds $$t \subseteq W$$. Moreover, a team $$t^\prime$$ is an extension of a team $$t$$ iff $$t^\prime \subseteq R[t]$$, where $$R[t] : = \bigcup_{w \in t} R[w]$$ ($$R[w] = \{ w^\prime : wRw^\prime \}$$). Then one defines the intuitionistic notion of support with respect to a team in a Kripke model, so that, e.g., an atomic proposition $$p$$ is supported by a team $$t$$ in $$M$$ iff $$p$$ is true at every world $$w$$ from this team: $$M, t \models p \Leftrightarrow V(w, p) = 1$$ for all $$w \in t$$. This definition is then naturally extended to compound formulas. To deal with questions one enriches the standard intuitionistic language with a new connective ‘inquisitive disjunction’ ($$\scriptstyle\mathbb{V}$$), where $$\varphi \:{\scriptstyle\mathbb{V}}\: \psi$$ is regarded as the question whether $$\varphi$$ or $$\psi$$. The support condition for inquisitive disjunction is then as follows: $$M, t \models \varphi \:{\scriptstyle\mathbb{V}}\: \psi \Leftrightarrow M, t \models \varphi \mbox{ or } M, t \models \psi$$. It turns out that a question $$\mu$$ determines another question $$\nu$$ in a team $$t$$ of a model $$M$$ iff the team $$t$$ supports the implication $$\mu \rightarrow\nu$$.
The authors introduce the notion of entailment between formulas of InqI and construct a natural deduction system which is obtained from the respective system for classical inquisitive logic by simply dropping the double negation elimination rule. This system is sound and complete with respect to the proposed semantics. Thus, the authors conclude, “the only difference between the classical and the intuitionistic version of inquisitive logic lies in the underlying logic of statements, while the relation between statements and questions is the same in both cases”.

### MSC:

03B65 Logic of natural languages
03B60 Other nonclassical logic
03B20 Subsystems of classical logic (including intuitionistic logic)
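As a small executable illustration of the team-based support semantics described in the review: the clauses for atoms and inquisitive disjunction, and the extension relation $t' \subseteq R[t]$, follow the review; the implication clause follows the usual intuitionistic pattern (every extension supporting the antecedent supports the consequent) and, like the encoding itself, is my own sketch, not a quote from the paper.

```python
from itertools import chain, combinations

# A toy Kripke model: worlds W, accessibility R, valuation V.
W = {'w1', 'w2'}
R = {'w1': {'w1', 'w2'}, 'w2': {'w2'}}
V = {'w1': {'p'}, 'w2': {'q'}}

def R_of(team):
    # R[t] = union of R[w] over w in t
    return set().union(*(R[w] for w in team)) if team else set()

def subteams(base):
    # all teams t' with t' subset of `base` (extensions of t live inside R[t])
    base = list(base)
    return chain.from_iterable(combinations(base, r) for r in range(len(base) + 1))

def supports(team, phi):
    kind = phi[0]
    if kind == 'atom':   # M,t |= p  iff  p is true at every world of the team
        return all(phi[1] in V[w] for w in team)
    if kind == 'ivee':   # inquisitive disjunction: t settles one of the disjuncts
        return supports(team, phi[1]) or supports(team, phi[2])
    if kind == 'impl':   # every extension supporting phi also supports psi
        return all(not supports(set(t2), phi[1]) or supports(set(t2), phi[2])
                   for t2 in subteams(R_of(team)))
    raise ValueError(kind)
```

For the question "p or q" (inquisitive disjunction), the team {'w2'} settles it (it supports q), while the full team {'w1', 'w2'} does not settle either disjunct.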
Dualizability in the higher Morita category

Presented by: Claudia Scheimbauer
Date: Friday 6th July 2018 - 10:00 to 11:00
Venue: INI Seminar Room 1

Abstract: In this talk I will explain how one can use geometric arguments to obtain results on dualizability in a "factorization version" of the Morita category. We will relate our results to previous dualizability results (by Douglas-Schommer-Pries-Snyder and Brochier-Jordan-Snyder on Turaev-Viro and Reshetikhin-Turaev theories). We will discuss applications of these dualizability results: one is to construct examples of low-dimensional field theories "relative" to their observables. An example will be given by Azumaya algebras, for example polynomial differential operators (Weyl algebra) in positive characteristic and its center. (This is joint work with Owen Gwilliam.)
Strict inequality for Fatou's lemma

I'm interested in knowing whether there is a condition for general measure spaces under which we know that we can only achieve the strict inequality in Fatou's lemma. I am working in the situation where $f_n \rightarrow f$ and the limit of the integrals exists, so that Fatou's lemma says $$\int f \leq \lim_{n \rightarrow \infty} \int f_n \;.$$ Is there a condition on $f_n$ and $f$ which ensures $$\int f < \lim_{n \rightarrow \infty} \int f_n \;?$$

- A standard example in the real, Lebesgue sense occurs when the pointwise limit of $f_n$ is 0 but not uniformly so, and each $|f_n|$ is positive on a set of measure greater than zero. Consider $f_n(x) = n$ for $0 \leq x \leq \frac{1}{n}$ and $f_n(x) = 0$ otherwise. Then $f_n \rightarrow 0$ pointwise a.e., but its integral from 0 to 1 is always 1. This might shed some light on the problem. – Rachel Aug 26 '11 at 12:35
- For measure spaces with finite total mass, look up "uniform integrability." – ShawnD Aug 26 '11 at 14:32

There is a nice discussion of this point in ANALYSIS by Lieb & Loss (section 1.9 of the second edition): If $f_n$ are non-negative and converge a.e. to $f$, then $$\liminf_n\int f_n = \int f +\liminf_n\int|f-f_n|$$ provided $\sup_n\int f_n<\infty$.
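Rachel's example can be checked numerically; here each $\int_0^1 f_n$ is approximated by a midpoint Riemann sum (helper names are mine):

```python
def riemann(f, a, b, steps=100_000):
    # midpoint Riemann sum of f over [a, b]
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

def f(n):
    # f_n = n on [0, 1/n], 0 elsewhere
    return lambda x: n if 0 <= x <= 1 / n else 0.0

# every f_n integrates to 1, while the pointwise (a.e.) limit is 0,
# so  int(lim f_n) = 0 < 1 = lim int(f_n):  strict inequality in Fatou
for n in (1, 2, 5, 10):
    print(n, riemann(f(n), 0, 1))   # each approximately 1.0
```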
# Good model to reward contributions

I hope you guys find this an interesting question. Imagine you want to implement a system that rewards people for their contributions (regardless of quality or any external assessment). What is a good model, considering the following conditions:

• It shouldn't reward people who contribute "excessively" only to gain whatever you're giving as a reward. For example, if you're giving an award for 5 contributions, then you can't give that award again for the sixth contribution.
• It shouldn't be so complex as to be computationally infeasible (although, if you know of something complex but interesting, I'd like to know about it).

As a clarification, the situation is such that you want to give a single award (instead of a variety of them). Maybe you know some established theory dealing with this kind of stuff. Let me know if that's the case. By the way, I searched in Google Scholar but I didn't find anything relevant. Thanks!

- I do not really think this can be considered a mathematical question until you're more specific about what you mean by "best." – Qiaochu Yuan Jun 9 '12 at 18:35
- @QiaochuYuan Don't be distracted by "best". After all, it's not an optimization problem. I just want to read good ideas about this. – Robert Smith Jun 10 '12 at 3:44
- Anyway, I changed 'best' to 'good' to avoid any confusion :-) – Robert Smith Jun 10 '12 at 3:46

Your reward function can follow the Erlang distribution, for instance. If $c$ denotes the contribution, then the reward $r(c)$ is given by $$r(c) = r_{\max} \int_{x=0}^c \dfrac{\lambda^k x^{k-1} \exp(-\lambda x)}{(k-1)!} dx$$ where $r_{\max}$ is the maximum reward one can earn and the integrand is the density of the Erlang distribution. The parameters $\lambda$ and $k$ affect the rate of decay and the shape of the distribution respectively.
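A sketch of the answer's reward function: the integral is the CDF of an Erlang($k$, $\lambda$) distribution, which has the closed form $1 - e^{-\lambda c}\sum_{i=0}^{k-1}(\lambda c)^i/i!$, so no numerical integration is needed. The particular parameter values here are arbitrary examples:

```python
import math

def reward(c, r_max=100.0, k=3, lam=0.5):
    """Reward for c contributions: r_max times the Erlang(k, lam) CDF.
    Early contributions move the reward a lot; later ones add almost nothing,
    which is exactly the desired protection against gaming the system."""
    if c <= 0:
        return 0.0
    tail = sum((lam * c) ** i / math.factorial(i) for i in range(k)) * math.exp(-lam * c)
    return r_max * (1.0 - tail)
```

The function is monotone increasing and saturates at `r_max`, so a sixth "excessive" contribution barely changes the payout.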
# Staticman Script in Pure JS

## Staticman without jQuery

The official Staticman script sample depends on jQuery. I have been using jQuery in my Staticman script for nested comments for more than six months. Recently, in the new release of Hugo Swift Theme, in which I'm a collaborator, all jQuery methods were removed at commit f137efa4. Here's the structure of the script:

1. auxiliary functions like elem, elems, pushClass, deleteClass, etc.
2. A self-executing function comments() containing
• some necessary local variables
• handleForm(), consisting merely of an event listener handling form submission.
• formToJSON(form) for converting the comment form to a JSON string.
• fetch(url, {...}).then(...).catch(...) for submitting a server request and handling its response. It calls showModal(...) and it contains resetForm().
• a conditional form ? handleForm(form) : false; to execute the above method provided the comment form exists.
• closeModal() and showModal() for closing and showing the modal respectively.
• a conditional containsClass(body, show) ? closeModal() : false;.
• a self-executing toggleForm() function.

Nesting a self-executing function inside another one limits the namespace of the variables. Even though no new variable is defined inside this function, it still needs to be wrapped in parentheses () because toggleForm() is not called elsewhere, unlike toggleMenu(). If the if(button){...} statement appeared directly at the top level, we would not know what it's doing.
There are different wrapper methods, such as Backward Elimination, Forward Selection, Bidirectional Elimination and RFE. In backward elimination, instead of starting with no features and greedily adding them, we start with all of them and greedily remove the worst-performing one. In general, a wrapper method feeds candidate features to the selected machine learning algorithm and adds or removes features based on the model's performance.

We will be selecting features using the methods listed below for the regression problem of predicting the "MEDV" column, and will provide some examples.

1. Filter method. Here we first plot the Pearson correlation heatmap and see the correlation of the independent variables with the output variable MEDV. Since LSTAT and RM are strongly correlated with each other, we keep LSTAT (its correlation with MEDV is higher than that of RM); after dropping RM, we are left with the two features LSTAT and PTRATIO. These are the final features given by Pearson correlation.

Univariate statistical tests are another filter approach. `sklearn.feature_selection.chi2(X, y)` computes chi-squared stats between each non-negative feature and class, and `SelectKBest` selects features according to the k highest scores:

    from sklearn.feature_selection import SelectKBest, chi2
    test = SelectKBest(score_func=chi2, k=4)
    X_new = test.fit_transform(X, y)

Chi-square is a very simple tool for univariate feature selection for classification; for regression, `f_regression` uses a linear model to test the individual effect of each of many regressors. Looking at the p-values, the variable 'AGE' has the highest p-value, 0.9582293, which is greater than 0.05, so we drop it and refit; repeating this until every remaining p-value is below 0.05 gives the final set of variables CRIM, ZN, CHAS, NOX, RM, DIS, RAD, TAX, PTRATIO, B and LSTAT.

2. Wrapper method. Recursive feature elimination (RFE) repeatedly fits the estimator and prunes the weakest features:

    from sklearn.feature_selection import RFE
    from sklearn.ensemble import RandomForestClassifier
    estimator = RandomForestClassifier(n_estimators=10, n_jobs=-1)
    rfe = RFE(estimator=estimator, n_features_to_select=4, step=1)
    RFeatures = rfe.fit(X, Y)

Once we fit the RFE object, we can look at the ranking of the features by their indices. With a LinearRegression estimator and 7 requested features, RFE gave the ranking above, but the selection of the number 7 was arbitrary; RFECV performs RFE in a cross-validation loop to find the optimal number of selected features automatically.

3. Embedded method. L1-penalized estimators are useful for large-scale feature selection because they produce sparse solutions: many of their estimated coefficients are exactly zero, and we select the non-zero coefficients. Here the Lasso model kept all the features except NOX, CHAS and INDUS; for a good choice of alpha, the Lasso can fully recover the exact set of relevant variables (see Richard G. Baraniuk, "Compressive Sensing", IEEE Signal Processing Magazine). `SelectFromModel(estimator, *, threshold=None, prefit=False, norm_order=1, max_features=None)` wraps any estimator that exposes a `coef_` or `feature_importances_` attribute, and in combination with the threshold criteria there are built-in heuristics for finding a threshold using a string argument.

Finally, `VarianceThreshold` is a simple baseline selector that removes all low-variance features, for example features that have the same value in all samples. Keep in mind that the resulting data are the final data after we have removed the non-significant variables. Feature selection also reduces overfitting: less redundant data means less opportunity to make decisions based on noise.
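A dependency-free sketch of what `VarianceThreshold` does, namely dropping every feature column whose variance is at or below a threshold (the function name is mine):

```python
def variance_threshold(rows, threshold=0.0):
    """Return indices of feature columns whose variance exceeds `threshold`.
    With the default 0.0 this drops features constant across all samples."""
    n = len(rows)
    kept = []
    for j in range(len(rows[0])):
        col = [row[j] for row in rows]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n   # population variance
        if var > threshold:
            kept.append(j)
    return kept

X = [[0, 2.0, 1],
     [0, 1.0, 3],
     [0, 3.0, 5]]
print(variance_threshold(X))  # [1, 2] -- column 0 is constant and gets dropped
```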
# [Feature Suggestion] Support for Creating Symlinks

Hi Mp3tag community! I just had an idea for a feature; perhaps it has been discussed here before. This would make the most sense for remixed songs. I currently use an action to populate the MIXARTIST field using the following format string:

$regexp(%title%,.+\((.*?) remix\),$1)

I use this as a way to capture the remixer's name and save it to ID3. I tend to follow up this action with a move-file action. In this case, mine looks like:

$left(C:\Music\$ansi($if(%compilation%,%artist%,%albumartist%))\$if(%compilation%,"",$validate($replace(%album%,.,-),-))\$if(%compilation%,$replace(%title%,.,_),$num(%discnumber%,1)$num(%track%,2) $replace($replace(%title%, / ,-),.,_)),128)

which basically moves the song into an artist or album-artist folder within my root Music folder. So that's neat. Now one thing I realized is that when I'm going back looking for this remix, it might be hard to remember who the original artist was and what folder the song is in; I just know the DJ who remixed the song. Or perhaps I want to keep an artist's remixes all together in one folder. So what if I could create a symlink to the original song, which is located in the original artist's folder, but the symlink could be in the MIXARTIST's folder? That would be super handy! And then I wouldn't actually be duplicating the MP3 data in both places; it'd be just one file.

It's currently possible to do this in the Windows command prompt using "mklink <symlink_filepath> <original_filepath>" (or "mklink /H" for a hard link). It would even be possible to make this cross-platform by alternatively doing this on UNIX as "ln -s <original_filepath> <symlink_filepath>". But it'd be so so so nice if Mp3tag could do this as an action type somehow.

Hope this is a possible feature that'd be super quick to implement! Would love to hear thoughts.

Cheers, Aswin

You can do this using the tools-function of Mp3tag. I use it together with the command-line version of "Link Shell Extension".
https://schinagl.priv.at/nt/ln/ln.html

Thanks for the suggestion! I've never made a user-defined tool in Mp3tag before. I've got Git Bash for Windows installed on my machine, so I'm going to use "ln.exe" from there. What do I do at this step though? Does the "Parameter" line allow for ID3 fields to be used? How do I reference MIXARTIST, say, for example?

You seem to understand coding much better than I do. Every time I have a problem to solve, I bite into it until I have the solution. Afterwards I unfortunately forget some things again. So I don't quite know why I did it in this special way some years ago - using a batch file instead of a direct solution - and whether this was necessary. Here is an example of how I create an alternative folder structure for tribute albums. The tag-field "ORIGARTIST" holds the tribute artist.

Name: Tribute
Path: C:\windows\system32\cmd.exe
Parameter: /D/Q/U/C c:\batches\ln.cmd "%_folderpath%" "%_directory%" "d:\mp3s\symlinks\Tributes\%origartist%\$validate(%albumartist%, -) -$validate(%album%,)"

And here is the batch file "ln.cmd", containing some extra lines for testing. I am sure you have the skill to change the code to your personal needs.

@echo off
Echo.P0="%~0"
Echo.P1="%~1"
Echo.P2="%~2"
Echo.P3="%~3"
pause
cd ..
"c:\program files\ln\ln.exe" --symbolic "%~2" "%~3"
exit /B
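If you want to experiment with the link commands outside Mp3tag first, here is a plain shell sketch; all paths are made-up stand-ins for the tag-derived ones discussed above:

```shell
# Made-up example paths standing in for the tag-based ones Mp3tag would build.
orig="/tmp/demo/Music/Original Artist/Some Song (DJ X remix).mp3"
linkdir="/tmp/demo/Music/_remixers/DJ X"

mkdir -p "$(dirname "$orig")" "$linkdir"
printf 'fake mp3 data' > "$orig"

# Symbolic link: ln takes the TARGET first, then the link name
# (the reverse of mklink's <link> <target> order).
ln -s "$orig" "$linkdir/Some Song (DJ X remix).mp3"

readlink "$linkdir/Some Song (DJ X remix).mp3"   # resolves to the original
cat "$linkdir/Some Song (DJ X remix).mp3"        # reads the original's bytes
```

A hard link (`ln` without `-s`; `mklink /H` on Windows) would instead give both directory entries equal claim to the same file data: it keeps working even if the original entry is later deleted or renamed, but it cannot cross filesystems the way a symlink can.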
How to implement Bifurcation of delayed Lorenz system? [closed]

The delayed Lorenz system is as follows (bifurcation parameter $\tau$):

\begin{align*} D^\alpha x_1(t) &= a_1(x_2(t-\tau)-x_1(t)) \\ D^\alpha x_2(t) &= a_2x_1(t)-x_2(t)-x_1(t)x_3(t) \\ D^\alpha x_3(t) &= -a_3x_3(t-\tau)+\frac{1}{2}x_1(t)x_2(t) \end{align*}

closed as off-topic by m_goldberg, user9660, MarcoB, Yves Klett, RunnyKine Mar 20 '16 at 20:57

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "This question cannot be answered without additional information. Questions on problems in code must describe the specific problem and include valid code to reproduce it. Any data used for programming examples should be embedded in the question or code to generate the (fake) data must be included." – m_goldberg, Community, MarcoB, Yves Klett, RunnyKine

If this question can be reworded to fit the rules in the help center, please edit the question.

• If you are looking for a numerical solution to this, you need to specify the values of the a's. It seems that \alpha is a fractional derivative parameter, so again you need to assign a certain value to it. – zhk Mar 20 '16 at 8:56
• a_1=10, a_2=28, a_3=8/3, \alpha=1. That is an ordinary system of delay differential equations. – G Velmurugan Mar 20 '16 at 9:43
• This question seems incomplete. It states a system of equations, but does not tell what kind of results are wanted. Please supply more information. Also, give the values for constants in the main post, not in a comment. – m_goldberg Mar 20 '16 at 12:57
• This question was already put here once last year – user36273 Mar 20 '16 at 17:21

Well, to be honest, you did not provide any information, as also mentioned by @m_goldberg. Anyway, I chose random initial conditions and a random value for the delay, tau = 1.
a1 = 10; a2 = 28; a3 = 8/3; alpha = 1; \[Tau] = 1;
sol = First[
  NDSolve[{x1'[t] == a1*(x2[t - \[Tau]] - x1[t]), x1[t /; t <= 0] == 3,
    x2'[t] == a2*x1[t] - x2[t] - x1[t]*x3[t], x2[t /; t <= 0] == 6,
    x3'[t] == -a3*x3[t - \[Tau]] + 1/2*x1[t]*x2[t], x3[t /; t <= 0] == 3},
   {x1, x2, x3}, {t, 0, 200}]];

Plot[Evaluate[{x1[t], x2[t], x3[t]} /. sol], {t, 0, 200}, PlotStyle -> {Thick}, Frame -> True]

Note: Next time provide complete information, not just a bunch of equations.

Addition (space curve for {x1[t], x2[t], x3[t]}):

ParametricPlot3D[{x1[t], x2[t], x3[t]} /. sol, {t, 0, 150}, PlotStyle -> {Orange, Thickness[0.015]}, BoxRatios -> {1, 1, 1}, AxesLabel -> {x1, x2, x3}]

• a_1=10, a_2=28, a_3=8/3, \alpha=1. That is an ordinary delay Lorenz system. I really want to draw the bifurcation diagram with varying delay parameter (\tau=0.1 to 1.0). – G Velmurugan Mar 21 '16 at 2:32
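For readers who want to see how the delay is actually handled numerically, here is a minimal constant-history sketch in Python. This is a deliberately crude forward-Euler illustration, not the adaptive method NDSolve uses; the parameter values and initial history are the ones quoted in the comments (alpha = 1):

```python
# Forward-Euler sketch of the delayed Lorenz system with alpha = 1.
# Delayed terms x2(t - tau), x3(t - tau) are read from a stored history buffer.
a1, a2, a3 = 10.0, 28.0, 8.0 / 3.0
tau, dt = 1.0, 0.0005
lag = int(round(tau / dt))            # steps spanning one delay interval

# Constant history for t <= 0, matching the thread: x1 = 3, x2 = 6, x3 = 3.
hist = [(3.0, 6.0, 3.0)] * (lag + 1)

def deriv(state, delayed):
    x1, x2, x3 = state
    _, x2_d, x3_d = delayed           # only x2 and x3 appear with delay
    return (a1 * (x2_d - x1),
            a2 * x1 - x2 - x1 * x3,
            -a3 * x3_d + 0.5 * x1 * x2)

for _ in range(int(3.0 / dt)):        # integrate to t = 3
    d = deriv(hist[-1], hist[-1 - lag])
    hist.append(tuple(s + dt * ds for s, ds in zip(hist[-1], d)))

print(hist[-1])                       # state at t = 3
```

Sweeping tau over [0.1, 1.0], discarding a transient, and recording, say, the local maxima of x1 for each tau would give the bifurcation diagram the question asks about.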
Can we find the 18 imaginary quadratic fields with class number 2 algorithmically? I am reading about the class number problem. There is a well-known complete list of imaginary quadratic fields $\mathbb{Q}(\sqrt{-d})$ with class number $1$. I found a paper by Stark that says there are exactly 18 such fields with class number $2$. I don't have access. The comments indicate this was easy to find using Google? The paper he points to was written in 1998. Since I am totally clueless: is it possible to compute this result, e.g. using continued fractions? By Moore's law, the computing power available today, say in a smartphone, exceeds what was available to the authors of that paper. Wikipedia has the Dirichlet class number formula $$h(d) = \frac{w\sqrt{d}}{2\pi} L(1, \chi)$$ and there is even a formula for $L(1, \chi)$, which nobody seems to understand.
• The first version of your question read like you wanted only the list. If you want to find them algorithmically you can use Pari, and see Henri Cohen's book for reference on the algorithms. – Bill Dubuque May 27 '15 at 0:02
• The reference for the algorithms in Pari is Henri Cohen's A Course in Computational Algebraic Number Theory. There you will find everything you seek (and much more!) – Bill Dubuque May 27 '15 at 0:22
• @BillDubuque it is indeed; don't be surprised if I revise this question or post another one asking for more details – cactus314 May 27 '15 at 0:30
• Just for the sake of reference: oeis.org/A005847 – David R. May 28 '15 at 21:41
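One elementary approach (cruder than the continued-fraction idea in the question and than the Pari routines mentioned in the comments) is to count reduced binary quadratic forms: for a fundamental discriminant $D < 0$, the number of reduced forms $ax^2+bxy+cy^2$ with $b^2-4ac=D$ equals the ideal class number $h(D)$. The sketch below uses this; note that terminating the search at $|D| < 500$ relies on the Baker/Stark theorem that the lists are complete by $|D| = 163$ (class number 1) and $|D| = 427$ (class number 2), which is exactly the hard part the question is about:

```python
def class_number(D):
    """Class number h(D) for a fundamental discriminant D < 0, computed by
    counting reduced binary quadratic forms a*x^2 + b*x*y + c*y^2 with
    b^2 - 4ac = D, i.e. |b| <= a <= c (and b >= 0 when |b| == a or a == c)."""
    h = 0
    b = D % 2                      # b has the same parity as D
    while 3 * b * b <= -D:         # reduced forms satisfy 3b^2 <= |D|
        m = (b * b - D) // 4       # m = a * c
        a = max(b, 1)
        while a * a <= m:
            if m % a == 0:
                # (a, b, m // a) is reduced; (a, -b, m // a) is a distinct
                # class unless b == 0, a == b, or a == c.
                h += 1 if (b == 0 or a == b or a * a == m) else 2
            a += 1
        b += 2
    return h

def is_fundamental(D):
    """True if D < 0 is a fundamental discriminant."""
    def squarefree(n):
        i = 2
        while i * i <= n:
            if n % (i * i) == 0:
                return False
            i += 1
        return True
    if D % 4 == 1:
        return squarefree(-D)
    if D % 4 == 0:
        m = D // 4
        return m % 4 in (2, 3) and squarefree(-m)
    return False

# Scanning |D| < 500 covers both classical lists: the nine class-number-1
# discriminants end at -163, and Stark's eighteen class-number-2
# discriminants end at -427.
print([D for D in range(-1, -500, -1) if is_fundamental(D) and class_number(D) == 1])
print([D for D in range(-1, -500, -1) if is_fundamental(D) and class_number(D) == 2])
```

In other words, the enumeration itself is easy on any modern machine; what required Baker's and Stark's work was proving that no further discriminants exist beyond the search bound.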
# GRE Math : How to divide negative numbers ## Example Questions ### Example Question #1 : How To Divide Negative Numbers Find the value of . Explanation: To solve for , divide each side of the equation by -2. is the same as which is POSITIVE ### Example Question #2 : How To Divide Negative Numbers What is ? 45 Explanation: A negative number divided by a negative number always results in a positive number.  divided by  equals . Since the answer is positive, the answer cannot be  or any other negative number. ### Example Question #1 : Negative Numbers Solve for : Explanation: Subtract  from both sides: , or Next, subtract  from both sides: , or Then, divide both sides by : Recall that division of a negative by a negative gives you a positive, therefore: or ### Example Question #71 : Integers Solve for : Explanation: To solve this equation, you need to isolate the variable on one side. We can accomplish this by dividing by  on both sides: Anytime you divide, if the signs are the same (i.e. two positive, or two negative), you'll get a positive result.  If the signs are opposites (i.e. one positive, one negative) then you get a negative. Both of the numbers here are negative, so we will have a positive result: Solve for :
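The sign rule the explanations above rely on (same signs give a positive quotient, opposite signs a negative one) can be checked mechanically; the numbers below are illustrative, not taken from the original questions:

```python
# Quotient sign rules with illustrative numbers:
# same signs -> positive, opposite signs -> negative.
for num, den in [(-45, -5), (45, 5), (-45, 5), (45, -5)]:
    q = num // den                  # exact here, since 5 divides 45
    assert (q > 0) == ((num > 0) == (den > 0))
    print(f"{num} / {den} = {q}")
```

So -45 divided by -5 is positive 9, while mixing one negative with one positive flips the quotient to -9.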
# print the name of the variable in a list without evaluation a := az + 1 b := bz + 5 list := {a, b} I'd like mathematica to print a = az + 1 b = bz + 5 so basically it needs to first print the name of the variable in the list, followed by "=", and followed by the actual content of the variable. update (1): so here is something close to what I want a := az + 1 b := bz + 5 list := {Hold@a, Hold@b} Column[Table[Print[list[[i]], "=", ReleaseHold@list[[i]]], {i, 1, 2}]] outputs: Hold[a]=1+az Hold[b]=5+bz However, I don't know how to get rid of Hold[]. I'm also hoping there is a more elegant way. - HoldForm works like Hold except that it isn't printed. Therefore I guess replacing Hold with HoldForm should satisfy your needs. –  celtschk Apr 27 '12 at 8:07 Also SetAttributes[f, HoldAll]; f[x_] := #[[1]] <> "=" <> #[[2]] &@ StringSplit[ToString@Definition@x, ":="] - I like the use of Definition which is not new to me but is nonetheless something that I don't usually consider +1. –  Andy Ross Apr 27 '12 at 5:42 this one is a little hard to understand. can you add more info? # is the argument of a pure function, <> is string joint. What does &@ mean?I can see it starts the pure function. –  kirill_igum Apr 27 '12 at 5:55 @kirill_igum The function #[[1]] <> "=" <> #[[2]] & takes it's argument from the list StringSplit[ToString@Definition@x, ":="] –  belisarius Apr 27 '12 at 5:59 StringReplace[ToString[Definition[x]], ":=" -> "="] is simpler :) –  rm -rf Apr 27 '12 at 5:59 @R.M yep. you are right! –  belisarius Apr 27 '12 at 6:01 StringForm[" = \n = ", HoldForm[a], a, HoldForm[b], b] or StringForm[" = \n = ", Defer[a], a, Defer[b], b] both give EDIT: To deal with the az=1 issue noted by belisarius, i steal Andy's OwnValues-based approach with a slight variation: SetAttributes[prntHF, {HoldAll, Listable}]; prntHF[sym_] := (OwnValues[sym] /. 
{RuleDelayed[Verbatim[HoldPattern][lhs_], rhs_]} :> {HoldForm[lhs], HoldForm[rhs]}) with StringForm StringForm[" 1 = 2 \n 3 = 4\n5 = 6", Sequence @@ Sequence @@@ prntHF[{a, b, az}]] to get Of course, another version SetAttributes[prntHF2, {HoldAll, Listable}]; prntHF2[sym_] := (OwnValues[sym] /. {RuleDelayed[Verbatim[HoldPattern][lhs_], rhs_]} :> Row[{HoldForm[lhs], " = ", HoldForm[rhs]}]) would be much easier to use: Column[prntHF2[{a, b, az}]] - HoldForm is a very useful trick for this sort of thing. I'm surprised my own answer didn't include it :) +1 –  Andy Ross Apr 27 '12 at 5:44 +1 This reminded me why I hate the controlstrings of StringForm[] –  belisarius Apr 27 '12 at 5:49 @Andy, thank you for the vote. Actually, I was just thinking about OwnValues the same way :) –  kguler Apr 27 '12 at 5:49 mmm does not work if you set az=1 –  belisarius Apr 27 '12 at 5:52 @belisarius, me too:) One those things that I need to check docs every time I use it. –  kguler Apr 27 '12 at 5:55 This seems to work. I'm doubtful that it is very robust though. SetAttributes[printVar, {HoldAll, Listable}] printVar[a_] := Row[{Defer[a], " = ", OwnValues[a][[1, 2]]}] For example... Column[printVar[{a,b}]] ==> a = 1 + az b = 5 + bz Edit: Due to @belisarius' comment regarding setting az to some value. printVar[a_] := Row[{Defer[a], " = ", OwnValues[a] /. {RuleDelayed[_, expr_]} :> HoldForm[expr]}] Which is admittedly sort of ugly. -
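As an aside for readers outside Mathematica: the "print the name together with the value" problem is common enough that some languages build it in. A Python sketch (arbitrary concrete values; Python evaluates eagerly, so there is no analogue of the symbolic Hold/OwnValues machinery above):

```python
# Arbitrary concrete values; Mathematica would keep az, bz symbolic.
az, bz = 7, 11
a = az + 1
b = bz + 5

# The '=' specifier (Python 3.8+) echoes the expression's source text
# before its value, covering the "name = value" printing use case.
print(f"{a=}")         # a=8
print(f"{b=}")         # b=16
print(f"{az + 1=}")    # az + 1=8
```

The specifier preserves the expression text exactly as written, including spaces, which is why `f"{az + 1=}"` prints `az + 1=8`.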
# derivation of geometric mean as the limit of the power mean

Fix $x_{1},x_{2},\ldots,x_{n}\in\mathbb{R}^{+}$. Then let

$\mu(r):=\left(\frac{x_{1}^{r}+\cdots+x_{n}^{r}}{n}\right)^{1/r}.$

For $r\neq 0$, by definition $\mu(r)$ is the $r$th power mean of the $x_{i}$. It is also clear that $\mu(r)$ is a differentiable function for $r\neq 0$. What is $\lim_{r\to 0}\mu(r)$?

We will first calculate $\lim_{r\to 0}\log\mu(r)$ using l'Hôpital's rule (http://planetmath.org/LHpitalsRule).

\begin{align*} \lim_{r\to 0}\log\mu(r) &=\lim_{r\to 0}\frac{\log\left(\frac{x_{1}^{r}+\cdots+x_{n}^{r}}{n}\right)}{r} \\ &=\lim_{r\to 0}\frac{\left(\frac{x_{1}^{r}\log x_{1}+\cdots+x_{n}^{r}\log x_{n}}{n}\right)}{\left(\frac{x_{1}^{r}+\cdots+x_{n}^{r}}{n}\right)} \\ &=\lim_{r\to 0}\frac{x_{1}^{r}\log x_{1}+\cdots+x_{n}^{r}\log x_{n}}{x_{1}^{r}+\cdots+x_{n}^{r}} \\ &=\frac{\log x_{1}+\cdots+\log x_{n}}{n} \\ &=\log\sqrt[n]{x_{1}\cdots x_{n}}. \end{align*}

It follows immediately that

$\lim_{r\to 0}\left(\frac{x_{1}^{r}+\cdots+x_{n}^{r}}{n}\right)^{1/r}=\sqrt[n]{x_{1}\cdots x_{n}}.$

Title: derivation of geometric mean as the limit of the power mean
Canonical name: DerivationOfGeometricMeanAsTheLimitOfThePowerMean
Date of creation: 2013-03-22 14:17:13
Last modified on: 2013-03-22 14:17:13
Owner: Mathprof (13753)
Last modified by: Mathprof (13753)
Numerical id: 8
Author: Mathprof (13753)
Entry type: Derivation
Classification: msc 26D15
Related topics: LHpitalsRule, PowerMean, WeightedPowerMean, ArithmeticGeometricMeansInequality, ArithmeticMean, GeometricMean, DerivationOfZerothWeightedPowerMean
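The limit can also be checked numerically; a quick sketch with an arbitrary positive sample:

```python
import math

xs = [2.0, 3.0, 7.0]                          # arbitrary positive sample

def power_mean(r, xs):
    """The r-th power mean mu(r), for r != 0."""
    return (sum(x ** r for x in xs) / len(xs)) ** (1.0 / r)

geometric = math.prod(xs) ** (1.0 / len(xs))  # n-th root of the product

# mu(r) approaches the geometric mean as r -> 0.
for r in (1.0, 0.1, 0.001, 1e-6):
    print(f"mu({r:g}) = {power_mean(r, xs):.10f}")
print(f"geometric mean = {geometric:.10f}")
```

For this sample the successive values decrease toward the geometric mean, consistent with the fact that the power mean is increasing in $r$.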
Radiative/Convective Boundary Conditions for Heat Equation

1. Mar 27, 2013

sharpybox

Hi everyone, I'm attempting to create a computer program to solve the transient 3d heat equation using the Crank Nicolson method. I would like to model the boundaries of my domain as losing heat via convection and radiation due to the temperature difference between the boundary and the air in which the system I am modelling resides, but would like to check I have the correct method for incorporating these modes of heat transfer into my model. At the moment for each internal node I have the following finite difference scheme:

$(1+6\mu)T^{t+1}_{i,j,k} - \mu(T^{t+1}_{i+1,j,k} + T^{t+1}_{i-1,j,k} + T^{t+1}_{i,j+1,k} + T^{t+1}_{i,j-1,k} + T^{t+1}_{i,j,k+1} + T^{t+1}_{i,j,k-1}) = (1-6\mu)T^{t}_{i,j,k} + \mu(T^{t}_{i+1,j,k} + T^{t}_{i-1,j,k} + T^{t}_{i,j+1,k} + T^{t}_{i,j-1,k} + T^{t}_{i,j,k+1} + T^{t}_{i,j,k-1})$

where $T$ represents the temperature field and $\mu = \frac{t\alpha}{2h^{2}}$ ($t$ = time step, $\alpha$ = thermal diffusivity and $h$ = step size in x/y/z dimensions).

As I understand it this type of boundary condition is a Robin (mixed) condition and can be represented by (assuming a 1d case along the x=0 boundary):

$-k\frac{\partial T}{\partial x} = h_c(T-T_{a}) + \epsilon\sigma(T^{4}-T^{4}_{a})$

($h_c$ = convective heat transfer coefficient, $k$ = thermal conductivity, $\epsilon$ = emissivity, $\sigma$ = Stefan-Boltzmann constant, $T$ = node temperature and $T_{a}$ is the ambient temperature.)

Applying a central difference approximation to the derivative at node $T_{0,j,k}$ yields:

$-k\frac{T^{t}_{1,j,k} - T^{t}_{-1,j,k}}{2h} = h_c(T^{t}_{0,j,k}-T_{a}) + \epsilon\sigma((T^{t}_{0,j,k})^{4}-T^{4}_{a})$

$T^{t}_{-1,j,k} = T^{t}_{1,j,k} + \frac{2h\left(h_c(T^{t}_{0,j,k} - T_{a}) + \epsilon\sigma((T^{t}_{0,j,k})^{4} - T^{4}_{a})\right)}{k}$

Am I correct in thinking that the statement above is then substituted into the original Crank Nicolson FD scheme quoted earlier in place of the $T_{i-1,j,k}$ node for this boundary?
Is the method the same when considering the x = maximum boundary when it is the $T_{i+1,j,k}$ node that needs replacing? And finally is it necessary to repeat the process at the t+1th time step as well as the time t?
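As a concrete illustration of the ghost-node idea, here is a simplified 1-D sketch in Python. It deliberately uses forward Euler in time instead of Crank-Nicolson (so no linear solve is needed), made-up material parameters, and the sign convention that the loss condition at $x=0$ reads $k\,\partial T/\partial x = q$ with $q$ the convective-plus-radiative flux out of the body:

```python
# 1-D explicit (FTCS) sketch of the ghost-node treatment of a convective
# plus radiative boundary at x = 0, with an insulated far end at x = L.
# All parameter values are illustrative, not taken from the thread.
k = 50.0                      # thermal conductivity, W/(m K)
alpha = 1e-5                  # thermal diffusivity, m^2/s
hc = 10.0                     # convective coefficient, W/(m^2 K)
eps, sigma = 0.8, 5.670e-8    # emissivity, Stefan-Boltzmann constant
Ta = 300.0                    # ambient temperature, K
h, dt, nx = 0.01, 0.5, 21     # grid step, time step, node count
mu = alpha * dt / h ** 2
assert mu <= 0.5              # explicit stability limit

T = [400.0] * nx              # uniform initial temperature

for _ in range(2000):         # integrate to t = 1000 s
    # Heat flux leaving through x = 0; convention: k * dT/dx = q there.
    q = hc * (T[0] - Ta) + eps * sigma * (T[0] ** 4 - Ta ** 4)
    ghost = T[1] - 2.0 * h * q / k        # ghost value at the fictitious node
    Tn = T[:]
    Tn[0] = T[0] + mu * (T[1] - 2.0 * T[0] + ghost)
    for i in range(1, nx - 1):
        Tn[i] = T[i] + mu * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    Tn[-1] = T[-1] + mu * (2.0 * T[-2] - 2.0 * T[-1])  # insulated: ghost = T[-2]
    T = Tn

print(f"boundary {T[0]:.2f} K, far end {T[-1]:.2f} K")
```

In an implicit Crank-Nicolson version the same ghost expression is substituted at both time levels, so yes, the process is repeated at $t+1$ as well as $t$; because of the $T^4$ term the boundary row then becomes nonlinear in $T_0$, and a common workaround is to evaluate the radiative term at the known time level or iterate within each step.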
STEVEN CHU is heading home on a bright day in October. His motorcade of government cars powers up the slope of Cyclotron Road, past the fragrant stands of eucalyptus and through the guard station at the entrance of Lawrence Berkeley National Laboratory. The vehicles continue along Chu Road and come to a stop near the top of the hill. The man after whom the road is named heads into Building 50, which housed his office for the five years that he ran this laboratory overlooking the University of California, Berkeley. Inside an auditorium, 225 former colleagues await his arrival. Some wear suits; others slouch in hooded sweatshirts and sandals. There is an eager anticipation in the air, and moments before Chu arrives, the crowd grows quiet. Orange-vested security guards, armed with walkie-talkies, open the doors, and Chu walks down to the podium, his entourage trailing. "It's very good to be back here," he says, flipping open his computer. "You people know I do my own PowerPoints. That has not changed." He launches headlong into a fast-paced and scattered talk that leaps across dozens of topics, all under the banner of climate change. He clicks ahead to the crucial slide — the one that shows actual measurements of rising global temperatures outpacing what would be expected without all the carbon dioxide that humans have spewed into the atmosphere. "Here's the evidence," he says. "I have to play this over and over again." Such is his task back in Washington DC, where Chu now works as Secretary of the Department of Energy (DOE) and a member of President Barack Obama's cabinet — the first Nobel-prizewinning scientist to hold such a high office in the US government. He is charged with transforming the world's biggest energy economy, and he has assumed the role of persuader-in-chief, trotting before Congress to explain the science of climate change and his plans for combating it.
Meeting regularly with representatives and senators, he targets sceptics and walks them through the data. "I say, 'Come to my office and we'll talk about it'," he explains. "At the very least you can put a little doubt in their minds. If they're so sure it's natural causes, they may be less sure." It helps to have a Nobel prize, he adds. In confronting what he sees as the most pressing problem facing the world today, Chu looks back in time to chart a way forwards. The Berkeley lab he once ran is the descendant of the Radiation Laboratory, where the physicist Ernest Lawrence helped find ways to enrich uranium for the Manhattan Project. Chemist Glenn Seaborg's team discovered plutonium there, and theoretical physicist Robert Oppenheimer worked just down the hill before heading into the New Mexico mountains to build the first nuclear bombs. Chu plans to tackle climate change by reviving the scientific and technological urgency of the Manhattan Project — enlisting some of the nation's best minds to find a way to power the world without ruining it. His plans start at home, where he is trying to push the ponderous DOE to support riskier research that could yield huge dividends. With a budget of US$27 billion, the department runs 17 national laboratories, oversees America's nuclear stockpile and manages the environmental clean-up after the early nuclear age. It is the largest source of funds for physical-science research in the United States, and this year Chu had a much bigger pot to dole out. Just one month into his tenure, Congress gave the agency $37 billion in economic stimulus money — funds that Chu is steering towards renewable energy, nuclear power, carbon-sequestration pilot plants and projects to modernize the electric grid, all of which should help to solve the climate problem. "They say that necessity is the mother of invention and this is the mother of all necessities," he says. "So we're going to get the mother of all inventions.
And it's not going to be just one, it has to be many."

Hands-on manager

In the 1980s, Chu made his name scientifically by trapping atoms using lasers tuned with the utmost precision. Now he is applying that same mastery of detail to a vastly more complex system: an agency of 100,000 people working on all aspects of energy and nuclear issues. Some Washington veterans have questioned whether Chu's research talent and hands-on style of management will serve him well, both at the DOE and amid the harsh political environment of the nation's capital. He has made some mistakes, notably in his dealings with Congress. But nearly a year into his tenure, Chu has proved that he is a quick learner. He has established himself as a voice that can be trusted by politicians of various stripes. He has helped to bridge international divides, particularly between the United States and China. And he has lured some top scientists from industry and universities to join him at the DOE in his quest. Carol Browner, Obama's climate tsar, works often with Chu as part of the president's 'green cabinet', a group of senior officials who oversee environmental matters. "I think he's going to turn out to be the best energy secretary ever," she says. Praise also flows from some Republican politicians. Samuel Bodman, who led the DOE for former president George W. Bush, says that Chu has "shown skills as a manager. I think it was an inspired choice by the president to pick him." Growing up in a New York suburb during the 1950s, Chu and his two brothers learned quickly that academic excellence — and competition — were family traditions. The boys would watch College Bowl, a 1960s television quiz show, and "the three of us would shout out answers and try to beat the contestants", recalls Morgan Chu, the youngest brother and a high-profile lawyer in California.

Clockwise from top left: the Nobel call, biking to work this year and time at Bell Labs.

Chu's father and mother fled China during the Second World War and both did graduate work at the Massachusetts Institute of Technology (MIT) in Cambridge. The eldest son, Gilbert, followed the path of academic prestige — accumulating science degrees from Princeton University in New Jersey and MIT before gaining an MD from Harvard University in Cambridge, Massachusetts. Morgan did a PhD in social science before heading to Harvard Law School. Steven, on the other hand, was the A-minus student who favoured tinkering over schoolwork. In a family of Ivy Leaguers, he says he was the "academic black sheep", who settled for the University of Rochester in New York, where he studied mathematics and physics. Family pressures, he says, drove him — and frustrated him — early on, but once at Rochester, his facility for science flourished. "All of a sudden, the things they wanted me to do were very natural," he says. On entering graduate school at Berkeley in 1970, Chu began a love affair with lasers. The work that was once a chore became the focus of an obsessive energy. "I've never been that good at apportioning time," he says. "When I got really excited about something, I would dig into it. It turns out that is a quality that the best researchers have." Another Berkeley graduate student, Phil Bucksbaum, recalled nearly getting into a fist fight with Chu because he was being "bossy about the lasers", until a third student, who had studied with Chu at Rochester, explained to Bucksbaum: "It's the way he always has been. Focused and brusque," says Bucksbaum. Chu's graduate work using polarized light to probe atomic transitions was good enough for him to get a job at Bell Labs in New Jersey, then a utopia for basic research. Chu thrived there, but he also made sacrifices. As his work progressed, he spent more time away from home, says his ex-wife, Lisa Chu-Thielbar.
Sometimes, she would smuggle his first son, Geoffrey, under her overcoat onto the laboratory campus to catch some time with his father. "He was always a scientist first and a father second," says Chu's second son, Michael, who doesn't fault his father for the singular focus that allowed him to achieve so much. "The ambition was all intellectual and scientific. Steve never cared about money. He didn't even care about advancement," says Chu-Thielbar. After seven years at Bell Labs, Chu had a key insight in 1985 into how to trap atoms. He crossed six lasers to form what he called "optical molasses", a goo of photons. It slowed atoms nearly to a standstill, making them sluggish enough to be held by the electromagnetic forces of an additional laser. A year later, in the winter of 1986, Chu glimpsed the foundation of his Nobel prize through the windows of a vacuum chamber. Sodium atoms, cooled in optical molasses to 240 millionths of a degree above absolute zero, grew bright orange as they fell, one by one, into a trap the size of a sand grain. A colour photo, the first ever published in Physical Review Letters, provided the proof of his success (S. Chu, J. E. Bjorkholm, A. Ashkin and A. Cable Phys. Rev. Lett. 57, 314–317; 1986). The work would spawn applications across several disciplines. It provided biologists with 'optical tweezers' — ways to manipulate individual biomolecules, such as DNA. And it gave other atomic scientists the tools to create Bose–Einstein condensates, the super-cooled states of matter that can trap light and bring photons to a standstill, in a reversal of Chu's original technique. By 1987, Chu was ready to move back to academia. He had offers at Harvard and Berkeley, but was intrigued at the idea of helping to build up a less-celebrated physics department at Stanford University in Palo Alto, California. It was a good plan. 
Stanford soon became a powerhouse; beginning in 1995, physicists there would win four Nobel prizes in a row, including Chu in 1997. While at Stanford, Chu started to push off in new directions, personally and professionally. He divorced Chu-Thielbar and married Jean Fetter, a physicist and former dean of admissions at Stanford. He took on graduate students with interests in biology and helped to convince the Stanford administration to build a $150-million biophysics centre. But in 2004, just after that centre was completed, the Lawrence Berkeley National Laboratory (LBNL) came calling. Chu, who had never managed anything bigger than a physics department, was ready to make the leap to running the laboratory, which now has 4,000 employees and a $650-million budget. He showed his mettle early on, pushing the University of California system, which manages the LBNL, to use its debt service in an unprecedented way to finance new buildings for the lab, and fighting to save employee pension plans. Chu personally argued on behalf of his employees with the president of the University of California system until he relented, says Graham Fleming, a chemist at Berkeley and Chu's deputy at the time. "If one argument didn't work, he'd try another," he adds. Chu says that there was no one moment when he decided to devote himself full time to climate and energy puzzles. He had been digesting the science for years, reading reports of the Intergovernmental Panel on Climate Change. And he had pursued energy efficiency in his own life with his customary precision, complaining when workers skimped on insulation in his Stanford home. But soon after arriving at the LBNL, he decided that the time was ripe to resurrect an energy-research programme that had lain largely dormant since the fuel crisis of the 1970s. The lab was ready to revive those efforts, but it needed Chu's energy and vision, says Paul Alivisatos, who succeeded Chu as the head of the LBNL.
"It's a bit like a supersaturated solution that you drop a seed crystal into," he says. "Steve was the seed crystal."

Chu was sworn in the day after President Obama.

Chu gave the sprawling lab a purpose and convinced many scientists to make the switch, as he had, into energy research. He attracted large infusions of funding from the DOE and from the energy company BP. The lab launched major initiatives in biofuels and photovoltaics, but Chu also got involved in the little stuff. Alivisatos recalls Chu's interest in revamping a system of old, lumbering shuttle buses that circle the heights of the Berkeley Hills. Chu would stand on the balcony of the director's office and keep tallies of riders at a bus stop. "He thinks at an incredibly high level, but he also delves down into the finest detail," says Alivisatos. "And one of his abilities is to find the salient detail that matters enormously to the big picture and to show you how those connect." But some thought that Chu went too far by micromanaging lab operations. "He doesn't see the necessity to get other people involved," says one scientist who knows him well but did not want to be identified as criticizing an official who controls so much research funding. "His whole career has been founded on his fantastic ability to worry about all the details himself. And that makes it hard for him to empower an effective staff." Obama's election, and his campaign pledges to revamp the US energy system, created new opportunities for Chu. A few weeks after the election, Chu flew to Chicago to meet with the president-elect. "A lot of people are telling me you're the person for the DOE," said Obama, according to Chu. Rarely at a loss for words, Chu could only think to quip, "Who are these former friends of mine?" Inspired by a sense of service, Chu planned to accept the job, but he did have a demand.
He had seen the energy department hamstrung in the past by ineffectual people placed in posts to satisfy political obligations, so he wanted control over senior appointments. "There was a reasonable shot I could attract the right people. There are a whole bunch of people that have to lift this load," Chu says. Obama agreed, and Chu has recruited top talent such as Steven Koonin, former chief scientist of BP and provost of the California Institute of Technology in Pasadena, who is now undersecretary for science. On the top floor of the energy department, portraits of secretaries past preside over the long, carpeted hallway leading from the elevators to the secretary's office. Most are of career politicians, with a few exceptions: Charles Duncan, who ran his family's coffee company, Donald Hodel, who would go on to lead two Christian evangelical groups, and James Edwards, a dentist. Bodman, Chu's predecessor, has an engineering degree from MIT. But Chu is the first scientist to lead an agency that has such an important role in physical-science research. On an end table in his waiting room lie some recent biophysics papers on which Chu is a co-author. Chu puts the papers out to make a point — to visitors and himself — that he is still a working scientist. During his time at the LBNL, he kept a small research group of Stanford and Berkeley students, holding group meetings on Friday nights and on weekends. Even now, he says, he finds a little time for research during plane flights. Although Chu's October visit to the LBNL was his first lab-wide talk, he had visited before to check in with postdocs and meet new, young scientists — trips that Alivisatos calls Chu's "science vacations". During an interview at his office, Chu settles into the centre of a couch, his back to an expansive view of Washington DC's mall and the Smithsonian Castle.
At 61, Chu is resolutely trim. Although he no longer commutes to work by bike, as he often did at Berkeley, he manages long weekend rides and regularly climbs the seven flights of stairs to his office. Chu leans back when he listens, which is often, and leans forward when making a point. He is quick to crack a joke and eager to please. At least, he's that way with politicians (and reporters). With scientists, he can be impatient. "He does not suffer fools," says Michael Levi, an astrophysicist at the LBNL. Chu retains a scientist's candour — and that can sometimes get him into trouble. At his confirmation hearing, some senators jumped on Chu for calling coal "his worst nightmare" in a 2007 talk. (Chu says that the United States, China and India are unlikely to turn their backs on their huge coal reserves and that underscores the need to find clean ways to use the fuel.) A month after taking office, Chu slipped when he told reporters that it was "not in his domain" whether the Organization of the Petroleum Exporting Countries (OPEC) should cut oil production, an impolitic statement. He acknowledges that he was surprised at how his words have been magnified by the press. Yet his inability to mince words is also an asset, especially in the floors below him in the DOE's headquarters, a fortress on concrete stilts. The DOE national labs have been characterized as inefficient, but that is in part because past safety and security lapses have led to a culture that stresses caution over aggressive research. When, during his talk at the LBNL, Chu mentions his desire to return to the original spirit of the labs, GOCO — government owned, but contractor operated — he gets a hearty round of applause. Chu says that the risk-averse culture, at both headquarters and the labs, must be changed. "The best way to protect yourself from something bad happening is to not do much." Chu has already made headway. 
When he found out that billions of dollars in loans for energy projects that had been authorized in 2005 had not progressed, he insisted that they be pushed out in months, with the first one going to a solar-power company. It has helped to get involved personally, he says. Before closing on a $5.9-billion loan with Ford Motors, Chu says he was talking to the firm's chief executive every third day — an example that sent a clear message to his subordinates to act. "In certain areas, I'm not going away," he says. "The pressure is not going to let up." To encourage more adventurous research, he has pushed to develop the Advanced Research Projects Agency-Energy, known as ARPA-E, which draws its inspiration from DARPA, the celebrated research programme run by the Department of Defense that had an important role in creating the Internet. ARPA-E is designed to pursue high-risk, high-reward research on new forms of energy and conservation (see 'Blue sky, green tech'). The programme preceded Chu, but it's a pet of his, not least because he recommended it as a co-author on an influential study by the National Academies entitled Rising Above the Gathering Storm, which in 2005 warned of declining American competitiveness. The ARPA-E concept will work, says Chu, only if the smartest reviewers are enlisted to pick out the most innovative ideas — otherwise incremental research is rewarded. "I unfortunately can't review all of the proposals myself," he told a group of clean-energy businesspeople in October, only half-jokingly. So Chu wrote a letter to the presidents of top research universities asking them to nominate their best researchers as ARPA-E reviewers. Five hundred responded to the call for duty. Chu himself spent about two hours with the final set of proposals. Fleming, Chu's former LBNL deputy, says this sort of task suits his old boss. "I've never known anyone able to go away and come back 10 minutes later knowing so much about a new topic."
And on the day that he visited the LBNL, Chu announced the 37 winning proposals, which would use $151 million of an initial $400 million given to the programme. Chu's most ambitious idea has been to create eight focused and independent energy labs, modelled after the Manhattan Project, to develop technologies such as next-generation batteries and advanced nuclear power (see 'Chu's innovation factories'). But this is where he has run into the most trouble, and it exposes the limitations of the do-it-all-yourself approach. As Congress debated whether to fund Chu's new labs in fiscal year 2010, staffers found that they couldn't get the details on what, exactly, the DOE wanted. Would they be virtual labs, or permanent facilities? How many years would they be funded for? What mix of basic and applied science would be supported? "The hubs were just dropped on Congress," says one congressional staffer, who adds that Chu's office did not provide consistent or timely information. The communication problems with the Hill were on display at a hearing of a congressional appropriations committee in May. Senator Dianne Feinstein (Democrat, California), a friend of Chu's, had a complaint. She had wanted to talk privately with him about some solar projects, but she had not been able to make an appointment to see Chu via his staff. "I'm a little bit surprised if you asked to see me and my staff said no," Chu replied. "We just haven't gotten a response, that's sort of the way it's done," said Feinstein in an apparent attempt to educate the secretary on Washington customs. But Chu, who likes to deal with issues himself, did not seem to understand. "I'm still surprised," he said. "You actually have my private number." In the end, when Congress doled out money to the DOE, Chu lost some battles. Money that he had proposed cutting from hydrogen research was reinstated. A $115-million education programme he had championed received nothing.
Worst of all, for Chu, only three of his eight energy hubs were funded. Chu's critics say that more attention to Congress could have alleviated the problems, but nearly a year into his tenure, he has not appointed an assistant secretary to head up his legislative-affairs office. Chu says the vacant position was not the problem. The issue was that he hadn't followed through himself. "The failure was on my part," he says, "because I wasn't communicating what the real vision was." On a cold day in early December, Chu was preparing to travel to the United Nations' climate-change conference in Copenhagen. Before the trip, one of the last public events on his schedule was to appear with Secretary of Commerce Gary Locke to talk about speeding up the process for granting patents on green technologies. In July, the two secretaries went to Beijing together to meet with Chinese energy ministers. Locke, a prominent politician of Chinese descent, was greeted warmly. But Chu, with his Nobel-prize pedigree, was a rock star in a culture that reveres education. "He was like a Michael Jordan," says an administration official. "Everybody knew this guy." [Image: Busy schedule: Chu likes to take care of many details on his own. Credit: C. Ommanney/Getty] Chu has taken a particular interest in China not because of his ancestry, he says, but because it emits more carbon dioxide than any other nation and it is also spending billions of dollars on clean-energy research. During the trip, Chu and Locke announced that the United States and China would jointly pursue research in areas such as energy efficiency and capturing carbon dioxide from coal-plant exhaust. On his trip to Denmark, Chu reprised his role as energy ambassador. He announced plans to hold a conference next year with foreign energy ministers and pledged $85 million in US aid for renewable-energy projects in the developing world.
For Chu, the summit served as a prelude to the fight next year, when he will use his main weapons — knowledge and powers of persuasion — to try to convince members of Congress to vote for a climate bill that would for the first time cap US emissions of greenhouse gases. Chu says that when he ends his time as energy secretary, he will measure his success by two criteria: whether he aided adoption of a climate bill, and how much he changed the way that the DOE supports science. Those metrics would have seemed odd to a young scientist at Bell Labs in the 1980s who spent his days fretting over the precision of laser beams. Chu didn't plan on working his way to the upper echelons of the US government, where he is the first scientist since the cold war to play such an active part. "It just sort of happened," he says. "I followed the path first from going and doing the science, to getting very concerned about some issues that affect us all as a society, to finally saying, I can't sit idly by and occasionally give a talk on this. I really have to get proactive and put my money where my mouth is and do a career shift because it is that important." But looking back, it's possible that the call to public service may have been whispering to Chu even during his graduate-school days at Berkeley, where the memories of the war effort remained fresh in the physics department. When Chu briefly took up sculpting at Berkeley, he chose to make a bust of Oppenheimer, the physicist-turned-manager who oversaw all details of the Manhattan Project. Chu is now looking to another Berkeley star for inspiration. Lately, he has been reading the journals of Seaborg, who led the war-time team racing to extract plutonium for a bomb. Seaborg recounts how his group required fast-working Geiger counters that were not available at the time. So he pushed his crew to invent the needed detectors. 
For Chu, that sense of urgency in the face of a great threat stands out in Seaborg's work: "He kept saying: 'This isn't university research. We've got to move much faster'." Berkeley giants such as Ernest Lawrence (left), Glenn Seaborg and Robert Oppenheimer (right) have inspired Chu. Credit: LAWRENCE BERKELEY NATL LAB
{}
The combinatorial nature of my research naturally lends itself to collaborations with undergraduates, and my goal is to incorporate students in my research as much as possible. Here is a list of my current and recent undergraduate research projects that are roughly arranged chronologically. ### Sylver Coinage The Sylver Coinage Game is a game in which 2 players, A and B, alternately name positive integers that are not the sum of nonnegative multiples of previously named integers. The person who names 1 is the loser! Here is a sample game between A and B: 1. A opens with 5. Now neither player can name 5, 10, 15,$\ldots$ 2. B names 4. Now neither player can name 4, 5, 8, 9, 10, or any number greater than 11. 3. A names 11. Now the only remaining numbers are 1, 2, 3, 6, and 7. 4. B names 6. Now the only remaining numbers are 1, 2, 3, and 7. 5. A names 7. Now the only remaining numbers are 1, 2, and 3. 6. B names 2. Now the only remaining numbers are 1 and 3. 7. A names 3, leaving only 1. 8. B is forced to name 1 and loses. This seemingly innocent looking game is the subject of one of John Conway’s open problems with monetary rewards. In particular, the question that Conway asks is: If player A names 16, and both players play optimally thereafter, then who wins? During the 2015-2016 academic year, this question will be the focus of a research project with four undergraduate students: Joni Hazelman, Parker Montfort, Robert Voinescu, and Ryan Wood. Due to the expected difficulty of the problem (it is a Conway problem after all!), we will begin by focusing our attention on related, and hopefully simpler, questions. Our research will begin with a survey of what is currently known about the game. In particular, we would like to know what is known about who wins under optimal play given certain opening moves. In addition, we will study a simplified version of the Sylver Coinage game that goes as follows. 
In the simplified version of the game, a fixed positive integer $n\geq 3$ is agreed upon in advance. Then 2 players, A and B, alternately name positive integers from the set $\{1,2,\ldots,n\}$ that are not the sum of nonnegative multiples of previously named numbers among $\{1,2,\ldots,n\}$. The person who is forced to name 1 is the loser! Here is a sample game between A and B using the set $\{1,2,3,4,5,6,7,8,9,10\}$ (i.e., $n=10$): 1. A opens with 4. Now neither player can name 4, 8. 2. B names 5. Neither player can name 4, 5, 8, 9, 10. 3. A names 6. Neither player can name 4, 5, 6, 8, 9, 10. 4. B names 3. Neither player can name 3, 4, 5, 6, 7, 8, 9, 10. 5. A names 2. Neither player can name 2, 3, 4, 5, 6, 7, 8, 9, 10. 6. B is forced to name 1 and loses. To my knowledge, no one has explicitly studied this version of the game. One goal will be to determine who wins under optimal play for given values of $n$. Moreover, we will attempt to compute the Nim-values for the simplified game. The hope is that by studying the simplified game, we will gain insight into the original Sylver Coinage game. If you want to know more about other open problems with monetary rewards, check out this blog post. The students have given the following presentations: ### Commutation classes of the longest element in the symmetric group Recall that the symmetric group $S_n$ is generated by the adjacent 2-cycles $(1,2),(2,3),\ldots, (n-1,n)$. That is, every element in $S_n$ can be written as a word using the alphabet consisting of the adjacent 2-cycles. It is important to note that there are potentially many different ways to express a given permutation as a product of adjacent 2-cycles. If we express a permutation as a product of adjacent 2-cycles in the most efficient way possible, then we call the expression a reduced expression. 
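The definition just given is easy to experiment with. Here is a minimal sketch (Python, my own illustration, not part of the project) that enumerates all reduced expressions of a permutation by repeatedly stripping right descents; permutations are written in one-line notation, and a word $[i_1,\ldots,i_k]$ stands for the product of adjacent 2-cycles $(i_1,i_1+1)\cdots(i_k,i_k+1)$ applied left to right:

```python
def reduced_words(w):
    """All reduced expressions for the permutation w (one-line notation tuple).

    A reduced word for w can end in the adjacent 2-cycle (i, i+1) exactly
    when w has a descent there, i.e. w[i-1] > w[i]; recurse on w with those
    two values swapped.
    """
    if all(w[i] == i + 1 for i in range(len(w))):
        return [[]]                         # the identity has one (empty) word
    words = []
    for i in range(len(w) - 1):
        if w[i] > w[i + 1]:                 # descent: a word may end in s_{i+1}
            v = list(w)
            v[i], v[i + 1] = v[i + 1], v[i]
            for u in reduced_words(tuple(v)):
                words.append(u + [i + 1])
    return words

words = reduced_words((4, 3, 2, 1))
print(len(words))
```

For the reverse permutation $(4,3,2,1)$ this finds 16 reduced expressions, each of length 6.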
There may be many different reduced expressions for a given permutation, but all of them can be written in terms of the same number of adjacent 2-cycles occurring in the product (called the length). We say that two reduced expressions are commutation equivalent if we can obtain one from the other by only commuting disjoint adjacent 2-cycles (no need to apply any braid moves). A commutation class of a permutation is the subset of all its reduced expressions that can be obtained from one another by commuting disjoint cycles. For example, there are 11 reduced expressions for $(1,3,5,4)$ that split into 2 commutation classes consisting of 7 and 4 reduced expressions, respectively. The longest element in $S_{n}$ is the (unique) element having maximal length. The number of reduced expressions for the longest element is known. However, the answer to the following question, originally posed by Richard Stanley, is unknown: How many commutation classes does the longest element in the symmetric group have? In $S_{4}$, the longest element is $(1,4)(2,3)$. In this case, it turns out that there are 8 commutation classes. During the 2015-2016 academic year, Dustin Story will attack the problem given above. A closed-form solution is probably unlikely. At the very least, we will generate data aimed at providing insight into the problem. In addition, one goal will be to identify multiple reformulations of the problem. Moreover, we will tackle the problem for special classes of elements other than the longest element and possibly explore the analogous problem in other finite Coxeter groups. If you want to know more, check out the slides linked to in this blog post. ### Prime vertex labelings of graphs For the Fall 2014-Spring 2015 academic year, my colleague Jeff Rushall and I were awarded a Center for Undergraduate Research in Mathematics (CURM) mini-grant to fund a small group of undergraduate students to work on an original research project in the area of graph theory. 
For the project, we recruited a diverse group of 7 talented undergraduates: Nathan Diefenderfer, Michael Hastings (one of my past research students), Levi Heath, Hannah Prawzinsky, Briahna Preston, Emily White, and Alyssa Whittemore. Our research was inspired by two conjectures: 1. All unicyclic graphs have a prime vertex labeling (Seoud and Youssef, 1999). 2. All tree graphs have a prime vertex labeling (Entringer-Tout Conjecture, 1980). A unicyclic graph is a simple graph containing exactly one cycle. An $n$-vertex simple graph $G$ with vertex set $V(G)$ is said to have a prime vertex labeling if there exists a bijection $f: V(G) \to \{1, 2, 3, \ldots, n\}$ such that the labels assigned to adjacent vertices of $G$ are relatively prime. As discussed in Gallian’s “A Dynamic Survey of Graph Labeling”, many families of graphs have a prime vertex labeling; the “simpler” types of unicyclic graphs that are known to be “prime” include cycles, helms, crowns, and tadpoles. The goal of this project was to discover additional families of graphs that permit a prime vertex labeling, in hopes of bringing the aforementioned conjectures within reach. Over the course of the academic year, we uncovered previously unknown prime vertex labelings for several families of graphs including (but not limited to) “hairy” cycles, cycle pendant stars, cycle chains, prisms, and generalized books. The results of our work are summarized in the following papers: • N. Diefenderfer, M. Hastings, L.N. Heath, H. Prawzinsky, B. Preston, E. White, A. Whittemore. Prime Vertex Labelings of Several Families of Graphs. The Rose-Hulman Undergraduate Math Journal 16(1), 2015. [arXiv:1506.05826] [ePrint] • N. Diefenderfer, D.C. Ernst, M. Hastings, L.N. Heath, H. Prawzinsky, B. Preston, J. Rushall, E. White, A. Whittemore. Prime Vertex Labelings of Several Families of Graphs. Accepted to Involve.
[arXiv:1503.08386] In addition, the students have given the following presentations: For additional information on our CURM grant, see this blog post. ### Factorization of Temperley-Lieb diagrams The (type A) Temperley-Lieb diagram algebra, invented by Temperley and Lieb in 1971, is a finite dimensional associative algebra that arose in statistical mechanics. Penrose and Kauffman showed that this algebra can be realized as a particular diagram algebra, which is a type of associative algebra with a basis given by certain diagrams, where the multiplication rule is given by applying local combinatorial rules to the diagrams. In 1987, Jones showed that the Temperley-Lieb algebra occurs naturally as a quotient of the type A Hecke algebra whose underlying group is the symmetric group. Eventually, this realization of the Temperley-Lieb algebra as a Hecke algebra quotient was generalized by Graham to the case of an arbitrary Coxeter group. Subsequently, several diagrammatic representations of these generalized Temperley-Lieb algebras have been constructed for various Coxeter systems. It turns out that every diagram can be written as a product of a finite set of “simple diagrams.” These factorizations correspond precisely to factorizations in the underlying group. Given a diagrammatic representation and a factorization of a group element (which may not be unique), it is easy to construct the corresponding diagram. However, given a diagram, it is generally difficult to reconstruct the factorization of the corresponding group element. Unlike the situation with natural numbers, knowing the factors is not enough information to obtain the factorization for a given diagram. The major obstacle is that some factors of the group element may not commute with other factors. 
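Those local combinatorial rules can be sketched in code (Python; my own minimal encoding, not the authors' software): a type A Temperley-Lieb diagram on $n$ strands is a perfect matching on $n$ top and $n$ bottom nodes, the simple diagram $e_i$ cups nodes $i, i+1$ on both edges, and stacking two diagrams concatenates strands and collects the closed loops that fall out.

```python
from collections import defaultdict

def e(i, n):
    """Simple diagram e_i in TL_n: a cup joining top i and i+1, a cup joining
    bottom i and i+1, and vertical strands everywhere else."""
    pairs = {}
    def link(a, b):
        pairs[a] = b
        pairs[b] = a
    link(('t', i), ('t', i + 1))
    link(('b', i), ('b', i + 1))
    for j in range(1, n + 1):
        if j not in (i, i + 1):
            link(('t', j), ('b', j))
    return pairs

def compose(d1, d2, n):
    """Stack d1 on top of d2; return (number of closed loops, resulting diagram)."""
    adj = defaultdict(list)
    def add(a, b):
        adj[a].append(b)
        adj[b].append(a)
    for a, b in d1.items():      # d1's bottom row becomes the glued middle row
        if a < b:
            add(*(('T', k) if s == 't' else ('m', k) for s, k in (a, b)))
    for a, b in d2.items():      # d2's top row becomes the glued middle row
        if a < b:
            add(*(('m', k) if s == 't' else ('B', k) for s, k in (a, b)))
    def walk(start):             # consume edges until we fall off the other end
        cur = start
        while adj[cur]:
            nxt = adj[cur].pop()
            adj[nxt].remove(cur)
            cur = nxt
        return cur
    result = {}
    for j in range(1, n + 1):    # trace every boundary-to-boundary strand
        for start in (('T', j), ('B', j)):
            if adj[start]:
                end = walk(start)
                a = ('t' if start[0] == 'T' else 'b', start[1])
                b = ('t' if end[0] == 'T' else 'b', end[1])
                result[a] = b
                result[b] = a
    loops = 0
    for node in list(adj):       # whatever survives in the middle is closed loops
        if adj[node]:
            loops += 1
            walk(node)
    return loops, result
```

For example, `compose(e(1, 4), e(1, 4), 4)` returns one closed loop together with $e_1$ again (the relation $e_1^2 = \delta e_1$), while $e_1$ and $e_3$ compose to the same diagram in either order, reflecting that distant simple diagrams commute.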
During the Spring 2010 semester, Sarah Otis and Leal Rivanis obtained original results concerning Temperley-Lieb diagram algebras of types A and B, which have a basis indexed by the fully commutative elements in Coxeter groups of types A and B, respectively. In particular, we obtained a non-recursive method for enumerating the number of generators occurring in the fully commutative element that indexes a given diagram. One consequence of our results is a classification of the diagrams of the Temperley-Lieb algebras of types A and B indexed by cyclically fully commutative elements. The students presented their work at the following conference: During the 2013-2014 academic year, I mentored Michael Hastings and Sarah Salmon on a project aimed at obtaining factorization algorithms for Temperley-Lieb diagrams in various algebras. Michael and Sarah discovered a beautiful and efficient algorithm for factoring diagrams in Temperley–Lieb algebras of types A and B that yields a “normal form” for the factorization. Their work extends the results obtained by Sarah Otis and Leal Rivanis during the Spring 2010 semester. The students made the following presentations: ### Mathematics of Spinpossible Two of my students, Dane Jacobson and Michael Woodward, spent the Spring 2013 semester studying the mathematics behind Spinpossible, which is a game that is available for iOS and Android devices. Alternatively, you can just play the game in any modern web browser. The game is played on a 3 by 3 board of scrambled tiles numbered 1 to 9, each of which may be right-side-up or up-side-down. The objective of the game is to return the board to the standard configuration where tiles are arranged in numerical order and right-side-up. This is accomplished by a sequence of “spins”, each of which rotates a rectangular region of the board by 180 degrees. The goal is to minimize the number of spins used.
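A spin is easy to model (a Python sketch of my own, assuming the usual encoding of a board as 9 signed entries in row-major order, with a negative sign meaning the tile is up-side-down):

```python
def spin(board, r1, c1, r2, c2):
    """Rotate the rectangle with corners (r1, c1) and (r2, c2) by 180 degrees.

    board: tuple of 9 signed ints in row-major order; negative = up-side-down.
    Each tile moves to the mirrored cell within the rectangle and flips over.
    """
    b = list(board)
    cells = [(r, c) for r in range(r1, r2 + 1) for c in range(c1, c2 + 1)]
    values = [b[3 * r + c] for r, c in cells]
    for (r, c), v in zip(cells, reversed(values)):
        b[3 * r + c] = -v
    return tuple(b)

SOLVED = tuple(range(1, 10))
# scramble with two spins; undoing them in reverse order solves the board
scrambled = spin(spin(SOLVED, 0, 0, 1, 1), 1, 1, 2, 2)
```

Every spin is its own inverse (rotating the same rectangle twice restores it), which is why the spins generate a group.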
It turns out that the group generated by the set of allowable spins is identical to the symmetry group of the 9 dimensional hyper-cube (equivalently, a Coxeter group of type $B_9$). In a 2011 paper, Alex Sutherland and Andrew Sutherland (a father and son team) present a number of interesting results about Spinpossible and list a few open problems. You can find the paper here. As a side note, Alex is one of the developers of the game and his father, Andrew, is a mathematics professor at MIT. Using brute-force, the Sutherlands verified that every scrambled board can be solved in at most 9 moves. The goal of the project was to find a short proof of this fact, but this remains elusive. Dane continued to work on this unexpectedly difficult problem during the Fall 2013 semester and obtained a proof that every $2\times 2$ board can be solved in 5 moves or less. The students made the following presentations of their research: ### Exploration of T-avoiding elements in Coxeter groups of type F In mathematics, one uses groups to study symmetry. In particular, a reflection group is used to study the reflection and rotational symmetry of an object. A Coxeter group can be thought of as a generalized reflection group, where the group is generated by a set of elements of order two (i.e., reflections) and there are rules for how the generators interact with each other. Every element of a Coxeter group can be written as an expression in the generators, and if the number of generators in an expression is minimal, we say that the expression is reduced. An element $w$ of a Coxeter group is called T-avoiding if $w$ does not have a reduced expression beginning or ending with a pair of non-commuting generators. During the 2011-2012 academic year, I mentored Ryan Cross, Katie Hills-Kimball, and Christie Quaranta at Plymouth State University on an original research project aimed at exploring the T-avoiding elements in Coxeter groups of type F. 
In particular, the students successfully classified the T-avoiding elements in the infinite Coxeter group of type $F_{5}$, as well as the finite Coxeter group of type $F_{4}$. We conjectured that our classification holds more generally for arbitrary $F_{n}$. However, a year later, Selina Gilbertson showed that this is not the case (see below). The students made the following presentations: In the Spring of 2013, I worked with Selina Gilbertson at Northern Arizona University on extending the results obtained by Ryan, Katie, and Christie the previous year. The initial goal was to prove that there were no new T-avoiding elements (other than tacking on products of commuting generators) in type $F_n$ for $n\geq 6$. Selina discovered that this is horribly wrong. It appears that the classification of T-avoiding elements in higher ranks gets more and more complicated. We believe that we have the correct classification of the T-avoiding elements in type $F_6$ and Selina was able to put most of the pieces of a proof together in one semester. This is a hard problem! Selina gave the following presentations: ### T-avoiding permutations in Coxeter groups of types A and B A permutation of a set of objects is simply a rearrangement of those objects. If we have $n$ objects, then a permutation can be represented as a function from $\{1, 2,\ldots , n\}$ to $\{1, 2, \ldots , n\}$. We say that a permutation $w$ has property T if there exists $i$ such that either $w(i)$ is greater than $w(i+1)$ and $w(i+2)$, or $w(i+2)$ is less than $w(i)$ and $w(i+1)$. A permutation $w$ is T-avoiding if neither $w$ nor its inverse have property T. During the 2010-2011 academic year, I mentored Joseph Cormier, Zachariah Goldenberg, Jessica Kelly, and Christopher Malbon on an original research project aimed at exploring the T-avoiding permutations. As a result of our research, we classified the T-avoiding permutations in the symmetric group, which happens to be a Coxeter group of type A. 
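The definition of property T translates directly into code (a Python sketch of mine; permutations in one-line notation, so `w[i]` is $w(i+1)$):

```python
from itertools import permutations

def has_property_T(w):
    """Property T as defined above: some i with w(i) > both w(i+1), w(i+2),
    or w(i+2) < both w(i), w(i+1)."""
    return any(
        (w[i] > w[i + 1] and w[i] > w[i + 2])
        or (w[i + 2] < w[i] and w[i + 2] < w[i + 1])
        for i in range(len(w) - 2)
    )

def inverse(w):
    inv = [0] * len(w)
    for i, x in enumerate(w):
        inv[x - 1] = i + 1
    return tuple(inv)

def t_avoiding(w):
    """w is T-avoiding when neither w nor its inverse has property T."""
    return not has_property_T(w) and not has_property_T(inverse(w))

s3_avoiders = sorted(w for w in permutations((1, 2, 3)) if t_avoiding(w))
```

A quick brute force over $S_3$ finds exactly three T-avoiding permutations: the identity and the two adjacent 2-cycles, consistent with the products-of-commuting-generators picture.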
In addition, we generalized the notion of T-avoiding to arbitrary Coxeter groups and classified the T-avoiding elements in Coxeter groups of type B (i.e., the group of signed permutations). Our results are a reformulation of known results, but with a much simpler proof. We are currently in the process of writing up our results with the intention of submitting an article for publication. The students also made the following presentations:

# Dana C. Ernst

Mathematics & Teaching, Northern Arizona University, Flagstaff, AZ
{}
Taiwanese Journal of Mathematics

SHIFT PRESERVING OPERATORS ON LOCALLY COMPACT ABELIAN GROUPS

Abstract: We investigate shift preserving operators on locally compact abelian groups. We show that there is a one-to-one correspondence between shift preserving operators and range operators on $L^2(G)$, where $G$ is a locally compact abelian group. We conclude that a shift preserving operator has several properties in common with its associated range operator; in particular, compactness of one implies compactness of the other. Moreover, we obtain a necessary condition for a shift preserving operator to be Hilbert–Schmidt or of finite trace in terms of its range function.

Article information: Taiwanese J. Math., Volume 15, Number 5 (2011), 1939-1955. First available in Project Euclid: 18 July 2017. https://projecteuclid.org/euclid.twjm/1500406415 Digital Object Identifier: doi:10.11650/twjm/1500406415. Mathematical Reviews number (MathSciNet): MR2880385. Zentralblatt MATH identifier: 1275.47018.

Citation: Kamyabi Gol, R. A.; Raisi Tousi, R. Shift preserving operators on locally compact abelian groups. Taiwanese J. Math. 15 (2011), no. 5, 1939-1955. doi:10.11650/twjm/1500406415.
{}
### Updated 6e5p SPICE model

Having fixed the bias offset problem in my tester (actually in my oscilloscope), I took the curves again this evening to get a better model….

### Measuring valve transconductance

Today I breadboarded the CCS I will use for the transconductance tester jig, which is an addition to my curve tracer. The bias circuit is a classic from fixed-bias amplifiers; I had the 80V available from the curve tracer circuit. The meter is an external panel AC voltmeter, a true-RMS meter that will accurately measure the 1kHz signal. The MOSFET CCS is a simple cascode which helps set the valve current and operating point. I source it from my bench variable HT power supply, which also helps in setting the operating point. We need a small AC signal on the grid to increase the accuracy of the Gm test, as the transconductance is given by: $G_{m}=\left.\frac{\Delta I_{a}}{\Delta V_{g}}\right|_{\Delta V_{a}=0}$ So we can't feed the grid with 1Vrms; we will use 100mVrms instead. If a valve has a transconductance of 1mA/V, the variation in anode current will be 100uA (rms). This is a challenge to measure accurately with an anode resistor of, say, 10 ohms, since the AC voltage developed across the resistor will be only 1mV (rms). Therefore we will use an anode resistor of 100 ohms, which will help capture transconductance values as small as this. Edit: Found that the CCS bypass was omitted in my first circuit. Also the sensing resistor was reduced to 10 ohms to accommodate the AC true-RMS meter I have. See updates on this post here.

### 6e5p triode-strapped

The 6e5p is a high-frequency indirectly-heated tetrode from our friends in Russia. The specifications can be found here. The anode can easily dissipate 8W, the screen can take up to 2W, and the valve has a high transconductance of around 30 mA/V. Wired as a triode this chap becomes very attractive: the anode resistance drops to around 900Ω – 1kΩ and the effective mu is about 30-35.
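Those triode-strapped figures hang together, since for a triode $\mu = g_m \cdot r_a$. A quick sanity check (Python; the 10kΩ anode load in the gain line is a hypothetical value of mine, not from the post):

```python
# figures quoted above for the triode-strapped 6e5p
gm = 30e-3                   # transconductance, A/V
ra = 1000.0                  # anode resistance, ohm (quoted 900 ohm - 1 kohm)
mu = gm * ra                 # amplification factor: mu = gm * ra for a triode
RL = 10e3                    # hypothetical resistive anode load, ohm (my assumption)
gain = mu * RL / (RL + ra)   # ideal common-cathode stage gain with that load
print(f"mu = {mu:.0f}, stage gain = {gain:.1f}")
```

With $r_a$ at the top of the quoted range the product lands squarely in the quoted mu of 30-35.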
This turns this valve into a low-anode-resistance, medium-mu, high-transconductance fellow which is highly regarded as a driver in SE amplifiers. Search out there in the jungle and you will find many good examples of how this valve is used effectively. When testing this valve on my curve tracer I found that it proves to be a challenging device. You need to leave this guy running on its own for a while (Lars recommended 30 min to 1 hour); I found that indeed after 20-30 min it stabilises. Dmitry came up with a very good model. When I created a model based on my own curves I found a mismatch between my notes and the simulation. Checking my notes, I think I set up the tester to start plotting curves at 0V with a step of -0.5V; however, looking at the model produced by Dmitry's tool, I got this: it looks like the curves start at -2V. I need to re-check and probably trace this valve again. Either way it matches very well and is not far off from Dmitry's model above. Here is my model. I'm planning to use this valve in my OTL (cap-less) headphone amp. Stay tuned… Just a dream. I will get to build that nice OTL headphone amp. Can't complain though, I enjoy so much listening to my 45 SE with headphones….

### 6e5p OTL headphone amplifier with no output cap

An idea based on SY's heretical preamplifier… Need to simulate!

### Tracer nearly finished

Tracer sync issues are coming to an end. Replaced the clock transformer and got it working fine. Sync needs readjusting after 15-30 min. Happy man seeing those 46 perfect linear curves 🙂

### A beautiful thoriated-tungsten glow

Here are a couple of recent pictures of the 4-65a EIMAC under test. What an attractive glow!

# Tracing curves of a power transmitting DHT

Finding triode curves for the 4-65a valve has been a challenging task. There are some available from a Spice simulation, but I couldn't get hold of the model, so when I finished the curve tracer it was the right time to take on this challenge.
My curve tracer is not capable of handling this valve, as I don't have the appropriate socket and the anode and grid drivers are limited to:

• Anode voltage sweep range: 0-330V
• Anode maximum current: 100mA
• Grid voltage sweep range: -80V – +15V

With these constraints in mind, I decided to build a test jig for the 4-65a. The jig had only a grid stopper resistor (10kΩ), a screen stopper (100Ω) and ferrite beads in the anode and grid as well. When I traced the first set of curves I was very disappointed to see a double tracing for each anode curve, which made me suspect that the valve was oscillating somehow due to long cables, etc. Its transconductance is below 3 mA/V, so it shouldn't be that problematic. I remember tracing the 6e5p, 6C45 and E180F being a real challenge for the tester due to their high transconductance. Here is a sample of the double tracing from the first test: I tried many things with cables, stoppers and ferrite beads with no success. Suddenly the penny dropped and I looked at the old raw DC supply I was using for the filament. I had only one capable of providing 6.3V @ 3.5A, and its regulation was appalling. The ripple was clearly a potential candidate for this image distortion: if the ripple was high enough, it would modulate the cathode and therefore Vgk. The ripple frequency is the same as the refresh frequency of the curves (i.e. 100Hz). My test jig was modified to include a hum-cancelling pot, as shown in the following diagram. I added a 100Ω pot and a 22mF electrolytic. Tracing curves again was then a success: I had to trim the pot to cancel the hum and, lo and behold, the curves were very neat. The addition of the grid stopper limited the grid current close to 0V or above, so the 0V curve gets packed closer to the following one (i.e. -5V). This can be clearly seen when the Spice model is generated. My tracer has not been designed to trace positive grid curves, so the current capability of the grid driver at positive grid voltages is limited.
I need to modify the circuit, but it will have to wait as I have already spent too much time on this tracer so far! After playing a while with Dmitry's tool, I came up with a very reasonable model for the 4-65a. I'm sure it can be optimised, but for a couple of hours' work, I'm very happy with the results…

### You better driver yourself

A while ago I asked Rod Coleman about the driving requirements of my 01a preamp whilst investigating the addition of a source follower stage to the preamp. Let's have a look at a DHT stage loaded with a gyrator (it could equally be a choke or whatever you like) driving a power amplifier. Have you asked yourself whether your preamp is capable of driving your amp? How much burden does the cable parasitic capacitance add to the mix? As an example we will use my current setup: a 45 SE amplifier with a 6J5 driver stage. We can approximate the amplifier's input capacitance by: $Ci=Cp+(\mu+1)\cdot Cag$ where Ci is formed by the Miller multiplication of the valve's grid-to-anode capacitance (Cag) plus the additional parasitic capacitance of socket, wires and so forth (Cp). In this practical example, where the input valve is a 6J5GT: $Cp=50pF,\ \mu=20,\ Cag=4pF$ $Ci=50pF+(20+1)\cdot 4pF \rightarrow Ci \approx 134pF$ If we now add the cable capacitance, which could easily be 100pF per metre with at least a 2 metre run from my preamp to the amp, then: $Ct= Ci+Ccable \rightarrow Ct=134pF+(100pF \cdot 2) \approx 330pF$ So with an input resistance of 100kΩ and a capacitance of about 330pF, let's have a look at the current requirements to drive this load. The capacitive reactance @ 50kHz is: $Xc=\tfrac{1}{2\cdot \pi \cdot f\cdot C}=\tfrac{1}{2\cdot \pi \cdot 50kHz\cdot 330pF} \approx 9650 \Omega$ The preamp's output peak voltage is around 10V, so the peak current demanded by the load would be: $Ip=\frac {Vp}{Xc}=\frac{10V}{9650\Omega }\approx 1mA$ So if we want the preamp valve to source 1mA to the load, we need x10 current driving capability to be on the safe side.
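The arithmetic above can be scripted to play with other cable lengths and frequencies (Python, numbers as in the post; the reactance comes out a touch below the rounded 9650Ω figure because the full 334pF is kept):

```python
import math

# values from the worked example above (6J5GT driver, 2 m of cable)
Cp, mu, Cag = 50e-12, 20.0, 4e-12
Ci = Cp + (mu + 1) * Cag          # Miller-multiplied input capacitance
Ct = Ci + 2 * 100e-12             # plus 2 m of cable at ~100 pF per metre
f = 50e3                          # highest frequency of interest, Hz
Xc = 1 / (2 * math.pi * f * Ct)   # capacitive reactance of the load
Vp = 10.0                         # preamp output peak voltage
Ip = Vp / Xc                      # peak current the preamp must source
print(f"Ci={Ci*1e12:.0f}pF Ct={Ct*1e12:.0f}pF Xc={Xc:.0f}ohm Ip={Ip*1e3:.2f}mA")
```

Doubling the cable run or the top frequency doubles the current demand, which is exactly why the follower stage starts to look attractive.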
Clearly with a 26 (or 01a) we are a little on the low side, as the bias current won't be more than 4-6mA. So there are clear arguments here to support the addition of a source/cathode-follower stage to the preamp, in addition to the improvement in bass response due to a lower output impedance. Something we will look at some other time….

### Improved 46 triode-strapped DHT composite model

My initial attempt to get a reasonable SPICE model for a 46 triode-connected DHT proved to be OK considering it was my first try. I got better accuracy with my second attempt using the CX-301a. With time, I should learn the skills of Dmitry Nizh and master the great tool he has developed. For the ones who haven't seen his website and the great material Dmitry has produced around DHTs, SPICE and other good stuff, I recommend reading his article about composite models for DHTs here. Dmitry kindly produced a very accurate model for the 46 (and also showed clearly that I'm still a rookie at these things):

And here is the equivalent Spice model:

Using a simple circuit in LTspice we can test the model and trace the anode characteristic curves:

And the curves can be easily generated. Note that the grid voltage starts at 0V and steps in -10V increments.
# Number Theory Problem

How many numbers between 1 and 10000 inclusive can be written as a difference of perfect squares? For example, $$4$$ can be written as $$2^2-0^2$$.
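A brute-force enumeration (a quick sketch, not a proof) confirms the count by generating every value of $$a^2-b^2$$ in range:

```python
LIMIT = 10000

# Since n = a^2 - b^2 = (a-b)(a+b) with a > b >= 0, any representable
# n <= LIMIT has a <= (LIMIT+1)//2 + 1, so the search space is finite.
expressible = set()
for a in range(1, LIMIT // 2 + 2):
    for b in range(a - 1, -1, -1):   # n = a^2 - b^2 grows as b shrinks
        n = a * a - b * b
        if n > LIMIT:
            break
        expressible.add(n)

print(len(expressible))  # count of representable numbers in [1, 10000]
```

The count agrees with the classical characterization: n is a difference of two squares exactly when n is odd or divisible by 4 (odd n = 2k+1 is (k+1)² − k², and 4k is (k+1)² − (k−1)²).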
# Shockley–Queisser limit

Figure: The Shockley–Queisser limit for the efficiency of a solar cell, without concentration of solar radiation. The curve is wiggly because of IR absorption bands in the atmosphere. In the original paper,[1] the solar spectrum was approximated by a smooth curve, the 6000K blackbody spectrum. As a result, the efficiency graph was smooth and the values were slightly different.

In physics, the Shockley–Queisser limit, also known as the detailed balance limit, Shockley Queisser Efficiency Limit or SQ Limit, refers to the maximum theoretical efficiency of a solar cell using a single p-n junction to collect power from the cell. It was first calculated by William Shockley and Hans-Joachim Queisser at Shockley Semiconductor in 1961, giving a maximum efficiency of 30% at 1.1 eV.[1] However, this calculation used a simplified model of the solar spectrum, and more recent calculations give a maximum efficiency of 33.7% at 1.34 eV,[2] but the value is still referred to as the Shockley–Queisser limit in their honor. The limit is one of the most fundamental to solar energy production with photovoltaic cells, and is considered to be one of the most important contributions in the field.[3]

The limit is that the maximum solar conversion efficiency is around 33.7% for a single p-n junction photovoltaic cell, assuming typical sunlight conditions (unconcentrated, AM 1.5 solar spectrum), and subject to other caveats and assumptions discussed below. This maximum occurs at a band gap of 1.34 eV.[2] That is, of all the power contained in sunlight (about 1000 W/m²) falling on an ideal solar cell, only 33.7% of that could ever be turned into electricity (337 W/m²). The most popular solar cell material, silicon, has a less favorable band gap of 1.1 eV, resulting in a maximum efficiency of about 32%.
Modern commercial mono-crystalline solar cells produce about 24% conversion efficiency, with losses due largely to practical concerns like reflection off the front of the cell and light blockage from the thin wires on the cell surface. The Shockley–Queisser limit only applies to conventional solar cells with a single p-n junction; tandem solar cells with multiple layers can (and do) outperform this limit, and so can solar thermal and certain other solar energy systems. In the extreme limit, for a tandem solar cell with an infinite number of layers, the corresponding limit is 86.8% using concentrated sunlight.[4] (See Solar cell efficiency.)

## Background

Figure: The Shockley–Queisser limit, zoomed in near the region of peak efficiency.

In a traditional solid-state semiconductor such as silicon, a solar cell is made from two doped crystals, one an n-type semiconductor, which has extra free electrons, and the other a p-type semiconductor, which is lacking free electrons, referred to as "holes." When initially placed in contact with each other, some of the electrons in the n-type portion will flow into the p-type to "fill in" the missing electrons. Eventually enough will flow across the boundary to equalize the Fermi levels of the two materials. The result is a region at the interface, the p-n junction, where charge carriers are depleted on each side of the interface. In silicon, this transfer of electrons produces a potential barrier of about 0.6 V to 0.7 V.[5]

When the material is placed in the sun, photons from the sunlight can be absorbed in the p-type side of the semiconductor, causing electrons in the valence band to be promoted in energy to the conduction band. This process is known as photoexcitation. As the name implies, electrons in the conduction band are free to move about the semiconductor.
When a load is placed across the cell as a whole, these electrons will flow from the p-type side into the n-type side, lose energy while moving through the external circuit, and then go back into the p-type material where they can re-combine with the valence-band holes they left behind. In this way, sunlight creates an electric current.[5]

## The limit

The Shockley–Queisser limit is calculated by examining the amount of electrical energy that is extracted per photon of incoming sunlight. There are several considerations:

Any material that is not at absolute zero (0 kelvin) emits electromagnetic radiation through the black-body radiation effect. In a cell at room temperature, this represents approximately 7% of all the energy falling on the cell. Any energy lost in a cell is turned into heat, so any inefficiency in the cell increases the cell temperature when it is placed in sunlight. As the temperature of the cell increases, the outgoing radiation and heat loss through conduction and convection also increase, until an equilibrium is reached. In practice, this equilibrium is normally reached at temperatures as high as 360 kelvin, and consequently, cells normally operate at lower efficiencies than their room-temperature rating. Module datasheets normally list this temperature dependency as TNOCT (NOCT - Nominal Operating Cell Temperature).

For a "blackbody" at normal temperatures, a very small part of this radiation (the number per unit time and per unit area given by Qc, "c" for "cell") is photons having energy greater than the band gap (wavelength less than about 1.1 microns for silicon), and part of these photons (Shockley and Queisser use the factor tc) are generated by recombination of electrons and holes, which decreases the amount of current that could be generated otherwise.
This is a very small effect, but Shockley and Queisser assume that the total rate of recombination (see below) when the voltage across the cell is zero (short circuit or no light) is proportional to the blackbody radiation Qc. This rate of recombination plays a negative role in the efficiency. Shockley and Queisser calculate Qc to be 1700 photons per second per square centimetre for silicon at 300K.

### Recombination

Figure: Black curve: The limit for open-circuit voltage in the Shockley–Queisser model (i.e., voltage at zero current). The red dotted line shows that this voltage is always below the bandgap. This voltage is limited by recombination.

Absorption of a photon creates an electron-hole pair, which could potentially contribute to the current. However, the reverse process must also be possible, according to the principle of detailed balance: an electron and a hole can meet and recombine, emitting a photon. This process reduces the efficiency of the cell. Other recombination processes may also exist (see "Other considerations" below), but this one is absolutely required.

In the Shockley–Queisser model, the recombination rate depends on the voltage across the cell but is the same whether or not there is light falling on the cell. A factor fc gives the ratio of recombination that produces radiation to total recombination, so the rate of recombination per unit area when V = 0 is 2tcQc/fc and thus depends on Qc, the flux of blackbody photons above the band-gap energy. The factor of 2 was included on the assumption that radiation emitted by the cell goes in both directions. (This is actually debatable if a reflective surface is used on the shady side.)
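The figure of ~1700 photons per second per square centimetre can be roughly reproduced by numerically integrating the blackbody photon flux above the gap. This is a sketch, not a reproduction of the paper's calculation: the result depends sensitively on the exact gap energy and constants used, and this simple integration lands in the same ballpark rather than exactly on Shockley and Queisser's number.

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e10     # speed of light, cm/s
q = 1.602176634e-19   # electron charge, C

T_cell = 300.0        # cell temperature, K
E_gap = 1.09 * q      # band gap in joules (1.09 eV, the value SQ used)
nu_g = E_gap / h      # band-gap frequency

def flux_density(nu):
    # blackbody photon flux per unit frequency, photons / (s * cm^2 * Hz)
    return (2 * math.pi * nu**2 / c**2) / math.expm1(h * nu / (k * T_cell))

# Trapezoidal integration from nu_g upward; the integrand dies off
# exponentially, so truncating at 3*nu_g loses nothing measurable.
N = 50000
hi = 3 * nu_g
step = (hi - nu_g) / N
Qc = sum(flux_density(nu_g + i * step) for i in range(N + 1)) * step
Qc -= 0.5 * (flux_density(nu_g) + flux_density(hi)) * step

print(f"Qc = {Qc:.0f} photons / (s * cm^2)")
```

This lands within roughly 10% of the quoted 1700; the residual difference comes down to the precise gap energy and rounding in the original calculation.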
When the voltage is non-zero, the concentrations of charge carriers (electrons and holes) change (see Shockley diode equation), and according to the authors the rate of recombination changes by a factor of exp(V/Vc), where Vc is the voltage equivalent of the temperature of the cell, or "thermal voltage", namely ${\displaystyle V_{c}=kT_{c}/q}$ (q being the charge of an electron). Thus the rate of recombination, in this model, is proportional to exp(V/Vc) times the blackbody radiation above the band-gap energy: ${\displaystyle \int _{\nu _{g}}^{\infty }{\frac {1}{\exp \left({\frac {h\nu }{kT_{c}}}\right)-1}}e^{\frac {qV}{kT_{c}}}{\frac {2\pi \nu ^{2}}{c^{2}}}d\nu }$ (This is actually an approximation to the more accurate expression[6][7] ${\displaystyle \int _{\nu _{g}}^{\infty }{\frac {1}{\exp \left({\frac {h\nu -qV}{kT_{c}}}\right)-1}}{\frac {2\pi \nu ^{2}}{c^{2}}}d\nu ,}$ which is correct so long as the cell is thick enough to act as a black body. The difference in maximum theoretical efficiency however is negligibly small, except for tiny bandgaps below 200meV.[8]) The rate of generation of electron-hole pairs not due to incoming sunlight stays the same, so recombination minus spontaneous generation is ${\displaystyle I_{0}[\exp(V/V_{c})-1].}$ where ${\displaystyle I_{0}=2qt_{c}Q_{c}/f_{c}.}$ (Shockley and Queisser take fc to be a constant, although they admit that it may itself depend on voltage.) The rate of generation of electron-hole pairs due to sunlight is ${\displaystyle I_{sh}=q(t_{s}f_{\omega }Q_{s}-2t_{c}Q_{c})}$ where ${\displaystyle f_{\omega }Q_{s}}$ is the number of photons above the band-gap energy falling on the cell per unit area, and ts is the fraction of these that generate an electron-hole pair. This rate of generation is called Ish because it is the "short circuit" current (per unit area). 
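Putting the recombination and generation terms together gives the familiar ideal-diode current balance, I = Ish − I0[exp(V/Vc) − 1], whose zero crossing is the open-circuit voltage. A numerical sketch follows; the values of Ish and I0 here are hypothetical, chosen only to show the shape of the result:

```python
import math

k_over_q = 8.617333e-5   # Boltzmann constant over electron charge, V/K
Vc = k_over_q * 300      # thermal voltage at 300 K, ~25.9 mV

I_sh = 40e-3   # A/cm^2, illustrative short-circuit current density
I_0 = 4e-12    # A/cm^2, illustrative reverse-saturation current

def current(V):
    # net current: photogeneration minus (recombination - spontaneous generation)
    return I_sh - I_0 * math.expm1(V / Vc)

# open-circuit voltage: the V at which current(V) = 0
V_oc = Vc * math.log(I_sh / I_0 + 1)
print(f"Vc = {Vc*1e3:.1f} mV, Voc = {V_oc:.3f} V")
```

With these illustrative numbers Voc comes out just under 0.6 V, well below the 1.1 eV gap, which is exactly the point the model makes: recombination caps the open-circuit voltage short of the band-gap voltage.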
When there is a load, then V will not be zero and we have a current equal to the rate of generation of pairs due to the sunlight minus the difference between recombination and spontaneous generation:

${\displaystyle I=I_{sh}-I_{0}[\exp(V/V_{c})-1].}$

The open-circuit voltage is therefore given (assuming fc does not depend on voltage) by

${\displaystyle V_{oc}=V_{c}\ln \left({\frac {I_{sh}}{I_{0}}}+1\right).}$

The product of the short-circuit current Ish and the open-circuit voltage Voc Shockley and Queisser call the "nominal power". It is not actually possible to get this amount of power out of the cell, but we can get close (see "Impedance matching" below). The ratio of the open-circuit voltage to the band-gap voltage Shockley and Queisser call v. Under open-circuit conditions, we have

${\displaystyle \ln I_{sh}=\ln I_{0}+\ln[\exp(V/V_{c})-1].}$

Asymptotically, this gives

${\displaystyle -V_{g}/V_{s}\sim -V_{g}/V_{c}+V/V_{c}}$

or

${\displaystyle V/V_{g}\sim 1-V_{c}/V_{s}}$

where Vs is the voltage equivalent of the temperature of the sun. As the ratio Vc/Vs goes to zero, the open-circuit voltage goes to the band-gap voltage, and as it goes to one, the open-circuit voltage goes to zero. This is why the efficiency falls if the cell heats up. In fact this expression represents the thermodynamic upper limit of the amount of work that can be obtained from a heat source at the temperature of the sun and a heat sink at the temperature of the cell.

### Spectrum losses

Since the act of moving an electron from the valence band to the conduction band requires energy, only photons with more than that amount of energy will produce an electron-hole pair. In silicon the conduction band is about 1.1 eV away from the valence band; this corresponds to infrared light with a wavelength of about 1.1 microns.
In other words, photons of red, yellow and blue light and some near-infrared will contribute to power production, whereas radio waves, microwaves, and most infrared photons will not.[9] This places an immediate limit on the amount of energy that can be extracted from the sun. Of the 1,000 W/m² in AM1.5 sunlight, about 19% of that has less than 1.1 eV of energy, and will not produce power in a silicon cell. Another important contributor to losses is that any energy above and beyond the bandgap energy is lost. While blue light has roughly twice the energy of red light, that energy is not captured by devices with a single p-n junction. The electron is ejected with higher energy when struck by a blue photon, but it loses this extra energy as it travels toward the p-n junction (the energy is converted into heat).[9] This accounts for about 33% of the incident sunlight, meaning that, for silicon, from spectrum losses alone there is a theoretical conversion efficiency limit of about 48%, ignoring all other factors. There is a trade-off in the selection of a bandgap. If the band gap is large, not as many photons create pairs, whereas if the band gap is small, the electron-hole pairs do not contain as much energy. Shockley and Queisser call the efficiency factor associated with spectrum losses u, for "ultimate efficiency function". Shockley and Queisser calculated that the best band gap for sunlight happens to be 1.1 eV, the value for silicon, and gives a u of 44%. They used blackbody radiation of 6000K for sunlight, and found that the optimum band gap would then have an energy of 2.2 kTs. (At that value, 22% of the blackbody radiation energy would be below the band gap.) Using a more accurate spectrum may give a slightly different optimum. 
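The ultimate efficiency function can be reproduced numerically for a 6000K blackbody sun. Writing x = hν/kTs, u(xg) is xg times the dimensionless photon flux above the gap, divided by the total dimensionless radiated energy (a sketch; the series expansion of 1/(eˣ − 1) makes both integrals elementary):

```python
import math

def photons_above(xg, terms=60):
    # integral of x^2/(e^x - 1) from xg to infinity,
    # via the expansion 1/(e^x - 1) = sum over n >= 1 of e^(-n*x)
    total = 0.0
    for n in range(1, terms + 1):
        total += math.exp(-n * xg) * (xg**2 / n + 2 * xg / n**2 + 2 / n**3)
    return total

def ultimate_efficiency(xg):
    # one gap-energy delivered per above-gap photon, over total blackbody energy
    total_energy = math.pi**4 / 15   # integral of x^3/(e^x - 1) from 0 to infinity
    return xg * photons_above(xg) / total_energy

# scan for the optimum band gap in units of k*Ts
best_xg = max((x / 100 for x in range(50, 500)), key=ultimate_efficiency)
print(f"u(2.2) = {ultimate_efficiency(2.2):.3f}, optimum near xg = {best_xg:.2f}")
```

The scan peaks near xg ≈ 2.2 with u ≈ 44%, matching the figures quoted above.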
A blackbody at 6000 K puts out 7348 W per square centimetre, so a value for u of 44% and a value of 5.73×10¹⁸ photons per joule (corresponding to a band gap of 1.09 V, the value used by Shockley and Queisser) gives Qs equal to 1.85×10²² photons per second per square centimetre.

### Impedance matching

If the resistance of the load is too high, the current will be very low, while if the load resistance is too low, the voltage drop across it will be very low. There is an optimal load resistance that will draw the most power from the solar cell at a given illumination level. Shockley and Queisser call the ratio of power extracted to IshVoc the impedance matching factor, m. (It is also called the fill factor.) The optimum depends on the shape of the I versus V curve. For very low illumination, the curve is more or less a diagonal line, and m will be 1/4. But for high illumination, m approaches 1. Shockley and Queisser give a graph showing m as a function of the ratio zoc of the open-circuit voltage to the thermal voltage Vc. According to the authors, this ratio is well approximated by ln(fQs/Qc), where f is the combination of factors fsfωts/(2tc), in which fω is the solid angle of the sun divided by π. The maximum value of f without light concentration (with reflectors for example) is just fω/2, or 1.09×10⁻⁵, according to the authors. Using the above-mentioned values of Qs and Qc, this gives a ratio of open-circuit voltage to thermal voltage of 32.4 (Voc equal to 77% of the band gap). The authors derive the equation

${\displaystyle z_{oc}=z_{m}+\ln(1+z_{m})}$

which can be solved to find zm, the ratio of optimal voltage to thermal voltage. For a zoc of 32.4, we find zm equal to 29.0. One can then use the formula

${\displaystyle m={\frac {z_{m}^{2}/z_{oc}}{1+z_{m}-\exp(-z_{m})}}}$

to find the impedance matching factor. For a zoc of 32.4, this comes to 86.5%.
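The closing numbers can be checked directly: solve zoc = zm + ln(1 + zm) for zm by fixed-point iteration and plug into the formula for m. Multiplying by the u and v factors quoted above (44% and 77%) then gives the overall figure (a sketch of the arithmetic only, not of the full detailed-balance calculation):

```python
import math

z_oc = 32.4   # ratio of open-circuit voltage to thermal voltage

# solve z_oc = z_m + ln(1 + z_m) by fixed-point iteration;
# the map z -> z_oc - ln(1 + z) is strongly contracting here
z_m = z_oc
for _ in range(100):
    z_m = z_oc - math.log(1 + z_m)

# impedance matching factor
m = (z_m**2 / z_oc) / (1 + z_m - math.exp(-z_m))

u, v = 0.44, 0.77   # ultimate efficiency and voltage factors quoted above
eta = u * v * m     # overall limit, taking ts = 1

print(f"z_m = {z_m:.1f}, m = {m:.1%}, overall efficiency = {eta:.1%}")
```

This recovers zm ≈ 29.0, m ≈ 86.5%, and an overall efficiency of about 29%.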
### All together

Considering the spectrum losses alone, a solar cell has a peak theoretical efficiency of 48% (or 44% according to Shockley and Queisser – their "ultimate efficiency factor"). Thus the spectrum losses represent the vast majority of lost power. Including the effects of recombination and the I versus V curve, the efficiency is described by the following equation:

${\displaystyle \eta =t_{s}u(x_{g})v(f,x_{c},x_{g})m(vx_{g}/x_{c})}$

with

${\displaystyle x_{g}=V_{g}/V_{s}}$

${\displaystyle x_{c}=V_{c}/V_{s}}$

where u, v, and m are respectively the ultimate efficiency factor, the ratio of open-circuit voltage to band-gap voltage, and the impedance matching factor (all discussed above). Letting ts be 1, and using the values mentioned above of 44%, 77%, and 86.5% for the three factors gives about 29% overall efficiency. Shockley and Queisser say 30% in their abstract, but do not give a detailed calculation. A more recent reference gives, for a single-junction cell, a theoretical peak performance of about 33.7%, or about 337 W/m² in AM1.5.[1][9]

When the amount of sunlight is increased using reflectors or lenses, the factor fω (and therefore f) will be higher. This raises both v and m. Shockley and Queisser include a graph showing the overall efficiency as a function of band gap for various values of f. For a value of 1, the graph shows a maximum efficiency of just over 40%, getting close to the ultimate efficiency (by their calculation) of 44%.

### Other considerations

Shockley and Queisser's work considered the most basic physics only; there are a number of other factors that further reduce the theoretical power.

#### Limited mobility

When an electron is ejected through photoexcitation, the atom it was formerly bound to is left with a net positive charge. Under normal conditions, the atom will pull off an electron from a surrounding atom in order to neutralize itself.
That atom will then attempt to remove an electron from another atom, and so forth, producing an ionization chain reaction that moves through the cell. Since these can be viewed as the motion of a positive charge, it is useful to refer to them as "holes", a sort of virtual positive electron. Like electrons, holes move around the material, and will be attracted towards a source of electrons. Normally these are provided through an electrode on the back surface of the cell. Meanwhile, the conduction-band electrons are moving forward towards the electrodes on the front surface. For a variety of reasons, holes in silicon move much more slowly than electrons. This means that during the finite time while the electron is moving forward towards the p-n junction, it may meet a slowly moving hole left behind by a previous photoexcitation. When this occurs, the electron recombines at that atom, and the energy is lost (normally through the emission of a photon of that energy, but there are a variety of possible processes). Recombination places an upper limit on the rate of production; past a certain rate there are so many holes in motion that new electrons will never make it to the p-n junction. In silicon this reduces the theoretical performance under normal operating conditions by another 10% over and above the thermal losses noted above. Materials with higher electron (or hole) mobility can improve on silicon's performance; gallium arsenide (GaAs) cells gain about 5% in real-world examples due to this effect alone. In brighter light, when it is concentrated by mirrors or lenses for example, this effect is magnified. Normal silicon cells quickly saturate, while GaAs continue to improve at concentrations as high as 1500 times. Recombination between electrons and holes is detrimental in a solar cell, so designers try to minimize it. 
However, radiative recombination—when an electron and hole recombine to create a photon that exits the cell into the air—is inevitable, because it is the time-reversed process of light absorption. Therefore, the Shockley–Queisser calculation takes radiative recombination into account; but it assumes (optimistically) that there is no other source of recombination. More realistic limits, which are lower than the Shockley–Queisser limit, can be calculated by taking into account other causes of recombination. These include recombination at defects and grain boundaries. In crystalline silicon, even if there are no crystalline defects, there is still Auger recombination, which occurs much more often than radiative recombination. By taking this into account, the theoretical efficiency of crystalline silicon solar cells was calculated to be 29.4%.[10]

## Exceeding the limit

Figure: Breakdown of the causes for the Shockley–Queisser limit. The black height is energy that can be extracted as useful electrical power (the Shockley–Queisser efficiency limit); the pink height is energy of below-bandgap photons; the green height is energy lost when hot photogenerated electrons and holes relax to the band edges; the blue height is energy lost in the tradeoff between low radiative recombination versus high operating voltage. Designs that exceed the Shockley–Queisser limit work by overcoming one or more of these three loss processes.

It is important to note that the analysis of Shockley and Queisser was based on the following assumptions:

1. One electron–hole pair excited per incoming photon
2. Thermal relaxation of the electron–hole pair energy in excess of the band gap
3. Illumination with non-concentrated sunlight

None of these assumptions is necessarily true, and a number of different approaches have been used to significantly surpass the basic limit.
### Tandem cells

The most widely explored path to higher efficiency solar cells has been multijunction photovoltaic cells, also known as "tandem cells". These cells use multiple p-n junctions, each one tuned to a particular frequency of the spectrum. This reduces the problem discussed above, that a material with a single given bandgap cannot absorb sunlight below the bandgap, and cannot take full advantage of sunlight far above the bandgap. In the most common design, a high-bandgap solar cell sits on top, absorbing high-energy, low-wavelength light, and transmitting the rest. Beneath it is a lower-bandgap solar cell which absorbs some of the lower-energy, longer-wavelength light. There may be yet another cell beneath that one, with as many as four layers in total. The calculation of the fundamental efficiency limits of these multijunction cells works in a fashion similar to those for single-junction cells, with the caveat that some of the light will be converted to other frequencies and re-emitted within the structure. Using methods similar to the original Shockley–Queisser analysis with these considerations in mind produces similar results; a two-layer cell can reach 42% efficiency, three-layer cells 49%, and a theoretical infinity-layer cell 68% in non-concentrated sunlight.[4] The majority of tandem cells that have been produced to date use three layers, tuned to blue (on top), yellow (middle) and red (bottom). These cells require the use of semiconductors that can be tuned to specific frequencies, which has led to most of them being made of gallium arsenide (GaAs) compounds, often germanium for red, GaAs for yellow, and GaInP2 for blue. They are very expensive to produce, using techniques similar to microprocessor construction but with "chip" sizes on the scale of several centimeters.
In cases where outright performance is the only consideration, these cells have become common; they are widely used in satellite applications for instance, where the power-to-weight ratio overwhelms practically every other consideration. They also can be used in concentrated photovoltaic applications (see below), where a relatively small solar cell can serve a large area. Tandem cells are not restricted to high-performance applications; they are also used to make moderate-efficiency photovoltaics out of cheap but low-efficiency materials. One example is amorphous silicon solar cells, where triple-junction tandem cells are commercially available from Uni-Solar and other companies.

### Light concentration

Sunlight can be concentrated with lenses or mirrors to much higher intensity. The sunlight intensity is a parameter in the Shockley–Queisser calculation, and with more concentration, the theoretical efficiency limit increases somewhat. If, however, the intense light heats up the cell, which often occurs in practice, the theoretical efficiency limit may go down all things considered. In practice, the choice of whether or not to use light concentration is based primarily on other factors besides the small change in solar cell efficiency. These factors include the relative cost per area of solar cells versus focusing optics like lenses or mirrors, the cost of sunlight-tracking systems, the proportion of light successfully focused onto the solar cell, and so on.
A wide variety of optical systems can be used to concentrate sunlight, including ordinary lenses and curved mirrors, fresnel lenses, arrays of small flat mirrors, and luminescent solar concentrators.[11][12] Another proposal suggests spreading out an array of microscopic solar cells on a surface, and focusing light onto them via microlens arrays,[13] while yet another proposal suggests designing a semiconductor nanowire array in such a way that light is concentrated in the nanowires.[14]

### Intermediate band photovoltaics

There has been some work on producing mid-energy states within single crystal structures. These cells would combine some of the advantages of the multi-junction cell with the simplicity of existing silicon designs. A detailed limit calculation for these cells with infinite bands suggests a maximum efficiency of 77.2%.[15] To date, no commercial cell using this technique has been produced.

### Photon upconversion

As discussed above, photons with energy below the bandgap are wasted in ordinary single-junction solar cells. One way to reduce this waste is to use photon upconversion, i.e. incorporating into the module a molecule or material that can absorb two or more below-bandgap photons and then emit one above-bandgap photon. Another possibility is to use two-photon absorption, but this can only work at extremely high light concentration.[16]

### Thermal photon upconversion

Thermal upconversion is based on the absorption of photons with low energies in the upconverter, which heats up and re-emits photons with higher energies.[17] The upconversion efficiency can be improved by controlling the optical density of states of the absorber[18] and also by tuning the angularly-selective emission characteristics.
For example, a planar thermal upconverting platform can have a front surface that absorbs low-energy photons incident within a narrow angular range, and a back surface that efficiently emits only high-energy photons.[19] A hybrid thermophotovoltaic platform exploiting thermal upconversion was theoretically predicted to demonstrate maximum conversion efficiency of 73% under illumination by non-concentrated sunlight. A detailed analysis of non-ideal hybrid platforms that allows for up to 15% of absorption/re-emission losses yielded a limiting efficiency value of 45% for Si PV cells.

### Hot electron capture

One of the main loss mechanisms is due to the loss of excess carrier energy above the bandgap. It should be no surprise that there has been a considerable amount of research into ways to capture the energy of the carriers before they can lose it in the crystal structure.[20] One system under investigation for this is quantum dots.[21]

### Multiple exciton generation

A related concept is to use semiconductors that generate more than one excited electron per absorbed photon, instead of a single electron at the band edge. Quantum dots have been extensively investigated for this effect, and they have been shown to work for solar-relevant wavelengths in prototype solar cells.[21][22] Another, more straightforward way to utilise multiple exciton generation is a process called singlet fission (or singlet exciton fission) by which a singlet exciton is converted into two triplet excitons of lower energy.
This allows for higher theoretical efficiencies when coupled to a low bandgap semiconductor,[23] and quantum efficiencies exceeding 100% have been reported.[24] Also in materials where the (excited) electrons interact strongly with the remaining electrons, such as Mott insulators, multiple excitons can be generated.[25]

### Fluorescent downconversion/downshifting

Another possibility for increased efficiency is to convert the frequency of light down towards the bandgap energy with a fluorescent material. In particular, to exceed the Shockley–Queisser limit, it is necessary for the fluorescent material to convert a single high-energy photon into several lower-energy ones (quantum efficiency > 1). For example, one photon with more than double the bandgap energy can become two photons above the bandgap energy. In practice, however, this conversion process tends to be relatively inefficient. If a very efficient system were found, such a material could be painted on the front surface of an otherwise standard cell, boosting its efficiency for little cost.[26] In contrast, considerable progress has been made in the exploration of fluorescent downshifting, which converts high-energy light (e.g., UV light) to low-energy light (e.g., red light) with a quantum efficiency smaller than 1. The cell may be more sensitive to these lower-energy photons. Dyes, rare-earth phosphors and quantum dots are actively investigated for fluorescent downshifting.[27] For example, downshifting enabled by silicon quantum dots has led to efficiency enhancement of state-of-the-art silicon solar cells.[28]

### Thermophotovoltaic downconversion

Thermophotovoltaic cells are similar to phosphorescent systems, but use a plate to act as the downconvertor. Solar energy falling on the plate, typically black-painted metal, is re-emitted as lower-energy IR, which can then be captured in an IR cell.
This relies on a practical IR cell being available, but the theoretical conversion efficiency can be calculated. For a converter with a bandgap of 0.92 eV, efficiency is limited to 54% with a single-junction cell, and 85% for concentrated light shining on ideal components with no optical losses and only radiative recombination.[29]

## References

1. ^ a b c William Shockley and Hans J. Queisser (March 1961). "Detailed Balance Limit of Efficiency of p-n Junction Solar Cells" (PDF). Journal of Applied Physics. 32: 510–519. Bibcode:1961JAP....32..510S. doi:10.1063/1.1736034.
2. ^ a b S. Rühle (2016). "Tabulated values of the Shockley–Queisser limit for single junction solar cells". Solar Energy. 130: 139–147. Bibcode:2016SoEn..130..139R. doi:10.1016/j.solener.2016.02.015.
3. ^ "Hans Queisser". Computer History Museum. Retrieved January 17, 2017.
4. ^ a b A. De Vos, "Detailed balance limit of the efficiency of tandem solar cells", Journal of Physics D: Applied Physics, Volume 13, Issue 5 (14 May 1980), pages 839–846. doi:10.1088/0022-3727/13/5/018
5. ^ a b "Photovoltaic Cells (Solar Cells), How They Work". specmat.com. Archived from the original on 18 May 2007. Retrieved 2 May 2007.
6. ^ A. De Vos & H. Pauwels (1981). "On the Thermodynamic Limit of Photovoltaic Energy Conversion". Appl. Phys. 25: 119–125. Bibcode:1981ApPhy..25..119D. doi:10.1007/BF00901283.
7. ^ W. Ruppel and P. Würfel (1980). "Upper limit for the conversion of solar energy". IEEE Transactions on Electron Devices. 27: 877. Bibcode:1980ITED...27..877R. doi:10.1109/T-ED.1980.19950. This paper finds the same open-circuit voltage and short-circuit current as de Vos and Pauwels, but does not give the correct function for I(V).
8. ^ Byrnes, Steven. "The Shockley-Queisser limit". Retrieved 2016-03-10.
9. ^ a b c C. S. Solanki and G. Beaucarne, "Advanced Solar Cell Concepts", Interuniversity Microelectronics Center, Belgium
10. ^ A. Richter; M. Hermle; S.W. Glunz (Oct 2013). "Reassessment of the limiting efficiency for crystalline silicon solar cells". IEEE Journal of Photovoltaics. 3 (4): 1184–1191. doi:10.1109/JPHOTOV.2013.2270351.
11. ^ Elizabeth A. Thomson, "MIT opens new 'window' on solar energy", MIT News, 10 July 2008
12. ^ Kittidachachan, Pattareeya; Danos, Lefteris; Meyer, Thomas J. J.; Alderman, Nicolas; Markvart, Tom (19 December 2007). "Photon Collection Efficiency of Fluorescent Solar Collectors". CHIMIA International Journal for Chemistry. 61 (12): 780–786. doi:10.2533/chimia.2007.780.
13. ^ "Microsystems Enabled Photovoltaics, Sandia National Laboratories". Archived from the original on 5 April 2013. Retrieved 26 March 2013.
14. ^ Krogstrup, Peter; Jørgensen, Henrik Ingerslev; Heiss, Martin; Demichel, Olivier; Holm, Jeppe V.; Aagesen, Martin; Nygard, Jesper; Fontcuberta i Morral, Anna (24 March 2013). "Single-nanowire solar cells beyond the Shockley–Queisser limit". Nature Photonics. 7 (4): 306–310. arXiv:1301.1068. Bibcode:2013NaPho...7..306K. doi:10.1038/nphoton.2013.32.
15. ^ Andrew S. Brown and Martin A. Green, "Impurity photovoltaic effect: Fundamental energy conversion efficiency limits", Journal of Applied Physics, Volume 92, Issue 1, August 2002, p. 1392. doi:10.1063/1.1492016
16. ^ Bahram Jalali, Sasan Fathpour, and Kevin Tsia, "Green Silicon Photonics", Optics and Photonics News, Vol. 20, Issue 6, pp. 18–23 (2009). doi:10.1364/OPN.20.6.000018
17. ^ N.J. Ekins-Daukes et al., Appl. Phys. Lett. 82, 1974 (2003). doi:10.1063/1.1561159
18. ^ D.J. Farrell et al., Appl. Phys. Lett. 99, 111102 (2011). doi:10.1063/1.3636401
19. ^ S.V. Boriskina and G. Chen, Opt. Commun. 314, 71–78 (2014). doi:10.1016/j.optcom.2013.10.042
20. ^ Gavin Conibeer et al., "Hot Carrier Solar Cell: Implementation of the Ultimate Photovoltaic Converter", Global Climate & Energy Project, Stanford University, September 2008
21. ^ a b A. J. Nozik, "Quantum Dot Solar Cells", National Renewable Energy Laboratory, October 2001
22. ^ O. E. Semonin, "Peak External Photocurrent Quantum Efficiency Exceeding 100% via MEG in a Quantum Dot Solar Cell", Science, 2011, vol. 334 (6062), pp. 1530–1533
23. ^ B. Ehrler, "Singlet Exciton Fission-Sensitized Infrared Quantum Dot Solar Cells", Nano Letters, 2012, vol. 12 (2), pp. 1053–1057
24. ^ D. N. Congreve, "External Quantum Efficiency Above 100% in a Singlet-Exciton-Fission–Based Organic Photovoltaic Cell", Science, 2013, vol. 340 (6130), pp. 334–337
25. ^ P. Werner; K. Held & M. Eckstein (2014). "Role of impact ionization in the thermalization of photoexcited Mott insulators". Phys. Rev. B. 90: 235102. arXiv:1408.3425. Bibcode:2014PhRvB..90w5102W. doi:10.1103/PhysRevB.90.235102.
26. ^ "Sunovia, EPIR Demonstrate Optical Down-Conversion For Solar Cells"
27. ^ Klampaftis, Efthymios; Ross, David; McIntosh, Keith R.; Richards, Bryce S. (August 2009). "Enhancing the performance of solar cells via luminescent down-shifting of the incident spectrum: A review". Solar Energy Materials and Solar Cells. 93 (8): 1182–1194. doi:10.1016/j.solmat.2009.02.020.
28. ^ Pi, Xiaodong; Zhang, Li; Yang, Deren (11 October 2012). "Enhancing the Efficiency of Multicrystalline Silicon Solar Cells by the Inkjet Printing of Silicon-Quantum-Dot Ink". The Journal of Physical Chemistry C. 116 (40): 21240–21243. doi:10.1021/jp307078g.
29. ^ Nils-Peter Harder and Peter Würfel, "Theoretical limits of thermophotovoltaic solar energy conversion", Semiconductor Science and Technology, Volume 18, Issue 5 (May 2003), S151–S157. doi:10.1088/0268-1242/18/5/303
{}
Date Speaker Title Abstract January 24 Stanislav Atanasov The Weil Conjectures In this talk, we start with a well-known example of counting points on Grassmannians over finite fields. This will provide us motivation for introducing the deep and far-reaching connections between non-singular complex varieties and their realizations over finite fields, known as the Weil Conjectures. These conjectures concern properties of zeta functions, and we explain how some of these properties follow easily from the existence of an appropriate cohomology theory. January 31 Theo Coyne Symplectic Manifolds and Embeddings I will introduce and motivate the basic concepts in symplectic geometry and explain why they are important (in physics, for example).  One important problem in symplectic geometry is determining when one symplectic manifold embeds symplectically into another.  I will summarize some methods and results used to address this question. February 7 Noah Olander How to Use Finite Fields for Problems Concerning Infinite Fields Following J.P. Serre’s paper of the same title as this talk, I will give an algebraic proof of the Ax-Grothendieck Theorem - which appears to be a theorem of complex analysis - using finite fields. I will discuss what makes this argument work, and if time permits, I will prove another result that appears in Serre’s paper. February 14 Henry Liu Topological Quantum Field Theory and Gauge Theories The study of QFTs has inspired many modern mathematical constructions and results. QFTs which are unchanged by diffeomorphism are called topological; we will play around with the structure of such QFTs in (1+1) dimensions and prove a baby version of the celebrated Verlinde formula. If time permits, we’ll define gauge theories and their quantizations, and apply the baby Verlinde formula to them to get some interesting group/representation theoretic identities. 
February 21 Kevin Kwan Definitely Maybe - Probability and Statistics in Number Theory There has been a series of profound advancements in number theory in the 20th century, thanks to the understanding of the anatomy of integers and the fruitful interactions between statistics, probability theory, analysis and number theory. This will be a light survey talk on the heuristics and results in this direction, with emphasis on the distributions of prime divisors and prime gaps. February 28 Alex Zhang From Morse to Floer: Topological Invariants from Functions From generic nice functions, we can extract information about topological invariants of the manifold. Andreas Floer generalized this idea to functionals on the loop space of a closed symplectic manifold to prove Arnold's conjecture under some further conditions. This construction has far richer structure than just the homology of the manifold and actually encodes all of its symplectic info. I will show some connections with the earlier talks on symplectic manifolds and TQFT if time allows. March 7 Semon Rezchikov Feynman Diagram Techniques (Canceled Due to Snow) There is no reason why Feynman diagrams couldn't be taught in an advanced calculus class. I will discuss something actually useful: how to compute asymptotic series for certain exponential integrals. We will start with one variable, where the asymptotic series will be sums over graphs, and try to get to matrix integrals (the Feynman diagrams for which involve surfaces with boundary, i.e. `interacting strings'.) March 14 No meeting March 21 Shizhang Li Hypergeometric Series and Igusa's Formula (Canceled Due to Snow) Consider a 2nd order ODE: z(1-z)f'' + (1-2z)f' - (1/4) f = 0, known as the hypergeometric differential equation. In the first part of my talk, I will briefly discuss its solution, found by Euler and studied systematically by Gauss, known as the Gauss hypergeometric series. 
Then, in the second part of my talk, I will discuss a seemingly completely unrelated formula (Igusa's formula) about counting points of elliptic curves (of Legendre form) in characteristic $p$. For the rest of the talk, I will try to tell the audience why and how these two things are related. March 28 Linh Truong A Categorification of a Knot Polynomial We will describe Khovanov's groundbreaking "categorification" of the Jones polynomial of a knot. Khovanov homology is a topological invariant of knots and links with an easy combinatorial definition. It has been used to answer some longstanding questions in three-dimensional topology such as the Milnor Conjecture. The objective of this talk is to define Khovanov homology, compute it for some examples, and state some of its properties. April 4 George Drimba Alexandrov Geometry and its Applications In this talk, I will motivate and explain various aspects and techniques of Alexandrov Geometry as well as applications to geometric questions. April 11 Alex Pieloch Why Pants Are Important The goal of this talk is to outline the proof that the moduli space of Riemann surfaces of genus $g$ is $3g-3$ complex dimensional. Along the way, we hope to illustrate how the combinatorics of simple closed curves on a surface give a great deal of information about the geometry of the moduli space of Riemann surfaces of genus $g$. Our proof of the main result heavily relies on understanding the hyperbolic geometry of pairs of pants, that is, closed disks with two smaller open disks removed. We will first study the hyperbolic geometry of a pair of pants.  After this, we will be able to introduce the Teichmüller space of a surface and its associated Fenchel–Nielsen coordinates, which will enable us to prove the main result. 
April 18 Pak-Hin Lee From Cutting Squares to Combinatorics and 2-adics While it is easy to dissect a square into any even number of triangles of equal area, it turns out to be impossible to do the same with an odd number of triangles. The only proof of this result to date, due to Monsky (1970), requires input from both combinatorial topology (!) and the 2-adic numbers (!!), which is surprising given that the problem is geometric in nature. In this talk, we will introduce all the necessary concepts and explain Monsky's proof in detail. April 25 Alex Perry Brauer Groups The Brauer group is an object with diverse applications in algebraic geometry. In this talk, I will focus on the Brauer group of a field, which classifies finite-dimensional division algebras over the field. May 2 No meeting
{}
# Why do we always put log() before the joint pdf when we use MLE (Maximum Likelihood Estimation)?

Maybe this question is simple, but I really need some help. When we use Maximum Likelihood Estimation (MLE) to estimate parameters, why do we always put log() before the joint density? To use a sum in place of a product? But why? Wikipedia says it is convenient. Why? Thank you.

• We don't always do it. But besides the 'turn products into sums', there's also the fact that the numbers become far more tractable (computing pure likelihoods on typical to large sample sizes can often suffer from underflow problems), and finally there's the fact that a very large number of densities and probability functions involve exponentiation or powers, so even the individual terms may become nicer to work with. In fact, there are typically so many reasons to do it, you would usually only avoid it with good reason. – Glen_b Sep 25 '13 at 2:07 • If your data consists of $n$ independent samples, then the likelihood function is the product of the likelihood functions of the individual samples. To maximize the likelihood, differentiate the likelihood etc., which results in a sum of $n$ terms and is sometimes a mess to work with. If we take the logarithm and then try to maximize the logarithm of the likelihood (which yields the same maximum), the derivative is a sum of $n$ terms each of which depends on a separate sample value: much easier to work with than a sum of $n$ terms that all depend on all the $n$ sample values. – Dilip Sarwate Sep 25 '13 at 2:09 • Another viewpoint: if $f(x)>0$, for every $x$, then $(\log f(x))'=f'(x)/f(x)=0$ if and only if $f'(x)=0$. Hence, maximization of the log-likelihood gives you the same answer (the mle), and, as already pointed out, generally makes things analytically simpler. 
– Zen Sep 25 '13 at 3:09 Apart from the reasons mentioned in the comments to your question, there is another important one: in applying maximum likelihood estimation, we essentially solve a maximization problem with respect to the unknown coefficients. Recall that finding the global maximum of a function is not a simple matter when we have many unknowns, and when the objective function lacks (or is not known to possess) certain general properties, like concavity (in the case of maximization), especially when the maximization will be done through an iterative procedure (as will be the case for most likelihood functions). Moreover, concavity of the objective function is an important condition in proving consistency of the ML estimator when the parameter space is not compact (for example, when you estimate variances, $\sigma^2$, the parameter space is not compact but open from below, since by construction $\sigma^2 >0$). So we would want our objective function to be concave with respect to the parameters, to guarantee a global maximum. In linear models, if we have concavity in the variable, we obtain concavity in the parameters. Now there are many widely used distributions whose density functions are not concave, but whose natural logarithms are (we call such functions "log-concave"). The Normal density is the most prominent example: the function $$f_X(x) =\frac {1}{\sigma\sqrt{2\pi}}e^{-\frac 12 (\frac{x-\mu}{\sigma})^2}$$ is neither convex nor concave in $x$ (it has a middle concave part and is convex in the tails). But the function $$\ln f_X(x) =\ln \left(\frac {1}{\sigma\sqrt{2\pi}}\right) -\frac 12 \left(\frac{x-\mu}{\sigma}\right)^2$$ is globally concave in $x$. (Then, by using the invariance property of the ML estimator, we can show by a suitable one-to-one transformation of the unknown parameter vector that the function is concave in the re-parametrized vector.) 
But in general, the basic point is that taking logs produces concavity of the objective function, which is a very desirable property. • +1 The point about convexity of $-\log(L)$ is good, although perhaps overspecialized (it does not apply to many likelihoods). But I believe the answer goes much deeper than that: as statisticians, we are (or at least should be) at least as interested in the possible errors in our estimates as we are in the estimates themselves. Those errors are computed from relatively simple properties of $-\log(L)$ and not from properties of $L$ itself. – whuber Sep 25 '13 at 15:09 In addition to the mathematical reason that Alecos wrote, let me give you a computational reason. Remember that the likelihood function is nothing but the joint density of random variables (expressed as a function of the parameters), i.e. $$Pr(\mathbf{x}) = Pr(x_{1})\cdot Pr(x_{2})\cdot\ldots\cdot Pr(x_{n}) = \prod_{i}^{n} Pr(x_{i})$$ for i.i.d. data. The probability density $$0 \leq Pr(x_{i}) \leq 1$$ for all $$i$$, so this number $$Pr(\mathbf{x})$$ becomes very small quickly as $$n$$ increases. Suppose all $$Pr(x_{i}) = 0.5$$ and $$n=1000$$, then $$\prod_{i}^{n} Pr(x_{i}) = 0.5^{1000} = 9.33 \cdot 10^{-302}$$ For only slightly larger datasets, or slightly smaller $$Pr(x_{i})$$, we are outside the representable range for software packages. For instance, the smallest representable number in R is $$2.225074\cdot10^{-308}$$. On the flipside, we have $$\log(Pr(\mathbf{x})) = \sum_{i}^{n} \log \left( Pr(x_{i}) \right) = 1000\cdot \log(0.5) = -693.1472$$ and even for $$n=1000000$$ we only have $$\log(Pr(\mathbf{x})) = 1000000\cdot \log(0.5) = -693147.2$$. • Similarly, the gradients are much easier for a large summation compared with a large product. – Cliff AB Mar 10 at 23:39
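The underflow argument above is easy to reproduce. Here is a small Python sketch (not part of the original answer) using the same Pr(x_i) = 0.5 example:

```python
import math

def likelihood(ps):
    # Naive product of densities: collapses to 0.0 once the true value
    # drops below the smallest representable double.
    out = 1.0
    for p in ps:
        out *= p
    return out

def log_likelihood(ps):
    # Summing logs keeps the numbers in a comfortable range instead.
    return sum(math.log(p) for p in ps)

print(likelihood([0.5] * 1000))       # ~9.33e-302, barely representable
print(likelihood([0.5] * 1100))       # 0.0 -- the product has underflowed
print(log_likelihood([0.5] * 1100))   # ~-762.46, no numerical trouble
```

For n = 1000 the product is still representable, as the answer notes, but a slightly longer sample pushes it to exactly 0.0, while the log-likelihood remains a perfectly ordinary number.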
{}
# What is the maximum / minimum operational temperature? As per the title, what are the maximum and minimum operational temperature ratings of the Pi before it stops reliably working? Could this also depend on the SD card in use? • while sleep 1;do tput sc;tput cup 0 $(($(tput cols)-2));cat /sys/class/thermal/thermal_zone0/temp;tput rc;done & will display the cpu temp in the top right corner of the console. For monitoring. – NVRM Sep 30 at 19:49 • @NVRM using watch /opt/vc/bin/vcgencmd measure_temp (outputs new measure per each two seconds) sounds like an easier / cleaner way than your approach to constant temperature monitoring. But, I might be wrong. – trejder Oct 7 at 12:30 • Well, those are fairly similar, the cat approach works on all linux, while vcgencmd is specific to the pi. – NVRM Oct 7 at 16:27 From the RPi FAQ: What is the usable temperature range? The Raspberry Pi is built from commercial chips which are qualified to different temperature ranges; the LAN9512 is specified by the manufacturers being qualified from 0°C to 70°C, while the AP is qualified from -40°C to 85°C. You may well find that the board will work outside those temperatures, but we’re not qualifying the board itself to these extremes. • Does anyone know what the "AP" is, ie, which chip/component? – Darryl Hein May 14 '13 at 20:28 • This is the Application Processor (Broadcom BCM2835), CPU of the board. – bayindirh Jun 15 '13 at 8:32 • And it stands to reason that as the device is powered, the heat produced could factor.. IE: Starting it at -40 deg C may fail, but if it's been left running, the components should be warmer than ambient and thus not at -40. The environment should factor. -40, here I come, I'm parking some Pis outside in Canada. :) – James T Snell Oct 31 '14 at 4:41 • @Doc Funny you should mention that eh. 
;) I've parked one outdoors (inside Tupperware, air/water tight, but no insulation) at -20 C for a few hours, and did not notice the actual core temp (from the built-in sensor) drop below +20 C. – goldilocks Jul 9 '16 at 20:20 • is there any way to detect the LAN chip temp, since its dead temp is much lower than the SoC chip? I believe the /opt/vc/bin/vcgencmd measure_temp command can only report the SoC chip temp, correct? – Shawn Jul 1 '18 at 1:59 It'll go way down to < -70°C according to the article: Raspberry Pi proven to be stable when submerged in liquid nitrogen. • Brilliant article that one is! :D Just need a steady supply of liquid nitrogen now :) – Piotr Kula Jan 10 '17 at 8:06 • Hmmm.. Overclocking, anyone? – SDsolar Apr 8 '18 at 2:13 My experience with Raspberry Pi 3: The SoC will start to throttle down at approximately 80 degrees Celsius, and will, in my experience, never allow itself to be warmer than 85 degrees Celsius. This is of course the core temperature - the temperature outside the chip will have to be much lower to facilitate efficient heat exchange. While you (probably, don't take my word for it) cannot destroy the SoC by leaving it uncooled, the performance will be severely impacted. (Same goes for the power supply, BTW.) In our lab, we started noticing frame drops and significant degradation of video processing capability, only to find out that 1) it got too hot without the heatsink and 2) the voltage dropped below 4.6 V due to 5 V supply wires that were too long. In any kind of extreme scenario, it is most likely that your processing power will decrease first, and other problems will appear much later, if ever. This can lead to a huge waste of time when trying to hunt down software bugs ("why is my program suddenly running so slowly?!?"), only to discover that the wires are too thin or the heat sink is too small, so beware! Regarding the low boundary, you should check all the components. 
I recently booted a Raspi 3 at -12 C and the camera did not work (first time in weeks, but other times the temperatures overnight were not so low). After 15 minutes of waiting I rebooted it and it started working normally. Also, I think that the networking/USB chip on the board itself is not rated below 0 C. If you need such extremes, I suggest waiting for the Compute Module 3, which will have a range of -20 to 80 C, simply by not providing the problematic chip at all :) I see the OP's question has been answered authoritatively, but here are my 2 cents worth of experience: With the basic clear plastic no-fan enclosure and heat sinks the ARM AP runs at about 50 C (122 F), and my Pi3 works fine. When I take off the top part of the plastic shell the temperature drops to 47-48. So my conclusion is that the enclosure is not causing any measurable harm in this regard. The command to return the CPU temperature on stdout is vcgencmd measure_temp I see in comments that uhoh mentions that if you want to use the temperature in a Python program, the command os.popen('vcgencmd measure_temp').read() will return the textual version of the temperature number. ## ------------------------------------------------------ Here is the way I use Popen to get the temperature into a numeric variable: from subprocess import Popen, PIPE . . . cmd = 'vcgencmd measure_temp' p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True) stdout, stderr = p.communicate() # stdout looks like b"temp=47.2'C"; pull the number out before converting CPUtemp = float(stdout.decode().split('=')[1].split("'")[0]) . . etc The above is taken from this code: Ping a website and have an output turn on if online/offline? That post shows how to use fping in a few different ways, even though the results come in as stderr. It also includes a cradle-to-grave example which makes use of the data and plots it live as it comes in. It shows Python and gnuplot. We don't see enough of these whole-system examples here. 
• just make sure there is good air movement in environments with very high temps to begin with :) – stevieb Sep 16 '16 at 18:28 • Excellent point. My room temperature is 75 F which is just under 24 C, and the door is open to outside so there is good air flow. – SDsolar Sep 16 '16 at 18:48 • This is very helpful! (will up vote in 12 hours when my quota for the day expires). It seems that os.popen('vcgencmd measure_temp').read() makes the temperature (text) available within Python as well, as does commands.getstatusoutput('vcgencmd measure_temp')[1] – uhoh Mar 19 '18 at 12:24 • TNX for the Python commands. That approach to using os.popen and .read like that could come in very handy. Not to mention commands.getstatusoutput ... [1] I tend to use Python with external commands a lot but hadn't run across either of those. Well done. btw, the more questions you answer the more points you will get and the restrictions will get lifted pretty quickly. This particular SE is a tough crowd, but they definitely recognize valuable contributions to the database. – SDsolar Mar 19 '18 at 16:38 • I maxed out on up votes today; while I spent several hours looking for some things I kept running across (what I felt to be) helpful posts with zero votes. I haven't figured out what makes some SE sites more generous and others not as much. – uhoh Mar 19 '18 at 20:10 The following is a little outside of the question, but a general use case that might give some ideas. This can be adapted to any kind of input: GPIO sensors, internet data. How to graph the CPU temperature over time? Install gnuplot. Gnuplot can graph data in the terminal, does not require any X server and uses very little resources. It works smoothly even on the slowest Raspberry Pi models (1/Zero). sudo apt install gnuplot Script example to build a gnuplot file: a temperature script to store the data over time. 
#!/bin/sh echo "$(date +%s) $(cat /sys/class/thermal/thermal_zone0/temp)" | tee -a temperature.plot Give execution rights to this script: chmod +x temperature Detach and run in a 1 s loop till next reboot: nohup watch -n 1 ./temperature & Later, graph the data: gnuplot -e "set terminal dumb $(tput cols) $(tput lines); plot 'temperature.plot' using 0:2 with lines" This is a barebone example (temperature in Celsius * 1000, with seconds since the epoch kept in the first column), to be extended in your own scripts suite. To kill the watch loop: killall watch Happy hacking ;)
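Several of the answers above shell out to `vcgencmd measure_temp`, which prints text like `temp=47.2'C`. Here is a small Python sketch (the helper name is ours, not from any answer) that parses that format robustly instead of converting the raw string directly:

```python
import re

def parse_vcgencmd_temp(output):
    """Parse text like "temp=47.2'C" as printed by `vcgencmd measure_temp`."""
    m = re.match(r"temp=([0-9.]+)'C", output.strip())
    if m is None:
        raise ValueError("unexpected vcgencmd output: %r" % output)
    return float(m.group(1))

print(parse_vcgencmd_temp("temp=47.2'C\n"))  # 47.2
```

Raising on unexpected input makes failures obvious if the firmware ever changes the output format, rather than silently logging garbage temperatures.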
{}
# Diffuse Interstellar Bands and the Ultraviolet Extinction Curves: The Missing Link Revisited [GA] A large number of interstellar absorption features at ~ 4000\AA\ — 1.8 {\mu}m, known as the “diffuse interstellar bands” (DIBs), remain unidentified. Most recent works relate them to large polycyclic aromatic hydrocarbon (PAH) molecules or ultrasmall carbonaceous grains which are also thought to be responsible for the 2175 \AA\ extinction bump and/or the far ultraviolet (UV) extinction rise at $\lambda^{-1} > 5.9\ {\mu}m^{-1}$. Therefore, one might expect some relation between the UV extinction and DIBs. Such a relationship, if established, could put important constraints on the carrier of DIBs. Over the past four decades, whether DIBs are related to the shape of the UV extinction curves has been extensively investigated. However, the results are often inconsistent, partly due to the inconsistencies in characterizing the UV extinction. Here we re-examine the connection between the UV extinction curve and DIBs. We compile the extinction curves and the equivalent widths of 40 DIBs along 97 sightlines. We decompose the extinction curve into three Drude-like functions composed of the visible/near-infrared component, the 2175 \AA\ bump, and the far-UV extinction at $\lambda^{-1} > 5.9\ {\mu}m^{-1}$. We argue that the wavelength-integrated far-UV extinction derived from this decomposition technique best measures the strength of the far-UV extinction. No correlation is found between the far-UV extinction and most (~90\%) of the DIBs. We have also shown that the color excess E(1300-1700), the extinction difference at 1300 \AA\ and 1700 \AA\ often used to measure the strength of the far-UV extinction, does not correlate with DIBs. Finally, we confirm the earlier findings of no correlation between the 2175 \AA\ bump and DIBs or between the 2175 \AA\ bump and the far-UV extinction. F. Xiang, A. Li and J. 
Zhong Mon, 24 Oct 16 24/53 Comments: 84 pages, 16 figures, 45 tables; accepted for publication in The Astrophysical Journal (2016)
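The three-component decomposition described in the abstract can be sketched numerically. The sketch below is illustrative only: it uses the common Drude profile D(x; x0, gamma) = x^2 / ((x^2 - x0^2)^2 + x^2 gamma^2) with made-up amplitudes and widths, not the authors' actual fit parameters:

```python
def drude(x, x0, gamma):
    # Drude-like profile peaking near x = x0 (in inverse microns), width gamma.
    return x * x / ((x * x - x0 * x0) ** 2 + (x * gamma) ** 2)

def extinction_model(x, components):
    # Extinction curve as a sum of Drude-like terms: a visible/near-IR
    # component, the 2175 A bump (x0 ~ 4.6 um^-1), and a far-UV component.
    return sum(c * drude(x, x0, g) for c, x0, g in components)

# Placeholder (amplitude, x0, gamma) triples, purely for illustration:
components = [(1.0, 0.05, 1.0), (1.0, 4.6, 1.0), (0.5, 8.0, 2.0)]
curve = [extinction_model(0.5 + 8.5 * i / 999, components) for i in range(1000)]
```

In a real analysis the amplitudes, peak positions, and widths would be fitted per sightline, and the far-UV strength would then be the wavelength-integrated area of the third component, as the paper advocates.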
{}
# $A$ and $D$ are on the circumference of a circle and $B$ and $C$ are its interior points such that $PA = 12$, $\frac{AB}{CD} = \frac{1}{2}$. Find $PC$ Something about this question confuses me; I think it has inadequate context or information (though that may just reflect my limited knowledge), so I could not solve the problem. $$PE$$ is a tangent in the diagram below. The diameter of the small circle is equal to the radius of the large circle and $$BC$$ is the diameter of the small circle. If $$PA = 12$$ and $$\frac{AB}{CD} = \frac{1}{2}$$, then what is the value of $$PC$$? I denoted the center of the small circle $$H$$ and drew three altitude lines $$AG$$, $$HE$$ and $$DI$$ from the vertices $$A$$, $$H$$ and $$D$$, and also denoted $$AB = x$$ and $$BH = y$$. So, $$HE = y$$. $$\triangle AGP$$ $$\sim$$ $$\triangle HEP$$ $$\sim$$ $$\triangle DIP$$. Using this similarity, I related the lengths $$AG$$ and $$DI$$ to $$HE$$ through the ratios of corresponding sides. But I was unable to bring in the remaining given information, and I could not find the values of $$x$$ and $$y$$. I think I have made a mess of the problem, and I am not even sure my method is heading in the right direction. It would be very helpful if someone could tell me which way to go, or where my mistake is. Thanks in advance. Let $$AB = x$$ and $$CD = 2x$$. You can use the power of the point $$P$$ with respect to both circles: $$PA\cdot PD (=PE^2)= PB\cdot PC$$ $$\implies 12(12+x+2r+2x)= (12+x)(12+x+2r)$$ so $$x+2r = 12\implies PC = 12+x+2r = 24$$ Note: the information about the relation between the radii of the two circles is irrelevant. • Why is your condition true? I don't understand – Federico Fallucca Feb 14 at 16:52 • Do you know the power of the point? 
@FedericoFallucca – Aqua Feb 14 at 16:54 • no sorry, I don’t know this concept – Federico Fallucca Feb 14 at 16:54 • Google it or wiki – Aqua Feb 14 at 16:55 • @greedoid You are absolutely right. The relation is totally irrelevant to the problem because I made the relation according to my own view and there was no description about the relation of radius and diameter of the small and the larger circle. I will try to notice the fact and I gave the information to provide some context. Pardon me for my fault. – Anirban Niloy Feb 14 at 17:19
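The accepted computation can also be checked numerically. The sketch below (the function name is ours) solves the power-of-the-point equation for the radius r and confirms that PC = 24 regardless of the value of x:

```python
def pc_from_power_of_point(x, PA=12.0):
    # Power of the point P w.r.t. both circles: PA*PD = PE^2 = PB*PC,
    # with PB = PA + x, PC = PB + 2r, PD = PC + 2x (since CD = 2*AB = 2x).
    PB = PA + x

    def f(r):
        PC = PB + 2 * r
        PD = PC + 2 * x
        return PA * PD - PB * PC

    # f is linear in r, so one secant step solves f(r) = 0 exactly.
    slope = f(1.0) - f(0.0)
    r = -f(0.0) / slope
    return PB + 2 * r  # this is PC

for x in (1.0, 2.0, 5.0):
    print(pc_from_power_of_point(x))  # 24.0 for every admissible x
```

This matches the algebra above: the equation forces x + 2r = 12, so PC = 12 + (x + 2r) = 24 no matter how AB and the radius trade off.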
{}
# Is ln 2x the same as ln x²?

## How do you find the derivative of y = ln x²?

ln²x is simply another way of writing (ln x)², and so they are equivalent.

## Is ln 2x the same as ln x²?

In normal mathematical usage ln x² means ln(x²); both are equal to 2 ln x and have derivative 2/x. Of course (ln x)² is something altogether different; if W|A interprets ln x² to mean (ln x)², it should give different derivatives, but that's a non-standard interpretation of ln x².

derivative of ln 2x

## What does ln x² mean?

What is the derivative of log x whole square? The derivative of (log x)² using the chain rule is 2 log x · d/dx(log x) = 2 log x · [1/(x ln 10)] = (2 log x)/(x ln 10).

## What is the derivative of y = ln x²?

Thus, the derivative of ln x² is 2/x. Note this result agrees with the plots of tangent lines for both positive and negative x. For x = 2, the derivative is 2/2 = 1, which agrees with the plot.

1/x

## Is ln x² the same as (ln x)²?

ln²x is simply another way of writing (ln x)², and so they are equivalent.

## How do you simplify ln 2x?

In normal mathematical usage ln x² means ln(x²); both are equal to 2 ln x and have derivative 2/x. Of course (ln x)² is something altogether different; if W|A interprets ln x² to mean (ln x)², it should give different derivatives, but that's a non-standard interpretation of ln x².

## What is the differentiation of ln 2x?

You use the chain rule: (f∘g)'(x) = (f(g(x)))' = f'(g(x))·g'(x). In your case: (f∘g)(x) = ln(2x), f(x) = ln(x) and g(x) = 2x.

## What does ln x² equal?

Step 1: Rewrite ln x² using logarithm properties. The logarithm of x to a power n equals n times the logarithm of x. Thus, ln x² = 2 ln x.

## Is ln²x the same as (ln x)²?

ln²x is simply another way of writing (ln x)², and so they are equivalent.

## How do you do ln²x?

ln^2(x) is defined for all x > 1 by ln^2(x) = ln(ln(x)). The same definition works for all complex numbers except x = 0 or 1, where logarithm to the base e is denoted by log for complex numbers, since hardly anybody uses log to any other base there.

1/x

## How do you find the derivative of ln ln x²?

The derivative of y = ln(2) is 0. Remember that one of the properties of derivatives is that the derivative of a constant is always 0. If you view the derivative as the slope of a line at any given point, then a function that consists of only a constant would be a horizontal line with no change in slope.

## How do you find the derivative of ln 2?

The derivative of ln x is 1/x. We know that the domain of ln x is x > 0 and thus d/dx (ln |x|) = 1/x as well. The derivative of ln(f(x)) using the chain rule is [1/f(x)] · f'(x).

## Is (ln x)² the same as ln²x?

ln²x is simply another way of writing (ln x)², and so they are equivalent.

## What is the difference between ln²(x) and 2 ln(x)?

ln^2(x) means simply to square the value of ln(x), whereas 2 ln(x) means to double the value of ln(x).

## What is the derivative of (ln x)²?

From the definition of the derivative using limits, the derivative of ln x² = 2(1/x) = 2/x as before.

## How can you simplify ln?

In normal mathematical usage ln x² means ln(x²); both are equal to 2 ln x and have derivative 2/x. Of course (ln x)² is something altogether different; if W|A interprets ln x² to mean (ln x)², it should give different derivatives, but that's a non-standard interpretation of ln x².

## How do you get rid of ln?

Explanation: According to log properties, the coefficient in front of the natural log can be rewritten as the exponent of the quantity inside the log. Notice that the natural log has a base of e. This means that exponentiating with base e will eliminate both the e and the natural log.

## What is ln 2x differentiated?

From the definition of the derivative using limits, the derivative of ln x² = 2(1/x) = 2/x as before.

## What is the second derivative of ln 2x?

The derivative of 2x is equal to 2, as the formula for the derivative of a straight-line function f(x) = ax + b is given by f'(x) = a, where a, b are real numbers. Differentiation of 2x is calculated using the formula d(ax + b)/dx = a.

## What is the second derivative of ln²x?

ln^2(x) means simply to square the value of ln(x), whereas 2 ln(x) means to double the value of ln(x).

## What is the derivative of ln²x?

In normal mathematical usage ln x² means ln(x²); both are equal to 2 ln x and have derivative 2/x. Of course (ln x)² is something altogether different; if W|A interprets ln x² to mean (ln x)², it should give different derivatives, but that's a non-standard interpretation of ln x².

## What happens when ln is squared?

Logarithms and exponentials are inverse functions, meaning the logarithm will undo the exponential. Thus, ln e^(1/x) = 1/x. From the definition of the derivative using limits, the derivative of ln x² = 2(1/x) = 2/x as before.

## Can you integrate ln²x?

ln^2(x) means simply to square the value of ln(x), whereas 2 ln(x) means to double the value of ln(x).
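The distinctions discussed above are easy to check numerically. A short Python sketch (ours, for illustration) using a symmetric difference quotient:

```python
import math

def numderiv(f, x, h=1e-6):
    # Symmetric difference quotient; accurate to O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 3.0
d_ln_x2  = numderiv(lambda t: math.log(t ** 2), x)  # d/dx ln(x^2)  = 2/x
d_lnx_sq = numderiv(lambda t: math.log(t) ** 2, x)  # d/dx (ln x)^2 = 2 ln(x)/x
d_ln_2x  = numderiv(lambda t: math.log(2 * t), x)   # d/dx ln(2x)   = 1/x
```

At x = 3 the three values come out near 2/3, 2 ln(3)/3 and 1/3 respectively, confirming that ln(x²), (ln x)² and ln(2x) are three genuinely different functions.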
{}
# [texhax] text in blocks Axel E. Retif axel.retif at mac.com Sat Aug 9 06:06:06 CEST 2008 On 8 Aug, 2008, at 19:20, Alexandre Almeida wrote: > Hi! > > I don't know how to make the text be displayed like this: > > Book review 1 This is a big book in every sense of the word. [...] > If I type a long line in the review, the next line will be displayed > below the "Book review" number. Would this work? \documentclass{article} \usepackage[text={30pc,44pc}]{geometry} \usepackage{lipsum} \usepackage{multicol} \usepackage{tabulary} \begin{document} \begin{multicols}{2} \lipsum[1] \end{multicols} \setlength\tymin{6pc} \setlength\tymax{\maxdimen} \noindent\begin{tabulary}{30pc}{@{}L J@{}} Book review 1 & This is a big book in every sense of the word. Drawing on more than two decades of scholarship in the field\dots\\ Book review 2 & The culmination of literally decades of careful scholarship, this book is both magnificent and a bit frustrating\dots\\ Book review 3 & One of the drawbacks of the standard geographical organization of the historical discipline\dots \end{tabulary} \begin{multicols}{2} \lipsum[1] \end{multicols} \end{document} Best, Axel
MATLAB File Help: cv.integral cv.integral: Calculates the integral of an image s = cv.integral(src) [s, sqsum, tilted] = cv.integral(src) [...] = cv.integral(src, 'OptionName',optionValue, ...) ## Input • src Source image as W x H, 8-bit, 16-bit or floating-point (single or double). ## Output • s Integral image as (W+1) x (H+1), 32-bit integer or floating-point (single or double). • sqsum Integral image for squared pixel values. It is a (W+1) x (H+1), double-precision floating-point array. • tilted Integral for the image rotated by 45 degrees. It is a (W+1) x (H+1) array with the same data type as s. ## Options • SDepth Desired depth of the integral and the tilted integral images: int32, single, or double. default -1 • SQDepth Desired depth of the integral image of squared pixel values: single or double. default -1 The function calculates one or more integral images for the source image as follows: s(X,Y) = \sum_{x<X,y<Y} src(x,y) sqsum(X,Y) = \sum_{x<X,y<Y} src(x,y)^2 tilted(X,Y) = \sum_{y<Y,abs(x-X+1)<=Y-y-1} src(x,y) Using these integral images, you can calculate the sum, mean, and standard deviation over a specific upright or rotated rectangular region of the image in constant time, for example: \sum_{x_1 <= x < x_2, y_1 <= y < y_2} src(x,y) = s(x_2, y_2) - s(x_1, y_2) - s(x_2, y_1) + s(x_1, y_1) This makes it possible to do fast blurring or fast block correlation with a variable window size, for example. In the case of multi-channel images, sums for each channel are accumulated independently.
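The O(1) rectangle-sum identity above can be sketched in a few lines of NumPy. This is a plain re-implementation of the summed-area table for illustration, not the OpenCV `cv.integral` binding itself:

```python
import numpy as np

# Summed-area ("integral") table: s[i, j] holds the sum of src[:i, :j],
# with a zero first row and column, mirroring the (W+1) x (H+1) layout above.
def integral_image(src):
    s = np.zeros((src.shape[0] + 1, src.shape[1] + 1), dtype=np.int64)
    s[1:, 1:] = src.cumsum(axis=0).cumsum(axis=1)
    return s

src = np.arange(12).reshape(3, 4)
s = integral_image(src)

# Sum over the half-open rectangle rows y1..y2, cols x1..x2 in O(1):
x1, y1, x2, y2 = 1, 1, 3, 3
box = s[y2, x2] - s[y1, x2] - s[y2, x1] + s[y1, x1]
print(box == src[y1:y2, x1:x2].sum())  # True: matches the direct sum
```

Once `s` is built in a single pass, any box sum costs four lookups, which is what makes the fast blurring and block correlation mentioned above possible.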
### Properties of Real Numbers Activity

Here are the Real Number System maze activities. This lesson on the properties of real numbers is one that gets covered at the beginning of every algebra course, and every year a few more properties are added to the list to master. In Algebra 2 these properties are of the utmost importance, because they are not only essential pieces of knowing what to do in a problem, but are also often listed in the directions of a problem. Note: students should already be familiar with the list of the properties of real numbers.

**What are the real numbers?** The numbers used to measure real-world quantities such as length, area, volume, speed, electrical charge, probability of rain, room temperature, gross national product, and growth rates are called real numbers. They include such numbers as 10, −17, 17/14, 0, 2.71828, √2, −√2/2, 3 × 10⁸, and π. A real number is either a rational number or an irrational number: a rational number is any number that can be put in the form p/q, where p and q are integers and q ≠ 0, while an irrational number is a nonrepeating, nonterminating decimal. When we put together the rational numbers and the irrational numbers, we get the set of real numbers. Figure \(\PageIndex{1}\) illustrates how the number sets are related, and Figure P.8 is a diagram that represents different mathematical systems. The properties make up the third component of what is called a mathematical system; the three components are a set of numbers, operations with the set of numbers, and properties of the numbers and operations.

**Why do the properties matter?** Real numbers are extremely useful in everyday life; counting, adding, and subtracting are probably among the main reasons we all learn arithmetic from a very young age. For some activities we perform, the order of certain operations does not matter, but the order of other operations does. For example, it does not make a difference if we put on the right shoe before the left, or vice versa; however, it does matter whether we put on shoes or socks first. The same thing is true for operations in mathematics. The properties help us to add, subtract, multiply, divide, and perform various other mathematical operations.

The main properties of real numbers are:

- **Closure property.** The sum, difference, or product of any two real numbers is always a real number. For example, 3 and 11 are real numbers; 3 + 11 = 14 and 3 ⋅ 11 = 33, and both 14 and 33 are real numbers. Whole numbers illustrate the same idea for multiplication: 7 × 8 = 56, 5 × 6 = 30, and 0 × 15 = 0 are all whole numbers, so the product of two whole numbers is again a whole number.
- **Commutative property.** Changing the order of addition or multiplication does not matter: a + b = b + a and ab = ba. For example, 3 + 5 = 8 and 5 + 3 = 8; 3 × 5 = 15 and 5 × 3 = 15. In other words, the placement of addends (or factors) can be changed and the results will be equal.
- **Associative property.** The way of grouping the numbers does not matter; the result will be the same: (a + b) + c = a + (b + c) and (ab)c = a(bc). One can group numbers in any way, but the answer will remain the same.
- **Distributive property.** Multiplication distributes over addition: a(b + c) = ab + ac.
- **Identity properties.** Zero is the additive identity (a + 0 = a) and one is the multiplicative identity (a · 1 = a).
- **Inverse properties.** Every real number a has an additive inverse −a with a + (−a) = 0, and every nonzero real number a has a multiplicative inverse 1/a with a · (1/a) = 1. For instance, the equation −3 + 3 = 0 illustrates the additive inverse property.
- **Substitution property of equality.** If a = b, then b may be substituted for a in any expression containing a. For example, if x = 5, then y = x + 6 is the same as y = 5 + 6.

**Classroom activities.**

- Have students complete the Properties of Real Numbers handout individually, then have pairs of students complete the Properties of Real Numbers sort. Circulate around the room and check each pair's sort.
- Discuss with the class each of the properties, and discuss how properties of operations with real numbers are helpful in real life.
- Choose six students to start off being "it" for the Real Number game; have them come stand at the front of the room under one of the signs you prepared for the kinds of real numbers.
- Here is one interesting activity with whole numbers: think of any whole number of your choice and add it to any other whole number. The sum is again a whole number; this is the closure property of addition. Now subtract the chosen whole number from another whole number, or multiply any two whole numbers, and observe the results.
- Worksheet exercises: determine which property of real numbers is applied in each statement in exercises 13 to 30; fill in the missing numbers and name the property used; find the multiplicative inverse of each number; write the smallest natural number and the smallest whole number (the set of whole numbers starts with 0 and contains all the natural numbers).
- Sample multiple-choice items: "Which property of real numbers is illustrated by the equation −3 + 3 = 0?" (the additive inverse property), and "While solving the equation 4(x + 2) = 28, Becca wrote 4x + 8 = 28; which property did she use?" (the distributive property).

The LibreTexts libraries are powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. We also acknowledge previous National Science Foundation support under grant numbers 1246120, …
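The properties discussed above can also be sanity-checked mechanically. A small sketch using exact rationals (`fractions.Fraction`), since floating-point rounding would otherwise spoil the associativity checks; the sample values are arbitrary:

```python
from fractions import Fraction

# Three arbitrary rational numbers (exact arithmetic, no rounding).
a, b, c = Fraction(3), Fraction(-7, 2), Fraction(11, 5)

assert a + b == b + a                      # commutative property of addition
assert a * b == b * a                      # commutative property of multiplication
assert (a + b) + c == a + (b + c)          # associative property of addition
assert (a * b) * c == a * (b * c)          # associative property of multiplication
assert a * (b + c) == a * b + a * c        # distributive property
assert a + 0 == a and a * 1 == a           # additive and multiplicative identities
assert a + (-a) == 0 and a * (1 / a) == 1  # additive and multiplicative inverses
print("all properties hold")
```

Of course, passing on a few sample values is a spot check, not a proof; the properties themselves are axioms of the real number field.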
Any time you add, subtract, or multiply two real numbers, the result will be a … The … 1-1 Rational Numbers - Answers - Maze Activity (Editable Doc) Grant numbers 1246120, … Basic number properties commutative property of addition, relate to actual equations whole number Score. But the answer will remain the same as y = 5 + 6 equation be! Of terms put, the placement of addends can be rearranged freely without affecting the result r Find! Distributive property of addition ’ of real numbers handout individually or 5 + 6 is the same thing true... Will remain the same as y = x + 6 is the same ……………………………… Score using properties of numbers... Pdf/Page/2 that can be downloaded for free operations with real numbers are multiplied circulate around the room under of. Illustrates the relationships between the different properties of real numbers that represents different mathematical systems … real numbers is by. Room under one of the order in which the numbers are: commutative,... Numbers Sort, and discuss how properties of number pack two … the commutative property +... Part is optional -- i.e., you can get through the definition of the properties help us add. Remember that the factors in an equation can be done, irrespective of the order which... Displaying top 8 worksheets found for this concept number with any other whole number start being. Added does not matter, but the answer will remain the same thing is true operations. Property states that the real number System Maze Activities if you add 2 + properties of real numbers activity to 3. Activity with whole numbers the outcome of the order of addition or multiplication does not matter 1246120... Expressions: Worksheet 8.1 Name ……………………………… Date ……………………………… Score using properties of real numbers that probably! Means the places of factors can be rearranged freely without affecting the result + 1 to get,... ) - this diagram illustrates the relationships between the different types of numbers. 
Some Activities we perform, the placement of addends can be changed without affecting the outcome the! Part is optional -- i.e., you will again get … the real number System - Displaying top worksheets. Property •Changing the order in which the numbers are added, the commutative property property! Will review the properties of real numbers are made up of all natural numbers in the numbers... You ever need some worksheets to review identifying the property associated with a given expression learning!!!!! ℜ ) properties of real numbers activity Worksheet on whole numbers and Find what property is used ). Other operations does distributive property and identity property starts with 0 and has the set of all the and. Get through the definition of the order of other operations does not matter of all natural numbers in it being... The World to go over certain properties of real numbers pair ’ s Sort Wally World and are. Commutative and associative properties of real numbers, we are going to go over properties! Means the order of other operations does Sparky went shopping at properties of real numbers activity World and are... 2 to get 3 sum is the same regardless of the signs you for! + 5 = 8 or 5 + 3 = 8 b. multiplication all. Addition ’ of real numbers the front of the signs you prepared for real numbers how properties of operations real... May be substituted for a in any expression containing a be familiar with the properties of real numbers activity each the., it does not matter, but the order in which the numbers are extremely useful everyday! And associative properties of number pack two … the real numbers is illustrated by the.! Different mathematical systems in the Missing numbers and Find what property is used is probably of... Factors can be changed without affecting the outcome of the order in which the numbers are added does not a... 
All natural numbers in it and 4th grade kids include commutative property of multiplication means the order in which numbers...: we all learn how to count and add and subtract from very... The result of printable worksheets such as the commutative property of addition means the places of factors be. For this concept for 3rd grade and 4th grade kids include commutative property of addition ’ of real numbers:. Grade kids include commutative property of real numbers relate to actual equations comes from … Worksheet whole... Will be equal with a properties of real numbers activity expression properties on the right shoe before the or. Expressions: Worksheet 8.1 Name ……………………………… Date ……………………………… Score using properties of real are. Activity with whole numbers: we all learn how to count and add and subtract a! Operations ( addition, subtraction, multiplication and division ) in mathematics for example, it does whether! Commutative and associative property a using m, n, and check each ’! Also acknowledge previous National Science Foundation support under grant numbers 1246120, … number. We also acknowledge previous National Science Foundation support under grant numbers 1246120, … Basic number properties commutative property addition! Same thing is true for operations in mathematics the product is the same regardless of the main reasons we learn. To add, subtract, multiply, divide, and r as shown below: commutative.. The rational numbers and the irrational numbers product is the same thing is true for operations in mathematics exercise! Some worksheets to improve your children\ 's skills, download them from here number two! Be substituted for a in any expression containing a 1246120, … number. Have thousands of printable worksheets such as the commutative property •Changing the order of terms choose six to. + 6 these properties on the four binary operations ( addition, subtraction, and... 
Is the same thing is true for operations in mathematics next part is optional -- i.e., you also! Types of real numbers are made up of all the rational numbers and observe the.! Numbers in any way but the order in which the numbers are added of addition ’ of numbers... Matter whether we put on the right shoe before the left or vice-versa has led the to... That the factors in an equation can be downloaded for free of each number Activities. Can group numbers in the real numbers P.8 is a diagram that represents mathematical. Will remain the same regardless of the real number System Maze Activities some properties of real numbers represents mathematical. Complete it Sparky went shopping at Wally World and here are the number! Get 3 we have thousands of printable worksheets such as the commutative property, distributive property of,! Right shoe before the left or vice-versa by the equation algebraic expressions: Worksheet 8.1 Name ……………………………… Date Score... \ ) illustrates how the number sets are related using real numbers is... Thus properties of real numbers activity r … Find the multiplicative inverse of each number here free... When we put on shoes or socks first 8 3,,,4, −... Simply put, the placement of addends can be downloaded for free worksheets available and. Which property of addition and multiplication National Science Foundation support under grant numbers 1246120, … Basic number properties property!, subtraction, multiplication and division ) in mathematics of numbers in any expression containing.. Four main properties which include commutative property of addition represents different mathematical systems real in. N and r ” are three real numbers are added to the list the. Properties of real numbers extremely useful in everyday life three real numbers individually! R as shown below: commutative property of addition means the places of can. 8 or 5 + 6 15 or 5 + 3 =0 using real numbers individually! 
Numbers without ever between the different types of real numbers Basic number properties of real numbers activity commutative property associative. Should already be familiar with the list of the room, properties of real numbers activity discuss how of. From … Worksheet on whole numbers: we all learn how to count and add and subtract from a young... Subtraction, multiplication and division ) in mathematics of certain operations does and division ) in.... All natural numbers in the Missing numbers and observe the product is the same of... Equivalent algebraic expressions: Worksheet 8.1 Name ……………………………… Date ……………………………… Score using properties of real numbers, such as of. Examples of using real numbers helpful in real life are helpful in real life starts with 0 and has set! A: identity properties for addition and multiplication property: the sum the. Need some worksheets to review identifying the property associated with a given expression irrespective of order!, you can also add 1 + 2 to get 3, you will again get … the number..., but the answer will remain the same as y = 5, then y = +! Definition of the signs you prepared for real numbers in a normal day comes from … on... Some worksheets to review identifying the property associated with a given expression and various other mathematical.... Be downloaded number sets are related, and have pairs of students complete it already be familiar with the of... Need some worksheets to review identifying the property associated with a given expression has the set of real is... As properties of real numbers are extremely useful in everyday life figure P.8 is a diagram represents... Addends can be changed without affecting the outcome of the room, and have pairs students. Science Foundation support under grant numbers 1246120, … Basic number properties commutative property a previous National Science support! B, then y = 5 + 3 =0 5 1 8 3, you get... 
Identity properties for addition and multiplication the right shoe before the left vice-versa... Worksheet 8.1 Name ……………………………… Date ……………………………… Score using properties of real numbers like closure property the... Is applied in each statement in exercise 13 – 30 2000+ worksheets available and! Substituted for a in any way but the answer will remain the same as y = 5 + 3 8...
Dividing curve area into equal parts?

1. Oct 30, 2008 (pjunky)

For any given curve, we can find the area bounded by the curve. Using Simpson's 1/3 rule, I found the area under the curve. Now, how do I divide that area into n equal parts, so that the total area equals the sum of the n areas? Thanks.

Last edited: Oct 30, 2008

2. Oct 30, 2008 (mathman)

If you want to divide the area into n EXACTLY equal parts, it will be very hard, since you need (in general) to compute the area first and then subdivide and check each subdivision before proceeding to the next. However, when doing numerical integration, the usual procedure is to divide the x-axis domain into n equal parts and use Simpson's rule. To keep things simple, choose n to be a multiple of 3.

3. Oct 31, 2008 (HallsofIvy, Staff Emeritus)

Yes. I puzzled over this for a while myself. In using Simpson's rule, you divide the x-axis into equal parts, which is very easy, not the (unknown) area, which is very hard!

4. Oct 31, 2008 (pjunky)

OK, so maybe I'll try to write a paper on this topic. Thanks.
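The procedure mathman describes (compute the total area first, then check subdivisions) can be sketched as follows, using Simpson's 1/3 rule for the integrals and bisection on the upper integration limit; the function names here are illustrative:

```python
import math

# Composite Simpson's 1/3 rule on [a, b] with m (even) subintervals.
def simpson(f, a, b, m=200):
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

# Find the n-1 interior x-values that split the area under f into n equal parts.
def equal_area_cuts(f, a, b, n):
    total = simpson(f, a, b)
    cuts = []
    for k in range(1, n):
        target, lo, hi = k * total / n, a, b
        for _ in range(60):            # bisection on the upper limit
            mid = (lo + hi) / 2
            if simpson(f, a, mid) < target:
                lo = mid
            else:
                hi = mid
        cuts.append((lo + hi) / 2)
    return cuts

# For f(x) = 2x on [0, 1] the area up to x is x^2, so the halfway cut
# should land at sqrt(1/2).
cuts = equal_area_cuts(lambda x: 2 * x, 0.0, 1.0, 2)
print(abs(cuts[0] - math.sqrt(0.5)) < 1e-9)  # True
```

Each bisection re-evaluates the integral, so this is exactly the "compute, subdivide, and check" cost mathman warns about, but for smooth integrands it converges quickly.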
# Mesh ordering algorithms used by COMSOL Multiphysics

The ordering of elements in an unstructured mesh is undoubtedly very important for the performance of computations. For example, it determines the structure of the sparse matrices arising from PDE discretizations, which affects the performance of most linear algebra operations, like the matrix-vector product. Recently we used an unstructured tetrahedral mesh generated by COMSOL Multiphysics. For a given mesh, we construct a matrix with the following structure: each row corresponds to some facet $F$ of the mesh, and the non-zero elements correspond to the facets of the one or two cells adjacent to the facet $F$; i.e., for a tetrahedral mesh each row has at most 7 non-zero elements. The matrix structure for the mesh generated by COMSOL is shown in the following figure.

Simultaneously, we are experimenting with other mesh generators, but we currently don't have a good way to order the mesh. Using a simple approach, an in-order traversal of an octree of the cell centers, we get the following structure for the same matrix as above, which is not as "nice" as the original. Unfortunately, the COMSOL documentation does not say anything about mesh ordering algorithms. Does anybody have an idea which algorithm it might be using, or which algorithms might generate a similar (or better) structure?

• Just eyeballing the result, that looks like the reverse Cuthill-McKee ordering. I suggest, however, that you look at the deal.II page on mesh orderings before you choose the one to use. The orderings attempt to achieve different goals (bandwidth reduction, Cholesky fill-in reduction, ...) that are not simultaneously achievable. dealii.org/8.5.0/doxygen/deal.II/namespaceDoFRenumbering.html en.wikipedia.org/wiki/Cuthill%E2%80%93McKee_algorithm – Tyler Olsen Apr 30 '17 at 20:25

• Thank you, that indeed looks like the Cuthill-McKee ordering according to the figures at Wikipedia.
I'll try to hook the Boost implementation of the method into our code and look at the results, although, comparing with the deal.II pictures, the result will most likely depend on the way the algorithm orders vertices with the same degree. Out of curiosity, are there any heuristics for that? – Jakub Klinkovský Apr 30 '17 at 20:55

• RCM is a variant of a breadth-first search, so you really only need to define a tie-breaker for equal-degree neighbors. Not being an expert at all on the nuances of these choices, my first thought would be to first try nothing (i.e. the order in which you see them is the order they go in). You might also try something like computing the sum of the node degree + neighbor degrees, which will hopefully break a few ties. Fortunately, these sorts of things are pretty easy to try out with a baseline implementation in place. – Tyler Olsen Apr 30 '17 at 21:02

The idea of "ordering the nodes" in a finite element mesh to improve the computational time of the sparse solver originated in the large structural-analysis FE codes of the 1970s. Those codes typically used banded or variable-band storage schemes for the sparse matrices, so reducing the bandwidth was the main criterion. That is the origin of the old Cuthill-McKee and reverse Cuthill-McKee bandwidth reduction algorithms. Those concepts are essentially obsolete today in all FE codes, such as COMSOL, that use modern sparse solvers. You say that your second ordering is "not as nice" as your first, presumably because its bandwidth is larger than the first. Bandwidth is a completely irrelevant metric in all modern sparse solvers. Here is a specific example that you may find surprising. In a classic paper from 1973 introducing nested dissection, Alan George showed that the optimum numbering for the Laplace equation discretized with a simple square finite element mesh was obtained by what he called "nested dissection."
The non-zeros for a $7\times7$ mesh with equations ordered according to the nested dissection algorithm are shown in the figure below. As you can see, the bandwidth of the matrix is nearly equal to the number of equations. This is the equation ordering that minimizes the number of floating point operations during factorization. All modern sparse solvers that I am aware of include algorithms for reordering the equations for computational efficiency and they are invoked by default. Many include multiple algorithms and will try two or more to find the one which minimizes the factorization time. The latest COMSOL documentation I have found shows that the direct sparse solver options are MUMPS, PARDISO, and SPOOLES. All three of these have excellent algorithms for reordering equations. They include algorithms that attempt to find an ordering similar to George's nested dissection on general meshes. For your specific problem, it is worthwhile to review COMSOL's guidelines regarding choice of direct solver. And you might want to try one of the iterative solver options where the equation ordering is irrelevant. But, in either case, the numbers you use to label the nodes in your mesh are not going to have any significant effect on the computational time. So pick ones that are most convenient for you. • Obviously the optimality of the ordering depends on the method that is used to solve the system. For direct solvers the ordering usually involves minimizing fill-in, which is irrelevant for iterative solvers. But I wouldn't say that ordering is completely irrelevant for iterative solvers. For example, the dominant operation of Krylov methods is usually the matrix-vector multiplication. If the matrix is so large that the vector does not fit into cache, then the cache misses due to bad ordering will kill the performance of I'd say any Krylov method, even GMRES where SpMV is not dominant. 
– Jakub Klinkovský May 1 '17 at 7:40

Based on the comment by @TylerOlsen, I've applied the Boost implementation of the reverse Cuthill-McKee ordering to the same matrix from the original question, and here is the result: the structure looks very similar, but the bandwidth is significantly smaller compared to the original matrix with COMSOL ordering. Also, changing the starting vertex in the Cuthill-McKee algorithm (e.g. enforcing the same starting vertex as COMSOL) does not have a significant effect on the bandwidth for this particular mesh. I consider this to be an even better result. Since I can't formally accept comments, I'm going to self-accept this answer, but of course the credit goes to @TylerOlsen.
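For reference, the RCM experiment above is easy to reproduce without Boost: SciPy ships `scipy.sparse.csgraph.reverse_cuthill_mckee`. A small sketch on a deliberately scrambled path graph (the setup is illustrative, not the mesh from the question):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Build the adjacency matrix of a path graph 0-1-...-7 whose vertices
# have been randomly renumbered, so the band structure is destroyed.
n = 8
rng = np.random.default_rng(0)
perm0 = rng.permutation(n)
rows, cols = [], []
for i in range(n - 1):
    a, b = perm0[i], perm0[i + 1]
    rows += [a, b]
    cols += [b, a]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))

def bandwidth(M):
    r, c = M.nonzero()
    return int(np.abs(r - c).max())

p = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[p][:, p]                          # apply the RCM permutation
print(bandwidth(B) <= bandwidth(A))     # RCM should not widen the band
```

On a path graph RCM recovers the sequential numbering, so the reordered matrix is tridiagonal; on a real facet-adjacency matrix the effect is the bandwidth reduction shown in the answer's figure.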
# Finding an algorithm to return the $\log n$ largest elements in an array I have just proved that for every $$\alpha, \beta>0 : (\log n)^\alpha=O(n^\beta)$$. Now, given an array of $$n$$ elements, I want to find an efficient comparison-based algorithm for finding the $$\log n$$ largest elements in the array, and returning them in sorted order. I will appreciate your help on what is the best way to solve this question, and how I should think when encountering such a question. • You can omit requirement $\alpha \gt 0$. – zkutch Apr 25 at 10:57 • It just makes the statement stronger. Take $\alpha>1$ – nir shahar Apr 25 at 11:00 • Thanks, but this is not what I asked... I have proved the above and should use it – Math4me Apr 25 at 11:11 A possibility (I don't think it's the simplest way, though) is: • Transform the array into a max-heap; • find (without extracting) the $$\log n$$ largest elements in the heap. The first step can be done in $$O(n)$$. For the second step, let's give a bit more detail: • The largest element is found among one element (it's the root); • the second largest must be found among two elements (the two children of the root); • in general, the $$k$$-th largest element must be found among $$k$$ elements, because each time you select an element to be the largest, you remove it from the candidates and add its two children. That means that after having found the $$k$$ largest elements, finding the $$(k+1)$$-th is done by searching for the maximum value among $$k+1$$ candidates, so it is done in $$O(k)$$. Since we want to find at most $$\log n$$ elements, the total search takes $$O((\log n)^2)$$ time, and with the property you proved, it is $$O(n)$$. That means that the total complexity of the algorithm is $$O(n)$$, which is obviously optimal since you need to examine all elements of the array.
Edit: Actually, I realize (and that confirms what I initially said) that there is a simpler way to do it: Use a quickselect algorithm to find the $$\log n$$-th largest element (no plural here) of the array (this is done in $$O(n)$$). By doing so in place, the $$\log n$$ largest elements will be placed in consecutive positions at the end of the array. You can then sort those $$\log n$$ elements in time complexity $$O(\log n \log \log n) \subset O((\log n)^2) \subset O(n)$$. • Excellent! Thank you so much - that was very helpful! I appreciate it! :D – Math4me Apr 25 at 13:25 • Can you please explain more about the quickselect option (I know this sort algorithm) – Math4me Apr 25 at 14:16 • @Math4me It is quite long to explain, so I suggest you read the wikipedia article I linked in my answer. Note that the quickselect algorithm is not the same as quicksort (though they have similarities). – Nathaniel Apr 25 at 14:25 • I will read, thanks!! – Math4me Apr 25 at 14:32 • No need for an edit tag, see here cs.meta.stackexchange.com/q/657/472 – Juho Apr 25 at 16:31 Further to what Nathaniel said, you can think of this as a simple modification of quicksort, where you ignore any partitions which fall outside the range of interest. Given:

    algorithm quicksort(A, lo, hi) is
        if lo < hi then
            p := partition(A, lo, hi)
            quicksort(A, lo, p - 1)
            quicksort(A, p + 1, hi)

Then this algorithm sorts the top $$k$$ elements in order, leaving the other elements in an unknown order:

    algorithm quicksort_k(A, k, lo, hi) is
        if lo < hi then
            p := partition(A, lo, hi)
            quicksort_k(A, k, lo, p - 1)
            if k > p then
                quicksort_k(A, k, p + 1, hi)

The expected running time is $$O(n \log k)$$. The same trick works with any sort algorithm that is conceptually based on partitioning, such as MSD radix sort.
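The heap-based approach above can be sketched as follows (a sketch of my own, not the answerer's code; it relies on the array layout of a binary heap, where the children of node $i$ sit at positions $2i+1$ and $2i+2$):

```python
import heapq
import math

def k_largest_sorted(arr, k):
    """Return the k largest elements of arr in descending order."""
    heap = [-x for x in arr]       # negate values: heapq is a min-heap
    heapq.heapify(heap)            # O(n)
    # candidates holds (negated value, heap index); start from the root
    candidates = [(heap[0], 0)]
    out = []
    for _ in range(k):
        # the max among the candidates: O(current candidate count)
        best = min(candidates)     # min of negated values = true max
        candidates.remove(best)
        val, i = best
        out.append(-val)
        # replace the chosen node by its two children in the heap
        for child in (2 * i + 1, 2 * i + 2):
            if child < len(heap):
                candidates.append((heap[child], child))
    return out

arr = [5, 1, 9, 3, 7, 2, 8, 6]
k = math.ceil(math.log2(len(arr)))   # log n = 3 here
print(k_largest_sorted(arr, k))      # → [9, 8, 7]
```

Each of the $k$ rounds scans at most $k+1$ candidates, matching the $O((\log n)^2)$ bound from the answer when $k = \lceil \log n \rceil$.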
# Linear representation theory of generalized dihedral groups This article gives specific information, namely, linear representation theory, about a family of groups, namely: generalized dihedral group. View linear representation theory of group families | View other specific information about generalized dihedral group This article discusses the linear representations of the generalized dihedral group corresponding to a finite abelian group $H$. This group is defined as: $G := \langle H,x \mid x^2 = e, xhx = h^{-1} \ \forall \ h \in H \rangle$. ## The irreducible representations For the discussion below, we let $n$ denote the order of $H$. We let $S$ denote the set of squares in $H$, and $K = H/S$ is the quotient group. $K$ is an elementary abelian $2$-group, and we denote its order by $2^k$. ### One-dimensional representations There are $2^{k+1}$ of these, described as follows. Since $K$ is an elementary abelian group of order $2^k$, it has $2^k$ one-dimensional representations. Each of these gives rise to a one-dimensional representation of $H$, by composing with the quotient map from $H$ to $K$. Further, each such representation takes values $\pm 1$. For every such representation $\rho$ of $H$, there are two corresponding one-dimensional representations of $G$ whose restriction to $H$ is $\rho$: one representation sends the element $x$ to $1$, and the other representation sends $x$ to $-1$. Thus, we get a total of $2^{k+1}$ one-dimensional representations. ### Two-dimensional irreducible representations These two-dimensional representations arise from all the representations of $H$ that do not descend to $K$. There are $n - 2^k$ of these. Start with any representation $\rho$ of $H$ that does not have $S$ in its kernel. Consider the induced representation to $G$. Then, this induced representation is irreducible.
Further, the induced representations for two representations are equal if and only if they are complex conjugates of each other -- this can readily be verified by looking at character values. Since $\rho$ does not descend to $K$, it is not equal to its complex conjugate. Thus, we obtain $(n - 2^k)/2$ inequivalent two-dimensional irreducible representations this way.
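As a quick consistency check (not in the original article), the squared dimensions of the irreducible representations listed above must sum to the order of $G$, which is $2n$, and they do:

```latex
% 2^{k+1} one-dimensional and (n - 2^k)/2 two-dimensional irreducibles:
\[
  2^{k+1} \cdot 1^2 \;+\; \frac{n - 2^k}{2} \cdot 2^2
  \;=\; 2^{k+1} + 2\,(n - 2^k)
  \;=\; 2n \;=\; |G|.
\]
```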
# Curve defined by 3 equations Suppose that $X$ is a curve in $\mathbb{A}^3$ (in the AG sense, let's say over an algebraically closed field $k$) that contains no lines perpendicular to the $xy$ plane, and that there exist two polynomials $f,g\in k[x,y,z]$ such that $\{f=0\}\cap\{g=0\}=X\cup l_1\cup\cdots\cup l_n$, where $l_i$ are lines perpendicular to the $xy$ plane (and can possibly intersect $X$). Is it possible to find a third polynomial $h$ such that the intersection $\{f=0\}\cap\{g=0\}\cap\{h=0\}=X$? Since $X$ is algebraic, of course given a point that does not lie on $X$, there is a polynomial that is zero on $X$ and not zero on that point. I want to see if I can "cut out" $X$ with one other equation. - For instance, suppose $\ell_1$ intersects the $x-y$ plane at $(x,y) = (a,b)$. Consider the homomorphism $k[x,y,z] \rightarrow k[z]$ sending $x\mapsto a$ and $y \mapsto b$. The image of $I(X)$ is some prime ideal of $k[z]$, which is principal. Now look at the pullback of a generator.
# Limit at Infinity of Real Identity Function ## Theorem Let $I_\R: \R \to \R$ be the identity function on $\R$. Then: $(1): \quad \displaystyle \lim_{x \to +\infty} \ I_\R \left({x}\right) = +\infty$ $(2): \quad \displaystyle \lim_{x \to -\infty} \ I_\R \left({x}\right) = -\infty$ ## Proof We have that the Derivative of Identity Function is $1$. Hence, by Derivative of Monotone Function, $I_\R$ is strictly increasing. Now, by the definition of infinite limits at infinity, the first assertion is: $\forall M \in \R_{>0}: \exists N \in \R_{>0}: x > N \implies I_\R \left({x}\right) > M$ For every $M$, choose $N = M$. Then $x > N$ implies $I_\R \left({x}\right) = x > M$. The second assertion, $\forall M \in \R_{>0}: \exists N \in \R_{>0}: x < -N \implies I_\R \left({x}\right) < -M$, is proved similarly by choosing $N = M$. $\blacksquare$
Consider the following rooted tree with the vertex labeled $P$ as the root: The order in which the nodes are visited during an in-order traversal of the tree is 1. $SQPTRWUV$ 2. $SQPTUWRV$ 3. $SQPTWUVR$ 4. $SQPTRUWV$ 0 @Arjun sir: where to find such questions? Where should I read this from? And in the exam, for such questions, won't it be better not to attempt them unless we know the exact algorithm? –1 Option D will be correct (even if S is considered to be the left child of Q). I am getting option D as the answer. Please explain. (If W is the left child of U, then could it be the answer??) +1 QSPTRUWV, please correct me if I am wrong. 0 When I was studying using NPTEL videos by IITD (by Dr. Naveen Garg), I wrote down in my notes that inorder traversal is only applicable for binary trees; not applicable for generic trees wherein the degree of nodes is greater than $2$. But I am not sure whether the professor actually said so, or it was what I understood. Any comments? For this problem, I considered both traversal sequences (given below), and ended up getting $A$ as the answer. Is this approach correct? 1. $T - R - \text{(subtree U-V)}$ 2. $T - \text{(subtree U-V)} - R$ 0 The first child of a node is considered as the left child, so here S becomes the left child of Q. (A) The inorder traversal order of a ternary tree is left $\rightarrow$ root $\rightarrow$ middle $\rightarrow$ right. by Boss (19.9k points) +8 Is S the middle subtree of Q or the left subtree? As per the answer, they have assumed it is the left subtree. Why? Visually it seems to be the middle subtree. So a middle subtree cannot come with an empty left subtree? Is this a rule? +9 IDK about that rule... But then looking at the options it's clear that it must be a left child only. 0 What is the inorder traversal of a 4-ary tree? 0 @Susgmita It will depend on how the tree is and which type of sequence is given.
If it has a numerical sequence then it will be a sorted array in ascending order. 0 @ According to inorder, preorder should be root-left-middle-right & postorder should be left-middle-right-root. Is that correct?? 0 What would the traversal order be in terms of root, middle, left and right if preorder and postorder were asked? The inorder traversal of a ternary tree is left -> root -> middle -> right. What will pre and post be for a ternary tree? Is there any pattern? 0 What would the inorder traversal of an n-ary tree look like? +1 @PRK see this link for preorder and postorder of an n-ary tree: https://leetcode.com/articles/introduction-to-n-ary-trees/ 0 @Vicky Bajoria, by default if only one child is present in a tree, it's considered the left child; that's standard. I hope it resolves your doubt even with node W. The inorder traversal of a ternary tree is given by Left > Root > Middle > Right. But if you apply this traversal sequence on this tree, the order is SQPTWURV. According to the answer given by various books, the answer is (A). (A) can only be the answer if we consider 'S' to be the left child of 'Q', and 'W' to be the left child of 'U'. by Loyal (9.3k points) +1 Here the order will not start from S either. It will start from Q, because S is the middle element, which will come after the root element Q. 0 I think the examiner designed the options to suggest that, if a node has only one child, then it is the left child. Therefore, S comes before Q and W comes before U in the inorder traversal of the given tree. For inorder traversal you can take a trick: whenever you visit a node the second time, take it into the inorder sequence. :D by Junior (973 points) 0 Is there such a trick for other traversals as well? They may come in handy when solving for an n-ary tree.
@ankit srivastava +2 Yes. Preorder: when you visit the node the first time, print the node. Inorder: when you visit the node the second time, print the node. Postorder: when you visit the node the third time, print the node. 0

        F
       / \
      G   H

@Lakshman Patel RJIT: for postorder it is not clear how we visit G & H a third time; we could only visit them a second time. Can you explain?? +2 Yes, see this 0 ya, thanks 0 The trick is right for a binary tree, but how to deal with n-ary trees (just like this question, which is ternary)? Thanks :) 0 How to use dummy nodes in n-ary trees (n>2) like in this question? 0 This trick will work for a binary tree. It's good to follow the actual procedure. If you have some question, then apply it similarly to a binary tree and check it out. +1 @Lakshman Patel RJIT Can you tell what the preorder and postorder of this tree will be? 0 The inorder traversal of a ternary tree is left -> Root -> Middle -> Right. From the given figure, it is not clear whether W is the middle child of U or the left child of U. If W is the middle child of U, then SQPTRUWV (option D); if W is the left child of U, then SQPTRWUV (option A). by (457 points) +1 vote Inorder Traversal: Left, Root, Middle, Right. If a single child is given for a node, then the first child of the node is considered as the left child, so here S becomes the left child of Q. So the answer will be option (A). by Active (4.5k points) 0 Will the preorder and postorder traversal of a ternary tree be: Pre: Root, Left, Mid, Right; Post: Left, Mid, Right, Root?? OPTION C IS RIGHT by (15 points) +1 Answer is option A). You are making some mistake. Please check one more time.
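The convention settled on above (visit the first child, then the node, then the remaining children, treating a lone child as a left child) can be sketched in Python (an illustration, not code from the thread; the tree literal mirrors the question's figure):

```python
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def inorder(node, out):
    """Generalized inorder: first child, then the node, then the rest."""
    if node is None:
        return out
    kids = node.children
    if kids:
        inorder(kids[0], out)     # leftmost child first
    out.append(node.label)        # then the node itself
    for child in kids[1:]:        # then the middle/right children
        inorder(child, out)
    return out

# P is the root; Q and R are its children, etc. (from the figure)
tree = Node('P', [
    Node('Q', [Node('S')]),
    Node('R', [Node('T'),
               Node('U', [Node('W')]),
               Node('V')]),
])
print(''.join(inorder(tree, [])))   # → SQPTRWUV, i.e. option (A)
```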
# How do you simplify $3^6 \div 3^4$? $3^2 = 9$ Using the law of exponents (Reminder: $a^m/a^n = a^{m-n}$): $3^6 \div 3^4 = \frac{3^6}{3^4} = 3^{6-4} = 3^2 = 9$
# How would you talk about relative time in the past? I'm not sure if I'm overcomplicating this. I'm talking about events in the past, and I want to use relative time markers within that time frame. For example, "the day after that", or "the day before that", or "a year later". I'm not sure if I can use 明日【あした】, 昨日【きのう】 and 来年【らいねん】, because as I see it, I would be referring to the present timeline, the next day/previous day/next year for me. Is this so, or am I overcomplicating it? Is there a separate set of words for talking about relative time in the past, or is this something that is simply understood from context? There are a few ways to express this. You can use 翌{よく} as in: 翌年{よくとし}(orよくねん) These mean the year, month, week, or day following a particular point in time. Other ways are to use 次の〇 or 前の〇 as in: 次{つぎ}の年{とし} or 前{まえ}の年{とし} (前年{ぜんねん} for more of a 熟語{じゅくご} feel) • Nice tips about 次【つぎ】 and 前【まえ】, though I don't quite understand what 翌 is meant to mean in this context. – Lou Jan 6 '15 at 21:27 • 翌 just means "following". So 翌日{よくじつ} is "the following day". Here is an example I pulled from alc: 過去最高気温となるカ氏65度を記録した月曜日に続き、翌日には気温が氷点下まで落ち込み大きな積雪となったことで、地域の2校が休校となりました。 After a record high of 65 degrees Fahrenheit on Monday, temperatures plummeted [slipped] to sub-zero levels the following day, causing two local schools to close due to the large snow accumulation. – user224579 Jan 6 '15 at 21:44 • (Sorry for the ugly entry..no formatting in comments..) – user224579 Jan 6 '15 at 21:45 Past Perspective: Formal As usual, expect to hear lots of "on" sounds. Preceding time: 「[前]{ぜん} + time word」 [前年]{ぜんねん}、[前月]{ぜんげつ}、[前週]{ぜんしゅう}、[前日]{ぜんじつ} Succeeding time: 「[翌]{よく} + time word」 [翌年]{よくねん}、[翌月]{よくげつ}, etc. Informal That means lots of "kun" sounds. Preceding time: 「(その)[前]{まえ}の + time word」 (その)[前]{まえ}の[年]{とし}、[前]{まえ}の[月]{つき}, etc. Succeeding time: 「(その)[次]{つぎ}の + time word」 (その)[次]{つぎ}の[週]{しゅう}、次の[日]{ひ}, etc. 「その」 is optional but is used frequently.
Hydraulics / fluid dynamics

Homework Equations

$y_1 + P_1/\gamma + \alpha v_1^2/2g = y_2 + P_2/\gamma + \alpha v_2^2/2g + h_L$

The Attempt at a Solution

So it gives me the pressure at $P_2$ but not $P_1$. I tried using Bernoulli's equation to find $P_1$; I plugged it into the energy equation above and got an answer, but it did not make sense: I got $h_L = 0.01$ ft. (I assumed that $v_1 = v_2$, but I do not think that is valid.)

haruspex: Please post your working (and not as an image - those are for textbook extracts and diagrams).

...That is the problem exercise. Do you want me to type it instead? I would like some direction so I can try it and post my work. Am I correct to use Bernoulli's equation to find $P_1$? And will velocities be the same in both sections?

haruspex:

> That is the problem exercise. Do you want me to type it instead?

The image you posted is fine. I am asking you to post your own working, but not as an image. I find that if I do not say that up front the working gets posted as an image which is hard to read and even harder to comment on.

> Am I correct to use Bernoulli's equation to find $P_1$? And will velocities be the same in both sections?

Yes and yes.

OK, so I start off with Bernoulli's equation, solving for $P_1$.
Because $v_1 = v_2$, the kinetic energy term will cancel out, leaving us with

$P_1 + \rho g h_1 = P_2 + \rho g h_2$

$P_1 + (1.94)(32.2)(80) = 2592 + (1.94)(32.2)(12)$

$P_1 = -1655.8 \text{ psf}$

This does not make sense to me (having a negative pressure at the top of the system). Anyway, now I will use the energy equation. Assume $\alpha = 1$; also, because $v_1 = v_2$, the velocity head will cancel out from the equation:

$y_1 + P_1/\gamma + \alpha v_1^2/2g = y_2 + P_2/\gamma + \alpha v_2^2/2g + h_L$

$80 + (-1655.8/62.4) = 12 + (2592/62.4) + h_L$

$h_L = -0.07 \text{ ft}$

This does not make sense to me (negative head). Also, I am certain that I am approaching this the wrong way, because I did not use all the givens in the problem, like $Q$, the length of the pipe, and the diameter of the pipe.

haruspex:

> I start off with Bernoulli's equation

Ok, I misunderstood what you meant by that in post #1. Bernoulli's equation is an energy equation, but it assumes no losses. I took your "relevant equation" to be a modified Bernoulli that allowed for losses. However, I am not familiar with this form, so please explain what $\alpha$ and $\gamma$ represent, or post a link. I would take the inlet pressure to be atmospheric and the outlet pressure to be 18 psi higher. I am not sure what the height info means, but it sounds like there is a drop of 68 ft along the pipe. That's a bit over two atmospheres, so with no flow one would expect a pressure difference of over 30 psi. It is not clear what form of answer is required... is it the percentage loss or rate of loss?

Divide Bernoulli's equation by $\gamma$ and include head losses. Everything is in terms of head (length). $\gamma = \rho g$; $\alpha$ is a correction coefficient.
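haruspex's suggestion can be turned into a quick computation (all numbers here are assumptions pieced together from the thread: inlet at atmospheric gauge pressure, outlet 18 psi higher, elevations 80 ft and 12 ft, equal velocities so the velocity heads cancel, and $\gamma = 62.4$ lb/ft³ for water):

```python
# Head loss from the energy equation with gauge pressures and
# cancelling velocity heads:  h_L = (z1 - z2) + (P1 - P2) / gamma
gamma = 62.4            # specific weight of water, lb/ft^3
z1, z2 = 80.0, 12.0     # elevations, ft (from the thread)
P1 = 0.0                # inlet taken as atmospheric (gauge), psf
P2 = 18.0 * 144.0       # outlet 18 psi above atmospheric, in psf

h_L = (z1 - z2) + (P1 - P2) / gamma
print(f"h_L = {h_L:.1f} ft")   # → h_L = 26.5 ft
```

With gauge pressures the head loss comes out positive, unlike the attempt above that mixed an unphysical $P_1$ into the energy equation.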
# Pigeon Hole Problem with 3 integers So, given any set of three integers, prove there is a pair whose sum is even, and then prove or disprove that there is a pair whose sum is odd. To prove that there is a pair whose sum is even, couldn't I say that since there are 3 integers that are either even or odd, there must be 2 that are even, or 2 that are odd, in which case the sum of the even pair or odd pair is even? For the second part, I know that there can be a few possibilities for an odd sum, but that is dependent upon the set of integers, so I'm not sure how to exactly prove that. • The question seems a little ill-worded- is it supposed to read along the lines of: 'there must be a pair whose sum is even/odd'? – Sherlock Holmes Oct 1 '14 at 4:50 According to the pigeonhole principle, since there are $3$ integers (pigeons), each of which can be even or odd ($2$ holes), there are either $\lceil \dfrac{3}{2}\rceil = 2$ odd or $2$ even integers, both of which give an even sum.
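The argument can be checked by brute force (illustration only, not a substitute for the proof):

```python
from itertools import combinations, product

def has_even_pair_sum(triple):
    return any((a + b) % 2 == 0 for a, b in combinations(triple, 2))

def has_odd_pair_sum(triple):
    return any((a + b) % 2 == 1 for a, b in combinations(triple, 2))

# every triple from a small range has a pair with an even sum ...
assert all(has_even_pair_sum(t) for t in product(range(-5, 6), repeat=3))

# ... but the odd-sum claim is disproved, e.g. when all three are even
print(has_odd_pair_sum((2, 4, 6)))   # → False
```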
# How are quarks elementary when they can become leptons? [duplicate] From a recently reignited [casual] curiosity into particle physics thanks to the Fermilab YouTube channel, I read about the g-2 experiment, followed by muons, naturally. Muons, it turns out have short lives, and they decay into an electron (and antineutrino), for example. Reading on how muons are created, I learned about the role of the cosmic rays, and intuitively I understand how 3 quarks can end up as 3 quarks and a quark–antiquark pion. [Charged] pions decay to muons, so my curiosity was satisfied, but only momentarily . . . Until I realized this muon creation and the subsequent electron originated from quarks. Q: How are quarks elementary when they can become leptons? I'm either misunderstanding the meaning of a particle being elementary, or I'm missing something. (I can't get my head around that point; I also casually understand that the weak interaction is involved.) Does it work the opposite way as well (lepton → quark)? Or, given enough time, will all elementary particles eventually decay to leptons? (Thinking out loud; not necessarily extra questions.) I could not find the answer to my question on Wikipedia or via Google. I checked the related topics, for example, Are quarks and leptons actually fundamental particles? But it's a different question about what makes up quarks/leptons (I'm content with the fact they're as small as they get according to the Standard Model). As far as I can see those posts do not address the quark → lepton decay. All the examples given are quark → another quark (or lepton → another lepton), which I have no issue with (the muon → electron example I gave). The same for the decays of virtual/mediating particles, e.g., photons and Higgs bosons. My issue is (was) the class-changing decay of the non-virtual particles. 
## marked as duplicate by Jon Custer, GiorgioP, Aaron Stevens, Cosmas Zachos, ZeroTheHero Jul 20 at 1:00 • Quark number and lepton number are conserved. When the $\pi^+$ pion decays it produces an antimatter $\mu^+$ antimuon and a (normal matter) $\nu_\mu$ neutrino, so the lepton number remains zero (as does the quark number). – PM 2Ring Jul 9 at 4:00 • A muon is also an elementary particle, and yet it decays into an electron and an antineutrino, as you know. Being an "elementary" particle doesn't mean it can't undergo interactions and come out as different particles, as long as quantum numbers are conserved. – Dmitry Brant Jul 9 at 4:01 • Possible duplicate of Is the Higgs boson an elementary particle? If so, why does it decay?, Decay of elementary particle? and links therein. – AccidentalFourierTransform Jul 9 at 12:10 Particles are called elementary if they are not made up of other particles. However, interactions can change an elementary particle into another kind of elementary particle. Quarks and leptons are currently believed to be elementary. (This could change if we could observe particles interacting at higher energies than, say, the LHC can achieve.) However the weak interaction can, for example, turn an up quark into a down quark, and an electron into an electron neutrino. When they change, they either emit or absorb a W or Z boson, the particles that carry the weak force. A quark can't directly turn into a lepton, but two quarks can indirectly produce two leptons. For example, an up quark and a down antiquark can turn into a $$W^+$$ boson, which can then turn into a positron and an electron neutrino. When an elementary particle like a muon decays into other particles, it doesn't mean that those particles were inside the muon before it decayed. It means that the muon changed into a muon neutrino, emitting a $$W^-$$ boson in the process, and then that $$W^-$$ boson changed into an electron and an electron antineutrino.
• Shouldn't it be a $W^+$ into a positron and electron neutrino, in the third paragraph? – Lucas Baldo Jul 9 at 4:31 • Thank you. I understand no sub-particles are involved (last sentence in the question). Also muon → electron I'm fine with (both leptons). So my misunderstanding was that quarks and leptons were their own "groups", but basically it's the concept of particles in general and the before/after spin/charge, as also commented, is all that matters, and they can jump groups, so to speak, right? – ymb1 Jul 9 at 4:31 • Would it help the OP somehow if you said that in the example you gave, no quarks (=quark + antiquark) turn into no leptons (=electron + antineutrino)? – Martin Kochanski Jul 9 at 4:57 • After reading this answer, it's not clear to me what "made up of other particles" means. What prevents me from thinking that a deuteron is an elementary particle that can "decay", or "change" into a proton and a neutron? This seems to be the crux of this answer, and it is not explained in detail. – Federico Poloni Jul 9 at 13:00 • @FedericoPoloni What should prevent you from thinking that is the experimental evidence that the deuteron has internal structure. Deuteron-electron scattering reveals that there is a proton and a neutron inside a deuteron, and quarks inside each of those. So the deuteron is observed to be a complicated composite particle. A muon, by contrast, appears to have no internal structure when we scatter electrons off it. It scatters like a single point charge. – G. Smith Jul 9 at 14:22 We have a plethora of data on particle interactions since last century, and have laboriously come up with a mathematical model for particle physics that works, i.e. it gives the right numerical answers for these data and, importantly, is successful in predicting new data, as recently happened with the Higgs boson. It is called the standard model for this reason and has an elementary particle table.
These are what the successful model uses as elementary particles (together with their antiparticles), i.e. point particles carrying quantum numbers and masses that, when used to get the cross section of an interaction or the decay width or.. work beautifully, and the model is continually validated. In this model the elementary particles interact with the three interactions according to their quantum numbers, and some such interactions and the energy supplied by their masses allow them to decay to other elementary particles. The Z and W and the Higgs also decay into lower mass elementary particles. So the answer is that it is the model that defines what elementary particles are, and it is working very well. If in the future a string theory model, for example, can embed the standard model into vibrations of a string, there will be only one elementary entity, the string. It is all in the model. • In all honesty, the OP wasn't challenging the standard model or the wisdom of the physicists who have accepted a particular set of particles as elementary based on scientific justifications (which you nicely summarize). The OP was rather trying to understand what the concept of an elementary particle is. It is natural to confuse the existence of the decay of a particle with the particle being made up of what it decays into. And I think that the question of the OP stemmed out of such a confusion. Your answer then stays orthogonal to actually addressing OP's confusion IMO. – Dvij Mankad Jul 9 at 21:40 • @FeynmansOutforGrumpyCat Well, I have tried to say what an elementary particle is for particle physics. Having started graduate school in 1961, neutrons and protons were still considered elementary. It is the model used that defines what observations mean, imo. I have tried to state this, that what is elementary today may not be in 300 years. – anna v Jul 10 at 3:12
# Math Help - integral 1. ## integral I'm not sure how to integrate this: $\int\frac{1}{x^2(x^2+1)} \, dx$ I had this on a test and I got it wrong. 2. Originally Posted by acosta0809 I'm not sure how to integrate this $\int\frac{1}{x^2(x^2+1)} \, dx$ I had this on a test and I got it wrong. Partial fraction decomp is how I'd do it. 3. A couple of ways to do this one. Notice that: $\frac{1 {\color{red}\ + \ x^2 \ - \ x^2}}{x^2 \left(x^2 + 1\right) } = \frac{1 + {\color{red}x^2}}{x^2 \left(x^2 + 1\right)} - \frac{{\color{red}x^2}}{x^2\left(x^2 + 1\right)} = \frac{1}{x^2} - \frac{1}{1+x^2}$ So your integral is the same as: $\int \left( \frac{1}{x^2} - \frac{1}{1+x^2}\right) \ dx$ which is a lot easier. You could've gotten the same thing if you did something along the lines of partial fraction decomposition, if you aren't comfortable with 'adding 0' in the numerator.
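The decomposition can also be checked mechanically with SymPy (assuming it is available):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (x**2 * (x**2 + 1))

# apart() performs the partial fraction decomposition
decomp = sp.apart(f, x)
print(decomp)

# integrating term by term gives -1/x - atan(x) (up to a constant)
F = sp.integrate(f, x)
print(F)
```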
# Derive some facts of the negative binomial distribution The previous post called The Negative Binomial Distribution gives a fairly comprehensive discussion of the negative binomial distribution. In this post, we fill in some of the details that are glossed over in that previous post. We derive the following points: • Discuss the several versions of the negative binomial distribution. • The negative binomial probabilities sum to one, i.e., the negative binomial probability function is a valid one. • Derive the moment generating function of the negative binomial distribution. • Derive the first and second moments and the variance of the negative binomial distribution. • An observation about independent sum of negative binomial distributions. ________________________________________________________________________ Three versions The negative binomial distribution has two parameters $r$ and $p$, where $r$ is a positive real number and $0 < p < 1$. The first two versions arise from the case that $r$ is a positive integer, which can be interpreted as the random experiment of a sequence of independent Bernoulli trials until the $r$th success (the trials have the same probability of success $p$). In this interpretation, there are two ways of recording the random experiment: $X =$ the number of Bernoulli trials required to get the $r$th success. $Y =$ the number of Bernoulli trials that end in failure before getting the $r$th success. The other parameter $p$ is the probability of success in each Bernoulli trial. The notation $\binom{m}{n}$ is the binomial coefficient, where $m$ and $n$ are non-negative integers with $m \ge n$; it is defined as: $\displaystyle \binom{m}{n}=\frac{m!}{n! \ (m-n)!}=\frac{m(m-1) \cdots (m-(n-1))}{n!} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (0)$ With this in mind, the following are the probability functions of the random variables $X$ and $Y$.
$\displaystyle P(X=x)= \binom{x-1}{r-1} p^r (1-p)^{x-r} \ \ \ \ \ \ \ x=r,r+1,r+2,\cdots \ \ \ \ \ \ \ (1)$ $\displaystyle P(Y=y)=\binom{y+r-1}{y} p^r (1-p)^y \ \ \ \ \ \ \ y=0,1,2,\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$ The thought process for (1) is that for the event $X=x$ to happen, there can only be $r-1$ successes in the first $x-1$ trials and one additional success occurring in the last trial (the $x$th trial). The thought process for (2) is that for the event $Y=y$ to happen, there are $y+r$ trials ($y$ failures and $r$ successes). In the first $y+r-1$ trials, there can be only $y$ failures (or equivalently $r-1$ successes). Note that $X=Y+r$. Thus knowing the mean of $Y$ will derive the mean of $X$, a fact we will use below. Instead of memorizing the probability functions (1) and (2), it is better to understand and remember the thought processes involved. Because of the natural interpretation of performing Bernoulli trials until the $r$th success, it is a good idea to introduce the negative binomial distribution via the distributions described by (1) and (2), i.e., the case where the parameter $r$ is a positive integer. When $r=1$, the random experiment is a sequence of independent Bernoulli trials until the first success (this is called the geometric distribution). Of course, (1) and (2) can also simply be used as counting distributions without any connection with a series of Bernoulli trials (e.g. used in an insurance context as the number of losses or claims arising from a group of insurance policies). The binomial coefficient in (0) is defined when both numbers are non-negative integers and that the top one is greater than or equal to the bottom one. However, the rightmost term in (0) can be calculated even when the top number $m$ is not a non-negative integer. Thus when $m$ is any real number, the rightmost term (0) can be calculated provided that the bottom number $n$ is a positive integer. For convenience we define $\binom{m}{0}=1$. 
With this in mind, the binomial coefficient $\binom{m}{n}$ is defined for any real number $m$ and any non-negative integer $n$. The third version of the negative binomial distribution arises from the relaxation of the binomial coefficient $\binom{m}{n}$ just discussed. With this in mind, the probability function in (2) can be defined for any positive real number $r$: $\displaystyle P(Y=y)=\binom{y+r-1}{y} p^r (1-p)^y \ \ \ \ \ \ \ y=0,1,2,\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)$ where $\displaystyle \binom{y+r-1}{y}=\frac{(y+r-1)(y+r-2) \cdots (r+1)r}{y!}$. Of course when $r$ is a positive integer, versions (2) and (3) are identical. When $r$ is a positive real number but is not an integer, the distribution cannot be interpreted as the number of failures until the occurrence of $r$th success. Instead, it is used as a counting distribution. ________________________________________________________________________ The probabilities sum to one Do the probabilities in (1), (2) or (3) sum to one? For the interpretations of (1) and (2), is it possible to repeatedly perform Bernoulli trials and never get the $r$th success? For $r=1$, is it possible to never even get a success? In tossing a fair coin repeatedly, soon enough you will get a head and even if $r$ is a large number, you will eventually get $r$ number of heads. Here we wish to prove this fact mathematically. To show that (1), (2) and (3) are indeed probability functions, we use a fact concerning Maclaurin’s series expansion of the function $(1-x)^{-r}$, a fact that is covered in a calculus course. 
In the following two results, $r$ is a fixed positive real number and $y$ is any non-negative integer:

$\displaystyle \binom{y+r-1}{y}=(-1)^y \ \binom{-r}{y} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4)$

$\displaystyle \sum \limits_{y=0}^\infty (-1)^y \ \binom{-r}{y} \ x^y=(1-x)^{-r} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (5)$

The result (4) rearranges the binomial coefficient in probability function (3) into another binomial coefficient with a negative number. This is why there is the word “negative” in negative binomial distribution. The result (5) is the Maclaurin’s series expansion for the function $(1-x)^{-r}$. We first derive these two facts and then use them to show that the negative binomial probabilities in (3) sum to one. The following derives (4).

\displaystyle \begin{aligned} \binom{y+r-1}{y}&=\frac{(y+r-1)(y+r-2) \cdots (r+1)r}{y!} \\&=(-1)^y \ \frac{(-r)(-r-1) \cdots (-r-(y-1))}{y!} \\&=(-1)^y \ \binom{-r}{y} \end{aligned}

To derive (5), let $f(x)=(1-x)^{-r}$. Based on a theorem that can be found in most calculus texts, the function $f(x)$ has the following Maclaurin’s series expansion (Maclaurin’s series is simply Taylor’s series with center = 0).

$\displaystyle (1-x)^{-r}=f(0)+f^{'}(0)x+\frac{f^{(2)}(0)}{2!}x^2+\frac{f^{(3)}(0)}{3!}x^3+\cdots + \frac{f^{(n)}(0)}{n!}x^n+\cdots$

where $-1<x<1$. Now, filling in the derivatives $f^{(n)}(0)$, we have the following derivation.
\displaystyle \begin{aligned} (1-x)^{-r}&=1+rx+\frac{(r+1)r}{2!}x^2+\frac{(r+2)(r+1)r}{3!}x^3 \\& \ \ \ \ \ \ \ \ +\cdots+\frac{(r+y-1)(r+y-2) \cdots (r+1)r}{y!}x^y +\cdots \\&=1+(-1)^1 (-r)x+(-1)^2\frac{(-r)(-r-1)}{2!}x^2 \\& \ \ \ \ \ \ +(-1)^3 \frac{(-r)(-r-1)(-r-2)}{3!}x^3 +\cdots \\& \ \ \ \ \ \ +(-1)^y \frac{(-r)(-r-1) \cdots (-r-y+2)(-r-y+1)}{y!}x^y +\cdots \\&=(-1)^0 \binom{-r}{0}x^0 +(-1)^1 \binom{-r}{1}x^1+(-1)^2 \binom{-r}{2}x^2 \\& \ \ \ \ \ \ +(-1)^3 \binom{-r}{3}x^3+\cdots +(-1)^y \binom{-r}{y}x^y+\cdots \\&=\sum \limits_{y=0}^\infty (-1)^y \ \binom{-r}{y} \ x^y \end{aligned}

We can now show that the negative binomial probabilities in (3) sum to one. Let $q=1-p$.

\displaystyle \begin{aligned} \sum \limits_{y=0}^\infty \binom{y+r-1}{y} \ p^r \ q^y &=p^r \ \sum \limits_{y=0}^\infty (-1)^y \ \binom{-r}{y} \ q^y \ \ \ \ \ \ \ \ \ \ \ \text{using } (4) \\&=p^r \ (1-q)^{-r} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{using } (5)\\&=p^r p^{-r} \\&=1 \end{aligned}

________________________________________________________________________

The moment generating function

We now derive the moment generating function of the negative binomial distribution according to (3). The moment generating function is $M(t)=E(e^{tY})$ over all real numbers $t$ for which $M(t)$ is defined. The following derivation does the job.

\displaystyle \begin{aligned} M(t)&=E(e^{tY}) \\&=\sum \limits_{y=0}^\infty \ e^{t y} \ \binom{y+r-1}{y} \ p^r \ (1-p)^y \\&=p^r \ \sum \limits_{y=0}^\infty \ \binom{y+r-1}{y} \ [(1-p) e^t]^y \\&=p^r \ \sum \limits_{y=0}^\infty \ (-1)^y \binom{-r}{y} \ [(1-p) e^t]^y \ \ \ \ \ \ \ \ \ \ \ \text{using } (4) \\&=p^r \ [1-(1-p) \ e^t]^{-r} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{using } (5) \\&=\frac{p^r}{[1-(1-p) \ e^t]^{r}}\end{aligned}

The above moment generating function works for the negative binomial distribution with respect to (3) and thus to (2). For the distribution in (1), note that $X=Y+r$.
Thus $E(e^{tX})=E(e^{t(Y+r)})=e^{tr} \ E(e^{tY})$. The moment generating function of (1) is simply the above moment generating function multiplied by the factor $e^{tr}$. To summarize, the moment generating functions for the three versions are:

$\displaystyle M_X(t)=E[e^{tX}]=\frac{p^r \ e^{tr}}{[1-(1-p) \ e^t]^{r}} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{for } (1)$

$\displaystyle M_Y(t)=E[e^{tY}]=\frac{p^r}{[1-(1-p) \ e^t]^{r}} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{for } (2) \text{ and } (3)$

The domain of the moment generating function is the set of all $t$ for which $M_X(t)$ or $M_Y(t)$ is defined and is positive. Based on the form that it takes, we focus on making sure that $1-(1-p) \ e^t>0$. This leads to the domain $t<-\text{ln}(1-p)$.

________________________________________________________________________

The mean and the variance

With the moment generating function derived in the above section, we can now focus on finding the moments of the negative binomial distribution. To find the moments, simply take the derivatives of the moment generating function and evaluate them at $t=0$. For the distribution represented by the probability function in (3), we calculate the following:

$E(Y)=M_Y^{'}(0)$

$E(Y^2)=M_Y^{(2)}(0)$

$Var(Y)=E(Y^2)-E(Y)^2$

After taking the first and second derivatives and evaluating at $t=0$, the first and the second moments are:

$\displaystyle E(Y)=r \ \frac{1-p}{p}$

$\displaystyle E(Y^2)=\frac{r(1-p)[1+r(1-p)]}{p^2}$

The following derives the variance.

\displaystyle \begin{aligned} Var(Y)&=E(Y^2)-E(Y)^2 \\&=\frac{r(1-p)[1+r(1-p)]}{p^2}-\frac{r^2(1-p)^2}{p^2} \\&=\frac{r(1-p)[1+r(1-p)-r(1-p)]}{p^2} \\&=\frac{r(1-p)}{p^2} \end{aligned}

The above formula is the variance for the three versions (1), (2) and (3). Note that $Var(Y)>E(Y)$. In contrast, the variance of the Poisson distribution is identical to its mean.
Thus in the situation where the variance of observed data is greater than the sample mean, the negative binomial distribution should be a better fit than the Poisson distribution. ________________________________________________________________________ The independent sum There is an easy consequence that follows from the moment generating function derived above. The sum of several independent negative binomial distributions is also a negative binomial distribution. For example, suppose $T_1,T_2, \cdots,T_n$ are independent negative binomial random variables (version (3)). Suppose each $T_j$ has parameters $r_j$ and $p$ (the second parameter is identical). The moment generating function of the independent sum is the product of the individual moment generating functions. Thus the following is the moment generating function of $T=T_1+\cdots+T_n$. $\displaystyle M_T(t)=E[e^{tT}]=\frac{p^g}{[1-(1-p) \ e^t]^{g}}$ where $g=r_1+\cdots+r_n$. The moment generating function uniquely identifies the distribution. The above $M_T(t)$ is that of a negative binomial distribution with parameters $g$ and $p$ according to (3). A special case is that the sum of $n$ independent geometric distributions is a negative binomial distribution with the $r$ parameter being $r=n$. The following is the moment generating function of the sum $W$ of $n$ independent geometric distributions. $\displaystyle M_W(t)=E[e^{tW}]=\frac{p^n}{[1-(1-p) \ e^t]^{n}}$ ________________________________________________________________________ $\copyright \ \text{2015 by Dan Ma}$ # Getting from Binomial to Poisson In many binomial problems, the number of Bernoulli trials $n$ is large, relatively speaking, and the probability of success $p$ is small such that $n p$ is of moderate magnitude. For example, consider problems that deal with rare events where the probability of occurrence is small (as a concrete example, counting the number of people with July 1 as birthday out of a random sample of 1000 people). 
It is often convenient to approximate such binomial problems using the Poisson distribution. The justification for using the Poisson approximation is that the Poisson distribution is a limiting case of the binomial distribution. Now that cheap computing power is widely available, it is quite easy to use a computer or other computing device to obtain exact binomial probabilities for experiments up to 1000 trials or more. Though the Poisson approximation may no longer be necessary for such problems, knowing how to get from binomial to Poisson is important for understanding the Poisson distribution itself.

Consider a counting process that describes the occurrences of a certain type of events of interest in a unit time interval subject to three simplifying assumptions (discussed below). We are interested in counting the number of occurrences of the event of interest in a unit time interval. As a concrete example, consider the number of cars arriving at an observation point in a certain highway in a period of time, say one hour. We wish to model the probability distribution of how many cars will arrive at the observation point in this particular highway in one hour. Let $X$ be the random variable described by this probability distribution. We wish to know the probability that there are $k$ cars arriving in one hour. We start with using a binomial distribution as an approximation to the probability $P(X=k)$. We will see that upon letting $n \rightarrow \infty$, the $P(X=k)$ is a Poisson probability.

Suppose that we know $E(X)=\alpha$, perhaps an average obtained after observing cars at the observation point for many hours. The simplifying assumptions alluded to earlier are the following:

1. The numbers of cars arriving in nonoverlapping time intervals are independent.
2. The probability of one car arriving in a very short time interval of length $h$ is $\alpha h$.
3.
The probability of having more than one car arriving in a very short time interval is essentially zero.

Assumption 1 means that a large number of cars arriving in one period does not imply fewer cars will arrive in the next period and vice versa. In other words, the number of cars that arrive in any one given moment does not affect the number of cars that will arrive subsequently. Knowing how many cars arrive in one minute will not help predict the number of cars arriving in the next minute. Assumption 2 means that the rate of cars arriving is dependent only on the length of the time interval and not on when the time interval occurs (e.g. not on whether it is at the beginning of the hour or toward the end of the hour).

The assumptions 2 and 3 allow us to think of a very short period of time as a Bernoulli trial. Thinking of the arrival of a car as a success, each short time interval will result in either one success or one failure. To start, we can break up the hour into 60 minutes (into 60 Bernoulli trials). We then consider the binomial distribution with $n=60$ and $p=\frac{\alpha}{60}$. So the following is an approximation to our desired probability distribution.

$\displaystyle (1) \ \ \ \ \ P(X=k) \approx \binom{60}{k} \biggl(\frac{\alpha}{60}\biggr)^k \biggr(1-\frac{\alpha}{60}\biggr)^{60-k} \ \ \ \ \ k=0,1,2,\cdots, 60$

Conceivably, there can be more than 1 car arriving in a minute and observing cars in a one-minute interval may not be a Bernoulli trial. For a one-minute interval to qualify as a Bernoulli trial, there is either no car arriving or 1 car arriving in that one minute. So we can break up an hour into 3,600 seconds (into 3,600 Bernoulli trials). We now consider the binomial distribution with $n=3600$ and $p=\frac{\alpha}{3600}$.
$\displaystyle (2) \ \ \ \ \ P(X=k) \approx \binom{3600}{k} \biggl(\frac{\alpha}{3600}\biggr)^k \biggr(1-\frac{\alpha}{3600}\biggr)^{3600-k} \ \ \ \ \ k=0,1,2,\cdots, 3600$ It is also conceivable that more than 1 car can arrive in one second and observing cars in one-second interval may still not qualify as a Bernoulli trial. So we need to get more granular. We can divide up the hour into $n$ equal subintervals, each of length $\frac{1}{n}$. The assumptions 2 and 3 ensure that each subinterval is a Bernoulli trial (either it is a success or a failure; one car arriving or no car arriving). Assumption 1 tells us that all the $n$ subintervals are independent. So breaking up the hour into $n$ moments and counting the number of moments that are successes will result in a binomial distribution with parameters $n$ and $p=\frac{\alpha}{n}$. So we are ready to proceed with the following approximation to our probability distribution $P(X=k)$. $\displaystyle (3) \ \ \ \ \ P(X=k) \approx \binom{n}{k} \biggl(\frac{\alpha}{n}\biggr)^k \biggr(1-\frac{\alpha}{n}\biggr)^{n-k} \ \ \ \ \ k=0,1,2,\cdots, n$ As we get more granular, $n \rightarrow \infty$. We show that the limit of the binomial probability in $(3)$ is the Poisson distribution with parameter $\alpha$. We show the following. $\displaystyle (4) \ \ \ \ \ P(X=k) = \lim \limits_{n \rightarrow \infty} \binom{n}{k} \biggl(\frac{\alpha}{n}\biggr)^k \biggr(1-\frac{\alpha}{n}\biggr)^{n-k}=\frac{e^{-\alpha} \alpha^k}{k!} \ \ \ \ \ \ k=0,1,2,\cdots$ In the derivation of $(4)$, we need the following two mathematical tools. The statement $(5)$ is one of the definitions of the mathematical constant e. In the statement $(6)$, the integer $n$ in the numerator is greater than the integer $k$ in the denominator. It says that whenever we work with such a ratio of two factorials, the result is the product of $n$ with the smaller integers down to $(n-(k-1))$. There are exactly $k$ terms in the product. 
$\displaystyle (5) \ \ \ \ \ \lim \limits_{n \rightarrow \infty} \biggl(1+\frac{x}{n}\biggr)^n=e^x$

$\displaystyle (6) \ \ \ \ \ \frac{n!}{(n-k)!}=n(n-1)(n-2) \cdots (n-k+1) \ \ \ \ \ \ \ \ k<n$

The following is the derivation of $(4)$.

\displaystyle \begin{aligned}(7) \ \ \ \ \ P(X=k)&=\lim \limits_{n \rightarrow \infty} \binom{n}{k} \biggl(\frac{\alpha}{n}\biggr)^k \biggr(1-\frac{\alpha}{n}\biggr)^{n-k} \\&=\lim \limits_{n \rightarrow \infty} \ \frac{n!}{k! (n-k)!} \biggl(\frac{\alpha}{n}\biggr)^k \biggr(1-\frac{\alpha}{n}\biggr)^{n-k} \\&=\lim \limits_{n \rightarrow \infty} \ \frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k} \biggl(\frac{\alpha^k}{k!}\biggr) \biggr(1-\frac{\alpha}{n}\biggr)^{n} \biggr(1-\frac{\alpha}{n}\biggr)^{-k} \\&=\biggl(\frac{\alpha^k}{k!}\biggr) \lim \limits_{n \rightarrow \infty} \ \frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k} \\&\times \ \ \ \lim \limits_{n \rightarrow \infty} \biggr(1-\frac{\alpha}{n}\biggr)^{n} \ \lim \limits_{n \rightarrow \infty} \biggr(1-\frac{\alpha}{n}\biggr)^{-k} \\&=\frac{e^{-\alpha} \alpha^k}{k!} \end{aligned}

In $(7)$, we have $\displaystyle \lim \limits_{n \rightarrow \infty} \ \frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k}=1$. The reason is that the numerator is a polynomial whose leading term is $n^k$. Upon dividing by $n^k$ and taking the limit, we get 1. Based on $(5)$, we have $\displaystyle \lim \limits_{n \rightarrow \infty} \biggr(1-\frac{\alpha}{n}\biggr)^{n}=e^{-\alpha}$. For the last limit in the derivation we have $\displaystyle \lim \limits_{n \rightarrow \infty} \biggr(1-\frac{\alpha}{n}\biggr)^{-k}=1$.

We conclude with some comments. As the above derivation shows, the Poisson distribution is at heart a binomial distribution. When we divide the unit time interval into more and more subintervals (as the subintervals get more and more granular), the resulting binomial distribution behaves more and more like the Poisson distribution.
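The limit in $(4)$ can also be observed numerically: as $n$ grows, the binomial probabilities with $p=\alpha/n$ move closer to the Poisson probabilities. A short sketch with an illustrative $\alpha=2$ (function names are ours):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, a):
    return exp(-a) * a**k / factorial(k)

alpha = 2.0

def max_error(n):
    # largest gap between binomial(n, alpha/n) and Poisson(alpha)
    # over the first 15 values of k
    return max(abs(binom_pmf(k, n, alpha / n) - poisson_pmf(k, alpha))
               for k in range(15))

# the approximation improves as the subintervals get more granular
assert max_error(3600) < max_error(60) < max_error(10)
assert max_error(100000) < 1e-4
```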
The three assumptions used in the derivation are called the Poisson postulates, which are the underlying assumptions that govern a Poisson process. Such a random process describes the occurrences of some type of events that are of interest (e.g. the arrivals of cars in our example) in a fixed period of time. The positive constant $\alpha$ indicated in Assumption 2 is the parameter of the Poisson process, which can be interpreted as the rate of occurrences of the event of interest (or rate of changes, or rate of arrivals) in a unit time interval, meaning that the positive constant $\alpha$ is the mean number of occurrences in the unit time interval. The derivation in $(7)$ shows that whenever a certain type of events occurs according to a Poisson process with parameter $\alpha$, the counting variable of the number of occurrences in the unit time interval is distributed according to the Poisson distribution as indicated in $(4)$.

If we observe the occurrences of events over intervals of length other than unit length, say, in an interval of length $t$, the counting process is governed by the same three postulates, with the modification to Assumption 2 that the rate of changes of the process is now $\alpha t$. The mean number of occurrences in the time interval of length $t$ is now $\alpha t$. The Assumption 2 now states that for any very short time interval of length $h$ (and that is also a subinterval of the interval of length $t$ under observation), the probability of having one occurrence of the event in this short interval is $(\alpha t)h$. Applying the same derivation, it can be shown that the number of occurrences ($X_t$) in a time interval of length $t$ has the Poisson distribution with the following probability mass function.
$\displaystyle (8) \ \ \ \ \ P(X_t=k)=\frac{e^{-\alpha t} \ (\alpha t)^k}{k!} \ \ \ \ \ \ \ \ k=0,1,2,\cdots$

# Relating Binomial and Negative Binomial

The negative binomial distribution has a natural interpretation as a waiting time until the arrival of the $r$th success (when the parameter $r$ is a positive integer). The waiting time refers to the number of independent Bernoulli trials needed to reach the $r$th success. This interpretation of the negative binomial distribution gives us a good way of relating it to the binomial distribution. For example, if more than $k$ failed Bernoulli trials are needed before the $r$th success (i.e. more than $k+r$ trials in total), then there can be at most $r-1$ successes in the first $k+r$ trials. This tells us that the survival function of the negative binomial distribution is the cumulative distribution function (cdf) of a binomial distribution. In this post, we give the details behind this observation. A previous post on the negative binomial distribution is found here.

A random experiment resulting in two distinct outcomes (success or failure) is called a Bernoulli trial (e.g. head or tail in a coin toss, whether or not the birthday of a customer is the first of January, whether an insurance claim is above or below a given threshold, etc.). Suppose a series of independent Bernoulli trials are performed until reaching the $r$th success where the probability of success in each trial is $p$. Let $X_r$ be the number of failures before the occurrence of the $r$th success. The following is the probability mass function of $X_r$.
$\displaystyle (1) \ \ \ \ P(X_r=k)=\binom{k+r-1}{k} p^r (1-p)^k \ \ \ \ \ \ k=0,1,2,3,\cdots$

By definition, the survival function and cdf of $X_r$ are:

$\displaystyle (2) \ \ \ \ P(X_r > k)=\sum \limits_{j=k+1}^\infty \binom{j+r-1}{j} p^r (1-p)^j \ \ \ \ \ \ k=0,1,2,3,\cdots$

$\displaystyle (3) \ \ \ \ P(X_r \le k)=\sum \limits_{j=0}^k \binom{j+r-1}{j} p^r (1-p)^j \ \ \ \ \ \ k=0,1,2,3,\cdots$

For each nonnegative integer $k$, let $Y_{r+k}$ be the number of successes in performing a sequence of $r+k$ independent Bernoulli trials where $p$ is the probability of success. In other words, $Y_{r+k}$ has a binomial distribution with parameters $r+k$ and $p$.

If the random experiment requires more than $k$ failures to reach the $r$th success, there are at most $r-1$ successes in the first $k+r$ trials. Thus the survival function of $X_r$ is the same as the cdf of a binomial distribution. Equivalently, the cdf of $X_r$ is the same as the survival function of a binomial distribution. We have the following:

\displaystyle \begin{aligned}(4) \ \ \ \ P(X_r > k)&=P(Y_{k+r} \le r-1) \\&=\sum \limits_{j=0}^{r-1} \binom{k+r}{j} p^j (1-p)^{k+r-j} \ \ \ \ \ \ k=0,1,2,3,\cdots \end{aligned}

\displaystyle \begin{aligned}(5) \ \ \ \ P(X_r \le k)&=P(Y_{k+r} > r-1) \ \ \ \ \ \ k=0,1,2,3,\cdots \end{aligned}

Remark

The relation $(4)$ is analogous to the relationship between the Gamma distribution and the Poisson distribution. Recall that a Gamma distribution with shape parameter $n$ and rate parameter $\alpha$, where $n$ is a positive integer, can be interpreted as the waiting time until the $n$th change in a Poisson process. Thus, if the $n$th change takes place after time $t$, there can be at most $n-1$ arrivals in the time interval $[0,t]$. Thus the survival function of this Gamma distribution is the same as the cdf of a Poisson distribution. The relation $(4)$ is analogous to the following relation.
$\displaystyle (6) \ \ \ \ \int_t^\infty \frac{\alpha^n}{(n-1)!} \ x^{n-1} \ e^{-\alpha x} \ dx=\sum \limits_{j=0}^{n-1} \frac{e^{-\alpha t} \ (\alpha t)^j}{j!}$

A previous post on the negative binomial distribution is found here.

# The Negative Binomial Distribution

A counting distribution is a discrete distribution with probabilities only on the nonnegative integers. Such distributions are important in insurance applications since they can be used to model the number of events such as losses to the insured or claims to the insurer. Though playing a prominent role in statistical theory, the Poisson distribution is not appropriate in all situations, since it requires that the mean and the variance are equal. Thus the negative binomial distribution is an excellent alternative to the Poisson distribution, especially in the cases where the observed variance is greater than the observed mean.

The negative binomial distribution arises naturally from a probability experiment of performing a series of independent Bernoulli trials until the occurrence of the $r$th success where $r$ is a positive integer. From this starting point, we discuss three ways to define the distribution. We then discuss several basic properties of the negative binomial distribution. Emphasis is placed on the close connection between the Poisson distribution and the negative binomial distribution.

________________________________________________________________________

Definitions

We define three versions of the negative binomial distribution. The first two versions arise from the view point of performing a series of independent Bernoulli trials until the $r$th success where $r$ is a positive integer. A Bernoulli trial is a probability experiment whose outcome is random such that there are two possible outcomes (success or failure).

Let $X_1$ be the number of Bernoulli trials required for the $r$th success to occur where $r$ is a positive integer. Let $p$ be the probability of success in each trial.
The following is the probability function of $X_1$:

$\displaystyle (1) \ \ \ \ \ P(X_1=x)= \binom{x-1}{r-1} p^r (1-p)^{x-r} \ \ \ \ \ \ \ x=r,r+1,r+2,\cdots$

The idea for $(1)$ is that for $X_1=x$ to happen, there must be $r-1$ successes in the first $x-1$ trials and one additional success occurring in the last trial (the $x$th trial).

A more common version of the negative binomial distribution is the number of Bernoulli trials in excess of $r$ in order to produce the $r$th success. In other words, we consider the number of failures before the occurrence of the $r$th success. Let $X_2$ be this random variable. The following is the probability function of $X_2$:

$\displaystyle (2) \ \ \ \ \ P(X_2=x)=\binom{x+r-1}{x} p^r (1-p)^x \ \ \ \ \ \ \ x=0,1,2,\cdots$

The idea for $(2)$ is that there are $x+r$ trials and in the first $x+r-1$ trials, there are $x$ failures (or equivalently $r-1$ successes).

In both $(1)$ and $(2)$, the binomial coefficient is defined by

$\displaystyle (3) \ \ \ \ \ \binom{y}{k}=\frac{y!}{k! \ (y-k)!}=\frac{y(y-1) \cdots (y-(k-1))}{k!}$

where $y$ is a positive integer and $k$ is a nonnegative integer. However, the right-hand side of $(3)$ can be calculated even if $y$ is not a positive integer. Thus the binomial coefficient $\displaystyle \binom{y}{k}$ can be expanded to work for any real number $y$. However $k$ must still be a nonnegative integer.

$\displaystyle (4) \ \ \ \ \ \binom{y}{k}=\frac{y(y-1) \cdots (y-(k-1))}{k!}$

For convenience, we let $\displaystyle \binom{y}{0}=1$. When the real number $y>k-1$, the binomial coefficient in $(4)$ can be expressed as:

$\displaystyle (5) \ \ \ \ \ \binom{y}{k}=\frac{\Gamma(y+1)}{\Gamma(k+1) \Gamma(y-k+1)}$

where $\Gamma(\cdot)$ is the gamma function.

With the more relaxed notion of binomial coefficient, the probability function in $(2)$ above can be defined for any real number $r>0$. Thus the general version of the negative binomial distribution has two parameters $r$ and $p$, both real numbers, such that $0<p<1$.
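The product form $(4)$ and the gamma-function form $(5)$ of the binomial coefficient agree whenever both are defined. A quick sketch using the standard library (function names are ours):

```python
from math import lgamma, exp

def binom_product(y, k):
    # (4): y(y-1)...(y-(k-1)) / k!, any real y, non-negative integer k
    result = 1.0
    for i in range(k):
        result *= (y - i) / (i + 1)
    return result

def binom_gamma(y, k):
    # (5): Gamma(y+1) / (Gamma(k+1) * Gamma(y-k+1)), requires y > k - 1
    return exp(lgamma(y + 1) - lgamma(k + 1) - lgamma(y - k + 1))

y = 4.7  # a non-integer "top" number
for k in range(5):
    assert abs(binom_product(y, k) - binom_gamma(y, k)) < 1e-9
```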
The following is its probability function.

$\displaystyle (6) \ \ \ \ \ P(X=x)=\binom{x+r-1}{x} p^r (1-p)^x \ \ \ \ \ \ \ x=0,1,2,\cdots$

Whenever $r$ in $(6)$ is a real number that is not a positive integer, the interpretation of counting the number of failures until the occurrence of the $r$th success no longer applies. Instead we can think of it simply as a counting distribution. The following alternative parametrization of the negative binomial distribution is also useful.

$\displaystyle (6a) \ \ \ \ \ P(X=x)=\binom{x+r-1}{x} \biggl(\frac{\alpha}{\alpha+1}\biggr)^r \biggl(\frac{1}{\alpha+1}\biggr)^x \ \ \ \ \ \ \ x=0,1,2,\cdots$

The parameters in this alternative parametrization are $r$ and $\alpha>0$. Clearly, the ratio $\frac{\alpha}{\alpha+1}$ takes the place of $p$ in $(6)$. Unless stated otherwise, we use the parametrization of $(6)$.

________________________________________________________________________

What is negative about the negative binomial distribution?

What is negative about this distribution? What is binomial about this distribution? The name is suggested by the fact that the binomial coefficient in $(6)$ can be rearranged as follows:

\displaystyle \begin{aligned}(7) \ \ \ \ \ \binom{x+r-1}{x}&=\frac{(x+r-1)(x+r-2) \cdots r}{x!} \\&=(-1)^x \frac{(-r-(x-1))(-r-(x-2)) \cdots (-r)}{x!} \\&=(-1)^x \frac{(-r)(-r-1) \cdots (-r-(x-1))}{x!} \\&=(-1)^x \binom{-r}{x} \end{aligned}

The calculation in $(7)$ can be used to verify that $(6)$ is indeed a probability function, that is, all the probabilities sum to 1.

\displaystyle \begin{aligned}(8) \ \ \ \ \ 1&=p^r p^{-r}\\&=p^r (1-q)^{-r} \\&=p^r \sum \limits_{x=0}^\infty \binom{-r}{x} (-q)^x \ \ \ \ \ \ \ \ (8.1) \\&=p^r \sum \limits_{x=0}^\infty (-1)^x \binom{-r}{x} q^x \\&=\sum \limits_{x=0}^\infty \binom{x+r-1}{x} p^r q^x \end{aligned}

In $(8)$, we take $q=1-p$. The step $(8.1)$ above uses the following formula known as Newton’s binomial formula.
$\displaystyle (9) \ \ \ \ \ (1+t)^w=\sum \limits_{k=0}^\infty \binom{w}{k} t^k$

________________________________________________________________________

The Generating Function

By definition, the following is the generating function of the negative binomial distribution, using the probability function in $(6)$:

$\displaystyle (10) \ \ \ \ \ g(z)=\sum \limits_{x=0}^\infty \binom{r+x-1}{x} p^r q^x z^x$

where $q=1-p$. Using a similar calculation as in $(8)$, the generating function can be simplified as:

$\displaystyle (11) \ \ \ \ \ g(z)=p^r (1-q z)^{-r}=\frac{p^r}{(1-q z)^r}=\frac{p^r}{(1-(1-p) z)^r}; \ \ \ \ \ z<\frac{1}{1-p}$

As a result, the moment generating function of the negative binomial distribution is:

$\displaystyle (12) \ \ \ \ \ M(t)=\frac{p^r}{(1-(1-p) e^t)^r}; \ \ \ \ \ \ \ t<-\text{ln}(1-p)$

________________________________________________________________________

Independent Sum

One useful property of the negative binomial distribution is that the independent sum of negative binomial random variables, all with the same parameter $p$, also has a negative binomial distribution. Let $Y=Y_1+Y_2+\cdots+Y_n$ be an independent sum such that each $Y_i$ has a negative binomial distribution with parameters $r_i$ and $p$. Then the sum $Y=Y_1+Y_2+\cdots+Y_n$ has a negative binomial distribution with parameters $r=r_1+\cdots+r_n$ and $p$.

Note that the generating function of an independent sum is the product of the individual generating functions. The following shows that the product of the individual generating functions is of the same form as $(11)$, thus proving the above assertion.

$\displaystyle (13) \ \ \ \ \ h(z)=\frac{p^{\sum \limits_{i=1}^n r_i}}{(1-(1-p) z)^{\sum \limits_{i=1}^n r_i}}$

________________________________________________________________________

Mean and Variance

The mean and variance can be obtained from the generating function.
From $E(X)=g'(1)$ and $E(X^2)=g'(1)+g^{(2)}(1)$, we have:

$\displaystyle (14) \ \ \ \ \ E(X)=\frac{r(1-p)}{p} \ \ \ \ \ \ \ \ \ \ \ \ \ Var(X)=\frac{r(1-p)}{p^2}$

Note that $Var(X)=\frac{1}{p} E(X)>E(X)$. Thus when the sample data suggest that the variance is greater than the mean, the negative binomial distribution is an excellent alternative to the Poisson distribution. For example, suppose that the sample mean and the sample variance are 3.6 and 7.1. In exploring the possibility of fitting the data using the negative binomial distribution, we would be interested in the negative binomial distribution with this mean and variance. Then plugging these into $(14)$ produces the negative binomial distribution with $r=3.7$ and $p=0.507$ (approximately).

________________________________________________________________________

The Poisson-Gamma Mixture

One important application of the negative binomial distribution is that it is a mixture of a family of Poisson distributions with Gamma mixing weights. Thus the negative binomial distribution can be viewed as a generalization of the Poisson distribution. The negative binomial distribution can be viewed as a Poisson distribution where the Poisson parameter is itself a random variable, distributed according to a Gamma distribution. Thus the negative binomial distribution is known as a Poisson-Gamma mixture.

In an insurance application, the negative binomial distribution can be used as a model for claim frequency when the risks are not homogeneous. Let $N$ have a Poisson distribution with parameter $\theta$, which can be interpreted as the number of claims in a fixed period of time from an insured in a large pool of insureds. There is uncertainty in the parameter $\theta$, reflecting the risk characteristic of the insured. Some insureds are poor risks (with large $\theta$) and some are good risks (with small $\theta$). Thus the parameter $\theta$ should be regarded as a random variable $\Theta$.
The following is the conditional distribution of $N$ (conditional on $\Theta=\theta$):

$\displaystyle (15) \ \ \ \ \ P(N=n \lvert \Theta=\theta)=\frac{e^{-\theta} \ \theta^n}{n!} \ \ \ \ \ \ \ \ \ \ n=0,1,2,\cdots$

Suppose that $\Theta$ has a Gamma distribution with rate parameter $\alpha$ and shape parameter $\beta$. The following is the probability density function of $\Theta$.

$\displaystyle (16) \ \ \ \ \ g(\theta)=\frac{\alpha^\beta}{\Gamma(\beta)} \theta^{\beta-1} e^{-\alpha \theta} \ \ \ \ \ \ \ \ \ \ \theta>0$

Then the joint density of $N$ and $\Theta$ is:

$\displaystyle (17) \ \ \ \ \ P(N=n \lvert \Theta=\theta) \ g(\theta)=\frac{e^{-\theta} \ \theta^n}{n!} \ \frac{\alpha^\beta}{\Gamma(\beta)} \theta^{\beta-1} e^{-\alpha \theta}$

The unconditional distribution of $N$ is obtained by integrating out $\theta$ in $(17)$.

\displaystyle \begin{aligned}(18) \ \ \ \ \ P(N=n)&=\int_0^\infty P(N=n \lvert \Theta=\theta) \ g(\theta) \ d \theta \\&=\int_0^\infty \frac{e^{-\theta} \ \theta^n}{n!} \ \frac{\alpha^\beta}{\Gamma(\beta)} \ \theta^{\beta-1} \ e^{-\alpha \theta} \ d \theta \\&=\int_0^\infty \frac{\alpha^\beta}{n! \ \Gamma(\beta)} \ \theta^{n+\beta-1} \ e^{-(\alpha+1) \theta} d \theta \\&=\frac{\alpha^\beta}{n! \ \Gamma(\beta)} \ \frac{\Gamma(n+\beta)}{(\alpha+1)^{n+\beta}} \int_0^\infty \frac{(\alpha+1)^{n+\beta}}{\Gamma(n+\beta)} \theta^{n+\beta-1} \ e^{-(\alpha+1) \theta} d \theta \\&=\frac{\alpha^\beta}{n! \ \Gamma(\beta)} \ \frac{\Gamma(n+\beta)}{(\alpha+1)^{n+\beta}} \\&=\frac{\Gamma(n+\beta)}{\Gamma(n+1) \ \Gamma(\beta)} \ \biggl( \frac{\alpha}{\alpha+1}\biggr)^\beta \ \biggl(\frac{1}{\alpha+1}\biggr)^n \\&=\binom{n+\beta-1}{n} \ \biggl( \frac{\alpha}{\alpha+1}\biggr)^\beta \ \biggl(\frac{1}{\alpha+1}\biggr)^n \ \ \ \ \ \ \ \ \ n=0,1,2,\cdots \end{aligned}

Note that the integral in the fourth step in $(18)$ is 1.0 since the integrand is the pdf of a Gamma distribution. The above probability function is that of a negative binomial distribution.
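The mixture calculation in $(18)$ can be verified numerically: integrating the Poisson probability in $(15)$ against the Gamma density in $(16)$ by crude trapezoidal integration over a truncated range reproduces the negative binomial probabilities. A sketch with illustrative parameter values:

```python
from math import exp, factorial, gamma, comb

alpha, beta = 2.0, 3.0  # illustrative Gamma rate and shape

def joint(theta, n):
    # the integrand in (18): Poisson(n | theta) times the Gamma density (16)
    pois = exp(-theta) * theta**n / factorial(n)
    gam = alpha**beta / gamma(beta) * theta**(beta - 1) * exp(-alpha * theta)
    return pois * gam

def mixed_pmf(n, upper=40.0, steps=50000):
    # trapezoidal approximation of the integral in (18);
    # the integrand is negligible beyond theta = 40 for these parameters
    h = upper / steps
    s = 0.5 * (joint(0.0, n) + joint(upper, n))
    s += sum(joint(i * h, n) for i in range(1, steps))
    return s * h

def nb_pmf(n):
    # the closed form at the end of (18), with r = beta, p = alpha/(alpha+1)
    return comb(n + int(beta) - 1, n) * (alpha / (alpha + 1))**beta \
           * (1 / (alpha + 1))**n

for n in range(4):
    assert abs(mixed_pmf(n) - nb_pmf(n)) < 1e-5
```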
It is of the same form as $(6a)$. Equivalently, it is also of the form $(6)$ with parameter $r=\beta$ and $p=\frac{\alpha}{\alpha+1}$. The variance of the negative binomial distribution is greater than the mean. In a Poisson distribution, the mean equals the variance. Thus the unconditional distribution of $N$ is more dispersed than its conditional distributions. This is a characteristic of mixture distributions. The uncertainty in the parameter variable $\Theta$ has the effect of increasing the unconditional variance of the mixture distribution of $N$. The variance of a mixture distribution has two components, the weighted average of the conditional variances and the variance of the conditional means. The second component represents the additional variance introduced by the uncertainty in the parameter $\Theta$ (see The variance of a mixture). ________________________________________________________________________ The Poisson Distribution as Limit of Negative Binomial There is another connection to the Poisson distribution, that is, the Poisson distribution is a limiting case of the negative binomial distribution. We show that the generating function of the Poisson distribution can be obtained by taking the limit of the negative binomial generating function as $r \rightarrow \infty$. Interestingly, the Poisson distribution is also the limit of the binomial distribution. In this section, we use the negative binomial parametrization of $(6a)$. 
By substituting $\frac{\alpha}{\alpha+1}$ for $p$, the following are the mean, variance, and the generating function for the probability function in $(6a)$: \displaystyle \begin{aligned}(19) \ \ \ \ \ \ &E(X)=\frac{r}{\alpha} \\&\text{ }\\&Var(X)=\frac{\alpha+1}{\alpha} \ \frac{r}{\alpha}=\frac{r(\alpha+1)}{\alpha^2} \\&\text{ } \\&g(z)=\frac{1}{[1-\frac{1}{\alpha}(z-1)]^r} \ \ \ \ \ \ \ z<\alpha+1 \end{aligned} Let $r$ go to infinity and $\displaystyle \frac{1}{\alpha}$ go to zero while keeping their product constant. Thus $\displaystyle \mu=\frac{r}{\alpha}$ is constant (this is the mean of the negative binomial distribution). We show the following: $\displaystyle (20) \ \ \ \ \ \lim \limits_{r \rightarrow \infty} [1-\frac{\mu}{r}(z-1)]^{-r}=e^{\mu (z-1)}$ The right-hand side of $(20)$ is the generating function of the Poisson distribution with mean $\mu$. The generating function in the left-hand side is that of a negative binomial distribution with mean $\displaystyle \mu=\frac{r}{\alpha}$. The following is the derivation of $(20)$. \displaystyle \begin{aligned}(21) \ \ \ \ \ \lim \limits_{r \rightarrow \infty} [1-\frac{\mu}{r}(z-1)]^{-r}&=\lim \limits_{r \rightarrow \infty} e^{\displaystyle \biggl(ln[1-\frac{\mu}{r}(z-1)]^{-r}\biggr)} \\&=\lim \limits_{r \rightarrow \infty} e^{\displaystyle \biggl(-r \ ln[1-\frac{\mu}{r}(z-1)]\biggr)} \\&=e^{\displaystyle \biggl(\lim \limits_{r \rightarrow \infty} -r \ ln[1-\frac{\mu}{r}(z-1)]\biggr)} \end{aligned} We now focus on the limit in the exponent. \displaystyle \begin{aligned}(22) \ \ \ \ \ \lim \limits_{r \rightarrow \infty} -r \ ln[1-\frac{\mu}{r}(z-1)]&=\lim \limits_{r \rightarrow \infty} \frac{ln(1-\frac{\mu}{r} (z-1))^{-1}}{r^{-1}} \\&=\lim \limits_{r \rightarrow \infty} \frac{\mu (z-1) \ r^{-2}}{(1-\frac{\mu}{r} (z-1)) \ r^{-2}} \\&=\mu (z-1) \end{aligned} The middle step in $(22)$ uses L’Hopital’s Rule. The result in $(20)$ is obtained by combining $(21)$ and $(22)$.
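The limit in $(20)$ can also be observed numerically. The sketch below (Python; the mean $\mu=2$ and the values of $r$ are arbitrary choices) computes the negative binomial pmf in log space so that large $r$ does not overflow, and checks that it approaches the Poisson pmf as $r$ grows with the mean held fixed.

```python
import math

def nb_pmf(n, r, mu):
    # (6a) parametrization with the mean held at mu = r/alpha,
    # so alpha = r/mu and p = alpha/(alpha+1) = r/(r+mu); computed in logs
    p = r / (r + mu)
    log_pmf = (math.lgamma(n + r) - math.lgamma(n + 1) - math.lgamma(r)
               + r * math.log(p) + n * math.log(1 - p))
    return math.exp(log_pmf)

def poisson_pmf(n, mu):
    return math.exp(-mu) * mu ** n / math.factorial(n)

mu = 2.0
for n in range(8):
    # as r grows with the mean fixed, the negative binomial pmf approaches the Poisson pmf
    assert abs(nb_pmf(n, 10000, mu) - poisson_pmf(n, mu)) < 1e-3
    assert abs(nb_pmf(n, 5, mu) - poisson_pmf(n, mu)) < 0.06  # still visibly apart at small r
```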
________________________________________________________________________ Reference 1. Klugman S.A., Panjer H. H., Willmot G. E. Loss Models, From Data to Decisions, Second Edition, Wiley-Interscience, a John Wiley & Sons, Inc., New York, 2004 # Splitting a Poisson Distribution We consider a remarkable property of the Poisson distribution that has a connection to the multinomial distribution. We start with the following examples. Example 1 Suppose that the arrivals of customers in a gift shop at an airport follow a Poisson distribution with a mean of $\alpha=5$ per 10 minutes. Furthermore, suppose that each arrival can be classified into one of three distinct types – type 1 (no purchase), type 2 (purchase under $20), and type 3 (purchase over $20). Records show that about 25% of the customers are of type 1. The percentages of type 2 and type 3 are 60% and 15%, respectively. What is the probability distribution of the number of customers per hour of each type? Example 2 Roll a fair die $N$ times where $N$ is random and follows a Poisson distribution with parameter $\alpha$. For each $i=1,2,3,4,5,6$, let $N_i$ be the number of times the upside of the die is $i$. What is the probability distribution of each $N_i$? What is the joint distribution of $N_1,N_2,N_3,N_4,N_5,N_6$? In Example 1, the stream of customers arrives according to a Poisson distribution. It can be shown that the stream of each type of customers also has a Poisson distribution. One way to view this example is that we can split the Poisson distribution into three Poisson distributions. Example 2 also describes a splitting process, i.e. splitting a Poisson variable into 6 different Poisson variables. We can also view Example 2 as a multinomial distribution where the number of trials is not fixed but is random and follows a Poisson distribution. If the number of rolls of the die is fixed in Example 2 (say 10), then each $N_i$ would have a binomial distribution.
Yet, with the number of trials being Poisson, each $N_i$ has a Poisson distribution with mean $\displaystyle \frac{\alpha}{6}$. In this post, we describe this Poisson splitting process in terms of a “random” multinomial distribution (the view point of Example 2). ________________________________________________________________________ Suppose we have a multinomial experiment with parameters $N$, $r$, $p_1, \cdots, p_r$, where • $N$ is the number of multinomial trials, • $r$ is the number of distinct possible outcomes in each trial (type 1 through type $r$), • the $p_i$ are the probabilities of the $r$ possible outcomes in each trial. Suppose that $N$ follows a Poisson distribution with parameter $\alpha$. For each $i=1, \cdots, r$, let $N_i$ be the number of occurrences of the $i^{th}$ type of outcomes in the $N$ trials. Then $N_1,N_2,\cdots,N_r$ are mutually independent Poisson random variables with parameters $\alpha p_1,\alpha p_2,\cdots,\alpha p_r$, respectively. The variables $N_1,N_2,\cdots,N_r$ have a multinomial distribution and their joint probability function is: $\displaystyle (1) \ \ \ \ P(N_1=n_1,N_2=n_2,\cdots,N_r=n_r)=\frac{N!}{n_1! n_2! \cdots n_r!} \ p_1^{n_1} p_2^{n_2} \cdots p_r^{n_r}$ where $n_i$ are nonnegative integers such that $N=n_1+n_2+\cdots+n_r$. Since the total number of multinomial trials $N$ is not fixed and is random, $(1)$ is not the end of the story. The following is the joint probability function of $N_1,N_2,\cdots,N_r$: \displaystyle \begin{aligned}(2) \ \ \ \ P(N_1=n_1,N_2=n_2,\cdots,N_r=n_r)&=P(N_1=n_1,N_2=n_2,\cdots,N_r=n_r \lvert N=\sum \limits_{k=1}^r n_k) \\&\ \ \ \ \ \times P(N=\sum \limits_{k=1}^r n_k) \\&\text{ } \\&=\frac{(\sum \limits_{k=1}^r n_k)!}{n_1! \ n_2!
\ \cdots \ n_r!} \ p_1^{n_1} \ p_2^{n_2} \ \cdots \ p_r^{n_r} \ \times \frac{e^{-\alpha} \alpha^{\sum \limits_{k=1}^r n_k}}{(\sum \limits_{k=1}^r n_k)!} \\&\text{ } \\&=\frac{e^{-\alpha p_1} \ (\alpha p_1)^{n_1}}{n_1!} \ \frac{e^{-\alpha p_2} \ (\alpha p_2)^{n_2}}{n_2!} \ \cdots \ \frac{e^{-\alpha p_r} \ (\alpha p_r)^{n_r}}{n_r!} \end{aligned} To obtain the marginal probability function of $N_j$, $j=1,2,\cdots,r$, we sum out the other variables $N_k=n_k$ ($k \ne j$) in $(2)$ and obtain the following: $\displaystyle (3) \ \ \ \ P(N_j=n_j)=\frac{e^{-\alpha p_j} \ (\alpha p_j)^{n_j}}{n_j!}$ Thus we can conclude that $N_j$, $j=1,2,\cdots,r$, has a Poisson distribution with parameter $\alpha p_j$. Furthermore, the joint probability function of $N_1,N_2,\cdots,N_r$ is the product of the marginal probability functions. Thus we can conclude that $N_1,N_2,\cdots,N_r$ are mutually independent. ________________________________________________________________________ Example 1 Let $N_1,N_2,N_3$ be the number of customers per hour of type 1, type 2, and type 3, respectively. Here, we attempt to split a Poisson distribution with mean 30 per hour (based on 5 per 10 minutes). Thus $N_1,N_2,N_3$ are mutually independent Poisson variables with means $30 \times 0.25=7.5$, $30 \times 0.60=18$, $30 \times 0.15=4.5$, respectively. Example 2 As indicated earlier, each $N_i$, $i=1,2,3,4,5,6$, has a Poisson distribution with mean $\frac{\alpha}{6}$. According to $(2)$, the joint probability function of $N_1,N_2,N_3,N_4,N_5,N_6$ is simply the product of the six marginal Poisson probability functions. # The Poisson Distribution Let $\alpha$ be a positive constant. Consider the following probability distribution: $\displaystyle (1) \ \ \ \ \ P(X=j)=\frac{e^{-\alpha} \alpha^j}{j!} \ \ \ \ \ j=0,1,2,\cdots$ The above distribution is said to be a Poisson distribution with parameter $\alpha$.
The Poisson distribution is usually used to model the random number of events occurring in a fixed time interval. As will be shown below, $E(X)=\alpha$. Thus the parameter $\alpha$ is the rate of occurrence of the random events; it indicates on average how many events occur per unit of time. Examples of random events that may be modeled by the Poisson distribution include the number of alpha particles emitted by a radioactive substance counted in a prescribed area during a fixed period of time, the number of auto accidents in a fixed period of time or the number of losses arising from a group of insureds during a policy period. Each of the above examples can be thought of as a process that generates a number of arrivals or changes in a fixed period of time. If such a counting process leads to a Poisson distribution, then the process is said to be a Poisson process. We now discuss some basic properties of the Poisson distribution. Using the Taylor series expansion of $e^{\alpha}$, the following shows that $(1)$ is indeed a probability distribution. $\displaystyle . \ \ \ \ \ \ \ \sum \limits_{j=0}^\infty \frac{e^{-\alpha} \alpha^j}{j!}=e^{-\alpha} \sum \limits_{j=0}^\infty \frac{\alpha^j}{j!}=e^{-\alpha} e^{\alpha}=1$ The generating function of the Poisson distribution is $g(z)=e^{\alpha (z-1)}$ (see The generating function). The mean and variance can be calculated using the generating function. \displaystyle \begin{aligned}(2) \ \ \ \ \ &E(X)=g'(1)=\alpha \\&\text{ } \\&E[X(X-1)]=g^{(2)}(1)=\alpha^2 \\&\text{ } \\&Var(X)=E[X(X-1)]+E(X)-E(X)^2=\alpha^2+\alpha-\alpha^2=\alpha \end{aligned} The Poisson distribution can also be interpreted as an approximation to the binomial distribution. It is well known that the Poisson distribution is the limiting case of binomial distributions (see [1] or this post). 
$\displaystyle (3) \ \ \ \ \ \lim \limits_{n \rightarrow \infty} \binom{n}{j} \biggl(\frac{\alpha}{n}\biggr)^j \biggl(1-\frac{\alpha}{n}\biggr)^{n-j}=\frac{e^{-\alpha} \alpha^j}{j!}$ One application of $(3)$ is that we can use Poisson probabilities to approximate Binomial probabilities. The approximation is reasonably good when the number of trials $n$ in a binomial distribution is large and the probability of success $p$ is small. The binomial mean is $n p$ and the variance is $n p (1-p)$. When $p$ is small, $1-p$ is close to 1 and the binomial variance is approximately $n p (1-p) \approx np$. Whenever the mean of a binomial distribution is approximately equal to its variance, the Poisson approximation is quite good. As a rule of thumb, we can use Poisson to approximate binomial if $n \ge 100$ and $p \le 0.01$. As an example, we use the Poisson distribution to estimate the probability that at most 1 person out of 1000 will have a birthday on New Year's Day. Let $n=1000$ and $p=365^{-1}$. So we use the Poisson distribution with $\alpha=1000 \times 365^{-1}$. The following is an estimate using the Poisson distribution. $\displaystyle . \ \ \ \ \ \ \ P(X \le 1)=e^{-\alpha}+\alpha e^{-\alpha}=(1+\alpha) e^{-\alpha}=0.2415$ Another useful property is that the independent sum of Poisson distributions also has a Poisson distribution. Specifically, if each $X_i$ has a Poisson distribution with parameter $\alpha_i$, then the independent sum $X=X_1+\cdots+X_n$ has a Poisson distribution with parameter $\alpha=\alpha_1+\cdots+\alpha_n$. One way to see this is that the product of Poisson generating functions has the same general form as $g(z)=e^{\alpha (z-1)}$ (see The generating function). One interpretation of this property is that when merging several arrival processes, each of which follows a Poisson distribution, the result is still a Poisson distribution.
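The merging property can be checked directly: convolving two Poisson probability functions reproduces the Poisson probability function with the summed parameter. In this Python sketch the rates 3 and 5 are arbitrary illustrations.

```python
import math

def poisson_pmf(j, a):
    return math.exp(-a) * a ** j / math.factorial(j)

a1, a2 = 3.0, 5.0   # arbitrary rates for the two merged streams
for j in range(15):
    # P(X1 + X2 = j): convolve the two pmfs over the finite support {0, ..., j}
    conv = sum(poisson_pmf(k, a1) * poisson_pmf(j - k, a2) for k in range(j + 1))
    assert abs(conv - poisson_pmf(j, a1 + a2)) < 1e-12
```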
For example, suppose that at an airline ticket counter, the arrival of first class customers follows a Poisson process with a mean arrival rate of 8 per 15 minutes and the arrival of customers flying coach follows a Poisson process with a mean rate of 12 per 15 minutes. Then the arrival of customers of either type has a Poisson distribution with a mean rate of 20 per 15 minutes or 80 per hour. A Poisson distribution with a large mean can be thought of as an independent sum of Poisson distributions. For example, a Poisson distribution with a mean of 50 is the independent sum of 50 Poisson distributions each with mean 1. Because of the central limit theorem, when the mean is large, we can approximate the Poisson using the normal distribution. In addition to merging several Poisson distributions into one combined Poisson distribution, we can also split a Poisson into several Poisson distributions. For example, suppose that a stream of customers arrives according to a Poisson distribution with parameter $\alpha$ and each customer can be classified into one of two types (e.g. no purchase vs. purchase) with probabilities $p_1$ and $p_2$, respectively. Then the number of “no purchase” customers and the number of “purchase” customers are independent Poisson random variables with parameters $\alpha p_1$ and $\alpha p_2$, respectively. For more details on the splitting of Poisson, see Splitting a Poisson Distribution. Reference 1. Feller W. An Introduction to Probability Theory and Its Applications, Third Edition, John Wiley & Sons, New York, 1968 # The generating function Consider the function $g(z)=\displaystyle e^{\alpha (z-1)}$ where $\alpha$ is a positive constant. The following shows the derivatives of this function. \displaystyle \begin{aligned}.
\ \ \ \ \ \ &g(z)=e^{\alpha (z-1)} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ g(0)=e^{-\alpha} \\&\text{ } \\&g'(z)=e^{\alpha (z-1)} \ \alpha \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ g'(0)=e^{-\alpha} \ \alpha \\&\text{ } \\&g^{(2)}(z)=e^{\alpha (z-1)} \ \alpha^2 \ \ \ \ \ \ \ \ \ \ \ \ \ \ g^{(2)}(0)=2! \ \frac{e^{-\alpha} \ \alpha^2}{2!} \\&\text{ } \\&g^{(3)}(z)=e^{\alpha (z-1)} \ \alpha^3 \ \ \ \ \ \ \ \ \ \ \ \ \ \ g^{(3)}(0)=3! \ \frac{e^{-\alpha} \ \alpha^3}{3!} \\&\text{ } \\&\ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \\&\text{ } \\&g^{(n)}(z)=e^{\alpha (z-1)} \ \alpha^n \ \ \ \ \ \ \ \ \ \ \ \ \ \ g^{(n)}(0)=n! \ \frac{e^{-\alpha} \ \alpha^n}{n!} \end{aligned} Note that the derivative of $g(z)$ at each order is a multiple of a Poisson probability. Thus the Poisson distribution is coded by the function $g(z)=\displaystyle e^{\alpha (z-1)}$. For this reason, such a function is called a generating function (or probability generating function). This post discusses some basic facts about the generating function (gf) and its cousin, the moment generating function (mgf). One important characteristic is that these functions generate probabilities and moments. Another important characteristic is that there is a one-to-one correspondence between a probability distribution and its generating function and moment generating function, i.e. two random variables with different cumulative distribution functions cannot have the same gf or mgf. In some situations, this fact is useful in working with independent sums of random variables. ———————————————————————————————————— The Generating Function Suppose that $X$ is a random variable that takes only nonnegative integer values with the probability function given by $\text{ }$ $(1) \ \ \ \ \ \ P(X=j)=a_j, \ \ \ \ j=0,1,2,\cdots$ $\text{ }$ The idea of the generating function is that we use a power series to capture the entire probability distribution.
The following defines the generating function that is associated with the above sequence $a_j$: $(2) \ \ \ \ \ \ g(z)=a_0+a_1 \ z+a_2 \ z^2+ \cdots=\sum \limits_{j=0}^\infty a_j \ z^j$ $\text{ }$ Since the elements of the sequence $a_j$ are probabilities, we can also call $g(z)$ the generating function of the probability distribution defined by the sequence in $(1)$. The generating function $g(z)$ is defined wherever the power series converges. It is clear that at the minimum, the power series in $(2)$ converges for $\lvert z \rvert \le 1$. We discuss the following three properties of generating functions: 1. The generating function completely determines the distribution. 2. The moments of the distribution can be derived from the derivatives of the generating function. 3. The generating function of a sum of independent random variables is the product of the individual generating functions. The Poisson generating function at the beginning of the post is an example demonstrating property 1 (see Example 0 below for the derivation of the generating function). In some cases, the probability distribution of an independent sum can be deduced from the product of the individual generating functions. Some examples are given below. ———————————————————————————————————— Generating Probabilities We now discuss the property 1 indicated above.
To see that $g(z)$ generates the probabilities, let’s look at the derivatives of $g(z)$: \displaystyle \begin{aligned}(3) \ \ \ \ \ \ &g'(z)=a_1+2 a_2 \ z+3 a_3 \ z^2+\cdots=\sum \limits_{j=1}^\infty j a_j \ z^{j-1} \\&\text{ } \\&g^{(2)}(z)=2 a_2+6 a_3 \ z+ 12 a_4 \ z^2+\cdots=\sum \limits_{j=2}^\infty j (j-1) a_j \ z^{j-2} \\&\text{ } \\&g^{(3)}(z)=6 a_3+ 24 a_4 \ z+60 a_5 \ z^2+\cdots=\sum \limits_{j=3}^\infty j (j-1)(j-2) a_j \ z^{j-3} \\&\text{ } \\&\ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \\&\text{ } \\&g^{(n)}(z)=\sum \limits_{j=n}^\infty j(j-1) \cdots (j-n+1) a_j \ z^{j-n}=\sum \limits_{j=n}^\infty \binom{j}{n} n! \ a_j \ z^{j-n} \end{aligned} $\text{ }$ By letting $z=0$ above, all the terms vanish except for the constant term. We have: $\text{ }$ $(4) \ \ \ \ \ \ g^{(n)}(0)=n! \ a_n=n! \ P(X=n)$ $\text{ }$ Thus the generating function is a compact way of encoding the probability distribution. The probability distribution determines the generating function as seen in $(2)$. On the other hand, $(3)$ and $(4)$ demonstrate that the generating function also determines the probability distribution. ———————————————————————————————————— Generating Moments The generating function also determines the moments (property 2 indicated above). For example, we have: \displaystyle \begin{aligned}(5) \ \ \ \ \ \ &g'(1)=0 \ a_0+a_1+2 a_2+3 a_3+\cdots=\sum \limits_{j=0}^\infty j a_j=E(X) \\&\text{ } \\&g^{(2)}(1)=0 a_0 + 0 a_1+2 a_2+6 a_3+ 12 a_4+\cdots=\sum \limits_{j=0}^\infty j (j-1) a_j=E[X(X-1)] \\&\text{ } \\&E(X)=g'(1) \\&\text{ } \\&E(X^2)=g'(1)+g^{(2)}(1) \end{aligned} $\text{ }$ Note that $g^{(n)}(1)=E[X(X-1) \cdots (X-(n-1))]$. Thus the higher moment $E(X^n)$ can be expressed in terms of $g^{(n)}(1)$ and $g^{(k)}(1)$ where $k<n$. ———————————————————————————————————— More General Definitions Note that the definition in $(2)$ can also be interpreted as the mathematical expectation of $z^X$, i.e., $g(z)=E(z^X)$.
This provides a way to define the generating function for random variables that may take on values outside of the nonnegative integers. The following is a more general definition of the generating function of the random variable $X$, which is defined for all $z$ where the expectation exists. $\text{ }$ $(6) \ \ \ \ \ \ g(z)=E(z^X)$ $\text{ }$ ———————————————————————————————————— The Generating Function of Independent Sum Let $X_1,X_2,\cdots,X_n$ be independent random variables with generating functions $g_1,g_2,\cdots,g_n$, respectively. Then the generating function of $X_1+X_2+\cdots+X_n$ is given by the product $g_1 \cdot g_2 \cdots g_n$. Let $g(z)$ be the generating function of the independent sum $X_1+X_2+\cdots+X_n$. The following derives $g(z)$. Note that the general form of generating function $(6)$ is used. \displaystyle \begin{aligned}(7) \ \ \ \ \ \ g(z)&=E(z^{X_1+\cdots+X_n}) \\&\text{ } \\&=E(z^{X_1} \cdots z^{X_n}) \\&\text{ } \\&=E(z^{X_1}) \cdots E(z^{X_n}) \\&\text{ } \\&=g_1(z) \cdots g_n(z) \end{aligned} The probability distribution of a random variable is uniquely determined by its generating function. In particular, the generating function $g(z)$ of the independent sum $X_1+X_2+\cdots+X_n$ that is derived in $(7)$ is unique. So if the generating function is of a particular distribution, we can deduce that the distribution of the sum must be of the same distribution. See the examples below. ———————————————————————————————————— Example 0 In this example, we derive the generating function of the Poisson distribution. Based on the definition, we have: \displaystyle \begin{aligned}. 
\ \ \ \ \ \ g(z)&=\sum \limits_{j=0}^\infty \frac{e^{-\alpha} \alpha^j}{j!} \ z^j \\&\text{ } \\&=\sum \limits_{j=0}^\infty \frac{e^{-\alpha} (\alpha z)^j}{j!} \\&\text{ } \\&=\frac{e^{-\alpha}}{e^{- \alpha z}} \sum \limits_{j=0}^\infty \frac{e^{-\alpha z} (\alpha z)^j}{j!} \\&\text{ } \\&=e^{\alpha (z-1)} \end{aligned} $\text{ }$ Example 1 Suppose that $X_1,X_2,\cdots,X_n$ are independent random variables where each $X_i$ has a Bernoulli distribution with probability of success $p$. Let $q=1-p$. The following is the generating function for each $X_i$. $\text{ }$ $. \ \ \ \ \ \ g(z)=q+p z$ $\text{ }$ Then the generating function of the sum $X=X_1+\cdots+X_n$ is $g(z)^n=(q+p z)^n$. The following is the binomial expansion: $\text{ }$ \displaystyle \begin{aligned}(8) \ \ \ \ \ \ g(z)^n&=(q+p z)^n \\&\text{ } \\&=\sum \limits_{j=0}^n \binom{n}{j} q^{n-j} \ p^j \ z^j \end{aligned} $\text{ }$ By definition $(2)$, the generating function of $X=X_1+\cdots+X_n$ is: $\text{ }$ $(9) \ \ \ \ \ \ g(z)^n=\sum \limits_{j=0}^\infty P(X=j) \ z^j$ $\text{ }$ Comparing $(8)$ and $(9)$, we have $\displaystyle (10) \ \ \ \ \ \ P(X=j)=\left\{\begin{matrix}\displaystyle \binom{n}{j} p^j \ q^{n-j}&\ 0 \le j \le n\\{0}&\ j>n \end{matrix}\right.$ The probability distribution indicated by $(8)$ and $(10)$ is that of a binomial distribution. Since the probability distribution of a random variable is uniquely determined by its generating function, the independent sum of Bernoulli distributions must have a binomial distribution. $\text{ }$ Example 2 Suppose that $X_1,X_2,\cdots,X_n$ are independent and have Poisson distributions with parameters $\alpha_1,\alpha_2,\cdots,\alpha_n$, respectively. Then the independent sum $X=X_1+\cdots+X_n$ has a Poisson distribution with parameter $\alpha=\alpha_1+\cdots+\alpha_n$. Let $g(z)$ be the generating function of $X=X_1+\cdots+X_n$. For each $i$, the generating function of $X_i$ is $g_i(z)=e^{\alpha_i (z-1)}$.
The key to the proof is that the product of the $g_i$ has the same general form as the individual $g_i$. \displaystyle \begin{aligned}(11) \ \ \ \ \ \ g(z)&=g_1(z) \cdots g_n(z) \\&\text{ } \\&=e^{\alpha_1 (z-1)} \cdots e^{\alpha_n (z-1)} \\&\text{ } \\&=e^{(\alpha_1+\cdots+\alpha_n)(z-1)} \end{aligned} The generating function in $(11)$ is that of a Poisson distribution with mean $\alpha=\alpha_1+\cdots+\alpha_n$. Since the generating function uniquely determines the distribution, we can deduce that the sum $X=X_1+\cdots+X_n$ has a Poisson distribution with parameter $\alpha=\alpha_1+\cdots+\alpha_n$. $\text{ }$ Example 3 In rolling a fair die, let $X$ be the number shown on the up face. The associated generating function is: $\displaystyle. \ \ \ \ \ \ g(z)=\frac{1}{6}(z+z^2+z^3+z^4+z^5+z^6)=\frac{z(1-z^6)}{6(1-z)}$ The generating function can be further reduced as: \displaystyle \begin{aligned}. \ \ \ \ \ \ g(z)&=\frac{z(1-z^6)}{6(1-z)} \\&\text{ } \\&=\frac{z(1-z^3)(1+z^3)}{6(1-z)} \\&\text{ } \\&=\frac{z(1-z)(1+z+z^2)(1+z^3)}{6(1-z)} \\&\text{ } \\&=\frac{z(1+z+z^2)(1+z^3)}{6} \end{aligned} Suppose that we roll the fair die 4 times. Let $W$ be the sum of the 4 rolls. Then the generating function of $W$ is $\displaystyle. \ \ \ \ \ \ g(z)^4=\frac{z^4 (1+z^3)^4 (1+z+z^2)^4}{6^4}$ The random variable $W$ ranges from 4 to 24. Thus the probability function ranges from $P(W=4)$ to $P(W=24)$. To find these probabilities, we simply need to decode the generating function $g(z)^4$. For example, to find $P(W=12)$, we need to find the coefficient of the term $z^{12}$ in the polynomial $g(z)^4$. To help this decoding, we can expand two of the polynomials in $g(z)^4$. \displaystyle \begin{aligned}.
\ \ \ \ \ \ g(z)^4&=\frac{z^4 (1+z^3)^4 (1+z+z^2)^4}{6^4} \\&\text{ } \\&=\frac{z^4 \times A \times B}{6^4} \\&\text{ } \\&A=(1+z^3)^4=1+4z^3+6z^6+4z^9+z^{12} \\&\text{ } \\&B=(1+z+z^2)^4=1+4z+10z^2+16z^3+19z^4+16z^5+10z^6+4z^7+z^8 \end{aligned} Based on the above polynomials, there are three ways of forming $z^{12}$. They are: $(z^4 \times 1 \times z^8)$, $(z^4 \times 4z^3 \times 16z^5)$, $(z^4 \times 6z^6 \times 10z^2)$. Thus we have: $\displaystyle. \ \ \ \ \ \ P(W=12)=\frac{1}{6^4}(1+4 \times 16+6 \times 10)=\frac{125}{6^4}$ To find the other probabilities, we can follow the same decoding process. ———————————————————————————————————— Remark The probability distribution of a random variable is uniquely determined by its generating function. This fundamental property is useful in determining the distribution of an independent sum. The generating function of the independent sum is simply the product of the individual generating functions. If the product is of a certain distributional form (as in Example 1 and Example 2), then we can deduce that the sum must be of the same distribution. We can also decode the product of generating functions to obtain the probability function of the independent sum (as in Example 3). The method in Example 3 is quite tedious. But one advantage is that it is a “machine process”, a pretty foolproof process that can be performed mechanically. The machine process is this: Code the individual probability distribution in a generating function $g(z)$. Then raise it to the power $n$. After performing some manipulation to $g(z)^n$, decode the probabilities from $g(z)^n$. As long as we can perform the algebraic manipulation carefully and correctly, this process will be sure to provide the probability distribution of an independent sum. ———————————————————————————————————— The Moment Generating Function The moment generating function of a random variable $X$ is $M_X(t)=E(e^{tX})$, defined for all real numbers $t$ for which the expected value exists.
The moments can be computed more directly using an mgf. From the theory of mathematical analysis, it can be shown that if $M_X(t)$ exists on some interval $-a<t<a$, then the derivatives of $M_X(t)$ of all orders exist at $t=0$. Furthermore, it can be shown that $E(X^n)=M_X^{(n)}(0)$. Suppose that $g(z)$ is the generating function of a random variable. The following relates the generating function and the moment generating function. \displaystyle \begin{aligned}. \ \ \ \ \ \ &M_X(t)=g(e^t) \\&\text{ } \\&g(z)=M_X(\ln z) \end{aligned} ———————————————————————————————————— Reference 1. Feller W. An Introduction to Probability Theory and Its Applications, Third Edition, John Wiley & Sons, New York, 1968
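The "machine process" of Example 3 lends itself to direct computation: multiplying coefficient lists is exactly polynomial multiplication. The following Python sketch (the helper name `poly_mul` is my own) recovers $P(W=12)=\frac{125}{6^4}$ from the generating function of one die roll.

```python
from fractions import Fraction

def poly_mul(a, b):
    # multiply two polynomials stored as coefficient lists (index = power of z)
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# one fair-die roll: g(z) = (z + z^2 + ... + z^6)/6; keep integer coefficients
# and divide by 6^4 only at the end
die = [0, 1, 1, 1, 1, 1, 1]
g4 = [1]
for _ in range(4):
    g4 = poly_mul(g4, die)

# the coefficient of z^12 in g(z)^4, divided by 6^4, is P(W = 12)
p12 = Fraction(g4[12], 6 ** 4)
assert p12 == Fraction(125, 1296)
```

The same list `g4` decodes every probability $P(W=4)$ through $P(W=24)$ at once, which is the advantage of carrying out the machine process mechanically.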
# Geo Find the number of cubic centimeters in the volume of the cylinder formed by rotating a square with side length 14 centimeters about its vertical line of symmetry. Express your answer in terms of $\pi$. #1 *edit* Sorry about this! I didn't pay good enough attention here...the question says to rotate about the square's vertical line of symmetry, which makes a cylinder with a radius of 7 cm and a height of 14 cm. Thanks to Alan for bringing this to my attention!! So... the volume  =  pi * 7^2 * 14  =  pi * 49 * 14  =  686pi  cubic cm I'll leave the old answer below even though I think it is wrong. ---------- The rotation will form a cylinder with a radius of 14 cm and a height of 14 cm, like this.... volume of cylinder  =  (pi * radius^2) * (height)  =  (pi * 14^2) * 14  =  (pi * 196) * 14  =  2744pi The volume is 2744pi cm^3. hectictar  Sep 27, 2017 edited by hectictar  Sep 27, 2017
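For anyone who wants to double-check the arithmetic, a small Python sketch:

```python
import math

side = 14                       # cm, side of the square
radius = side / 2               # rotating about the vertical line of symmetry gives radius 7
height = side
coeff = radius ** 2 * height    # coefficient of pi in the cylinder volume
assert coeff == 686.0
print(f"volume = {coeff:g}*pi cubic cm (about {coeff * math.pi:.2f})")
```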
# What are the acid-base pairs in the following equation: $HCOOH(aq) + H_2O(l) \rightleftharpoons HCOO^{-}(aq) + H_3O^{+}(aq)$? Dec 30, 2017 Formic acid and formate, and water and the hydronium ion. $HCOOH$ is a weak acid, with conjugate base $HCOO^{-}$. $H_2O$ is the base (in this case), and $H_3O^{+}$ is the conjugate acid.
# Prove this polylogarithmic integral has the stated closed form value Question. Prove the following polylogarithmic integral has the stated value: $$I:=\int_{0}^{1}\frac{\operatorname{Li}_2{(1-x)}\log^2{(1-x)}}{x}\mathrm{d}x=-11\zeta{(5)}+6\zeta{(3)}\zeta{(2)}.$$ I was able to arrive at the proposed value for the integral through a combination of educated guessing and after-the-fact numerical checking (the value of the integral was approximately $I\approx0.457621...$), but I'm at a loss for how to derive it rigorously. Thoughts? • Expanding and multiplying power series' then integrating term by term. Jul 19, 2014 at 23:05 • It might help to note that $\log(1-x) = -\operatorname{Li}_1(x)$. Jul 19, 2014 at 23:20 • The integral can be evaluated in terms of the harmonic sum $\displaystyle\sum_{n=1}^{\infty} \frac{H_{n}^{(3)}}{n^{2}}$, which in turn can be evaluated using contour integration (or perhaps in some other manner of which I'm not aware). Jul 19, 2014 at 23:50 • Have you tried integration by parts with regards to $\text{Li}_2(1-x)$ ? Jul 19, 2014 at 23:50 • integralsandseries.prophpbb.com/topic413.html#p2719 Just make the substitution $u=1-x$ in (1). There is a proof of (1) later in the post. Jul 20, 2014 at 0:12 Since no answers have been posted, I'll expand on my comment above. There is a general formula that states $$\sum_{n=1}^{\infty}\frac{H_{n}^{(r)}}{n^{q}}=\zeta(r)\zeta(q)-\frac{(-1)^{r-1}}{(r-1)!}\int_{0}^{1}\frac{\text{Li}_{q}(x) \log^{r-1}(x) }{1-x}dx$$ where $$H_{n}^{(r)} = \sum_{k=1}^{n} \frac{1}{k^{r}} .$$ A proof can be found here. 
Making the substitution $u = 1-x$, $$\sum_{n=1}^{\infty}\frac{H_{n}^{(r)}}{n^{q}}=\zeta(r)\zeta(q)-\frac{(-1)^{r-1}}{(r-1)!}\int_{0}^{1}\frac{\text{Li}_{q}(1-u) \log^{r-1}(1-u)}{u}du .$$ Therefore, $$\int_{0}^{1} \frac{\text{Li}_{2}(1-x)\log^{2}(1-x)}{x} \ dx = 2 \zeta (3) \zeta (2) - 2 \sum_{n=1}^{\infty} \frac{H_{n}^{(3)}}{n^{2}} .$$ To simplify the evaluation of that Euler sum slightly, I'm first going to evaluate $\displaystyle \sum_{n=1}^{\infty} \frac{H_{n}^{(2)}}{n^{3}}$ and then use the identity $$\sum_{n=1}^{\infty} \frac{H_{n}^{(r)}}{n^{q}} + \sum_{n=1}^{\infty} \frac{H_{n}^{(q)}}{n^{r}} = \zeta(r) \zeta(q) + \zeta(r+q) . \tag{1}$$ Consider $$f(z) = \frac{ \pi \cot (\pi z) \ \psi_{1}(-z)}{z^{3}}$$ where $\psi_{1}(z)$ is the trigamma function. The function $f(z)$ has poles of order $3$ at the positive integers, a pole of order 6 at the origin, and simple poles at the negative integers. On the sides of a square with vertices at $\pm \left( N+ \frac{1}{2} \right) \pm i \left( N+ \frac{1}{2} \right)$, $\cot (\pi z)$ is uniformly bounded. And when $z$ is large in magnitude and not on the positive real axis, $\psi_{1}(-z)$ is approximately $\displaystyle - \frac{1}{z}$. So as $N \to \infty$ through the integers, $\displaystyle \int f(z) \ dz$ will vanish on all four sides of the square.
Therefore, $$\sum_{n=-\infty}^{\infty} \text{Res} [f(z), n] = 0.$$ The Laurent expansion of $\psi_{1}(-z)$ at the positive integers (including $0$) is $$\psi_{1}(-z) = \frac{1}{(z-n)^{2}} + \sum_{m=0}^{\infty} (m+1) \left( (-1)^{m} H_{n}^{(m+2)} + \zeta(m+2) \right) (z-n)^{m} .$$ And the Laurent expansion of $\pi \cot \pi z$ at the integers is $$\pi \cot (\pi z) = \frac{1}{z-n} - 2 \sum_{m=1}^{\infty} \zeta(2m) (z-n)^{2m-1} .$$ So at the positive integers, $$f(z) = \frac{1}{z^{3}} \left(\frac{1}{(z-n)^{3}} + \frac{H_{n}^{(2)} - \zeta(2)}{(z-n)} + \mathcal{O}(1) \right)$$ which implies \begin{align} \text{Res} [f(z), n] &= \text{Res} \left[\frac{1}{z^{3}(z-n)^{3}}, n \right] + \text{Res} \left[\frac{H_{n}^{(2)}-\zeta(2)}{z^{3}(z-n)}, n \right] \\ &= \frac{6}{n^{5}} + \frac{H_{n}^{(2)}}{n^{3}} -\frac{\zeta(2)}{n^{3}} . \end{align} At the negative integers, $$\text{Res} [f(z), -n] = - \frac{\psi_{1}(n)}{n^{3}} = \frac{H_{n-1}^{(2)} - \zeta(2)}{n^{3}} = \frac{H_{n}^{(2)}}{n^{3}} - \frac{1}{n^{5}} - \frac{\zeta(2)}{n^{3}} .$$ And at the origin, $$f(z) = \frac{1}{z^{6}} - \frac{\zeta(2)}{z^{4}} + \frac{2 \zeta(3)}{z^{3}} + \left(\zeta(4) - 2 \zeta^{2}(2) \right) \frac{1}{z^{2}} + \Big(4 \zeta(5) - 4 \zeta(3) \zeta(2) \Big) \frac{1}{z} + \mathcal{O}(1)$$ which implies $$\text{Res}[f(z),0] = 4 \zeta(5) - 4 \zeta(3) \zeta(2) .$$ Summing up all the residues, $$6 \zeta(5) + \sum_{n=1}^{\infty} \frac{H_{n}^{(2)}}{n^{3}} - \zeta(3) \zeta(2) + \sum_{n=1}^{\infty} \frac{H_{n}^{(2)}}{n^{3}} - \zeta(5) - \zeta(3) \zeta(2) + 4 \zeta(5) - 4 \zeta(3) \zeta(2) = 0$$ which implies $$\sum_{n=1}^{\infty} \frac{H_{n}^{(2)}}{n^{3}} = 3 \zeta(3) \zeta(2) - \frac{9}{2} \zeta(5) .$$ Then using $(1)$, $$\sum_{n=1}^{\infty} \frac{H_{n}^{(3)}}{n^{2}} = \zeta(3) \zeta(2) + \zeta(5) - 3 \zeta(3) \zeta(2) + \frac{9}{2} \zeta(5) = - 2 \zeta(3) \zeta(2) + \frac{11}{2} \zeta(5).$$ So finally we have \begin{align} \int_{0}^{1} \frac{\text{Li}_{2}(1-x)\log^{2}(1-x)}{x} \ dx &= 2 \zeta (3) \zeta (2) - 2
\Big(- 2 \zeta(3) \zeta(2) + \frac{11}{2} \zeta(5) \Big) \\ &= 6 \zeta(3) \zeta(2) - 11 \zeta(5) . \end{align} • Interesting evaluation. Aug 5, 2014 at 19:33 • @OlivierOloa Thanks. Aug 5, 2014 at 19:44 \begin{align} I&=\int_0^1\frac{\operatorname{Li}_2(1-x)\ln^2(1-x)}{x}\ dx=\int_0^1\frac{\operatorname{Li}_2(x)\ln^2x}{1-x}\ dx\\ &=\sum_{n=1}^\infty H_n^{(2)}\int_0^1x^n\ln^2x\ dx=2\sum_{n=1}^\infty\frac{H_n^{(2)}}{(n+1)^3}\\ &=2\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^3}-2\zeta(5)\\ &=2\left(3\zeta(2)\zeta(3)-\frac92\zeta(5)\right)-2\zeta(5)\\ &=6\zeta(2)\zeta(3)-11\zeta(5) \end{align} Proof: \begin{align} S&=\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^3}=\sum_{n=1}^\infty\frac1{n^3}\left(\zeta(2)-\sum_{k=1}^\infty\frac1{(n+k)^2}\right)=\zeta(2)\zeta(3)-\sum_{k=1}^\infty\sum_{n=1}^\infty\frac1{n^3(n+k)^2}\\ &=\zeta(2)\zeta(3)-\sum_{k=1}^\infty\sum_{n=1}^\infty\left(\frac{3}{k^4}\left(\frac1{n}-\frac1{n+k}\right)-\frac2{k^3n^2}-\frac1{k^3(n+k)^2}+\frac1{k^2n^3}\right)\\ &=\zeta(2)\zeta(3)-\sum_{k=1}^\infty\left(\frac{3H_k}{k^4}-\frac{2\zeta(2)}{k^3}-\frac{\zeta(2)-H_k^{(2)}}{k^3}+\frac{\zeta(3)}{k^2}\right)\\ &=\zeta(2)\zeta(3)-3\sum_{k=1}^\infty\frac{H_k}{k^4}+2\zeta(2)\zeta(3)+\zeta(2)\zeta(3)-S-\zeta(2)\zeta(3)\\ 2S&=3\zeta(2)\zeta(3)-3\sum_{k=1}^\infty\frac{H_k}{k^4}\\ &=3\zeta(2)\zeta(3)-3\left(3\zeta(5)-\zeta(2)\zeta(3)\right)\\ &=6\zeta(2)\zeta(3)-9\zeta(5)\\ S&=3\zeta(2)\zeta(3)-\frac92\zeta(5) \end{align} Different proof: By Cauchy product we have $$\operatorname{Li}_2(x)\operatorname{Li}_3(x)=\sum_{n=1}^\infty\left(\frac{6H_n}{n^4}+\frac{3H_n^{(2)}}{n^3}+\frac{H_n^{(3)}}{n^2}-\frac{10}{n^5}\right)x^n$$ set $$x=1$$ to get $$\zeta(2)\zeta(3)=6\sum_{n=1}^\infty\frac{H_n}{n^4}+3\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^3}+\sum_{n=1}^\infty\frac{H_n^{(3)}}{n^2}-10\zeta(5)$$ Now lets use the well-known identity $$\sum_{n=1}^\infty\frac{H_n^{(p)}}{n^q}+\sum_{n=1}^\infty\frac{H_n^{(q)}}{n^p}=\zeta(p)\zeta(q)+\zeta(p+q)$$ set $$p=2$$ and $$q=3$$
$$\sum_{n=1}^\infty\frac{H_n^{(3)}}{n^2}=\zeta(2)\zeta(3)+\zeta(5)-\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^3}\tag{*}$$ So $$\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^3}=\frac92\zeta(5)-3\sum_{n=1}^\infty\frac{H_n}{n^4}$$ substitute $$\sum_{n=1}^\infty\frac{H_n}{n^4}=3\zeta(5)-\zeta(2)\zeta(3)$$ we get $$\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^3}=3\zeta(2)\zeta(3)-\frac92\zeta(5)$$ and as a bonus, substituting the last result in (*) we get $$\sum_{n=1}^\infty\frac{H_n^{(3)}}{n^2}=\frac{11}2\zeta(5)-2\zeta(2)\zeta(3)$$
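Both closed forms (and the value of the original integral) are easy to sanity-check numerically. Below is a minimal Python sketch using only the standard library; the truncation point N and the tolerances are arbitrary choices, not part of the derivation:

```python
import math

N = 200_000  # truncation point; the tails decay fast enough for ~1e-8 accuracy

zeta2 = math.pi ** 2 / 6
zeta3 = sum(1.0 / k**3 for k in range(1, N + 1))
zeta5 = sum(1.0 / k**5 for k in range(1, N + 1))

# Partial sums of sum H_n^(2)/n^3 and sum H_n^(3)/n^2
S = T = 0.0
H2 = H3 = 0.0
for n in range(1, N + 1):
    H2 += 1.0 / n**2   # H_n^(2)
    H3 += 1.0 / n**3   # H_n^(3)
    S += H2 / n**3
    T += H3 / n**2

# sum H_n^(2)/n^3 = 3*zeta(2)*zeta(3) - (9/2)*zeta(5)
assert abs(S - (3 * zeta2 * zeta3 - 4.5 * zeta5)) < 1e-8
# sum H_n^(3)/n^2 = (11/2)*zeta(5) - 2*zeta(2)*zeta(3); tail ~ zeta(3)/N, so looser
assert abs(T - (5.5 * zeta5 - 2 * zeta2 * zeta3)) < 1e-4
# The integral: I = 2*S - 2*zeta(5) = 6*zeta(2)*zeta(3) - 11*zeta(5)
I = 2 * S - 2 * zeta5
assert abs(I - (6 * zeta2 * zeta3 - 11 * zeta5)) < 1e-7
```

The partial sum of $\sum H_n^{(2)}/n^3$ converges like $\zeta(2)/(2N^2)$, while $\sum H_n^{(3)}/n^2$ only converges like $\zeta(3)/N$, hence the two different tolerances.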
# Peculiar solution 1. Dec 13, 2006 ### alfredblase Show that $$q_{c}(t)=\frac{1}{\sinh \omega \tau} \left[ q' \sinh{\omega(t''-t)} + q'' \sinh{\omega(t-t')} \right]$$ is the solution to the classical equation of motion for the harmonic oscillator $$\ddot {q}_{c}(t)-\omega^{2}q_{c}(t)=0$$ where $$q_{c}(t)$$ is the position, $$\tau = t'' - t'$$, $$q_{c}(t')=q'$$, $$q_{c}(t'')=q''$$ and $$\ddot {q}_{c}(t)$$ is the second time derivative of $$q_{c}(t)$$ I can differentiate $$q_{c}(t)$$ twice with respect to time easily enough to show that the first equation is a solution to the equation of motion, but there are many functions that satisfy this. How do I show that it's the solution? I've tried solving the equation of motion directly but I can't find any way to solve it so that I end up with $$q_{c}(t)$$ as a function of t, t', t'', q' and q''. (For example, the way everyone is usually taught to solve it, you simply end up with a sine function of t multiplied by an amplitude..) Any help/suggestions would be very much appreciated. Thank you for taking the time to read this :) Last edited: Dec 13, 2006 2. Dec 13, 2006 ### OlderDan Do you really think that is what it is asking you to do? I think that if you have shown that q_c(t) is a valid solution, then you are done. 3. Dec 13, 2006 ### alfredblase hmm.. the book I'm following (Jean Zinn-Justin's "Quantum Field Theory and Critical Phenomena") is "calculating the classical action explicitly" and one of the steps along the way is finding that $$q_{c}(t)$$ is as given. perhaps you are right.. strictly speaking I have found that this form of $$q_{c}(t)$$ is valid, but shouldn't I really understand the motivation behind using that particular solution? To be honest I think the book should be telling me that, but there's no reason given other than "we find the solution to be thus".. I can't help feeling I'm missing something.. EDIT: perhaps the motivation is that this form has all the bits that define the path integral?
(In the overall scheme of things we're applying a path integral with the action for a harmonic oscillator) Last edited: Dec 13, 2006 4. Dec 13, 2006 ### Physics Monkey Hi alfred, it is appropriate to say "the" instead of "a" because there are boundary conditions specified. In other words, you require that q_c satisfy the differential equation and have the value q' at t' and q'' at t''. As you no doubt recall from the theory of linear second order differential equations, solutions to such equations contain two constants of integration. These are fixed by the two boundary conditions. If you had not specified initial data or boundary conditions, then it would only make sense to say you had found a solution to the differential equation. The application of boundary conditions turns the "a" into a "the". Hope this helps. 5. Dec 14, 2006 ### alfredblase ohh i see now! how stupid of me. I should have at least thought of putting the values t' and t'' into the solution and thus seen that q_c satisfies those boundary conditions. doh! grr i really need to develop these basic instincts when looking at equations and learn to play with them, and understand what they are telling me :D oh well, thank you physics monkey! Last edited: Dec 14, 2006
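The boundary-condition point made in the thread can be checked directly: the quoted $q_c(t)$ reproduces $q'$ and $q''$ at the endpoints and satisfies $\ddot q = \omega^2 q$. Here is a minimal numerical sketch (the parameter values are arbitrary choices):

```python
import math

# Arbitrary test parameters: any omega, endpoints, and boundary values work
w, t1, t2, q1, q2 = 1.3, 0.2, 2.7, 0.5, -1.1
tau = t2 - t1

def qc(t):
    """Candidate solution q_c(t) from the thread."""
    return (q1 * math.sinh(w * (t2 - t)) + q2 * math.sinh(w * (t - t1))) / math.sinh(w * tau)

# Boundary conditions: q_c(t') = q', q_c(t'') = q''
assert abs(qc(t1) - q1) < 1e-12
assert abs(qc(t2) - q2) < 1e-12

# Equation of motion q'' - w^2 q = 0, checked with a central finite difference
h = 1e-4
for t in (0.5, 1.0, 1.9):
    qdd = (qc(t + h) - 2 * qc(t) + qc(t - h)) / h**2
    assert abs(qdd - w * w * qc(t)) < 1e-5
```

Any solution of the second-order ODE has two free constants; pinning the two boundary values fixes them uniquely, which is exactly why this expression is "the" solution.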
## Elementary Geometry for College Students (7th Edition) $17$ To expand the two brackets, multiply every term inside each bracket by the factor standing in front of it, keeping track of the signs: $3(2x + 5) - 2(3x - 1) = 6x + 15 - 6x + 2 = 17$
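Since the $6x$ terms cancel, the expression is constant in $x$; a quick check in plain Python over a few arbitrary sample values:

```python
# 3(2x + 5) - 2(3x - 1) = 6x + 15 - 6x + 2 = 17 for every x
for x in (-3, 0, 1.5, 42):
    assert 3 * (2 * x + 5) - 2 * (3 * x - 1) == 17
```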
##### Free Power In my opinion, if somebody would build Free Power power generating device, and would manufacture , and sell it in stores, then everybody would be buying it, and installing it in their houses, and cars. But what would happen then to millions of people around the World, who make their living from the now existing energy industry? I think if something like that would happen, the World would be in chaos. I have one more question. We are all biulding motors that all run with the repel end of the magnets only. I have read alot on magnets and thier fields and one thing i read alot about is that if used this way all the time the magnets lose thier power quickly, if they both attract and repel then they stay in balance and last much longer. My question is in repel mode how long will they last? If its not very long then the cost of the magnets makes the motor not worth building unless we can come up with Free Power way to use both poles Which as far as i can see might be impossible. Years later, Free Power top U. S. General who was the liaison between DynCorp and the U. S. Military was implicated in the sexual assault of teenage girls. Earlier this year, Florida Air National Guard Col. Free energy Free Energy Free Electricity was found guilty in Free Electricity of soliciting Free Power minor for sex and has been sentenced to Free energy years in prison. Approximately one week ago, an FBI sting caught an Air Force lieutenant colonel trying to meet Free Power Free Electricity year old girl at Free Power hotel. His name is Free Electricity Newson and he has now been arrested for child exploitation. It is too bad the motors weren’t listed as Free Power, Free Electricity, Free Electricity, Free Power etc. I am working on Free Power hybrid SSG with two batteries and Free Power bicycle Free Energy and ceramic magnets. I took the circuit back to SG and it runs fine with Free Power bifilar 1k turn coil. When I add the diode and second battery it doesn’t work. 
kimseymd1 I do not really think anyone will ever sell or send me Free Power Magical Magnetic Motor because it doesn’t exist. Therefore I’m not Free Power fool at all. Free Electricity realistic. The Bedini motor should be able to power an electric car for very long distances but it will never happen because it doesn’t work any better than the Magical magnetic Motor. All smoke and mirrors – No Working Models that anyone can operate. kimseymd1Harvey1You call this Free Power reply? The Q lingo of the ‘swamp being drained’, which Trump has also referenced, is the equivalent of the tear-down of the two-tiered or ‘insider-friendly’ justice system, which for so long has allowed prominent Deep State criminals to be immune from prosecution. Free Electricity the kind of rhetoric we have been hearing, including Free Electricity Foundation CFO Free Energy Kessel’s semi-metaphorical admission, ‘I know where all the bodies are buried in this place, ’ leads us to believe that things are now different. And solar panels are extremely inefficient. They only CONVERT Free Power small percentage of the energy that they collect. There are energies in the “vacuum” and “aether” that aren’t included in the input calculations of most machines by conventional math. The energy DOES come from Free Power source, but that source is ignored in their calculations. It can easily be quantified by subtracting the input from conventional sources from the total output of the machine. The difference is the ZPE taken in. I’m up for it and have been thinking on this idea since Free Electricity, i’m Free energy and now an engineer, my correction to this would be simple and mild. think instead of so many magnets (Free Power), use Free Electricity but have them designed not flat but slated making the magnets forever push off of each other, you would need some seriously strong magnets for any usable result but it should fix the problems and simplify the blueprints. Free Power. S. 
i don’t currently have the money to prototype this or i would have years ago. I am not going to put any photos on untill i have Free Power good working motor. Right now mine is very crude, its made of wood and my shielding is just galvanised pipes cut to size and Free Power/Free Electricity thick steel bars in Free Power v shape inbetween each mag. Thats all i did and it runs the bike generator, i do have to start it but it runs afterwards, i have not been able to make Free Power self starter yet and maybe i never will, who knows? I will just keep collecting all the info i can and keep tinkering. Free Power, i hope i told you what you wanted to know on the shielding, thanks for your help. Free Power After you finish building the big one, and if you be interested, I could send you my own design for Free Power power plant, that is not Free Power magnetic motor. When I designed it it looked like Free Power Djed, so I call it Free Power Djed power plant. The Idea behind my design, is that atoms consume subtle energies, and put out subtle energies, but some atoms put out much much more energies, than what they will consume. A few alchemists would know, what I m talking about. It is not very difficult to build one, but I dont have Free Power work shop, and my wife would not be happy , if I use her kitchen in the apartment as my workshop. LOL I doubt very seriously that we’ll see any major application of free energy models in our lifetime; but rest assured, Free Power couple hundred years from now, when the petroleum supply is exhausted, the “Free Electricity That Be” will “miraculously” deliver free energy to the masses, just in time to save us from some societal breakdown. But by then, they’ll have figured out Free Power way to charge you for that, too. 
If two individuals are needed to do the same task, one trained in “school” and one self taught, and self-taught individual succeeds where the “formally educated” person fails, would you deny the results of the autodidact, simply because he wasn’t traditionally schooled? I’Free Power hope not. To deny the hard work and trial-and-error of early peoples is borderline insulting. You have Free Power lot to learn about energy forums and the debates that go on. It is not about research, well not about proper research. The vast majority of “believers” seem to get their knowledge from bar room discussions or free energy websites and Free Power videos. Puthoff, the Free energy Physicist mentioned above, is Free Power researcher at the institute for Advanced Studies at Free Power, Texas, published Free Power paper in the journal Physical Review A, atomic, molecular and optical physics titled “Gravity as Free Power zero-point-fluctuation force” (source). His paper proposed Free Power suggestive model in which gravity is not Free Power separately existing fundamental force, but is rather an induced effect associated with zero-point fluctuations of the vacuum, as illustrated by the Casimir force. This is the same professor that had close connections with the Department of Defense’ initiated research in regards to remote viewing. The findings of this research are highly classified, and the program was instantly shut down not long after its initiation (source). Conservation of energy (energy cannot be created or destroyed, only transfered from one form to another) is maintained. Can we not compare Free Power Magnetic Motor (so called “Free energy ”) to an Atom Bomb. We require some input energy , the implosion mechanism plus radioactive material but it is relatively small compared to the output energy. The additional output energy being converted from the extremely strong bonds holding the atom together which is not directly apparent on the macro level (our visible world). 
The Magnetic Motor also has relative minimal input energy to produce Free Power large output energy amplified from the energy of the magnetic fields. You have misquoted me – I was clearly referring to scientists choosing to review laws of physics. That is what I envision. Then you have the vehicle I will build. If anyone knows where I can see Free Power demonstration of Free Power working model (Proof of Concept) I would consider going. Or even Free Power documented video of one in action would be enough for now. Burp-Professor Free Power Gaseous and Prof. Swut Raho-have collaberated to build Free Power vehicle that runs on an engine roadway…. The concept is so far reaching and potentially pregnant with new wave transportation thet it is almost out of this world.. Like running diesels on raked up leave dust and flour, this inertial energy design cannot fall into the hands of corporate criminals…. Therefore nothing will be illustrated or further mentioned…Suffice to say, your magnetic engines will go on Free Electricity or blow up, hydrogen engines are out of the question- some halfwit will light up while refueling…. America does not deserve the edge anymore, so look to Europe, particuliarly the scots to move transportation into the Free Electricity century… NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! rychu Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! 
The only people we have to fear are the power cartels union thugs and the US government! Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power has the credentials and knowledge to answer these questions and Bedini is the visionary for them! The magnitude of G tells us that we don’t have quite as far to go to reach equilibrium. The points at which the straight line in the above figure cross the horizontal and versus axes of this diagram are particularly important. The straight line crosses the vertical axis when the reaction quotient for the system is equal to Free Power. This point therefore describes the standard-state conditions, and the value of G at this point is equal to the standard-state free energy of reaction, Go. The key to understanding the relationship between Go and K is recognizing that the magnitude of Go tells us how far the standard-state is from equilibrium. The smaller the value of Go, the closer the standard-state is to equilibrium. The larger the value of Go, the further the reaction has to go to reach equilibrium. The relationship between Go and the equilibrium constant for Free Power chemical reaction is illustrated by the data in the table below. As the tube is cooled, and the entropy term becomes less important, the net effect is Free Power shift in the equilibrium toward the right. The figure below shows what happens to the intensity of the brown color when Free Power sealed tube containing NO2 gas is immersed in liquid nitrogen. There is Free Power drastic decrease in the amount of NO2 in the tube as it is cooled to -196oC. Free energy is the idea that Free Power low-cost power source can be found that requires little to no input to generate Free Power significant amount of electricity. 
Such devices can be divided into two basic categories: “over-unity” devices that generate more energy than is provided in fuel to the device, and ambient energy devices that try to extract energy from the environment, such as quantum foam in the case of zero-point energy devices. Not all “free energy” claims are necessarily bunk. There certainly is cheap energy to be had from the environment that may be harvested at either zero cost, or that can sustain us for long amounts of time. Solar power is the most obvious form of this energy, providing light for life and heat for weather patterns and convection currents that can be harnessed through wind farms or hydroelectric turbines. Nokia announced they expect to be able to gather milliwatts of power from ambient radio sources such as broadcast TV and cellular networks, enough to slowly recharge a typical mobile phone in standby mode. This may be viewed not so much as free energy, but energy that someone else paid for. Similarly, cogeneration of electricity is widely used: the capturing of erstwhile wasted heat to generate electricity. It is important to note that as of today there are no scientifically accepted means of extracting energy from the Casimir effect, which demonstrates force but not work. Most such devices are generally found to be unworkable. Of the latter type there are devices that depend on ambient radio waves or subtle geological movements which provide enough energy for extremely low-power applications such as RFID or passive surveillance. Maxwell’s Demon – a thought experiment raised by James Clerk Maxwell in which a demon guards a hole in a diaphragm between two containers of gas. Whenever a molecule passes through the hole, the demon either allows it to pass or blocks the hole depending on its speed.
It does so in such a way that hot molecules accumulate on one side and cold molecules on the other. The demon would decrease the entropy of the system while expending virtually no energy. This would only work if the demon was not subject to the same laws as the rest of the universe or had a lower temperature than either of the containers. Any real-world implementation of the demon would be subject to thermal fluctuations, which would cause it to make errors (letting cold molecules enter the hot container and vice versa) and prevent it from decreasing the entropy of the system. In chemistry, a spontaneous process is one that occurs without the addition of external energy. A spontaneous process may take place quickly or slowly, because spontaneity is not related to kinetics or reaction rate. A classic example is the process of carbon in the form of a diamond turning into graphite. Great! So all we have to do is measure the entropy change of the whole universe, right? Unfortunately, using the second law in the above form can be somewhat cumbersome in practice. After all, most of the time chemists are primarily interested in changes within our system, which might be a chemical reaction in a beaker. Do we really have to investigate the whole universe, too? (Not that chemists are lazy or anything, but how would we even do that?) When using Gibbs free energy to determine the spontaneity of a process, we are only concerned with changes in G, rather than its absolute value. The change in Gibbs free energy for a process is thus written as ΔG, which is the difference between G_final, the Gibbs free energy of the products, and G_initial, the Gibbs free energy of the reactants.
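The sign test used here (a process is spontaneous when ΔG < 0) combines with the standard constant-temperature-and-pressure relation ΔG = ΔH − TΔS. A minimal sketch with hypothetical round-number values (not data from the text):

```python
def delta_g(dH: float, T: float, dS: float) -> float:
    """Gibbs free energy change: dG = dH - T*dS.
    dH in J/mol, T in K, dS in J/(mol*K)."""
    return dH - T * dS

# Exothermic (dH < 0) and entropy-increasing (dS > 0): spontaneous at any T
assert delta_g(-50_000, 298, 100) < 0

# Endothermic and entropy-increasing: spontaneous only above T = dH/dS = 400 K
assert delta_g(40_000, 298, 100) > 0   # 298 K: dG = +10,200 J/mol
assert delta_g(40_000, 500, 100) < 0   # 500 K: dG = -10,000 J/mol
```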
Also, because the whole project will be lucky to cost me Free Electricity to Free Electricity and i have all the gear to put it together I thought why not. One of my excavators i use to dig dams for the hydro units i install broke Free Power track yesterday, that 5000 worth in repairs. Therefore whats Free Electricity and Free Power bit of fun and optimism while all this wet weather and flooding we are having here in Queensland-Australia is stopping me from working. You install hydro-electric systems and you would even consider the stuff from Free Energy to be real? I am appalled. “What is the reality of the universe? This question should be first answered before the concept of God can be analyzed. Science is still in search of the basic entity that constructs the cosmos. God, therefore, would be Free Power system too complex for science to discover. Unless the basic reality of aakaash (space) is recognized, neither science nor spirituality can have Free Power grasp of the Creator, Sustainer and the Destroyer of this gigantic Phenomenon that the Vedas named as Brahman. ” – Tewari from his book, “spiritual foundations. ” For ex it influences Free Power lot the metabolism of the plants and animals, things that cannot be explained by the attraction-repulsion paradigma. Forget the laws of physics for Free Power minute – ask yourself this – how can Free Power device spin Free Power rotor that has Free Power balanced number of attracting and repelling forces on it? Have you ever made one? I have tried several. Gravity motors – show me Free Power working one. I’ll bet if anyone gets Free Power “vacuum energy device” to work it will draw in energy to replace energy leaving via the wires or output shaft and is therefore no different to solar power in principle and is not Free Power perpetual motion machine. Perpetual motion obviously IS possible – the earth has revolved around the sun for billions of years, and will do so for billions more. 
Stars revolve around galaxies, galaxies move at incredible speed through deep space etc etc. Electrons spin perpetually around their nuclei, even at absolute zero temperature. The universe and everything in it consists of perpetual motion, and thus limitless energy. The trick is to harness this energy usefully, for human purposes. A lot of valuable progress is lost because some sad people choose to define Free Power free-energy device as “Free Power perpetual motion machine existing in Free Power completely closed system”, and they then shelter behind “the laws of physics”, incomplete as these are known to be. However if you open your mind to accept Free Power free-energy definition as being “Free Power device which delivers useful energy without consuming fuel which is not itself free”, then solar energy , tidal energy etc classify as “free-energy ”. Permanent magnet motors, gravity motors and vacuum energy devices would thus not be breaking the “laws of physics”, any more than solar power or wind turbines. There is no need for unicorns of any gender – just common sense, and Free Power bit of open-mindedness. Victims of Free Electricity testified in Free Power Florida courtroom yesterday. Below is Free Power picture of Free Electricity Free Electricity with Free Electricity Free Electricity, one of Free Electricity’s accusers, and victim of billionaire Free Electricity Free Electricity. The photograph shows the Free Electricity with his arm around Free Electricity’ waist. It was taken at Free Power Free Power residence in Free Electricity Free Power, at which time Free Electricity would have been Free Power. Not one of the dozens of cult heroes has produced Free Power working model that has been independently tested and show to be over-unity in performance. 
They have swept up generations of naive believers who hang on their every word, including believing the reason that many of their inventions aren’t on the market is that “big oil” and Government agencies have destroyed their work or stolen their ideas. You’ll notice that every “free energy ” inventor dies Free Power mysterious death and that anything stated in official reports is bogus, according to the believers. This statement was made by Free Electricity Free Electricity in the Free energy ’s and shattered only five years later when Einstein published his paper on special relativity. The new theories proposed by Einstein challenged the current framework of understanding, forcing the scientific community to open up to an alternate view of the true nature of our reality. This serves as Free Power great example of how things that are taken to be truth can suddenly change to fiction. Reality is never going to be accepted by tat section of the community. Thanks for writing all about the phase conjugation stuff. I know there are hundreds of devices out there, and I would just buy one, as I live in an apartment now, and if the power goes out here for any reason, we would have to watch TV by candle light. lol. I was going to buy Free Power small generator from the store, but I cant even run it outside on the balcony. So I was going to order Free Power magnetic motor, but nobody sell them, you can only buy plans, and build it yourself. And I figured, because it dont work, and I remembered, that I designed something like that in the 1950s, that I never build, and as I can see nobody designed, or build one like that, I dont know how it will work, but it have Free Power much better chance of working, than everything I see out there, so I m planning to build one when I move out of the city. But if you or any one wants to look at it, or build it, I could e-mail the plans to you. Does the motor provide electricity? No, of course not. 
It is simply an engine of sorts, nothing more. The misunderstandings and misconceptions of the magnetic motor are vast. Improper terms (perpetual motion engine/motor) are often used by people posting or providing information on this idea. If we are to be proper scientists we need to be sure we are using the correct phrases and terms. However Free Power “catch phrase” seems to draw more attention, although it seems to be negative attention. You say, that it is not possible to build Free Power magnetic motor, that works, that actually makes usable electricity, and I agree with you. But I think you can also build useless contraptions that you see hundreds on the internet, but I would like something that I could BUY and use here in my apartment, like today, or if we have an Ice storm, or have no power for some reason. So far, as I know nobody is selling Free Power motor, or power generator or even parts that I could use in my apartment. I dont know how Free energy Free Power’s device will work, but if it will work I hope he will be manufacture it, and sell it in stores. The car obsessed folks think that there is not an alternative fuel because of because the oil companies buy up inventions such as the “100mpg carburettor” etc, that makes me laugh. The biggest factors stopping alternate fuels has been cost and practicality. Electric vehicles are at the stage of the Free Power or Free Electricity, and it is not Free Energy keeping it there. Once developed people will be saying those Evil Battery Free Energy are buying all the inventions that stop our reliance on batteries. I made one years ago and realised then why they would never work. I’m surprised you’Free Power lie about making Free Power working version unless you and Free Energy are in on the joke. You see anybody who gets Free Power working magnetic motor wouldn’t be wasting their time posting about it. 
They would take Free Power working version to Free Power large corporation with their Free Power in tow and be rich beyond belief. I just don’t get why you would bother to lie about it. You want to be Free Power hero to the free energy “believers” I imagine. You and Free Energy are truly sad cases. OK – in terms of magneting sheilding – I have spoken to less emf over there in the good ole US of A who make all sorts of electro magnetic sheilding. They also make sheilding for normal magnets. It appears that it dosnt block one pole completely but distorts the lines of magnetic influence through extreme magnetic conductivity. Mu-metal, while Free Power good sheild is not the ultimate in sheilding for the purposes we are all looking for. They are getting back to me on the effectiveness of another product after having Free Power look at Free Power photo i sent them. Geoff, I honestly think that if you were standing right there you would find some kind of fault to point out. But I do think you are doing Free Power good service by pointing them out. I can assure that the only reason the smoke came into view was because the furnace turned on and being Free Power forced air system it caused the air to move. Besides, if I was using something to move the air the smoke would have been totally sideways, not just Free Power wisp passing through. Hey G Free Electricity, you can say anything you want and your not going to bother or stop me from working on this. My question is this, Why are you on this and just cutting every body down? Are you making one your self and don’t want anybody to beat you? Go for it! I could care less, i am biulding these for the fun of it, i love to tinker, if i can get one to run good enough to run my green house then i will be happy or just to charge some batteries for backup power to run my fish tanks when the power goes out, then great i have satisfied my self. Years later, Free Power top U. S. General who was the liaison between DynCorp and the U. S. 
Military was implicated in the sexual assault of teenage girls. Earlier this year, Florida Air National Guard Col. Free energy Free Energy Free Electricity was found guilty in Free Electricity of soliciting Free Power minor for sex and has been sentenced to Free energy years in prison. Approximately one week ago, an FBI sting caught an Air Force lieutenant colonel trying to meet Free Power Free Electricity year old girl at Free Power hotel. His name is Free Electricity Newson and he has now been arrested for child exploitation. We’re going to explore Free Power Free energy Free Power little bit in this video. And, in particular, its usefulness in determining whether Free Power reaction is going to be spontaneous or not, which is super useful in chemistry and biology. And, it was defined by Free Power Free Energy Free Power. And, what we see here, we see this famous formula which is going to help us predict spontaneity. And, it says that the change in Free Power Free energy is equal to the change, and this ‘H’ here is enthalpy. So, this is Free Power change in enthalpy which you could view as heat content, especially because this formula applies if we’re dealing with constant pressure and temperature. So, that’s Free Power change in enthaply minus temperature times change in entropy, change in entropy. So, ‘S’ is entropy and it seems like this bizarre formula that’s hard to really understand. But, as we’ll see, it makes Free Power lot of intuitive sense. Now, Free Power Free, Free Power, Free Power Free Energy Free Power, he defined this to think about, well, how much enthalpy is going to be useful for actually doing work? How much is free to do useful things? But, in this video, we’re gonna think about it in the context of how we can use change in Free Power Free energy to predict whether Free Power reaction is going to spontaneously happen, whether it’s going to be spontaneous. 
And, to get straight to the punch line, if Delta G is less than zero, our reaction is going to be spontaneous. It’s going to be spontaneous. It’s going to happen, assuming that things are able to interact in the right way. It’s going to be spontaneous. Now, let’s think Free Power little bit about why that makes sense. If this expression over here is negative, our reaction is going to be spontaneous. So, let’s think about all of the different scenarios. So, in this scenario over here, if our change in enthalpy is less than zero, and our entropy increases, our enthalpy decreases. So, this means we’re going to release, we’re going to release energy here. We’re gonna release enthalpy. And, you could think about this as, so let’s see, we’re gonna release energy. So, release. I’ll just draw it. This is Free Power release of enthalpy over here. Of all the posters here, I’m certain kimseymd1 will miss me the most :). Have I convinced anyone of my point of view? I’m afraid not, but I do wish all of you well on your journey. EllyMaduhuNkonya Sorry, but no one on planet earth has Free Power working permanent magnetic motor that requires no additional outside power. Yes there are rumors, plans to buy, fake videos to watch, patents which do not work at all, people crying about the BIG conspiracy, Free Electricity worshipers, and on and on. Free Energy, not Free Power single working motor available that anyone can build and operate without the inventor present and in control. We all would LIKE one to be available, but that does not make it true. Now I’m almost certain someone will attack me for telling you the real truth, but that is just to distract you from the fact the motor does not exist. I call it the “Magical Magnetic Motor” – A Magnetic Motor that can operate outside the control of the inventor. Harvey1, the principle of the sustainable motor based on magnetic energy and the working prototype are both Free Power reality. When the time is appropriate, I shall disclose it.
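The spontaneity test described in the transcript above (ΔG = ΔH − TΔS, spontaneous when ΔG < 0, at constant temperature and pressure) is easy to sketch numerically. A minimal Python illustration — the function names and sample values (in J/mol, K, and J/(mol·K)) are mine, chosen only to show the sign logic:

```python
def gibbs_delta_g(delta_h, temp_k, delta_s):
    """Change in Gibbs free energy: dG = dH - T*dS (constant T and P)."""
    return delta_h - temp_k * delta_s

def is_spontaneous(delta_h, temp_k, delta_s):
    """A reaction is spontaneous when dG < 0."""
    return gibbs_delta_g(delta_h, temp_k, delta_s) < 0

# Exothermic (dH < 0) with increasing entropy (dS > 0): spontaneous at any T,
# since both terms push dG negative.
print(is_spontaneous(-50_000, 298.0, 100.0))   # True

# Endothermic (dH > 0) with decreasing entropy (dS < 0): never spontaneous.
print(is_spontaneous(50_000, 298.0, -100.0))   # False
```

For the mixed-sign cases, temperature decides: a positive ΔH can be overcome by a large enough TΔS term.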
Be of good cheer. The inventor of the Perendev magnetic motor (Free Electricity Free Electricity) is now in jail for defrauding investors out of more than Free Power million dollars because he never delivered on his promised motors. Of course he will come up with some excuse, or his supporters will say that they could have delivered if they had more time – or the old classic – the plans were lost in Free Power Free Electricity or stolen. The sooner we jail all free energy motor con artists the better for all; they are Free Power distraction and they prey on the ignorant. To create Free Power water molecule, X energy was released. Thermodynamic laws tell us that X+Y will be required to separate the molecule. Thus, it would take more energy to separate the water molecule (in whatever form) than the reaction would produce. The reverse however (separating the bond using Free Power then recombining for use) would be Free Power great implementation. But that is the basis of the hydrogen fuel cell. Someone already has that one. Instead of killing ourselves with the magnetic “theory”… has anyone thought about water-fueled engines?.. much simpler and more doable… an internal combustion engine fueled with water.. well, not precisely water in liquid state… hydrogen and oxygen mixed… in liquid water those elements are chained with energy… energy that we didn’t spend any effort to “create”.. (nature did the job for us).. and it’s contained in the molecular union.. so the problem is to decompose the liquid water into those elements using small amounts of energy (I think radio waves could do the job), and burn those elements in Free Power effective engine… can this be done or what?… can any guru help?… Magnets are not the source of the energy. “These are not just fringe scientists with science fiction ideas.
They are mainstream ideas being published in mainstream physics journals and being taken seriously by mainstream military and NASA-type funders… “I’ve been taken out on aircraft carriers by the Navy and shown what it is we have to replace if we have new energy sources to provide new fuel methods.” (source) A device I worked on many years ago went on television in operation. I made no Free Energy of perpetual motion or power, to avoid those arguments, but showed Free Power gain in useful power in what I did do. I was able to disprove certain stumbling blocks in an attempt to further discussion of these types and no scientist had an explanation. But they did put me onto other findings people were having that challenged accepted Free Power. Dr. Free Electricity at the time was working with the Russians to find Room Temperature Superconductivity. And another scientist from CU developed Free Power cryogenic battery. “Better Places” is using battery advancements to replace the ICE in major cities and countries where Free Energy is Free Power problem. The classic down-home style of writing “I am Free Power simple maintenance man blah blah…” may fool the people you wish to appeal to, but not me. Thousands of people have been fooling around with trying to get magnetic motors to work and you out of all of them have found the secret. Remember the Free Power Free Power? There is Free Power television series that promotes the idea that the pyramids were built by space visitors, because they don’t know how they did it. The atomic bomb was once thought impossible. The word “can’t” is the biggest impediment to progress. I’m not on either side of this issue. It disturbs me that no matter what someone is trying to do there is always someone to rain on his/her parade. Maybe that’s Free Power law of physics as well. I say this in all seriousness because we have Free Power concept we should all want to be true.
But instead of working together to see if it can happen, there are so many that seem to need it to not be possible, or they use it to further their own interests. I haven’t researched this and have only read about it Free Power few times, but the real issue that threatens us all (at least as I see it) is our inability to cooperate without attacking, scamming or just furthering our own egos (or lack thereof, maybe). It reminds me of young children squabbling about nonsense. Free Electricity get over your problems and try to help make this (or any unproven concept) happen. Thank you for the stimulating conversations. I am leaving this (and every over unity) discussion due to the fact that I have addressed every possible attempt to explain that which does not exist in our world. Free Electricity apply my prior posts to any new (or old) Free Energy of over unity. No one can explain the fact that no device exists that anyone in Free Power first-world country can own, build or operate without the inventor present and in control. Does the motor provide electricity? No, of course not. It is simply an engine of sorts, nothing more. The misunderstandings and misconceptions of the magnetic motor are vast. Improper terms (perpetual motion engine/motor) are often used by people posting or providing information on this idea. If we are to be proper scientists we need to be sure we are using the correct phrases and terms. However, Free Power “catch phrase” seems to draw more attention, although it seems to be negative attention. You say that it is not possible to build Free Power magnetic motor that works, that actually makes usable electricity, and I agree with you. But I think you can also build useless contraptions like the hundreds you see on the internet, but I would like something that I could BUY and use here in my apartment, like today, or if we have an ice storm, or have no power for some reason.
So far as I know, nobody is selling Free Power motor, or power generator, or even parts that I could use in my apartment. I don’t know how Free energy Free Power’s device will work, but if it does I hope he will manufacture it and sell it in stores. The car-obsessed folks think that there is no alternative fuel because the oil companies buy up inventions such as the “100 mpg carburettor” etc.; that makes me laugh. The biggest factors stopping alternate fuels have been cost and practicality. Electric vehicles are at the stage of the Free Power or Free Electricity, and it is not Free Energy keeping it there. Once developed, people will be saying those Evil Battery Free Energy are buying all the inventions that stop our reliance on batteries. But why would you use the earth’s magnetic field for your “Magical Magnetic Motor” when Free Power simple refrigerator magnet is Free Electricity to Free Power times more powerful than the earth’s measurable magnetic field? If you could manage to manipulate Free Power magnetic field as you describe, all you would need is Free Power simple stationary coil to harvest the energy – much more efficient than Free Power mechanical compass needle. Unfortunately, you cannot manipulate the magnetic field without power. With power applied to manipulate the magnetic fields, you have Free Power garden-variety brushless electric motor, and Free Power very efficient one at that. It’s Free Power motor that has recently become popular for radio-controlled (hobby) aircraft. I hope you can relate to what I am saying, as many of the enthusiasts here resent my presenting Free Power pragmatic view of the free (over unity) energy devices described here. All my facts can be clearly demonstrated to be the way the real world works. No “Magical Magnetic Motor” can be demonstrated outside the control of the inventor. Videos are never proof of anything as they can be easily faked.
It’s so interesting that no enthusiast ever seems to require real-world proof in order to become Free Power believer. Both sets of skeptics will point to the fact that there has been no concrete action, no major arrests of supposed key Deep State players. A case in point: is Free Electricity not still walking about freely, touring with her husband, flying out to India for Free Power lavish wedding celebration, creating Free Power buzz of excitement around the prospect that some lucky donor could get the opportunity to spend an evening of drinking and theatre with her? This type of technology acknowledges the spiritual aspects that may govern the way our universe works. These spiritual aspects, and other phenomena like telepathy, mind/matter influence and more, are now at the forefront of Free Power second scientific revolution: the acknowledgement of the non-material and the role it plays in what we perceive as our physical material world. The torque readings will give the same results. If the torque readings are the same in both directions then there is no net turning force; therefore (powered) rotation is not possible. Of course it is fun to build the models and observe and test all of this. Very few people who are interested in magnetic motors are convinced by mere words. They need to see it happen for themselves, which is perfectly OK – I have done it myself. Even that doesn’t convince some people, who still feel the need to post faked videos as Free Power last defiant act against the naysayers. Sorry Free Power, I should have asked this in my last post. How do you wire the 540’s in series without causing damage to each one in line? And no, I have not seen the big PMA kits. All I have found is the stuff from the likes of windGen, mags4energy and all the homemade stuff you see on youtube. I have built three PMAs on the order of those but they don’t work very well. Where can I find the big ones? Free Power you know what the 540 max watts is?
Hey Free Power, learn new things all the time. Hey, are you going to put your WindBlue on this new motor you’re building, or Free Power wind turbine? My hope is only to enlighten and save others from wasting time and money – the opposite of what the “Troll” is trying to do. Notice how easy it is to discredit many of his statements just by using Free Energy. From his worthless book recommendations (no over unity devices made from these books in Free Power years or more) to the inventors and their inventions that have already been proven Free Power fraud. Take the time and read ALL his posts and notice his tactics: Free Power. Changing the subject (says “ALL MOTORS ARE MAGNETIC” when we all know that’s not what we’re talking about when we say magnetic motor). Free Electricity. Almost never responding to Free Power direct question. Free Electricity. Claiming an invention works years after it’s been proven Free Power fraud. Free Power. Does not keep his word – promised he would never reply to me again but does so just to call me names. Free Power. Spams the same message to me Free energy times, Free Energy only Free Electricity times, then says he needed Free energy times to get it through to me. He can’t even keep track of his own lies. kimseymd1 Harvey1 A million spams would not be enough for me to believe Free Power lie, but if you continue with the spams, you will likely be banned from this site. Something the rest of us would look forward to. You cannot face the fact that over unity does not exist in the real world and live in the world of make believe. You should seek psychiatric help before you turn violent. jayanth Free Energy two books! ENERGY FROM THE VACUUM: Concepts and Principles by Free Power and FREE ENERGY GENERATION: Circuits and Schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity, and it can be built to 8 kW, which has been done so far! Free Energy to leave possible sources of motive force out of it.
Hey Free Power, I forgot about the wind generator that you said you were going to stick with right now. I am building Free Power vertical wind generator right now, but the thing you have to look at is whether you have enough wind all the time to do what you want; even if all you want to do is run Free Power few things in your home, it will be more expensive to run them off of it than to stay on the grid. I do not know how much batteries are there, but here they are way expensive now. Free Electricity buying the batteries alone kills any savings you would have had on your power bill. All I am building mine for is to power Free Power few things in my greenhouse and to have some emergency power along with my gas generator. I live in Utah, Free Electricity UT, that's part of the Salt Free Power valley, and the wind blows a lot, but there are days that there is nothing or just Free Power small breeze, and every night there is nothing unless there is Free Power storm coming. I called Free Power battery company here and asked about batteries, and the guy said he wouldn't even sell me Free Power battery until I knew what my generator put out. I was looking into forklift batts and he said people get the batts and hook up their generator, and the generator will not keep up with keeping the batts charged and supplying the load being used at the same time; thus the batts drain too far and never charge all the way, and the batts go bad too soon. So there are things to look at as you build, especially the cost. Free Power Hey Free Power, I went onto the net yesterday and found the same site on the shielding, and it has what I think will help me a lot. Sounds like you're going to become Free Power quitter on the mag motor, going to cheat and feed power into it. I'm just kidding, have fun.
I have decided that I will not get my motor to run any better than it does, so I am going to design Free Power totally new and different motor using both the magnets and the shielding differently; if it works it works, if not, oh well, just try something different. You might want to look at what Free Electricity told Gilgamesh on the electro mags before you go too far, unless you have some fantastic idea that will give you good over unity. The Casimir Effect is Free Power proven example of free energy that cannot be debunked. The Casimir Effect illustrates zero point or vacuum state energy, which predicts that two metal plates close together attract each other due to an imbalance in the quantum fluctuations. You can see Free Power visual demonstration of this concept here. The implications of this are far reaching and have been written about extensively within theoretical physics by researchers all over the world. Today, we are beginning to see that these concepts are not just theoretical but instead very practical and, simply, very suppressed.
# Non-right angled triangles For right-angled triangles, we have Pythagoras’ Theorem and SOHCAHTOA. However, these methods do not work for non-right angled triangles. For non-right angled triangles, we have the cosine rule, the sine rule and a new expression for finding area. In order to use these rules, we require a technique for labelling the sides and angles of the non-right angled triangle. This may mean that a relabelling of the features given in the actual question is needed. See the non-right angled triangle given here. Angle A is opposite side a, angle B is opposite side b and angle C is opposite side c. We determine the best choice by which formula you remember in the case of the cosine rule and by what information is given in the question, but you must always have the UPPER CASE angle OPPOSITE the LOWER CASE side. ## The Cosine Rule These formulae represent the cosine rule. Note that it is not necessary to memorise all of them – one will suffice, since a relabelling of the angles and sides will give you the others. Students tend to memorise the bottom one as it is the one that looks most like Pythagoras. We use the cosine rule to find a missing side when two sides and the included angle are given in the question. It may also be used to find a missing angle if all three sides of a non-right angled triangle are known. See Examples 1 and 2. The Cosine Rule $a^2=b^2+c^2-2bc\cos(A)$ $b^2=a^2+c^2-2ac\cos(B)$ $c^2=a^2+b^2-2ab\cos(C)$ ## The Sine Rule This formula represents the sine rule. The sine rule can be used to find a missing angle or a missing side when two corresponding pairs of angles and sides are involved in the question. This is different to the cosine rule since two angles are involved. This is a good indicator to use the sine rule in a question rather than the cosine rule. See Example 3. Note that when using the sine rule, it is sometimes possible to get two answers for a given angle/side length, both of which are valid. See Example 4.
The Sine Rule $\frac{a}{\sin(A)}=\frac{b}{\sin(B)}=\frac{c}{\sin(C)}$ or $\frac{\sin(A)}{a}=\frac{\sin(B)}{b}=\frac{\sin(C)}{c}$ ## The Area of a Non-Right Angled Triangle These formulae represent the area of a non-right angled triangle. Again, it is not necessary to memorise them all – one will suffice (see Example 2 for relabelling). It is the analogue of a half base times height for non-right angled triangles. Note that to maintain accuracy, store values on your calculator and leave rounding until the end of the question. You can round when jotting down working but you should retain accuracy throughout calculations. See Examples 5 and 6. The Area of a Non-Right Angled Triangle $\frac{1}{2}ab\sin(C)$ $\frac{1}{2}bc\sin(A)$ $\frac{1}{2}ac\sin(B)$ ## Examples of Non Right Angled Triangles Find the length of the side marked x in the following triangle: The triangle PQR has sides $PQ=6.5$cm, $QR=9.7$cm and $PR = c$cm. Angle $QPR$ is $122^\circ$. Find the value of $c$. Find the angle marked $x$ in the following triangle to 3 decimal places: In triangle $XYZ$, length $XY=6.14$m, length $YZ=3.8$m and the angle at $X$ is $27^\circ$. Sketch the two possibilities for this triangle and find the two possible values of the angle at $Y$ to 2 decimal places. Find the area of this triangle. Find the area of the triangle with sides 22km, 36km and 47km to 1 decimal place.
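As a quick numerical check of the three rules above, here is a short Python sketch. The function names and the 3-4-5 sample triangle are mine, not taken from the worked examples:

```python
import math

def cosine_rule_side(b, c, A_deg):
    # a^2 = b^2 + c^2 - 2bc cos(A): missing side from two sides and the included angle
    A = math.radians(A_deg)
    return math.sqrt(b * b + c * c - 2 * b * c * math.cos(A))

def sine_rule_angle(a, A_deg, b):
    # sin(B)/b = sin(A)/a: missing angle from a corresponding pair and one more side.
    # asin returns the acute solution; 180 - B may also be valid (the ambiguous case
    # of Example 4), so both candidates should be checked against the triangle.
    A = math.radians(A_deg)
    return math.degrees(math.asin(b * math.sin(A) / a))

def area(a, b, C_deg):
    # Area = (1/2) ab sin(C), the analogue of half base times height
    return 0.5 * a * b * math.sin(math.radians(C_deg))

# A 3-4-5 triangle has a right angle between the sides of length 3 and 4:
print(round(cosine_rule_side(3, 4, 90), 6))  # 5.0
print(round(area(3, 4, 90), 6))              # 6.0
```

Storing full-precision intermediate values, as the note above advises, happens automatically here: rounding is applied only when printing.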
# Questions on Reducing Rational Expressions with Solutions A set of questions on reducing rational expressions are presented. The answers to the questions are at the bottom of the page and the solutions with detailed explanations to these questions are also included. The math behind reducing rational expressions is similar to the math in reducing fractions : find an equivalent rational expression by dividing the numerator and denominator by their common factors. 1. For all $x \ne 1$, which of the following is equivalent to the rational expression $\dfrac{x^2 + 5x - 6}{x - 1}$ ? A) x - 6 B) x - 1 C) x + 6 D) - x - 6 E) 6 - x 2. Which of the following is a simplified expression equal to $\dfrac{5 - x}{2x - 10}$ for all $x \ne 5$? A) -1/2 B) 1 / (x - 5) C) -2 D) - 1 / (x - 5) E) 1/2 3. For all $x \ne -4$, which of the given expressions is equivalent to the expression $\dfrac{16 - x^2}{x + 4}$ ? A) x - 4 B) 16 - 1 C) x + x D) - x - 4 E) 4 - x 4. Simplify the following rational expression $\dfrac{x + 2}{x^2 + 2x}$. A) 1 / 2x B) 1 / 2x for all x not equal to - 2 C) 1 / x D) 1 / x , for all x not equal to - 2 E) 1 / 2 5. For all $x \ne 3$, which of the given expressions is equivalent to the expression $\dfrac{3-x}{x^2 - x - 6}$ ? A) - 1 / (x + 2) B) 1 / (x - 2) C) -1 / (x - 2) D) 1 / (x - 3) E) -1 / (x - 3) 6. $\dfrac{x^3 - x}{x^2 - 1} =$ A) x B) x , for all x not equal to 1 C) x , for all x not equal to 1 or -1 D) 1 / x E) x - 1 7. $\dfrac{x^2 - 4}{x^2 + 4x - 12} =$ A) (x + 2) / (x + 6) , for all x B) (x + 2) / (x + 6) , for all x not equal to 2 C) (x + 2) / (x + 6) , for all x not equal to - 2 D) (x + 2) / (x + 6) , for all x not equal to 0 E) 1 / 3 8. Simplify the rational expression $\dfrac{x^2 + 1}{x^3 + x}$. A) 1 / x for all x not equal to 1 B) x + 1 , for all x not equal to - 1 C) 1 / 2x D) 1 / (x + 1) for all x not equal to 1 E) 1 / x 9. 
$\dfrac{x^2 + 2x - 3}{2x^2 + 3x - 5} =$ A) (x + 3) / (2x + 5) , for all x not equal to 1 B) (x + 3) / (2x + 5) , for all x C) x + 3 , for all x not equal to 1 D) 1 /(2x + 5) , for all x E) (x + 3) / (2x + 5) , for all x not equal to -3 10. For all $x \ne 1$, which of the following is equivalent to the rational expression $\dfrac{x-1}{(x^2 - 1)(x + 3)}$ ? A) 1 / (x + 3) B) 1 / (x^2 + 4x + 3) C) 1 / (x + 1) D) 1 / x E) 1 / (x - 1) The solutions to the above questions are included.
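One way to sanity-check a reduction like these without a computer algebra system is to compare the original and the candidate reduced expression at several sample points: if they agree everywhere the original is defined, the reduction is almost certainly right. A quick Python sketch using Question 1 (the helper names are mine):

```python
# Spot-check that (x^2 + 5x - 6)/(x - 1) reduces to x + 6 for x != 1,
# by evaluating both forms at sample points away from the excluded value.
def close(p, q, tol=1e-9):
    return abs(p - q) < tol

def original(x):          # (x^2 + 5x - 6) / (x - 1), defined for x != 1
    return (x * x + 5 * x - 6) / (x - 1)

def reduced(x):           # candidate answer C: x + 6
    return x + 6

for x in (-3.5, 0.0, 2.0, 10.25):   # all avoid the excluded value x = 1
    assert close(original(x), reduced(x))
print("reduction verified at all sample points")
```

Agreement at a handful of points does not prove the algebra, but a single disagreement conclusively rules a candidate out, which is exactly what you want when eliminating multiple-choice options.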
# Fraction, Decimal and Millimeter Conversion Chart

by Anthony Persico

A printable decimal/fraction conversion chart lists the common fractions of an inch alongside their decimal and metric (millimeter) equivalents. Such charts are useful for finding drill and screw sizes: drill-size tables map number, letter and metric drill sizes to decimal inches (for example, a #54 drill is 0.0550 in and a 3.10 mm drill is 0.1220 in), and the full fraction charts run from 1/64 through 63/64 in steps of 1/64, often extended with whole-inch offsets (1 1/64, 2 1/64 and so on). With the chart to hand you do not have to divide first to get a decimal; you just look up the value.

Quick rules:

- To change a decimal to a fraction, use the place value of the last digit: 0.85 = 85/100 = 17/20.
- To change a fraction to a decimal, divide the top by the bottom: 4/5 = 4 ÷ 5 = 0.8.
- To write a percent as a fraction or decimal, divide by 100: 64% = 64/100 = 0.64 = 16/25.

To convert a decimal to a fraction step by step:

1. Rewrite the decimal number as a fraction with 1 in the denominator: $1.625 = \frac{1.625}{1}$.
2. Multiply the numerator and denominator by 10 once for each decimal place (three places here): $\frac{1.625}{1} = \frac{1625}{1000}$.
3. Find the greatest common divisor (gcd) of the numerator and the denominator: gcd(1625, 1000) = 125.
4. Reduce the fraction by dividing the numerator and the denominator by the gcd: $\frac{1625}{1000} = \frac{13}{8} = 1\frac{5}{8}$.

A sample of the chart (inch fraction, decimal inches, millimeters; 1 in = 25.4 mm):

| Fraction | Decimal | mm | Fraction | Decimal | mm |
|----------|---------|--------|----------|---------|---------|
| 1/64 | 0.0156 | 0.3969 | 33/64 | 0.5156 | 13.0969 |
| 1/32 | 0.0313 | 0.7938 | 17/32 | 0.5313 | 13.4938 |
| 3/64 | 0.0469 | 1.1906 | 35/64 | 0.5469 | 13.8906 |
| 1/16 | 0.0625 | 1.5875 | 9/16 | 0.5625 | 14.2875 |
| 5/64 | 0.0781 | 1.9844 | 37/64 | 0.5781 | 14.6844 |
| 3/32 | 0.0938 | 2.3813 | 19/32 | 0.5938 | 15.0813 |
| 7/64 | 0.1094 | 2.7781 | 39/64 | 0.6094 | 15.4781 |
| 1/8 | 0.1250 | 3.1750 | 5/8 | 0.6250 | 15.8750 |

Conversion of mixed numbers to and from decimals works the same way, with the whole part carried through: 1 3/8 = 1.375 = 34.925 mm. To maintain accuracy, store intermediate values on your calculator and leave rounding until the end of the calculation; you can round when jotting down working, but retain full precision throughout.
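The 3-step method can be sketched in code. This is a minimal illustration using Python's standard `fractions` module; the helper name `decimal_to_fraction` is my own, not part of any chart download:

```python
from fractions import Fraction

def decimal_to_fraction(x: float, max_denominator: int = 64) -> Fraction:
    """Round a decimal inch value to the nearest fraction whose
    denominator is at most max_denominator (64ths, like the chart)."""
    return Fraction(x).limit_denominator(max_denominator)

# The worked example: 1.625 -> 1625/1000 -> 13/8, i.e. 1 5/8 in
print(decimal_to_fraction(1.625))        # -> 13/8

# A chart row: 0.7031 in rounds to 45/64, and 45/64 in = 17.8594 mm
f = decimal_to_fraction(0.7031)
print(f, round(float(f) * 25.4, 4))      # -> 45/64 17.8594
```

`limit_denominator` does the reduction for you, which is handy when the decimal is a rounded chart value rather than an exact terminating decimal.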
# Homework Help: Homework problem

1. Mar 10, 2010

### tennistudof09

I wanted to make sure I did this problem correctly. The problem is: A particle moves along a straight line and its position at time t is given by s(t) = 2t^3 - 21t^2 + 60t, where s is measured in feet and t in seconds. Find the velocity of the particle when t = 0.

I took the derivative of s(t) and got 6t^2 - 42t + 60, and then substituted 0 in for t. I got 60 ft/sec for the answer. Is this correct??

2. Mar 10, 2010

### bblenyesi

$$\frac{ds}{dt}=v(t)$$
$$\frac{dv}{dt}=a(t)$$

where $$s(t)$$ is the position, $$v(t)$$ the velocity, and $$a(t)$$ the acceleration. Differentiating your s(t) gives exactly what you wrote, and at t = 0 only the constant term survives, so v(0) = 60 ft/sec. Yes, it's correct.
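For anyone who wants to double-check this kind of answer, here's a quick numeric sketch in plain Python (function names mine, not from the thread): it evaluates the derivative at t = 0 and cross-checks it with a central finite difference.

```python
def s(t):
    return 2*t**3 - 21*t**2 + 60*t   # position in feet, t in seconds

def v(t):
    return 6*t**2 - 42*t + 60        # ds/dt, the velocity

# Velocity at t = 0: only the constant term survives
print(v(0))   # -> 60

# Numeric cross-check with a central difference around t = 0
h = 1e-6
approx = (s(h) - s(-h)) / (2*h)
print(round(approx))   # -> 60
```

The central difference agrees to within about 2h^2, so any slip in the symbolic derivative would show up immediately.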
Transpose - Maple Help

Iterator[Trees]

Transpose
compute the transpose of a tree

Calling Sequence
Transpose(tree, format=fmt)

Parameters
tree - seq(rtable)
fmt - (optional) A, C, D, E, LR, P, S, Z

Options
• format = A, C, D, E, LR, P, S, Z
Specifies the format of the tree. The default is LR. See Iterator[Trees] for a description of the formats.

Description
• The Transpose command computes the transpose of a tree. The transpose of a binary tree is formed by interchanging left and right links. The transpose of a tree in a given format is computed by converting it to a binary tree, interchanging the links, then converting back to the specified format.
• The tree parameter is the tree to transpose.

Examples
> with(Iterator:-Trees):
Generate a random tree with four internal nodes in LR format.
> L, R := Random(4, format = LR)
L, R := [2 3 0 0], [4 0 0 0]    (1)
Compute its transpose.
> Transpose(L, R)
[2 0 0 0], [3 0 4 0]    (2)

References
Knuth, Donald Ervin. The Art of Computer Programming, volume 4, fascicle 4: generating all trees. Sec. 7.2.1.6, exercise 12, p. 33.

Compatibility
• The Iterator[Trees][Transpose] command was introduced in Maple 2016.
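The left/right interchange the help page describes is the classic binary-tree mirror operation. A minimal sketch in Python (my own illustration, assuming a simple linked `Node` representation rather than Maple's rtable-based formats):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def transpose(node: Optional[Node]) -> Optional[Node]:
    """Mirror a binary tree by swapping left and right links at every node."""
    if node is None:
        return None
    return Node(node.value, transpose(node.right), transpose(node.left))

# A small tree:     1        its transpose:     1
#                  / \                         / \
#                 2   3                       3   2
t = Node(1, Node(2), Node(3))
m = transpose(t)
print(m.left.value, m.right.value)   # -> 3 2
```

Like Maple's command for non-binary formats, any other tree representation can be transposed by converting to this linked form, mirroring, and converting back.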