semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets . in this work , we unify the current dominant approaches for semi-supervised learning to produce a new algorithm , mixmatch , that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using mixup . we show that mixmatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts . for example , on cifar-10 with 250 labels , we reduce error rate by a factor of 4 ( from 38 % to 11 % ) and by a factor of 2 on stl-10 . we also demonstrate how mixmatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy . finally , we perform an ablation study to tease apart which components of mixmatch are most important for its success . story_separator_special_tag we design a novel , communication-efficient , failure-robust protocol for secure aggregation of high-dimensional data . our protocol allows a server to compute the sum of large , user-held data vectors from mobile devices in a secure manner ( i.e . without learning each user 's individual contribution ) , and can be used , for example , in a federated learning setting , to aggregate user-provided model updates for a deep neural network . we prove the security of our protocol in the honest-but-curious and active adversary settings , and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time . we evaluate the efficiency of our protocol and show , by complexity analysis and a concrete implementation , that its runtime and communication overhead remain low even on large data sets and client pools . for 16-bit input values , our protocol offers 1.73x communication expansion for 2^10 users and 2^20-dimensional vectors , and 1.98x expansion for 2^14 users and 2^24-dimensional vectors over sending data in the clear . story_separator_special_tag transfer learning aims at reusing the knowledge in some source tasks to improve the learning of a target task . many transfer learning methods assume that the source tasks and the target task are related , even though many tasks are not related in reality . however , when two tasks are unrelated , the knowledge extracted from a source task may not help , and may even hurt , the performance of a target task . thus , how to avoid negative transfer and then ensure a `` safe transfer '' of knowledge is crucial in transfer learning . in this paper , we propose an adaptive transfer learning algorithm based on gaussian processes ( at-gp ) , which can be used to adapt the transfer learning schemes by automatically estimating the similarity between a source and a target task . the main contribution of our work is that we propose a new semi-parametric transfer kernel for transfer learning from a bayesian perspective , and propose to learn the model with respect to the target task , rather than all tasks as in multi-task learning . we can formulate the transfer learning problem as a unified gaussian process ( gp story_separator_special_tag we demonstrate that a character-level recurrent neural network is able to learn out-of-vocabulary ( oov ) words under federated learning settings , for the purpose of expanding the vocabulary of a virtual keyboard for smartphones without exporting sensitive text to servers .
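A minimal sketch of the two core operations described in the mixmatch entry at the top of this chunk: sharpening guessed labels toward low entropy, and mixing pairs of examples with mixup. The temperature, alpha, and toy arrays below are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def sharpen(p, T=0.5):
    """Lower the entropy of an averaged label guess (MixMatch's sharpening step)."""
    p = p ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def mixup(x1, y1, x2, y2, alpha=0.75, rng=np.random.default_rng(0)):
    """Mix two (example, label) pairs; lam is clipped so the result stays
    closer to the first pair, as in MixMatch's variant of mixup."""
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# guess a label by averaging predictions over K=2 augmented copies, then sharpen
preds = np.array([[0.55, 0.35, 0.10],
                  [0.45, 0.40, 0.15]])
q = sharpen(preds.mean(axis=0))
print(q)  # low-entropy guessed label for the unlabeled example

x_mix, y_mix = mixup(np.ones(3), q, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(x_mix, y_mix)
```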
high-frequency words can be sampled from the trained generative model by drawing from the joint posterior directly . we study the feasibility of the approach in two settings : ( 1 ) using simulated federated learning on a publicly available non-iid per-user dataset from a popular social networking website , ( 2 ) using federated learning on data hosted on user mobile devices . the model achieves good recall and precision compared to ground-truth oov words in setting ( 1 ) . with ( 2 ) we demonstrate the practicality of this approach by showing that we can learn meaningful oov words with good character-level prediction accuracy and cross entropy loss . story_separator_special_tag the protection of user privacy is an important concern in machine learning , as evidenced by the rolling out of the general data protection regulation ( gdpr ) in the european union ( eu ) in may 2018. the gdpr is designed to give users more control over their personal data , which motivates us to explore machine learning frameworks for data sharing that do not violate user privacy . to meet this goal , in this paper , we propose a novel lossless privacy-preserving tree-boosting system known as secureboost in the setting of federated learning . secureboost first conducts entity alignment under a privacy-preserving protocol and then constructs boosting trees across multiple parties with a carefully designed encryption strategy . this federated learning system allows the learning process to be jointly conducted over multiple parties with common user samples but different feature sets , which corresponds to a vertically partitioned data set . an advantage of secureboost is that it provides the same level of accuracy as the non-privacy-preserving approach while at the same time , reveals no information of each private data provider . we show that the secureboost framework is as accurate as other non-federated gradient tree-boosting story_separator_special_tag the explosion of image data on the internet has the potential to foster more sophisticated and robust models and algorithms to index , retrieve , organize and interact with images and multimedia data . but exactly how such data can be harnessed and organized remains a critical problem . we introduce here a new database called imagenet , a large-scale ontology of images built upon the backbone of the wordnet structure . imagenet aims to populate the majority of the 80,000 synsets of wordnet with an average of 500-1000 clean and full resolution images . this will result in tens of millions of annotated images organized by the semantic hierarchy of wordnet . this paper offers a detailed analysis of imagenet in its current state : 12 subtrees with 5247 synsets and 3.2 million images in total . we show that imagenet is much larger in scale and diversity and much more accurate than the current image datasets . constructing such a large-scale database is a challenging task . we describe the data collection scheme with amazon mechanical turk . lastly , we illustrate the usefulness of imagenet through three simple applications in object recognition , image classification and automatic story_separator_special_tag we introduce a new language representation model called bert , which stands for bidirectional encoder representations from transformers . unlike recent language representation models ( peters et al. , 2018a ; radford et al. 
, 2018 ) , bert is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers . as a result , the pre-trained bert model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks , such as question answering and language inference , without substantial task-specific architecture modifications . bert is conceptually simple and empirically powerful . it obtains new state-of-the-art results on eleven natural language processing tasks , including pushing the glue score to 80.5 ( 7.7 point absolute improvement ) , multinli accuracy to 86.7 % ( 4.6 % absolute improvement ) , squad v1.1 question answering test f1 to 93.2 ( 1.5 point absolute improvement ) and squad v2.0 test f1 to 83.1 ( 5.1 point absolute improvement ) . story_separator_special_tag federated learning allows for population level models to be trained without centralizing client data by transmitting the global model to clients , calculating gradients locally , then averaging the gradients . downloading models and uploading gradients uses the client 's bandwidth , so minimizing these transmission costs is important . the data on each client is highly variable , so the benefit of training on different clients may differ dramatically . to exploit this we propose active federated learning , where in each round clients are selected not uniformly at random , but with a probability conditioned on the current model and the data on the client to maximize efficiency . we propose a cheap , simple and intuitive sampling scheme which reduces the number of required training iterations by 20-70 % while maintaining the same model accuracy , and which mimics well known resampling techniques under certain conditions . story_separator_special_tag abstract : several machine learning models , including neural networks , consistently misclassify adversarial examples -- -inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset , such that the perturbed input results in the model outputting an incorrect answer with high confidence . early attempts at explaining this phenomenon focused on nonlinearity and overfitting . we argue instead that the primary cause of neural networks ' vulnerability to adversarial perturbation is their linear nature . this explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them : their generalization across architectures and training sets . moreover , this view yields a simple and fast method of generating adversarial examples . using this approach to provide examples for adversarial training , we reduce the test set error of a maxout network on the mnist dataset . story_separator_special_tag the following topics are dealt with : image segmentation ; image texture ; image motion analysis ; object detection ; tracking ; feature selection ; clustering ; image reconstruction ; face recognition ; image sequences ; computer vision ; image sensors ; and object recognition . story_separator_special_tag we train a recurrent neural network language model using a distributed , on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones . server-based training using stochastic gradient descent is compared with training on client devices using the federated averaging algorithm . 
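A minimal sketch of the federated averaging step referenced just above: clients train locally, and the server forms an example-count-weighted average of their parameters. The function name and the toy weights are hypothetical.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Server step of FedAvg: average client parameter vectors,
    weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

# three clients with different amounts of local data
weights = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 50, 50]
print(federated_average(weights, sizes))  # -> [1.25, 1.25]
```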
the federated algorithm , which enables training on a higher-quality dataset for this use case , is shown to achieve better prediction recall . this work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers . the federated learning environment gives users greater control over the use of their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices . story_separator_special_tag a very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions . unfortunately , making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users , especially if the individual models are large neural nets . caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique . we achieve some surprising results on mnist and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model . we also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse . unlike a mixture of experts , these specialist models can be trained rapidly and in parallel . story_separator_special_tag deep learning has recently become hugely popular in machine learning , providing significant improvements in classification accuracy in the presence of highly-structured and large databases . researchers have also considered privacy implications of deep learning . models are typically trained in a centralized manner with all the data being processed by the same training algorithm . if the data is a collection of users ' private data , including habits , personal pictures , geographical positions , interests , and more , the centralized server will have access to sensitive information that could potentially be mishandled . to tackle this problem , collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private . parameters can also be obfuscated via differential privacy ( dp ) to make information extraction even more challenging , as proposed by shokri and shmatikov at ccs'15 . unfortunately , we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper . in particular , we show that a distributed , federated , or story_separator_special_tag large-scale labeled data are generally required to train deep neural networks in order to obtain better performance in visual feature learning from images or videos for computer vision applications . to avoid extensive cost of collecting and annotating large-scale datasets , as a subset of unsupervised learning methods , self-supervised learning methods are proposed to learn general image and video features from large-scale unlabeled data without using any human-annotated labels . 
this paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos . first , the motivation , general pipeline , and terminologies of this field are described . then the common deep neural network architectures that used for self-supervised learning are summarized . next , the schema and evaluation metrics of self-supervised learning methods are reviewed followed by the commonly used datasets for images , videos , audios , and 3d data , as well as the existing self-supervised visual feature learning methods . finally , quantitative performance comparisons of the reviewed methods on benchmark datasets are summarized and discussed for both image and video feature learning . at last , this paper is concluded and lists a set of promising story_separator_special_tag federated learning ( fl ) is a machine learning setting where many clients ( e.g . mobile devices or whole organizations ) collaboratively train a model under the orchestration of a central server ( e.g . service provider ) , while keeping the training data decentralized . fl embodies the principles of focused data collection and minimization , and can mitigate many of the systemic privacy risks and costs resulting from traditional , centralized machine learning and data science approaches . motivated by the explosive growth in fl research , this paper discusses recent advances and presents an extensive collection of open problems and challenges . story_separator_special_tag we present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs . we motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions . our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes . in a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin . story_separator_special_tag gradient boosting decision trees ( gbdts ) have become very successful in recent years , with many awards in machine learning and data mining competitions . there have been several recent studies on how to train gbdts in the federated learning setting . in this paper , we focus on horizontal federated learning , where data samples with the same features are distributed among multiple parties . however , existing studies are not efficient or effective enough for practical use . they suffer either from the inefficiency due to the usage of costly data transformations such as secure sharing and homomorphic encryption , or from the low model accuracy due to differential privacy designs . in this paper , we study a practical federated environment with relaxed privacy constraints .
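The graph entry above motivates a first-order spectral graph convolution; below is a toy sketch of that propagation rule (symmetric normalization with self-loops, then a ReLU), assuming a small path graph and a made-up weight matrix.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # a 3-node path graph
H = np.eye(3)                                            # one-hot node features
W = np.random.default_rng(0).normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # (3, 2) hidden representations
```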
in this environment , a dishonest party might obtain some information about the other parties ' data , but it is still impossible for the dishonest party to derive the actual raw data of other parties . specifically , each party boosts a number of trees by exploiting similarity information based on locality-sensitive hashing . we prove that our framework is secure without exposing the original record to other parties story_separator_special_tag federated learning enables a large number of edge computing devices to jointly learn a model without data sharing . as a leading algorithm in this setting , federated averaging ( fedavg ) runs stochastic gradient descent ( sgd ) in parallel on a small subset of the total devices and averages the sequences only once in a while . despite its simplicity , it lacks theoretical guarantees under realistic settings . in this paper , we analyze the convergence of fedavg on non-iid data and establish a convergence rate of $\mathcal{O}(1/T)$ for strongly convex and smooth problems , where $T$ is the number of sgd steps . importantly , our bound demonstrates a trade-off between communication-efficiency and convergence rate . as user devices may be disconnected from the server , we relax the assumption of full device participation to partial device participation and study different averaging schemes ; low device participation rate can be achieved without severely slowing down the learning . our results indicate that heterogeneity of data slows down the convergence , which matches empirical observations . story_separator_special_tag machine learning relies on the availability of vast amounts of data for training . however , in reality , data are mostly scattered across different organizations and can not be easily integrated due to many legal and practical constraints . to address this important challenge in the field of machine learning , we introduce a new technique and framework , known as federated transfer learning ( ftl ) , to improve statistical modeling under a data federation . ftl allows knowledge to be shared without compromising user privacy and enables complementary knowledge to be transferred across domains in a data federation , thereby enabling a target-domain party to build flexible and effective models by leveraging rich labels from a source domain . this framework requires minimal modifications to the existing model structure and provides the same level of accuracy as the nonprivacy-preserving transfer learning . it is flexible and can be effectively adapted to various secure multiparty machine learning tasks . story_separator_special_tag after entering the big data era , a new term of big knowledge has been coined to deal with challenges in mining a mass of knowledge from big data . while researchers used to explore the basic characteristics of big data , we have not seen any studies on the general and essential properties of big knowledge . to fill this gap , this paper studies the concepts of big knowledge , big-knowledge system , and big-knowledge engineering . ten massiveness characteristics for big knowledge and big-knowledge systems , including massive concepts , connectedness , clean data resources , cases , confidence , capabilities , cumulativeness , concerns , consistency , and completeness , are defined and explored .
based on these characteristics , a comprehensive investigation is conducted on some large-scale knowledge engineering projects , including the fifth comprehensive traffic survey in shanghai , the china 's xia-shang-zhou chronology project , the troy and trojan war project , and the international human genome project , as well as the online free encyclopedia wikipedia . we also investigate the recent research efforts on knowledge graphs , where they are analyzed to determine which ones can be considered as big knowledge story_separator_special_tag now that data science receives a lot of attention , the three disciplines of data analysis , databases , and sciences are discussed with respect to the roles they play . in several discussions , i observed misunderstandings of artificial intelligence . hence , it might be the right time to give a personal view of ai and the part of machine learning therein . since the relation between machine learning and statistics is so close that sometimes the boundaries are blurred , explicit pointers to statistical research are made . although not at all complete , the references are intended to support further interdisciplinary understanding of the fields . story_separator_special_tag we propose a new regularization method based on virtual adversarial loss : a new measure of local smoothness of the conditional label distribution given input . virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation . unlike adversarial training , our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning . because the directions in which we smooth the model are only virtually adversarial , we call our method virtual adversarial training ( vat ) . the computational cost of vat is relatively low . for neural networks , the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations . in our experiments , we applied vat to supervised and semi-supervised learning tasks on multiple benchmark datasets . with a simple enhancement of the algorithm based on the entropy minimization principle , our vat achieves state-of-the-art performance for semi-supervised learning tasks on svhn and cifar-10 .
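A sketch of the virtually adversarial direction from the vat entry that closes this chunk, using one power-iteration step. Backpropagation is replaced here by finite differences, and the toy linear classifier, xi, and epsilon values are invented for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def vat_direction(predict, x, xi=1e-2, h=1e-4, seed=0):
    """One power-iteration step toward the virtually adversarial direction:
    the gradient of KL(p(x) || p(x + xi*d)) with respect to the perturbation,
    estimated by central finite differences instead of backprop."""
    rng = np.random.default_rng(seed)
    p = predict(x)
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (kl(p, predict(x + xi * d + e)) -
                   kl(p, predict(x + xi * d - e))) / (2 * h)
    return grad / (np.linalg.norm(grad) + 1e-12)

W = np.array([[2.0, -1.0], [0.5, 1.5]])      # toy linear classifier
predict = lambda x: softmax(W @ x)
x = np.array([1.0, -0.5])
r_adv = 0.1 * vat_direction(predict, x)      # epsilon = 0.1
print(kl(predict(x), predict(x + r_adv)))    # the VAT consistency penalty
```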
story_separator_special_tag we extend generative adversarial networks ( gans ) to the semi-supervised context by forcing the discriminator network to output class labels . we train a generative model g and a discriminator d on a dataset with inputs belonging to one of n classes . at training time , d is made to predict which of n+1 classes the input belongs to , where an extra class is added to correspond to the outputs of g. we show that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular gan . story_separator_special_tag semi-supervised learning ( ssl ) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain . ssl algorithms based on deep neural networks have recently proven successful on standard benchmark tasks . however , we argue that these benchmarks fail to address many issues that ssl algorithms would face in real-world applications . after creating a unified reimplementation of various widely-used ssl techniques , we test them in a suite of experiments designed to address these issues . we find that the performance of simple baselines which do not use unlabeled data is often underreported , ssl methods differ in sensitivity to the amount of labeled and unlabeled data , and performance can degrade substantially when the unlabeled dataset contains out-of-distribution examples . to help guide ssl research towards real-world applicability , we make our unified reimplemention and evaluation platform publicly available . story_separator_special_tag we present an unsupervised visual feature learning algorithm driven by context-based pixel prediction . by analogy with auto-encoders , we propose context encoders a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings . in order to succeed at this task , context encoders need to both understand the content of the entire image , as well as produce a plausible hypothesis for the missing part ( s ) . when training context encoders , we have experimented with both a standard pixel-wise reconstruction loss , as well as a reconstruction plus an adversarial loss . the latter produces much sharper results because it can better handle multiple modes in the output . we found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures . we quantitatively demonstrate the effectiveness of our learned features for cnn pre-training on classification , detection , and segmentation tasks . furthermore , context encoders can be used for semantic inpainting tasks , either stand-alone or as initialization for non-parametric methods . story_separator_special_tag federated learning improves data privacy and efficiency in machine learning performed over networks of distributed devices , such as mobile phones , iot and wearable devices , etc . yet models trained with federated learning can still fail to generalize to new devices due to the problem of domain shift . domain shift occurs when the labeled data collected by source nodes statistically differs from the target node 's unlabeled data . in this work , we present a principled approach to the problem of federated domain adaptation , which aims to align the representations learned among the different nodes with the data distribution of the target node . 
our approach extends adversarial adaptation techniques to the constraints of the federated setting . in addition , we devise a dynamic attention mechanism and leverage feature disentanglement to enhance knowledge transfer . empirically , we perform extensive experiments on several image and text classification tasks and show promising results under unsupervised federated domain adaptation setting . story_separator_special_tag in this paper , we introduce a new model for leveraging unlabeled data to improve generalization performances of image classifiers : a two-branch encoder-decoder architecture called hybridnet . the first branch receives supervision signal and is dedicated to the extraction of invariant class-related representations . the second branch is fully unsupervised and dedicated to model information discarded by the first branch to reconstruct input data . to further support the expected behavior of our model , we propose an original training objective . it favors stability in the discriminative branch and complementarity between the learned representations in the two branches . hybridnet is able to outperform state-of-the-art results on cifar-10 , svhn and stl-10 in various semi-supervised settings . in addition , visualizations and ablation studies validate our contributions and the behavior of the model on both cifar-10 and stl-10 datasets . story_separator_special_tag which active learning methods can we expect to yield good performance in learning binary and multi-category logistic regression classifiers ? addressing this question is a natural first step in providing robust solutions for active learning across a wide variety of exponential models including maximum entropy , generalized linear , log-linear , and conditional random field models . for the logistic regression model we re-derive the variance reduction method known in experimental design circles as ` a-optimality . ' we then run comparisons against different variations of the most widely used heuristic schemes : query by committee and uncertainty sampling , to discover which methods work best for different classes of problems and why . we find that among the strategies tested , the experimental design methods are most likely to match or beat a random sample baseline . the heuristic alternatives produced mixed results , with an uncertainty sampling variant called margin sampling and a derivative method called qbb-mm providing the most promising performance at very low computational cost . computational running times of the experimental design methods were a bottleneck to the evaluations . meanwhile , evaluation of the heuristic methods lead to an accumulation of negative results . story_separator_special_tag we quantitatively investigate how machine learning models leak information about the individual data records on which they were trained . we focus on the basic membership inference attack : given a data record and black-box access to a model , determine if the record was in the model 's training dataset . to perform membership inference against a target model , we make adversarial use of machine learning and train our own inference model to recognize differences in the target model 's predictions on the inputs that it trained on versus the inputs that it did not train on . we empirically evaluate our inference techniques on classification models trained by commercial `` machine learning as a service '' providers such as google and amazon . 
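The membership-inference entry above trains a learned attack model against shadow models; the sketch below shows only the simpler confidence-threshold baseline that exploits the same signal (models tend to be more confident on their training members). All confidence values are made up.

```python
import numpy as np

def membership_guess(confidences, threshold=0.9):
    """Baseline membership inference: predict 'member' when the target
    model's top-class confidence on a record exceeds a threshold."""
    return confidences >= threshold

# hypothetical top-class confidences returned by a black-box model
train_conf = np.array([0.99, 0.97, 0.95, 0.88])   # records that were in training
test_conf  = np.array([0.91, 0.72, 0.60, 0.55])   # records that were not
guesses = membership_guess(np.concatenate([train_conf, test_conf]))
truth   = np.array([1] * 4 + [0] * 4, bool)
print((guesses == truth).mean())  # attack accuracy on this toy data: 0.75
```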
using realistic datasets and classification tasks , including a hospital discharge dataset whose membership is sensitive from the privacy perspective , we show that these models can be vulnerable to membership inference attacks . we then investigate the factors that influence this leakage and evaluate mitigation strategies . story_separator_special_tag variational autoencoders ( vaes ) learn representations of data by jointly training a probabilistic encoder and decoder network . typically these models encode all features of the data into a single variable . here we are interested in learning disentangled representations that encode distinct aspects of the data into separate variables . we propose to learn such representations using model architectures that generalise from standard vaes , employing a general graphical model structure in the encoder and decoder . this allows us to train partially-specified models that make relatively strong assumptions about a subset of interpretable variables and rely on the flexibility of neural networks to learn representations for the remaining variables . we further define a general objective for semi-supervised learning in this model class , which can be approximated using an importance sampling procedure . we evaluate our framework 's ability to learn disentangled representations , both by qualitative exploration of its generative capacity , and quantitative evaluation of its discriminative ability on a variety of models and datasets . story_separator_special_tag abstract : in this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data . our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution , against robustness of the classifier to an adversarial generative model . the resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks ( gan ) framework or as an extension of the regularized information maximization ( rim ) framework to robust classification against an optimal adversary . we empirically evaluate our method - which we dub categorical generative adversarial networks ( or catgan ) - on synthetic data as well as on challenging image classification tasks , demonstrating the robustness of the learned classifiers . we further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier , and identify links between the catgan objective and discriminative clustering algorithms ( such as rim ) . story_separator_special_tag the recently proposed temporal ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks . it maintains an exponential moving average of label predictions on each training example , and penalizes predictions that are inconsistent with this target . however , because the targets change only once per epoch , temporal ensembling becomes unwieldy when learning large datasets . to overcome this problem , we propose mean teacher , a method that averages model weights instead of label predictions . as an additional benefit , mean teacher improves test accuracy and enables training with fewer labels than temporal ensembling . without changing the network architecture , mean teacher achieves an error rate of 4.35 % on svhn with 250 labels , outperforming temporal ensembling trained with 1000 labels . 
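A minimal sketch of the mean-teacher update described above: teacher weights track an exponential moving average of student weights, and a consistency loss penalizes student-teacher disagreement. The decay value and toy vectors are illustrative.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher step: theta_teacher <- alpha*theta_teacher + (1-alpha)*theta_student."""
    return alpha * teacher + (1 - alpha) * student

def consistency_loss(p_student, p_teacher):
    """Penalize disagreement between student and teacher predictions (MSE form)."""
    return float(np.mean((p_student - p_teacher) ** 2))

teacher = np.zeros(3)
for step in range(5):                  # student weights drift during training
    student = np.ones(3) * (step + 1)
    teacher = ema_update(teacher, student)
print(teacher)                         # teacher lags behind and smooths the student

p_s = np.array([0.7, 0.2, 0.1])
p_t = np.array([0.6, 0.3, 0.1])
print(consistency_loss(p_s, p_t))
```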
we also show that a good network architecture is crucial to performance . combining mean teacher and residual networks , we improve the state of the art on cifar-10 with 4000 labels from 10.55 % to 6.28 % , and on imagenet 2012 with 10 % of the labels from 35.24 % to 9.11 % . story_separator_special_tag we introduce a pretraining technique called selfie , which stands for self-supervised image embedding . selfie generalizes the concept of masked language modeling of bert ( devlin et al. , 2019 ) to continuous data , such as images , by making use of the contrastive predictive coding loss ( oord et al. , 2018 ) . given masked-out patches in an input image , our method learns to select the correct patch , among other `` distractor '' patches sampled from the same image , to fill in the masked location . this classification objective sidesteps the need for predicting exact pixel values of the target patches . the pretraining architecture of selfie includes a network of convolutional blocks to process patches followed by an attention pooling network to summarize the content of unmasked patches before predicting masked ones . during finetuning , we reuse the convolutional weights found by pretraining . we evaluate selfie on three benchmarks ( cifar-10 , imagenet 32 x 32 , and imagenet 224 x 224 ) with varying amounts of labeled data , from 5 % to 100 % of the training sets . our pretraining method provides consistent improvements to resnet-50 story_separator_special_tag federated learning allows edge devices to collaboratively learn a shared model while keeping the training data on device , decoupling the ability to do model training from the need to store the data in the cloud . we propose the federated matched averaging ( fedma ) algorithm designed for federated learning of modern neural network architectures e.g . convolutional neural networks ( cnns ) and lstms . fedma constructs the shared global model in a layer-wise manner by matching and averaging hidden elements ( i.e . channels for convolution layers ; hidden states for lstm ; neurons for fully connected layers ) with similar feature extraction signatures . our experiments indicate that fedma not only outperforms popular state-of-the-art federated learning algorithms on deep cnn and lstm architectures trained on real world datasets , but also reduces the overall communication burden . story_separator_special_tag federated learning is a distributed form of machine learning where both the training data and model training are decentralized . in this paper , we use federated learning in a commercial , global-scale setting to train , evaluate and deploy a model to improve virtual keyboard search suggestion quality without direct access to the underlying user data . we describe our observations in federated training , compare metrics to live deployments , and present resulting quality increases . in whole , we demonstrate how federated learning can be applied end-to-end to both improve user experiences and enhance user privacy . story_separator_special_tag today 's ai still faces two major challenges . one is that in most industries , data exists in the form of isolated islands . the other is the strengthening of data privacy and security . we propose a possible solution to these challenges : secure federated learning .
beyond the federated learning framework first proposed by google in 2016 , we introduce a comprehensive secure federated learning framework , which includes horizontal federated learning , vertical federated learning and federated transfer learning . we provide definitions , architectures and applications for the federated learning framework , and provide a comprehensive survey of existing works on this subject . in addition , we propose building data networks among organizations based on federated mechanisms as an effective solution to allow knowledge to be shared without compromising user privacy . story_separator_special_tag the reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback , rather than correctly labeled examples , as is common in other machine learning contexts . while significant progress has been made to improve learning in a single task , the idea of transfer learning has only recently been applied to reinforcement learning tasks . the core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related , but different , task . in this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals , and then use it to survey the existing literature , as well as to suggest future directions for transfer learning work . story_separator_special_tag the eu and other public organizations at different levels of national and local government across the world have funded and invested in numerous research and development projects on big data transport applications over last few years . the mid and long term effectiveness of these applications is very difficult to measure , and the benefits and usability of these applications are not easy to calculate . noesis , funded under eu h2020 program , aims to design a decision supported tool by gathering and analyzing these applications as use cases to formulate sufficient knowledge for policy makers to make informed decisions for their big data transport applications . the challenges in this work are associated with a small number of samples , with incomplete information , but having a good size of features that need to be analyzed to make a confident enough recommendation . this paper reports various statistical and machine learning approaches used to address these challenges and their results . story_separator_special_tag federated learning ( fl ) is a heavily promoted approach for training ml models on sensitive data , e.g. , text typed by users on their smartphones . fl is expressly designed for training on data that are unbalanced and non-iid across the participants . to ensure privacy and integrity of the federated model , latest fl approaches use differential privacy or robust aggregation to limit the influence of `` outlier '' participants . first , we show that on standard tasks such as next-word prediction , many participants gain no benefit from fl because the federated model is less accurate on their data than the models they can train locally on their own . second , we show that differential privacy and robust aggregation make this problem worse by further destroying the accuracy of the federated model for many participants . then , we evaluate three techniques for local adaptation of federated models : fine-tuning , multi-task learning , and knowledge distillation . 
we analyze where each technique is applicable and demonstrate that all participants benefit from local adaptation . participants whose local models are poor obtain big accuracy improvements over conventional fl . participants whose local models are story_separator_special_tag the genetic testing and genetic screening of children are commonplace . decisions about whether to offer genetic testing and screening should be driven by the best interest of the child . the growing literature on the psychosocial and clinical effects of such testing and screening can help inform best practices . this technical report provides ethical justification and empirical data in support of the proposed policy recommendations regarding such practices in a myriad of settings .
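Both the ensemble-distillation entry earlier in this chunk and the local-adaptation entry just above rely on training a student (or local) model against temperature-softened targets from another model; a minimal sketch of that loss with invented logits.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's
    temperature-softened targets, the core of knowledge distillation."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())

teacher_logits = np.array([[8.0, 2.0, 0.5]])   # e.g. a global federated model
student_logits = np.array([[5.0, 3.0, 1.0]])   # a local model being adapted
print(distillation_loss(student_logits, teacher_logits))
```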
intensity-modulated radiation therapy ( imrt ) can sculpt the high-dose volume around the site of disease with hitherto unachievable precision . conformal avoidance of normal tissues goes hand in hand with this . inhomogeneous dose painting is possible . the technique has become a clinical reality and is likely to be the dominant approach this decade for improving the clinical practice of photon therapy . this series will explore all aspects of the `` imrt chain '' . only 15 years ago just a handful of physicists were working on this subject . imrt has developed so rapidly that its recent past is also its ancient history . this article will review the history of imrt with just a glance at precursors . the physical basis of imrt is then described including an attempt to introduce the concepts of convex and concave dose distributions , ill-conditioning , inverse-problem degeneracy , cost functions and complex solutions all with a minimum of technical jargon or mathematics . the many techniques for inverse planning are described and the review concludes with a look forward to the future of image-guided imrt ( ig-imrt ) . story_separator_special_tag this paper reports on the analysis of intensity modulated radiation treatment optimization problems in the presence of non-convex feasible parameter spaces caused by the specification of dose-volume constraints for the organs-at-risk ( oars ) . the main aim was to determine whether the presence of those non-convex spaces affects the optimization of clinical cases in any significant way . this was done in two phases : ( 1 ) using a carefully designed two-dimensional mathematical phantom that exhibits two controllable minima and with randomly initialized beamlet weights , we developed a methodology for exploring the nature of the convergence characteristics of quadratic cost function optimizations ( deterministic or stochastic ) . the methodology is based on observing the statistical behaviour of the residual cost at the end of optimizations in which the stopping criterion is progressively more demanding and carrying out those optimizations to very small error changes per iteration . ( 2 ) seven clinical cases were then analysed with dose-volume constraints that are stronger than originally used in the clinic . the clinical cases are two prostate cases differently posed , a meningioma case , two head-and-neck cases , a spleen case and a spine case . of story_separator_special_tag dose optimization requires that the treatment goals be specified in a meaningful manner , but also that alterations to the specification lead to predictable changes in the resulting dose distribution . within the framework of constrained optimization , it is possible to devise a tool that quantifies the impact on the objective of target volume coverage of any change to a dosimetric constraint of normal tissue or target dose homogeneity . this sensitivity analysis relies on properties of the lagrange function that is associated with the constrained optimization problem , but does not depend on the method used to solve this problem . it is useful particularly in cases with multiple target volumes and critical normal structures , where constraints and objectives can interact in a non-intuitive manner . story_separator_special_tag the major challenge in intensity-modulated radiotherapy planning is to find the right balance between tumor control and normal tissue sparing . the most desirable solution is never physically feasible , and a compromise has to be found . 
one possible way to approach this problem is constrained optimization . in this context , it is worthwhile to quantitatively predict the impact of adjustments of the constraints on the optimum dose distribution . this has been dealt with in regard to cost functions in a previous paper . the aim of the present paper is to introduce spatial resolution to this formalism . our method reveals the active constraints in a target subvolume that was previously selected by the practitioner for its insufficient dose . this is useful if a multitude of constraints can be the cause of a cold spot . the response of the optimal dose distribution to an adjustment of constraints ( perturbation ) is predicted . we conclude with a clinical example . story_separator_special_tag objective . radiobiological models provide a means of evaluating treatment plans . keeping in mind their inherent limitations , they can also be used prospectively to design new treatment strategies which maximise therapeutic ratio . we propose here a new method to customise fractionation and prescription dose . methods . to illustrate our new approach , two non-small cell lung cancer treatment plans and one prostate plan from our archive are analysed using the in-house software tool biosuite . biosuite computes normal tissue complication probability and tumour control probability using various radiobiological models and can suggest radiobiologically optimal prescription doses and fractionation schemes with limited toxicity . results . dose response curves present varied aspects depending on the nature of each case . the optimisation process suggests doses and fractionation schemes differing from the original ones . patterns of optimisation depend on the degree of conformality , the behaviour of the normal tissue ( i.e . story_separator_special_tag purpose . to investigate the potential role of incidental heart irradiation on the risk of radiation pneumonitis ( rp ) for patients receiving definitive radiation therapy for non-small-cell lung cancer ( nsclc ) . material and methods . two hundred and nine patient datasets were available for this study . heart and lung dose-volume parameters were extracted for modeling , based on monte carlo-based heterogeneity corrected dose distributions . clinical variables tested included age , gender , chemotherapy , pre-treatment weight-loss , performance status , and smoking history . the risk of rp was modeled using logistic regression . results . the most significant univariate variables were heart related , such as heart v65 ( percent volume receiving at least 65 gy ) ( spearman rs = 0.245 , p < 0.001 ) . the best-performing logistic regression model included heart d10 ( minimum dose to the hottest 10 % of the heart ) , lung d35 , and maximum lung dose ( spearman rs = 0.268 , p < 0.0001 ) . when classified by predicted risk , the . story_separator_special_tag determining the 'best ' optimization parameters in imrt planning is typically a time-consuming trial-and-error process with no unambiguous termination point . recently we and others proposed a goal-programming approach which better captures the desired prioritization of dosimetric goals . here , individual prescription goals are addressed stepwise in their order of priority . in the first step , only the highest order goals are considered ( target coverage and dose-limiting normal structures ) .
in subsequent steps , the achievements of the previous steps are turned into hard constraints and lower priority goals are optimized , in turn , subject to higher priority constraints . so-called 'slip ' factors were introduced to allow for slight , clinically acceptable violations of the constraints . focusing on head and neck cases , we present several examples for this planning technique . the main advantages of the new optimization method are ( i ) its ability to generate plans that meet the clinical goals , as well as possible , without tuning any weighting factors or dose-volume constraints , and ( ii ) the ability to conveniently include more terms such as fluence map smoothness . lower level goals can be optimized to story_separator_special_tag in this work a prioritized optimization algorithm is adapted and applied to treatment planning for intensity modulated proton therapy ( impt ) . originally , this algorithm was developed for intensity modulated radiation therapy ( imrt ) with photons . prioritized optimization converts the clinical hierarchy of treatment goals into an effective optimization scheme for treatment planning . it presents an alternative to conventional methods that combine all optimization goals into a single optimization run with a weighted sum of all planning aims in the objective function . the highest order goal in the first step is to achieve a homogeneous dose distribution of the prescribed dose in the tumour . in subsequent steps the dose to organs at risk ( oars ) is minimized dependent upon their clinical priority , whereby the results of previous steps are turned into hard constraints . the large number of degrees of freedom through the additional energy modulation of protons enables a better protection of oars under the perpetuation of the prescribed dose in the planning target volume ( ptv ) . the solution space of subsequent optimization steps can be extended by introducing a slip factor . this slip factor allows a story_separator_special_tag optimization problems in imrt inverse planning are inherently multicriterial since they involve multiple planning goals for targets and their neighbouring critical tissue structures . clinical decisions are generally required , based on tradeoffs among these goals . since the tradeoffs can not be quantitatively determined prior to optimization , the decision-making process is usually indirect and iterative , requiring many repetitive optimizations . this situation becomes even more challenging for cases with a large number of planning goals . to address this challenge , a multicriteria optimization strategy called lexicographic ordering ( lo ) has been implemented and evaluated for imrt planning . the lo approach is a hierarchical method in which the planning goals are categorized into different priority levels and a sequence of suboptimization problems is solved in order of priority . this prioritization concept is demonstrated using two clinical cases ( a simple prostate case and a relatively complex head and neck case ) . in addition , a unique feature of lo in a decision support role is discussed . we demonstrate that a comprehensive list of planning goals ( e.g. , 23 for the head and neck case ) can be optimized using only a story_separator_special_tag treatment planning for intensity modulated radiation therapy ( imrt ) is challenging due to both the size of the computational problems ( thousands of variables and constraints ) and the multi-objective , imprecise nature of the goals . 
we apply hierarchical programming to imrt treatment planning . in this formulation , treatment planning goals/objectives are ordered in an absolute hierarchy , and the problem is solved from the top-down such that more important goals are optimized in turn . after each objective is optimized , that objective function is converted into a constraint when optimizing lower-priority objectives . we also demonstrate the usefulness of a linear/quadratic formulation , including the use of mean-tail-dose ( mean dose to the hottest fraction of a given structure ) , to facilitate computational efficiency . in contrast to the conventional use of dose-volume constraints ( no more than x % volume of a structure should receive more than y dose ) , the mean-tail-dose formulation ensures convex feasibility spaces and convex objective functions . to widen the search space without seriously degrading higher priority goals , we allowed higher priority constraints to relax or slip a clinically negligible amount during lower priority iterations . story_separator_special_tag in multi-objective radiotherapy planning , we are interested in pareto surfaces of dimensions 2 up to about 10 ( for head and neck cases , the number of structures to trade off can be this large ) . a key question that has not been answered yet is : how many plans does it take to sufficiently represent a high-dimensional pareto surface ? in this paper , we present a method to answer this question , and we show that the number of points needed is modest : 75 plans always controlled the error to within 5 % , and in all cases but one , n + 1 plans , where n is the number of objectives , was enough for < 15 % error . we introduce objective correlation matrices and principal component analysis ( pca ) of the beamlet solutions as two methods to understand this . pca reveals that the feasible beamlet solutions of a pareto database lie in a narrow , small dimensional subregion of the full beamlet space , which helps explain why the number of plans needed to characterize the database is small . story_separator_special_tag multiobjective radiotherapy planning aims to capture all clinically relevant trade-offs between the various planning goals . this is accomplished by calculating a representative set of pareto optimal solutions and storing them in a database . the structure of these representative pareto sets is still not fully investigated . we propose two methods for a systematic analysis of multiobjective databases : principal component analysis and the isomap method . both methods are able to extract the key trade-offs from a database and provide information which can lead to a better understanding of the clinical case and intensity-modulated radiation therapy planning in general . story_separator_special_tag approaches to approximate the efficient and pareto sets of multiobjective programs are reviewed . special attention is given to approximating structures , methods generating pareto points , and approximation quality . the survey covers 48 articles published since 1975 . story_separator_special_tag preface . acknowledgements . notation and symbols . part i : terminology and theory . 1. introduction . 2. concepts . 3. theoretical background . part ii : methods . 1. introduction . 2. no-preference methods . 3. a posteriori methods . 4. a priori methods . 5. interactive methods . part iii : related issues . 1. comparing methods . 2. software . 3. graphical illustration . 4. future directions . 5. epilogue . references . index . 
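A toy sketch of the prioritized (lexicographic) planning scheme running through the entries above: optimize the top-priority objective, then turn its achievement into a hard constraint, relaxed by a small 'slip', while optimizing the next objective. The quadratic surrogates and the slip value are invented, and scipy's SLSQP stands in for a clinical optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# invented quadratic surrogates: f1 stands in for target coverage (top priority),
# f2 for an organ-at-risk dose (lower priority); both are minimized
f1 = lambda x: (x[0] - 1.0) ** 2 + 0.1 * x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 0.5) ** 2

# step 1: optimize the highest-priority goal alone
step1 = minimize(f1, np.zeros(2), method="SLSQP")

# step 2: optimize the next goal, with step 1's achievement turned into a hard
# constraint that may 'slip' by a small, clinically negligible amount
slip = 0.05
step2 = minimize(
    f2, step1.x, method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda x: step1.fun + slip - f1(x)}],
)
print(step1.x, step1.fun)    # ~[1.0, 0.0], f1* ~ 0
print(step2.x, f1(step2.x))  # f1 degrades by at most `slip` while f2 improves
```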
story_separator_special_tag the authors recently proposed the normal constraint ( nc ) method for generating a set of evenly spaced solutions on a pareto frontier for multiobjective optimization problems . since few methods offer this desirable characteristic , the new method can be of significant practical use in the choice of an optimal solution in a multiobjective setting . this paper 's specific contribution is two-fold . first , it presents a new formulation of the nc method that incorporates a critical linear mapping of the design objectives . this mapping has the desirable property that the resulting performance of the method is entirely independent of the design objectives scales . we address here the fact that scaling issues can pose formidable difficulties . secondly , the notion of a pareto filter is presented and an algorithm thereof is developed . as its name suggests , a pareto filter is an algorithm that retains only the global pareto points , given a set of points in objective space . as is explained in the paper , the pareto filter is useful in the application of the nc and other methods . numerical examples are provided . story_separator_special_tag purpose . to describe a fast projection algorithm for optimizing intensity modulated proton therapy ( impt ) plans and to describe and demonstrate the use of this algorithm in multicriteria impt planning . methods . the authors develop a projection-based solver for a class of convex optimization problems and apply it to impt treatment planning . the speed of the solver permits its use in multicriteria optimization , where several optimizations are performed which span the space of possible treatment plans . the authors describe a plan database generation procedure which is customized to the requirements of the solver . the optimality precision of the solver can be specified by the user . results . the authors apply the algorithm to three clinical cases : a pancreas case , an esophagus case , and a tumor along the rib cage case . detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms ( mosek 's interior point optimizer , primal simplex optimizer , and dual simplex optimizer ) . additionally , the projection solver has almost no memory overhead . conclusions . the speed and guaranteed accuracy of the algorithm make it suitable for story_separator_special_tag in many fields , we come across problems where we want to optimize several conflicting objectives simultaneously . to find a good solution for such multi-objective optimization problems , an approximation of the pareto set is often generated . in this paper , we consider the approximation of pareto sets for problems with three or more convex objectives and with convex constraints . for these problems , sandwich algorithms can be used to determine an inner and outer approximation between which the pareto set is 'sandwiched ' . using these two approximations , we can calculate an upper bound on the approximation error . this upper bound can be used to determine which parts of the approximations must be improved and to provide a quality guarantee to the decision maker . in this paper , we extend higher dimensional sandwich algorithms in three different ways . firstly , we introduce the new concept of adding dummy points to the inner approximation of a pareto set . by using these dummy points , we can determine accurate inner and outer approximations more efficiently , i.e .
, using less time-consuming optimizations . secondly , we introduce a new method story_separator_special_tag we consider the problem of approximating pareto surfaces of convex multicriteria optimization problems by a discrete set of points and their convex combinations . finding the scalarization parameters that optimally limit the approximation error when generating a single pareto optimal solution is a nonconvex optimization problem . this problem can be solved by enumerative techniques but at a cost that increases exponentially with the number of objectives . we present an algorithm for solving the pareto surface approximation problem that is practical with 10 or less conflicting objectives , motivated by an application to radiation therapy optimization . our enumerative scheme is , in a sense , dual to a family of previous algorithms . the proposed technique retains the quality of the best previous algorithm in this class while solving fewer subproblems . a further improvement is provided by a procedure for discarding subproblems based on reusing information from previous solves . the combined effect of the enhancements is empirically demonstrated to reduce the computational expense of solving the pareto surface approximation problem by orders of magnitude . for problems where the objectives have positive curvature , an improved bound on the approximation error is demonstrated using transformations of story_separator_special_tag inherently , imrt treatment planning involves compromising between different planning goals . multi-criteria imrt planning directly addresses this compromising and thus makes it more systematic . usually , several plans are computed from which the planner selects the most promising following a certain procedure . applying pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan . pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms 'selection ' and 'restriction ' . the former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans . they are realized as optimization problems on the so-called plan bundle -- a set constructed from pre-computed plans . they can be approximately reformulated so that their solution time is a small fraction of a second . thus , the user can be provided with immediate feedback regarding his or her decisions . pareto navigation was implemented in the mira navigator software and allows real-time manipulation of the current plan and the set of considered plans . the changes are triggered by simple mouse operations on the so-called navigation star and lead to story_separator_special_tag purpose we completed an implementation of pencil-beam scanning ( pbs ) , a technology whereby a focused beam of protons , of variable intensity and energy , is scanned over a plane perpendicular to the beam axis and in depth . the aim of radiotherapy is to improve the target to healthy tissue dose differential . we illustrate how pbs achieves this aim in a patient with a bulky tumor . methods and materials our first deployment of pbs uses `` broad '' pencil-beams ranging from 20 to 35 mm ( full-width-half-maximum ) over the range interval from 32 to 7 g/cm 2 . such beam-brushes offer a unique opportunity for treating bulky tumors . 
we present a case study of a large ( 4,295 cc clinical target volume ) retroperitoneal sarcoma treated to 50.4 gy relative biological effectiveness ( rbe ) ( presurgery ) using a course of photons and protons to the clinical target volume and a course of protons to the gross target volume . results we describe our system and present the dosimetry for all courses and provide an interdosimetric comparison . discussion the use of pbs for bulky targets reduces the complexity of treatment planning story_separator_special_tag purpose : to introduce a method to simultaneously explore a collection of pareto surfaces . the method will allow radiotherapy treatment planners to interactively explore treatment plans for different beam angle configurations as well as different treatment modalities . methods : the authors assume a convex optimization setting and represent the pareto surface for each modality or given beam set by a set of discrete points on the surface . weighted averages of these discrete points produce a continuous representation of each pareto surface . the authors calculate a set of pareto surfaces and use linear programming to navigate across the individual surfaces , allowing switches between surfaces . the switches are organized such that the plan profits in the requested way , while trying to keep the change in dose as small as possible . results : the system is demonstrated on a phantom pancreas imrt case using 100 different five beam configurations and a multicriteria formulation with six objectives . the system has intuitive behavior and is easy to control . also , because the underlying linear programs are small , the system is fast enough to offer real-time exploration for the pareto surfaces of the given beam story_separator_special_tag we consider pareto surface based multi-criteria optimization for step and shoot imrt planning . by analyzing two navigation algorithms , we show both theoretically and in practice that the number of plans needed to form convex combinations of plans during navigation can be kept small ( much less than the theoretical maximum number needed in general , which is equal to the number of objectives for on-surface pareto navigation ) . therefore a workable approach for directly deliverable navigation in this setting is to segment the underlying pareto surface plans and then enforce the mild restriction that only a small number of these plans are active at any time during plan navigation , thus limiting the total number of segments used in the final plan . story_separator_special_tag the optimization of beam angles in imrt planning is still an open problem , with literature focusing on heuristic strategies and exhaustive searches on discrete angle grids . we show how a beam angle set can be locally refined in a continuous manner using gradient-based optimization in the beam angle space . the gradient is derived using linear programming duality theory . applying this local search to 100 random initial angle sets of a phantom pancreatic case demonstrates the method , and highlights the many-local-minima aspect of the bao problem . due to this function structure , we recommend a search strategy of a thorough global search followed by local refinement at promising beam angle sets . extensions to nonlinear imrt formulations are discussed . 
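several of the surrounding abstracts navigate a pre-computed pareto database by forming convex combinations of discrete plans and re-solving small linear programs as the user moves sliders . the fragment below is a minimal sketch of that navigation step under hypothetical data ( the plan-objective matrix f and the slider caps are illustrative , and scipy's linprog stands in for whatever solver the cited systems use ) .

```python
# minimal sketch of lp-based pareto-surface navigation over a database of
# pre-computed plans; f and the slider caps below are hypothetical.
import numpy as np
from scipy.optimize import linprog

def navigate(f, minimize_idx, caps):
    """f: (n_plans, n_objectives) objective values of the database plans.
    find convex-combination weights w that minimize one objective while
    keeping the interpolated values of the others below user-set caps."""
    n_plans = f.shape[0]
    a_ub = np.array([f[:, j] for j in caps])  # capped ("slider") objectives
    b_ub = np.array([caps[j] for j in caps])
    a_eq = np.ones((1, n_plans))              # weights form a convex combination
    res = linprog(f[:, minimize_idx], A_ub=a_ub, b_ub=b_ub,
                  A_eq=a_eq, b_eq=[1.0], bounds=(0, 1))
    return res.x                              # plan weights

f = np.array([[10.0, 5.0],   # three plans, two objectives
              [6.0, 9.0],
              [8.0, 6.5]])
w = navigate(f, minimize_idx=0, caps={1: 7.0})
print(w, f.T @ w)  # navigated plan lies in the convex hull of the database
```

because each navigation step is a tiny lp over the plan weights , re-solving at interactive rates is cheap , which is what makes the real-time exploration described above feasible .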
story_separator_special_tag purpose : in current intensity-modulated radiation therapy ( imrt ) plan optimization , the focus is on either finding optimal beam angles ( or other beam delivery parameters such as field segments , couch angles , gantry angles ) or optimal beam intensities . in this article we offer a mixed integer programming ( mip ) approach for simultaneously determining an optimal intensity map and optimal beam angles for imrt delivery . using this approach , we pursue an experimental study designed to ( a ) gauge differences in plan quality metrics with respect to different tumor sites and different mip treatment planning models , and ( b ) test the concept of a critical-normal-tissue-ring , a tissue ring of 5 mm thickness drawn around the planning target volume ( ptv ) , and its use for designing conformal plans . methods and materials : our treatment planning models use two classes of decision variables to capture the beam configuration and intensities simultaneously . binary ( 0/1 ) variables are used to capture `` on '' or `` off '' or `` yes '' or `` no '' decisions for each field , and nonnegative continuous variables are used to represent intensities of story_separator_special_tag we view the beam orientation optimization ( boo ) problem in intensity-modulated radiation therapy ( imrt ) treatment planning as a global optimization problem with expensive objective function evaluations . we propose a response surface method that , in contrast with other approaches , allows for the generation of problem data only for promising beam orientations as the algorithm progresses . this enables the consideration of additional degrees of freedom in the treatment delivery , i.e . , many more candidate beam orientations than is possible with existing approaches to boo . this ability allows us to include noncoplanar beams and consider the question of whether or not noncoplanar beams can provide significant improvement in treatment plan quality . we also show empirically that using our approach , we can generate clinically acceptable treatment plans that require fewer beams than are used in current practice . story_separator_special_tag purpose : the purpose of this article is to explore the use of the accelerated exhaustive search strategy for developing and validating methods for optimizing beam orientations for intensity-modulated radiation therapy ( imrt ) . combining beam-angle optimization ( bao ) with intensity distribution optimization is expected to improve the quality of imrt treatment plans . however , bao is one of the most difficult problems to solve adequately because of the huge hyperspace of possible beam configurations ( e.g . , selecting 7 of 36 uniformly spaced coplanar beams would require the intercomparison of 8,347,680 imrt plans ) . methods and materials : an influence vector ( iv ) approximation technique for high-speed estimation of imrt dose distributions was used in combination with a fast gradient search algorithm ( newton's method ) for imrt optimization . in the iv approximation , it is assumed that the change in intensity of a ray ( or bixel ) proportionately changes dose along the ray . evidence is presented that the iv approximation is valid for bao .
the scatter contribution at points away from the ray is accounted for fully in imrt optimization after the optimum beam orientation has been determined story_separator_special_tag purpose : to introduce icycle , a novel algorithm for integrated , multicriterial optimization of beam angles and intensity modulated radiotherapy ( imrt ) profiles . methods : a multicriterial plan optimization with icycle is based on a prescription called a wish-list , containing hard constraints and objectives with ascribed priorities . priorities are ordinal parameters used for relative importance ranking of the objectives . the higher an objective priority is , the higher the probability that the corresponding objective will be met . beam directions are selected from an input set of candidate directions . input sets can be restricted , e.g . , to allow only generation of coplanar plans , or to avoid collisions between patient/couch and the gantry in a noncoplanar setup . obtaining clinically feasible calculation times was an important design criterion for development of icycle . this could be realized by sequentially adding beams to the treatment plan in an iterative procedure . each iteration loop starts with selection of the optimal direction to be added . then , a pareto-optimal imrt plan is generated for the ( fixed ) beam setup that includes all so far selected directions , using a previously published algorithm for story_separator_special_tag intensity-modulated radiation therapy is the technique of delivering radiation to cancer patients by using non-uniform radiation fields from selected angles , with the aim of reducing the intensity of the beams that go through critical structures while reaching the dose prescription in the target volume . two decisions are of fundamental importance : to select the beam angles and to compute the intensity of the beams used to deliver the radiation to the patient . often , these two decisions are made separately : first , the treatment planners , on the basis of experience and intuition , decide the orientation of the beams and then the intensities of the beams are optimized by using an automated software tool . automatic beam angle selection ( also known as beam angle optimization ) is an important problem and is today often based on human experience . in this context , we face the problem of optimizing both the decisions , developing an algorithm which automatically selects the beam angles and computes the beam intensities . we propose a hybrid heuristic method , which combines a simulated annealing procedure with the knowledge of the gradient . gradient information is used to quickly story_separator_special_tag imrt treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions ( or maps ) for each beam angle . the optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes . in this article , we introduce an automated planning system in which we bypass the traditional intensity optimization , and instead directly optimize the shapes and the weights of the apertures . we call this approach direct aperture optimization . this technique allows the user to specify the maximum number of apertures per beam direction , and hence provides significant control over the complexity of the treatment delivery .
this is possible because the machine dependent delivery constraints imposed by the mlc are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step . the leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm . we have tested direct aperture optimization on a variety of patient cases using the egs4/beam monte carlo package for our dose calculation engine . the results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment story_separator_special_tag we consider the problem of intensity-modulated radiation therapy ( imrt ) treatment planning using direct aperture optimization . while this problem has been relatively well studied in recent years , most approaches employ a heuristic approach to the generation of apertures . in contrast , we use an exact approach that explicitly formulates the fluence map optimization ( fmo ) problem as a convex optimization problem in terms of all multileaf collimator ( mlc ) deliverable apertures and their associated intensities . however , the number of deliverable apertures , and therefore the number of decision variables and constraints in the new problem formulation , is typically enormous . to overcome this , we use an iterative approach that employs a subproblem whose optimal solution either provides a suitable aperture to add to a given pool of allowable apertures or concludes that the current solution is optimal . we are able to handle standard consecutiveness , interdigitation and connectedness constraints that may be imposed by the particular mlc system used , as well as jaws-only delivery . our approach has the additional advantage that it can explicitly account for transmission of dose through the part of an aperture that is story_separator_special_tag navigation-based multi-criteria optimization has been introduced to radiotherapy planning in order to allow the interactive exploration of trade-offs between conflicting clinical goals . however , this has been mainly applied to fluence map optimization . the subsequent leaf sequencing step may cause dose discrepancy , leading to human iteration loops in the treatment planning process that multi-criteria methods were meant to avoid . to circumvent this issue , this paper investigates the application of direct aperture optimization methods in the context of multi-criteria optimization . we develop a solution method to directly obtain a collection of apertures that can adequately span the entire pareto surface . to that end , we extend the column generation method for direct aperture optimization to a multi-criteria setting in which apertures that can improve the entire pareto surface are sequentially identified and added to the treatment plan . our proposed solution method can be embedded in a navigation-based multi-criteria optimization framework , in which the treatment planner explores the trade-off between treatment objectives directly in the space of deliverable apertures . our solution method is demonstrated for a paraspinal case where the trade-off between target coverage and spinal-cord sparing is studied . the computational story_separator_special_tag purpose : to make the planning of volumetric modulated arc therapy ( vmat ) faster and to explore the tradeoffs between planning objectives and delivery efficiency . methods : a convex multicriteria dose optimization problem is solved for an angular grid of 180 equi-spaced beams . 
this allows the planner to navigate the ideal dose distribution pareto surface and select a plan of desired target coverage versus organ at risk sparing . the selected plan is then made vmat deliverable by a fluence map merging and sequencing algorithm , which combines neighboring fluence maps based on a similarity score and then delivers the merged maps together , simplifying delivery . successive merges are made as long as the dose distribution quality is maintained . the complete algorithm is called vmerge . results : vmerge is applied to three cases : a prostate , a pancreas , and a brain . in each case , the selected pareto-optimal plan is matched almost exactly with the vmat merging routine , resulting in a high quality plan delivered with a single arc in less than five minutes on average . vmerge offers significant improvements over existing vmat algorithms . the first is the story_separator_special_tag to formulate and solve the fluence-map merging procedure of the recently-published vmat treatment-plan optimization method , called vmerge , as a bi-criteria optimization problem . using an exact merging method rather than the previously-used heuristic , we are able to better characterize the trade-off between the delivery efficiency and dose quality . vmerge begins with a solution of the fluence-map optimization problem with 180 equi-spaced beams that yields the `` ideal '' dose distribution . neighboring fluence maps are then successively merged , meaning that they are added together and delivered as a single map . the merging process improves the delivery efficiency at the expense of deviating from the initial high-quality dose distribution . we replace the original merging heuristic by considering the merging problem as a discrete bi-criteria optimization problem with the objectives of maximizing the treatment efficiency and minimizing the deviation from the ideal dose . we formulate this using a network-flow model that represents the merging problem . since the problem is discrete and thus non-convex , we employ a customized box algorithm to characterize the pareto frontier . the pareto frontier is then used as a benchmark to evaluate the performance of the standard vmerge story_separator_special_tag purpose : to develop a method for inverse volumetric-modulated arc therapy ( vmat ) planning that combines multicriteria optimization ( mco ) with direct machine parameter optimization . the ultimate goal . story_separator_special_tag for a bounded system of linear equalities and inequalities , we show that the np-hard l0-norm minimization problem is completely equivalent to the concave lp-norm minimization problem , for a sufficiently small p. a local solution to the latter problem can be easily obtained by solving a provably finite number of linear programs . computational results frequently leading to a global solution of the l0-minimization problem and often producing sparser solutions than the corresponding l1-solution are given . a similar approach applies to finding minimal l0-solutions of linear programs . story_separator_special_tag l0 norm based signal recovery is attractive in compressed sensing as it can facilitate exact recovery of sparse signals with very high probability . unfortunately , the direct l0 norm minimization problem is np-hard . this paper describes an approximate l0 norm algorithm for sparse representation which preserves most of the advantages of the l0 norm .
the algorithm shows attractive convergence properties , and provides remarkable performance improvement in noisy environments compared to other popular algorithms . the sparse representation algorithm presented is capable of very fast signal recovery , thereby reducing retrieval latency when handling story_separator_special_tag an intensity-modulated radiation therapy ( imrt ) field is composed of a series of segmented beams . it is practically important to reduce the number of segments while maintaining the conformality of the final dose distribution . in this article , the authors quantify the complexity of an imrt fluence map by introducing the concept of sparsity of fluence maps and formulate the inverse planning problem into a framework of compressed sensing . in this approach , the treatment planning is modeled as a multiobjective optimization problem , with one objective on the dose performance and the other on the sparsity of the resultant fluence maps . a pareto frontier is calculated , and the achieved dose distributions associated with the pareto efficient points are evaluated using clinical acceptance criteria . the clinically acceptable dose distribution with the smallest number of segments is chosen as the final solution . the method is demonstrated in the application of fixed-gantry imrt on a prostate patient . the result shows that the total number of segments is greatly reduced while a satisfactory dose distribution is still achieved . with the focus on the sparsity of the optimal solution , the proposed method is story_separator_special_tag purpose : a new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy ( dassim-rt ) has recently been proposed to bridge the gap between imrt and vmat . by increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields , dassim-rt is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency . the fact that dassim-rt utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme . the purpose of this work is to provide a practical solution to the dassim-rt inverse planning problem . methods : the inverse planning problem is formulated as a fluence-map optimization problem with total-variation ( tv ) minimization . a newly released l1-solver , templates for first-order conic solvers ( tfocs ) , was adopted in this work . tfocs achieves faster convergence with less memory usage as compared with conventional quadratic programming ( qp ) for the tv form through the effective use of conic forms , dual-variable updates , and optimal first-order approaches . as such , it is tailored to specifically address the computational challenges of story_separator_special_tag purpose to provide a mathematical approach for quantifying the tradeoff between intensity-modulated radiotherapy ( imrt ) complexity and plan quality . methods and materials we solve a multi-objective program that includes imrt complexity , measured as the number of monitor units ( mu ) needed to deliver the plan using a multileaf collimator , as an objective . clinical feasibility of plans is ensured by the use of hard constraints in the formulation . optimization output is a pareto surface of treatment plans , which allows the tradeoffs between imrt complexity , tumor coverage , and tissue sparing to be observed . paraspinal and lung cases are presented .
results although the amount of possible mu reduction is highly dependent on the difficulty of the underlying treatment plan ( difficult plans requiring a high degree of intensity modulation are more sensitive to mu reduction ) , in some cases the number of mu can be reduced more than twofold with a < 1 % increase in the objective function . conclusions the largely increased number of mu and irradiation time in imrt is sometimes unnecessary . tools like the one presented should be considered for integration into daily clinical practice story_separator_special_tag it is now well understood that ( 1 ) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and ( 2 ) that this can be done by constrained l1 minimization . in this paper , we study a novel method for sparse signal recovery that in many situations outperforms l1 minimization in the sense that substantially fewer measurements are needed for exact recovery . the algorithm consists of solving a sequence of weighted l1-minimization problems where the weights used for the next iteration are computed from the value of the current solution . we present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery , statistical estimation , error correction and image processing . interestingly , superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations , not by reweighting the l1 norm of the coefficient sequence as is common , but by reweighting the l1 norm of the transformed object . an immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique story_separator_special_tag purpose : selection of beam configuration in currently available intensity-modulated radiotherapy ( imrt ) treatment planning systems is still based on trial-and-error search . computer beam orientation optimization has the potential to improve the situation , but its practical implementation is hindered by the excessive computing time associated with the calculation . the purpose of this work is to provide an effective means to speed up the beam orientation optimization by incorporating a priori geometric and dosimetric knowledge of the system and to demonstrate the utility of the new algorithm for beam placement in imrt . methods and materials : beam orientation optimization was performed in two steps . first , the quality of each possible beam orientation was evaluated using beam's-eye-view dosimetrics ( bevd ) developed in our previous study . a simulated annealing algorithm was then employed to search for the optimal set of beam orientations , taking into account the bevd scores of different incident beam directions . during the calculation , sampling of gantry angles was weighted according to the bevd score computed before the optimization . a beam direction with a higher bevd score had a higher probability of being included in the trial story_separator_special_tag purpose to test whether multicriteria optimization ( mco ) can reduce treatment planning time and improve plan quality in intensity-modulated radiotherapy ( imrt ) . methods and materials ten imrt patients ( 5 with glioblastoma and 5 with locally advanced pancreatic cancers ) were logged during the standard treatment planning procedure currently in use at massachusetts general hospital ( mgh ) .
planning durations and other relevant planning information were recorded . in parallel , the patients were planned using an mco planning system , and similar planning time data were collected . the patients were treated with the standard plan , but each mco plan was also approved by the physicians . plans were then blindly reviewed 3 weeks after planning by the treating physician . results in all cases , the treatment planning time was vastly shorter for the mco planning ( average mco treatment planning time was 12 min ; average standard planning time was 135 min ) . the physician involvement time in the planning process increased from an average of 4.8 min for the standard process to 8.6 min for the mco process . in all cases , the mco plan was blindly identified as story_separator_special_tag background : health systems in sub-saharan africa are not prepared for the rapid rise in cancer rates projected in the region over the next decades . more must be understood about the current state of cancer care in this region to target improvement efforts . yaounde general hospital ( ygh ) currently is the only site in cameroon ( population : 18.8 million ) where adults can receive chemotherapy from trained medical oncologists . the experiences of patients at this facility represent a useful paradigm for describing cancer care in this region . methods : in july and august 2010 , our multidisciplinary team conducted closed-end interviews with 79 consecutive patients who had confirmed breast cancer , kaposi sarcoma , or lymphoma . results : thirty-five percent of patients waited > 6 months to speak to a health care provider after the first sign of their cancer . the delay between first consultation with a health care provider and receipt of a cancer diagnosis was > 3 months for 47 % of patients . the total delay from the first sign of cancer to receipt of the correct diagnosis was > 6 months for 63 % of patients . twenty-three story_separator_special_tag an algorithm , which calculates the motions of the collimator jaws required to generate a given arbitrary intensity profile , is presented . the intensity profile is assumed to be piecewise linear , i.e. , to consist of segments of straight lines . the jaws move unidirectionally and continuously with variable speed during radiation delivery . during each segment , at least one of the jaws is set to move at the maximum permissible speed . the algorithm is equally applicable for multileaf collimators ( mlc ) , where the transmission through the collimator leaves is taken into account . examples are presented for different intensity profiles with varying degrees of complexity . typically , the calculation takes less than 10 ms on a vax 8550 computer . story_separator_special_tag intensity-modulated radiation therapy ( imrt ) generally requires complex equipment for delivery . just one study has investigated the use of 'jaws-only ' imrt with not discouraging conclusions . however , the monitor-unit efficiency is still considered to be too low compared with the use of a multileaf collimator ( mlc ) . in this paper a new imrt delivery technique is proposed which does not require the mlc and is only moderately more complex than the use of jaws alone . in this method a secondary collimator ( mask ) is employed together with the jaws . this mask may translate parallel to the jaw axes . two types of mask have been investigated . one is a regular binary-attenuation pattern and the other is a random binary-attenuation pattern . 
studies show that the monitor-unit efficiency of this 'jaws-plus-mask ' technique , with a random binary mask , is more than double that of the jaws-only technique for typical two-dimensional intensity-modulated beams of size 10 × 10 bixels² and with a peak value of 10 mu ( or quantized into 10 fluence increments ) . for two-dimensional intensity-modulated beams of size 15 × 15 bixels² with a peak value story_separator_special_tag using direct aperture optimization , we have developed an inverse planning approach that is capable of producing efficient intensity modulated radiotherapy ( imrt ) treatment plans that can be delivered without a multileaf collimator . this `` jaws-only '' approach to imrt uses a series of rectangular field shapes to achieve a high degree of intensity modulation from each beam direction . direct aperture optimization is used to directly optimize the jaw positions and the relative weights assigned to each aperture . because the constraints imposed by the jaws are incorporated into the optimization , the need for leaf sequencing is eliminated . results are shown for five patient cases covering three treatment sites : pancreas , breast , and prostate . for these cases , between 15 and 20 jaws-only apertures were required per beam direction in order to obtain conformal imrt treatment plans . each plan was delivered to a phantom , and absolute and relative dose measurements were recorded . the typical treatment time to deliver these plans was 18 min . the jaws-only approach provides an additional imrt delivery option for clinics without a multileaf collimator .
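the l0 / l1 sparsity results cited above ( including the reweighted-l1 scheme ) can be illustrated compactly . the sketch below runs iteratively reweighted l1 minimization on a toy sparse-recovery problem ; the measurement matrix , the epsilon smoothing constant , and the iteration count are illustrative assumptions , not values from the cited papers .

```python
# minimal sketch of iteratively reweighted l1 minimization (candes-wakin-boyd
# style) on a hypothetical sparse-recovery problem, using cvxpy.
import cvxpy as cp
import numpy as np

def reweighted_l1(a, b, iters=5, eps=0.1):
    n = a.shape[1]
    w = np.ones(n)            # first pass is plain l1 minimization
    x = cp.Variable(n)
    for _ in range(iters):
        cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))),
                   [a @ x == b]).solve()
        # small coefficients get large weights and are pushed toward zero,
        # so the weighted l1 objective better mimics counting nonzeros
        w = 1.0 / (np.abs(x.value) + eps)
    return x.value

rng = np.random.default_rng(1)
a = rng.standard_normal((30, 80))          # 30 measurements, 80 unknowns
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.8]     # 3-sparse ground truth
x_hat = reweighted_l1(a, a @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 1e-3))  # ideally recovers {3, 17, 42}
```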
conservation equations . thermodynamics of irreversible processes : the linear region . nonlinear thermodynamics . systems involving chemical reactions and diffusion-stability . mathematical tools . simple autocatalytic models . some further aspects of dissipative structures and self-organization phenomena . general comments . birth and death descriptions of fluctuations : nonlinear master equation . self-organization in chemical reactions . regulatory processes at the subcellular level . regulatory processes at the cellular level . cellular differentiation and pattern formation . thermodynamics of evolution . thermodynamics of ecosystems . perspectives and concluding remarks . references . index . story_separator_special_tag complexity in nature is astounding yet the explanation lies in the fundamental laws of physics . the second law of thermodynamics and the principle of least action are the two theories of science that have always stood the test of time . in this article , we use these fundamental principles as tools to understand how and why things happen . in order to achieve that , it is of absolute necessity to define things precisely yet preserving their applicability in a broader sense . we try to develop precise , mathematically rigorous definitions of the commonly used terms in this context , such as action , organization , system , process , etc. , and in parallel argue the behavior of the system from the first principles . this article , thus , acts as a mathematical framework for more discipline-specific theories . story_separator_special_tag a comprehensive review of spatiotemporal pattern formation in systems driven away from equilibrium is presented , with emphasis on comparisons between theory and quantitative experiments . examples include patterns in hydrodynamic systems such as thermal convection in pure fluids and binary mixtures , taylor-couette flow , parametric-wave instabilities , as well as patterns in solidification fronts , nonlinear optics , oscillatory chemical reactions and excitable biological media . the theoretical starting point is usually a set of deterministic equations of motion , typically in the form of nonlinear partial differential equations . these are sometimes supplemented by stochastic terms representing thermal or instrumental noise , but for macroscopic systems and carefully designed experiments the stochastic forces are often negligible . an aim of theory is to describe solutions of the deterministic equations that are likely to be reached starting from typical initial conditions and to persist at long times . a unified description is developed , based on the linear instabilities of a homogeneous state , which leads naturally to a classification of patterns in terms of the characteristic wave vector q0 and frequency ω0 of the instability . type i_s systems ( ω0 = 0 , q0 ≠ 0 ) are stationary story_separator_special_tag the theory of self-organization and adaptivity has grown out of a variety of disciplines , including thermodynamics , cybernetics and computer modelling . the present article reviews its most important concepts and principles . it starts with an intuitive overview , illustrated by the examples of magnetization and bénard convection , and concludes with the basics of mathematical modelling . self-organization can be defined as the spontaneous creation of a globally coherent pattern out of local interactions .
because of its distributed character , this organization tends to be robust , resisting perturbations . the dynamics of a self-organizing system is typically non-linear , because of circular or feedback relations between the components . positive feedback leads to an explosive growth , which ends when all components have been absorbed into the new configuration , leaving the system in a stable , negative feedback state . non-linear systems have in general several stable states , and this number tends to increase ( bifurcate ) as an increasing input of energy pushes the system farther from its thermodynamic equilibrium . to adapt to a changing environment , the system needs a variety of stable states that is large enough to react story_separator_special_tag isolated systems tend to evolve towards equilibrium , a special state that has been the focus of many-body research for a century . yet much of the richness of the world around us arises from conditions far from equilibrium . phenomena such as turbulence , earthquakes , fracture , and life itself occur only far from equilibrium . subjecting materials to conditions far from equilibrium leads to otherwise unattainable properties . for example , rapid cooling is a key process in manufacturing the strongest metallic alloys and toughest plastics . processes that occur far from equilibrium also create some of the most intricate structures known , from snowflakes to the highly organized structures of life . while much is understood about systems at or near equilibrium , we are just beginning to uncover the basic principles governing systems far from equilibrium . story_separator_special_tag knowledge of the statistical properties of chemical systems at equilibrium can be very helpful for understanding their behavior . however , much of the world surrounding us is not in equilibrium . in his perspective , egolf explains how equilibrium properties , such as free energies , can nevertheless be determined based on nonequilibrium data . he highlights the report by liphardt et al . , who show experimentally that by measuring the work required to unfold an rna molecule repeatedly as a function of its extension , the free energy of unfolding can be determined . story_separator_special_tag an account of the experimental discovery of complex dynamical behavior in the continuous-flow , stirred tank reactor ( cstr ) belousov-zhabotinsky ( bz ) reaction , as well as numerical simulations based on the bz chemistry are given . the most recent four- and three-variable models that are deduced from the well-accepted , updated chemical mechanism of the bz reaction and which exhibit robust chaotic states are summarized . chaos has been observed in experiments and simulations embedded in the regions of complexities at both low and high flow rates . the deterministic nature of the observed aperiodicities at low flow rates is unequivocally established . however , controversy still remains in the interpretation of certain aperiodicities observed at high flow rates . story_separator_special_tag active systems can produce a far greater variety of ordered patterns than conventional equilibrium systems . in particular , transitions between disorder and either polar- or nematically ordered phases have been predicted and observed in two-dimensional active systems . however , coexistence between phases of different types of order has not been reported . 
we demonstrate the emergence of dynamic coexistence of ordered states with fluctuating nematic and polar symmetry in an actomyosin motility assay . combining experiments with agent-based simulations , we identify sufficiently weak interactions that lack a clear alignment symmetry as a prerequisite for coexistence . thus , the symmetry of macroscopic order becomes an emergent and dynamic property of the active system . these results provide a pathway by which living systems can express different types of order by using identical building blocks . story_separator_special_tag traditional approaches to materials synthesis have largely relied on uniform , equilibrated phases leading to static condensed-matter structures ( e.g. , monolithic single crystals ) . departures from these modes of materials design are pervasive in biology . from the folding of proteins to the reorganization of self-regulating cytoskeletal networks , biological materials reflect a major shift in emphasis from equilibrium thermodynamic regimes to out-of-equilibrium regimes . here , equilibrium structures , determined by global free-energy minima , are replaced by highly structured dynamical states that are out of equilibrium , calling into question the utility of global thermodynamic energy minimization as a first-principles approach . thus , the creation of new materials capable of performing life-like functions such as complex and cooperative processes , self-replication , and self-repair , will ultimately rely upon incorporating biological principles of spatiotemporal modes of self-assembly . elucidating fundamental principles for the design of such out-of-equilibrium dynamic self-assembling materials systems is the focus of this issue of mrs bulletin . story_separator_special_tag living systems are open , out-of-equilibrium thermodynamic entities , that maintain order by locally reducing their entropy . aging is a process by which these systems gradually lose their ability to maintain their out-of-equilibrium state , as measured by their free-energy rate density , and hence , their order . thus , the process of aging reduces the efficiency of those systems , making them fragile and less adaptive to the environmental fluctuations , gradually driving them towards the state of thermodynamic equilibrium . in this paper , we discuss the various metrics that can be used to understand the process of aging from a complexity science perspective . among all the metrics that we propose , action efficiency , is observed to be of key interest as it can be used to quantify order and self-organization in any physical system . based upon our arguments , we present the dependency of other metrics on the action efficiency of a system , and also argue as to how each of the metrics , influences all the other system variables . in order to support our claims , we draw parallels between technological progress and biological growth . such parallels are story_separator_special_tag this book has discussed some of the most important aspects in the current state of the sciences of complexity , self-organization , and evolution . a central theme in this field is the search for mechanisms that can explain the self-organization of complex systems . the quest for the main guiding principles for causal explanations can be viewed as a very timely and central aspect of this search . this book is devoted to such topics and is a necessary read for anyone working at the forefront of complexity , self-organization , and evolution . 
as an addition to the lines of reasoning in this book , we focus on a quantitative description of self-organization and evolution . to create a measure of a degree of organization , we have applied the principle of least action from physics . action for a trajectory is defined as the integral of the difference between kinetic and potential energy over time . this principle states that the equations of motion in nature are obeyed when action is minimized . in complex systems , there are constraints to motion that prevent the agents from moving along the paths of least action . using free story_separator_special_tag in this paper , we model the bus networks of six major indian cities as graphs in l-space , and evaluate their various statistical properties . while airline and railway networks have been extensively studied , a comprehensive study on the structure and growth of bus networks is lacking . in india , where bus transport plays an important role in day-to-day commutation , it is of significant interest to analyze its topological structure and answer basic questions on its evolution , growth , robustness and resiliency . although the common feature of small-world property is observed , our analysis reveals a wide spectrum of network topologies arising due to significant variation in the degree-distribution patterns in the networks . we also observe that these networks , although robust and resilient to random attacks , are particularly degree-sensitive . unlike real-world networks , such as internet , www and airline , that are virtual , bus networks are physically constrained . our findings , therefore , throw light on the evolution of such geographically constrained networks that will help us in designing more efficient bus networks in the future . story_separator_special_tag in this paper , we study the structural properties of the complex bus network of chennai . we formulate this extensive network structure by identifying each bus stop as a node , and a bus which stops at any two adjacent bus stops as an edge connecting the nodes . rigorous statistical analysis of this data shows that the chennai bus network displays small-world properties and a scale-free degree distribution with the power-law exponent γ = 3.8 . chennai is one of the metropolitan cities in india with a structured and a close-knit bus transport network . the chennai bus network ( cbn ) is operated by the metropolitan transport corporation ( mtc ) , a state government undertaking . spanning an area of 3,929 sq . km and with over 800 routes sprawling across entire chennai , this extensive network also boasts of the largest bus terminus in asia . with the population of the city being the sixth largest in the country , this medium of transport is most widely used for day-to-day commutation . the reason that the bus network , in general , achieves this favourable status lies primarily in two story_separator_special_tag in recent times , the domain of network science has become extremely useful in understanding the underlying structure of various real-world networks and to answer non-trivial questions regarding them . in this study , we rigorously analyze the statistical properties of the bus networks of six major indian cities as graphs in l- and p-space , using tools from network science . although public transport networks , such as airline and railway networks have been extensively studied , a comprehensive study on the structure and growth of bus networks is lacking .
in india , where bus networks play an important role in day-to-day commutation , it is of significant interest to analyze their topological structure , and answer some of the basic questions on their evolution , growth , robustness and resiliency . we start from an empirical analysis of these networks , and determine their principal characteristics in terms of the complex network theory . the common features of small-world property and heavy tails in degree-distribution plots are observed in all the networks studied . our analysis further reveals a wide spectrum of network topologies arising due to an interplay between preferential and random attachment of nodes . story_separator_special_tag bus transportation is the most convenient and cheapest way of public transportation in indian cities . due to cost-effectiveness and wide reachability , buses bring people to their destinations every day . although the bus transportation has numerous advantages over other ways of public transportation , this mode of transportation also poses a serious threat of spreading contagious diseases throughout the city . it is extremely difficult to predict the extent and spread of such an epidemic . earlier studies have focused on the contagion processes on scale-free network topologies ; whereas , real-world networks such as bus networks exhibit a wide-spectrum of network topology . therefore , we aim in this study to understand this complex dynamical process of epidemic outbreak and information diffusion on the bus networks for six different indian cities using si and sir models . we identify epidemic thresholds for these networks which help us in controlling outbreaks by developing node-based immunization techniques . story_separator_special_tag far-from-equilibrium systems are ubiquitous in nature . they are also rich in terms of diversity and complexity . therefore , it is an intellectual challenge to be able to understand the physics of far-from-equilibrium phenomena . in this article , we revisit a standard tabletop experiment , the rayleigh-bénard convection , to explore some fundamental questions and present a new perspective from a first-principles point of view . we address how nonequilibrium fluctuations differ from equilibrium fluctuations , how emergence of order out of equilibrium breaks symmetries in the system , and how free energy of a system gets locally bifurcated to operate a carnot-like engine to maintain order . the exploration and investigation of these nontrivial questions are the focus of this article . story_separator_special_tag a challenge in fundamental physics and especially in thermodynamics is to understand emergent order in far-from-equilibrium systems . while at equilibrium , temperature plays the role of a key thermodynamic variable whose uniformity in space and time defines the equilibrium state the system is in , this is not the case in a far-from-equilibrium driven system . when energy flows through a finite system at steady-state , temperature takes on a time-independent but spatially varying character . in this study , the convection patterns of a rayleigh-bénard fluid cell at steady-state are used as a prototype system where the temperature profile and fluctuations are measured spatio-temporally .
the thermal data is obtained by performing high-resolution real-time infrared calorimetry on the convection system as it is first driven out-of-equilibrium when the power is applied , achieves steady-state , and then as it gradually relaxes back to room temperature equilibrium when the power is removed . our study provides new experimental data on the non-trivial nature of thermal fluctuations when stable complex convective structures emerge . the thermal analysis of these convective cells at steady-state further yields local equilibrium-like statistics . in conclusion , these results correlate the spatial ordering of the story_separator_special_tag in this paper we present a detailed description of the statistical and computational techniques that were employed to study a driven far-from-equilibrium steady-state rayleigh-bénard system in the non-turbulent regime ( $ra \le 3500$ ) . in our previous work on the rayleigh-bénard convection system we try to answer two key open problems that are of great interest in contemporary physics : ( i ) how does an out-of-equilibrium steady-state differ from an equilibrium state and ( ii ) how do we explain the spontaneous emergence of stable structures and simultaneously interpret the physical notion of temperature when out-of-equilibrium . we believe that this paper will offer a useful repository of the technical details for a first principles study of similar kind . in addition , we are also hopeful that our work will spur considerable interest in the community which will lead to the development of more sophisticated and novel techniques to study far-from-equilibrium behavior . story_separator_special_tag a chimera state is a spatio-temporal pattern in a network of identical coupled oscillators in which synchronous and asynchronous oscillation coexist . this state of broken symmetry , which usually coexists with a stable spatially symmetric state , has intrigued the nonlinear dynamics community since its discovery in the early 2000s . recent experiments have led to increasing interest in the origin and dynamics of these states . here we review the history of research on chimera states and highlight major advances in understanding their behaviour . story_separator_special_tag ising discussed the following model of a ferromagnetic body : assume n elementary magnets of moment μ to be arranged in a regular lattice ; each of them is supposed to have only two possible orientations , which we call positive and negative . assume further that there is an interaction energy u for each pair of neighbouring magnets of opposite direction . further , there is an external magnetic field of magnitude h such as to produce an additional energy of -μh ( +μh ) for each magnet with positive ( negative ) direction . story_separator_special_tag the aim of this chapter is to present examples from the physical sciences where monte carlo methods are widely applied . here we focus on examples from statistical physics and discuss two of the most studied models , the ising model and the potts model for the interaction among classical spins . these models have been widely used for studies of phase transitions . story_separator_special_tag the transfer matrix methodology is proposed as a systematic tool for the statistical mechanical description of dna-protein-drug binding involved in gene regulation .
we show that a genetic system of several cis-regulatory modules is calculable using this method , considering explicitly the site-overlapping , competitive , cooperative binding of regulatory proteins , their multilayer assembly and dna looping . in the methodological section , the matrix models are solved for the basic types of short- and long-range interactions between dna-bound proteins , drugs and nucleosomes . we apply the matrix method to gene regulation at the o_r operator of phage λ . the transfer matrix formalism allowed the description of the λ-switch at a single-nucleotide resolution , taking into account the effects of a range of inter-protein distances . our calculations confirm previously established roles of the contact interactions between ci , cro and rnap . concerning long-range interactions , we show that while the dna loop between the o_r and o_l operators is important at the lysogenic ci concentrations , the interference between the adjacent promoters p_r and p_rm becomes more important at small ci concentrations . a large change in the expression pattern may arise in this regime due to story_separator_special_tag computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components ( or neurons ) . the physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system . a model of such a system is given , based on aspects of neurobiology but readily adapted to integrated circuits . the collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size . the algorithm for the time evolution of the state of the system is based on asynchronous parallel processing . additional emergent collective properties include some capacity for generalization , familiarity recognition , categorization , error correction , and time sequence retention . the collective properties are only weakly sensitive to details of the modeling or the failure of individual devices . story_separator_special_tag bacterial chemotaxis is controlled by the signaling of a cluster of receptors . a cooperative model is presented , in which coupling between neighboring receptor dimers enhances the sensitivity with which stimuli can be detected , without diminishing the range of chemoeffector concentration over which chemotaxis can operate . individual receptor dimers have two stable conformational states : one active , one inactive . noise gives rise to a distribution between these states , with the probability influenced by ligand binding , and also by the conformational states of adjacent receptor dimers . the two-state model is solved , based on an equivalence with the ising model in a randomly distributed magnetic field . the model has only two effective parameters , and unifies a number of experimental findings . according to the value of the parameter comparing coupling and noise , the signal can be arbitrarily sensitive to changes in the fraction of receptor dimers to which the ligand is bound . the counteracting effect of a change of methylation level is mapped to an induced field in the ising model . by returning the activity to the prestimulus level , this adapts the receptor cluster to a new story_separator_special_tag in all organisms , dna molecules are tightly compacted into a dynamic 3d nucleoprotein complex .
in bacteria , this compaction is governed by the family of nucleoid-associated proteins ( naps ) . under conditions of stress and starvation , an nap called dps ( dna-binding protein from starved cells ) becomes highly up-regulated and can massively reorganize the bacterial chromosome . although static structures of dps-dna complexes have been documented , little is known about the dynamics of their assembly . here , we use fluorescence microscopy and magnetic-tweezers measurements to resolve the process of dna compaction by dps . real-time in vitro studies demonstrated a highly cooperative process of dps binding characterized by an abrupt collapse of the dna extension , even under applied tension . surprisingly , we also discovered a reproducible hysteresis in the process of compaction and decompaction of the dps-dna complex . this hysteresis is extremely stable over hour-long timescales despite the rapid binding and dissociation rates of dps . a modified ising model is successfully applied to fit these kinetic features . we find that long-lived hysteresis arises naturally as a consequence of protein cooperativity in large complexes and provides a useful mechanism story_separator_special_tag the partition function of a two-dimensional `` ferromagnetic '' with scalar `` spins '' ( ising model ) is computed rigorously for the case of vanishing field . the eigenwert problem involved in the corresponding computation for a long strip crystal of finite width ( $ n $ atoms ) , joined straight to itself around a cylinder , is solved by direct product decomposition ; in the special case $ n = \infty $ an integral replaces a sum . the choice of different interaction energies ( $ \pm j , \pm j' $ ) in the ( 0 1 ) and ( 1 0 ) directions does not complicate the problem . the two-way infinite crystal has an order-disorder transition at a temperature $ t = t_c $ given by the condition $ \sinh ( \frac{ 2j }{ k t_c } ) \sinh ( \frac{ 2j' }{ k t_c } ) = 1 $ . story_separator_special_tag the kuramoto model describes a large population of coupled limit-cycle oscillators whose natural frequencies are drawn from some prescribed distribution . if the coupling strength exceeds a certain threshold , the system exhibits a phase transition : some of the oscillators spontaneously synchronize , while others remain incoherent . the mathematical analysis of this bifurcation has proved both problematic and fascinating . we review 25 years of research on the kuramoto model , highlighting the false turns as well as the successes , but mainly following the trail leading from kuramoto 's work to crawford 's recent contributions . it is a lovely winding road , with excursions through mathematical biology , statistical physics , kinetic theory , bifurcation theory , and plasma physics . story_separator_special_tag synchronization phenomena in large populations of interacting elements are the subject of intense research efforts in physical , biological , chemical , and social systems . a successful approach to the problem of synchronization consists of modeling each member of the population as a phase oscillator . in this review , synchronization is analyzed in one of the most representative models of coupled phase oscillators , the kuramoto model .
a rigorous mathematical treatment , specific numerical methods , and many variations and extensions of the original model that have appeared in the last few years are presented . relevant applications of the model in different contexts are also included . story_separator_special_tag in this article , we have generalised the kuramoto model to allow one to model neuronal synchronisation more appropriately . the generalised version allows for different connective arrangements , time-varying natural frequencies and time-varying coupling strengths to be realised within the framework of the original kuramoto model . by incorporating the above-mentioned features into the original kuramoto model one can allow for the adaptive nature of neurons in the brain to be accommodated . extensive tests using the generalised kuramoto model were performed on an n = 4 coupled oscillator network . examination of how different connective arrangements , time-varying natural frequencies and time-varying coupling strengths affected synchronisation separately and in combination is reported . the effects on synchronisation for large n are also reported . story_separator_special_tag oscillating chemical reactions result from complex periodic changes in the concentration of the reactants . in spatially ordered ensembles of candle flame oscillators the fluctuations in the ratio of oxygen atoms with respect to that of carbon , hydrogen and nitrogen produce an oscillation in the visible part of the flame related to the energy released per unit mass of oxygen . thus , the products of the reaction vary in concentration as a function of time , giving rise to an oscillation in the amount of soot and radiative emission . synchronisation of interacting dynamical sub-systems occurs as arrays of flames that act as master and slave oscillators , with groups of candles numbering greater than two , creating a synchronised motion in three dimensions . in a ring of candles the visible parts of each flame move together , up and down and back and forth , in a manner that resembles an act of worship . here this effect is shown for rings of flames which collectively empower a central flame to pulse to greater heights . in contrast , situations where the central flames are suppressed are also found . these phenomena lead to in-phase synchronised states emerging story_separator_special_tag the field of far-from-equilibrium thermodynamics is often quoted as work in progress despite the extensive depth of the equilibrium-based theory . one of the major shortcomings of equilibrium-based theory is its inability to explain the emergence of order . some examples of far-from-equilibrium systems include reaction-diffusion systems , ordered patterns in solids such as snowflakes , and alloys that take on a stronger molecular structure when heated . this paper looks into some of the standard far-from-equilibrium systems such as rayleigh-bénard cells , the kuramoto model , the ising model , spatial population growth and heat flow through a simple solid . stochastic simulations were carried out in order to explicitly compute the variations in a system 's intensive properties spatially and temporally . as all of these systems evolved into a steady-state they exhibited certain similarities that were characteristically different from those of the system when at equilibrium . one of the striking differences was a non-gaussian probability distribution of the thermodynamic parameters when driven far-from-equilibrium .
this spread of thermodynamic values across systems serves as the common connection as order emerges in out-of-equilibrium systems at steady-state . story_separator_special_tag this review summarizes results for rayleigh-bénard convection that have been obtained over the past decade or so . it concentrates on convection in compressed gases and gas mixtures with prandtl numbers near one and smaller . in addition to the classical problem of a horizontal stationary fluid layer heated from below , it also briefly covers convection in such a layer with rotation about a vertical axis , with inclination , and with modulation of the vertical acceleration . story_separator_special_tag recent advances in the understanding of rayleigh-bénard convection and turbulence are reviewed in light of work using liquid helium . the discussion includes both experiments which have probed the steady flows preceding time dependence and experiments which have been directed toward understanding the ways in which turbulence evolves . comparison is made where appropriate to the many important contributions which have been obtained using room-temperature fluids , and a discussion is given explaining the advantages of cryogenic techniques . brief reviews are given for recent experimental investigations of convection in $ ^3\mathrm{he} $ - $ ^4\mathrm{he} $ mixtures -- in both the superfluid and the normal states -- and investigations of convection in rotating layers of liquid helium . story_separator_special_tag part i. bénard convection and rayleigh-bénard convection : 1. bénard 's experiments 2. linear theory of rayleigh-bénard convection 3. theory of surface tension driven bénard convection 4. surface tension driven bénard convection experiments 5. linear rayleigh-bénard convection experiments 6. supercritical rayleigh-bénard convection experiments 7. nonlinear theory of rayleigh-bénard convection 8. miscellaneous topics part ii . taylor vortex flow : 9. circular couette flow 10. rayleigh 's stability criterion 11. g. i. taylor 's work 12. other early experiments 13. supercritical taylor vortex experiments 14. experiments with two independently rotating cylinders 15. nonlinear theory of taylor vortices 16. miscellaneous topics . story_separator_special_tag our unifying theory of turbulent thermal convection [ grossmann and lohse , j. fluid mech . 407 , 27 ( 2000 ) ; phys . rev . lett . 86 , 3316 ( 2001 ) ; phys . rev . e 66 , 016305 ( 2002 ) ] is revisited , considering the role of thermal plumes for the thermal dissipation rate and addressing the local distribution of the thermal dissipation rate , which had numerically been calculated by verzicco and camussi [ j. fluid mech . 477 , 19 ( 2003 ) ; eur . phys . j. b 35 , 133 ( 2003 ) ] . predictions for the local heat flux and for the temperature and velocity fluctuations as functions of the rayleigh and prandtl numbers are offered . we conclude with a list of suggestions for measurements that seem suitable to verify or falsify our present understanding of heat transport and fluctuations in turbulent thermal convection . story_separator_special_tag turbulent rayleigh-bénard convection displays a large-scale order in the form of rolls and cells on lengths larger than the layer height once the fluctuations of temperature and velocity are removed . these turbulent superstructures are reminiscent of the patterns close to the onset of convection .
here we report numerical simulations of turbulent convection in fluids at prandtl numbers ranging from 0.005 to 70 and for rayleigh numbers up to $ 10^7 $ . we identify characteristic scales and times that separate the fast , small-scale turbulent fluctuations from the gradually changing large-scale superstructures . the characteristic scales of the large-scale patterns , which change with prandtl and rayleigh number , are also correlated with the boundary layer dynamics , and in particular the clustering of thermal plumes at the top and bottom plates . our analysis suggests a scale separation and thus the existence of a simplified description of the turbulent superstructures in geo- and astrophysical settings . story_separator_special_tag characteristic properties of turbulent rayleigh-bénard convection in the bulk and the boundary layers are summarized for a wide range of rayleigh and prandtl numbers , with a specific emphasis on low-prandtl-number convection . story_separator_special_tag a data writer is described comprising : a memory to store at least one amount of source data that is to be written to a data storage medium ; a processor to arrange the source data into subsets and generate ecc data in respect of each subset , wherein the source data and the associated ecc data are to be written to a data storage medium via a plurality of individual data channels , and wherein the ecc data comprises at least a first degree of ecc protection having a first level of redundancy in respect of a first subset and a second degree of ecc protection having a second level of redundancy in respect of a second subset ; a plurality of data writing elements , each to write data from an associated data channel , concurrently with the writing by the other data writing elements of data from respective data channels , to a data storage medium ; and a controller , to control the writing by the data writing elements of the source data and the associated ecc data to the data storage medium . story_separator_special_tag many combinatorial optimization problems can be mapped to finding the ground states of the corresponding ising hamiltonians . the physical systems that can solve optimization problems in this way , namely ising machines , have been attracting more and more attention recently . our work shows that ising machines can be realized using almost any nonlinear self-sustaining oscillator , with logic values encoded in its phase . many types of such oscillators are readily available for large-scale integration , with potential for high-speed and low-power operation . in this paper , we describe the operation and mechanism of oscillator-based ising machines . the feasibility of our scheme is demonstrated through several examples in simulation and hardware , among which a simulation study reports average solutions exceeding those from state-of-the-art ising machines on a benchmark combinatorial optimization problem of size 2000 . story_separator_special_tag we introduce a model of interacting lattices at different resolutions driven by the two-dimensional ising dynamics with a nearest-neighbor interaction . we study this model both with tools borrowed from equilibrium statistical mechanics and with tools from non-equilibrium thermodynamics . our findings show that this model keeps the signature of the equilibrium phase transition .
the critical temperature of the equilibrium models corresponds to the state maximizing the entropy and delimits two out-of-equilibrium regimes , one satisfying the onsager relations for systems close to equilibrium and one resembling convective turbulent states . since the model preserves the entropy and energy fluxes in the scale space , it seems a good candidate for parametric studies of out-of-equilibrium turbulent systems . story_separator_special_tag nonequilibrium thermodynamics has shown its applicability in a wide variety of different situations pertaining to fields such as physics , chemistry , biology , and engineering . as successful as it is , however , its current formulation considers only systems close to equilibrium , those satisfying the so-called local equilibrium hypothesis . here we show that diffusion processes that occur far away from equilibrium can be viewed as being at local equilibrium in a space that includes all the relevant variables in addition to the spatial coordinate . in this way , nonequilibrium thermodynamics can be used and the difficulties and ambiguities associated with the lack of a thermodynamic description disappear . we analyze explicitly the inertial effects in diffusion and outline how the main ideas can be applied to other situations . story_separator_special_tag possible universal dynamics of a many-body system far from thermal equilibrium are explored . a focus is set on metastable non-thermal states exhibiting critical properties such as self-similarity and independence of the details of how the respective state has been reached . it is proposed that universal dynamics far from equilibrium can be tuned to exhibit a dynamical phase transition where these critical properties change qualitatively . this is demonstrated for the case of a superfluid two-component bose gas exhibiting different types of long-lived but non-thermal critical order . scaling exponents controlled by the ratio of experimentally tuneable coupling parameters offer themselves as natural smoking guns . the results shed light on the wealth of universal phenomena expected to exist in the far-from-equilibrium realm . story_separator_special_tag usually , in a nonequilibrium setting , a current brings mass from the highest density regions to the lowest density ones . although rare , the opposite phenomenon ( known as `` uphill diffusion '' ) has also been observed in multicomponent systems , where it appears as an artificial effect of the interaction among components . we show here that uphill diffusion can be a substantial effect , i.e. , it may occur even in single-component systems as a consequence of some external work . to this aim we consider the two-dimensional ferromagnetic ising model in contact with two reservoirs that fix , at the left and the right boundaries , magnetizations of the same magnitude but of opposite signs . we provide numerical evidence that a class of nonequilibrium steady states exists in which , by tuning the reservoir magnetizations , the current in the system changes from `` downhill '' to `` uphill '' . moreover , we also show that , in such a nonequilibrium setup , the current vanishes when the reservoir magnetization attains a value approaching , in the large volume limit , the magnetization of the equilibrium dynamics , thus establishing a relation between equilibrium
|
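The Ising model recurs throughout the row above: Ising's original definition (elementary two-state magnets with a pairwise interaction energy and an external field), the Monte Carlo chapter, Onsager's exact solution, and the Ising-machine and uphill-diffusion abstracts. As a minimal illustration of how such models are usually sampled, the sketch below runs single-spin-flip Metropolis dynamics on a small two-dimensional lattice and compares the resulting magnetization against Onsager's critical temperature; the lattice size, sweep count, and function name are illustrative choices, not taken from any of the cited papers.

```python
import numpy as np

def metropolis_ising(L=24, T=2.0, J=1.0, h=0.0, sweeps=600, seed=0):
    """Single-spin-flip Metropolis sampling of the 2D Ising model
    E = -J * sum_<ij> s_i s_j - h * sum_i s_i on an L x L periodic
    lattice (k_B = 1). Returns <|m|> over the second half of the run."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # Sum of the four nearest neighbours (periodic boundaries).
            nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2.0 * s[i, j] * (J * nn + h)   # energy cost of flipping s[i, j]
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
        if sweep >= sweeps // 2:                # discard first half as burn-in
            mags.append(abs(s.mean()))
    return float(np.mean(mags))

# Onsager's condition sinh(2J/kTc) * sinh(2J'/kTc) = 1 with J = J' gives
# Tc = 2J / ln(1 + sqrt(2)) ~ 2.269; |m| should be large below Tc and
# near zero above it, up to finite-size effects.
Tc = 2.0 / np.log(1.0 + np.sqrt(2.0))
for T in (1.5, Tc, 3.0):
    print(f"T = {T:.3f}  <|m|> ~ {metropolis_ising(T=T):.3f}")
```

Single-flip dynamics slows down badly near the critical point, which is one reason the Monte Carlo literature surveyed above also develops cluster algorithms; the sketch keeps the simplest variant for clarity.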
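The transfer-matrix formalism invoked in the gene-regulation abstract, and used by Onsager to reduce the strip-crystal computation to an eigenvalue problem, is easiest to see in its textbook setting: for a one-dimensional Ising ring of n sites the partition function is the trace of the n-th power of a 2x2 matrix, so the free energy per spin in the thermodynamic limit is set by the largest eigenvalue. The sketch below shows only that minimal case, under those stated assumptions; it is not the protein-DNA-drug model of the cited paper.

```python
import numpy as np

def ising_chain_free_energy(T=1.0, J=1.0, h=0.0):
    """Free energy per spin of the 1D Ising chain via its 2x2 transfer
    matrix V[s, s'] = exp((J*s*s' + h*(s + s')/2) / T), s, s' in {+1, -1}.
    In the thermodynamic limit f = -T * ln(lambda_max)."""
    s = np.array([1.0, -1.0])
    V = np.exp((J * np.outer(s, s) + h * (s[:, None] + s[None, :]) / 2.0) / T)
    return -T * np.log(np.linalg.eigvalsh(V).max())  # V is symmetric

# Cross-check against the closed form at zero field, f = -T * ln(2 cosh(J/T)).
T, J = 1.3, 1.0
print(ising_chain_free_energy(T, J), -T * np.log(2.0 * np.cosh(J / T)))
```

The same recipe generalizes by enlarging the matrix: extra binding states per lattice site (proteins, drugs, nucleosomes) simply enlarge the state index, which is what makes the method systematic.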
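The Kuramoto model analyzed in the two synchronization reviews above is compact enough to state in code. The sketch below integrates the coupled phase equations with a forward-Euler step and reports the order parameter r; the population size, step size, and coupling values are arbitrary demonstration choices, and the quoted threshold is the standard infinite-N mean-field result, so a finite simulation only approximates it.

```python
import numpy as np

def kuramoto_order_parameter(N=500, K=2.0, dt=0.01, steps=4000, seed=0):
    """Forward-Euler integration of the Kuramoto model
        dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i),
    with natural frequencies omega_i ~ N(0, 1). Returns the final order
    parameter r = |(1/N) sum_j exp(i*theta_j)|: ~0 incoherent, ~1 locked."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()   # complex order parameter r * exp(i*psi)
        # Mean-field identity: (K/N) * sum_j sin(theta_j - theta_i)
        #                    = K * |z| * sin(arg(z) - theta_i)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.exp(1j * theta).mean()))

# For unit-variance Gaussian frequencies the classical onset is
# Kc = 2 / (pi * g(0)) = sqrt(8/pi) ~ 1.60: r stays small below it
# (up to finite-N fluctuations) and grows above it.
for K in (0.5, 1.6, 3.0):
    print(f"K = {K:.1f}  r ~ {kuramoto_order_parameter(K=K):.2f}")
```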
"deep architectures have demonstrated state-of-the-art results in a variety of settings , especially(...TRUNCATED)
|
"the rapid uptake of mobile devices and the rising popularity of mobile applications and services po(...TRUNCATED)
|
we define an invariant $ \nabla_g ( m ) $ of pairs m , g , where m is a 3-manifold obtained by surger(...TRUNCATED)
|
we study the orbits of $ g = gl ( v ) $ in the enhanced nilpotent cone $ v \times n $ , where $ n $ is the variety of nilpoten(...TRUNCATED)
|
"we consider open-domain question answering ( qa ) where answers are drawn from either a corpus , a (...TRUNCATED)
|
background : motor vehicle emissions contribute nearly a quarter of the world 's energy-rela(...TRUNCATED)
|
"this report is a terminal evaluation of a un environment-gef project implemented between 2011 and 2(...TRUNCATED)
|